Dataset columns: entry_id (string, 33 chars); published (string, 14 chars); title (string, 15-199 chars); authors (sequence); primary_category (string, 5-18 chars); categories (sequence); text (string, 1-461k chars).
http://arxiv.org/abs/2307.02344v1
20230705145733
Applying the Resonance Method to $\textrm{Re}\left(e^{-i\theta}\log\zeta(\sigma+it)\right)$
[ "Mikko Jaskari" ]
math.NT
[ "math.NT" ]
We apply the resonance method to Montgomery's convolution formula for Re(e^-iθlogζ(σ+it)) in the strip 1/2 < σ < 0.88. This gives new insight into maximal values of Re(e^-iθlogζ(σ+it)) for t ∈ [T^β,T] for all β∈ (0,1) and real θ. § INTRODUCTION The Riemann zeta function is a famously important function in number theory. All the non-trivial zeros of the zeta function lie in the critical strip 0 ≤ Re(s) ≤ 1 and they are related to the distribution of the prime numbers. This is one of the reasons why it is important to study the behavior of the zeta function inside the critical strip. In 1977 H. L. Montgomery <cit.> proved that if we let σ∈ (1/2,1) and T > T_0(σ), then for any real θ we have[We denote by log_j the jth iterated logarithm, so for instance log_2T=loglogT.] max_T^(σ-1/2)/3≤ t ≤ T Re( e^-iθlogζ(σ+it) ) ≥1/20( σ-1/2)^1/2(logT)^1-σ/(log_2T)^σ. Moreover, assuming the Riemann hypothesis, Montgomery <cit.> obtains max_T^1/6≤ t ≤ T Re( e^-iθlogζ(σ+it) ) ≥1/20(logT)^1-σ/(log_2T)^σ for any real θ. In Montgomery's method, without the Riemann hypothesis, the lower bound for t weakens notably as σ→ 1/2^+. C. Aistleitner improved the lower bound of the extreme value in <cit.> in the case θ≡ 0 (mod 2π) by showing that for fixed σ∈ (1/2,1) and sufficiently large T we have max_ 0 ≤ t ≤ T |ζ(σ+it)| ≥exp( 0.18(2σ-1)^1-σ(logT)^1-σ/(loglogT)^σ). Our goal is to prove a Montgomery-type result with a better lower bound for t when σ is close to 1/2. Fix σ∈ (1/2,0.88), β∈ (0,1) and 0 < κ < min(σ-1/2,1-β). Then there exists a positive constant c, independent of the chosen parameters σ, β and κ, such that for any θ∈ℝ and sufficiently large T we have max_t ∈ [T^β, T] Re( e^-iθlogζ(σ+it) ) ≥ cκ^1-σ/√(|log(2σ-1)|)(logT)^1-σ/(log_2T)^σ. Under the Riemann hypothesis we could choose κ < 1 - β and would not require κ < σ - 1/2. Due to the condition κ < σ - 1/2, our unconditional result in the case t ∈ [0,T] is weaker than Montgomery's. Taking θ = π we obtain the following corollary. Fix σ∈ (1/2,0.88), β∈ (0,1) and 0 < κ < min(σ-1/2,1-β). Then there exists a positive constant c, independent of the chosen parameters σ, β and κ, such that for sufficiently large T we have max_t ∈ [T^β, T] -log|ζ(σ+it)| ≥ cκ^1-σ/√(|log(2σ-1)|)(logT)^1-σ/(log_2T)^σ. A result like Corollary <ref> can then be converted into an estimate of the upper bound of the minimum of |ζ(σ+it)| for t ∈ [T^β, T]. We will prove Theorem <ref> by means of the resonance method introduced by K. Soundararajan <cit.>. Our work is heavily inspired by the works of A. Bondarenko and K. Seip <cit.>, <cit.> and <cit.> and also by the work of K. Mahatab and A. Chirre <cit.>. In <cit.> Bondarenko and Seip proved that there exists a positive and continuous function ν(σ) on (1/2,1), bounded from below by 1/(2-2σ), with the asymptotic behavior ν(σ) = (1-σ)^-1 + O(|log(1-σ)|) as σ→ 1^-, and ν(σ) = (1/√(2) + o(1))√(|log(2σ-1)|) as σ→ 1/2^+, and such that the following holds. If T is sufficiently large, then for 1/2 + 1/log_2T≤σ≤ 3/4, max_t ∈ [√(T),T] |ζ(σ+it)| ≥exp( ν(σ)(logT)^1-σ/(log_2T)^σ) and for 3/4 ≤σ≤ 1-1/log_2T, max_t ∈ [T/2,T] |ζ(σ+it)| ≥log_2Texp( c+ν(σ)(logT)^1-σ/(log_2T)^σ) with c an absolute constant independent of T. This theorem already improves both Aistleitner's and Montgomery's theorems and is seemingly much stronger than Theorem <ref> in the specific case θ≡ 0 (mod 2π).
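To give a concrete sense of the size of the common factor (logT)^1-σ/(log_2T)^σ appearing in all of the lower bounds above, the short Python computation below tabulates it for a few values of σ and T. This is only a rough numerical illustration added here for orientation; it omits all multiplicative constants and is not part of the paper's argument.

```python
import math

def growth_factor(T: float, sigma: float) -> float:
    """Evaluate (log T)^(1-sigma) / (log log T)^sigma, the common
    growth factor in the extreme-value lower bounds discussed above."""
    logT = math.log(T)
    loglogT = math.log(logT)
    return logT ** (1.0 - sigma) / loglogT ** sigma

# Tabulate the factor for a few values of sigma and T (constants omitted).
for sigma in (0.51, 0.6, 0.75, 0.88):
    for exponent in (6, 12, 24):
        T = 10.0 ** exponent
        print(f"sigma={sigma:4.2f}  T=1e{exponent:2d}  "
              f"factor={growth_factor(T, sigma):8.3f}")
```

The table makes visible how slowly the factor grows in T and how it shrinks as σ increases towards 1, which is the regime the constants in the theorems above are fighting against.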
The method Mahatab and Chirre used in <cit.> to prove under Riemann hypothesis that for each fixed β∈ (0,1) there exists a positive constant c > 0 such that for sufficiently large T we have max_ t∈ [T^β,T]±ζ( 1/2 + it ) ≥ c√(logTlog_3T/log_2T). would also give stronger result to the case θ≡±π/2 (mod 2π). However, these proofs do not directly generalize to the case θ∈ and particularly θ≡π (mod 2π). Our strategy is to use Montgomery's convolution formula and apply resonance method by choosing a similar resonator as Bondarenko and Seip with some modifications. It is worth to mention that the result of Bondarenko and Seip has been recently improved by Zikang Dong and Bin Wei <cit.> by a factor 2^σ as σ→ 1/2^+. Their result gives that if we let 1/2 < σ < 1 and fix β∈ (0,1) we have that max_t ∈ [T^β, T]log|ζ(σ+it)| ≥ c_β(σ)(logT)^1-σ/(log_2T)^σ holds for a function c_β(σ) which has the asymptotic behavior c_β(σ) = (√(2) +o(1))(1-β)^1-σ√(|log(σ-1/2)|), as σ→ 1/2^+. § THE CONVOLUTION FORMULA We first introduce the needed convolution formula. The following lemma is <cit.>. Let σ∈ (1/2,1) and t ≥ 15. Suppose that ζ(σ_0+iu) ≠0 for any σ_0 ∈ [σ,1) and any u ∈ such that |u-t| ≤ 2(logt)^2. Then, for any ψ > 0 and any real H, 2/π∫_-(logt)^2^(logt)^2logζ(σ+i(t+u))( sinψ u/u)^2 e^iHudu = ∑_n=1^∞Λ(n)max(0,ψ-|H-logn|)/n^σ+itlogn + O( e^|H|+2ψ/(logt)^2). Here Λ is the von Mangoldt function. We now argue as Montgomery did in <cit.>. Let ψ = 1/2 and take successively H = H_1,H_2,H_3 where H_1 := -logx, H_2 := 0, H_3 := logx where 1 ≤ x ≤ (logt)^2. Now the main term on the right hand side of (<ref>) vanishes when H ∈{H_1, H_2}. Nevertheless, multiplying (<ref>) for H_1, H_2 and H_3 by 1/2e^-iθ, 1 and 1/2e^iθ respectively and adding them together we get 2/π∫_-(logt)^2^(logt)^2logζ(σ+i(t+u))( sin(u/2)/u)^2 (1+cos(θ+ulogx))du = 1/2 e^iθ∑_e^-1/2x≤ n ≤ e^1/2xΛ(n)/n^σ+itlogn( 1/2 - | logn/x| ) + O( x/(logt)^2). § PROOF OF THEOREM <REF> BY THE RESONANCE METHOD The resonance method introduced by Soundararajan <cit.> is based on evaluation of the ratio of two moments. The numerator moment is the integral of the investigated function multiplied by a chosen non-negative resonator over a chosen interval and the denominator moment is the integral of the resonator over the same interval. The ratio then gives a lower bound for the investigated function on that chosen interval. In order to construct the resonator needed in the resonance method we will first define various sets motivated by <cit.> and <cit.>. Fix σ∈ (1/2,0.88), β∈ (0,1) and 0 < κ < min(σ-1/2,1-β). Let us denote N = ⌊ T^κ⌋. Define the following sets * P is the set of primes in the interval (elogNlog_2N,e^2logNlog_2N]. * Let a > 1 be fixed, let M := { m ∈ : m has at least alogN/|log(2σ-1)| prime factors in P } and let M' ⊂ M contain all the integers in M that have prime factors only in P. Let f be the multiplicative function that is supported on the square-free numbers with all prime factors in P and f(p):= 1/√(|log(2σ-1)|), p ∈ P. Define ℳ := supp(f) ∖ M which in other words means that ℳ = { m ∈ : m is square-free, all prime factors of m are in P and m has at most alogN/|log(2σ-1)| prime factors }. Now let 𝒥 be defined as 𝒥 := { j ∈ : [(1+T^-1)^j, (1+T^-1)^j+1) ∩ℳ≠∅} and ℳ' := { m_j : j ∈𝒥, m_j = min{[(1+T^-1)^j, (1+T^-1)^j+1) ∩ℳ}. Let γ be fixed so that 0 < γ < 1. Let L be the set of integers that have at most γlogN/|log(2σ-1)| prime factors in P. Let L' ⊂ L contain integers in L that have prime factors only in P. Now set ℒ := ℳ∖ L. 
Hence ℒ is the set of integers in ℳ that have at least γlogN/|log(2σ-1)| prime factors. We now define the resonator we need. Let r : ℳ' → be defined as r(m_j) := ( ∑_(1-T^-1)^j-1≤ n ≤ (1+T^-1)^j+2 n ∈ℳ f(n)^2 )^1/2, for every j ∈𝒥 and the resonator R : → as R(t) := ∑_m ∈ℳ'r(m)/m^it. It is crucial that |ℳ| ≤ N for large N and this is essentially shown in <cit.>. Note that the sets we use are subsets of the sets used in <cit.> and we only consider the case k=1. We will now move forward to the definition of the moments. First we denote Φ(t) := e^-t^2/2 and let Φ̂ be the Fourier transform of Φ defined as Φ̂(y) := ∫_-∞^∞Φ(t)e^-itydt = √(2π)Φ(y). Since in Lemma <ref> we need to assume that there are no zeros of Riemann the zeta function on the right side of the contour we have to bypass all those cases where such zeros exists. We do so by defining the following indicator function. Denote by ρ non-trivial zeros of the ζ-function and let I(σ,t) be an indicator function defined as I(σ,t) = 1, if there is no zero ρ such that (ρ) ≥σ and |t-(ρ)| ≤ (logt)^2 0, otherwise. Using the indicator function we define moments as follows. Let σ∈ (1/2,0.88) be fixed. We define two moments M_1(R,T) and M_2(R,T) as M_1(R,T) := ∫_T^β^TlogT( ∫_-(logt)^2^(logt)^2 K(u)du )|R(t)|^2Φ( t/T)dt, M_2(R,T) := ∫_T^β^TlogT( ∫_-(logt)^2^(logt)^2 e^-iθlogζ(σ+i(t+u)) K(u)du )|R(t)|^2Φ( t/T)I(σ,t)dt, where K(u) = ( sin(u/2)/u)^2 (1+cos(θ+ulog(elogNlog_2N))) +( sin(u/2)/u)^2 (1+cos(θ+ulog(e^3/2logNlog_2N))) +( sin(u/2)/u)^2 (1+cos(θ+ulog(e^2logNlog_2N))). Now max_t ∈ [T^β,TlogT](e^-iθlogζ(σ+it)) ≥(M_2(R,T))/M_1(R,T). We begin with the use of Lemma <ref> and (<ref>) to obtain |I(σ,t)( ∫_-(logt)^2^(logt)^2 e^-iθlogζ(σ+i(t+u)) K(u)du )| ≥|I(σ,t)(π/8∑_n ∈ PΛ(n)/n^σ+itlogn + O( log_2T/logT)) | ≥| I(σ,t)(π/8∑_n ∈ PΛ(n)/n^σ+itlogn +o(1)) | for t ∈ [T^β, TlogT]. We will first focus on the main term of (M_2(R,T)). Similarly to <cit.> we obtain (∫_T^β^TlogTπ/8∑_n ∈ PΛ(n)/n^σ+itlogn|R(t)|^2Φ( t/T)dt ) = π/8∑_n ∈ PΛ(n)/n^σlogn( ∫_T^β^TlogT n^-it|R(t)|^2Φ( t/T) dt ) = π/8∑_m,v ∈ℳ'∑_p ∈ Pr(m)r(v)/p^σ( ∫_T^β^TlogTΦ( t/T)e^-itlog(mp/v) dt). We first focus on evalution of ( ∫_T^β^TlogTΦ( t/T)e^-itlog(mp/v) dt). Combining (<ref>) with the fact that Φ(t) is an even and real function we get ( ∫_0^∞Φ( t/T)e^-itlog(mp/v) dt) = 1/2∫_-∞^∞Φ( t/T)e^-itlog(mp/v)dt = T√(2π)/2Φ( Tlogmp/v). By the trivial estimate |Φ(u)e^-iy| ≤ 1 we have | ( ∫_0^T^βΦ( t/T)e^-itlog(mp/v) dt) | ≤ T^β. For T > 193 we have by rapid decay of Φ(t) as t →∞ |( ∫_TlogT^∞Φ( t/T)e^-itlog(mp/v) dt)| ≤∫_TlogT^∞1/t^2 dt= o(1) as T →∞. We may then conclude by (<ref>), (<ref>) and (<ref>) that since β < 1 we have ( ∫_T^β^TlogTΦ( t/T)e^-itlog(mp/v) dt) = T√(2π)/2Φ( Tlogmp/v) + O(T^β). In order to evaluate M_2(R,T) we have to remove all t ∈ [T^β,TlogT] with I(σ,t)=0. Let N(σ,T) denote the number of zeros ρ of the ζ-function for which (ρ) ≥σ and 0 ≤(ρ) ≤ T. Now, for T ≥ 10 and 1/2 ≤σ≤ 1, N(σ,T) ≪ T^3/2-σ(logT)^5. See <cit.>. We note that for each zero ρ with (ρ) ∈ [T^β,TlogT] there exists at most one interval where I(σ,t)=0 and the length of such interval is ≪ (logT)^2. By Lemma <ref> and similar estimate as done for (<ref>) we may conclude that ( ∫_T^β^TlogTΦ( t/T)e^-itlog(mp/v) (1-I(σ,t))dt) ≪ T^3/2-σ(logT)^9, Hence ( ∫_T^β^TlogTΦ( t/T)e^-itlog(mp/v) I(σ,t)dt) = T√(2π)/2Φ( Tlogmp/v) + O(T^3/2-σ(logT)^9 + T^β). In order to evaluate the contribution of the error terms we have to estimate the size of the resonator. For any real t we have |R(t)|^2 ≤ 3T^κ∑_l∈ℳf(l)^2. We follow <cit.>. 
Using |R(t)| ≤ R(0) we begin with R(0)^2 = ∑_m,n ∈ℳ' r(m)r(n) ≤ |ℳ'|∑_m ∈ℳ' r(m)^2 where we used the inequality ab ≤ (a^2 +b^2)/2. Recall from the beginning of this section and notes after Definition <ref> that there are at most N = ⌊ T^κ⌋ elements in ℳ'. Now by the definition of the set ℳ' and the function r ∑_m ∈ℳ' r(m)^2 ≤ 3∑_l ∈ℳ f(l)^2. We obtain the desired result by combining the upper bound of |ℳ'|, (<ref>) and (<ref>). Combining the definition of M_2(R,T), (<ref>), (<ref>), (<ref>), (<ref>) and Lemma <ref> we obtain (M_2(R,T)) ≥ Tπ√(2π)/16∑_m,v ∈ℳ'∑_p ∈ Pr(m)r(v)/p^σ(Φ( Tlogmp/v) ) + O((T^3/2+κ-σ(logT)^9 + T^β+κ)∑_l∈ℳf(l)^2). The next two lemmas allow us to lower bound the main term on the right hand side of (<ref>). We have ∑_m,v ∈ℳ'∑_p ∈ Pr(m)r(v)/p^σΦ( Tlogmp/v) ≥∑_v ∈ℳ f(v)^2 ∑_p|v1/f(p)p^σ. We follow <cit.>. We consider all triples m', v' ∈ℳ' and p ∈ P such that |pm'/v'-1| ≤3/T. We use the notation J(m') := [(1+T^-1)^j, (1+T^-1)^j+1), where j is the unique integer such that (1+T^-1)^j≤ m < (1+T^-1)^j+1. By the definition of r(m') and the Cauchy-Schwarz inequality we have for any p ∈ P and any m',v' ∈ℳ' ∑_m,v ∈ℳ mp=v m ∈ J(m'), v ∈ J(v') f(m)f(v) ≤(∑_m,v ∈ℳ mp=v m ∈ J(m'), v ∈ J(v') f(m)^2)^1/2(∑_m,v ∈ℳ mp=v m ∈ J(m'), v ∈ J(v') f(v)^2)^1/2 ≤( ∑_m ∈ J(m') f(m)^2 )^1/2( ∑_v ∈ J(v') f(v)^2 )^1/2 ≤ r(m')r(v') and hence, by the definition of ℳ', that, for any p ∈ P, ∑_m,v ∈ℳ mp=v f(m)f(v) ≤∑_m', v' ∈ℳ' |pm'/v'-1| ≤3/T r(m')r(v'). Now ∑_m,v ∈ℳ'∑_p ∈ Pr(m)r(v)/p^σΦ( Tlogmp/v) ≥∑_p ∈ P∑_m,v ∈ℳ mp=vf(m)f(v)/p^σ = ∑_v ∈ℳ f(v)^2 ∑_p|v1/f(p)p^σ. In the last step we used multiplicativity of f. This completes the proof. For fixed 0 < γ <1 we have ∑_v ∈ℳ f(v)^2 ∑_p|v1/f(p)p^σ≥γ∑_v ∈ℒf(v)^2 e^-2σ/√(|log(2σ-1)|)(logN)^1-σ/(log_2N)^σ We follow similar ideas as in <cit.>. We recall the definition of the set ℒ from Definition <ref>. We have ∑_v ∈ℳf(v)^2∑_p|v1/f(p)p^σ≥∑_v ∈ℒf(v)^2∑_p|v1/f(p)p^σ. Now ∑_v ∈ℒf(v)^2∑_p|v1/f(p)p^σ≥∑_v ∈ℒf(v)^2 γlogN/|log(2σ-1)|min_p ∈ P1/f(p)p^σ. We obtain the desired result by noting that min_p ∈ P1/f(p)p^σ≥√(|log(2σ-1)|)(e^2logNlog_2N)^-σ and combining (<ref>) and (<ref>). Now by combining (<ref>) and Lemmas <ref> and <ref> we obtain the following lemma. Fix σ∈ (1/2,0.88), β∈ (0,1) and κ < min(σ-1/2, 1-β). Then there exists a positive constant c_1 independent from σ, β and κ such that (M_2(R,T)) ≥ c_1T∑_v ∈ℒ f(v)^21/√(|log(2σ-1)|)(logN)^1-σ/(log_2N)^σ +O((T^3/2+κ-σ(logT)^9 + T^β+κ)∑_v ∈ℳf(v)^2). Here the restriction κ < min(σ-1/2,1-β) guarantees that the error term is acceptable. We now move forward to the evaluation of the denominator moment M_1(R,T). There exists a positive constant c_2 such that M_1(R,T) ≤ c_2 T∑_n ∈ℳf(n)^2. Since K(u) ≥ 0 for all u note that as in <cit.> we have ∫_-∞^∞K(u)du ≪ 1 since K(u) is a sum of three distinct kernels introduced in Montgomery's paper. Next we proceed as in <cit.> and note that according to Fourier transform (<ref>) we have ∫_-∞^∞ |R(t)|^2Φ( t/T)dt = √(2π)T∑_m,n∈ℳ'r(m)r(n)Φ( Tlogm/n). By the definitions of the set ℳ' and the function r, we find that ∑_m∈ℳ' r(m)^2 ≤ 3∑_n∈ℳf(n)^2. We note that ∑_m,n∈ℳ'r(m)r(n)Φ(Tlogm/n) ≤∑_m ∈ℳ' r(m)^2 + ∑_m,n ∈ℳ' m≠nr(m)r(n)Φ(Tlogm/n). We recall the definition of the set 𝒥 from Definition <ref>. To deal with the off-diagonal terms we find, that ∑_m,n ∈ℳ' m≠nr(m)r(n)Φ(Tlogm/n) ≤∑_j,l ∈𝒥 j≠lr(m_j)r(n_l)Φ(T(|j-l|-1)log(1+T^-1)) ≪∑_j,l ∈𝒥 j≠lr(m_j)r(n_l)Φ(|j-l|-1 ) Applying the inequality ab ≤ (a^2 + b^2)/2 we have ∑_j,l ∈𝒥 j≠lr(m_j)r(n_l)Φ(|j-l|-1 ) ≤∑_j,l ∈𝒥 j≠lr(m_j)^2Φ(|j-l|-1 ). 
Hence ∑_m,n ∈ℳ' m≠nr(m)r(n)Φ(Tlogm/n) ≪∑_j ∈𝒥r(m_j)^2. Hence combining (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) we see that ∫_-∞^∞( ∫_-∞^∞K(u)du )|R(t)|^2Φ( t/T)dt ≪ T∑_n ∈ℳf(n)^2 and the result follows from the definition of M_1(R,T). Now according to (<ref>) and Lemmas <ref> and <ref> there is a need to evaluate the ratio ∑_v ∈ℒf(v)^2/∑_n∈ℳf(n)^2 and this is done in our next lemmas. Let a>1 be fixed and M is defined as in Definition <ref>. Then 1/∑_j ∈ f(j)^2∑_v ∈ M f(v)^2 = o(1) as N →∞. See <cit.>. If γ < 1 is fixed and ℒ is defined as in Definition <ref>. Then 1/∑_j ∈ f(j)^2∑_v ∉ℒ f(v)^2 = o(1) as N →∞. We follow the ideas in <cit.>. We note that ℒ = supp(f) ∖ (M ∪ L) and by Lemma <ref> it is sufficient to show 1/∑_j ∈ f(j)^2∑_v ∈ L f(v)^2 = o(1) as N →∞. We recall the definition of L' from Definition <ref> and use the fact that f is multiplicative to obtain 1/∑_j ∈ f(j)^2∑_v ∈ L f(v)^2 = 1/∏_p ∈ P (1+f(p)^2)∑_v ∈ L'f(v)^2 where, for any b ∈(0,1), ∑_v ∈ L' f(v)^2 ≤ b^-γlogN/|log(2σ-1)|∏_p ∈ P(1+bf(p)^2). Recall from Definition <ref> that f(p)=1/√(|log(2σ-1)|). Define C := inf_σ∈ (1/2,0.88)1/1+f(p)^2 = 0.21533…. For b that is chosen close enough to 1, we have 1/∏_p ∈ P (1+f(p)^2)∑_v ∈ L'f(v)^2 ≤ b^-γlogN/|log(2σ-1)|exp( C∑_p ∈ P (b-1)f(p)^2 ), due to the Taylor expansion of log( 1+bf(p)^2/1+f(p)^2)=log(1-(1-b)f(p)^2/1+f(p)^2). The cardinality of P is at most e^2logN. By the prime number theorem ∑_p ∈ P f(p)^2 = (1+o(1))logN/|log(2σ-1)|(e^2 -e). Hence we obtain 1/∑_j ∈ f(j)^2∑_v ∈ L f(v)^2 ≤exp(( C(e^2-e)(b-1)-γlogb+o(1) )logN/|log(2σ-1)|). If we choose b and γ close to 1 we may note that C(e^2-e)(b-1) - γlogb < 0. This completes the proof. Since ℒ = supp(f) ∖ (M ∪ L) and ℳ = supp(f) ∖ M we have by Lemmas <ref> and <ref> ∑_v ∈ℒf(v)^2/∑_n∈ℳf(n)^2 = 1-o(1), as N →∞. It follows from combining (<ref>), (<ref>) and Lemmas <ref> and <ref> that max_t ∈ [T^β,TlogT](e^-iθlogζ(σ+it)) ≥ cκ^1-σ/√(|log(2σ-1)|)(logT)^1-σ/(log_2T)^σ+O(T^1/2+κ-σ(logT)^9 + T^β+κ-1). for some positive c. Theorem <ref> then follows by noting that the desired restriction T^β≤ t ≤ T is obtained by trivial adjustment, applying (<ref>) for T/logT in place of T and β' ∈ (β,1-κ) in place of β. § ACKNOWLEDGEMENTS I was partially funded by UTUGS graduate school and completed this work while working in Academy of Finland projects no. 333707 and 346307. I would also like to thank Kaisa Matomäki for supervising my work and Kamalakshya Mahatab for giving ideas for this project. plain
http://arxiv.org/abs/2307.02799v1
20230706061757
Few-Shot Personalized Saliency Prediction Using Tensor Regression for Preserving Structural Global Information
[ "Yuya Moroto", "Keisuke Maeda", "Takahiro Ogawa", "Miki Haseyama" ]
eess.IV
[ "eess.IV", "cs.LG" ]
This paper presents a few-shot personalized saliency prediction method using tensor-to-matrix regression for preserving the structural global information of personalized saliency maps (PSMs). In contrast to a general saliency map, a PSM has great potential since it indicates person-specific visual attention, which is useful for obtaining individual visual preferences from the heterogeneity of gazed areas. PSM prediction is needed to acquire the PSM for an unseen image, but it remains a challenging task due to the complexity of individual gaze patterns. For recognizing individual gaze patterns from a limited amount of eye-tracking data, previous methods adopt the similarity of gaze tendencies between persons. However, in these methods, the PSMs are vectorized for the prediction model, so the structural global information of the PSMs corresponding to the image is ignored. For automatically revealing the relationship between PSMs, we focus on a tensor-based regression model that can preserve the structural information of PSMs and thereby improve the prediction accuracy. In the experimental results, we confirm that the proposed method including the tensor-based regression outperforms the comparative methods. Saliency prediction, personalized saliency map, tensor regression, person similarity, adaptive image selection. § INTRODUCTION Humans can selectively obtain vital information from the abundant visual information in the complex real world owing to their visual system. Traditionally, many researchers have tried to introduce such human mechanisms into image-processing models <cit.>. Specifically, a saliency map, which represents the salient parts that are more noticeable than their neighboring parts, is predicted for reproducing instinctive human visual perception <cit.>. Such a saliency map is predicted for each image without personalization. However, different persons actually focus on different areas even when they gaze at the same scene, that is, individual differences exist <cit.>. To model individual visual attention, the personalization of the saliency map has been addressed over the past few years <cit.>. To distinguish between a traditional saliency map and its personalized counterpart, we call them a universal saliency map (USM) and a personalized saliency map (PSM), respectively. While the USM omits the differences between individuals, the PSM is predicted for each person. Since personalized visual preferences are reflected by the differences between PSMs <cit.>, such individuality can be useful in many situations (e.g., personalized video summarization <cit.>). Here, to obtain the PSM for unseen images in advance, it is required to predict the PSM from the individual gaze tendency. To capture the individual gaze tendency, the relationship between the visual stimuli, e.g., images, and the corresponding individual PSMs should be analyzed from eye-tracking data obtained from each person in the past. However, the gaze patterns emerging in images are quite complex and individually different, and these characteristics make PSM prediction difficult.
For extracting the gaze patterns and tendencies, several researchers have collected eye-tracking data for thousands of images <cit.>. Moreover, in these researches, the simultaneous prediction of PSMs for several persons has been tried by using a multi-task convolutional neural network (multi-task CNN) <cit.> to compensate for the lack of data <cit.>. In <cit.>, personalized information has been introduced into the PSM prediction model of several persons as the same training data. The prediction models adopted in these researches are based on deep learning that requires a massive amount of training data for each person. Actually, the large-scale PSM dataset is openly available, but the acquisition of a large amount of individual eye-tracking data can be a significant burden and time-consuming for persons in the application. In this way, it is desired that the PSM prediction with the limited amount of training eye-tracking data. To tackle the challenging task to predict the PSM from the limited amount of data, the way to use the gaze data obtained from persons that have a similar gaze tendency to the target person can be an effective strategy. For determining whether the person has a similar gaze tendency to the target person, several pairs of eye-tracking data for the same images are needed. Thus, such pairs cannot be acquired in large quantities, and the selection of images to acquire eye-tracking data is an important process. In <cit.>, images that induce scattering of gazes are selected by using adaptive image selection (AIS) for efficiently and steadily obtaining the similarity of gaze tendencies between the target and other persons (called training persons in this paper). Here, we assume that enough amount of eye-tracking data is acquired from training persons, and this is not arbitrary as there is a large-scale PSM dataset. Under this assumption, we can use the PSMs of training persons for any images since their PSMs can be predicted by using the previous researches <cit.>. In this research, the similarity of gaze tendencies between training persons and the target person is used for the simple method of taking a weighted average of PSMs obtained from training persons to predict PSMs of the target person. On the other hand, in the several researches, learning-based methods are used to predict the PSM of the target person from the PSMs of the training persons. Concretely, in <cit.>, the collaborative multi-output Gaussian process regression (CoMOGP) <cit.> is used for predicting the PSM. However, the methods using the general regression or its varieties need the vector format as inputs, and the structural global information of PSMs cannot be effectively used. Such information is the important clue for detecting salient areas in the human visual system <cit.>, and thus, the improvement of prediction performance is expected by constructing the prediction method preserving the structural information. We propose an inter-person gaze similarity based on tensor regression for few-shot PSM prediction in this paper. In the proposed method, we construct the tensor-to-matrix regression model <cit.> that can predict PSMs of the target person from the PSMs of training persons. Here, note that the PSMs of training persons are maps predicted by the deep learning-based model. Then the tensor-to-matrix regression model can treat the multi-array tensor format as its inputs and outputs. 
Thus, this regression model can reveal the transformation coefficient tensor from the input tensor to the output matrix, that is, the PSM. In this way, we can preserve the structural global information without vectorizing. Our contribution is that we construct the novel PSM prediction method using tensor-to-matrix regression for preserving the structural global informatin and reveal that the proposed PSM prediction model is effective by experimentation with the open dataset. § PROPOSED FEW-SHOT PSM PREDICTION Our few-shot PSM prediction consists of three phases and the whole flow is shown in Fig. <ref>. We assume that there are the P training persons with a vast amount of eye-tracking data and a target person with a limited amount of eye-tracking data. In practice, the assumption that training persons exist is pragmatic since the large-scale dataset is available<cit.>. First the multi-task CNN <cit.> is trained to predict the PSMs of the training persons by referring to the previous study <cit.>. Next, we choose the common images that the target person gazes at based on the AIS scheme <cit.>. The common images are chosen such that they bring more diverse gaze patterns to persons. Finally, the proposed method predicts the PSM by using tensor-to-matrix regression <cit.> with the PSMs of training persons. §.§ Multi-Task CNN for Training Persons For predicting the PSMs of the P training persons, the multi-task CNN <cit.> is adopted by referring to the previous study <cit.> in the proposed method. Concretely, we prepare the training images X_n ∈ℝ^d_1 × d_2 × d_3 (n=1,2,…,N; N being the number of training images) and its USM U(X_n)∈ℝ^d_1 × d_2, where d_1 × d_2 and d_3 are the size of the image the color channel, respectively. For effectively obtaining the predicted PSMs of training persons, the previous study <cit.> adopts the specific approach, that is, predicting the difference map M(X)_p ∈ℝ^d_1 × d_2 (p=1,2,…,P) between the USM and PSM as M(X)_p = S (X)_p - U(X), where S (X)_p is the PSM of pth training person based on eye-tracking data for the image X. Next, to simultaneously predict PSMs of training persons, we construct the multi-task CNN consisting of one image encoder and P PSM decoders, and optimize its trainable parameters by minimizing the following objective function: ∑_p=1^P∑_n=1^N∑_l=1^L||M̂_l(X_n)_p-M(X_n)_p||^2_F, where M̂_l(X_n)_p (l=1,2,…,L; L being the number of convolution layers in one decoder) is a predicted difference map calculated from lth layer, and ||·||_F^2 represents the Frobenius norm. Given the test image X_tst, the predicted PSM of the pth person is calculated as Ŝ(X_tst)_p = M̂_L(X_tst)_p + U(X_tst). The multi-task CNN can predict the PSMs of the training persons, simultaneously, and consider the relationship of PSMs between the training persons. §.§ Adaptive Image Selection for PSM Prediction We need to acquire eye-tracking data for capturing the gaze tendency of the target person. We choose the limited number of images from N training images used in the previous subsection for obtaining the similarity of the tendency between the target and training persons. For effectively analyzing such similarity, the I common images that bring more diverse gaze patterns to persons are chosen by using the AIS scheme <cit.>. Concretely, the AIS scheme pays attention to the variety of the common images and variation in PSMs obtained from the training persons. For simultaneously considering these factors, the AIS scheme uses the variation in PSMs for objects in each image. 
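Before detailing the image-selection procedure, it may help to make the multi-task training objective of the previous subsection concrete. The sketch below is a minimal PyTorch illustration assuming toy encoder/decoder modules and random data; the module names, layer counts, and map sizes are hypothetical placeholders rather than the authors' implementation, and only the difference-map formulation M(X)_p = S(X)_p - U(X) and the summed Frobenius-norm loss follow the text.

```python
import torch
import torch.nn as nn

P, L, d1, d2 = 4, 3, 64, 64   # persons, decoder layers, map size (toy values)

class MultiTaskPSMNet(nn.Module):
    """Toy stand-in for the multi-task CNN: one shared image encoder and
    P decoders, each emitting a predicted difference map at every layer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.decoders = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(8, 1, 3, padding=1) for _ in range(L)])
            for _ in range(P)
        ])

    def forward(self, x):
        feat = self.encoder(x)
        # outputs[p][l]: predicted difference map M_hat_l(X)_p, shape (N, d1, d2)
        return [[layer(feat).squeeze(1) for layer in dec] for dec in self.decoders]

def multitask_loss(outputs, diff_maps):
    # sum_{p,n,l} ||M_hat_l(X_n)_p - M(X_n)_p||_F^2  (Frobenius norm = squared L2)
    loss = torch.zeros(())
    for p in range(P):
        for l in range(L):
            loss = loss + ((outputs[p][l] - diff_maps[:, p]) ** 2).sum()
    return loss

images = torch.randn(2, 3, d1, d2)      # N = 2 toy training images
psms = torch.rand(2, P, d1, d2)         # per-person PSMs S(X_n)_p
usm = psms.mean(dim=1, keepdim=True)    # USM U(X_n); the paper uses the mean PSM of training persons
diff_maps = psms - usm                  # difference maps M(X_n)_p = S(X_n)_p - U(X_n)

model = MultiTaskPSMNet()
loss = multitask_loss(model(images), diff_maps)
loss.backward()
print(float(loss))
```

At test time, the paper forms the predicted PSM as Ŝ(X)_p = M̂_L(X)_p + U(X), i.e., the last-layer difference map added back onto the USM.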
First, we calculate the PSMs and their variance for each object B_n,j (j=1,2,…,J; J being the number of object categories in the training images) in training images X_n. To detect the object bounding box, we apply the novel object detection method  <cit.> to the training images and obtain the rectangle whose size is d^h_n,j× d^w_n,j for jth object in ith image. The PSM variance q_n,j for object B_n,j is calculated as follows: q_n,j = 1/d^h_n,jd^w_n,jP∑_p=1^P ||S̅ (B_n,j)_p⊙S̅ (B_n,j)_p||_F^1, S̅ (B_n,j)_p = S (B_n,j)_p-1/P∑_p=1^P S (B_n,j)_p, where S(B_n,j)_p represents the PSM for object B_n,j of person p, and ⊙ is the operator of the Hadamard product. Then we set q_n,j=0 when X_n not including jth object and set the largest q_n,j when the image X_n including several mth objects. Then we obtain the sum of q_n,j for nth image by q̅_n = ∑_j=1^J q_n,j. Finally, by using q̅_n, we choose top I images as common images under the constraint to maximize the number of object categories in common images. In this way, the chosen common images have multiple object categories and objects in common images have the high PSM variance. Therefore, by using the AIS scheme, we can choose the common images with the consideration of the variety and the PSM variation. §.§ PSM Prediction via Tensor-to-Matrix Regression This subsection shows the tensor-to-matrix regression model for few-shot PSM prediction. The PSMs predicted in Sec. <ref> are used to predict the PSM of the target person in the proposed method. That is, we need to treat the several PSMs as input, and the input tensor 𝒮(X_i) ∈ℝ^P × d_1 × d_2 (i=1,2,…,I) corresponding to the image X_i chosen in Sec. <ref> is constructed as follows: 𝒮(X_i)= [Ŝ(X_i)_1,Ŝ(X_i)_2,…,Ŝ(X_i)_P]. Moreover, we prepare the supervised PSM S(X_i)_p^tst of the target person p^tst for the input tensor 𝒮(X_i). Here, we assume that the target person gazes at only the common images in Sec. <ref>, and we can obtain the supervised PSM S(X_i)_p^tst. In the tensor-to-matrix regression scenario, the weight tensor 𝒲∈ℝ^P × d_1 × d_2 × d_1 × d_2 is used for predicting the PSM of the newly given image as follows: S_TReg(X_tst)_p^tst=⟨𝒮(X_tst),𝒲⟩_3, where ⟨·,·⟩_Q represents the tensor product and Q is the number of input arrays. For optimising the weight tensor 𝒲, we minimize the sum of squared error with L_2 regularization as follows: min_rank(𝒲)≤ R∑_i=1^I||S(X_i)_p^tst-⟨𝒮(X_i),𝒲⟩_3||_F^2 + λ||𝒲||_F^2. Note that it is difficult to solve this minimization problem due to the inputs and outputs being the multi-array. Thus, by referring to <cit.>, we assume that 𝒲 has the reduced PARAFAC/CANDECOMP rank such that rank(𝒲)≤ R, and solve Eq. (<ref>) under this constraint. In this way, by using the tensor-to-matrix regression model, the proposed method can preserve the structural information without vectorizing the input tensor and the output matrix. § EXPERIMENTS §.§ Dataset In this subsection, we explain the settings of the dataset. The PSM dataset <cit.> that is the open large-scale dataset, was used in our experiment. For details, the PSM dataset consists of 1,600 images with corresponding eye-tracking data obtained from 30 participants. Experimental participants had normal or corrected visual acuity and gazed at one image for three seconds under the free viewing condition. For evaluating the predicted PSMs, we constructed the PSMs of each participant for all images from eye-tracking data as the ground truth (GT) map based on the previous work <cit.>. 
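Returning to the tensor-to-matrix regression of the previous section, the sketch below illustrates in plain NumPy how a CP/PARAFAC-factorized weight tensor 𝒲 of rank R maps the stacked PSMs of the P training persons, 𝒮(X) ∈ ℝ^(P × d1 × d2), to a predicted d1 × d2 PSM through the tensor product ⟨𝒮(X),𝒲⟩_3. The factor matrices and random inputs are illustrative assumptions only; the regularized least-squares fitting of the weight tensor described in the paper is not reproduced here.

```python
import numpy as np

P, d1, d2, R = 4, 32, 32, 10     # persons, map size, CP rank (toy values)
rng = np.random.default_rng(0)

# CP/PARAFAC factors of the rank-R weight tensor W in R^{P x d1 x d2 x d1 x d2}:
# W = sum_r a_r (x) b_r (x) c_r (x) d_r (x) e_r
A = rng.normal(size=(R, P))
B = rng.normal(size=(R, d1))
C = rng.normal(size=(R, d2))
D = rng.normal(size=(R, d1))
E = rng.normal(size=(R, d2))

def predict_psm(S):
    """<S, W>_3 for a CP-factorized W: contract S over its (P, d1, d2) modes,
    leaving a d1 x d2 output map. For each rank-1 term the contraction
    collapses to the scalar <S, a_r (x) b_r (x) c_r> times outer(d_r, e_r)."""
    scalars = np.einsum('pij,rp,ri,rj->r', S, A, B, C)   # R scalars
    return np.einsum('r,ri,rj->ij', scalars, D, E)        # d1 x d2 predicted PSM

S_input = rng.random(size=(P, d1, d2))   # stacked PSMs of the training persons
psm_pred = predict_psm(S_input)
print(psm_pred.shape)                    # (32, 32)
```

Storing 𝒲 through its rank-R factors keeps the parameter count at R(P + 2·d1 + 2·d2) rather than P·d1²·d2², which is one practical reason the low-rank constraint matters at realistic map sizes.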
As the USM used in the proposed method, we adopted the mean PSMs of the training persons since we reduce the influence of USM prediction errors. In the proposed method, we needed the training images with eye-tracking data for training multi-task CNN and common images chosen from training images for training the tensor-to-matrix regression model. Thus, 1,100 images were randomly selected for training and the rest 500 images were used as test images in this experiment. Moreover, I common images were chosen from training images based on the AIS scheme. In addition, we randomly selected 20 participants from the PSM dataset as training persons and the rest 10 persons were treated as the target persons. Although eye-tracking data of the target persons were available, we only used eye-tracking data of the target persons for common images since we assume that the target persons gaze at common images for the PSM prediction. §.§ Experimental Settings This subsection describes the parameter settings of the proposed method and evaluation settings using compared methods. We optimized the multi-task CNN used in Sec. <ref> and the tensor-to-matrix regression model used in Sec. <ref>, separately. Concretely, we used the stochastic gradient descent <cit.> by referring to <cit.>, and then the number of layers L, momentum, batch size, epoch, and learning rate were set to 3, 0.9, 9, 1000, and 3.0× 10^-5, respectively. On the other hand, the tensor-to-matrix regression model was optimized by simply differentiating weight parameters with the tensor unfolding. Moreover, we set I=100, and R ∈{5,10,…,50}, λ∈{0.01,0.1,…,10000}. To objectively evaluate the proposed method, we adopted several USM and PSM prediction methods as compared methods. Concretely, we adopted the following USM prediction methods; Signature<cit.>, GBVS<cit.>, Itti<cit.>, SalGAN<cit.> and Contextual <cit.>. Then Signature, GBVS, and Itti were the computational models that predict USM only from the input image without training. SalGAN and Contextual were deep learning-based models trained by using the SALICON dataset <cit.> that is a large-scale dataset without considering personalization. In addition, we adopted the following two few-shot PSM prediction (FPSP) methods by using only common images and their eye-tracking data as baselines. Baseline1: PSM prediction using visual similarity between the target and common images <cit.>. Baseline2: PSM prediction based on Baseline1 and the USM prediction method <cit.>. Moreover, we compared with following three PSM prediction methods that are the similar setting to the proposed method. Similarity-based FPSP: FPSP based on the similarity of gaze tendency similar to the proposed method <cit.>, but this method used simply weighted average of the predicted PSMs of training persons. CoMOGP-based FPSP: FPSP based on CoMOGP <cit.> instead of Sec.<ref>. Object-based Gaze Similarity (OGS)-based FPSP: FPSP using the object similarity between the target and common images <cit.>. As the evaluation metrics, we adopted Kullback-Leibler divergence (KLdiv) and cross correlation (CC) between the predicted PSM and the GT map from the literature <cit.>. Specifically, KLdiv was used for evaluating the similarity of the distribution, that is, the structural similarity, while CC was used for evaluating the pixel-based similarity. By using these two metrics, we can evaluate the proposed and compared methods from the perspectives of both global and local similarities between predicted PSMs and its GTs. 
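As a small illustration of the two evaluation metrics just described, the sketch below computes KLdiv and CC between a predicted PSM and its ground-truth map. The normalization details (epsilon values, standardization) are common choices from the saliency-evaluation literature and are assumptions here, not necessarily the exact evaluation code used in the paper.

```python
import numpy as np

def kl_divergence(pred, gt, eps=1e-12):
    """Distribution-based similarity: KL(gt || pred) after normalizing
    both maps to sum to one (lower is better)."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float(np.sum(g * np.log(g / (p + eps) + eps)))

def cross_correlation(pred, gt, eps=1e-12):
    """Pixel-based similarity: Pearson correlation of the two maps
    (higher is better)."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    g = (gt - gt.mean()) / (gt.std() + eps)
    return float((p * g).mean())

rng = np.random.default_rng(0)
pred_psm = rng.random((64, 64))
gt_psm = rng.random((64, 64))
print(kl_divergence(pred_psm, gt_psm), cross_correlation(pred_psm, gt_psm))
```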
§.§ Results and Discussion Figure <ref> shows the predicted results, and Table <ref> shows the quantitative evaluation results. From Fig. <ref>, the PSMs predicted by the proposed method have a distribution close to GTs, and thus, we confirm the effectiveness of preserving the structural global information. Besides, from Table <ref>, we compare the proposed and compared methods. In the evaluation metric “KLdiv”, our method outperforms all compared methods, and thus, we confirm that the tensor-to-matrix regression is effective to PSM prediction with considering the structural global information. Concretely, it is confirmed the effectiveness of the PSM prediction since our method outperforms the USM prediction methods with the state-of-the-art (SOTA) USM prediction method, Contextual <cit.>. Moreover, by comparing our method with other PSM prediction methods, the effectiveness of focusing on structural global information is confirmed. The “KLdiv” of OGS-based FPSP <cit.> is similar to our method, but this compared method cannot have high generalization performances since this method just retrieves the similar object in the target image from common images for considering structural information. While, our regression-based FPSP can have high generalization performances since our method captures the relationships of gaze tendencies between training and target persons in the training process. While, the evaluation metric “CC” of our method is comparative to the SOTA method, but not the best. This reason can be considered that the local information cannot be considered when the low rank approximation of the weight tensor is conducted in Sec. <ref>. Then “CC” is the pixel-based evaluation, while “KLdiv” is the distribution-based evaluation, and thus, the proposed method succeeds in preserving structural global information owing to the high “KLdiv” value. Therefore, we indicate theat the proposed method is effective for PSM prediction preserving the structural global information. In addition to the comparative experiments, we confirm the changes in the values of the evaluation metrics in response to changes in hyperparameters of the tensor-to-matrix regression. Figure <ref> shows the values of the evaluation metrics in response to R and λ. From this figure, we can verify that R becomes larger, the prediction performance also becomes better, while λ=1000 is the best performance regardless of R. Then we consider that higher R is better, but there is a risk of increased computational complexity. Moreover, λ needs not be so high since λ is the hyperparameter of the regularization. In this way, we show the desirable hyperparameters of tensor-to-matrix regression for few-shot PSM prediction. § CONCLUSIONS This paper has presented a few-shot PSM prediction using tensor-to-matrix regression for preserving the structural global information of PSMs. By treating the input and output PSMs without the vectorization, the proposed method can preserve the structural information. The experiment on the open dataset shows the effectiveness of the tensor-to-matrix regression for PSM prediction. IEEEbib
http://arxiv.org/abs/2307.00818v1
20230703075729
Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset
[ "Jing Lin", "Ailing Zeng", "Shunlin Lu", "Yuanhao Cai", "Ruimao Zhang", "Haoqian Wang", "Lei Zhang" ]
cs.CV
[ "cs.CV" ]
In this paper, we present Motion-X, a large-scale 3D expressive whole-body motion dataset. Existing motion datasets predominantly contain body-only poses, lacking facial expressions, hand gestures, and fine-grained pose descriptions. Moreover, they are primarily collected from limited laboratory scenes with textual descriptions manually labeled, which greatly limits their scalability. To overcome these limitations, we develop a whole-body motion and text annotation pipeline, which can automatically annotate motion from either single- or multi-view videos and provide comprehensive semantic labels for each video and fine-grained whole-body pose descriptions for each frame. This pipeline is of high precision, cost-effective, and scalable for further research. Based on it, we construct Motion-X, which comprises 13.7M precise 3D whole-body pose annotations (i.e., SMPL-X) covering 96K motion sequences from massive scenes. Besides, Motion-X provides 13.7M frame-level whole-body pose descriptions and 96K sequence-level semantic labels. Comprehensive experiments demonstrate the accuracy of the annotation pipeline and the significant benefit of Motion-X in enhancing expressive, diverse, and natural motion generation, as well as 3D whole-body human mesh recovery. § INTRODUCTION Human motion generation aims to automatically synthesize natural human movements. It has wide applications in robotics, animation, games, and generative creation. Given a text description or audio command, motion generation can be controlled to obtain the desired human motion sequence. Text-conditioned motion generation has garnered increasing attention in recent years since it behaves in a more natural interactive way <cit.>. Although existing text-motion datasets <cit.> have greatly facilitated the development of motion generation <cit.>, their scale, diversity, and expressive capability remain unsatisfactory. Imagine generating “a man is playing the piano happily", as depicted in Fig. <ref>(a): the motion from an existing dataset <cit.> only includes the body movements, without finger movements or facial expressions. The missing hand gestures and facial expressions severely hinder the expressiveness and realism of the motion. Additionally, certain specialized motions, such as high-level skiing, aerial work, and riding, are challenging to capture in indoor scenes. To sum up, existing datasets suffer from four main limitations: 1) body-only motions without facial expressions and hand poses; 2) insufficient diversity and quantity, only covering indoor scenes; 3) a lack of diverse and long-term motion sequences; and 4) manual text labels that are unscalable, unprofessional and labor-intensive. These limitations hinder existing generation methods from synthesizing expressive whole-body motion with diverse action types. Therefore, how to collect large-scale whole-body motion and text annotations from multiple scenarios is critical in addressing the data scarcity issue. Compared to indoor marker-based mocap systems, markerless vision-based motion capture methods <cit.> have become a promising way to capture large-scale motions from massive videos. Meanwhile, human motion can be regarded as a sequence of kinematic structures, which can be automatically translated into pose scripts using rule-based techniques <cit.>.
More importantly, although markerless capture (e.g., pseudo labels) is not as precise as marker-based methods, collecting massive and informative motions, especially local motions, could still be beneficial <cit.>. Besides, text-driven motion generation task requires semantically corresponding motion labels instead of vertex-corresponding mesh labels, and thus have a higher tolerance of motion capture error. Bearing these considerations in mind, we design a scalable and systematic pipeline for motion and text annotation in both multi-view and single-view videos. Firstly, we gather and filter massive video recordings from a variety of scenes with challenging, high-quality, multi-style motions and sequence-level semantic labels. Subsequently, we estimate and optimize the parameters of the SMPL-X model <cit.> for the whole-body motion annotation. Due to the depth ambiguity and various challenges in different scenes, existing monocular estimation models typically fail to yield satisfactory results. To address this issue, we systematically design a high-performance framework incorporating several innovative techniques, including a hierarchical approach for whole-body keypoint estimation, a score-guided adaptive temporal smoothing and optimization scheme, and a learning-based 3D human model fitting process. By integrating these techniques, we can accurately and efficiently capture the ultimate 3D motions. Finally, we design an automatic algorithm to caption frame-level descriptions of whole-body poses. We obtain the body and hand scripts by calculating spatial relations among body parts and hand fingers based on the SMPL-X parameters and extract the facial expression with an emotion classifier. We then aggregate the low-level pose information and translate it into textual pose descriptions. Based on the pipeline, we collect a large-scale whole-body expressive motion dataset named , which includes 13.7M frames and 96K sequences with precise 3D whole-body motion annotations, pose descriptions, and semantic labels. To compile this dataset, we collect massive videos from the Internet, with a particular focus on game and animation motions, professional performance, and diverse outdoor actions. Additionally, we incorporated data from eight existing action datasets <cit.>. Using , we build a benchmark for evaluating several state-of-the-art (SOTA) motion generation methods. Comprehensive experiments demonstrate the benefits of for diverse, expressive, and realistic motion generation (shown in Fig. <ref> (b)). Furthermore, we validate the versatility and quality of on the whole-body mesh recovery task. Our contributions can be summarized as follows: * We propose a large-scale expressive motion dataset with precise 3D whole-body motions and corresponding sequence-level and frame-level text descriptions. * We elaborately design a automatic motion and text annotation pipeline, enabling efficient capture of high-quality human text-motion data at scale. * Comprehensive experiments demonstrate the accuracy of the motion annotation pipeline and the benefits of in 3D whole-body motion generation and mesh recovery tasks. § PRELIMINARY AND RELATED WORK In this section, we focus on introducing existing datasets for human motion generation. For more details about the motion generation methods, please refer to the appendix. Benchmarks annotated with sequential human motion and text are mainly collected for three tasks: action recognition <cit.>, human object interaction <cit.>, and motion generation <cit.>. 
Specifically, KIT Motion-Language Dataset <cit.> is the first public dataset with human motion and language descriptions, enabling multi-modality motion generation <cit.>. Although several indoor human motion capture (mocap) datasets have been developed <cit.>, they are scattered. AMASS <cit.> is noteworthy as it collects and unifies 15 different optical marker-based mocap datasets to build a large-scale motion dataset through a common framework and parameterization via SMPL <cit.>. This great milestone benefits motion modeling and its downstream tasks. Additionally, BABEL <cit.> and HumanML3D <cit.> contribute to the language labels through crowdsourced data collection. BABEL proposes either sequence labels or sub-sequence labels for a sequential motion, while HumanML3D collects three text descriptions for each motion clip from different workers. Thanks to these text-motion datasets, various motion generation methods have rapidly developed and shown advantages in diverse, realistic, and fine-grained motion generation <cit.>. However, existing text-motion datasets have several limitations, including the absence of facial expressions and hand gestures, insufficient data quantity, limited diversity of motions and scenes, coarse-grained and ambiguous descriptions, and the lack of long sequence motions. To bridge these gaps, we develop a large-scale whole-body expressive motion dataset with comprehensive sequence- and frame-level text labels. We aim to address these limitations and open up new possibilities for future research. We provide quantitative comparisons of and existing datasets in Tab. <ref>. § MOTION-X DATASET §.§ Overview As shown in Tab. <ref>, we collect from eight public datasets and online videos and provide the following motion and text annotations: 13.7M 3D whole-body SMPL-X annotation, 96K sequence-level semantic descriptions (e.g., walking with waving hand and laughing), and frame-level whole-body pose descriptions. Notably, original sub-datasets lack either whole-body motion or text labels and we unify them with our annotation pipeline. All annotations are manually checked to guarantee quality. In Fig. <ref>, we show the averaged temporal standard deviation of body, hand, and face keypoints of each sub-dataset , highlighting the diversity of hand movements and facial expressions, which fills in the gaps of previous body-only motion data. §.§ Data Collection As illustrated in Fig. <ref>, the overall data collection pipeline involves six key steps: 1) designing and sourcing motion text prompts via large language model (LLM) <cit.>, 2) collecting videos, 3) preprocessing candidate videos through human detection and video transition detection, 4) capturing whole-body motion (Sec. <ref>), 5) captioning sequence-level semantic label and frame-level whole-body pose description(Sec. <ref>), and 6) performing the manual inspection. We gather 81K motion sequences from existing datasets using our proposed unified annotation framework, including the multi-view datasets (NTU-RGBD 120 <cit.>, AIST <cit.>), human-scene-interaction datasets (Egobody <cit.> and GRAB <cit.>), single-view action recognition datasets (HAA500 <cit.>, HuMMan <cit.>), and body-only motion capture dataset (AMASS <cit.>). For these datasets, steps 1 and 2 are skipped. Notably, only the mocap datasets (AMASS, Egobody, and GRAB) provide SMPL-X labels with body and hand pose, thus we annotate the SMPL-X label for the other 51.6K motion sequences. 
For the mocap data, which contains the body and roughly static hand motions, we skip step 4 and fill in the facial expression with a data augmentation mechanism. The facial expressions are collected from existing facial datasets BAUM <cit.> via a face capture and animation model EMOCA <cit.>. Details about the processing of each sub-dataset are available in the appendix. To improve the richness, we collect 15K monocular videos from online sources, covering various real-life scenes as depicted in Fig. <ref>. Since human motions and actions are context-dependent and vary with the scenario in which they occur, we design action categories as motion prompts based on the scenario and function of the action via LLM. To ensure comprehensive coverage of human actions, our dataset includes both general and domain-specific scenes. The general scenes encompass daily actions (e.g., brushing hair, wearing glasses, and applying creams), sports activities (e.g., high knee, kick legs, push-ups), various musical instrument playing, and outdoor scenes (e.g., BMX riding, CPR, building snowman). The inclusion of general scenes helps bridge the gap between existing data and real-life scenarios. In addition, we incorporate domain-specific scenes that require high professional skills, such as dance, Kung Fu, Tai Chi, performing arts, Olympic events, entertainment shows, games, and animation motions. Based on the prompts describing the above scenes, we run the data collection pipeline to gather the necessary data for our dataset. § AUTOMATIC ANNOTATION PIPELINE §.§ Universal Whole-body Motion Annotation Overview. To efficiently capture a large volume of potential motions from massive videos, we propose an annotation pipeline for high-quality whole-body motion capture with three novel techniques: (i) hierarchical whole-body keypoint estimation; (ii) score-guided adaptive temporal smoothing for jitter motion refinement; and (iii) learning-based 3D human model fitting for accurate motion capture. 2D Keypoint Estimation. 2D Whole-body keypoint estimation poses a challenge due to the small size of the hands and face regions. Although recent approaches have utilized separate networks to decode features of different body parts <cit.>, they often struggle with hand-missing detection and are prone to errors due to occlusion or interaction. To overcome these limitations, we customize a novel hierarchical keypoint annotation method, depicted in the blue box of Fig. <ref>. We train a ViT-WholeBody based on a ViT-based model <cit.> on the COCO-Wholebody dataset <cit.> to estimate initial whole-body keypoints 𝐊^2D∈ℝ^133×2 with confidence scores. Leveraging the ViT model's ability to model semantic relations between full-body parts, we enhance hand and face detection robustness even under severe occlusion. Subsequently, we obtain the hand and face bounding boxes based on the keypoints, and refine the boxes using the BodyHands detector <cit.> through an IoU matching operation. Finally, we feed the cropped body, hand, and face regions into three separately pre-trained ViT networks to estimate body, hand and face keypoints, which are used to update 𝐊^2D. Score-guided Adaptive Smoothing. To address the jitter resulting from per-frame pose estimation in challenging scenarios such as heavy occlusion, truncation, and motion blur, while preserving motion details, we introduce a novel score-guided adaptive smoothing technique into the traditional Savitzky-Golay filter <cit.>. 
The filter is applied to a sequence of 2D keypoints of a motion: 𝐊̅_i^2D = ∑_j=-w^w c_j 𝐊^2D_i+j, where 𝐊_i^2D is the original keypoints of the i_th frame, 𝐊̅_i^2D is the smoothed keypoints, w corresponds to half-width of filter window size, and c_j are the filter coefficients. Different from existing smoothing methods with a fixed window size <cit.>, we leverage the confidence scores of the keypoints to adaptively adjust the window size to balance between smoothness and motion details. Using a larger window size for keypoints with lower confidence scores can mitigate the impact of outliers. 3D Keypoint Annotation. Precise 3D keypoint can boost the estimation of SMPL-X. We utilize novel information from large-scale pre-trained models. Accordingly, for single-view videos, we adopt a pretrained model <cit.>, which is trained on massive 3D datasets, to estimate precise 3D keypoints. For multi-view videos, we utilize bundle adjustment to calibrate and refine the camera parameters, and then triangulate the 3D keypoints 𝐊̅^3D based on the multi-view 2D keypoints. To enhance stability, we adopt temporal smoothing and enforce 3D bone length constraints during triangulation. Local Pose Optimization. After obtaining the keypoints, we perform local pose optimization to register each frame's whole-body model SMPL-X <cit.>. Traditional optimization-based methods <cit.> are often time-consuming and may yield unsatisfactory results as they ignore image clues and motion prior. We propose a progressive learning-based human mesh fitting method to address these limitations. Initially, we predict the SMPL-X parameter Θ with the SOTA whole-body mesh recovery method OSX <cit.> and face reconstruction model EMOCA <cit.>. And then, through iterative optimization of the network parameters, we fit the human model parameters Θ̂ to the target 2D and 3D joint positions by minimizing the following functions, achieving an improved alignment accuracy: L_joint = ‖𝐊̂^3D-𝐊̅^3D‖_1 + ‖𝐊̂^2D-𝐊̅^2D‖_1 + ‖Θ̂ - Θ‖_1. Here, 𝐊̂^3D represents the predicted 3D joint positions obtained by applying a linear regressor to a 3D mesh generated by the SMPL-X model. 𝐊̂^2D is derived by performing a perspective projection of the 3D keypoints. The last term of the loss function provides explicit supervision based on the initial parameter, serving as a 3D motion prior. To alleviate potential biophysical artifacts, such as interpenetration and foot skating, we incorporate a set of physical optimization constraints: L = λ_jointL_joint + λ_smoothL_smooth + λ_penL_pen + λ_phyL_phy. Here, λ are weighting factors of each loss function and L_smooth is a first-order smoothness term: L_smooth = ∑_t ‖Θ̂_2:t-Θ̂_1:t-1‖_1 + ∑_t ‖𝐊̂^3D_2:t-𝐊̂^3D_1:t-1‖_1, where Θ̂_i and 𝐊̂^3D_i represent the SMPL-X parameters and joints of the i-th frame, respectively. To alleviate mesh interpenetration, we utilize a collision penalizer <cit.>, denoted as L_pen. Additionally, we incorporate the physical loss L_phy based on PhysCap <cit.> to prevent implausible poses. Global Motion Optimization. To improve the consistency and realism of the estimated global trajectory, we perform a global motion optimization based on GLAMR <cit.> to simultaneously refine the global motions and camera poses to align with video evidence, such as 2D keypoints: L_g = λ_2DL_2D + λ_trajL_traj + λ_camL_cam + λ_regL_reg, where L_2D represents the 2D keypoint distance loss, L_traj quantifies the difference between the optimized global trajectory and the trajectory estimated by Kama <cit.>. 
L_reg enforces regularization on the global trajectory, and L_cam applies a smoothness constraint on the camera parameters. Human Verification. To ensure quality, we manually checked the annotation by removing the motions that do not align with the video evidence or exhibit obvious biophysical artifacts. §.§ Obtaining Whole-body Motion Descriptions Sequence motion labels. The videos in were collected from online sources and existing datasets. For action-related datasets <cit.>, we use the action labels as one of the sequence semantic labels. Meanwhile, we input the videos into Video-LLaMA <cit.> and filter the human action descriptions as supplemental texts. When videos contain semantic subtitles, EasyOCR automatically extracts semantic information. For online videos, we also use the search queries generated from LLM <cit.> as semantic labels. Videos without available semantic information, such as EgoBody <cit.>, are manually labeled using the VGG Image Annotator (VIA) <cit.>. For the face database BAUM <cit.>, we use the facial expression labels provided by the original creator. Whole-body pose descriptions. The generation of fine-grained pose descriptions for each pose involves three distinct parts: face, body, and hand, as shown in Fig. <ref>(a). Facial expression labeling uses the emotion recognition model EMOCA <cit.> pretrained on AffectNet <cit.> to classify the emotion. Body-specific descriptions utilizes the captioning process from PoseScript <cit.>, which generates synthetic low-level descriptions in natural language based on given 3D keypoints. The unit of this information is called posecodes, such as `the knees are completely bent'. A set of generic rules based on fine-grained categorical relations of the different body parts are used to select and aggregate the low-level pose information. The aggregated posecodes are then used to produce textual descriptions in natural language using linguistic aggregation principles. Hand gesture descriptions extends the pre-defined posecodes from body parts to fine-grained hand gestures. We define six elementary finger poses via finger curvature degrees and distances between fingers to generate descriptions, such as `bent' and `spread apart'. We calculate the angle of each finger joint based on the 3D hand keypoints and determine the corresponding margins. For instance, if the angle between 𝐕⃗(𝐊_wrist, 𝐊_fingertip) and 𝐕⃗(𝐊_fingertip, 𝐊_fingeroot) falls between 120 and 160 degrees, the finger posture is labeled as `slightly bent'. We show an example of the annotated text labels in Fig. <ref>(b). Summary. Based on the above annotations, we bulid , which has 96K clips with 13.7M SMPL-X poses and the corresponding pose and semantic text labels. § EXPERIMENT In this section, we first validate the accuracy of our motion annotation pipeline on the 2D keypoints and 3D SMPL-X datasets. Then, we build a text-driven whole-body motion generation benchmark on . Finally, we show the effectiveness of in whole-body human mesh recovery. §.§ Evaluation of the Motion Annotation Pipeline 2D Keypoints Annotation. We evaluate the proposed 2D keypoint annotation method on the COCO-WholeBody <cit.> dataset, and compare the evaluation result with four SOTA keypoints estimation methods  <cit.>. We use the same input image size of 256× 192 for all the methods to ensure a fair comparison. From Tab. <ref>(a), our annotation pipeline significantly surpasses existing methods by over 15% average precision. Additionally, we provide qualitative comparisons in Fig. 
Summary. Based on the above annotations, we build Motion-X, which has 96K clips with 13.7M SMPL-X poses and the corresponding pose and semantic text labels. § EXPERIMENT In this section, we first validate the accuracy of our motion annotation pipeline on the 2D keypoint and 3D SMPL-X datasets. Then, we build a text-driven whole-body motion generation benchmark on Motion-X. Finally, we show the effectiveness of Motion-X in whole-body human mesh recovery. §.§ Evaluation of the Motion Annotation Pipeline 2D Keypoints Annotation. We evaluate the proposed 2D keypoint annotation method on the COCO-WholeBody <cit.> dataset, and compare the results with four SOTA keypoint estimation methods <cit.>. We use the same input image size of 256× 192 for all the methods to ensure a fair comparison. From Tab. <ref>(a), our annotation pipeline significantly surpasses existing methods by over 15% average precision. Additionally, we provide qualitative comparisons in Fig. <ref>(a), illustrating the robust and superior performance of our method, especially in challenging and occluded scenarios. 3D SMPL-X Annotation. We evaluate our learning-based fitting method on the EHF <cit.> dataset and compare it with four open-sourced human mesh recovery methods. Following previous works, we employ mean per-vertex error (MPVPE), Procrustes-aligned mean per-vertex error (PA-MPVPE), and Procrustes-aligned mean per-joint error (PA-MPJPE) as evaluation metrics (in mm). Results in Tab. <ref>(b) demonstrate the superiority of our progressive fitting method (over 30% error reduction). Specifically, PA-MPVPE is only 19.71 mm when using ground-truth 3D keypoints as supervision. Fig. <ref>(b) shows the annotated mesh from the front and side views, indicating reliable 3D SMPL-X annotations with reduced depth ambiguity. More results are presented in the Appendix due to page limits. §.§ Impact on Text-driven Whole-body Motion Generation Experiment Setup. We randomly split Motion-X into train (80%), val (5%), and test (15%) sets. SMPL-X is adopted as the motion representation for expressive motion generation. Evaluation metrics. We adopt the same evaluation metrics as <cit.>, including Frechet Inception Distance (FID), Multimodality, Diversity, R-Precision, and Multimodal Distance. Due to the page limit, we leave more details about the experimental setups and evaluation metrics to the appendix. Benchmarking Motion-X. We train and evaluate four diffusion-based motion generation methods, including MDM <cit.>, MLD <cit.>, MotionDiffuse <cit.>, and T2M-GPT <cit.>, on our dataset. Since previous datasets only have sequence-level motion descriptions, we keep similar settings for minimal model adaptation and take the semantic label as the text input. The evaluation is conducted with 20 runs (except for Multimodality with 5 runs) under a 95% confidence interval. From Tab. <ref>, MotionDiffuse demonstrates superior performance across most metrics. However, it scores the lowest in Multimodality, indicating that it generates less varied motion. Notably, T2M-GPT achieves comparable performance on our dataset while maintaining high diversity, indicating our large-scale dataset's promising prospects for enhancing the efficacy of GPT-based methods. MDM gets the highest Multimodality score with the lowest precision, indicating the generation of noisy and jittery motions. The highest Top-1 precision is 55.9%, showing the challenges of Motion-X. MLD adopts a latent-space design, making it fast while maintaining competent results. Therefore, we use MLD to conduct the following comparison with the existing largest motion dataset, HumanML3D, and the ablation studies. Figure: Visual comparisons of motions generated by MLD <cit.> trained on HumanML3D (in purple) or Motion-X (in blue). Please zoom in for a detailed comparison. The model trained with Motion-X can generate more accurate and semantically corresponding motions. Comparison with HumanML3D. To validate the richness, expressiveness, and effectiveness of our dataset, we conduct a comparative analysis between Motion-X and HumanML3D, which is the largest existing dataset with text-motion labels. We replace the original vector-format poses of HumanML3D with the corresponding SMPL-X parameters from AMASS <cit.>, and randomly extract facial expressions from BAUM <cit.> to fill in the face parameters. We train MLD separately on the training sets of Motion-X and HumanML3D, then evaluate both models on the two test sets.
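For reference, the following is a minimal sketch of how the distribution-level metrics reported in these tables (FID and Diversity) are commonly computed from extracted motion features; the feature extractor, feature dimension, and sample counts below are placeholders, and only the formulas themselves are standard.

import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    # Frechet distance between Gaussians fitted to two feature sets of shape (N, D)
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b).real       # matrix square root
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

def diversity(feats, n_pairs=300, seed=0):
    # Mean distance between randomly drawn feature pairs
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(feats), n_pairs)
    j = rng.integers(0, len(feats), n_pairs)
    return float(np.linalg.norm(feats[i] - feats[j], axis=1).mean())

# Toy usage with random arrays standing in for motion-encoder outputs
gen_feats  = np.random.randn(512, 256)
real_feats = np.random.randn(512, 256) + 0.05
print(fid(gen_feats, real_feats), diversity(gen_feats))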
The results in Tab. <ref> reveal some valuable insights. Firstly, Motion-X exhibits greater diversity (13.174) than HumanML3D (9.837), as evidenced by the real (GT) row. This indicates a wider range of motion types captured by Motion-X. Secondly, the model trained on HumanML3D cannot achieve a satisfactory result on the Motion-X test set, while the model trained on Motion-X performs well on the HumanML3D test set, even better than the intra-dataset training. These gaps arise from HumanML3D's limited action categories captured in a laboratory environment, whereas Motion-X encompasses diverse motion types from massive outdoor and indoor scenes. For a more intuitive comparison, we provide the visual results of the generated motion in Fig. <ref>, where we can clearly see that the model trained on Motion-X excels at synthesizing semantically corresponding motions given text inputs. These results prove the significant advantages of Motion-X in enhancing expressive, diverse, and natural motion generation. Ablation study of text labels. In addition to sequence-level semantic labels, the text labels in Motion-X also include frame-level pose descriptions, which is an important characteristic of our dataset. To assess the effectiveness of the pose descriptions, we conducted an ablation study on the text labels. The baseline model solely utilizes the semantic label as the text input. Since there is no established method for using these labels, we simply sample a single sentence from the pose descriptions at random, concatenate it with the semantic label, and feed the combined input into the CLIP text encoder. Interestingly, from Tab. <ref>, adding additional face and body pose texts brings consistent improvements, and combining whole-body pose descriptions results in a noteworthy 38% reduction in FID. These results validate that the proposed whole-body pose descriptions contribute to generating more accurate and realistic human motions. More effective methods to utilize these labels can be explored in the future. §.§ Impact on Whole-body Human Mesh Recovery As discovered in this benchmark <cit.>, the performance of mesh recovery methods can be significantly improved by utilizing high-quality pseudo-SMPL labels. Motion-X provides a large volume of RGB images and well-annotated SMPL-X labels. To verify its usefulness in the 3D whole-body mesh recovery task, we take Hand4Whole <cit.> as an example and evaluate MPVPE on the widely used AGORA val <cit.> and EHF <cit.> datasets. For the baseline model, we train it on the commonly used COCO <cit.>, Human3.6M <cit.>, and MPII <cit.> datasets. We then train another model by incorporating an additional 10% of the single-view data sampled from Motion-X while keeping the other settings the same. As shown in Tab. <ref>, the model trained with Motion-X shows a significant decrease of 7.8% in MPVPE on EHF and AGORA compared to the baseline model. The gains come from the increase in diverse appearances and poses in Motion-X, indicating the effectiveness and accuracy of the motion annotations in Motion-X and its ability to benefit the 3D reconstruction task. § CONCLUSION In this paper, we present Motion-X, a comprehensive and large-scale 3D expressive whole-body human motion dataset. It addresses the limitations of existing mocap datasets, which primarily focus on indoor body-only motions with limited action types. The dataset consists of 127.1 hours of whole-body motions and corresponding text labels. To build the dataset, we develop a systematic annotation pipeline to annotate 96K 3D whole-body motions, sequence-level motion semantic labels, and 13.7M frame-level whole-body pose descriptions.
Comprehensive experiments demonstrate the accuracy of the motion annotation pipeline and the significant benefit of Motion-X in enhancing expressive, diverse, and natural motion generation, as well as 3D whole-body human mesh recovery. Limitation and future work. While this paper provides preliminary investigations into diffusion-based models, the range of relevant models explored is still limited. Additionally, the evaluation metrics used may not fully reflect the true results. Therefore, there is a need for further development of motion generation models and evaluation metrics, which we leave as future work. As a large-scale dataset with multiple modalities, e.g., motion, text, video, and audio, Motion-X holds great potential for advancing downstream tasks, such as motion prior learning, understanding, and multi-modality pre-training. Finally, our large-scale dataset and scalable annotation pipeline open up possibilities for combining this task with large language models (LLMs) to achieve exciting motion generation results in the future. With Motion-X, we hope to benefit and facilitate further research in relevant fields. § APPENDIX § MOTION-X: ADDITIONAL DETAILS In this section, we provide more details about Motion-X that are not included in the main paper due to space limitations, including statistical analyses, data processing, and the motion augmentation mechanism. §.§ Statistical Analyses Fig. <ref>(a) shows the standard deviation of the body, hand, and face joints for each sub-dataset. Our dataset has a large diversity of hand and face joints, filling the gap left by previous body-only datasets in terms of expressiveness. Besides, as shown in Fig. <ref>(b), Motion-X provides a large volume of long motions (>240 frames), which will be beneficial for long-term motion generation. §.§ Processing of Each Sub-dataset We gather 96K motion sequences from eight existing datasets and a large volume of online videos with the proposed annotation pipeline. As shown in Tab. <ref>, due to the lack of comprehensive annotations in their original releases, we provide well-annotated whole-body motions, comprehensive semantic labels, and whole-body pose descriptions for all datasets. Here we introduce more details about each sub-dataset's data processing. AMASS <cit.> is the existing largest-scale motion capture dataset, which provides body motions and almost static hand motions. To fill in the face parameters, we extract facial expressions from the facial dataset BAUM <cit.> with the SOTA facial reconstruction method EMOCA <cit.> and perform a data augmentation (in Sec. <ref>). For the text labels, we utilize the semantic labels from HumanML3D <cit.> and annotate the pose descriptions with our whole-body pose captioning module. NTU120 <cit.> is a widely used action recognition dataset. It provides body keypoints, SMPL <cit.> parameters, action labels, and multi-view videos. Notably, we do not use the original body keypoints and SMPL parameters because of their insufficient quality. Instead, we annotate SMPL-X format pseudo labels with the proposed motion annotation pipeline, which can generate high-quality whole-body motions. To obtain the semantic labels, we use the provided action labels and expand them with the large language model (LLM) <cit.>. The pose descriptions are annotated with our whole-body pose captioning method. AIST++ <cit.> is a large-scale dance dataset with 3D body keypoints, multi-view videos, and dance genre labels.
Like NTU120, we do not use the original body-only motion data and annotate the whole-body motion via our motion annotation pipeline. We obtain the semantic labels by expanding the dance genres label and providing frame-level pose descriptions. HAA500 <cit.> is a large-scale human-centric atomic action dataset with manually annotated labels and videos for action recognition. It contains 500 classes with fine-grained atomic action labels, covering sports, playing musical instruments, and daily actions. However, it does not have the motion labels. We annotate the 3D whole-body motion with our pipeline. We use the provided atomic action label as semantic labels. Besides, we input the video into video-LLaMA <cit.> and filter the human action descriptions as supplemental texts. Pose description is generated by our automatic pose annotation method. HuMMan <cit.> is a human dataset with multi-modality data, including multi-view videos, keypoints, SMPL parameters, action labels, etc. It does not provide whole-body pose labels. We estimate the SMPL-X parameters with our annotation pipeline. Besides, we expand the action label with LLM into semantic labels and use the proposed captioning pipeline to obtain pose descriptions. GRAB <cit.> is human grasping dataset with body and hand motion. Meanwhile, it provides text descriptions of each grasping motion without corresponding videos. Therefore, similar to AMASS, we extract facial expressions from BAUM to fill in the facial expression. We use the provided text description as semantic labels and annotate pose descriptions based on the SMPL-X parameters. EgoBody <cit.> is a large-scale dataset capturing ground-truth 3D human motions in social interactions scenes. It provides high-quality body and hand motion annotations, lacking facial expression. Thus, we perform a motion augmentation to obtain expressive whole-body motions. Since EgoBody does not provide text information, we manually label the semantic description using the VGG Image Annotation (VIA) <cit.> and annotate the pose description with the automatic pose captioning pipeline. BAUM <cit.> is a facial dataset with 1.4K audio-visual clips and 13 emotions. We annotate facial expressions from BAUM with the SOTA face reconstruction method EMOCA. Online Videos. To improve the richness, we collect 15K monocular videos from online sources, covering various real-life scenes. We design action categories as motion prompts and input them into LLM. Then, we collect videos from online sources based on the answer of LLM, after which we filter the candidate videos by transition detection and annotate the whole-body motion, semantic label and pose description for the selected videos. §.§ Motion Augmentation Mechanism Lower-body Motion Augmentation. contains some upper-body videos collected from online videos, like the videos in UBody <cit.>, where the lower-body part is invisible. Estimating accurate lower-body motions and global trajectories for these videos is challenging. Thanks to the precise low-body motions provided in AMASS, we can simply perform a lower-body motion augmentation for these sequences, i.e., selecting the closest motion from AMASS based on the SMPL-X parameters and replacing the lower-body motion with it. Meanwhile, we incorporate relevant keywords (e.g., sitting, standing, walking) in the text descriptions. Fig. <ref>(a) depicts three plausible lower-body augmentations for the motion sequence with the semantic label "a person is playing the guitar happily." Facial Expression Augmentation. 
As shown in Tab. <ref>, the motion capture datasets AMASS, GRAB, and EgoBody do not provide facial expressions. Thus, we perform a facial expression augmentation for these motions by randomly selecting a facial expression sequence from the BAUM <cit.> dataset to fill the void and incorporating emotion labels (e.g., happy, sad, and surprise) in the semantic description. We interpolate the selected sequence to ensure it has the same length as the original motion. An example of facial expression augmentation is illustrated in Fig. <ref>(b). § MORE ANNOTATION VISUAL RESULTS In this section, we present some visual results of the 2D keypoints, SMPL-X parameters, and motion sequences to show the effectiveness of our proposed motion annotation pipeline. §.§ 2D Keypoints As the main paper claims, we propose a hierarchical Transformer-based model for 2D keypoint estimation. To demonstrate the superiority of our method, we compare it with two widely used methods, Openpose <cit.> and MediaPipe <cit.>. We use the PyTorch implementation of Openpose and only estimate the body and hand keypoints, as it does not provide a face estimator. As shown in Fig. <ref>, Openpose and MediaPipe cannot achieve accurate results on some challenging poses. Besides, both Openpose and MediaPipe frequently miss hand detections. In contrast, our method performs significantly better, especially in hand keypoint localization. §.§ SMPL-X Parameters To register accurate SMPL-X parameters, we elaborately design a learning-based fitting method with several training loss functions. We compare our method with two SOTA learning-based methods, Hand4Whole <cit.> and OSX <cit.>. As shown in Fig. <ref>, our method achieves much better alignment than the other models, especially on some difficult poses, which benefits from the iterative fitting process. Notably, Hand4Whole <cit.> and OSX <cit.> can only estimate the local positions without optimized global positions, and thus suffer from unstable and jittery global estimation. Furthermore, we compare our method with the widely used fitting method SMPLify-X <cit.>, using its officially released code, in Fig. <ref>. Our method is more robust than SMPLify-X and obtains more physically plausible poses, especially in challenging scenes (e.g., hard poses, low-resolution inputs, heavy occlusions). The results from the side view demonstrate that our method can properly deal with depth ambiguity and avoid the leaning issue. §.§ Motion Sequences To highlight the expressiveness and diversity of our proposed motions, we illustrate examples of the same semantic label, such as `dance ballet', with six motion styles in Fig. <ref>. This one-to-many (text-to-motion) information can benefit the diversity of motion generation. Then, we demonstrate more motion visualizations in Fig. <ref> and <ref> for different motion scenes. These motions show different facial expressions, hand poses, and body motions. § EXPERIMENT §.§ Experiment Setup Motion Representation. To capture 3D expressive whole-body motion, we use SMPL-X <cit.> as our motion representation. A pose state is formulated as: 𝐱 = {θ_b, θ_h, θ_f, ψ, 𝐫}. Here, θ_b∈ℝ^22×3 and θ_h∈ℝ^30×3 denote the 3D body rotations and hand rotations. θ_f∈ℝ^3 and ψ∈ℝ^50 are the jaw pose and facial expression. 𝐫∈ℝ^3 is the global translation. Evaluation Metrics. We adopt the same evaluation metrics as <cit.>, including Frechet Inception Distance (FID), multimodality, diversity, R-precision, and multimodal distance.
We pretrain a motion feature extractor and a text feature extractor for the new motion representation with a contrastive loss to map the text and motion into a feature space, and then evaluate the distance between text-motion pairs. For each generated motion, its ground-truth text description and 31 mismatched text descriptions randomly selected from the test set compose a description pool. We rank the Euclidean distances between the generated motion and each text in the pool and then calculate the average accuracy at the top-k positions to derive R-precision. Multimodal distance is computed as the Euclidean distance between the feature vectors of a generated motion and its corresponding text description in the test set. Additionally, we include the average temporal standard deviation as a supplementary metric to evaluate the diversity and temporal variation of whole-body motion. Computational Costs. We use 8 NVIDIA A100 GPUs for motion annotation and 4 GPUs for the motion generation experiments. It takes about 72 hours to annotate 1M frames with our annotation pipeline. §.§ More Ablation Study More Comparison with HumanML3D. Previous motion generation datasets are limited in expressing rich hand and face motions, as they only contain body and minimal hand movements. To demonstrate the expressiveness of our dataset, we conduct a comparison between HumanML3D and Motion-X on the face, hand, and body separately. Specifically, we train MLD <cit.> on each dataset and evaluate the diversity of generated motions and ground-truth motions by computing the average temporal standard deviation of the SMPL-X parameters and joint positions. The SMPL-X parameters include body poses, hand poses, and facial expressions. Body, hand, and face joint positions are represented as root-relative, wrist-relative, and neck-relative, respectively. We randomly choose 300 generated samples from the validation set and repeat the experiment 10 times to report the average results. As shown in Tab. <ref>, the generated and ground-truth motions in Motion-X exhibit a higher deviation, especially in the hand and face parameters, indicating significant hand and face movements over time. These results demonstrate that the model trained with Motion-X can generate more diverse facial expressions and hand motions, demonstrating the ability of our whole-body motions to capture fine-grained hand and face movements and expressive actions. § RELATED WORK In this part, we introduce relevant methods for human motion generation. According to the different inputs, producing human motions can be divided into two categories: general motion synthesis from scratch <cit.> and controllable motion generation from given text, audio, and music as conditions <cit.>. Motion synthesis encompasses several tasks, such as motion prediction, completion, and interpolation <cit.>, developed over several decades in computer vision and graphics. These tasks tend to utilize nearby frames with spatio-temporal correlations to infer the estimated frames in a deterministic manner <cit.>. On the other hand, motion generation is a more challenging task that aims to synthesize long-term, diverse, natural human motions. Many generative models, like GANs, VAEs, and recent diffusion models, have been explored <cit.>. This work mainly discusses text-conditioned motion generation. This field has evolved from inputting action classes <cit.> to sentence descriptions <cit.>, and from generating motions as 2D and 3D keypoints to the emerging parametric models (e.g., SMPL <cit.>).
These models have become expressive and comprehensive toward real-world scenarios thanks to the development of related benchmarks. Recently, diffusion model-based methods have rapidly developed and shown advantages in diverse, realistic, and fine-grained motion generation <cit.>. Some concurrent works <cit.> introduce novel diffusion model-based motion generation frameworks to achieve state-of-the-art (SOTA) quality. For example, MLD <cit.> presents a motion latent-based diffusion model with a representative motion variational autoencoder, showing its efficiency. § LIMITATION AND BROADER IMPACT §.§ Limitation There are two main limitations of our work. (i) The motion quality of our markerless motion annotation pipeline is inevitably inferior to that of multi-view marker-based motion capture systems. However, as the quantitative and qualitative results demonstrate, our method can perform much better than existing markerless methods, thanks to large-scale models pre-trained on massive 2D and 3D keypoint datasets and our elaborately designed fitting pipeline. Besides, a 30 mm PA-MPVPE error would be acceptable for the text-driven motion generation task, since the target is to synthesize natural and realistic motions that are semantically consistent with the text input. Furthermore, the experiment on the mesh recovery task has demonstrated that our dataset can also benefit the human reconstruction task, which requires a higher annotation quality. Accordingly, improved motion annotation will be beneficial, and we leave it as future work. (ii) During our experiments, we find that existing evaluation metrics are not always consistent with the visual results. Besides, SMPL-X parameters may not be the best representation for expressive whole-body motion. Thus, there is a need for further research on the evaluation metrics, motion representation, and model designs for the expressive motion generation task. Since the main task of our work is to build a high-quality dataset, we leave these as future work. §.§ Broader Impact A large-scale 3D human motion dataset would have numerous applications and boost novel research topics in various fields, such as animation, games, virtual reality, and human-computer interaction. To date, human motion datasets have not had any negative social impact. Our proposed Motion-X will strictly follow the licenses of the previous datasets and is not expected to present any foreseeable negative societal consequence, either. § LICENSE All data is distributed under the CC BY-NC-SA (Attribution-NonCommercial-ShareAlike) license. Detailed license and instructions can be found on the page <https://motion-x-dataset.github.io>. Further, we will provide a GitHub repository to solicit possible annotation errors from data users. For the sub-datasets, we would ask the user to read the original license of each original dataset, and we would only provide our annotated results to the user with approval from the original institution. Here, we provide a brief license of the used assets: * HumanML3D dataset <cit.> originates from the HumanAct12 <cit.> and AMASS <cit.> datasets, which are both released for academic research only and are free to researchers from educational or research institutes for non-commercial purposes. * BAUM dataset <cit.> is CC-BY 4.0 licensed. * HAA500 dataset <cit.> is MIT licensed. * NTU120 dataset <cit.> is released for academic research only and is free to researchers from educational or research institutes for non-commercial purposes.
* HuMMan dataset <cit.> is under S-Lab License v1.0. * AIST dataset <cit.> is CC-BY 4.0 licensed. * GRAB dataset <cit.> is released for academic research only and is free to researchers from educational or research institutes for non-commercial purposes. * EgoBody <cit.> is under CC-BY-NC-SA 4.0 license. * Other data is under CC BY-SA 4.0 license. * SMPLify-X <cit.> codes are released for academic research only and are free to researchers from educational or research institutes for non-commercial purposes. * Codes for preprocessing and training are under MIT LICENSE.
http://arxiv.org/abs/2307.03118v1
20230706164358
Quantum Solutions to the Privacy vs. Utility Tradeoff
[ "Sagnik Chatterjee", "Vyacheslav Kungurtsev" ]
quant-ph
[ "quant-ph", "cs.CR", "cs.LG" ]
Quantum Solutions to the Privacy vs. Utility Tradeoff
Sagnik Chatterjee (Indraprastha Institute of Information Technology, Delhi (IIITD); [email protected])
Vyacheslav Kungurtsev (Czech Technical University, Prague; [email protected])
In this work, we propose a novel architecture (and several variants thereof) based on quantum cryptographic primitives with provable privacy and security guarantees regarding membership inference attacks on generative models. Our architecture can be used on top of any existing classical or quantum generative models. We argue that the use of quantum gates associated with unitary operators provides inherent advantages compared to standard Differential Privacy based techniques for establishing guaranteed security from all polynomial-time adversaries. § INTRODUCTION The privacy versus accuracy tradeoff in machine learning models has been a central challenge for algorithmic development and understanding. While large-scale generative models such as DALL-E 2 <cit.>, Imagen <cit.>, and Stable Diffusion <cit.> have advanced the state of the art in terms of the quality of synthetic data beyond previous expectations, their application presents a significant security risk in terms of privacy violation. The leading methodologies for generative modeling include generative adversarial networks (GANs) <cit.>, variational autoencoders (VAEs) <cit.>, and diffusion models <cit.>. Whereas their intention is to generate data that samples from the population distribution, in practice, all of these generative model techniques have been shown to “memorize” <cit.> and regenerate the training data <cit.>, which makes them vulnerable to various adversarial attacks such as membership-inference attacks (MIA) <cit.>, where an adversary can infer whether a given sample was used for training. Such attacks can lead to colossal privacy breaches <cit.>, which must be addressed for safe and reliable operation. To tackle membership-inference attacks and, more generally, to ensure the privacy of training data, various sophisticated classical techniques <cit.> have been proposed with differential privacy (DP) based guarantees for diffusion models. However, it has been shown that there is a class of readily implementable adversarial attacks that can recreate training data samples for models satisfying DP-based guarantees <cit.>. In many domains of interest, if even a small fraction of the training dataset can be learned by an adversary, then the model cannot be considered private. Key Idea: While there are many types of adversarial attacks one can consider, we focus on providing privacy and security guarantees against non-malicious adversarial (NMA) attacks, a broad class that includes MIA. It can be observed from <cit.> that realistic NMA attacks can be modeled by providing adversarial access to the discriminator. Ensuring proper defense against such attacks can be formulated in terms of an interactive game <cit.> between a challenger and an adversary, where the adversary should not be able to determine whether a particular example belongs to the training set even when it is given access to the learning model and the distribution over the data[MIA can also be viewed through the lens of a hypothesis test <cit.>.]. A novel reinterpretation offered by this work is that this game can be considered a cryptographic protocol, where we treat the training data as our message and the generated data as a cipher.
The goal would be to ensure that the adversary is not able to glean any meaningful information regarding the training data (our message) given the generated data (our cipher) and black-box access to the training model. This interpretation and its associated analysis use tools of security guarantees such as CPA-security or CCA-security. These, in turn, can be considered to both subsume the weaker, more standard notions of privacy in ML, and provide essential guarantees qualitatively more significant than simply ensuring the privacy of the training data[Privacy alone does not imply security. For example, one-time pads (OTP) are perfectly private but not secure.]. Our contribution: Despite encryption being an extremely natural way to ensure security and privacy, classical encryption techniques are not particularly amenable to obtaining guaranteed accuracy or convergence of training. Therefore, the incorporation of classical encryption techniques weighs heavily towards complete privacy and no accuracy in the standard privacy-accuracy tradeoff balance. In this work, we describe a quantum framework for tackling membership-inference attacks based on cryptographic primitives. Working in the quantum regime presents a number of unique computational advantages. Beyond the security advantages which we detail below, under standard complexity-theoretic assumptions, there may exist quantum-only distributions that classical generative models may not be able to generate <cit.>. Another motivation to use quantum generative models is the increase in stability and a significant reduction in the number of parameters over their classical counterparts <cit.>. We also remark that our framework is essentially a generalization of classical diffusion models. In the quantum framework, our forward process does not rely on the same asymptotic guarantees on which classical diffusion models <cit.> rely, thereby hinting at possible sampling speedups. A classical solution <cit.>, in which non-Gaussian multimodal distributions were used to model the denoising step, could fall prey to the instability issues described in <cit.>, since there are absolutely no theoretical guarantees regarding the intermediate distributions. In our case, the discriminator needs to distinguish between two quantum states which have identical support. Therefore, the usage of a wide variety of divergences and distances in the discriminator is mathematically advantageous. § BACKGROUND An n-qubit quantum state is a unit vector on a 2^n-dimensional complex Hilbert space. The uniform distribution of random quantum states on the 2^n-dimensional complex Hilbert space is referred to as the Haar measure over quantum states. Note that the Haar measure over quantum states is a continuous distribution even when the underlying support is finite. A simple example of a Haar random state is the ensemble |ψ⟩_r=2^-n/2∑_y∈𝔽_2^n(-1)^r(y)|y⟩, where r is a truly random function. Since there are 2^2^n possible choices for r, |ψ⟩_r cannot be generated by polynomial-sized circuits. Let 𝒦={𝒦_n}_n∈ℕ be a sequence of keys from which samples can be efficiently drawn. A quantum-secure pseudorandom function (QPRF) is a family of efficiently computable keyed functions QPRF={QPRF_k}_k∈ℕ:𝒦_k×{0,1}^n→{0,1}^n <cit.> which can be used for the construction of pseudorandom states. A Pseudorandom quantum state (PRS) <cit.> is a quantum state which is information-theoretically indistinguishable from a Haar-random state. In a PRS, we replace the truly random function r with an efficiently constructible QPRF {f_k} indexed by a secret key k, such that given query access to r and f_k:{0,1}^n→{0,1}^n, no efficient, polynomially bounded, non-uniform quantum adversary A_Q can distinguish between a PRS and a Haar random quantum state. Therefore, any scheme which involves the construction of a PRS is by definition IND-CPA secure <cit.>. The existence of QPRFs, assuming one-way functions exist, was proved in <cit.>. One straightforward candidate for a QPRF is the family of quantum-secure pseudorandom permutations (QPRP) <cit.>.
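As a small numerical illustration of the phase states discussed above, the sketch below builds |ψ⟩ = 2^-n/2∑_y (-1)^r(y)|y⟩ for a toy n, once with a truly random function r and once with phase bits derived from a keyed hash standing in for a QPRF f_k; the use of HMAC-SHA256 here is purely an illustrative assumption and not the construction analysed in the paper.

import hashlib, hmac
import numpy as np

n = 10                      # toy number of qubits; the state has 2**n amplitudes
N = 2 ** n
rng = np.random.default_rng(1)

def phase_state(bits):
    # |psi> = 2^{-n/2} * sum_y (-1)^{bits[y]} |y>, one phase bit per basis state
    return ((-1.0) ** bits) / np.sqrt(N)

r_bits = rng.integers(0, 2, N)                       # truly random function r
key = b"secret key k"                                # PRF key (illustrative)
f_bits = np.array([hmac.new(key, y.to_bytes(2, "big"), hashlib.sha256).digest()[0] & 1
                   for y in range(N)])               # keyed-hash stand-in for f_k

psi_r, psi_f = phase_state(r_bits), phase_state(f_bits)
# Both are unit vectors, and their mutual overlap is of order 2^{-n/2},
# as expected for (pseudo)random phase states.
print(np.linalg.norm(psi_r), np.linalg.norm(psi_f), abs(np.dot(psi_r, psi_f)))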
A Pseudorandom Unitary (PRU) <cit.> is a family of unitary operators which is efficiently constructible and indistinguishable from the Haar measure on the unitary group. Quantum Trapdoor Functions (QTFs) <cit.> are a class of efficiently constructible unitaries associated with different secret keys. One can use them to form a PRS from a classical message m and a secret key k. We show how to interpret the QTF construction as a phase-encoding+PRS construction in <ref>. Even though QTFs are unitaries, they are hard to invert without knowing the trapdoor k of the underlying QPRP. § TECHNICAL OVERVIEW In this section, we outline three constructions that are provably secure against NMA attacks. We denote the real data distribution as P_d and the generated data distribution as P_g. The crux of the idea is as follows: given x∼ P_d and x^'∼ P_g and a discrimination function 𝔻, we want to construct an IND-CPA (or IND-CCA) secure cryptographic scheme with an encryption function g s.t. 𝔻(x,x^')=𝔻(g(x),g(x^')). For IND-CPA (and IND-CCA) secure cryptographic schemes, the encrypted tuple g(x),g(x^') must appear pseudorandom w.r.t. the input tuple x,x^'. It is unclear how to construct classical functions which simultaneously perform IND-CPA secure encryption and also preserve the discriminative labels in the encrypted tuple. A straightforward answer lies in the realm of quantum operators and states. If there exist IND-CPA (or IND-CCA) secure cryptographic schemes with a unitary (quantum) encryption operator, then by the distance-preserving property of unitary operators, all notions of quantum state discrimination are preserved. Formally stated, if U_g is the unitary corresponding to the encryption g, and 𝔻 denotes the fidelity measure, then for all pairs of quantum states |x⟩ and |x^'⟩, we have 𝔻(|g(x)⟩,|g(x^')⟩)=𝔻(U_g|x⟩,U_g|x^'⟩)=⟨x|U_g^† U_g|x^'⟩=⟨x|x^'⟩=𝔻(x,x^'). The above argument holds for all distinguishability measures on quantum states as well. Even though 𝔻(|g(x)⟩,|g(x^')⟩)=𝔻(x,x^'), the discriminator only has access to the encrypted tuple of states (|g(x)⟩,|g(x^')⟩). Therefore, using the rules of CPA security, there is no polynomially bounded adversary (classical or quantum) who can perform MIA even with non-malicious access to the Generator.
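Before turning to the constructions, the following minimal numerical sketch illustrates the unitary-invariance step above: a Haar-random unitary, generated via the QR decomposition of a complex Gaussian matrix and standing in for the encrypting unitary U_g, leaves the overlap between two states unchanged. It illustrates only the distance preservation, not the pseudorandomness or security guarantees.

import numpy as np

rng = np.random.default_rng(0)
dim = 2 ** 6                                   # toy Hilbert-space dimension

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def haar_unitary(d):
    # QR of a complex Ginibre matrix with the standard phase correction
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

x, xp = random_state(dim), random_state(dim)   # states encoding samples from P_d and P_g
U = haar_unitary(dim)                          # stands in for the encrypting unitary U_g

print(abs(np.vdot(x, xp)))                     # |<x|x'>| before encryption
print(abs(np.vdot(U @ x, U @ xp)))             # identical (up to floating point) after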
We now investigate three different constructions that satisfy <ref>. Phase Encoded PRS Construction. First, we recall the QTF construction <cit.> briefly. A QTF is defined as the tuple (GenTR, GenEV, Eval, Invert): GenTR(1^n) → tr; GenEV(tr) → |eval⟩=|PRS(tr)⟩; Eval(|eval⟩,x) → |ϕ⟩=Z^x|eval⟩; Invert(tr,|ϕ⟩) → x. The classical trapdoor tr is used to construct a quantum public key |eval⟩ which is a PRS. |eval⟩ can now be used to encode a classical string x using a Z-twirl operator: Z^x:=⊗_i=1^n Z^x_i. By the properties of the QPRF, the state Z^x|PRS(k)⟩ is also a PRS. We can now give the first construction. Given a pair of inputs x∼ P_d and x^'∼ P_g, * Create phase encoded quantum states |ϕ_x⟩=Z^x|+⟩^⊗ n and |ϕ_x^'⟩=Z^x^'|+⟩^⊗ n. * Pick a classical trapdoor key k and generate unitary access to a QPRF {f_k}. * Use the phase kickback trick on U_f_k once with |ϕ_x⟩ and once with |ϕ_x^'⟩ to obtain the pair |PRS(x,k)⟩ and |PRS(x^',k)⟩. * Train the Discriminator and Generator on the PRS tuple. Firstly, we note that |PRS(x,k)⟩ and |PRS(x^',k)⟩ are actually pseudorandom states. The proof follows directly from the QTF construction <cit.> and the fact that pseudorandom states are invariant under unitary operations. Secondly, we highlight the fact that, unlike any classical cryptographic map, 𝔻(|PRS(x,k)⟩,|PRS(x^',k)⟩)=𝔻(|x⟩,|x^'⟩). Therefore, <ref> allows us to train the discriminator properly and securely against adversaries wanting to perform MIA. Parameterized Phase Encoded PRS Construction. One semantic drawback of <ref> is the fact that we encode the phases uniformly across all features (bits of the string in this case). In practice, we might choose to create quantum state encodings of classical feature vectors based on some principles of optimal coding. To incorporate this option, we make use of the Parameterized Z-twirl operator: RZ(θ)^x:=⊗_i=1^n RZ(θ_i)^x_i. Since RZ(θ)^x is also a diagonal unitary, it commutes with the QTF construction as well. Given a pair of inputs x∼ P_d and x^'∼ P_g, and a set of feature weights θ, * Create parameterized phase encoded quantum states |ϕ_x,θ⟩=RZ(θ)^x|+⟩^⊗ n and |ϕ_x^',θ⟩=RZ(θ)^x^'|+⟩^⊗ n. * Generate unitary access to a QPRF {f_k} and pick a classical trapdoor key k. * Use the phase kickback trick on U_f_k once with |ϕ_x⟩ and once with |ϕ_x^'⟩ to obtain the pair |PRS(x,k,θ)⟩ and |PRS(x^',k,θ)⟩. * Train the Discriminator and Generator on the PRS tuple. Basis Encoded PRS Construction. Instead of strictly working with phase encodings, we may also be interested in basis-encoded quantum states. In order to construct a PRS from basis-encoded states, we have to use a construction similar to <cit.>. We assume oracle access to a PRU U, a secret key k=k_1 k_2… k_T, where each k_i∈[4], and access to n-qubit Pauli operators 𝒪={I_n,X_n,Y_n,Z_n}. Given any initial state |ϕ_0⟩, the following construction was proven to yield a PRS, provided that the adversary only has black-box access to the PRU U: |PRS(x,k,T)⟩=U𝒪_k_TU𝒪_k_T-1…𝒪_k_1U|ϕ_x⟩. Given a pair of inputs x∼ P_d and x^'∼ P_g, and a parameter T=n, * Create basis encoded quantum states |ϕ_x⟩ and |ϕ_x^'⟩. * Generate a classical trapdoor key k, and obtain unitary access to a PRU U. * Construct the PRS tuple (|PRS(x,k,T)⟩,|PRS(x^',k,T)⟩) as in <ref>. * Train the Discriminator and Generator on the PRS tuple. § DISCUSSION In this work, we discuss three novel constructions for preventing MIA by leveraging the properties of quantum operators and using privacy and security guarantees from cryptography. Classical analogues of our approach would be difficult to construct, as we discussed. However, since MIA at its heart can be distilled down to a cryptographic game, we believe that there may be Zero-Knowledge-Proof based constructions or attribute-preserving encryption schemes that could allow our framework to extend to classical techniques.
http://arxiv.org/abs/2307.02393v1
20230705160705
PSR J0026-1955: A curious case of evolutionary subpulse drifting and nulling
[ "Parul Janagal", "Samuel J. McSweeney", "Manoneeta Chakraborty", "N. D. Ramesh Bhat" ]
astro-ph.HE
[ "astro-ph.HE" ]
PSR J0026–1955 was independently discovered by the Murchison Widefield Array (MWA) recently. The pulsar exhibits subpulse drifting, where the radio emission from a pulsar appears to drift in spin phase within the main pulse profile, and nulling, where the emission ceases briefly. The pulsar showcases a curious case of drift rate evolution as it exhibits rapid changes between the drift modes and a gradual evolution in the drift rate within a mode. Here we report new analysis and results from observations of J0026–1955 made with the upgraded Giant Metrewave Radio Telescope (uGMRT) at 300-500 MHz. We identify two distinct subpulse drifting modes: A and B, with mode A sub-categorised into A0, A1, and A2, depending upon the drift rate evolutionary behaviour. Additionally, the pulsar exhibits short and long nulls, with an estimated overall nulling fraction of ∼58%, which is lower than the previously reported value. Our results also provide evidence of subpulse memory across nulls and a consistent behaviour where mode A2 is often followed by a null. We investigate the drift rate modulations of J0026–1955 and put forward two different models to explain the observed drifting behaviour. We suggest that either a change in polar gap screening or a slow relaxation in the spark configuration could possibly drive the evolution in drift rates. J0026–1955 belongs to a rare subset of pulsars which exhibit subpulse drifting, nulling, mode changing, and drift rate evolution. It is, therefore, an ideal test bed for carousel models and to uncover the intricacies of pulsar emission physics. stars: neutron - pulsars: general - pulsars: individual (PSR J0026-1955) § INTRODUCTION Radio pulsars are rotating neutron stars with highly coherent radiation emanating from the vicinity of magnetic poles, which cross our line of sight once every pulsar rotation <cit.>. They possess a large mass (∼1 to ∼2 M_⊙) confined within a small radius (≲ 10 km), with strong gravitational (∼10^11 times stronger than the Earth's surface gravitational field) and magnetic fields <cit.>. Pulsars are the sites of some of the highest energy physical processes, making them powerful astrophysical laboratories, owing to such extreme environments of very strong gravitational and magnetic fields surrounding them. However, even though more than 3000 pulsars are known to date, a definitive exposition of the processes by which pulsars emit beams of radio waves is still non-existent in the literature <cit.>. Radio emission from pulsars exhibits a variety of phenomena, which modulate their pulse-to-pulse emission, observable in the form of subpulse drifting, nulling, mode changing, etc., thereby providing a range of avenues to understand the complex physical processes that cause the emission. In many cases, individual pulses from a pulsar show substructure with one or more distinct components called subpulses. <cit.> observed the systematic `marching' of these subpulses with phase within the on-pulse window, leading to diagonal drifting structures in a pulse stack, called driftbands (pulse number vs rotation phase), as commonly seen in many pulsars <cit.>.
For such a pulse stack, the drift rate is then defined as the reciprocal of the slope of the driftbands (^∘/P_1, where P_1 is the pulsar rotation period). Theoretical models explaining subpulse drifting were suggested early on after the discovery of subpulse drifting in pulsars <cit.>. The most well-developed model at the time was able to explain the subpulse drifting phenomenon exhibited by pulsars studied then, most with stable drift rates, such as B0809+74 and B0943+10 <cit.>. The original model proposed by <cit.> associated drifting subpulses with a rotating `carousel' of a discrete number of sparks (electrical discharges) in regions of charge depletion just above the neutron star surface near the magnetic poles. This carousel of sparks circulates around the magnetic axis due to an E × B drift, and the electron-positron pairs produced in the discharges are ultimately responsible for the observed radio emission. The rotation rate of the carousel (P_4) around the magnetic axis is generally different from the pulsar period. Two characteristic features of subpulse drifting pulsars are their drift rates and P_3. The drift rate is defined as D = Δϕ per pulse period (^∘/P_1), where Δϕ is the longitude shift in degrees during one pulse period P_1. A positive value indicates a drift from early to later longitudes, while a negative value corresponds to a drift from later to earlier longitudes. In a pulse stack, the vertical separation between driftbands at a given longitude is P_3 (typically expressed in units of the pulsar rotation period, P_1), which is a measure of time after which a subpulse will return at a particular phase. The caveat here is that the pulsar rotation only permits observation of the subpulse positions once every pulse. A specific subpulse in one pulse cannot be unambiguously identified in the next due to the difficulty in resolving the presence of aliasing, making it generally difficult to evaluate the true carousel speed. That is to say, if aliasing is present, the observed drift rate is related to the beating frequency between P_1 and P_4. Another consequence of aliasing is that the drift rate can appear to vary even if P_4 stays constant as long as the beamlet configuration changes. Thus, if multiple drift rates are present in a given pulsar, it does not necessarily mean that the rotation speed of the carousel has changed; it may be that the number of beamlets has changed instead <cit.>. Even in the simplest case of a constant P_4 and a fixed number of beamlets, the apparent drift rate (i.e. the slope of the driftbands) is not a steady function of rotation longitude. This is a purely geometric phenomenon related to the projection of the beamlets' motion onto the line of sight trajectory, as explained in <cit.>. This results in the driftbands themselves appearing curved, referred to as “geometric curvature”. Geometric curvature is always present but, similar to the polarisation position angle (PPA) of the rotating vector model <cit.>, will only be visible if the pulse window is sufficiently wide for a given pulsar's particular viewing geometry. Geometric curvature is also similar to the PPA in that it is symmetric about the fiducial point, giving the driftbands a characteristic `S'-shape, with an excess (or deficit) of the drift rate appearing in the peripheral part of the pulse window. 
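As a toy illustration of the carousel picture described above, and of how the apparent drift rate and P_3 follow from P_1, P_4, and the number of beamlets (including the effect of aliasing), the following sketch samples a rotating ring of Gaussian beamlets once per rotation to build a synthetic pulse stack; all parameter values are arbitrary and are not meant to represent J0026–1955.

import numpy as np

P1 = 1.0            # pulsar rotation period (arbitrary units)
P4 = 37.0 * P1      # carousel circulation period
n_sparks = 12       # number of beamlets on the carousel
width = 4.0         # apparent subpulse width in degrees of pulse phase

phase = np.linspace(-30.0, 30.0, 121)          # pulse window in degrees
n_pulses = 500
spark0 = 360.0 * np.arange(n_sparks) / n_sparks

stack = np.zeros((n_pulses, phase.size))
for k in range(n_pulses):
    # carousel longitude of each beamlet at rotation k, crudely mapped onto pulse phase
    lon = (spark0 + 360.0 * k * P1 / P4) % 360.0
    lon = (lon + 180.0) % 360.0 - 180.0
    for l in lon:
        stack[k] += np.exp(-0.5 * ((phase - l) / width) ** 2)

# expected driftband spacing, allowing for aliasing against the once-per-P1 sampling
shift = (n_sparks * P1 / P4) % 1.0             # beamlet spacings traversed per rotation
if shift > 0.5:
    shift -= 1.0
print("apparent P3 ~ %.1f P1" % abs(1.0 / shift))
# imaging `stack` (pulse number vs phase) shows the diagonal driftbands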
The carousel model satisfactorily explains the subpulse drifting nature of some pulsars that show stable drift rates, citing the theoretical stability of electric and magnetic fields at the spark locations <cit.>. Furthermore, multi-frequency observations of a large fraction of the pulsar population have brought forth a variety of such atypical pulsars <cit.>. These studies show that a substantial fraction (∼50%) of known pulsars exhibit subpulse drifting. However, explaining the drifting behaviour in pulsars that exhibit anything more complicated than a single stable drift rate requires modifications or extensions to the basic carousel model. Such pulsars present ideal test beds to modify the classical carousel model. Several extensions have been proposed over the years to account for the observed complicated behaviour. For example, <cit.> suggest that a quasi-central spark can account for the non-drifting core components in profiles. The well-known phenomenon of bi-drifting may be explained in terms of the presence of an inner annular gap <cit.> or an inner acceleration gap <cit.>, or non-circular spark motions <cit.>. Similarly, the phenomenon of drift rate reversal, shown by some pulsars, can be explained by the modified carousel model, where sparks rotate around the location of the electric potential extremum of the polar cap instead of the magnetic axis <cit.>. These extensions/modifications are generally developed to explain specific drifting behaviours observed in a relatively small subset of subpulse drifting pulsars. However, there is still no comprehensive theory that can describe all the observed drifting behaviours. Several theories have also been suggested to interpret the drifting subpulses geometrically. <cit.> and <cit.> suggested that drifting subpulses result from modulation in the emission region caused by drift waves in some form of magnetospheric oscillations. In their model, the subpulses result from periodic variations in the magnetospheric plasma, which may cause the emission region to move across the observer's line of sight. <cit.> suggest that non-radial oscillations in the emission region could be responsible for subpulse drifting without invoking circulations in the magnetosphere. They propose that the drifting subpulses could be due to non-radial oscillations in the magnetosphere. Although these models are able to explain phenomena such as mode changing, other phenomena, such as bi-drifting, memory across nulls, etc., cannot be convincingly accounted for. Another phenomenon often seen in conjunction with subpulse drifting is `nulling', where the emission from a pulsar ceases abruptly for a few to hundreds of pulse periods <cit.> before it is restored. To date, pulse nulling has been reported in more than 200 pulsars <cit.>, which is less than 10% of the known pulsar population. Nulls lasting for one or two pulses are generally attributed to the stochastic processes within the pulsar magnetosphere <cit.>. However, in subpulse drifting pulsars, short nulls can be attributed to a slight variation of the spark distribution, where nulls are caused by an empty line-of-sight <cit.>. Long nulls, on the other hand, are thought to be related to changes in the plasma processes within the pulsar magnetosphere <cit.>, or even the spin-down energy loss in the most extreme cases <cit.>. 
If nulls and changes in the drift modes are, in fact, caused by intrinsic changes in the pulsar magnetosphere, their interactions could be crucial in understanding the mechanisms behind changes between different magnetospheric states. Nulling itself may be an extreme form of mode-changing, where a pulsar switches between different magnetospheric states, as suggested by the broadband behaviour of three nulling pulsars reported by <cit.>. Hence, the study of pulsars exhibiting both nulling and drifting phenomena is crucial for comprehending the true origin and nature of the nulling phenomenon. Pulsars which exhibit complicated drifting behaviour such as mode changing and nulling, and are also bright enough for single pulse analysis, are relatively rare. However, this combination is essential in shaping ideas concerning the pulsar radio emission process. Recently, the Murchison Widefield Array (MWA) independently discovered PSR J0026–1955 <cit.> in the shallow pass of their Southern-Sky MWA Rapid Two-metre (SMART) pulsar survey <cit.>. The pulsar was originally detected in 2018 in the Green Bank Northern Celestial Cap (GBNCC) pulsar survey <cit.>. However, the discovery was not followed up until the recent re-discovery by the MWA. J0026–1955 is a bright pulsar which has a period of 1.306150 s and a dispersion measure (DM) of 20.869 pc cm^-3. The pulsar exhibits complex subpulse drifting behaviour and mode switching in addition to a large nulling fraction (∼77% at 155 MHz). <cit.> found two distinct subpulse drifting modes A and B, with slow and fast drift rates, respectively, which were further categorised (A1/A2 and B1/B2) depending upon the qualitative properties of modal appearances and context. The pulsar was sometimes seen to abruptly change its drift rate, while at other times, it exhibits a consistent evolution of the drift rate within its drifting modes. The most distinctive feature of PSR J0026–1955 is its slow drift rate evolution, which has been found in only a handful of other pulsars like B0031-07 <cit.> and B0818-13 <cit.>. Furthermore, with its variable drift rates, the pulsar also poses an essential question to the stability of the carousel, as the basic models assume a stable configuration, leading to a non-variable drift rate throughout a drift mode. For J0026–1955, <cit.> attempt to model the observed drifting behaviour with an exponentially decaying drift rate, similar to what is seen in PSR B0818-13 and PSR B0809+74 <cit.>. However, they were unable to fully characterise the observed drifting behaviour owing to their limited data sets and, consequently, an insufficient number of drift sequences. They also suggest a possibility of subpulse phase memory across short null sequences, which would benefit from more observations. Using longer multi-frequency observations, the modal taxonomy presented by <cit.> can be tested for viability and to examine whether there is a need for something more sophisticated than an exponential model of drift rates. The complex drifting and nulling behaviour of this new pulsar thus warrants deeper investigations of its properties and their nature at different frequencies. The unusual behaviour of J0026–1955 reported in <cit.> will also benefit from a detailed study at higher frequencies, allowing us to test the frequency dependence of such characteristics. 
Furthermore, given the slow period and large nulling fraction of the pulsar, long-duration observations with a higher signal-to-noise ratio (S/N) are necessary to collect a sufficiently large number of complete burst sequences in order to undertake a more robust statistical analysis. In this study, we present a detailed investigation of subpulse drifting and nulling exhibited by J0026–1955, with new observations obtained using the upgraded Giant Metrewave Radio Telescope (uGMRT) at 300-500 MHz. This paper is organised as follows. In section <ref>, we briefly describe the observation details and data-reduction procedures; the subpulse drifting and nulling analysis are presented in section <ref>; our findings are discussed in section <ref>; and a summary of the paper is given in section <ref>. § OBSERVATIONS AND DATA REDUCTION The Giant Metrewave Radio Telescope (GMRT) is a radio interferometric array consisting of 30 antennas, each with a 45-meter diameter, and spread over an area of 28 square kilometres in a Y-shape <cit.>. The GMRT recently underwent an upgrade, which included the addition of wide-band receivers and digital instrumentation, allowing for near-seamless coverage in frequency from 120 MHz to 1600 MHz <cit.>. For our observations of PSR J0026–1955, we used the upgraded GMRT in the phased array mode, where signals from each antenna are coherently added for maximum sensitivity. J0026–1955 was observed with the uGMRT at Band 3 (300-500 MHz) and Band 4 (550-750 MHz), over two epochs at each frequency band. However, due to the presence of higher levels of radio frequency interference (RFI), Band 4 data quality was not adequate for meaningful single-pulse analysis. Thus for the work presented in this paper, we limit our analysis to Band 3 (300-500 MHz) data. Details of observations, including the number of pulses which had pulsar emission (burst) and lacked any emission (null), are summarised in Table <ref>. The data were recorded at 655.56μs time resolution, spread across 2048 channels, thus providing a frequency resolution of 97.65 kHz, and were converted to single-pulse archives using the package <cit.>. The single pulse files were then frequency scrunched and combined using the routines from <cit.>. Finally, each frequency-scrunched single-pulse sequence file was manually searched for RFI using the interactive RFI zapping subroutine of . The RFI-excised file was then converted into an ASCII format that contained the pulse time series and was used for all subsequent analyses. Following the methodology detailed above, the pulsar time series data obtained in the last step were used to generate the pulse stacks (pulse phase vs pulse number) as shown in Fig. <ref>. The bright yellow diagonally arranged pixels between pulse phase -20^∘ and +20^∘ are the subpulse driftbands, which can be clearly seen to march from a positive to a negative phase, with increasing pulse number. Despite multiple rounds of RFI excision using the subroutine, it is evident from Fig. <ref> that there is some residual RFI. For example, several seconds of RFI can be seen right before pulse number 850 in panel <ref>. However, cases where the subpulses are bright enough to be visually recognised (despite the RFI), were retained. With such a tradeoff, we were able to salvage data that could still be used for exploration without affecting the subpulse drifting analysis. 
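To illustrate how null and burst pulses of the kind summarised in Table <ref> are typically separated, the sketch below thresholds the on-pulse energy of each single pulse against the off-pulse statistics of the same pulse stack; the file name, the assumed ASCII layout, the window boundaries, and the 3-sigma criterion are illustrative assumptions rather than the exact procedure adopted in this work.

import numpy as np

# pulse stack: one row per rotation, one column per phase bin (assumed ASCII layout)
stack = np.loadtxt("pulse_stack.txt")
n_bins = stack.shape[1]

on = slice(int(0.44 * n_bins), int(0.56 * n_bins))   # roughly the -20 to +20 deg window
off = slice(0, int(0.25 * n_bins))                   # a clean off-pulse region

on_energy = stack[:, on].mean(axis=1)                # mean on-pulse intensity per pulse
off_mean = stack[:, off].mean()
off_std = stack[:, off].mean(axis=1).std()           # scatter of per-pulse off-pulse means

is_null = on_energy < off_mean + 3.0 * off_std       # illustrative 3-sigma criterion
print("nulling fraction ~ %.1f%% (%d of %d pulses)"
      % (100.0 * is_null.mean(), is_null.sum(), is_null.size))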
Further, panels <ref> and <ref> provide clear examples of short and long nulls, where any emission from the pulsar is absent for a certain duration ranging from a few to a few hundred pulses. Table <ref> presents a comprehensive summary of the nulls and bursts observed in different observations. § ANALYSIS In the subsequent analysis, we elaborate on the different subpulse behaviours and attempt to characterise the drifting nature to learn about the underlying mechanism in the context of the carousel model <cit.>. In general, the preliminary analysis of any subpulse drifting pulsar aims at the classification of drift modes. However, considering that J0026–1955 does not always exhibit well-defined discrete modes but rather a drift rate evolution, such an analysis is complicated. In section <ref>, we discuss a drift rate evolution-based classification scheme for deciding the mode boundaries, using linear and exponential models for the drift rate evolution. Section <ref> discusses the modes identified using this scheme. The observed nulling behaviour of the pulsar at 400 MHz is discussed in section <ref>. Further, the evolutionary drift rate behaviour of the pulsar is studied in detail in section <ref>. The exponential drift rate model used in section <ref> was employed by <cit.> to demonstrate memory across nulls for the first time. Following their lead, we have also examined J0026–1955 for possible instances of memory across nulls, detailed in section <ref>. §.§ Drift Mode Boundaries PSR J0026–1955 poses a unique challenge for identifying the drift modes. Generally, subpulse-drifting pulsars exhibit stable modes, which can be uniquely characterised by their P_3 values. However, the slowly evolving drift rates and P_3 of J0026–1955 complicate the mode identification. Thus, to identify the drift modes, we employed a different strategy. Given that the pulsar exhibits both evolutionary and non-evolutionary drift rates, we modelled the drift rate behaviours to make a drift rate-based modal classification, which also accounts for the evolution. The linear model is the most straightforward generalisation of a constant drift rate, essentially including the next term in the Taylor expansion, and is valid in situations where the drift sequences are sufficiently short relative to the rate of evolution. Further, to account for the drift rate evolution across individual driftbands (which can show significant curvature), we follow the lead of <cit.> and use an exponential model. Hence, we used the linear model for non-evolving drift rates and the exponential model for evolutionary drift rates. To account for both kinds of drift rate behaviour, we used the following two models: * Linear model of drift rates <cit.> This model assumes a linearly evolving drift rate (D) with respect to increasing pulse number. Thus, devising an equation which depends linearly on the pulse number (p) since the onset of the drift sequence, we get D = dϕ/dp = a_1 p + a_2, where a_1 and a_2 are constants, and ϕ is the pulse phase. We can integrate eqn. <ref> to get the dependence of the pulse phase on the pulse number, which is a quadratic relationship: ϕ(p) = (a_1/2) p^2 + a_2 p + C. Here C = P_2 d + ϕ_0 is the integration constant, which can be associated with a physical parameter, P_2, the “horizontal” separation between driftbands, assumed to be constant for the model fit of each drift sequence. Thus, the model has four free parameters, a_1, a_2, ϕ_0, and P_2, to be accounted for.
* Exponential model of drift rates <cit.> For the cases where individual driftbands can show significant curvature, the exponential model can be a better fit. In this model, the driftbands are modelled with an exponential function which assumes an exponential decay rate for the drift rate, D D = dϕ/dp = D_0 e^-p/τ_ r + D_ f where D_ f is the asymptotic drift rate, D_0 is the difference between D_ f and the drift rate at the beginning of the drift sequence, p is the number of pulses since the onset of the drift sequence, and τ_ r is the drift rate relaxation time (in units of the rotation period). To get the relationship with ϕ and p, we integrate eqn. <ref> ϕ = τ_ r D_0 ( 1 - e^-p/τ_ r) + D_ f p + ( ϕ_0 + P_2 d ) where ϕ is the phase of a sub-pulse; ϕ_0 is an initial reference phase; d is the (integer) driftband number; and P_2 is the longitudinal spacing between successive driftbands. Thus, the model has five free parameters, D_0, D_ f, τ_ r, ϕ_0, and P_2, of which the expression ϕ_0+P_2d defines the pulse phase at p = 0. The last term in eqn. <ref> is the same as the constant in eqn. <ref>. Using eqn. <ref> and <ref> from the linear and exponential drift rate models, we employed either of the two on different mode sequences. The fitting procedure for either of the models was carried out following the method described in <cit.>. The pulsar exhibits both stable subpulse drifting (no evolution) and evolutionary drifting. Therefore, firstly we determined the mode boundaries (the beginning and end of a drift sequence) by visually inspecting the drift rate evolution in the pulse stack. Then, the evolutionary and non-evolutionary drift sequences were separated, with the caution that not too many mode boundaries are made. Sub-pulses were first smoothed using a Gaussian kernel of width ∼3.6 ms (i.e. 1^∘ of pulsar rotation, the approximate width of a subpulse). Then, the subpulses in each drifting sequence were identified by determining the peaks above a certain flux density threshold. This threshold was chosen such that for a minimum pixel value, no sub-pulse is identified in the off-pulse region. Each sub-pulse in the drift sequence is then assigned a driftband number. Finally, depending upon the drift rate model of choice, the driftbands are fitted (using SciPy’s method) with the functional form of the sub-pulse phases as mentioned in eqn. <ref> and <ref>. Examples of drift rate fitting are shown in Fig. <ref>, where the bright diagonally arranged patterns are the driftbands and the white overlayed lines are the driftband fits. The drift rate evolution across an entire observation (scan 2 of observation made on MJD 59529) using the above methodology is shown in Fig. <ref>. Here the x-axis shows the pulse number and the y-axis shows the drift rate in ^∘/P_1 units. The different curves correspond to either a linear or an exponentially varying drift rate across a drift sequence. The models described above do not take into account the geometric curvature that must be present (to some degree), as discussed earlier. However, we argue that the geometric curvature must be negligible in J0026–1955's pulse window. As seen in Fig. <ref> (and Fig. <ref>), the pulsar exhibits a variety of drifting modes with inconsistent drift rates. If the geometric curvature was significant across the pulse window, it should be visible in all the modes, despite their evolutionary and non-evolutionary features. 
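As a concrete illustration of the fitting step described above, the sketch below implements the integrated forms of the two drift-rate models and fits a single driftband with SciPy's curve_fit. It is a simplified stand-in for the actual procedure rather than a reproduction of it: the per-driftband constant C absorbs ϕ_0 + P_2 d instead of fitting ϕ_0 and P_2 jointly across a whole sequence, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_linear(p, a1, a2, C):
    # integrated linear drift-rate model: phi(p) = (a1/2) p^2 + a2 p + C
    return 0.5 * a1 * p**2 + a2 * p + C

def phase_exponential(p, D0, Df, tau_r, C):
    # integrated exponential drift-rate model:
    # phi(p) = tau_r * D0 * (1 - exp(-p/tau_r)) + Df * p + C
    return tau_r * D0 * (1.0 - np.exp(-p / tau_r)) + Df * p + C

def fit_driftband(p, phi, model=phase_linear, p0=None):
    """p: pulse numbers since the onset of the drift sequence;
    phi: subpulse phases (deg). Returns best-fit parameters and covariance."""
    popt, pcov = curve_fit(model, np.asarray(p, float), np.asarray(phi, float),
                           p0=p0, maxfev=10000)
    return popt, pcov
```

For the exponential model a rough initial guess p0 = (D0, Df, tau_r, C) is usually needed for the optimiser to converge.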
The fact that the characteristic `S'-shape of geometric curvature is not visible throughout leads us to conclude that it must be negligible across the pulse window for this pulsar. We, therefore, do not attempt to include geometric curvature in our models. In the next subsection, we describe the drifting modes and their various sub-classes, among which is a non-evolutionary mode (A0), in which the driftbands appear straight (see the left panel of Fig. <ref>). This mode demonstrates the lack of a significant presence of geometric curvature. §.§ Drift Mode Classification <cit.> categorised the drifting behaviour into two different classes: A and B. They further made sub-categories of modes A and B depending on the qualitative properties of drift sequences, their appearance and context. In this work, the broad classification into modes A and B follows <cit.>, but our subcategories of these modes are completely different, and are based on the drift rate modulation instead of organisation in drifting patterns. In our analysis, we first modelled the drift rates exhibited by the pulsar and then categorised them. The drift rate modelling provided insight into the drifting behaviour, which was the basis for our mode classification, as described below. * Mode A: Mode A is classified as an umbrella mode category which encompasses the slower drift rates. The pulsar in mode A exhibits organised as well as unorganised driftbands. All of the evolutionary subpulse drifting behaviour exhibited by the pulsar can also be sub-categorised under mode A. * Mode A0: This is the non-evolutionary sub-category of mode A. These are mode sequences which possess an almost constant drift rate and do not exhibit any evolutionary behaviour, as shown in Fig. <ref>. Along with a stable drifting rate (∼ -0.6^∘/P_1), mode A0 also has the largest mode length. The sequences, at times, do show frequent interruptions and rapid but temporary deviations in the drift rate. However, the overall drift rate still hovers around a constant number. The occurrence fraction of mode A0 in the complete set of observations was about 12%. * Mode A1: According to our drift rate classification, sequences which demonstrate a slow evolution from fast to slow drift rates, as shown in Fig. <ref>, are labelled as mode A1. In an extreme case of drift rate evolution in mode A1, the sequence begins with a small P_3 value of about 16P_1 and ends after 110 pulses with a much different P_3 of about 60P_1. Mode A1 also had the largest occurrence fraction of ∼17% among all the subpulse drifting modes. * Mode A2: In addition to the evolution from faster to slower drift rates, the pulsar also exhibits the opposite evolutionary behaviour. Mode A2 sequences begin with a slow drift rate, where the driftbands are far apart and evolve towards a faster drift rate. In our data, mode A2 had a total occurrence fraction of ∼7%. The sequences in mode A2 are generally short-lived and consist of 3-4 driftbands before the sequence ends with a faster drift rate (see fig. <ref>). We also note that most occurrences of mode A2 are followed by a null. This possible correlation is discussed in detail in section <ref>. <cit.> assumed this mode as a combination of mode A and the faster drifting mode (mode B). However, we believe that this is yet another evolutionary mode of J0026–1955, as the driftbands are fully connected throughout the drift sequence and exhibit a slow evolution rather than a sudden change in drift rate. As expected, from Fig. 
<ref> it can be seen that the modal profiles of all the sub-categories of mode A show similar features despite the dissimilar drift rate behaviour, lending credibility to our classification scheme. * Mode B: The pulsar also exhibits a faster drift rate on its own without being a part of any evolutionary behaviour. This was present in only ∼4% of the total observation. Mode B sequences are short-lived, are generally isolated occurrences, and do not show a drift rate evolution, as shown in Fig. <ref>. Mode B sequences can be found anywhere in the pulse stack, even in the midst of long, otherwise uninterrupted null sequences. The average drift rate for mode B is ∼ -1.6^∘/P_1. The average modal profile of mode B is shown in Fig. <ref>, which shows slightly different features with a skewed average profile, as compared to mode A profiles. An additional feature was sometimes noted in the drift sequences of J0026–1955, where an extra driftband seems to appear towards the leading edge. An example can be seen at around pulse number 2125 in panel (c) of Fig. <ref>, in mode A2. There is a sudden break in the middle of the driftband, and both pieces look disassociated. It appears that towards the end of the first driftband, the drift rate fastens, and for the second driftband, the drift rate begins at a faster rate and then slows down. Overall, if the sudden drift rate change and the break are ignored, they seem to form a full driftband. During our analysis, we have not accounted for the break and considered the driftband in full wherever such a deviation was noted. §.§ Nulling Nulling is the temporary disappearance of emission from a pulsar for brief periods of time. After deciding the mode boundaries using the method described in <ref>, sequences with no subpulse detection were counted as nulls. In our observations, PSR J0026–1955 showed evidence of both long bursts of pulses and long nulls. The long nulls are sometimes interrupted with short mode B sequences. Fig. <ref> shows the entire length of null and burst sequences in all our observation scans. The longest null sequence goes on for 1117 pulses (∼25 minutes). In contrast, the most prolonged burst in our observations lasts for 867 pulses (∼19 minutes). Fig. <ref> shows intensity as a function of time for all the observations. The shaded grey regions show detected subpulses, and the rest are nulls. The degree of nulling in a pulsar can be quantified in terms of the nulling fraction (NF), which is the fraction of pulses with no detectable emission. Overall, J0026–1955 was in a null state for more than half of our observations, with an estimated total nulling fraction of ∼58%. This differs from the nulling fraction of ∼77% obtained at 155 MHz using the MWA <cit.>. This discrepancy and the nulling behaviour of J0026–1955 are discussed in detail in section <ref>. §.§ Drift Rate Evolution PSR J0026–1955 exhibits multiple drift rates and evolutionary features throughout the drift sequences and individual driftbands. Initially, we used the linear and exponential models to understand the drift rate behaviour within a sequence, as described in section <ref>. However, driftbands and sequences in J0026–1955 exhibit more complicated evolutionary features, as seen in the mode A1 and A2 occurrences in Fig. <ref>, where even an exponential model fails to accurately describe the evolutionary drift rates correctly. We further explored the drift rate behaviour of J0026–1955 by studying the variation in drift rate with each pulse in a driftband. 
To calculate the evolution of the drift rate with each pulse, we followed the methodology described in <cit.>. We first employed a cubic smoothing spline estimate using the SmoothingSplines[<https://github.com/nignatiadis/SmoothingSplines.jl>] software package on each of the driftbands, irrespective of their mode identity or drift mode boundaries. Then, we obtained the drift rate (phase/pulse number) by calculating the gradient of the fitted spline function at every pulse number for each driftband. As seen in the J0026–1955 pulse stacks, there can be two driftbands at a given pulse number. Fig. <ref> shows examples of some drift sequences, where the top panel shows part of the pulse stack and the red dots indicate the locations of subpulses. Here, the green line is the cubic spline fit, which was obtained with the smoothing parameter λ = 100. In the bottom panel of Fig. <ref>, the black lines show the gradient calculated at each pulse number for every driftband. As some pulses contain two subpulses, we obtain two measurements of the drift rate at those pulse numbers. The solid grey envelope shows the mean drift rate at every pulse number, where the contributions from multiple driftbands at any given pulse are averaged. A subset of pulsars that exhibit multiple subpulse drifting modes shows a harmonic relationship between the P_3 values of the modes, as reported in previous studies <cit.>. In the case of J0026–1955, a similar analysis with drift rates leads to interesting implications. A cumulative and modal histogram of individual drift rates, obtained by taking the gradient of the smoothing spline fit of the driftbands at each pulse number, can be seen in Fig. <ref>. The top panel shows a distribution of all drift rates, where each colour corresponds to the different modes classified for J0026–1955. The lower panels of Fig. <ref> display the distribution of drift rates for all observed subpulse drifting modes of J0026–1955. Apart from two distinct drift rate peaks at approximately -0.5^∘/P_1 and -1.6^∘/P_1, a third peak is visible at around -2.7^∘/P_1. The peak values of the drift rates form an arithmetic sequence with a common difference of approximately -1.1^∘/P_1. Further, considering eqn. 1 and 2 in <cit.>, it can be implied that if an arithmetic spacing exists between drift rates, then the corresponding number of sparks will also have an arithmetic relationship, assuming a constant carousel rotation rate (P_4). §.§.§ A fourth-order polynomial fit of drift rates On a closer examination of Fig. <ref>, it is evident that the drift rate is irregular. Furthermore, the drift rate does not simply evolve towards a higher or lower rate but shows variability, even within a particular mode. A linear or exponential drift rate model could not capture the complexity of this drift rate evolution. Hence, we tried to fit the average drift rate with a polynomial. Employing the polynomial regression method using , we successfully fit a fourth-order (quartic) polynomial function to the evolving drift rates. A higher-order polynomial could also describe the evolutionary drift rate; however, such a model would risk over-fitting, providing a customised fit rather than a general model. Fig. <ref> shows the fourth-order polynomial fit of the drift rates (blue dots) for scan 2 of the observation made on MJD 59529 (November 11, 2021). The black line is the average drift rate at each pulse number, the same as the grey envelope in Fig. <ref>.
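The per-pulse drift-rate measurement described above can be sketched as follows. The paper used the Julia package SmoothingSplines with λ = 100; the Python analogue below uses scipy's UnivariateSpline, whose smoothing parameter is defined differently and would need separate tuning, so this is an illustrative sketch rather than the actual code.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def drift_rate_along_band(p, phi, smooth=None):
    """Cubic smoothing spline through one driftband's subpulse phases, then its
    derivative, i.e. the instantaneous drift rate (deg/P1) at each pulse number.
    Assumes at most one subpulse per pulse within a single driftband."""
    p = np.asarray(p, float)
    phi = np.asarray(phi, float)
    order = np.argsort(p)
    spl = UnivariateSpline(p[order], phi[order], k=3, s=smooth)
    pulses = np.arange(int(p.min()), int(p.max()) + 1)
    return pulses, spl.derivative()(pulses)

def mean_drift_rate(per_band):
    """Average the drift rates of all driftbands at each pulse number; some pulses
    host two subpulses and therefore contribute two rate estimates."""
    acc = {}
    for pulses, rates in per_band:
        for pn, r in zip(pulses, rates):
            acc.setdefault(int(pn), []).append(r)
    pn = np.array(sorted(acc))
    return pn, np.array([np.mean(acc[k]) for k in pn])
```

The mode-wise quartic fits discussed above could then be obtained from the averaged rates with, for example, numpy.polyfit(pn, rates, 4).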
Different colours of the fourth-order polynomial fit correspond to different modes, following the colour scheme of Fig. <ref>. A direct comparison between the drift rate models in Fig. <ref> and Fig. <ref> can be made, where the latter shows the evolution of drift rate within a drift mode sequence. The fourth-order polynomial model can be seen to better describe the drift rate modulation compared to the more simplistic linear and exponential models, which lack this level of detail. To understand the overall drift rate modulation in the various modes, we overlaid the drift rate fits for each mode, as shown in Fig. <ref>, where the x-axis shows the mode length and the y-axis the drift rate. The black line in each subplot shows the mean of the drift rates (from the polynomial fits) with pulse number. The grey envelope corresponds to the 1σ deviation from the mean drift rate. As expected, mode A0, which does not show any noticeable evolutionary behaviour, had an almost constant average drift rate across all instances, around -0.6^∘/P_1. In contrast, the drift rate evolution of modes A1 and A2 is observable in their respective subplots. Drift rates in mode A1 can be seen to evolve towards a slower drift rate as compared to the commencing rate, whereas mode A2 shows an overall evolution towards a faster drift rate as it reaches the end of a drift sequence. §.§ Memory across nulls We also investigated the presence of memory across nulls, as reported by <cit.>. Our analysis shows that the short-lived nulls of PSR J0026–1955 indicate evidence of subpulse memory across nulls. This could be a subpulse phase memory or a drift rate memory. In the first scenario, the phase of the last subpulse before the null and the first subpulse after the null are similar. On the other hand, a drift rate memory would be a case where the drift rate stays the same across the null, and the driftbands could be extrapolated. We followed the model fitting technique described in <ref> to explore the latter. To check if there is indeed a drift rate connection between the sequence before the null and the sequence after the null, we fit the previous drift sequence (before the null) using the model of choice (following the method in <ref>). We then extrapolate the model to a subpulse right after the null sequence ends, allocating the subpulse a reasonable driftband number. We consider a drift rate memory if the projected phase of the subpulse after the null matches the real subpulse within an error range. The error on the phase prediction is calculated from the covariance matrix of the model fit (using standard uncertainty propagation). If the phase of the real pulse falls outside the subpulse phase range projected by the model, then we consider that there is no memory across the null. On the other hand, if the difference between the projected phase and the phase of the real pulse is within the error bars and smaller than P_2 (the phase distance between two subpulses within a pulse), then we classify that null as being consistent with the subpulses being phase-connected across the null. An example of drift rate memory across nulls is shown in the top panel of Fig. <ref>, where the white lines depict the drift rate fits. The top panel in Fig. <ref> shows two drift sequences, one on either side of a null. By fitting the sequence before the null and projecting the drift rate behaviour to the latter sequence, one can note that the drift sequences before and after the null are consistent. We, however, do not find many instances of subpulse phase memory across nulls in our data.
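The memory-across-nulls test just described (extrapolating the pre-null driftband fit and propagating the fit covariance into an error on the predicted phase) can be sketched as follows. It assumes a fitted model and covariance such as those returned by the earlier driftband-fitting sketch, and the acceptance criterion encodes our reading of the text: the offset between predicted and observed phase must lie within the propagated 1σ error and be smaller than P_2.

```python
import numpy as np

def predicted_phase_with_error(model, popt, pcov, p):
    """Extrapolate a fitted driftband model to pulse number p and propagate the
    fit covariance to a 1-sigma error via a first-order (numerical-Jacobian) expansion."""
    phi0 = model(p, *popt)
    jac = np.empty(len(popt))
    for i, pi in enumerate(popt):
        step = 1e-6 * max(abs(pi), 1.0)
        shifted = np.array(popt, dtype=float)
        shifted[i] = pi + step
        jac[i] = (model(p, *shifted) - phi0) / step
    var = jac @ pcov @ jac
    return phi0, np.sqrt(max(var, 0.0))

def consistent_with_memory(model, popt, pcov, p_after, phi_after, P2):
    """True if the first real subpulse after the null matches the extrapolated
    driftband within errors and within P2 (phase spacing between driftbands)."""
    phi_pred, sigma = predicted_phase_with_error(model, popt, pcov, p_after)
    offset = abs(phi_after - phi_pred)
    return offset <= sigma and offset < abs(P2)
```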
A handful of instances were initially recognised by visual inspection, as they did not show a drift rate memory across nulls. They were further investigated by calculating the phase of the subpulse before and after null. One such example is shown in the bottom panel of Fig. <ref>, where the subpulse phase before and after the null are almost the same. In contrast, since the sequences have different drift rates, a drift rate memory might not be present. A more careful analysis of longer observations is required to verify this fully. § DISCUSSION PSR J0026–1955 exhibits a multitude of pulse-to-pulse modulation phenomena, viz. subpulse drifting, mode switching, and nulling, as shown in Fig. <ref>. For each of the driftbands, subpulses arrive at earlier phases with pulsar rotation, thus conforming to the `positive drifting' class of subpulse drifters <cit.>. However, the vertical separation between driftbands (i.e., P_3) can be seen to vary between different sequences, as well as within a drift sequence. There are instances where J0026–1955 can be seen to abruptly switch from one subpulse drifting mode to another, a behaviour exhibited by many other pulsars <cit.>. However, in addition to the abrupt mode change, the drift rate can also sometimes be seen to evolve gradually, which is a relatively rare phenomenon, observed in only a small subset of subpulse drifting pulsars, e.g., PSR B0943+10 <cit.>, PSR B0809+74 and PSR B0818–13 <cit.>. For PSR J0026–1955, a quick glance at mode A1 and A2 sequences (evolutionary drift modes) in Fig. <ref> shows this variable drift rate. The pulsar also exhibits long and short-duration nulls, where an association between drifting and nulling can be drawn. Below, we discuss the variety of phenomena exhibited by J0026–1955 in light of the analysis presented in section <ref>. §.§ Nulling Behaviour of J0026–1955 The analysis presented in section <ref> highlights the unique nulling behaviour of J0026–1955, which exhibits complex emission properties. The nulling fraction at 400 MHz is approximately 58%, an estimate reached from 330 minutes of observation, whereas, at 155 MHz, it was estimated to be 77% from 192 minutes of MWA observation <cit.>. This discrepancy suggests a possibility that the nulling behaviour of J0026–1955 may not be broadband and that there may be frequency-dependent mechanisms at play. Frequency-dependent nulling has been observed in some pulsars, where the nulling behaviour varies depending on the observed frequency <cit.>. However, given the modest separation in the frequency bands (with band edges separated by ∼130 MHz and the centre frequencies differing by a factor of ∼2.5), it is unclear if the observed discrepancy is entirely attributable to frequency-dependent nulling. Alternatively, the observed inconsistency could simply be an unintended observational bias, where the MWA observations were incidentally made around the long nulls. The perceived discrepancy could also be because of the presence of an emission component with a shallow spectral index leading to a null at 155 MHz. Assuming that J0026–1955 exhibits broadband nulling, a combination of the number of nulls out of the total number of pulses observed at 155 MHz and 400 MHz will imply a nulling fraction of ∼65%. It is also possible that the nulling behaviour of J0026–1955 is complex and multi-faceted, involving both broadband and frequency-dependent mechanisms. 
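For illustration, the quoted combined nulling fraction of ∼65% can be reproduced with a back-of-envelope calculation from the two observation lengths and nulling fractions, assuming a rotation period of about 1.31 s (a value consistent with the pulse counts and durations quoted in this paper, but an assumption here):

```python
period = 1.31                       # s, assumed rotation period of J0026-1955
obs = {"MWA 155 MHz": (192, 0.77),  # (minutes observed, nulling fraction)
       "uGMRT 400 MHz": (330, 0.58)}

tot_pulses = tot_nulls = 0.0
for minutes, nf in obs.values():
    n_pulses = minutes * 60.0 / period
    tot_pulses += n_pulses
    tot_nulls += nf * n_pulses
print(f"combined nulling fraction ~ {tot_nulls / tot_pulses:.2f}")  # ~0.65
```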
Different models attribute pulsar nulling to intrinsic changes in the magnetosphere, such as temperature fluctuations altering coherence conditions <cit.>, switching between gap discharge mechanisms <cit.>, variations in magnetospheric currents <cit.>, changes in pulsar beam geometry <cit.>, disruption of the entire particle flow in the magnetosphere <cit.>, etc. Though such models may be able to describe the long-duration nulls seen for J0026–1955, the presence of subpulse (phase or drift rate) memory across nulls challenges these theories, at least for the short nulls where such memory exists. Further investigation is needed to fully understand the nature of the observed disparity and the complex emission properties of J0026–1955. §.§ Drift Rate - Nulling Correlation There have been only a limited number of investigations that explored a correlation between nulling and subpulse drifting. For example, PSR B0818–41 presents a case where a decrease in the pulsar drift rate is accompanied by a gradual decrease in intensity before the onset of a null <cit.>. PSR B0809+74 also shows an association between nulls and subpulse drifting, where the drift rate deviates from normal after the nulls <cit.>. Using the partially screened gap model, the authors suggest that some kind of "reset" of the pulsar's radio emission engine occurs during the nulls, re-establishing the conditions of the magnetosphere. For PSR B0809+74, <cit.> suggest that nulling and subpulse drifting may be related, with changes in emission beam geometry potentially causing both the nulling and changes in the drift behaviour. <cit.> suggested that emission can cease or commence suddenly when the charge or magnetic configuration in the magnetosphere reaches the so-called "tipping point"; however, the triggering mechanism for such a stimulus is unknown. In the case of PSR J1822–2256, <cit.> also showcase a relationship between nulling and mode changing, where a null preceded most occurrences of their mode D. Such correlations suggest that the emission mechanisms and magnetospheric dynamics underlying nulls and subpulse drifting may be strongly related. PSR J0026–1955 also provides some compelling evidence of a possible correlation between subpulse drifting and nulling. Our observations suggest a complex and dynamic mechanism underlying the pulsar radio emission, with important implications for understanding astrophysical processes in extreme environments. In particular, we have found that the pulsar J0026–1955 likely switches to a null state after mode A2. Mode A2 is an evolutionary mode, where the drifting begins at a slower drift rate and evolves towards a faster drift rate before the mode eventually ends. In 27 out of 31 instances, mode A2 is followed by a null, either short or long. In such cases, the ramping up towards the faster drift rate begins remarkably consistently about 20 pulsar rotations prior to the null (see Fig. <ref>), which suggests that the null itself might be causally related to the preceding drifting behaviour. Of the remaining four sequences, one occurred at the end of our observation, and the other three were followed by either mode A0 or A1. These occurrences can also be seen as a drift reset, where the pulsar temporarily evolves to a faster drift rate and then returns to a slower one. This reset could also be due to a change in the magnetospheric conditions before a "tipping point" was reached.
It was also noted that not every null was preceded by mode A2, although most occurrences of mode A2 were followed by a null. We did not find any strong correlation between subpulse intensity and the onset or end of a null. The variation in pulse intensity across burst-to-null transitions was sometimes abrupt and sometimes smooth, showing no compelling evidence of an intensity dependence. It is worth noting that this kind of phenomenon, where nulls affect the drifting behaviour leading up to them, is relatively rare in contrast with the more commonly studied effect of nulls on burst sequences that come after them <cit.>. A deeper understanding of these rare cases can provide useful insights into the complex dynamics relating nulls and subpulse drifting. §.§ Memory Across Nulls Only a limited number of pulsars retain information about previous subpulses during nulls, as demonstrated by, e.g., <cit.> and <cit.>. This memory retention can provide valuable insights into the true nature of the nulling phenomenon and any correlation with subpulse drifting it may have. If intrinsic changes in the pulsar magnetosphere lead to both nulls and drift-rate modulations, studying the interactions between these phenomena could offer valuable insights into the mechanisms that trigger and facilitate transitions between the different states. Subpulse memory across nulls in drifting pulsars has generally been attributed to one of two reasons: (1) the polar cap continues to discharge, but the emission is not observed <cit.>, or (2) the subpulse ceases drifting for the duration of the null <cit.>. In the first case, the absence of radio emission is attributed to the lack of coherent structure, and the drift rate remains the same before and after the null. Thus, a part of the drift sequence would be missing, though one could still map the driftbands with similar drift rates pre- and post-null, showing `drift rate' memory across nulls. In the second scenario, the subpulse drifting is thought to resume at the phase where it left off, demonstrating a subpulse `phase memory' across nulls. As described in section <ref>, we have encountered possibilities of both `drift rate' memory and `subpulse phase' memory across nulls in the case of J0026–1955. §.§.§ Drift rate memory J0026–1955 exhibits a clear case of subpulse drift memory (top panel in Fig. <ref>), where the pulsar seems to remember the drift rate even after a short (∼20 pulses) null. The study described in <cit.> suggests that, during nulls, the discharge in the polar gap continues in an uninterrupted and stable manner rather than the sparks ceasing completely. This is similar to the case of J1840–0840, where the drift rate stays the same across the null, and an entire driftband is missing from the sequence <cit.>. PSR J1840–0840 also shows `subpulse phase' memory in conjunction with the `drift rate' memory. The cause of undetectable radiation during nulling can be attributed to the absence of the dominant coherence mechanism present during regular pulsar operation rather than a lack of particle flux from the polar cap <cit.>. Therefore, the subpulses on the polar cap may continue to drift during the null state, either at a similar or a different rotation speed. In this case, the phase of the subpulse after the null sequence can be anticipated based on the duration of the null.
In the case of J0026–1955, when the pulsar switches its emission back on after a null, the phase of the subpulse can be extrapolated from the drift rate model before the null. The predicted phase was well within the error range for drift sequences around short nulls. This finding supports the idea that drifting and sparking may still be operational during nulls, providing further evidence for the two scenarios previously observed in drifting pulsars. §.§.§ Subpulse phase memory As shown in the lower panel of Fig. <ref>, J0026–1955 also presents evidence of a `phase' memory across nulls, where the drift rate does not necessarily stay the same across the nulls. Still, the phase of the last subpulse before the null is almost identical to that of the first subpulse after the null. Similar behaviour is observed in a few other pulsars like B0809+74 and B0818–13, where the drift rate changes after null sequences <cit.>. During some of these interactions, the pulsars also seem to indicate some kind of phase memory, such that information regarding the phase of the last subpulse was retained during the null state. <cit.> propose that during the null, the drifting stops and the position of the sparks are remembered by the presence of a hotspot on the pulsar surface. This would imply that once the drifting resumes, the sparks will reform at their previous position. PSR B0031–07 was shown to retain the memory of its pre-null burst phase across short nulls <cit.>. In their study by <cit.>, PSR J1840–0840 presents a unique example of both `subpulse phase' and `drift rate' memory across null. In our data, we only had a few occurrences of memory across nulls of both kinds. A more detailed analysis of drift rate and subpulse phase memory around nulls needs to be conducted for J0026–1955 using longer observations, thus increasing the sample size of such events. §.§ Subpulse Drifting Model for J0026–1955 J0026–1955 presents an exciting and rare phenomenon of drift rate evolution (modes A1 and A2), in addition to regular subpulse drifting with a constant drift rate (modes A0 and B). The pulsar exhibits changes in drift rate over sequences, with mode A1 showing a trend towards a slower drift rate and mode A2 showing a trend towards a faster drift rate with increasing pulse number. Furthermore, irrespective of the modal transitions, we have also observed an inter-modal driftband connectivity in most mode-switching cases. We utilised the techniques described in section <ref> to analyse the evolution of drift rates across the driftbands and sequences. Measuring the slopes of individual driftbands was essential to scrutinise any changes in the drifting pattern. Furthermore, we modelled the drift rate behaviour across the drift sequences using a quartic polynomial. The quartic model fits were then used to understand the global drift rate variation for the different modes, as shown in Fig. <ref>. The figure shows that the modes A1 and A2 for J0026–1955 display a gradual evolution from their initial drift rates. It is noteworthy that the change of drift rate for mode A2 (∼ -0.5^∘/P_1 to ∼ -2.0^∘/P_1) is almost twice as much as the change of drift rate in mode A1 (∼ -1.3^∘/P_1 to ∼ -0.5^∘/P_1). Hereafter, we discuss the possible modifications/additions to the existing carousel model that can explain the observed unique behaviour of J0026–1955. 
§.§.§ Variable Spark Configuration in Carousel Model As discussed in <cit.>, <cit.>, and <cit.>, different modes and drift rates of a pulsar can be attributed to a carousel with varying numbers of sparks for each subpulse drifting mode. In such cases, the drift rate is observed to change abruptly. This sudden change cannot be ascribed to the carousel rotation rate, as it would imply significant magnetosphere reconfiguration over a short time scale. However, the drift rate change within a single pulsar rotation can be due to the reconfiguration of spark distribution. In the case of the evolutionary drifting modes of J0026–1955, the idea of a carousel with a constantly changing number of sparks to describe the changing drift rate may be counter-intuitive. Nevertheless, the non-evolutionary modes (A0 and B) may still have a fixed spark configuration. An alternative hypothesis which could involve a changing carousel rotation rate, would need a slow change in the spark carousel itself, where the gradual evolution is a signature of the spark configuration relaxing into a new arrangement after a spark suddenly appears or disappears from the carousel <cit.>. Such “relaxation” of the drift rate is reminiscent of the behaviour observed in PSR B0809+74 <cit.>, where, after a null, the pulsar would temporarily attain a faster drift rate before relaxing into a steady drift rate. They further suggest that a perturbation might alter the drift rate (and emission, thus causing a null), after which the drift rate recovers exponentially to its normal value. If the number of sparks in the carousel is changing, then the carousel will take some finite amount of time for the sparks to rearrange themselves, for example, into the new configuration in which the sparks are equidistant from each other. During this relaxation time, the angular speed of individual sparks may differ from the angular speed of the whole carousel. In that case, the observed drift rate at any one moment will only depend on the spark that is “under” the line of sight during any given rotation period. In this view, the observed drift rate can appear to change slowly without requiring the average carousel rotation speed to change, as long as the time scale for the spark reconfiguration is relatively long. If, as the above suggests, the sparks reconfigure themselves only slowly after one of the sparks either appears or disappears, one observational consequence of this is that the driftbands should always look connected, which appears to be the case for J0026–1955. This still remains true in the presence of aliasing, although the observed drift rate changes may be magnified. In comparison, <cit.> point out that connectivity of driftbands is impossible if the carousel rotation rate transitions from non-aliased to aliased regimes. §.§.§ Carousel Model in Partially Screened Gap Alternatively, a steady change in the carousel rotation rate (P_4) with a constant spark configuration is also a plausible explanation for the evolutionary drift modes of J0026–1955. In such a case, some conditions might change at the pulsar surface, making the spark carousel change its rotation speed. As a result, the carousel rotation rate varies smoothly, resulting in a gradual drift rate evolution. Below we provide a hypothesis, using the partially screened gap model <cit.>, for such a phenomenon that can gradually alter the carousel rotation rate. 
According to the PSG, a reverse flow of electrons towards the polar cap lead to an ion discharge in the gap region, which ultimately acts as a screen. Due to this screen, the electric field in the polar cap reduces, which reduces the spark velocity <cit.>. <cit.> also suggests that the dependence of the drift rate on the electric and magnetic fields boils down to a dependence on the variation of the accelerating potential across the polar cap. Hence, a variation in the polar gap results in an increase/decrease in the spark velocity, which in turn may also increase/decrease the carousel rotation rate (regardless of whether the drifting is aliased). Consequently, the carousel rotation rate will also affect P_3. In the case of no aliasing, the drift rate will be more negative for a faster carousel rotation rate, and the P_3 value will be smaller. Further, as discussed in <cit.>, a change in drift rate is reflected in the emission heights, where faster drift rate (lower P_3 value) emission is thought to originate from higher emission heights; and at lower emission heights, emission from a slower drift rate (higher P_3 value) is observed. For curvature radiation, the only way to observe a change in the emission height at one observation frequency is if there is a change in the magnetic field lines. For a changing emission height, the foot points of the magnetic field lines will be closer to or further from the (dipolar) magnetic pole, implying a changing size of the observed spark carousel. For the evolutionary drift modes of J0026–1955, where the drift rate is seen to vary drastically within a drift sequence, a change in emission height (at a fixed observational frequency) would automatically suggest a variable carousel rotation rate for a fixed carousel configuration. Extrapolating the PSG model, if the screening increases due to the ion discharge in the polar cap region, the electric field in the polar gap region will decrease, lowering the spark velocity and, consequently, lowering the carousel rotation rate. Given the proposed relation between P_4 and emission heights, the emission from a carousel with lower P_4 will come from a lower altitude in the pulsar magnetosphere. We observe a slow drift rate evolution in mode A1, which could be where the screening steadily increases, causing an evolution towards a slower carousel rotation rate. Similarly, mode A2 can be the case where the screening lowers, causing the drift rate to increase and the emission to come from higher altitudes. Since the typical mode length of observed mode A2 occurrences is not as long as mode A0 or A1, we can only conjecture that the screening cannot lower beyond a certain extent. For modes A1 and A2, a gradual change in the carousel rotation rate might be an acceptable model. This is also consistent with the general understanding that the rotation rate of the carousel cannot change its magnitude or direction abruptly during a single pulsar rotation (as it implies a rapid change in the pulsar magnetosphere), although it may exhibit a slow evolution. §.§.§ Revisiting Mode A2 - Null Correlation Our findings suggest a strong correlation between mode A2 and nulling across multiple observation epochs, as discussed in section <ref>. Such an association indicates the intrinsic nature of these changes and their close relationship with the nulling process. 
The gradual transition from slow to fast drift rate, via the addition/reduction of sparks as discussed in section <ref>, might trigger a "reset" of the pulsar's radio emission engine, causing the emission to cease for several pulses. J0026–1955 provides compelling evidence for a scenario in which the electromagnetic conditions in the magnetosphere region responsible for radio emission attain a null state after the considerable drift rate evolution seen in mode A2. Additionally, the stability of sparks for fast drift rates could be a reason for short sequences. This would imply that, for mode A2, the spark configuration becomes unstable once the drift rate is sufficiently fast due to an increase in the number of sparks, causing a null. The instability argument can also explain why mode B sequences only last for a short time compared to all the other modes. Alternatively, following the subpulse drifting model suggested in section <ref>, mode A2 might indicate the scenario where screening decreases, causing the electric field in the polar gap region to increase. This impacts the spark velocity and, thus, the carousel rotation rate. As the carousel rotation rate increases (due to a decrease in screening), P_3 decreases and the driftbands appear closer together in the pulse stack. Using the direct dependence between P_4 and emission heights (derived from the inverse relation between P_3 and emission heights), we can deduce that the emission comes from increasingly higher emission heights for mode A2. Alternatively, it is possible that, due to our line of sight, we cannot observe the pulsar past a certain emission height. The drift sequence ends very soon after mode A2 achieves a significantly low (i.e. fast) drift rate. Similarly, the line-of-sight constraint might explain why modes with lower (faster) drift rates, like mode B, have comparatively shorter mode lengths. § SUMMARY We have conducted a thorough analysis of the subpulse drifting behaviour of PSR J0026–1955 at 300-500 MHz from uGMRT observations. Our results and conclusions from this study are summarised below. * From our observations, we have found that the pulsar exhibits short- and long-duration nulls, with an estimated nulling fraction of ∼58% from our uGMRT observations. This nulling fraction is in stark contrast to the ∼77% observed at 155 MHz with the MWA. This disparity could be due to differences in the lengths of the observations, a shallow spectral index component, or a frequency dependence of nulling. * The pulsar exhibits unusual drifting behaviour, with both evolutionary and non-evolutionary drift rates. Further, we categorise the unusual subpulse drifting behaviour of this pulsar into two drifting modes: A and B, where mode B is a non-evolutionary mode with a faster drift rate. Mode A was further sub-categorised depending on its evolutionary behaviour. Mode A0 is a non-evolutionary mode with a drift rate 3-4 times slower than mode B. The lack of any curvature in A0 suggests that the viewing geometry must be such that the driftband curvature arising from purely geometric considerations (the "geometric curvature") is negligible across the pulse window. Mode A1 is an evolutionary mode which shows a smooth evolution of drift rate from fast to slow. On the other hand, the drift rate in mode A2 evolves from a slower to a faster drift rate. * The individual driftbands of J0026–1955 are not linear and have variable drift rates. We used a cubic smoothing spline estimate on individual driftbands and calculated the gradient (drift rate) at each pulse.
To understand the overall drift rate modulation, we fit the drift rates using a quartic polynomial. The model helped in recognising the inter-band and inter-mode variability in drift rates. Though a simplistic higher-order polynomial can describe the global evolution empirically, understanding the local drift rate variability requires more complex modelling. * The pulsar J0026–1955 shows an evolution in drift rate for modes A1 and A2. We advocate the following two models to explain their behaviour: * Variable spark configuration - The gradual evolution of drift rate in modes A1 and A2 could be caused by a slow change in the spark configuration as the carousel reconfigures into an optimal arrangement after a spark appears or disappears. During this reconfiguration, the angular speed of individual sparks may differ from the average carousel rotation. In this case, the observed evolution in drift rate will be due to the motion of the sparks "under" the line of sight as the entire carousel slowly recomposes. * Variable carousel rotation rate - In this model, we propose that the evolution in drift modes can be explained by a smoothly varying carousel rotation rate, with a direct correlation with emission height, rather than by changes in the number of sparks. The evolution in carousel rotation rate is thought to originate from an increase or decrease in screening in the polar gap region. Therefore, as the screening decreases/increases, the carousel rotates faster/slower (mode A2/A1), with the emission coming from higher/lower altitudes in the pulsar magnetosphere. A combination of the two suggested models might also be plausible, a possibility that should be explored in the future. * J0026–1955 shows robust evidence of subpulse memory across nulls. In multiple instances, we have found the possibility of `drift rate' and `subpulse phase' memory across nulls. We believe that there could be an uninterrupted, stable discharge in the polar gap during the null, whose emission is not observed either because the dominant coherence mechanism is absent or because a partially screened gap makes the generation of detectable radio emission difficult. * J0026–1955 exhibits an almost consistent behaviour, where a null often follows mode A2. We propose two hypotheses for this behaviour: * The transition from slow to fast drift rates due to the appearance/disappearance of a spark often triggers a "reset" of the pulsar's radio emission engine, culminating in a null. This reset is most likely to take place after an occurrence of mode A2, where the pulsar transitions from a slow to a fast drift rate. Not every null sequence is preceded by mode A2; however, most occurrences of mode A2 are followed by a null state. A null following mode A2 must relate to a defined pathway in the pulsar emission, as it is consistently observed across all independent data sets (from two epochs of observations). * Using the proposed carousel model, we advocate the idea that a decrease in screening increases the electric field in the polar gap region and impacts the spark velocity and carousel rotation rate. This results in a decrease in P_3 and closer driftbands in the pulse stack, suggesting that the emission comes from higher emission heights for mode A2. The sequence ends soon after mode A2 achieves a low (fast) drift rate, possibly because our line of sight cannot observe the pulsar beyond a certain emission height. § ACKNOWLEDGEMENTS We thank the anonymous referee for several useful comments that helped improve the paper.
PJ acknowledges the Senior Research Fellowship awarded by the Council of Scientific & Industrial Research, India. We thank S. Kudale for their help with conducting these observations. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. § DATA AVAILABILITY This paper includes data taken from the uGMRT in the 41st observing cycle.
http://arxiv.org/abs/2307.01463v1
Hybrid two-level MCMC for Bayesian Inverse Problems
Juntao Yang (math.NA, cs.NA)
Department of Mathematics, Southern University of Science and Technology, Shenzhen, China. E-mail: [email protected]
§ INTRODUCTION
Inverse problems arise from various scientific and engineering domains, such as recovering subsurface rock permeability, engineering design parameter optimization, or data assimilation in weather forecasting. Many of those problems are governed by physical laws described by Partial Differential Equations (PDEs) <cit.>. We consider Bayesian inverse problems formulated with Bayes' rule, which accounts for uncertainties. Bayesian inverse problems can be finite-dimensional, in which only a finite number of parameters are recovered <cit.>. However, in many scenarios, scientists and engineers are interested in recovering functions, which leads to infinite-dimensional Bayesian inverse problems <cit.>. Typically, solving such Bayesian inverse problems requires solving the forward problem repeatedly, whether a deterministic or a statistical approach is taken. In large-scale PDE-constrained problems, this means running an expensive numerical simulation iteratively. Moreover, when dealing with infinite-dimensional Bayesian inverse problems, a high-dimensional numerical system has to be handled after discretization, which brings further computational challenges due to the curse of dimensionality. The recent development of deep neural networks has opened promising opportunities for neural network based surrogate models to solve such Bayesian inverse problems effectively. We mention the following references on solving PDEs with neural networks, including Physics Informed Neural Networks (PINNs), the Fourier Neural Operator, DeepONet, etc.: MR3881695, DBLP:conf/iclr/LiKALBSA21, lu2021learning. Examples of PINNs have shown that they can learn the solution of parameterized PDEs and be used for design optimization, which is a typical inverse problem.
Literature on neural operators has also shown that neural networks can learn linear and nonlinear mappings between function spaces very well. The fast inference time of neural operators has motivated the idea of using trained neural operators as surrogate models. The outstanding performance of neural networks in high-dimensional problems makes the use of neural operators even more promising in comparison with classical surrogate models, e.g. polynomial approximations. With such progress, solving inverse problems that are intractable with classical methods becomes possible. Despite the advantages of neural network models as surrogate models, neural networks themselves are complex nonlinear models. There is a lack of a rigorous mathematical framework to conclude an error bound for a given model, contrary to the well-understood numerical methods. The majority of applications treat the network as a black box and check the results empirically. Though, in theory, there is a universal approximation theorem for continuous functions and another for operators <cit.>, there is often an empirical accuracy ceiling determined by the complexity of the model and the size of the data. Some numerical analysis of the expressivity of ReLU neural networks, which depends on the number of layers and connections, can be found in petersen2018optimal, aadebffb88b448c89c654fcdda52c02b. Since training a neural network usually amounts to solving a non-convex optimization problem, the theoretical expressivity is not guaranteed to be realised as an error rate in practice. As a result, the training of neural networks often requires trial and error guided by the empirical knowledge of experienced data scientists. It is known empirically that increasing AI model accuracy usually requires exponentially more data, a behaviour known as the power law <cit.>. It is still not guaranteed that increasing model complexity and data size will always lead to higher accuracy. This limitation hinders the adoption of neural network based models in practical applications, as the error in the surrogate model leads to approximation errors both in the Quantity of Interest and in the posterior distribution under the Bayesian framework. We propose a two-level hybrid MCMC approach to solve Bayesian inverse problems with neural network based surrogate models that takes account of the error of the neural network model. In essence, taking advantage of the mathematical framework of multilevel MCMC methods, this approach samples the difference between the neural network model and a numerical model of known accuracy using only a limited number of numerical samples. It reaches the same accuracy as an MCMC chain driven by expensive numerical solutions, correcting the errors of a plain MCMC approach based only on the neural network model, at limited additional computational cost. § BAYESIAN INVERSE PROBLEM In this section, we consider Bayesian inverse problems with a forward mathematical model that predicts the states u of a physical system given parameters z = {z_1, z_2, ..., z_n}. Here we assume the forward mathematical model is governed by partial differential equations, with z a finite number of parameters of the governing equation or coefficients of a spectral expansion of the initial condition or forcing. Some examples are subsurface flow models, which depend on the porosity of rocks; elasticity equations, which depend on the Lamé constants of the material; and Navier-Stokes equations, whose solutions depend on the initial condition and forcing.
To present the setting for the Bayesian inverse problem, we consider the inverse problem with a uniform prior. We denote the probability space by U=[-1, 1]^n and the sigma algebra on the domain U by Θ = ⊗_j=1^n ℬ([-1, 1]). We define the forward observation map 𝒢 : U →ℝ^k for all z ∈ U as 𝒢(z)= (𝒪_1(u), 𝒪_2(u), ..., 𝒪_k(u)), where u is the forward solution of the governing PDE and 𝒪_i, i=1,2,...,k, are k continuous bounded linear functionals. The forward map 𝒢(z) : U →ℝ^k is continuous as a mapping from the measurable space (U, Θ) to (ℝ^k, ℬ(ℝ^k)). Assumption <ref> is valid for most PDE-constrained systems. One can find proofs for elliptic equations and parabolic equations in MR2558668, MR3084684, MR4246090. Proofs for elasticity equations and Navier-Stokes equations can be found in <cit.>. Let ϑ be the observation noise. It is assumed Gaussian and independent of the parameters z. Thus the random variable ϑ takes values in ℝ^k and follows the normal distribution N(0, Σ), where Σ is a known k × k symmetric positive definite covariance matrix. The noisy observation δ is δ = 𝒢(z) + ϑ. We denote the posterior distribution by γ^δ and the prior distribution by γ. The posterior probability measure γ^δ is absolutely continuous with respect to the prior γ. The Radon-Nikodym derivative is given by d γ^δ/d γ∝exp(-Φ(z; δ)), where Φ is the potential function Φ(z; δ) = 1/2|δ - 𝒢(z)|_Σ^2. The proof of Proposition <ref> follows naturally from Assumption <ref>. A detailed proof can be found in <cit.>. Next we consider the continuity of the posterior measure in the Hellinger distance with respect to the observation data, which implies the well-posedness of the posterior measure. The Hellinger distance is defined as d_Hell(γ', γ”) = (1/2∫_U(√(d γ'/d γ) - √(d γ”/dγ))^2d γ)^1/2, where γ' and γ” are two measures on U which are absolutely continuous with respect to the measure γ. Following the results in MR2652785, MR2558668, MR4069815, we know that the Lipschitz continuity of the posterior measure with respect to the Hellinger distance holds under general conditions, hence the following proposition. The measure γ^δ depends locally Lipschitz continuously on the data δ with respect to the Hellinger metric: for every r > 0 and δ, δ' ∈ℝ^k with |δ|_Σ, |δ'|_Σ≤ r, there exists C = C(r) > 0 such that d_Hell(γ^δ, γ^δ') ≤ C(r)|δ-δ'|_Σ. § NUMERICAL DISCRETIZATION APPROXIMATION We consider classical numerical discretizations of such PDE-constrained forward problems, such as the finite volume method and the finite element method. Such numerical approximation errors usually depend on the mesh size. We make the following assumption on the numerical approximation error, which is true for problems like FE approximations of elliptic equations, parabolic equations, two-dimensional Navier-Stokes equations, etc. MR2050138, MR1043610. See <cit.> for the standard error rate of the finite volume approximation. We denote the solution space of u by V, where the solution space V is typically H^1 or H_0^1. For the numerical discretization of the forward problem, there is a constant C>0 such that for every l∈ℕ and for every z ∈ U the following error bound holds: ‖ u - u^l‖_V ≤ C 2^-l, where l is the level of discretization and the mesh size is h = 2^-l. With the numerical discretization of the forward equation, we consider the approximation of the posterior measure. We denote the numerical approximation of the forward map 𝒢^l: U →ℝ^k by 𝒢^l(z)= (𝒪_1(u^l), 𝒪_2(u^l), ..., 𝒪_k(u^l)). The potential function is now Φ^l(z; δ) = 1/2|δ - 𝒢^l(z)|_Σ^2.
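As a minimal illustration of the potential defined above, the unnormalised posterior density with respect to the prior can be evaluated as follows, using the covariance-weighted norm |v|_Σ^2 = v^T Σ^-1 v (the usual convention); the forward map G is a placeholder callable rather than a specific solver.

```python
import numpy as np

def potential(z, delta, G, Sigma):
    """Phi(z; delta) = 0.5 * |delta - G(z)|_Sigma^2 with |v|_Sigma^2 = v^T Sigma^{-1} v."""
    r = np.asarray(delta) - np.asarray(G(z))
    return 0.5 * r @ np.linalg.solve(Sigma, r)

def unnormalised_posterior_density(z, delta, G, Sigma):
    """d gamma^delta / d gamma, up to the normalising constant Z."""
    return np.exp(-potential(z, delta, G, Sigma))
```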
And we define the approximate posterior probability measure γ^l,δ on the measurable space (U, Θ) as d γ^l,δ/d γ∝exp(-Φ^l(z; δ)). Hence, similar to Proposition <ref>, we have the following proposition for the approximate posterior probability measure. There exists a positive constant C(δ) depending only on the data δ such that for every l, d_Hell(γ^δ, γ^l,δ) ≤ C(δ) 2^-l. § MACHINE LEARNING BASED SURROGATE MODEL One of the key challenges for Bayesian inverse problems is the high computational cost. In particular, statistical approaches such as the Markov chain Monte Carlo (MCMC) method require solving the forward problem repeatedly. Due to the sequential nature of the MCMC method, the forward problem has to be solved millions of times, with no trivial way to parallelize. Surrogate models have been one of the solutions to accelerate the process. With the recent development of neural networks and deep learning, machine learning based surrogate models have been found to have good potential for accurately approximating high-dimensional forward problems <cit.>. Recent research on Physics Informed Neural Networks and neural operators has demonstrated such potential MR3881695, MR4395087, DBLP:conf/iclr/LiKALBSA21. However, domain experts from different communities often criticise the lack of physics conservation in machine learning models. Even with physics informed models, training a machine learning model that is accurate enough remains challenging because the physics enters only as a soft constraint. In addition, the limited theoretical understanding of modern deep learning models is a concern for researchers who require model confidence and estimated error rates. Examples of the residual errors of neural operators in Bayesian inverse problems can be found in <cit.>. We denote by 𝒢: U →ℝ^k a nonlinear map defined by a trained neural network. We assume the neural network model is trained with data generated by classical numerical methods, e.g. the finite element method or the finite difference method. We assume the objective is to solve the inverse problem with an error less than or equal to 𝒪(2^-L). The procedure to solve such an inverse problem with machine learning acceleration is as follows. First, we create a numerical model used to generate the data, which is discretized with a fine mesh of mesh size h. To achieve the targeted accuracy, we take h = 𝒪(2^-L). According to section <ref>, this achieves an accuracy of 𝒪(2^-L). Second, we use the generated numerical data as training data to train a neural network model. Third, we use the trained machine learning model as a surrogate model to quickly run an MCMC chain. Lastly, according to the theory of MCMC, we will get an estimated expectation of the Quantity of Interest within the desired error if the machine learning model is as accurate as the numerical model. However, empirically, we are aware that the trained neural network can hardly achieve the same level of accuracy as the numerical model used to generate the training data. Hence we make the following assumption. Given a neural network trained by data generated by a numerical model with mesh size h = 2^-L, we have ‖ G(x) - u ‖_V ≤ 2^-(L-ϵ), where x is the input, u is the corresponding forward solution, and we expect ϵ to be small for a well-defined and well-trained neural network. We expect ϵ to be small in practice when we have a reasonably good machine learning model trained with sufficient data.
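In practice, the ϵ in the assumption above can be gauged empirically by comparing the trained surrogate with the level-L numerical reference on held-out parameter samples. A rough sketch, with illustrative names and using the worst observed validation error as a proxy for the uniform bound, is:

```python
import numpy as np

def empirical_epsilon(surrogate, reference, val_inputs, L):
    """Estimate epsilon in the assumed bound  error <= 2^-(L - epsilon)
    from a validation set: epsilon = log2(max error) + L."""
    errs = [np.linalg.norm(np.asarray(surrogate(z)) - np.asarray(reference(z)))
            for z in val_inputs]
    worst = max(errs)
    return np.log2(worst) + L, worst
```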
In order to mitigate the shortcomings of machine learning based surrogate models, one can increase the accuracy of the numerical model used to generate the training data, and generate more of it, in order to reach the desired accuracy. However, that increases the computational cost manyfold. For a two dimensional problem, each refinement of the mesh increases the cost of the numerical model by at least a factor of 4, and by at least a factor of 8 for three dimensional problems, not to mention more challenging problems whose computational cost does not scale linearly with the number of unknowns. In addition to the cost of finer numerical solves, the need for more data points further increases the computational cost, and the increased data resolution and size also raise the cost of training the machine learning model. Even if we are willing to pay the cost, <cit.> has shown that there is usually a limit on how accurate certain machine learning models can be. <cit.> also proposed a residual-based error correction to deal with the problem, where a variational model is used to correct the solution of each inference. In the next section, we propose a different approach that corrects the error statistically. § HYBRID TWO LEVEL MCMC In this section, we propose the hybrid two level MCMC method for correcting the error of machine learning accelerated surrogate models in Bayesian inverse problems. To tackle the high computational cost of MCMC with a numerical solver, Multilevel Markov Chain Monte Carlo (MLMCMC) was developed, which in theory is optimal and reduces the computational cost by two orders of magnitude for various problems MR3084684, MR4246090, MR4523340. Inspired by the multilevel approach, we describe the hybrid two level MCMC method for sampling the posterior distribution of the Bayesian inverse problem, in which the base chain is run with a machine learning accelerated surrogate model with a small error, and another chain is run with a numerical model of known accuracy to correct the statistical error of the machine learning driven MCMC chain. Similar to the telescoping argument for the multilevel Monte Carlo method <cit.>, we use the machine learning MCMC chain as the base chain and use another chain to sample the difference between the machine learning model and the numerical model. We denote the Quantity of Interest by Q, the machine learning approximate posterior distribution by γ^ML, and the numerical approximate posterior distribution by γ^num. With the target accuracy of 𝒪(2^-L) and Assumption <ref>, γ^num and γ^ML are equivalent to γ^L,δ and γ^L-ϵ,δ from section <ref>, respectively. With the two level approach, we can rewrite the numerical approximation of the expected Quantity of Interest as 𝔼^γ^num[Q] = 𝔼^γ^num[Q] - 𝔼^γ^ML[Q] + 𝔼^γ^ML[Q] = (𝔼^γ^num-𝔼^γ^ML)[Q] + 𝔼^γ^ML[Q]. To derive a computable estimator with MCMC chains, we observe that the first term in (<ref>) can be transformed as (𝔼^γ^num-𝔼^γ^ML)[Q] = 1/Z^num∫_U exp(-Φ^num) Q dγ - 1/Z^ML∫_U exp(-Φ^ML) Q dγ = 1/Z^num∫_U exp(-Φ^num)(1-exp(Φ^num-Φ^ML)) Q dγ + (Z^ML/Z^num-1) 1/Z^ML∫_U exp(-Φ^ML) Q dγ. We note that the constant (Z^ML/Z^num-1) can be expanded as follows, which can be approximated with an MCMC chain: (Z^ML/Z^num-1) = 1/Z^num∫_U (exp(Φ^num-Φ^ML)-1) exp(-Φ^num) dγ. Hence with (<ref>), (<ref>) and (<ref>), we can define the hybrid two level MCMC estimator 𝐄^hybrid[Q] of 𝔼^γ^δ[Q] as 𝐄^hybrid[Q] = 𝐄^γ^num_M_num[(1-exp(Φ^num-Φ^ML))Q] + 𝐄^γ^num_M_num[exp(Φ^num-Φ^ML)-1] ·𝐄^γ^ML_M_ML[Q] + 𝐄^γ^ML_M_ML[Q]. 
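A minimal sketch of how the estimator just defined might be assembled from the two chains is given below; it assumes samples have already been drawn from γ^num (a short chain) and γ^ML (a long chain), for instance with the Metropolis sampler sketched earlier, and the function names are ours, not the paper's.

import numpy as np

def hybrid_two_level_estimate(z_num, z_ml, phi_num, phi_ml, qoi):
    """Hybrid two-level estimate of the expected quantity of interest.

    z_num : samples from the numerical-solver posterior gamma^num (shape M_num x n)
    z_ml  : samples from the surrogate posterior gamma^ML        (shape M_ML  x n)
    phi_num, phi_ml : callables returning the potentials Phi^num(z) and Phi^ML(z)
    qoi   : callable returning the quantity of interest Q(z)
    """
    # Terms evaluated on the (short) numerical chain.
    d_phi = np.array([phi_num(z) - phi_ml(z) for z in z_num])
    q_num = np.array([qoi(z) for z in z_num])
    term_1 = np.mean((1.0 - np.exp(d_phi)) * q_num)   # estimate of E_num[(1 - e^(dPhi)) Q]
    const = np.mean(np.exp(d_phi) - 1.0)              # estimate of Z^ML / Z^num - 1

    # Terms evaluated on the (long) surrogate chain.
    q_ml = np.array([qoi(z) for z in z_ml])
    base = np.mean(q_ml)                              # E_ML[Q], the base estimate

    return term_1 + const * base + base

# Hypothetical usage: run a long chain with the surrogate and a short one with the solver,
# e.g. with the metropolis_uniform_prior sketch above, then combine them:
#   chain_ml,  _ = metropolis_uniform_prior(phi_ml,  n_dim, n_samples=M_ml)
#   chain_num, _ = metropolis_uniform_prior(phi_num, n_dim, n_samples=M_num)
#   estimate = hybrid_two_level_estimate(chain_num, chain_ml, phi_num, phi_ml, qoi)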
Up to this point, we have constructed the hybrid two level MCMC method for Bayesian inverse problems. The hybrid two level MCMC approach is surprisingly simple. Next we present the error analysis, which shows that the method effectively corrects the estimator error caused by the machine learning model. To estimate the error, we decompose it into three components as follows. The estimator error satisfies 𝔼^γ^δ[Q] - 𝐄^hybrid[Q] = I + II + III, where I := 𝔼^γ^δ[Q] - 𝔼^γ^num[Q], II := 𝔼^γ^ML[Q] - 𝐄^γ^ML_M_ML[Q] and III := 𝔼^γ^num[(1-exp(Φ^num-Φ^ML))Q] - 𝐄^γ^num_M_num[(1-exp(Φ^num-Φ^ML))Q] + 𝔼^γ^num[exp(Φ^num-Φ^ML)-1] ·𝔼^γ^ML[Q] - 𝐄^γ^num_M_num[exp(Φ^num-Φ^ML)-1] ·𝐄^γ^ML_M_ML[Q], respectively. To see this decomposition, we observe 𝔼^γ^δ[Q] - 𝐄^hybrid[Q] = 𝔼^γ^δ[Q] - 𝔼^γ^num[Q] + 𝔼^γ^num[Q] - 𝐄^hybrid[Q] = I + 𝔼^γ^num[Q] - 𝐄^hybrid[Q]. With equations (<ref>) and (<ref>), we have 𝔼^γ^δ[Q] - 𝐄^hybrid[Q] = I + 𝔼^γ^num[Q] - 𝐄^hybrid[Q] = I + 𝔼^γ^ML[Q] + 𝔼^γ^num[(1-exp(Φ^num-Φ^ML))Q] + 𝔼^γ^num[exp(Φ^num-Φ^ML)-1] ·𝔼^γ^ML[Q] - 𝐄^γ^ML_M_ML[Q] - 𝐄^γ^num_M_num[(1-exp(Φ^num-Φ^ML))Q] - 𝐄^γ^num_M_num[exp(Φ^num-Φ^ML)-1] ·𝐄^γ^ML_M_ML[Q] = I + II + III. Lastly we derive an error bound by estimating the three error terms in Proposition <ref> individually. For the first term I, we obtain from Assumption <ref> and Proposition <ref> the bound I := 𝔼^γ^δ[Q] - 𝔼^γ^num[Q] ≤ C(δ) 2^-L. For the second term, we obtain the bound from Proposition 5 in <cit.>, II := 𝔼^γ^ML[Q] - 𝐄^γ^ML_M_ML[Q] ≤ M_ML^-1/2, which is a standard result from the theory of Markov chains. Now we estimate the last term. With the inequality |exp(x) - exp(y)| ≤ |x-y|(exp(x)+exp(y)), we have sup_z ∈ U |1 - exp(Φ^num - Φ^ML)| ≤ sup_z ∈ U |Φ^num - Φ^ML|(1+exp(Φ^num - Φ^ML)) ≤ C (1+2^ϵ)2^-L. Hence we have 𝐄[{𝔼^γ^num[(1-exp(Φ^num-Φ^ML))Q] - 𝐄^γ^num_M_num[(1-exp(Φ^num-Φ^ML))Q]}^2] ≤ C M_num^-1 (1+2^ϵ)^2 2^-2L. Similarly we have 𝐄[{𝔼^γ^num[exp(Φ^num-Φ^ML)-1] ·𝔼^γ^ML[Q] - 𝐄^γ^num_M_num[exp(Φ^num-Φ^ML)-1] ·𝐄^γ^ML_M_ML[Q]}^2] ≤ C 𝐄[{𝔼^γ^num[exp(Φ^num-Φ^ML)-1] - 𝐄^γ^num_M_num[exp(Φ^num-Φ^ML)-1]}^2] + C sup_z ∈ U|exp(Φ^num-Φ^ML)-1|^2 ·𝐄[{𝔼^γ^ML[Q] - 𝐄^γ^ML_M_ML[Q]}^2] ≤ C M_num^-1 (1+2^ϵ)^2 2^-2L + C M_ML^-1 (1+2^ϵ)^2 2^-2L. Combining the two, we obtain the overall error estimate for III: III ≤ C M_num^-1 (1+2^ϵ)^2 2^-2L + C M_ML^-1 (1+2^ϵ)^2 2^-2L. Up to now, we are still free to choose the numbers of samples M_num and M_ML. To balance the errors I, II and III, we can choose the sampling numbers M_ML = C 2^2L and M_num = C (1+2^ϵ)^2. With M_ML = C 2^2L and M_num = C(1+2^ϵ)^2, we have the following estimator error for the hybrid two level MCMC method: 𝐄[|𝔼^γ^δ[Q] - 𝐄^hybrid[Q]|] ≤ C(δ)2^-L. § TWO LEVEL MCMC WITH GAUSSIAN PRIOR In the preceding sections, we discussed Bayesian inverse problems with uniform priors. However, the multilevel approach used in section <ref> will diverge in the case of a Gaussian prior <cit.>. For completeness, we discuss the hybrid two level MCMC method in the case of a Gaussian prior in this section. We consider a forward model that predicts the state u of a physical system given a parameter z. 
Typically, for Bayesian inverse problems in infinite dimensions we have the following random field: K(z) = K_* + exp(K + ∑_j=1^n z_j ψ_j), where z = (z_1, z_2, ..., z_n) ∈ℝ^n and z_j ∼ N(0, 1). Such log-normal priors can be found in applications such as linear elasticity modeling and subsurface modeling, where the physical quantity of interest is positive. However, we also see the following type of random field, for example for the initial conditions and the random forcing in applications of the Navier-Stokes equations: K(z) = K + ∑_j=1^n z_j ψ_j. We work with the following assumption, following the Bayesian inverse problem setup with Gaussian prior in MR4069815, MR4246090, MR4523340. The functions K_*, K∈ L^∞(D) are non-negative. The functions ψ_j are in L^∞(D), and 𝐛 := (‖ψ_j‖_L^∞(D))_j=1,2,...,n∈ℓ^1. We denote the standard Gaussian measure on ℝ by γ_1. Hence we have γ = ⊗_j=1^n γ_1 on (ℝ^n, ℬ(ℝ^n)). With Assumption <ref>, we know that the set Γ_b := {z ∈ℝ^n: ∑_j=1^n b_j |z_j| < ∞}∈ℬ(ℝ^n) has full Gaussian measure, i.e. γ(Γ_b) = 1. Often, such a Gaussian prior setup is more practical than a uniform prior as an uninformative prior in Bayesian inverse problems, hence it is a more interesting setup to investigate. The functions K_*, K and ψ_j belong to W^1, ∞(D) and 𝐛 := (‖ψ_j‖_W^1, ∞(D))_j=1,2,...,n∈ℓ^1. As a consequence, ‖u(z) - u^l(z)‖_V ≤ C exp(c∑_j=1^n b_j |z_j|)(1+∑_j=1^n b_j |z_j|) 2^-l. The numerical error estimate in Assumption <ref> might not hold for all problems with a Gaussian prior; it is problem dependent. However, it is a typical error rate found in problem setups such as elliptic equations with unknown coefficients, diffusion problems, and parabolic problems with unknown coefficients, as referenced in hoang2016convergence, MR4246090. This particular form of approximation error is of the most interest, as the exponential term in the error would lead to divergence of the two level hybrid MCMC approach if left untreated. With Assumption <ref>, we have ‖u^num(z) - u^ML(z)‖_V ≤ C(1+2^ϵ) exp(c∑_j=1^n b_j |z_j|)(1+∑_j=1^n b_j |z_j|) 2^-l, |Φ^num(z; δ) - Φ^ML(z; δ)| ≤ C(1+2^ϵ) exp(c∑_j=1^n b_j |z_j|)(1+∑_j=1^n b_j |z_j|) 2^-l. Next we derive the two level MCMC approach for the Gaussian prior. In order to avoid the unboundedness coming from the exponential term, we make use of the following switching function: I(z) = 1 if Φ^num(z; δ) - Φ^ML(z; δ) ≤ 0, and I(z) = 0 otherwise. With the switching function, we can decompose the difference between the expected quantities of interest as (𝔼^γ^num-𝔼^γ^ML)[Q] = 1/Z^num∫_U exp(-Φ^num) Q I(z) dγ - 1/Z^ML∫_U exp(-Φ^ML) Q I(z) dγ + 1/Z^num∫_U exp(-Φ^num) Q (1-I(z)) dγ - 1/Z^ML∫_U exp(-Φ^ML) Q (1-I(z)) dγ = 1/Z^num∫_U exp(-Φ^num)(1-exp(Φ^num-Φ^ML)) Q I(z) dγ + (Z^ML/Z^num-1) 1/Z^ML∫_U exp(-Φ^ML) Q I(z) dγ + 1/Z^ML∫_U exp(-Φ^ML)(exp(Φ^ML-Φ^num)-1) Q (1-I(z)) dγ + (1-Z^num/Z^ML) 1/Z^num∫_U exp(-Φ^num) Q (1-I(z)) dγ. The two constants (Z^ML/Z^num-1) and (1-Z^num/Z^ML) can be estimated with (Z^ML/Z^num-1) = 𝔼^γ^num[(exp(Φ^num-Φ^ML)-1)I(z)], (1-Z^num/Z^ML) = 𝔼^γ^ML[(1-exp(Φ^ML-Φ^num))(1-I(z))]. For simplicity, we denote the following terms by A_1, A_2, A_3, A_4, A_5 and A_6: A_1 = (1-exp(Φ^num-Φ^ML)) Q I(z), A_2 = (exp(Φ^ML-Φ^num)-1) Q (1-I(z)), A_3 = Q I(z), A_4 = Q (1-I(z)), A_5 = (exp(Φ^num-Φ^ML)-1)I(z), A_6 = (1-exp(Φ^ML-Φ^num))(1-I(z)). 
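The switching function and the terms A_1 through A_6 can be computed sample-wise as in the rough sketch below; the clipping inside the exponentials is only a numerical-safety device (it leaves the values of the A terms unchanged, since each exponential is always multiplied by the matching indicator), and all names are our assumptions.

import numpy as np

def switching(d_phi):
    """I(z) = 1 when Phi^num(z) - Phi^ML(z) <= 0, else 0 (d_phi = Phi^num - Phi^ML)."""
    return (d_phi <= 0.0).astype(float)

def a_terms(d_phi, q):
    """Sample-wise values of A_1, ..., A_6 for arrays d_phi = Phi^num - Phi^ML and q = Q(z).
    The switching function keeps every exponent that actually contributes non-positive,
    so no term can blow up even when the potentials differ greatly under the Gaussian prior."""
    ind = switching(d_phi)
    ex_neg = np.exp(np.minimum(d_phi, 0.0))    # equals exp(d_phi) wherever ind == 1
    ex_pos = np.exp(np.minimum(-d_phi, 0.0))   # equals exp(-d_phi) wherever ind == 0
    a1 = (1.0 - ex_neg) * q * ind
    a2 = (ex_pos - 1.0) * q * (1.0 - ind)
    a3 = q * ind
    a4 = q * (1.0 - ind)
    a5 = (ex_neg - 1.0) * ind
    a6 = (1.0 - ex_pos) * (1.0 - ind)
    return a1, a2, a3, a4, a5, a6

In the estimator given next, A_1, A_5 and A_4 are averaged over the numerical chain, while A_2, A_3, A_6 and Q are averaged over the surrogate chain.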
Now we have the final estimator 𝐄^hybrid[Q] = 𝐄^γ^num_M_num[A_1] + 𝐄^γ^num_M_num[A_5] ·𝐄^γ^ML_M_ML[A_3] + 𝐄^γ^ML_M_ML[A_2] + 𝐄^γ^ML_M_ML[A_6] ·𝐄^γ^num_M_num[A_4] + 𝐄^γ^ML_M_ML[Q]. With M_ML = C 2^2L and M_num = C(1+2^ϵ)^2, we have the following estimator error for the hybrid two level MCMC method: 𝐄[|𝔼^γ^δ[Q] - 𝐄^hybrid[Q]|] ≤ C(δ)2^-L. We decompose the error into three terms I, II, III, I := 𝔼^γ^δ[Q] - 𝔼^γ^num[Q], II := 𝔼^γ^ML[Q] - 𝐄^γ^ML_M_ML[Q], III := 𝔼^γ^num[A_1] - 𝐄^γ^num_M_num[A_1] + 𝔼^γ^num[A_5] ·𝔼^γ^ML[A_3] - 𝐄^γ^num_M_num[A_5] ·𝐄^γ^ML_M_ML[A_3] + 𝔼^γ^ML[A_2] - 𝐄^γ^ML_M_ML[A_2] + 𝔼^γ^ML[A_6] ·𝔼^γ^num[A_4] - 𝐄^γ^ML_M_ML[A_6] ·𝐄^γ^num_M_num[A_4]. Similar to section <ref>, we have the following error bounds: I < C(δ) 2^-L, II < M_ML^-1/2, III < C (1+2^ϵ)^2 M_num^-1 2^-2L. Hence choosing the following sampling numbers gives us the desired result: M_ML = 2^2L, and M_num = (1+2^ϵ)^2. The theorem and the preceding assumptions are valid for log-normal priors with elliptic equations, diffusion equations, and parabolic equations. To the best of our knowledge, a proof for the two-dimensional Navier-Stokes equations is not available. However, some experimental results also suggest that the theory of multilevel MCMC with a Gaussian prior works for the two-dimensional Navier-Stokes equations. § NUMERICAL EXPERIMENTS In this section, we show some numerical experiments to demonstrate the performance of the proposed two-level hybrid MCMC approach. §.§ Elliptic equation with uniform prior We consider the Bayesian inverse problem with the forward model governed by the following elliptic equation: ∇· (K(x) ∇ u(x)) = cos(2π x_1) sin(2 π x_2), u(x_1=0) = 0, u(x_1=1)=1, ∂ u/∂ x_2 (x_2 = 0) = ∂ u/∂ x_2 (x_2 = 1) = 0. The forward observation is the Eulerian observation of u at equally distanced fixed positions as shown in Figure <ref>. Thirty-six equally distanced observations are captured from a random realization of the forward model with additional Gaussian observation noise with zero mean and variance σ^2 = 0.001. The quantity of interest for the problem is K(x), with the coefficient z being uniformly distributed: K(x) = z cos(2 π x_1) sin(2 π x_2) + 2.0, z ∼ U[0, 1]. We solve the above equation with the finite element method on a uniformly spaced mesh. A 32 × 32 mesh resolution is used in this experiment. We randomly generated 8000 samples with the FEM solver. The 8000 data are partitioned into 4000 training data, 2000 validation data, and 2000 test data to train a fully connected ReLU neural network with two hidden layers. Each hidden layer has 512 nodes. The neural network is trained with Adam for 10000 epochs. We estimate the numerical error with 𝔼_z ∼ U[0, 1](‖u(z) - u_ML(z)‖_L^2). To estimate the error, we use a high-fidelity (1024 × 1024) solver to generate reference solutions and compute the L^2 errors against them. Gauss-Legendre quadrature is used to estimate the expectation value. We obtain the following estimates of the expected L^2 error: 𝐄(‖u_L=10 - u_L=5‖_L^2) = 5.576 × 10^-5 and 𝐄(‖u_L=10 - u_ML‖_L^2) = 3.132 × 10^-4. With these errors, we have an estimate of ϵ = 2.49. We run three experiments: with a numerical solver based MCMC chain, an ML model based MCMC chain, and the proposed hybrid approach. The results, averaged over 5 MCMC runs, are summarized in Table <ref>, from which we can see that the results from the hybrid approach are comparable to the pure numerical solver based MCMC. A reference value computed with a 32-point Gauss-Legendre quadrature and a fine-meshed numerical solver (1024 × 1024) is also included. 
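The reference value mentioned above can be reproduced in spirit with the short sketch below, which evaluates the posterior mean of a scalar quantity under the U[0, 1] prior by Gauss-Legendre quadrature; the forward solver, the data vector, and the choice of quantity of interest in the usage comment are placeholders rather than the exact setup of the experiment.

import numpy as np

def gauss_legendre_posterior_mean(potential, qoi, n_points=32):
    """Reference posterior mean of Q under a U[0, 1] prior on the scalar z, computed as
    E[Q] = int Q(z) exp(-Phi(z)) dz / int exp(-Phi(z)) dz with Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)   # nodes on [-1, 1]
    z = 0.5 * (nodes + 1.0)                                       # map to [0, 1]
    w = 0.5 * weights
    dens = np.array([np.exp(-potential(zi)) for zi in z])         # unnormalized posterior
    q = np.array([qoi(zi) for zi in z])
    return np.sum(w * q * dens) / np.sum(w * dens)

# Hypothetical usage, where forward_solve(z) wraps the fine-mesh solver and returns the
# 36 observations, delta is the data vector, and the noise variance is 0.001:
#   potential = lambda z: 0.5 * np.sum((delta - forward_solve(z)) ** 2) / 0.001
#   ref = gauss_legendre_posterior_mean(potential, qoi=lambda z: z, n_points=32)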
§.§ Elliptic equation with log-normal prior We consider the same Bayesian inverse problem, with the forward model governed by the elliptic equation shown in Section <ref>: ∇· (K(x) ∇ u(x)) = cos(2π x_1) sin(2 π x_2), u(x_1=0) = 0, u(x_1=1)=1, ∂ u/∂ x_2 (x_2 = 0) = ∂ u/∂ x_2 (x_2 = 1) = 0. Thirty-six equally distanced observations are captured from a random realization of the forward model with additional Gaussian observation noise with zero mean and variance σ^2 = 0.2. Now we consider K(x) driven by a Gaussian random field. We sample the Gaussian random field from the following bi-laplacian Gaussian prior: 𝒜 m = γ∇·(Θ∇ m) + δ m in Ω, (Θ∇ m) ·n + β m = 0 on ∂Ω, where we have γ=0.1, δ=0.5 and β = √(γδ). In the numerical experiment, we obtain the precision matrix 𝒜 by discretizing the above operator, and set K(x) = exp(m). Several examples of the random field samples generated in this experiment are shown in Figure <ref>. We solve the above equation with the finite element method on a uniformly spaced mesh. A 32 × 32 mesh resolution is used in this experiment. We randomly generated 8000 samples with the FEM solver. The 8000 data are partitioned into 4000 training data, 2000 validation data, and 2000 test data to train a convolutional neural network. The convolutional neural network consists of 3 encoding layers, 1 fully connected layer, and 3 decoding layers. The neural network is trained with Adam for 10000 epochs. We estimate the numerical error with 𝔼_m ∼ N(0, 𝒜^-1)(‖u_L=5(m) - u_ML(m)‖_L^2/‖u_L=5(m)‖_L^2). In this experiment, we obtained an error of 0.102, which gives a rough estimate of ϵ = 1.179. We run three experiments: with a numerical solver based MCMC chain, an ML model based MCMC chain, and the proposed hybrid approach. The results, averaged over 5 MCMC runs, are summarized in Figure <ref>. §.§ Nonlinear reaction-diffusion problem with Gaussian prior We consider the Bayesian inverse problem with the forward model governed by the following reaction-diffusion equation: ∇· (K(x) ∇ u(x)) + u^3 = 0, u(x_1=0) = 0, u(x_1=1)=1, ∂ u/∂ x_2 (x_2 = 0) = ∂ u/∂ x_2 (x_2 = 1) = 0. Thirty-six equally distanced observations are captured from a random realization of the forward model with additional Gaussian observation noise with zero mean and variance σ^2 = 0.2. Again, K(x) is driven by a Gaussian random field sampled from the same bi-laplacian Gaussian prior: 𝒜 m = γ∇·(Θ∇ m) + δ m in Ω, (Θ∇ m) ·n + β m = 0 on ∂Ω, where we have γ=0.1, δ=0.5 and β = √(γδ). In the numerical experiment, we obtain the precision matrix 𝒜 by discretizing the above operator and set K(x) = exp(m). We solve the above equation with the finite element method on a uniformly spaced mesh. A 32 × 32 mesh resolution is used in this experiment. We randomly generated 4000 samples with the FEM solver. The 4000 data are partitioned into 2000 training data, 1000 validation data, and 1000 test data to train a U-net neural network. The U-net consists of 3 levels, with feature-map dimensions of 32 × 32, 16 × 16, and 8 × 8, respectively. The neural network is trained with Adam for 10000 epochs. We run three experiments: with a numerical solver based MCMC chain, an ML model based MCMC chain, and the proposed hybrid approach. The results, averaged over 5 MCMC runs, are summarized in Fig. <ref>. The results show a reduction in the maximum difference from 1.2 to 0.5 when using 2% of the original number of numerical samples, and to 0.35 when using 10%. 
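For readers who want to generate prior samples of the kind shown in the figures, the following is a rough stand-in: it draws fields m ~ N(0, 𝒜^-1) using a plain finite-difference Neumann Laplacian in place of the FEM-discretized operator with the Robin boundary term, ignores mass-matrix weighting, and reuses the γ and δ values above purely for illustration.

import numpy as np

def neumann_laplacian_1d(n):
    """1D finite-difference discretization of -d^2/dx^2 with homogeneous Neumann BCs."""
    main = 2.0 * np.ones(n)
    main[0] = main[-1] = 1.0
    L = np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    return L * (n - 1) ** 2   # scale by 1/h^2 with h = 1/(n-1)

def sample_gaussian_field(n=32, gamma=0.1, delta=0.5, n_samples=3, seed=0):
    """Draw samples m ~ N(0, A^{-1}) where A = gamma * (-Laplacian) + delta * I is a
    finite-difference stand-in for the precision operator described in the text
    (Theta = I, Robin boundary term replaced by plain Neumann conditions)."""
    rng = np.random.default_rng(seed)
    L1 = neumann_laplacian_1d(n)
    I1 = np.eye(n)
    A = gamma * (np.kron(I1, L1) + np.kron(L1, I1)) + delta * np.eye(n * n)
    C = np.linalg.cholesky(A)                    # A = C C^T, C lower triangular
    xi = rng.standard_normal((n * n, n_samples))
    # If C^T m = xi with xi ~ N(0, I), then Cov(m) = C^{-T} C^{-1} = A^{-1}, as required.
    m = np.linalg.solve(C.T, xi)
    fields = m.T.reshape(n_samples, n, n)
    return fields, np.exp(fields)                # K = exp(m) as in the experiments

if __name__ == "__main__":
    m_samples, K_samples = sample_gaussian_field()
    print(m_samples.shape, K_samples.min(), K_samples.max())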
§.§ Two dimensional Navier-Stokes equations with Gaussian prior We consider the Bayesian inverse problem with the forward model governed by the two dimensional Navier-Stokes equations in vorticity form, ∂_t ω(x, t) + u(x, t) ·∇ω(x, t) - νΔω(x, t) = f, for x∈𝕋^2, ∇· u(x, t) = 0, for x∈𝕋^2, ω(x,0) = ω_0, with periodic boundary conditions and the random forcing f = 0.1 (sin(2 π (x_1+x_2))+cos(2 π (x_1+x_2))). Here ω is the vorticity, u is the velocity, and ν = 0.001. Thirty-six equally distanced observations are captured from a random realization of the forward model with additional Gaussian observation noise with zero mean and variance σ^2 = 1. The unknown in this experiment is the initial vorticity ω_0, which we model as a Gaussian random field sampled from the distribution 𝒩(0, 3^3/2(-Δ+9 I)^-1). We solve the above equations with a pseudo-spectral method and Crank-Nicolson time stepping. A 64 × 64 resolution is used in this experiment. We randomly generated 4000 samples with the numerical solver. The 4000 data are partitioned into 2000 training data, 1000 validation data, and 1000 test data to train the Fourier Neural Operator (FNO) as described in <cit.>. The FNO used in this experiment has 12 modes in both height and width, 8 hidden channels, and 4 layers. The neural network is trained for 10000 epochs. We run three experiments: with a numerical solver based MCMC chain, an ML model based MCMC chain, and the proposed hybrid approach. The results, averaged over 5 MCMC runs, are summarized in Figure <ref>. From Figure <ref> we can see that the posterior results from all three methods are the same. The posterior is close to the true initial condition but differs from it due to the non-linearity of the Navier-Stokes equations. Now we change the forcing from (<ref>) to (<ref>). Because the FNO model is not trained with this new forcing, the prediction of the AI model will not be accurate. Instead of regenerating training data and retraining the model, we use our hybrid method to correct the error of the AI model with the numerical model updated with the new forcing term: f = 0.2 (sin(2 π (x_1+x_2))+cos(2 π (x_1+x_2))). We run the same experiment and obtain the results shown in Figure <ref>. From the posterior expectation results, we can see that the hybrid approach is closer to the numerical reference. The maximum difference drops from around 0.06 to 0.035, and the L1 error is reduced from 0.030 to 0.013. This shows that even with non-FEM numerical methods, e.g. the spectral method used in this example, the hybrid approach can still accelerate the computation and improve accuracy. § CONCLUSION In this paper, we introduced a novel method for solving Bayesian inverse problems governed by PDEs with a hybrid two-level MCMC, taking advantage of the speed of AI surrogate models and the accuracy of numerical models. We have shown theoretically the potential to solve Bayesian inverse problems accurately with only a small number of numerical samples when the AI surrogate model error is small. Several numerical experiments are included, and their results are aligned with our claims.
http://arxiv.org/abs/2307.00629v2
20230702175659
Can photonic heterostructures provably outperform single-material geometries?
[ "Alessio Amaolo", "Pengning Chao", "Thomas J. Maldonado", "Sean Molesky", "Alejandro W. Rodriguez" ]
physics.optics
[ "physics.optics" ]
APS/123-QED Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA [email protected] Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA Department of Engineering Physics, Polytechnique Montréal, Montréal, Québec H3T 1J4, Canada Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA Recent advances in photonic optimization have enabled calculation of performance bounds for a wide range of electromagnetic objectives, albeit restricted to single-material systems. Motivated by growing theoretical interest and fabrication advances, we present a framework to bound the performance of photonic heterostructures and apply it to investigate maximum absorption characteristics of multilayer films and compact, free-form multi-material scatterers. Limits predict trends seen in topology-optimized geometries—often coming within factors of two of specific designs—and may be exploited in conjunction with inverse designs to predict when heterostructures are expected to outperform their optimal single-material counterparts. Can photonic heterostructures provably outperform single-material geometries? Alejandro W. Rodriguez August 1, 2023 ============================================================================= Large-scale optimization or “inverse design” in electromagnetism involves maximizing a desired field objective (e.g. absorbance, overlap with a known mode, or scattered power) over thousands to millions of structural degrees of freedom. The approach has begun to play an important role in the design of high-performing optical devices <cit.>, leading to advances in nonlinear frequency conversion <cit.>, multiplexing <cit.>, and bandgap engineering <cit.>. While structural optimization is generally NP-hard <cit.>, effectively forbidding guarantees of optimal solutions <cit.>, a new, complementary, approach based on convex relaxations has been shown to provide predictive performance bounds in a variety of settings <cit.>. Examples of this method include recent predictions of maximum angle-integrated absorption <cit.>, scattering cross-sections <cit.>, communication limits <cit.>, field screening <cit.>, dipole masking <cit.>, and local density of states <cit.>, among other canonical electromagnetic objectives <cit.>. In addition to partially assessing the optimality of inverse designs, performance bounds can also yield insights into the physical processes that underpin desired wave behaviors. For instance, a bound that increases rapidly and then saturates beyond a certain characteristic length suggests that a minimum device size is needed to achieve a given phenomenon. Limit information can also be used to guide structural optimization toward optimal solutions <cit.>. The relaxations which allow such bounds to be structure agnostic have limited previous work to a single design material against a background (typically vacuum) <cit.>, precluding investigations of heterostructures comprising two or more design materials. Burgeoning use of these devices for ultrabroadband absorption <cit.>, passive cooling <cit.>, ultrafast photonics <cit.>, among other applications <cit.>, and corresponding efforts to address fabrication challenges, highlight a growing need for definitive statements about the possible advantages of heterostructures. 
Potentially loose bounds and the NP-hardness of the inverse design problem mean that neither method on its own can prove heterostructure superiority. However, the two methods can be combined to provide performance certificates and therefore a new way of assessing the trade-offs associated with multi-material engineering. In this article, we show that the duality bounds formalism of Ref. <cit.> can be extended to handle an arbitrary number of design materials. We present two basic examples that showcase this theory: bounds on the absorbed power from a plane-wave incident on any multilayer film or from an oscillating dipole in the vicinity of any free-form structure restricted to a square design region. Comparisons to topology-optimized designs show heterostructure bounds consistently coming within a factor of two of device performance. More fundamentally, we demonstrate heterostructure designs exhibiting greater performance than either of their corresponding single-material bounds, providing definitive proof that at least in these settings use of multiple materials is advantageous. Formulation: The key idea underpinning recent bound optimizations, detailed in Refs. <cit.>, involves relaxing structural and physical information in the typical field optimization problem by reducing the vector field constraints stated by Maxwell's equations to a user-defined set of scalar constraints ensuring the conservation of power over sub-volumes of the total design region—generalizations of Poynting's theorem <cit.>. Convex relaxations and solutions of the resulting quadratically constrained quadratic program via duality <cit.> or semi-definite programming <cit.> provide a bound on the original (primal) objective. More precisely, optimization of a given quadratic field objective f_0 over the possible polarization fields |ψ_k⟩ that may arise from a set of harmonic fields |S_k⟩ incident on a given design region V can be framed as a quadratic program of the form {ψ_k}max f_0({|ψ_k⟩}) s.t. ⟨S_k|_j |ψ_k'⟩ - ⟨ψ_k|( χ_k^-1 † - _0^(k)†) _j |ψ_k'⟩ = 0, 250mu ∀ j,k,k' where χ_k denotes the electric susceptibility of the design material at frequency ω_k, and _0^(k) the corresponding vacuum propagator acting on sources to yield their respective fields in vacuum—namely, via convolution of the vacuum Green's function G^(k)_0( r, r', ω_k) satisfying c^2ω_k^2∇×∇× G^(k)_0( r, r', ω_k) - G^(k)_0( r, r', ω_k) = δ( r - r'). Here, and _j represent spatial projections into either the full or a subset V_j ∈ V of the design region V, respectively, and bra-ket notation is used to express complex vector fields over V. The integral constraints in Eq. (<ref>) enforce Poynting's theorem—power conservation—over each selected region. The Lagrange dual of Eq. (<ref>) bounds the primal objective <cit.>. Extending this formalism to multiple materials may be carried out as follows. Expressing |ψ_k⟩=∑_m |ψ_k,m⟩ as a sum of polarization currents associated with a given material m of susceptibility χ_k,m, it suffices to allow each region to be filled with any combination of the selected χ_k,m. To avoid unphysical solutions, however, one must also enforce an additional constraint precluding the overlap of distinct materials: the polarization currents associated with distinct materials must be orthogonal in any sub-region. Performing this modification, the optimization problem becomes: {ψ_k,m}max f_0({|ψ_k,m⟩}) s.t. 
∑_m ( ⟨S_k|_j |ψ_k',m⟩ - ⟨ψ_k,m|_j χ_k,m^†|ψ_k',m⟩ - ∑_m'⟨ψ_k,m| - _j _0^(k)†|ψ_k',m'⟩) = 0 ∀ j,k,k', ⟨ψ_k,m|_j |ψ_k',m'⟩ = 0 ∀ j, k, k', m≠ m'. A schematic of the problem under consideration is shown in Fig. <ref>. Detailed calculations of the corresponding Lagrange dual and gradients, as well as a proof of the existence of dual solutions are provided in the Supplemental Material (SM) <cit.>. The remainder of the article showcases the utility of Eq. (<ref>) by computing bounds on two representative scattering problems: maximization of absorption of a planewave or a dipolar source. Resulting bounds are compared to the best performing devices obtained via topology optimization, as outlined in <cit.> for single materials and in the SM <cit.> for multiple materials. In all cases, structural optimization routines and bounds calculations were initialized with an empty design region or as described in the SM <cit.>, respectively. Planewave incident on a multilayer film: We first consider a TM polarized planewave with wavelength λ incident on a device of length L≤λ consisting of multiple layers of variable thicknesses. We seek to maximize the ratio of absorbed to incident power. The computational ease of this effectively one dimensional problem allows us to readily enforce the constraints in Eq. (<ref>) over each pixel of the computational grid. We consider use of either a single dielectric of χ = 4+0.1i, a single metal of χ=-4+1i, or the use of both materials. Results are shown in Fig. <ref> (left) and compared to topology-optimized inverse designs. Notably, performance values for optimized structures and associated bounds reflect the increased ability of thicker films and greater material choice to achieve higher planewave absorption, which eventually saturates to 100% for sufficiently large devices. We note that the designs of size L/λ = 0.1 were verified to be globally optimal by brute force, and that all bounds approach zero as L/λ→ 0. Heterostructure bounds are found to correctly anticipate the earlier onset of perfect absorbance and devices' provably superior performance compared to their single-material counterparts. This increased performance is evident at intermediate thicknesses 0.1λ≤ L ≤ 0.5λ but rapidly vanishes in highly subwavelength or optically thick films, with metallic films outperforming their dielectric counterparts, and heterostructures primarily if not entirely composed of metal. As expected, metallic devices outperform dielectrics at small L/λ by exploiting plasmonic confinement, with dielectrics becoming increasingly effective as L →λ. Optimized designs suggest that the primary mechanism by which hybrid heterostructures outperform single-material structures at intermediate L/λ is by the creation of metallic cavities filled with dielectric (as opposed to vacuum) absorbing material. As seen from the optimized multilayer structure of thickness L=0.4λ shown on the inset (orange star), there appears to be little to no use of vacuum regions within the design domain, indicating that optimal performance may be predicted from a single-material framework, with either the metal or dielectric as the background medium. More examples of this phenomenon, which is likely specific to the case of maximizing absorption, are presented in the SM <cit.>. This could in theory be studied by modifying the single-material bounds formalism to take into account a design region surrounded by vacuum but comprising exclusively the design materials (generalizing ideas in Ref. <cit.>). 
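For orientation, the primal quantity being bounded in the multilayer study, the absorbed fraction of a normally incident planewave for a fixed layer stack, can be evaluated with a standard characteristic-matrix (transfer-matrix) calculation. The sketch below is not the duality-bound computation of the constrained program above and not the topology-optimization routine; it is only an illustrative evaluator of candidate stacks, with thicknesses chosen arbitrarily and permittivities ε = 1 + χ mirroring the susceptibilities quoted in the text.

import numpy as np

def multilayer_absorbance(eps_layers, d_layers, wavelength=1.0):
    """Absorbed power fraction A = 1 - R - T for a planewave at normal incidence on a
    stack of layers (relative permittivities eps_layers, thicknesses d_layers) with
    vacuum on both sides, using the standard characteristic-matrix method."""
    n0 = ns = 1.0                                   # vacuum ambient and exit media
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_layers, d_layers):
        n = np.sqrt(complex(eps))                   # principal branch, Im(n) >= 0 for lossy media
        phase = 2.0 * np.pi * n * d / wavelength
        layer = np.array([[np.cos(phase), 1j * np.sin(phase) / n],
                          [1j * n * np.sin(phase), np.cos(phase)]])
        M = M @ layer
    denom = n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1]
    r = (n0 * M[0, 0] + n0 * ns * M[0, 1] - M[1, 0] - ns * M[1, 1]) / denom
    t = 2.0 * n0 / denom
    R, T = abs(r) ** 2, (ns / n0) * abs(t) ** 2
    return 1.0 - R - T

if __name__ == "__main__":
    # Hypothetical metal/dielectric stack with eps = 1 + chi for chi = -4 + 1i and 4 + 0.1i.
    eps = [-3.0 + 1.0j, 5.0 + 0.1j, -3.0 + 1.0j]
    d = [0.1, 0.2, 0.1]                             # thicknesses in units of the wavelength
    print("absorbance:", multilayer_absorbance(eps, d))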
Lastly, we note the non-additivity of the bounds: calculated limits on heterostructure absorbance are not given by the sum of the single material limits. In fact, topology optimized inverse designs outperform the sum of the two single-material bounds when 0.08λ≤ L ≤ 0.5. Dipole near a compact structure: Next, we consider a TM polarized dipole oscillating at frequency λ/c and located a distance d=0.4λ from a square design region of side length L ≤λ. We now seek to maximize the ratio of absorbed power to the power radiated by the dipole in vacuum. For computational convenience, the energy conservation constraints in Eq. (<ref>) are only enforced in 100 equally sized sub-domains for single materials and 25 sub-domains for two materials. Again, we consider the use of a dielectric with χ = 4+0.1i, a metal with χ=-4+1i, and the use of both materials. Results are shown in Fig. <ref> (right) and compared to topology-optimized inverse designs. Similar to the multilayer examples, bounds are seen to accurately predict trends in device performance, with the non-additive nature of multiple-material bounds becoming particularly pronounced for wavelength-scale devices. The performance of two-material devices is provably better than any single-material structure for 0.1 λ≤ L ≤ 0.9 λ. Inspecting bound solutions and inverse designs, we postulate that the dielectric (blue star) and metallic device (green star) of size L/λ = 1 work by reflecting incident fields into an absorbing block near the dipole and by forming a cavity mode, respectively. As seen from the representative optimized structure shown on the inset (orange star), we find that greater performance in this setting is achieved through use of vacuum, dielectric, and metallic regions combined. The structure appears to be a hybrid of the two single-material structures, highlighting that optimal structures may be combinations of single-material structures that leverage the benefits of both materials. Interestingly, this is the only multi-material structure that significantly utilizes vacuum, as well as the only L/λ where the dielectric single-material device greatly outperforms the corresponding metallic device. Rather than replacing vacuum in a metallic cavity with absorbing dielectric (as was the case for multilayer films or for structures with L/λ < 1 in this setting), the primary absorbing mechanism here appears to be dielectric confinement. Concluding remarks: This work shows that our extension to the duality bounds framework <cit.> for handling multi-material settings is both effective and useful. For the studied applications, access to multiple materials is seen to provide non-trivial performance benefits over single-material systems, except in very large designs where increased structural freedom makes greater material choice less consequential. While there is no proof of strong duality, the observation of optimized structures exceeding single-material bounds provides evidence that structures composed of multiple media offer meaningful advantages for photonic design. Examples where inverse designs converge on structures with three different refractive indices further motivates use of multi-material physical limits. Promising applications of this theory include problems involving multiple sources that may be separately addressed by distinct material responses (e.g., with multiple dispersive materials). We acknowledge the support by the National Science Foundation under the Emerging Frontiers in Research and Innovation (EFRI) program, Award No. 
EFMA-164098, the Defense Advanced Research Projects Agency (DARPA) under Agreements No. HR00111820046, No. HR00112090011, and No. HR0011047197, and by a Princeton SEAS Innovation Grant. SM also acknowledges financial support from IVADO (Institut de valorisation des données, Québec). The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University. The views, opinions and findings expressed herein are those of the authors, and should not be interpreted as representing the official views or policies of any institution.
http://arxiv.org/abs/2307.02561v1
20230705180217
Unification of species, gene, and cell trees for single-cell expression analyses
[ "Samuel H. Church", "Jasmine L. Mah", "Casey W. Dunn" ]
q-bio.PE
[ "q-bio.PE" ]
Samuel H. Church, Jasmine L. Mah, Casey W. Dunn Department of Ecology and Evolutionary Biology, Yale University, New Haven, CT, 06511 abstract § ABSTRACT Comparisons of single-cell RNA sequencing (scRNA-seq) data across species can reveal links between cellular gene expression and the evolution of cell functions, features, and phenotypes. These comparisons invoke evolutionary histories, as depicted with phylogenetic trees, that define relationships between species, genes, and cells. Here we illustrate a tree-based framework for comparing scRNA-seq data, and contrast this framework with existing methods. We describe how we can use trees to identify homologous and comparable groups of genes and cells, based on their predicted relationship to genes and cells present in the common ancestor. We advocate for mapping data to branches of phylogenetic trees to test hypotheses about the evolution of cellular gene expression. We describe the kinds of data that can be compared, and the types of questions that each comparison has the potential to address. Finally, we reconcile species phylogenies, gene phylogenies, cell phylogenies, and cell lineages as different representations of the same concept: the tree of cellular life. By integrating phylogenetic approaches into scRNA-seq analyses, we can overcome challenges for building informed comparisons across species, and robustly test hypotheses about gene and cell evolution. introduction § INTRODUCTION Single-cell RNA sequencing (scRNA-seq) generates high-dimensional gene expression data from thousands of cells from an organ, tissue, or body1. Single-cell expression data are increasingly common, with new animal cell atlases released every year2–6. The next step is to compare atlases across species2, to identify the dimensions in which these results differ, and to associate these differences with other features of interest7. Because all cross-species comparisons are inherently evolutionary comparisons, there is an opportunity to integrate approaches from the field of evolutionary biology, and especially phylogenetic biology8. Drawing concepts, models, and methods from these fields will overcome central challenges with comparative scRNA-seq, especially in how to draw coherent comparisons over thousands of genes and cells across species. In addition, this synthesis will help avoid the unnecessary reinvention of analytical methods that have already been rigorously tested in evolutionary biology for other types of data, such as morphological and molecular data. Comparative gene expression has been used for decades to answer evolutionary questions (e.g. How are changes in gene expression associated with the evolution of novel functions and phenotypes?)9. Single-cell RNA sequencing represents a massive increase in the scale of these experiments1, from working with a few genes or a few tissues, to assays of the entire transcriptome, across thousands of cells in a dissociation experiment. Comparative scRNA-seq therefore allows us to scale up our evolutionary questions, for example: * How has the genetic basis of differentiation evolved across cell populations and over time? * What kinds of cells and gene expression patterns were likely present in the most recent common ancestor? 
* What changes in cell transcriptomes are associated with the evolution of new ecologies, life-histories, or other features? * How much variation in cellular gene expression do we observe over evolutionary time? Which changes in gene expression are significant (i.e. larger or smaller than we expect by chance)? * Which genes show patterns of correlated expression evolution? Can evolutionary screens detect novel interactions between genes? For these comparisons, we seek to compare and analyze the results of individual scRNA-seq experiments across species. These experiments generate matrices of count data, with measurements along two axes: cells, and genes (Figure 1). Comparative scRNA-seq adds a third axis: species. At first glance, we might seek to align scRNA-seq matrices across species, creating a three-dimensional tensor of cellular gene expression. But neither genes nor cells are expected to share a one-to-one correspondence across species. In the case of genes, gene duplication (leading to paralogous relationships) and gene loss are rampant10. In the case of cells, there is rarely justification for equating two individual cells across species, instead, we typically consider similarity across populations of cells (“cell types”)11. Therefore to align matrices, we must first find the appropriate system of grouping these dimensions. This is essentially a question of homology12: which genes and cell types are homologous, based on their relationship to predicted genes and cell types in the common ancestor. Questions about homology can be answered using phylogenies12. Species relationships are defined by their shared ancestry, as depicted using a phylogeny of speciation events (Figure 2). Gene homology is also defined by shared ancestry, depicted using gene trees that contain nodes corresponding to either speciation and gene duplication events. Cell homology inference requires assessing the evolutionary relationships between cell types12,13, defined here as populations of cells related via the process of cellular differentiation, and distinguishable from one another e.g. using molecular markers14. Relationships between cell types can be respresented with cell phylogenies that, like gene trees, contain both speciation and duplication nodes13. As with genes, the evolutionary relationships between cell types may be complex, as differentiation trajectories drift, split, or are lost over evolutionary time7,13,15. In this paper we consider each of these axes: across species, genes, and cells. In each case we lay out challenges and solutions derived from a phylogenetic comparative approach. We relate these solutions to methods previously proposed in the literature, including SAMap7 for pairwise alignment of cellular dimensional reductions. Finally, we reconcile species trees, gene trees, cell phylogenies and cell lineages as descriptions of the same concept–the tree of cellular life. comparing-across-species § COMPARING ACROSS SPECIES Shared ancestry between species will impact the results of all cross-species analyses, and should therefore influence our expectations and interpretations16. For scRNA-seq data, this has several implications: First, we expect species to be different from one another, given that they have experienced evolutionary time since diverging from their common ancestor. Therefore, by default we expect there to be many differences in cellular gene expression across the thousands of measurements in an scRNA-seq dataset. 
Second, we expect the degree of difference to be correlated with time since the last common ancestor. Our null expectation is that more closely related species will have more similar cellular gene expression than more distantly related species. The structure of this similarity can be approximated with a species phylogeny calibrated to time. Methods for the evolutionary comparison of scRNA-seq data have already been proposed in packages like SAMap7. These packages have overcome significant challenges, such as how to account for non-orthologous genes (see the description of gene comparisons below). However, up to now they have relied on pairwise comparisons of species, rather than phylogenetic relationships. The problems with pairwise comparisons have been well-described elsewhere17; in summary, they result in pseudo-replication of evolutionary events. An evolutionary comparative approach, on the other hand, maps evolutionary changes to branches in the phylogeny8,10. In this approach: * data are assigned to tips of a tree; * ancestral states are reconstructed using an evolutionary model; * evolutionary changes are calculated as differences between ancestral and descendant states; * the distribution of evolutionary changes along branches are analyzed and compared18. Shifting toward a phylogenetic approach to comparative scRNA-seq unlocks new avenues of discovery, including tests of co-evolution of cellular gene expression and other features of interest9, as well as evolutionary screens for signatures of correlated gene and cell modules19. In phylogenetic analyses, statistical power depends on the number of independent evolutionary events rather the absolute number of taxa8. Therefore the choice of which species to compare is critical, especially as we construct comparisons to capture potential convergence. One consideration when comparing species is the degree to which the history of scientific study has favored certain organisms (e.g. model organisms)20. This is especially relevant to single-cell comparisons, as we have a much deeper knowledge about cell and gene function for some species (e.g. mice, humans) than others. This creates a risk of bias toward observing described biological phenomena, while missing the hidden biology in less well-studied organisms20. Consider the identification of “novel” cell types based on the absence of canonical marker genes. Because most canonical marker genes were originally described in well-studied species, cell type definitions that rely on these will be necessarily less useful in the context of other species2. Technologies such as single-cell sequencing have great potential to even the playing field by democratizing the types of data collected2. For example, scRNA-seq allows us to assay all genes and thousands of cells, rather than a curated list of candidates. To leverage this to full effect, we must acknowledge the remaining filtering steps in our analyses, including how we identify orthologous gene sequences and how we label cell types. comparing-across-genes § COMPARING ACROSS GENES Due to gene duplication and loss, there is usually not a one-to-one correspondence between genes across species21. Instead, evolutionary histories of genes are depicted using gene trees (Figure 2). Pairs of tips in gene trees may be labeled as “orthologs” or “paralogs” based on whether they descend from a node corresponding to a speciation or gene duplication event22. 
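As a concrete illustration of the four-step mapping onto a time-calibrated phylogeny described above, the short sketch below reconstructs ancestral expression states under a Brownian-motion model on a hard-coded three-species tree and reports the change along each branch; real analyses would use dedicated phylogenetic libraries, and the tree, branch lengths, and expression values here are invented.

import numpy as np

# Hard-coded example tree ((A:1, B:1)N:1, C:2)R with Brownian-motion (BM) rate 1.
# Under BM, the covariance between two nodes equals their shared branch length from the root R.
tips = ["A", "B", "C"]
x_tips = np.array([5.2, 4.8, 1.0])           # invented expression values at the tips

Sigma_tt = np.array([[2.0, 1.0, 0.0],        # Var(A)=2, Cov(A,B)=1 (shared root-to-N path)
                     [1.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])       # C shares no path with A or B
Sigma_Nt = np.array([1.0, 1.0, 0.0])         # covariance of internal node N with each tip

# Generalized-least-squares estimate of the root state under BM.
ones = np.ones(3)
Sti = np.linalg.inv(Sigma_tt)
root_hat = (ones @ Sti @ x_tips) / (ones @ Sti @ ones)

# Conditional (empirical Bayes) estimate of the ancestral state at N given the tip values.
N_hat = root_hat + Sigma_Nt @ Sti @ (x_tips - root_hat * ones)

# Changes mapped to branches: descendant state minus ancestral state.
branch_changes = {
    "R->N": N_hat - root_hat,
    "R->C": x_tips[2] - root_hat,
    "N->A": x_tips[0] - N_hat,
    "N->B": x_tips[1] - N_hat,
}
print("root:", round(root_hat, 3), "node N:", round(N_hat, 3))
for branch, change in branch_changes.items():
    print(branch, round(change, 3))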
Gene duplication happens both at the individual gene level, or in bulk via whole or partial genome duplication21. Gene loss means that comparative scRNA-seq matrices may be sparse not only due to failure to detect, but also because genes in one species don't exist in another. Many cross-species comparisons have confronted the challenge of finding equivalent genes across species23. These often start by restricting analyses to sets of one-to-one orthologs24. There are several problems with this approach22: [1] One-to-one orthologs are only well-described for a small set of very well annotated genomes23. [2] The number of one-to-one orthologs decreases rapidly as we both add species to our comparison, and as we compare across deeper evolutionary distances7. [3] The subset of genes that can be described by one-to-one orthologs is not randomly drawn from across the genome, they are enriched for indispensable genes under single-copy control25. New tools like SAMap7 expand the analytical approach beyond one-to-one orthologs to the set of all homologs across species7. Homolog groups are identified with a clustering algorithm by which gene separated into groups with strong sequence or expression similarity. These may include more than one representative gene per species. Gene trees can be inferred for these gene families, and duplication events can be mapped to individual nodes in the gene tree. How can we compare cellular expression measures across groups of homologous genes? One option is to use summary statistics, such as the sum or average expression per species for genes within a homology group26. However these statistics might obscure or average over real biological variation in expression that arose subsequent to a duplication event (among paralogs)19. An alternative is to connect genes via a similarity matrix, and then make all-by-all comparisons that are weighted based on putative homology, as in the approach in SAMap7. A third approach is to reconstruct changes in cellular expression along gene trees, rather than the species tree10,27. Here evolutionary changes are associated with branches descending from either speciation or duplication events. Such an approach has been demonstrated by Munro et. al. for bulk RNA-sequencing27, as follows: * cellular expression data are assigned to tips of a gene tree; * ancestral states and evolutionary changes are calculated, as above; * equivalent branches between trees are identified using “species branch filtering”27. Branches between speciation events can be unambiguously equated across trees based on the composition of their descendant tips (see numbered branches in Figure 2); * changes across equivalent branches of a cell tree are analyzed (e.g. to identify significant changes, signatures of correlation, etc). Mapping cellular gene expression data to branches of a gene tree sidesteps the problem of finding sets of orthologs by incorporating the history of gene duplication and loss into the analytical framework. comparing-across-cells § COMPARING ACROSS CELLS Like with genes, there is usually not an expectation of a one-to-one correspondence between cells across species. We can rarely equate individual cells across species, with notable exceptions such as the zygote, or the cells of certain eutelic species, with an invariant number of cellular divisions. 
Instead we typically consider the homology of groups of cells (cell types), with the hypothesis that the cell developmental programs that give rise to these groups are derived from a program present in the shared ancestor11. We also don't expect a one-to-one correspondence between cell types across species. As with genes, cell types may be gained or lost over evolutionary time. The relationships between cell types across species can been described using phylogenetic trees. These cell phylogenies are distinct from cell lineages–the bifurcating trees that describe cellular divisions within an individual developmental history. Nodes in cell lineages represent cell divisions, while nodes in cell phylogenies represent either speciation events or splits in differentiation programs leading to novel cell types (Figure 2). The term cell “type” has been used for several distinct concepts15, including cells defined and distinguished by their position in a tissue, their form, function, or in the case of scRNA-seq, their relative expression profiles that fall into distinct clusters14. Homology of structures across species is often inferred using many of the same criteria: position, form, function, and gene expression patterns28. The fact that the same principles are used for inferring cell types and cell homologies presents both an opportunity and an obstacle for comparative scRNA-seq. We can potentially leverage the same methods we use for identifying clusters of cells within species to identify clusters of cells across species. This could be done simultaneously, inferring a joint cell atlas in a shared expression space7, or it could be done individually for each species and subsequently merged2,23. In either case, this inference requires contending with the complex evolutionary histories between genes and species described above. One obstacle is that, because cell types are not typically defined according to evolutionary relationships15, cells labeled as the same type across species may constitute paraphyletic groups11. A solution is to use methods for reconstructing evolutionary relationships to infer the cell tree15,29 (Figure 2). This method is robust to complex evolutionary histories (e.g. duplication and loss), and has the additional advantage of generating a tree, comparable to a species or gene tree, onto which cellular characters can be mapped and their evolution described15. As described in Mah and Dunn, 202330, tools for tree building based on character data can be applied to the high-dimensional data of gene expression to successfully reconstruct a cell tree. In this approach: * cell trees are inferred; * gene expression data are assigned to tips of the cell tree; * ancestral states and evolutionary changes are calculated, as above; * changes along branches are analyzed (e.g. to identify changes in gene expression associated with the evolution of novel cell types) Another obstacle is that there are reported batch effects26 across single-cell experiments which may need to be accounted for via integration23. However our null expectation is that species are different from one another. Naive batch integration practices have no method for distinguishing technical effects from the real biological differences that are the target of study in comparative scRNA-seq23. Other approaches (e.g. 31, 32) have been reported as able to distinguish and characterize species-specific differences23. 
Given that we are still developing null hypotheses16 for how much variation in expression we expect to observe across species19, we hold that cross-species integration should be treated with caution until elucidation of the approach can robustly target and strictly remove technical batch effects. A final obstacle is that cell identities and homologies may be more complex than can be accurately captured by categorizations into discrete clusters or “types”14,15. Single-cell experiments that include both progenitor and differentiated cells can highlight the limits of clustering algorithms33. In experiments that capture cells along a differentiation trajectory, there may or may not be obvious boundaries for distinguishing cell populations. In cases where boundaries are arbitrary, the number of clusters, and therefore the abundance of cells within a cluster will depend on technical and not biological inputs, such as the resolution parameter that the user predetermines for the clustering algorithm. A solution is to define homology for the entire differentiation trajectory, rather than individual clusters of cells26. This may be accomplished by defining anchor points where trajectories overlap in the expression of homologous genes, while allowing for trajectories to have drifted or split over evolutionary time, such that sections of the trajectories no longer overlap15. Cellular homologies within a trajectory may be more difficult to infer, as this requires contending with potential heterochronic changes to differentiation (e.g. as cell differentiation evolves, genes may become expressed relatively earlier or later in the process)26. scrna-seq-data § SCRNA-SEQ DATA Single-cell comparisons potentially draw on a broad range of phylogenetic comparative methods for different data types, including binary, discrete, continuous, and categorical data34 (Figure 3). The primary data structure of scRNA-seq is a matrix of integers, representing counts of transcripts or unique molecular identifiers for a given gene within a given cell35. In a typical scRNA-seq analysis, this count matrix is passed through a pipeline of normalization, transformation, dimensional reduction, and clustering36,37. The decisions of when during this pipeline to draw a comparison determines data type, questions we can address, and caveats we must consider. gene-expression §.§ Gene expression Unlike bulk RNA-sequencing, where counts are typically distributed across a few to dozens of samples, scRNA-seq counts are distributed across thousands of cells. The result is that scRNA-seq count matrices are often shallow and sparse38. In our recent paper35, we highlighted that the vast majority of UMI counts in standard scRNA-seq datasets (often 95%) are either 0, 1, or 2. These values are discrete, low-integer numbers, and not continuous measurements. The high-dimensionality and sparse nature of single-cell data present a unique challenge when considering cross-species comparisons2. In a standard scRNA-seq approach, expression values are analyzed following depth normalization and other transformations. With depth normalization, counts are converted from discrete, absolute measures to continuous, relative ones, though our instruments don't actually quantify relative expression. There is a growing concern that this and other transformations are inadequate for the sparse and shallow sequencing data introduced here39,40. 
Further transformations of the data, such as log-transformation or variance rescaling, introduce additional distortions that may obscure real biological differences between species. Alternatively, counts can be compared across species directly, without normalization or transformation35. There are two potential drawbacks to this approach: First, count values are influenced by stochasticity, due to the shallow nature of sequencing, resulting in uncertainty around integer values. Second, cells are not sequenced to a standard depth. Comparing raw counts does not take this heterogeneity into account, though our recently published approach35 describes how this might be accomplished using a restricted algebra to analyze counts. An alternative is to transform count values to a binary or categorical trait41, for example, binning counts into “on” and “off” based on a threshold value, and then model the evolution of these states on a tree. Analyzing expression as a binary or categorical trait eliminates some of the quantitative power of scRNA-seq, but still allows us to address interesting questions about the evolution of expression dynamics within and across cell types. models-of-expression §.§ Models of expression A promising avenue for scRNA-seq is using generalized linear models to analyze expression40,42,43. These models describe expression as a continuous trait and incorporate the sampling process using a Poisson or other distribution, avoiding normalization and transformation, and returning fitted estimates of relative expression. These estimates can be compared using models that describe continuous trait evolution. One feature of generalized linear models models is they can report uncertainty values for our estimates of relative expression, which can then be passed along to phylogenetic methods to assess confidence in the evolutionary conclusions drawn. cell-diversity §.§ Cell diversity In a standard scRNA-seq approach cells are analyzed in a reduced dimensional space and clustered by patterns of gene expression37. There are several types of cellular data that can be compared. The evolution of the presence or absence of cell types can be modeled as a binary trait. When cell type labels are unambiguously assigned, this approach can answer questions about when cell types evolved and are lost. Such a comparison is hampered, however, when cells do not fall into discrete categories14, or when equivalent cell types cannot be identified across species due to substantial divergence in gene expression patterns. An alternative is to model the evolution of cell differentiation pathways as a binary trait on a tree, to ask when pathways, rather than types, evolved and have been lost. Similarly, the abundance of cells of a given type might be compared across species, for example to ask how how dynamics of cell proliferation have evolved. The number of cells within a cluster, however, can be influenced by technical features of the experiment, such as the total number of clusters identified (often influenced by user-supplied parameters), as well as where cluster boundaries are defined. An alternative is to compare relative cell abundance values, which may account for experimental factors but will still be susceptible to variation in how clusters are determined. cellular-manifolds §.§ Cellular manifolds One area for further development are methods that can model the evolution of the entire manifold of cellular gene expression on an evolutionary tree. 
Practically, this might be accomplished by parameterizing the manifold, for example by calculating measures of manifold shape and structure such as distances between cells in a reduced dimensional space. The evolution of such parameters could be studied by analyzing them as characters on a phylogenetic tree. Alternatively, we can envision a method in which we reconstruct entire ancestral landscapes of cellular gene expression, and then describe how this landscape has been reshaped over evolutionary time. Such an approach would require an expansion of existing phylogenetic comparative models to ones that can incorporate many thousands of dimensions. It would also likely require dense taxonomic sampling to build robust reconstructions. discussion § DISCUSSION Comparative single-cell RNA sequencing spans the fields of evolutionary, developmental, and cellular biology. Phylogenetic trees–branching structures depicting relationships across time–are the common denominator of these fields. Taking a step back reveals that many of the trees we typically encounter, such as species phylogenies, gene phylogenies, cell phylogenies, and cell fate maps, can be reconciled as part of a larger whole (Figure 4). Because all cellular life is related via an unbroken chain of cellular divisions, species phylogenies and cell fate maps are two representations of the same larger phenomenon, visualized at vastly different scales. Gene trees and cell trees (i.e. cell phylogenies) depict the evolution of specific characters (genes and cells) across populations within a species tree. These characters may have discordant evolutionary histories due to patterns of gene and cell duplication, loss, and incomplete sorting across populations. The synthesis of species, gene, and cell trees makes several key points clear. First, phylogenetic trees are essential for testing hypotheses about cellular gene expression evolution. Mapping single-cell data to trees, whether gene trees, cell trees, or species trees, allows us to build statistical tests of co-evolution, diversification, and convergence. The choice of which trees to use for mapping data will be determined by the questions we hope to answer. For example, mapping cellular expression data to gene trees would allow us to test whether expression evolves differently following gene duplication events (i.e. to test the ortholog conjecture44). Second, because the fields of evolutionary, developmental, and cellular biology study the same phenomena at different scales, there is a potential benefit from sharing methods. In the case of single-cell sequencing, building evolutionary context around data can prove essential for understanding the fundamental biology, including how to interpret cell types and cellular differentiation trajectories, and how to reconcile gene relationships. An evolutionary perspective is also critical for building robust null expectations of how much variation we might expect to observe across species16, which will allow us to interpret the significance of results as new species atlases come to light. Methods that infer and incorporate trees are essential not only for evolutionary biology, but for developmental and cellular biology as well. As single-cell data become increasingly available, rather than reinvent methods for building cell trees or comparing across cellular network diagrams, we can draw approaches from the extensive and robust fields of phylogenetic inference and phylogenetic comparative methods. 
These approaches include Bayesian and Maximum Likelihood inference of trees, evolutionary models, ancestral state reconstruction, character state matrices, phylogenetic hypothesis testing, among many others45–47. Biology has benefited in the past from syntheses of disparate fields of study, including the modern synthesis of Darwinian evolution and Mendelian genetics48, and the synthesis of evolution and development in the field of evo-devo49. With the advent and commercialization of technologies like single-cell sequencing, there is broadened opportunity for new syntheses50. Rich and complex datasets are increasingly available from understudied branches on the tree of life, and comparisons between species will invariably invoke evolutionary questions. By integrating phylogenetic thinking across fields, we can start to answer these questions and raise new ones. acknowledgments § ACKNOWLEDGMENTS We thank Daniel Stadtmauer, Namrata Ahuja, Seth Donoughe and other members of the Dunn lab for helpful conversation and comments on an initial version of the manuscript. declaration-of-interests § DECLARATION OF INTERESTS The authors declare no competing interests. references § REFERENCES tocsectionReferences refs 00 preref-gawad2016single 1. Gawad, C., Koh, W. & Quake, S. R. Single-cell genome sequencing: Current state of the science. Nature Reviews Genetics 17, 175–188 (2016). preref-tanay2021evolutionary 2. Tanay, A. & Sebé-Pedrós, A. Evolutionary cell type mapping with single-cell genomics. Trends in Genetics 37, 919–932 (2021). preref-sebe2018early 3. Sebé-Pedrós, A. et al. Early metazoan cell type diversity and the evolution of multicellular gene regulation. Nature Ecology & Evolution 2, 1176–1188 (2018). preref-li2021single 4. Li, P. et al. Single-cell analysis of schistosoma mansoni identifies a conserved genetic program controlling germline stem cell fate. Nature Communications 12, 485 (2021). preref-levy2021stony 5. Levy, S. et al. A stony coral cell atlas illuminates the molecular and cellular basis of coral symbiosis, calcification, and immunity. Cell 184, 2973–2987 (2021). preref-hulett2022acoel 6. Hulett, R. E. et al. Acoel single-cell atlas reveals expression dynamics and heterogeneity of a pluripotent stem cell population. BioRxiv 2022–02 (2022). preref-tarashansky2021mapping 7. Tarashansky, A. J. et al. Mapping single-cell atlases throughout metazoa unravels cell type evolution. Elife 10, e66747 (2021). preref-smith2020phylogenetics 8. Smith, S. D., Pennell, M. W., Dunn, C. W. & Edwards, S. V. Phylogenetics is the new genetics (for most of biodiversity). Trends in Ecology & Evolution 35, 415–425 (2020). preref-romero2012comparative 9. Romero, I. G., Ruvinsky, I. & Gilad, Y. Comparative studies of gene expression and the evolution of gene regulation. Nature Reviews Genetics 13, 505–516 (2012). preref-dunn2013phylogenetic 10. Dunn, C. W., Luo, X. & Wu, Z. Phylogenetic analysis of gene expression. Integrative and Comparative Biology 53, 847–856 (2013). preref-arendt2016origin 11. Arendt, D. et al. The origin and evolution of cell types. Nature Reviews Genetics 17, 744–757 (2016). preref-wagner2014homology 12. Wagner, G. P. Homology, genes, and evolutionary innovation. (Princeton University Press, 2014). preref-arendt2008evolution 13. Arendt, D. The evolution of cell types in animals: Emerging principles from molecular studies. Nature Reviews Genetics 9, 868–882 (2008). preref-domcke2023reference 14. Domcke, S. & Shendure, J. 
A reference cell tree will serve science better than a reference cell atlas. Cell 186, 1103–1114 (2023). preref-kin2015inferring 15. Kin, K. Inferring cell type innovations by phylogenetic methods—concepts, methods, and limitations. Journal of Experimental Zoology Part B: Molecular and Developmental Evolution 324, 653–661 (2015). preref-church2020null 16. Church, S. H. & Extavour, C. G. Null hypotheses for developmental evolution. Development 147, dev178004 (2020). preref-dunn2018pairwise 17. Dunn, C. W., Zapata, F., Munro, C., Siebert, S. & Hejnol, A. Pairwise comparisons across species are problematic when analyzing functional genomic data. Proceedings of the National Academy of Sciences 115, E409–E417 (2018). preref-felsenstein1985phylogenies 18. Felsenstein, J. Phylogenies and the comparative method. The American Naturalist 125, 1–15 (1985). preref-church2023evolution 19. Church, S. H., Munro, C., Dunn, C. W. & Extavour, C. G. The evolution of ovary-biased gene expression in hawaiian drosophila. PLoS Genetics 19, e1010607 (2023). preref-dunn2015hidden 20. Dunn, C. W., Leys, S. P. & Haddock, S. H. The hidden biology of sponges and ctenophores. Trends in Ecology & Evolution 30, 282–291 (2015). preref-sankoff2001gene 21. Sankoff, D. Gene and genome duplication. Current Opinion in Genetics & Development 11, 681–684 (2001). preref-dunn2016comparative 22. Dunn, C. W. & Munro, C. Comparative genomics and the diversity of life. Zoologica Scripta 45, 5–13 (2016). preref-shafer2019cross 23. Shafer, M. E. Cross-species analysis of single-cell transcriptomic data. Frontiers in Cell and Developmental Biology 7, 175 (2019). preref-stuart2019integrative 24. Stuart, T. & Satija, R. Integrative single-cell analysis. Nature Reviews Genetics 20, 257–272 (2019). preref-waterhouse2011correlating 25. Waterhouse, R. M., Zdobnov, E. M. & Kriventseva, E. V. Correlating traits of gene retention, sequence divergence, duplicability and essentiality in vertebrates, arthropods, and fungi. Genome Biology and Evolution 3, 75–86 (2011). preref-marioni2017single 26. Marioni, J. C. & Arendt, D. How single-cell genomics is changing evolutionary and developmental biology. Annual Review of Cell and Developmental Biology 33, 537–553 (2017). preref-munro2022evolution 27. Munro, C., Zapata, F., Howison, M., Siebert, S. & Dunn, C. W. Evolution of gene expression across species and specialized zooids in siphonophora. Molecular Biology and Evolution 39, msac027 (2022). preref-wagner2000developmental 28. Wagner, G. P., Chiu, C.-H. & Laubichler, M. Developmental evolution as a mechanistic science: The inference from developmental mechanisms to evolutionary processes. American Zoologist 40, 819–831 (2000). preref-wang2021tracing 29. Wang, J. et al. Tracing cell-type evolution by cross-species comparison of cell atlases. Cell Reports 34, 108803 (2021). preref-mah2023reconstructing 30. Mah, J. L. & Dunn, C. Reconstructing cell type evolution across species through cell phylogenies of single-cell RNAseq data. bioRxiv 2023–05 (2023). preref-welch2019single 31. Welch, J. D. et al. Single-cell multi-omic integration compares and contrasts features of brain cell identity. Cell 177, 1873–1887 (2019). preref-butler2018integrating 32. Butler, A., Hoffman, P., Smibert, P., Papalexi, E. & Satija, R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nature Biotechnology 36, 411–420 (2018). preref-tritschler2019concepts 33. Tritschler, S. et al. 
Concepts and limitations for learning developmental trajectories from single cell genomics. Development 146, dev170506 (2019). preref-cornwell2017phylogenetic 34. Cornwell, W. & Nakagawa, S. Phylogenetic comparative methods. Current Biology 27, R333–R336 (2017). preref-church2022normalizing 35. Church, S. H., Mah, J. L., Wagner, G. & Dunn, C. Normalizing need not be the norm: Count-based math for analyzing single-cell data. bioRxiv 2022–06 (2022). preref-satija2015spatial 36. Satija, R., Farrell, J. A., Gennert, D., Schier, A. F. & Regev, A. Spatial reconstruction of single-cell gene expression data. Nature Biotechnology 33, 495–502 (2015). preref-luecken2019current 37. Luecken, M. D. & Theis, F. J. Current best practices in single-cell RNA-seq analysis: A tutorial. Molecular Systems Biology 15, e8746 (2019). preref-liu2016single 38. Liu, S. & Trapnell, C. Single-cell transcriptome sequencing: Recent advances and remaining challenges. F1000Research 5, (2016). preref-hicks2018missing 39. Hicks, S. C., Townes, F. W., Teng, M. & Irizarry, R. A. Missing data and technical variability in single-cell RNA-sequencing experiments. Biostatistics 19, 562–578 (2018). preref-townes2019feature 40. Townes, F. W., Hicks, S. C., Aryee, M. J. & Irizarry, R. A. Feature selection and dimension reduction for single-cell RNA-seq based on a multinomial model. Genome Biology 20, 1–16 (2019). preref-qiu2020embracing 41. Qiu, P. Embracing the dropouts in single-cell RNA-seq analysis. Nature Communications 11, 1169 (2020). preref-hafemeister2019normalization 42. Hafemeister, C. & Satija, R. Normalization and variance stabilization of single-cell RNA-seq data using regularized negative binomial regression. Genome Biology 20, 296 (2019). preref-ahlmann2020glmgampoi 43. Ahlmann-Eltze, C. & Huber, W. glmGamPoi: Fitting gamma-poisson generalized linear models on single cell count data. Bioinformatics 36, 5701–5702 (2020). preref-nehrt2011testing 44. Nehrt, N. L., Clark, W. T., Radivojac, P. & Hahn, M. W. Testing the ortholog conjecture with comparative functional genomic data from mammals. PLoS Computational Biology 7, e1002073 (2011). preref-swofford19961996 45. Swofford, D., Olsen, G., Waddell, P. & Hillis, D. Phylogenetic inference. in Molecular systematics 407–514 (Sinauer, 1996). preref-baum2008phylogenics 46. Baum, D. A. & Offner, S. Phylogenics & tree-thinking. The American Biology Teacher 70, 222–229 (2008). preref-harmon2019phylogenetic 47. Harmon, L. Phylogenetic comparative methods. (Independent, 2019). preref-huxley1942evolution 48. Huxley, J. Evolution. The modern synthesis. (George Alien & Unwin Ltd., 1942). preref-carroll2008evo 49. Carroll, S. B. Evo-devo and an expanding evolutionary synthesis: A genetic theory of morphological evolution. Cell 134, 25–36 (2008). preref-abouheif2014eco 50. Abouheif, E. et al. Eco-evo-devo: The time has come. Ecological Genomics: Ecology and the evolution of genes and genomes 781, 107–125 (2014).
http://arxiv.org/abs/2307.02939v1
20230706120818
Template synthesis approach for radio emission from extensive air showers
[ "Mitja Desmet", "Stijn Buitink", "Tim Huege", "David Butler", "Ralph Engel", "Olaf Scholten" ]
astro-ph.HE
[ "astro-ph.HE" ]
http://arxiv.org/abs/2307.01240v1
20230703154418
MWPRanker: An Expression Similarity Based Math Word Problem Retriever
[ "Mayank Goel", "Venktesh V", "Vikram Goyal" ]
cs.IR
[ "cs.IR", "cs.AI" ]
Math Word Problems (MWPs) in online assessments help test the ability of the learner to make critical inferences by interpreting the linguistic information in them. To test the mathematical reasoning capabilities of the learners, sometimes the problem is rephrased or the thematic setting of the original MWP is changed. Since manual identification of MWPs with similar problem models is cumbersome, we propose a tool in this work for MWP retrieval. We propose a hybrid approach to retrieve similar MWPs with the same problem model. In our work, the problem model refers to the sequence of operations to be performed to arrive at the solution. We demonstrate that our tool is useful for the mentioned tasks and better than semantic similarity-based approaches, which fail to capture the arithmetic and logical sequence of the MWPs. A demo of the tool can be found at <https://www.youtube.com/watch?v=gSQWP3chFIs> § INTRODUCTION Math Word Problems (MWPs) are intriguing as they require one to decipher the problem model and operators from the given problem statement. Studies have shown that users lacking this ability often commit mistakes when presented with new problems <cit.>. It has been demonstrated that solving paraphrased versions of the original problem might aid in better learning to make critical inferences from varying linguistic information <cit.>. However, manual curation of such problems is cumbersome. Hence, we design a tool in this work to recommend problems with algebraic expressions similar to that of the input MWP. Numerous works have tackled the task of automatically solving Math Word Problems (MWPs) <cit.>, <cit.>,<cit.>. Recently, large language models (LLMs) have demonstrated multi-step reasoning ability to solve MWPs <cit.> among other tasks. However, the authors, in their work <cit.>, demonstrate that LLMs fail when problems contain certain linguistic variations. In contrast, very few works have dealt with the retrieval of MWPs. Certain works like Recall and Learn <cit.> leverage the task of retrieving analogous problems to solve MWPs using semantic similarity. However, they may wrongly recommend problems with different algebraic operations. For instance, “John had 5 apples, and Mary had 6 oranges. Find the total number of fruits” would be considered similar to “John had 5 apples, and Mary had twice as many oranges after selling 2 of them. Find the total number of fruits”. Though these MWPs look similar, the second MWP requires additional multiplication and subtraction operations. An overview of the workflow of the proposed system is shown in Figure 1. The core contributions of our work are: * We propose a hybrid approach to retrieve similar Math Word Problems based on expression tree similarity. * The code and data can be found at <https://github.com/goelm08/MWP-ranker> § SYSTEM DESIGN In this section, we describe the proposed system for identifying similar Math Word Problems for a given input.
Given a corpus of MWPs P= {p_1,p_2....p_n} and a new input MWP sequence p_new=w_1,w_2...w_n, the goal is to recommend exact duplicate problems p_dup based on expression similarity. In our proposed pipeline, two problems could be similar if they are paraphrased versions of each other but evaluate the same algebraic expression. We propose a hybrid pipeline MWPRanker which is efficient for similar MWP retrieval. The pipeline consists of the following stages. * The input MWP p_new is parsed into an algebraic expression a_new using the neural expression generator Graph2Tree. We derive an expression tree from the resulting expression. * We devise a tree matching algorithm to match the resulting tree with other expression trees in the repository. The MWPs corresponding to matching expressions are returned to the user. The expressions trees are derived and indexed for efficient retrieval at inference time. This is a one time activity as this is the repository our pipeline performs search on. §.§ Generating Expression from MWPs We employ Graph2Tree <cit.> with minor variations. Graph2Tree leverages the dependency graph of the input MWP with minor variations and translates it to an algebraic expression using a decoder model. First, the dependency parse of the input MWP is obtained using Stanford CoreNLP <cit.>. We identify the keyphrases in the sentence, such as noun phrases, and establish relationships between them. The relationships indicate important linkages and are created as separate nodes. This yields a heterogeneous graph. Then BiGraphSAGE is employed to compute graph-based contextualized embeddings. BiGraphSAGE is a variation of GraphSAGE <cit.>, including forward and reverse mode aggregation for computing node embeddings. Then a Bi-LSTM is employed as a decoder to generate the expression sequence leveraging the node embeddings from the graph. The decoder at inference time yields an algebraic expression y_exp which is sent to the tree generator and matching module for retrieving similar MWPs. We adapt and modify the implementation of Graph2Tree in MWPToolkit <cit.> with same hyperparameters. §.§ Tree Matching and Retrieval We derive a postfix expression and convert it to an expression tree for clear operator precedence. t_exp = f_tree(y_exp). We replace numbers that represent variables with variable names. We replace constant values with the expression "<CONSTANT>" The generated expression tree is compared with other expression trees T_exp = {t_exp^1....t_exp^n} in the MWP repository and the top-k problems are recommended to the user (Figure <ref>). The matching algorithm works as follows: * The expression trees are matched pairwise through postorder traversal. * If the node in a tree contains an operator, the other tree must contain the same operator in the same place. * In the same way, variable nodes must match. If a variable is encountered, the corresponding node in the other tree should also be a variable. When encountering a constant, we just check if the corresponding node in the other tree also contains the same placeholder. When a match is encountered, the corresponding natural language form of the MWP from the repository is returned. § DEMONSTRATION We train the models using PyTorch. The models are served through a Flask API as backend, and UI is designed using Streamlit [https://streamlit.io/]. We employ the MAWPS <cit.> and ASDIV-a <cit.> datasets, which contains algebraic word problems of varying complexity curated from various websites. 
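As a concrete illustration of the matching procedure in Section 2.2 above, the following is a simplified sketch. The node class, the operator set, and all names are illustrative and are not taken from the released code.

from dataclasses import dataclass
from typing import Optional

OPERATORS = {"+", "-", "*", "/"}
CONSTANT = "<CONSTANT>"

@dataclass
class Node:
    label: str                       # an operator symbol, a variable name, or the constant placeholder
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def trees_match(a, b):
    # pairwise postorder comparison of two expression trees
    if a is None or b is None:
        return a is b                                        # both must be missing for the shapes to agree
    if not (trees_match(a.left, b.left) and trees_match(a.right, b.right)):
        return False
    if a.label in OPERATORS or b.label in OPERATORS:
        return a.label == b.label                            # same operator in the same place
    if a.label == CONSTANT or b.label == CONSTANT:
        return a.label == CONSTANT and b.label == CONSTANT   # both hold the placeholder
    return True                                              # a variable only needs to face another variable

# e.g. Node("+", Node("x"), Node("y")) matches Node("+", Node("a"), Node("b")),
# but does not match Node("*", Node("x"), Node("y")).

Under these rules, variables align with variables regardless of name, constants must both carry the shared placeholder, and any operator mismatch at the same position rejects the pair.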
We filter out ungrammatical MWPs, yielding 1873 in MAWPS and 1844 train samples in ASDIV-a. §.§ Qualitative Analysis We evaluated the tool with 19 graduate level users. The interface and an example are shown in a screenshot in Figure <ref>. We can observe from recommended results that all MWPs have the same sequence of algebraic operations. The overall feedback in terms of ease of use and relevance of results was positive. Around 94% of the users found the tool easy to use and 84% found the tool to produce relevant results. About 15.8% of the users found the results to be relevant, with minor errors. Overall, all users rated that they would recommend the tool to the academicians. §.§ Quantitative Analysis For quantitative analysis, we collect 40 samples from a test set of MAWPS and ASDIV-a and use them as queries to retrieve top 3 questions from an MWP repository curated from the mentioned data sources. We present them to two independent researchers and asked them to annotate a recommendation as 1 if it is a similar (duplicate) MWP, else 0. We use the vector based semantic similarity based model proposed in <cit.> as a baseline. We observed a reasonable level of agreement between the annotators, with a Cohen's kappa of 0.629. From Table <ref>, we observe that the proposed MWPRanker tool outperforms the semantic similarity based approach by a significant margin. The quality of retrieval depends on the quality of algebraic expression generated by the neural expression generator. To generate better questions, complexity based attributes are necessary for more fine-grained retrieval, which are not currently supported by MWPRanker. § CONCLUSION In this work, we propose a new task of retrieving Math Word Problems based on the similarity of the algebraic expression. We develop and deploy a tool for the same to aid in recommending more practice questions. In the future, for ease of access to the tool, we plan to explore automated search completion and the usage of MWPRanker for automated problem-solving. This module can also be used to retrieve samples for In-Context learning in language models. splncs04
http://arxiv.org/abs/2307.02920v1
20230706111746
Logical possibilities for physics after MIP*=RE
[ "Adán Cabello", "Marco Túlio Quintino", "Matthias Kleinmann" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2307.01608v1
20230704094520
Dynamical Localization for the Singular Anderson Model in $\mathbb{Z}^d$
[ "Nishant Rangamani", "Xiaowen Zhu" ]
math-ph
[ "math-ph", "math.MP", "math.SP", "82B44 (Primary) 81Q10 (Secondary)" ]
§ ABSTRACT We prove that once one has the ingredients of a “single-energy multiscale analysis (MSA) result” on the ℤ^d lattice, several spectral and dynamical localization results can be derived, the most prominent being strong dynamical localization (SDL). In particular, given the recent progress at the bottom of the spectrum for the ℤ^2 and ℤ^3 cases with Bernoulli single site probability distribution, our results imply SDL in these regimes. § INTRODUCTION We consider the d-dimensional Anderson model, a random Schrödinger operator on ℓ^2(^d) given by: (H_ωϕ)(n):=∑_|m-n|=1(ϕ(m)-ϕ(n))+V_ω(n)ϕ(n). Here, the V_ω(n) are independent and identically distributed (i.i.d.) real-valued random variables with common distribution μ, ∀ n∈^d. We will assume that S⊂, the topological support of μ, is compact and contains at least two points. The underlying probability space is the infinite product space (Ω, ℱ, ) = (S^^d, ℬ(^^d), μ^^d), where we denote ω∈Ω by {ω_n}_n∈^d. Given Λ⊂^d, we denote the restriction of the probability space (Ω, ℱ, ) to Λ by (Ω_Λ, ℱ_Λ, _Λ). In this paper, we provide a comprehensive, self-contained proof that extracts localization results from the single-energy multi-scale analysis (MSA) result. In order to properly contextualize this paper, it is necessary to briefly describe some chronological background. Soon after the MSA had taken a firm foothold in the literature and community, Germinet and De Bièvre <cit.> provided an axiomatic treatment of extracting dynamical localization results from an energy-interval MSA by checking the so-called SULE condition that was originally proposed in <cit.>. Later, <cit.> improved the result by extracting strong dynamical localization up to a certain order from the same energy-interval MSA. On the one hand, such an energy-interval MSA can be established for many random models when the single site distribution μ is absolutely continuous, e.g. <cit.> and the references therein, so that localization results can be extracted using <cit.>. On the other hand, for the continuous Bernoulli-Anderson model (μ is Bernoulli) in high dimensions (d≥ 2), only a single-energy MSA is available due to a weak probability estimate, as shown in <cit.>. As a result, additional efforts are required to extract localization information from the single-energy MSA. In <cit.>, Germinet and Klein addressed this issue by introducing a new infinite volume localization description. Nonetheless, while the proof in <cit.> contains the key ideas, it is not directly applicable to the discrete Bernoulli-Anderson model in high dimensions due to the absence of a quantitative unique continuation principle in the discrete regime. Recently, inspired by a probabilistic unique continuation principle developed for the ℤ^2 lattice, Ding and Smart <cit.> obtained the single-energy MSA result with weak probability estimates for the 2d discrete Bernoulli-Anderson model, i.e. (<ref>) with d = 2 and μ being Bernoulli. This work was then extended to the ℤ^3 lattice by Li and Zhang <cit.>.
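As a purely illustrative aside that is not part of the paper: the restriction of the operator defined above to a finite box in ℤ^2 can be assembled as a sparse matrix and inspected numerically. The sketch below uses numpy and scipy only; the box size, random seed, and choice of a Bernoulli(1/2) single-site distribution on {0,1} are arbitrary, and nothing in the paper depends on such a computation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def anderson_box_2d(L, rng):
    # 1D piece of the kinetic term sum_{|m-n|=1}(phi(m) - phi(n)): hopping +1, diagonal -2
    hop = np.ones(L - 1)
    lap1d = sp.diags([hop, -2.0 * np.ones(L), hop], [-1, 0, 1])
    I = sp.identity(L)
    kinetic = sp.kron(lap1d, I) + sp.kron(I, lap1d)      # restriction of the 2D kinetic term to the box
    potential = sp.diags(rng.integers(0, 2, size=L * L).astype(float))  # i.i.d. Bernoulli(1/2) site potential
    return (kinetic + potential).tocsr()

rng = np.random.default_rng(0)
H = anderson_box_2d(40, rng)                             # a 1600 x 1600 sparse symmetric matrix
lowest = eigsh(H, k=5, which="SA", return_eigenvectors=False)
print(np.sort(lowest))                                   # a few extreme eigenvalues of one random sample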
As with the continuous Bernoulli-Anderson model, no energy-interval MSA is available under these regimes due to the weak probability estimate. Thus it is our aim to tackle this problem and extract (strong) dynamical localization results from the single-site MSA result derived in <cit.> by following the method developed in <cit.>. It is worth mentioning that our proof works for arbitrary dimension d. If a single-energy MSA can be established at the bottom of the spectrum when d>3, or for the entire spectrum when d=2, as anticipated by physicists (which remains an open question in the field), then our results would indicate strong dynamical localization in those regions. While the techniques presented in this paper closely follow the work of <cit.>, there are some features worth mentioning. Firstly, the approach of <cit.> is developed in the continuum and applies to more general operators, leading to technical difficulties that can be avoided or simplified in the discrete setting. For instance, the generalized eigenfunction expansion (GEE) can be constructed more directly in the discrete regime without referring to more general GEE theory and other references (but we still need the BGKM-decomposition theorem <cit.>). Secondly, the extraction of localization from MSA had undergone several revisions before <cit.> and resulted in a form that is perceived as user-friendly but difficult to comprehend, as demonstrated in Definition <ref> and Theorem <ref> below. Hence, we attempt to offer some clarification on the evolution of the definition that may provide insight into why it is defined and stated in such a way. Finally, as mentioned above, we formulate our results in a more axiomatic way that we hope will provide help for potential use in the future. The paper is organized as follows: * Section <ref> includes preliminaries, introduction of main result (Theorem <ref>), key concept and key Theorem (Theorem <ref>). * Section <ref> and <ref> proves Theorem <ref> assuming the key Theorem <ref>. * Section <ref> and <ref> proves Theorem <ref> using two spectral reductions. § PRELIMINARIES AND MAIN RESULTS §.§ Preliminaries For x∈^d, let |x|_1=max_i=1,…,d|x_i| and |x| = (∑ |x_i|^2)^1/2. Let ⟨ x ⟩ = (1+|x|^2)^1/2. Let ⟨ X⟩^p denote the multiplication operator ⟨ x⟩^p on ℓ^2(^d). Fix some ν>d/2 through out the paper, for x_0∈^d, let (T_x_0ϕ)(x) = ⟨ x - x_0 ⟩^νϕ(x). Let Λ_L(x)={y∈^d:|y-x| < L/2} and Λ_L_2,L_1(x) = Λ_L_2(x)∖Λ_L_1(x). We omit x if it is clear in the context. Let ‖·‖_ℓ^2(Λ) denote the ℓ^2 norm on ℓ^2(Λ) for any Λ⊂^d. We omit Λ if it is clear in the context. Let P_Λ denote the projection from ℓ^2() →ℓ^2(Λ). Let H_ω,Λ:=P_Λ H_ω P_Λ and G_ω,Λ,E:=(H_ω,Λ-E)^-1. Let χ_B(x) denote the characteristic function of a Borel set B⊂ and χ_B(H) denote the spectral projection of H. For an operator A : ℓ^2(^d) →ℓ^2(^d), let ‖ A ‖ denote the operator norm and let ‖ A ‖_p = (|A|^p)^1/p denote the Schatten norm. In particular, ‖ A ‖_1 and ‖ A ‖_2 are the trace and Hilbert-Schmidt norm respectively. §.§ Main results Our main result is to extract “localization” from “single-energy MSA result”. In order to be more explicit, we need some preparations: We say that: * The box Λ = Λ_L(x_0) is (ω,E,m)-regular if ∀ x,y∈Λ with |y-x|≥L/100, we have |G_ω,Λ,E(x,y)| ≤ e^-m|y-x|. * The box Λ = Λ_L(x_0) is (ω,E,m,η)-good if Λ is (ω,m,E)-regular and ‖ G_ω,Λ,E‖≤ e^L^1-η. * The box Λ = Λ_L(x_0) is (ω,E,m,η)-jgood (just as good) if Λ is (ω,m,E)-regular and ‖ G_ω,Λ,E‖≤ 2e^L^1-η. 
* The scale L∈ is (E,m,η,p)-good if for any x∈^d, we have {ω:Λ_L(x)  is  (ω,E,m,η)-good}≥ 1-L^-pd. We say H_ω has the “single-energy MSA result” on an interval ℐ⊂ if there are m_0>0, 0<η_0<1, p_0>0, and some L_0 s.t. any scale L≥ L_0 is (E,m_0,η_0,p_0)-good for any E∈ℐ. We are interested in the following types of localization: (localization) We say H_ω exhibits * Anderson localization (AL) in an interval I⊂ if for a.e. ω, H_ω has pure point spectrum and its eigenfunctions decay exponentially. * Dynamical localization (DL) of order p in I if for a.e. ω, sup_t∈‖⟨ X⟩^p e^-itH_ωχ_I(H_ω)δ_0‖_ℓ^2<∞ * Strong dynamical localization (SDL) in expectation of order (p,s) in I if 𝔼{sup_t∈‖⟨ X⟩^p e^-itH_ωχ_I(H_ω)δ_0‖_ℓ^2^s } <∞ It is well-known that SDL implies DL by definition and DL implies AL by the RAGE theorem <cit.>. But AL does not imply DL, c.f. <cit.>. Once the “single-energy MSA result” is built on some interval ℐ, our main result below provides a blackbox for people to use to extract SDL, thus DL and AL, on ℐ. Recall that ν>d/2. Let ℐ⊂ be a bounded open interval. Assume there is m_0>0, 0<η_0<1, p_0>0, and some L_0 = L_0(m_0,η_0,p_0,ℐ)>0, s.t. any L≥ L_0 is (E,m_0,η_0,p_0)-good for all E∈ℐ. Then for any b>0, for all s∈(0,p_0 d/bd + ν), H_ω exhibits strong dynamical localization of order (bd,s) on ℐ, i.e. 𝔼{sup_t≥ 0‖⟨ X-x_0⟩^bde^-itH_ωχ_I(H_ω) δ_x_0‖_ℓ^2^s }≤ C <∞. As a result, H_ω also exhibits dynamical and Anderson localization on ℐ. In particular, since <cit.> has derived the “single-energy MSA result” when d = 2,3 near the bottom of the spectrum with Bernoulli distribution μ, our result implies SDL in the corresponding setup. Let d=2,3. For any 0<p_0<1/2, there is E_0>0, s.t. for any b>0, for any s∈ (0,p_0 d/bd + ν), H_ω exhibits strong dynamical localization of order (bd,s) on [0,E_0]. §.§ Key concept and key theorem Here we also want to introduce the key concept (Definition <ref>) and key theorem (Theorem <ref>) in the proof of Theorem <ref> since they may seem unintuitive at first sight. Recall that ν>d/2 is fixed through out the paper. If H ψ_E = E ψ_E and 0≠ψ_E(x)≤ C⟨ x ⟩^ν for some C>0, then we say ψ_E is a generalized eigenfunction (g.e.f.) of H w.r.t. the generalized eigenvalue (g.e.v.) E. Let Θ_ω,E denote the set of all g.e.f.'s of H_ω w.r.t. E and set Θ̃_ω,E = Θ_ω,E∪{0}. Recall that T_aϕ(x) = ⟨ x-a⟩^νϕ(x). Thus ψ_E(x) ≤ C⟨ x⟩^ν ⇔ ‖ T^-1ψ_E‖_ℓ^2<∞. Furthermore, ⟨ b ⟩≤√(2)⟨ a ⟩⟨ a - b ⟩, thus ‖ T_a^-1‖≤ 2^ν/2⟨ a - b ⟩^ν‖ T_b^-1‖ and vice versa. Thus ψ_E(x) ≤ C⟨ x⟩^ν ⇔ ‖ T^-1ψ_E‖_ℓ^2<∞ ⇔ ‖ T^-1_aψ_E‖_ℓ^2<∞, ∀ a∈^d. Key concept Now we can introduce a key definition, originally introduced in <cit.> and further developed in <cit.>, that plays an important role in our proof of localization results. It is well-defined by the argument above. Given ω∈Ω, E∈ and x∈^d, we define two quantities :=sup_ψ_E∈|ψ_E(x)|/‖ T_x^-1ψ_E‖ _ℓ^2, if ≠∅, 0, otherwise. :=sup_ψ_E∈‖ψ_E‖_ℓ^2(Λ_2L,L)/‖ T_x^-1ψ_E‖_ℓ^2, if ≠∅, 0, otherwise. Notice that is not a normalized g.e.f. w.r.t. E since the denominator changes with x. To see the intuition behind this definition, one has to refer back to <cit.>, where the author introduces a WULE condition in order to prove dynamical localization. Similar idea is also used in <cit.> to prove strong dynamical localization. This key idea in <cit.> is to normalize eigenfunction ψ_E(x) by considering ψ_E(x) = ψ_E(x)/‖ T^-1ψ_E‖_ℓ^2. 
Then one obtains two nice properties of ψ_E(x) that ψ_E(x) does not have: The first one is uniform (in E) upper-bound control |ψ_E(x)| ≤⟨ x ⟩^ν, ∀ x∈^d, E∈. In fact, having certain “uniformity in E” of eigenfunctions is essential in the proof of dynamical localization as discussed in <cit.>. The second benefit helps overcome the “summability in E” issues that are involved in the proof of dynamical localization. Let us illustrate this issue by assuming H_ω already has an orthonormal basis of eigenfunctions {ψ_ω,E}_E. Then a common estimate used to derive DL or SDL is |⟨δ_x, e^-it H_ωδ_y⟩| ≤∑_E |ψ_ω, E(x)| |ψ_ω, E(y)|, which requires summability in E in a certain sense if one want to bound it. This is also resolved through the normalized ψ_E(x) because ∑_E ‖ψ_E‖_ℓ^2^2 = ∑_x∈^d1/⟨ x ⟩^2ν∑_E |⟨δ_x, ψ_E⟩|^2 = ‖1/⟨ x ⟩^ν‖_ℓ^2^2<∞. These two nice properties play an important role in the proof of (strong) dynamical localization in <cit.>. However, this argument requires the a priori existence of a complete basis of eigenfunctions ψ_E(x). This will hold for a.e. ω if one can prove AL first. In the more general case, where AL is not a priori known, e.g. <cit.> and our case, one may turn to the g.e.f. ψ_E(x)∈ and use the generalized eigenfunction expansion (see Sec <ref>) to derive SDL directly (see Sec <ref>). Returning to the definition of , it can be thought of as a “locally normalized ψ_E(x)” so that the uniform upper bound (<ref>) becomes even stronger ≤ 1,  and ≤⟨ 2L⟩^ν, ∀ x∈^d, ∀ E, ω. The price to pay is that is no longer a g.e.f.. Thus, the traditional Poisson formula that connects the g.e.f. with the Green's function need to be adapted. Nevertheless, since and ψ_E(x) are equivalent up to a polynomial, i.e. ψ_E(x) ≤ 2^ν/2⟨ x ⟩^ν≤ 2^ν⟨ x ⟩^2νsup_ψ_E∈|ψ_E(x)|, their exponential/subexponential decay will not be influenced. In other words, there is no intrinsic difference between using or ψ_E to state the result but we choose to use following the argument in <cit.>. Key Theorem The following theorem is the key to extract localization from “single-energy MSA result”. Once we have “single energy MSA result”, the theorem states that with high probability, if some g.e.f. is subexponentially localized near x_0, then all g.e.f. will decay exponentially away from x_0 for all E. Let ℐ⊂ be a bounded open interval, m_0>0, p_0>0, η_0∈(0,1). Assume there is a scale ℒ, s.t. any L≥ℒ is (E,m_0,η_0,p_0)-good for all energies E∈ℐ. Let M=m_0/30^n̂+2, where n̂=n̂(p_0):={n∈ℕ: 2^1/n-1<p_0}. Fix p∈(0,p_0), let μ=β/2, with β=ρ^n_1 where ρ>0 and n_1∈ℕ s.t. (1+p_0)^-1<ρ<1 and (n_1+1)β<p_0-p. Let ℐ_L:={E∈ℐ:  dist (E,∖ℐ)>e^-ML^μ}. Then given a sufficiently large L, for any x_0∈^d, there exists an event 𝒰_L,x_0 s.t. * 𝒰_L,x_0∈ℱ_Λ_L(x_0) and {𝒰_L,x_0}≥ 1-L^-pd. * If ω∈𝒰_L,x_0, E∈ℐ_L, we have W_ω,x_0(E)>e^-ML^μ⇒ W_ω,x_0,L(E)≤ e^-M/40L and thus W_ω,x_0(E)W_ω,x_0,L(E)<e^-1/2ML^μ, for large enough  L. The idea of the proof is the following: We want to find a large set of configurations ω (i.e. with high probability) such that (2), in particular, ≤ e^-ML, holds for all E∈ℐ. This is not hard to achieve for a given E_0 and its exponentially small neighborhood |E - E_0| ≤ e^-m_0L, c.f. Lemma <ref>. However, to cover a fixed interval ℐ⊂, we need to apply Lemma <ref> to e^mL-many evenly distributed E_0 over ℐ. But this would destroy the probability estimate since e^mLL^-pd>>1. Thus, we need to control the number of E_0 for which we invoke Lemma <ref> in order to control the probability. 
This is done by the so-called “spectral reduction”. In particular, if p>1, we only need the “first spectral reduction”, see Subsec. <ref>, where we (roughly speaking) apply Lemma <ref> to L^d-many E_0∈σ(H_ω, Λ). If p<1, we will also need the “second spectral reduction”, see Subsec. <ref>, where we need to turn to even more sparsely-distributed, L^(p_0 - p)d-many “reduced spectrum” E_0∈σ^(red)(H_ω) for some 0<p<p_0. In both reductions, assumption like ≥ e^-ML^μ in (2) is there to guarantee E is close to one of the chosen E_0 correspondingly. We will do some preparation in Sec. <ref>; extract localization (Theorem <ref>) from Theorem <ref> in Sec. <ref> and finally prove Theorem <ref> in Sec. <ref> and <ref>. § GENERALIZED EIGENFUNCTION EXPANSION We give a short introduction of generalized eigenfunction expansions (GEE) needed for the proof of Theorem <ref> in Sec <ref>. The idea is the following: Not every self-adjoint operator has eigenfunctions, e.g. those with continuous spectrum, and so, one cannot decompose them via eigenspaces. However, if one could enlarge the domain (using rigged Hilbert space) and allow generalized functions (functions belonging to some larger space) to be the“generalized eigenfunctions", then every self-adjoint operator could have decomposed w.r.t. such “generalized eigenspaces”. This procedure is rigorously done for general appropriate operators and rigged spaces in <cit.> and is eventually summarized as <cit.>, the so-called BGKM-decompostion or GEE. Here, in the discrete regime, we will do most of the construction more directly. It can be directly verified that the construction here coincides with <cit.>. We will borrow the Bochner theorem and BGKM-decomposition from <cit.> without proof. Rigged spaces Let ℋ_+, ℋ_- be weighted ℓ^2 spaces: ℋ_+ = ℓ^2(^d, ⟨ x ⟩^2νdx), ℋ = ℓ^2(^d, dx), ℋ_- = ℓ^2(^d, ⟨ x ⟩^-2νdx). with inner product and norm being ⟨ u,v⟩_± = ∑_x∈^du(x)v(x)⟨ x ⟩^± 2ν, ‖ u‖_+ = ‖⟨ x ⟩^νu‖_ℓ^2, ‖ u‖_- = ‖u/⟨ x ⟩^ν‖_ℓ^2. Since ‖·‖_- ≤‖·‖_ℓ^2≤‖·‖_+, we have ℋ_+ ⊂ℋ⊂ℋ_- in a both continuous and dense sense. This chain is a chain of rigged Hilbert spaces, cf <cit.>. The definition there is more general and involved, but in our discrete setting, it coincides with the ℋ_± we give above. Since the embeddings are dense, the inner product ⟨·, ·⟩_ℓ^2 defined on ℋ×ℋ extends continuously to ℋ_- ×ℋ_+. More specifically, for u∈ℋ_+ = ⟨ x ⟩^-νℋ, v∈ℋ_- = ⟨ x⟩^νℋ, the formula for extended inner product (which we still denote as ⟨·, ·⟩_ℓ^2) is still ⟨ u, v⟩_ℓ^2 = ∑_x u(x)v(x). Operators in ℬ(ℋ_+, ℋ_-) Given a bounded operator A∈ℬ(ℋ_+, ℋ_-), we say A is positive if ⟨ Au,u⟩_ℓ^2≥ 0 for u∈ℋ^+. We define the trace of a positive operator A∈ℬ(ℋ_+, ℋ_-) to be _±(A) := ∑_n⟨ Au_n,u_n⟩_ℓ^2 when the sum is finite, where {u_n}_n is an orthonormal basis (ONB) in ℋ_+ and the inner product is actually the extended inner product on ℋ_-×ℋ_+. In particular, let p(x) = ⟨ x ⟩^ν. Since {1/p(x)δ_x}_x∈^d forms an ONB of ℋ_+, we have _±(A) = ∑_x∈^d⟨ A p(x)^-1δ_x, p(x)^-1δ_x⟩_ℓ^2 = ∑_x∈^d⟨ p(x)^-1 A p(x)^-1δ_x, δ_x⟩ = (p(·)^-1 A p(·)^-1) = (T^-1AT^-1) where denotes the usual trace of a trace class operator in ℬ(ℋ, ℋ). This sheds the light of the consideration of (T^-1f(H)T^-1) in <cit.> and <cit.>. Embeddings and the Bochner Theorem Let i_+:ℋ_+→ℋ, and i_-:ℋ→ℋ_- be the embedding maps i_+u = u, i_-v = v. Assume B:ℋ→ℋ is a bounded operator. It can be easily checked that B_± := i_-B i_+:ℋ_+→ℋ_- induced by B is also bounded. 
In particular, from the spectral projection χ_I(H_ω)∈ℬ(ℋ, ℋ), we can induce a family of operators χ_I,±(H_ω):= i_-χ_I(H_ω)i_+∈ℬ(ℋ_+, ℋ_-), for all Borel set I⊂, we get a ℬ(ℋ_+, ℋ_-)-operator-valued measure, cf. <cit.>. Furthermore, _±(χ_,±(H_ω)):= (T^-1i_- i_+ T^-1) = ∑_x p(x)^-2<∞, ∀ω. Thus {χ_I, ±(H_ω)}_I is a ℬ(ℋ_+, ℋ_-)-operator-valued measure with finite trace, for which we can apply the Bochner theorem <cit.> below. Let θ: ℬ() →ℬ(ℋ_+,ℋ_-) be an operator-valued measure with finite trace, i.e. * θ(I) is non-negative for any Borel set I⊂, * _±(θ())<∞, * θ(_j I_j) = ∑_j θ(I_j), with convergence in the weak sense. Then θ can be differentiated w.r.t. the trace measure ρ(I) := _±(θ(I)) and there exists P(E): ℋ_+ →ℋ_- with 0≤ P(E) = _±(P(E)) = 1, ρ-a.e.  E, P(E)  is weakly measurable w.r.t. ℬ(), The integral converges in the Hilbert-Schmidt norm. such that θ(I) = ∫_I Q(E) dρ(E). Generalized eigenfunction decomposition Applying Theorem <ref> to χ_I,±(H_ω), we obtain part of the BGKM decomposition, or GEE, see <cit.>: There exists weakly measurable operators P_ω(E):ℋ_+ →ℋ_-, and μ_ω(B) := _±(i_-χ_I(H_ω)I_+) = (T^-1χ_I(H_ω) T^-1) = ∑_x ⟨ p(x)^-1δ_x, χ_I(H_ω)p(x)^-1δ_x⟩ = ∑_x ⟨χ_I(H_ω)p(x)^-1δ_x, χ_I(H_ω)p(x)^-1δ_x⟩ = ‖ T^-1χ_I(H_ω)‖_2^2, such that i_- χ_I(H_ω) i_+ = ∫_I P_ω(E) dμ_ω(E) with _±(P_ω(E)) = 1, for μ_ω-a.e.  E. Furthermore, assume u∈ℋ_+, f∈ℬ_1,b() is a bounded Borel function, <cit.> stated that χ_I(H_ω)u = (∫_B P_ω(E) dμ_ω(E))u, f(H_ω)χ_I(H_ω) u = ( ∫_B f(E)P_ω(E) dμ_ω(E)) u, and Range(P_ω(E))= Θ_ω, E,  for μ_ω-a.e. E. This is precisely the decomposition of χ_I(H_ω) w.r.t. “generalized eigenspaces” (since Range(P_ω(E)) = Θ_ω,E), i.e. the generalized eigenfunction expansion. Applications The first application of GEE is an alternative derivation of Schnol's theorem: Since μ_ω is indeed a spectral measure of H_ω by definition, to prove pure point spectrum, it is enough to show that Range(P_ω(E))⊂ℋ; because then all generalized eigenfunctions are bona fide eigenfunctions. This argument is widely used in the proof of localization for random or quasiperiodic operators, although we do not need it for this paper, since we prove SDL directly using the next application. The second application follows by direct computation using (<ref>), definition <ref> and (<ref>). It is used in the proof of localization in next section. We have ‖χ_Λ_2L,L(x_0) f(H_ω) P_ω(I) δ_x_0‖ _1 ≤∫_I f(E) ‖χ_Λ_2L,L(x_0) P_ω(E) δ_x_0‖ _1 dμ_ω(E), ‖χ_Λ_2L,L(x_0) P_ω(E) δ_x_0‖ _1 ≤‖χ_Λ_2L,L(x_0) P_ω(E)‖ _2 ‖δ_x_0 P_ω(E) ‖ _2 ≤ || ‖ T_x^-1 P_ω(E)‖ _2 ‖ T_x^-1 P_ω(E)‖ _2. Notice that <cit.> introduced and W_ω(x;E) with the only difference being the range of ϕ_E considered when taking the supremum. was defined for all ϕ_E∈ while W_ω(x;E) was defined for ϕ_E∈Range(P_ω(E))∖{0}. This is not necessary in our setting due to (<ref>). § PROOF OF THEOREM <REF> In this section, we extract localization results, i.e. Theorem <ref>, from Theorem <ref>. 
By Lemma <ref>, we have ‖χ_x,L f(H_ω) P_ω(I) δ_x_0‖_1 ≤∫_I |f(E)| ‖χ_x,L P_ω(E) δ_x ‖_1 dμ_ω(E) ≤∫_I |f(E)| ‖χ_x,L P_ω(E) ‖_2 ‖δ_x P_ω(E) ‖_2 dμ_ω(E) ≤∫_I |f(E)| | | μ_ω,x({E}) dμ_ω(E) ≤μ_ω(I) ‖ f‖ _L^∞(I,dμ_ω)sup_E∈ I || where by Theorem <ref>, we have 𝔼{‖ W_ω,x_0(E)W_ω,x_0,L(E)‖_L^∞(I,dμ_ω(E))^s } ≤ Ce^-s/2L^μ{𝒰_x_0,L^c}+C2^sνL^sν{𝒰_x_0,L} ≤ Ce^-s/2L^μ+C2^sνL^sνL^pd ≤ CL^(sν-pd) Thus 𝔼{‖χ_x,L f(H_ω)P_ω(I) δ_x ‖_1^s }≤ C‖ f‖ _L^∞(I,dμ_ω)L^-(pd-sν) By taking L = 2^k above and taking the sum over k, we get 𝔼{sup_t‖⟨ X-x ⟩^bd e^-itH_ωP_ω(I) δ_x ‖^s } ≤ C∑_k 2^sbd/2+(k+1)sbd-k(pd-sν) ≤ C∑_k (2^sbd-pd+sν)^k<∞. where s<pd/bd+ν. § PRELIMINARY LEMMAS In this section, we make some preparations for the proof of Theorem <ref> in the next section. §.§ Poisson formula Given Λ⊂^d, let ∂Λ = {(y,y')∈^d×^d: |y - y'|_1 = 1,  either y∈Λ, y'∉Λ,  or y'∈Λ, y∉Λ. Assume H_ωψ = Eψ. Recall H_ω, Λ = P_Λ H_ω P_Λ, G_ω, Λ, E := (H_ω, Λ - E)^-1. Then the famous Poisson's formula (c.f. <cit.>) states that ψ(x) = -∑_(y,y')∈∂Λ y∈Λ, y'∉ΛG_ω, Λ,E(x,y) ψ(y'). §.§ Stability of goodness The first lemma below describes the stability of “goodness of boxes” under exponential perturbation of energy E_0. Assume ω,E_0,L_0,x are fixed and ∀ L≥ L_0, Λ_L(x) is (ω,E_0,m_0,η_0)-good. Then for any m<m'<m_0, there is L_1 = L_1(m,m'), s.t. ∀ L≥ L_1, ∀ E satisfying |E-E_0|≤ e^-m'L, we have Λ_L(x) is (ω,E,m,η_0)-jgood. Recall the resolvent identity: G_ω,E,Λ_L(x)-G_ω,E_0,Λ_L(x)=(E_0-E)G_ω,E,Λ_L(x)G_ω,E_0,Λ_L(x). Thus, ‖ G_ω,E,Λ_L(x)‖ ≤‖ G_ω,E_0,Λ_L(x)‖ +|E_0-E| ·‖ G_ω,E,Λ_L(x)‖·‖ G_ω,E_0,Λ_L(x)‖ ≤ Ce^L^1-η+Ce^-m'L+L^1-η‖ G_ω,E,Λ_L(x)‖ , and so, ‖ G_ω,E,Λ_L(x)‖≤Ce^L^1-η/1-Ce^-m'L+L^1-η≤ 2Ce^L^1-η. Also, if m<m', |G_ω,E,Λ_L(x)(a,b)| ≤ |G_ω,E_0,Λ_L(x)(a,b)|+|E-E_0| · |G_ω,E,Λ_L(x)(a,b)| · |G_ω,E_0,Λ_L(x)(a,b)| ≤ e^-m_0|a-b|+e^-m'Le^L^1-η+L^1-η ≤ e^-m|a-b| when L is large enough. Denote the threshold by L_1. To state the other two lemmas, we need the following definitions. Fix l>10. Let α_l:= 3l5. Let ℒ_l be the “coarse lattice” with nodes 𝒩_l=(α_l)^d and sides 𝒮_l = {(x,y)∈𝒩_l×𝒩_l:|x-y|_1=α_l}. Notice that boxes of size l centered at the coarse lattice ⋃_x∈𝒩_lΛ_l(x) covers the whole ^d space. We use coarse lattice when we want to use box of size l to cover some region but not too dense (like centered at ^d). §.§ Fixed energy trap The following lemma is mentioned in the idea of the proof of Theorem <ref> in Subsec. <ref>. Under the same assumption of “single-energy MSA result”, when L is large enough, given some E_0, with high probability, one can make exponentially small for any |E - E_0|≤ e^-mL. Under the same assumption with Theorem <ref>, for any p'<p_0, m<m'<m_0, when L is large enough, for any x_0∈^d, give any E_0, there is event ℳ_L,x_0^(E) such that * ℳ_L,x_0^(E)∈ℱ_Λ_L(x_0) and (ℳ_L,x_0^(E)) ≥ 1 - L^-p' d. * ≤ e^-m/100L for any |E - E_0| ≤ e^-m'/100L. By assumption, when L is large enough, scale l:=L/100 is (E_0, m_0, η_0, p_0)-good. Consider the coarse lattice 𝒩_l. Let L_+ = L + L/100, L_- = L - L/100. Set ℳ_L,x_0^(E):= ⋃_x∈𝒩_l ∩Λ_2L_+,L_-{ω: Λ_l(x)  is  (ω, E_0, m_0,η_0)-good}. Then * (ℳ_L,x_0^(E)) ≥ 1 - (1000/3)^d (L/100)^-p_0d≥ 1 - L^-p'd when L is large enough. * If ω∈ℳ_L,x_0^(E), then all Λ_l(x) is (ω, E_0,m_0,η_0)-good. By Lemma <ref>, for any m<m”<m'<m_0, for any |E - E_0| ≤ e^-m'/100L, Λ_l(x) is (ω, E, m”, η_0)-jgood. Thus by (<ref>), for any ψ_E∈Θ_ω, E, ‖ψ_E‖_ℓ^2(Λ_2L, L) ≤ l^d - 1e^-m”lsup_x∈Λ_2L_+,L_-(x_0)|ψ_E(x)| ≤ (L100)^d - 1e^-m”/100L(2L_+)^ν‖ T_x_0^-1ψ_E‖_ℓ^2 ≤ e^-m/100L‖ T_x_0^-1ψ_E‖_ℓ^2 when L is large enough. 
Thus ≤ e^-m/100L. §.§ Percolation argument Fix ω, we say * x∈𝒩_l is a (ω,E_0,m_0,η_0)-good (-bad) node if Λ_l(x) is a (ω,E_0,m_0,η_0)-good (-bad) box. * 𝒜⊂𝒮_l is a (l,E_0,m_0,η_0)-good loop (shell) if it is a closed loop (shell) in 𝒮_l where each node x∈𝒜 is a good node. * 𝒫⊂𝒮_l is a bad path if it is a non-self-intersecting path of the graph (𝒩_l, 𝒮_l) and each node is a bad node. We say a (l,E,m,s)-good shell 𝒜 is totally inside a subset S⊂^d if ⋃_x∈𝒜Λ_l+2(x)⊂ S. Let l>12. For fixed ω, if there is a (l,E_0,m_0,s)-good loop 𝒜 in Λ_L_2,L_1(x_0), then there is some C=C(d) = √(d)/4, s.t. ∀ |E-E_0|≤ e^-m_0l, E∈ℐ, ∀ m<m_0, and we have dist(E,σ(H_Λ_L_2)) W_ω,x(E)≤ CL_2^d+ν4^de^-mℓ/3. In particular, if l = √(L), L_1 = L/2, L_2 = L, then when L is large enough, If ≥ e^-m/30√(L) ⇒ dist(E, σ(H_ω, Λ_L(x_0))≤ e^-m/30√(L) Let M = ‖ V‖ _ℓ^∞+sup_E∈ℐ|E|. Let l>12 so that l/2-2>l/3. Note that dist(E,σ(H_Λ_2)) = ‖ (H_Λ_L_2-E)^-1‖ ^-1= inf_ψ∈ℓ^2‖ (H_Λ_L_2-E)ψ‖/‖ψ‖. Recall 𝒜 is a closed loop in 𝒢, so it “circles" a region A⊂^d. Let χ_A: ^d →{0,1} be the characteristic function of A on ^d. Let ϕ_0 be a g.e.f. of H_ω w.r.t. E_0, i.e. (H_ω-E_0)ϕ_0 = 0. Taking ψ = χ_Aϕ_0, we find that (H_Λ_L_2-E_0)ψ(x) will be 0 at most points x∈Λ_L_2 except for those near 𝒜. dist(E,σ(H_Λ_2)) ≤‖ (H_Λ_L_2-E_0 + E_0 - E)χ_Aϕ_0‖/‖χ_Aϕ_0‖ ≤∑_dist(x,𝒜)≤ 1,x∈^d (2^d+1) 2^d+1 M max_|y-x| ≤ 2,y∈^d |ϕ(y)|/|ϕ(x_0)| ≤ C(L_2^d-L_1^d)4^dMe^-m_0l/3 l^d-1max_ y∈Λ_L_2,L_1 |ϕ(y)|/|ϕ(x_0)| ≤ CM 4^dL_2^d e^-ml/3max_y ∈Λ_L_2,L_1 |ϕ(y)|/|ϕ(x_0)|. Notice max_ y∈Λ_L_2,L_1|ϕ(y)| = max_ y∈Λ_L_2,L_1⟨ y-x_0⟩^ν|ϕ(y)|/⟨ y-x_0⟩^ν≤⟨ L_2⟩^ν‖ T_ν^-1ϕ‖ . Thus, by the definition of W_ω,x(E), we get dist(E,σ(H_Λ_L_2)) W_ω,x(E)≤ CL_2^d+ν4^de^-mℓ/3. Let 𝒴^(E)_x_0,l,L_1,L_2 denote the event {ω:an (ω,l,E,m,s)-good loop exists totally inside Λ_L_2,L_1(x_0)}. If a scale l is good, each node has a large probability of being “good”, and we expect a relatively large probability for having good loops as well. The next Lemma quantifies this intuition. Assume E is fixed, and the scale l is (E,m,s,p)-good. We have {𝒴^(E)_x_0,l,L_1,L_2}≥ 1-2d(L_1+3l/l)^d-1(2^d)^L_2-L_1-l/ll^-pdL_2-L_1-l/(3^d-1)l. In particular, if l=√(L), L_1=L/2, L_2=L, when L is large enough, then {𝒴^(E)_x_0,l,L/2,L}≥ 1-L^-c_d,p√(L). Notice that 𝒴^(E)_x_0,l,L_1,L_2 only depends on Ω_Λ_L_2,L_1(x_0). Fix E∈ℐ. First notice that (𝒴^(E)_x_0,l,L_1,L_2)^c = {ω: there is no good loop totally inside Λ_L_2,L_1} = {ω:there is a bad path escaping from ∂Λ^+_L_1+l+2 to ∂Λ^-_L_2-l-2} Notice that each such bad path must contain at least N:=L_2-L_1-2l-4/6l/5+1 many bad nodes starting from ∂Λ^+_L_1+l+2, which means it should contain N/(3^d-1)l-many independent bad nodes. And the number of all such potential paths is less than 2d(L_1+l+2/3l/5)^d-1(2^d)^N. So we obtain {(𝒴^E_x_0,l,L/2,L/2)^c} ≤ 2d(L_1+l+2/3l/5+1)^d-1(2^d)^N · l^-pdN = 2d(L_1+l+2/3l/5+1)^d-1(2^d)^L_2-L_1-2l-4/6l/5+1· l^-pd(L_2-L_1-2l-4/(3^d-1)l+1) The second inequality follows from the first one by N = L_2 - L_1 - 2l + 4/6l/5 and letting L be sufficiently large. § MULTISCALE TO LOCALIZATION We prove Theorem <ref> in this section by first performing two spectral reductions: Theorem <ref> and Theorem <ref>. §.§ The first spectral reduction Given b≥ 1, there exists a constant K_d,p,b≥ 1 s.t. for any K≥ K_d,p,b, for large enough L, for any x_0∈^d, there is an event 𝒬_L,x_0, with * 𝒬_x_0,L∈ℱ_Λ_L(x_0) and {𝒬_L,x_0}≥ 1-(LK)^-5d * for any ω∈𝒬_x_0,L, given E∈ℐ s.t. if there exists a g.e.f. ϕ with ϕ(x_0)≠ 0, then for L large enough dist(E,σ^(ℐ)(H_ω,Λ_L(x_0)))≤ e^-m̂L/K. 
The strategy here is two-fold: * Construct 𝒬_x_0,L by layers. * Estimate the probability of the event 𝒬_x_0,L occurring. Given L_0, we define l_0=√(L_0), and l_k=l_k-1^1+η, for k=1,2,⋯,n̂, where (1+η)^n̂=2, so l_n̂=l_0^2=L_0. Let L_k=L_k-1+2Jl_k where J is a large constant to be determined later. Then we have L_n̂=L_0+2J∑_k=1^n̂l_k≤ (1+2Jn̂)L_0. Now we use an inductive construction to form the set 𝒬_x,L. * Given m_0>0. For the initial layer Λ_L_0, we pick 𝒴^E_0,i_l_0,√(L_0),L_0 where E_0,i are energies s.t. the union of [E_0,i-e^-m_0l_0,E_0,i+e^-m_0l_0] covers ℐ. We need to choose |ℐ|/2e^-m_0l_0=O(e^√(L_0)) many of them. Let Y_0=⋂_i 𝒴^E_0,i_l_0,√(L_0),L_0, then we have {Y_0}≥ 1-Ce^√(L_0)L_0^-c_d,p√(L_0)≥ 1-Ce^-√(L_0). Recall that 𝒴^(E)_l,L_1,L_2 only depends on Ω_Λ_L_2,L_1. In particular, 𝒴^E_0,i_l_0,√(L_0),L_0 only depends on Ω_Λ_L_0. * For the remaining events, we use an inductive scheme. If ω_Λ_L_k-1∈Ω_Λ_L_k-1 is given, we can consider all eigenvalues E_k,j=E_k,j(ω_Λ_L_k-1) of H_Λ_L_k-1,ω_Λ_L_k-1. For each such E_k,j, we can then consider 𝒴^E_k,j_L_k-1,l_k,L_k. Notice the dependence here is only on ω∈Ω_Λ_L_k,L_k-1 so there is no conflict with the previous ω_Λ_L_k-1 and the induction is well-defined. Let Y_k(ω_Λ_L_k-1)=⋂_j 𝒴^E_k,j_L_k-1,l_k,L_k(ω_Λ_L_k-1), where we pick J≥ J_p,d (note that J depends on c_d,p). In this situation, we have {Y_k}≥ 1-L^d+d-1_k-13^Jdl_k^-c_d,p2J≥ 1-L_0^-6d. Having obtained the box Λ_L_n̂ and we choose the event depending on Ω_Λ_L_n̂ to be 𝒬_x_0,L_n̂=⋂_k=0^n̂Y_k(ω_k-1) and we have {𝒬_x_0,L_n̂}≥ 1-L_0^-5d. Thus, to obtain 𝒬_x_0,L, we choose an L_0 s.t. L=L_n̂≤(1+2J_d,pn̂)L_0. Then for any K≥ K_d,p=1+2J_d,pn̂, we have 𝒬_x_0,L≥ 1-L_0^-5d≥ 1-(LK)^-5d. We now need to verify that for ω∈𝒬_x_0,L, E∈ℐ with some g.e.f. ϕ(x_0)≠ 0, we have dist(E,σ^(ℐ)(H_ω,Λ_L(x_0)))≤ e^-m̂L/K. Notice that Lemma <ref> applying to 𝒴^E_0,i_√(L_0),l_0,L_0 implies there exists an E_0,i s.t. |E-E_0,i|≤ e^-m_0l_0. So, we have, dist(E,σ^(ℐ)(H_ω,Λ_L_0(x_0)))≤ e^-m_1l_0, where we choose m_1<m_0. Thus, there exists an E_1,j∈σ(H_ω,Λ_L_0(x_0)) s.t. |E-E_1,j|≤ e^-m_1l_0≤ e^-m_1l_1. We now apply Lemma <ref> to 𝒴^E_1,j_L_0,l_1,L_1 and repeat this process n̂ times to obtain: dist(E,σ^(ℐ)(H_ω,Λ_L(x_0)))≤ e^-m̂L/K. §.§ Second spectral reduction The reduced spectrum of H_ω in Λ_L(x_0), in the energy interval ℐ is given by σ^(ℐ,red)(H_ω,Λ_L(x_0)):= {E∈σ^(ℐ)(H_ω,Λ_L(x_0)): dist (E,σ^(ℐ)(H_ω,Λ_L_n(x_0))≤ 2e^-m̂/KL_n,n=1,⋯,n_1 } where L_n=L^ρ^n for n=0,1,2,3,⋯,N, and ρ^N=β. Let b≥ 1, Given large enough L, for each x∈^d there exists an event χ_L,x_0, with χ_L,x_0∈ℱ_Λ_L(x_0) and {χ_L,x_0}≥ 1-L^-bβ d, s.t. ∀ω∈χ_L,x_0 * If E∈ℐ satisfies W_ω,x_0(E)>e^-m̂√(L^β/K) and dist (E,∖ℐ)>2^-m̂√(L^β/K) then dist(E,σ^(ℐ,red)(H_ω,Λ_L(x_0)))≤ e^-m̂/KL * and we have #σ^(ℐ,red)(H_ω,Λ_L(x_0))≤ CL^(n_1+1)β d To obtain (<ref>) from (<ref>), one needs 𝒬̃_x_0,L=⋂_n=0^N𝒬_x_0,L_n. In this case, by Theorem <ref> and the definition of the reduced spectrum, the desired results follow. Proving (<ref>) requires sufficiently more work. First notice that, compared to the typical estimates on the number of eigenvalues of H_ω,Λ_L(x_0), i.e. #σ(H_ω,Λ_L(x_0))≤ CL^d, we want a much tighter bound because (n_1+1)β>0 is too small. The reduced spectrum nomenclature stems from the fact that the number of elements in it is largely reduced (but there are still enough to obtain the required deterministic estimate). To achieve these goals, we need the notion of a “notsobad set" which helps control the number of close eigenvalues for Λ_L(x_0) and Λ_L'(x_0). 
Let L'<L, x_0∈, and consider Λ_L,L', the annulus centered at x_0 (we omit x_0 from the notation for the time being). Let L_n=L^ρ^n, for n=0,1,2,⋯,n_1. Let ℛ_n={Λ_L_n(r)}_r∈ R_n be the standard L_n-covering of Λ_L,L' (see <cit.>). Given K_2∈ℕ (where K_2 will be chosen later), we define The annulus Λ_L,L' is (ω,E,K_2)-notsobad if there are at most K_2 points in R_n_1, denoted by r_i, 1≤ i≤ K_2, s.t. ∀ x∈Λ_L,L'∖Θ, where Θ=⋃_r_iΛ_3L_n_1(r_i), there exists a (ω,E,m,s)-good box Λ_L_n_x(r)∈ℛ_n_x s.t. x∈Λ_L_n_x(r) for some n_x∈{1,2,…,n_1}. An event 𝒩 is (Λ_L,L',E,K_2)-notsobad if 𝒩∈ℱ_Λ_L,L', and Λ_L,L' is (ω,E,K_2)-notsobad for all ω∈𝒩. Remark: Θ is called the singular set and the above definition captures the fact that outside of the singular set, each point is good in at least one level L_n, n∈{1,2,…,n_1}. The following lemma provides a probability estimates on such a set. If K_2≥K̂_̂2̂=K̂_̂2̂(d,p,b) and L≥L̂=L̂(d,p,b,K_2), then for all E∈ℐ, there exists a (ΛL,L',E,K_2)-notsobad event 𝒩_Λ_L,L'^(E) with {𝒩_Λ_L,L'^(E)}>1-L^-5bd We can now define the set we need to satisfy (<ref>) by: 𝒩_Λ_L,L'=⋂_E∈σ(H_ω,Λ_L')𝒩_Λ_L,L'^(E)∈ℱ_Λ_L 𝒩_L,x_0=⋂_n=1^n_1𝒩_Λ_L_n-1,L_n(x_0) Thus, {𝒩_L,x_0}>1-Cn_1L_n_1-1^-4bd≥ 1-Cn_1L^-4bβ d/ρ by Lemma <ref>. We also have: If ω∈𝒩_L,x_0, then #σ^(ℐ,red)(H_ω,Λ_L)≤ CL^(n_1+1)β d. First notice by definition that #σ^(ℐ,red)(H_ω,Λ_L) ≤#{{E_n}_n=0^n_1: E_n∈σ(H_ω,Λ_L_n) & |E_i-E_j|≤ 2e^-m̂/KL_maxi,j} := # D_0^n_1 We can count the RHS by layers inductively. We start with the layer L_n_1 and omit x_0 and ω for convenience so, #D_n_1^n_1=#σ(H_ω,Λ_L_n_1)≤ C(L_n_1)^d. Given {E_n}_k^n_1∈ D_k^n_1, we compute #{E:if  E_k-1=E,  then {E_n}_k-1^n_1∈ D_k-1^n_1}. Denote the previous set by B_k-1. Since ω∈𝒩_Λ_L_n-1,L_n for any n, Λ_L_n-1,L_n is an (ω,L_n-1,L_n,E_n)-notsobad set. Let Θ_n be the corresponding singular set and set Θ_k^n_1=⋃_n=k^n_1Θ_n∪Λ_L_n_1. Then we have: |Θ_k^n_1|≤ L_n_1^d+∑_n=k^n_1K_2(3(L_k-1)_n_1)^d=L^β d+(n_1-k+1)K_23^dL^ρ^k-1β d≤ CL^β d. If E∈ B_k-1, then ∀ x, x ∈λ_L_k-1∖Θ_k^n_1. So there is n_x∈{k,k+1,…,n_1}, s.t. x∈Λ_L_n_x-1,L_n_x∖Θ_n_x and there exists a (ω,E_n_x,m,s)-good box Λ_(L_n_x-1)_j containing x for some j∈1,2,…,n_1, where (L_n_x-1)_j=L^ρ^n_x+j-1. Since |E-E_n_x|≤ e^-m̂/KL_n_x≤ e^-m̂/KL^ρ^n_x≤ e^-m̂/K(L_n_x)_j, Λ_(L_n_x-1)_j is also (ω,E,m,s)-good by Lemma <ref>. Let ϕ_E be the normalized eigenfunction of E on H_ω,Λ_L_k-1, Then |ϕ_E(x)|≤ e^-m'L^ρ^n_x+j-1≤ e^-m'L^ρ^2n_1-1 So we have ∑_x∈Θ_k^n_1|ϕ_E(x)|^2=1-∑_x∈Λ_L_k-1∖Θ_k^n_1|ϕ_E(x)|^2≥ 1- CL^β de^-m'L^ρ^2n_1-1≥ 1/2 when L is large enough. #B_k-1∑_x∈Θ_k^n_1|ϕ_E(x)|^2≤ tr{P_Θ_k^n_1P_ℐ(H_ω,Λ_L_k-1)}≤ C|Θ_k^n_1≤ CL^β d Thus #B_k-1≤ 2CL^β d and using the inductive estimate from layer L_n_1 to layer L_1, we have #D_0^n_1≤ CL_n_1(L^β d)^n_1≤ CL^(n_1+1)β d. χ_L,x_0=Q̃_L,x_0∪𝒩_L,x_0 provides the desired event. Pick p' such that (n_1 + 1)β<p' - p<p_0 - p where n_1 and β is defined in the assumption of Theorem <ref>. Define ℳ_L,x_0=⋂_E∈σ^(ℐ,red)(H_ω,Λ_L(x_0))ℳ_L,x_0^(E). using Lemma <ref> with such p'<p_0. By Lemma <ref>, <ref> and the choice of β, n_1 in Theorem <ref>, we have (ℳ_L,x_0)≥ 1-CL^-(p'-(n_1+1)β)d when L is large enough. Also by our assumptions, W_ω,x_0≥ e^-ML^β/2≥ e^-m̂√(L^β/K) whenever K≥ 900, so we are free to choose K=900 for Theorem <ref>. Having done so, if we take b=1+1/β(p'-(n_1+1)β), and take 𝒰_L,x_0=χ_L,x_0∩ℳ_L,x_0, then {𝒰_L,x_0}≥ 1- CL^-(p' - (n_1 + 1)β)d≥ 1 - L^-pd. Now if ω∈𝒰_L,x_0, by Theorem <ref>, we have dist(E,σ^(ℐ,red)(H_ω,Λ_L(x_0)))≤ e^-m̂/KL = e^-m̂/9L/100 i.e. 
there exists E_0∈σ^(ℐ,red)(H_ω,Λ_L(x_0)) s.t. |E-E_0|≤ e^-m̂/9L/100. Since ω∈ℳ_L,x_0^(E_0) and |E-E_0|≤ e^-m̂/9L/100, by Lemma <ref>, W_ω,x_0,L(E)≤ e^-m̂/12L/100 = e^-M/40L. § ACKNOWLEDGMENTS The authors would like to thank Abel Klein for explaining the subtleties of MSA arguments and providing invaluable commentary on this work, and Svetlana Jitomirskaya for providing her encouragement, support, and insight throughout this project. They would also like to thank the University of California, Irvine, where most of the work was done while the authors were graduate students there. N.R. and X.Z. are partially supported by Simons 681675, NSF DMS-2052899 and DMS-2155211. X.Z. is also partially supported by NSF DMS-2054589 and Simons Foundation Targeted Grant (917524) to the Pacific Institute for the Mathematical Sciences.
http://arxiv.org/abs/2307.00236v1
20230701055717
Visualizing departures from marginal homogeneity for square contingency tables with ordered categories
[ "Satoru Shinoda", "Takuya Yoshimoto", "Kouji Tahata" ]
stat.ME
[ "stat.ME" ]
Visualizing departures from marginal homogeneity for square contingency tables with ordered categories Satoru Shinoda^1, Takuya Yoshimoto^2 and Kouji Tahata^3 ^1Department of Biostatistics, Yokohama City University, School of Medicine, Japan ^2Biometrics Department, Chugai Pharmaceutical Co., Ltd., Japan ^3Department of Information Sciences, Faculty of Science and Technology, Tokyo University of Science, Japan E-mail: [email protected] Abstract Square contingency tables are a special class of contingency tables commonly used in various fields to analyze categorical data. Although several analysis methods have been developed to examine marginal homogeneity (MH) in these tables, existing measures are single-summary ones. To date, a visualization approach has yet to be proposed to intuitively depict the results of MH analysis. Current measures used to assess the degree of departure from MH are based on entropy-type divergences such as the Kullback-Leibler divergence and do not satisfy the distance postulates. Hence, the current measures are not conducive to visualization. Herein we present a measure utilizing the Matusita distance and introduce a visualization technique that employs sub-measures of categorical data. Through multiple examples, we demonstrate the meaningfulness of our visualization approach and validate its usefulness for providing insightful interpretations. Key words: Marginal homogeneity, Matusita distance, power-divergence, visualization. 1. Introduction Numerous research areas employ categorical data analysis. Such data are summarized in a contingency table (see e.g., Agresti, 2013; Kateri, 2014). A special case is a square contingency table where the row and column variables have the same ordinal categories. When continuous variables cannot be obtained for the evaluation of the efficacy and safety/toxicity of treatments in clinical studies, ordered categorical scales are used instead. For example, Sugano et al. (2012) conducted a clinical study in which they examined the modified LANZA score (MLS) after 24 weeks' treatment with esomeprazole 20 mg once daily or a placebo. The MLS is a popular evaluation scale with five stages (from 0 to +4) and is used for clinical evaluations of gastroduodenal mucosal lesions. Table 1 shows a square contingency table that summarizes the location shift of the MLS from pre-treatment to post-treatment for each patient. Such research is concerned with whether the treatment effect tends toward improvement or worsening after the intervention relative to before it. Thus, the evaluation concerns marginal homogeneity (MH) rather than independence. Stuart (1955) introduced the MH model to indicate homogeneity with respect to two marginal distributions. We are also interested in the structure of inhomogeneity of the two marginal distributions when the MH model does not hold. This is because, for the data in Table 1, we are more interested in the deviation between the pre-treatment and post-treatment marginal distributions (i.e., the intervention results) than in whether the MH model, which represents equal marginal distributions, holds. Consequently, our strategy is to estimate measures representing the degree of departure from MH. Such measures must quantify the differences in probability distributions, mainly using information divergences such as the Kullback-Leibler divergence or the power-divergence.
To this end, Tomizawa, Miyamoto and Ashihara (2003) proposed a measure using the marginal cumulative probability for square contingency tables with ordered categories. This measure ranges from 0 to 1 and directly represents the degree of departure from MH. However, it cannot distinguish the direction of degree of departure. The two marginal distributions are interpreted as equal (no intervention effect) when the value is 0. When the values are greater than 0, an improvement is indistinguishable from a worsening effect. Yamamoto, Ando and Tomizawa (2011) proposed a measure, which lies between -1 and 1, to distinguish the directionality. This measure cannot represent the degree of departure directly from MH. Even if the value of the measure is 0, the marginal distribution cannot be exactly interpreted as having no intervention effect. To simultaneously analyze the degree and directionality of departure from MH, Ando, Noguchi, Ishii and Tomizawa (2021) proposed a two-dimensional visualized measure that combines the measure proposed by Tomizawa et al. (2003) and the measure proposed by Yamamoto et al. (2011). They also considered visually comparing the degrees of departure from MH in several tables because their measure is independent of the dimensions (i.e., number of categorical values) and sample size. Appendix 1 explains the main points of the above measures. These measures proposed by Tomizawa et al. (2003), Yamamoto et al. (2011) and Ando et al. (2021) are single-summaries. They are expressed using the sub-measure weights at each categorical level. For a given category level, different behaviors cannot be distinguished as a single-summary measure. The artificial data examples in the data analysis section provide specific situations. Hence, a single-summary-measure may overlook different behaviors in a given categorical level. To address this limitation, we apply visualization as a method utilizing sub-measures defined at each category level. This visualization also assumes that satisfying distance postulates can achieve a natural interpretation. To date, a measure for ordered categories does not exist because the Kullback-Leibler divergence or power-divergence used in existing measures do not satisfy the distance postulates. Therefore, we consider a measure using the Matusita distance to capture the discrepancy between two probability distributions while satisfying the distance postulates (see Matusita, 1954, 1955; Read and Cressie, 1988, p.112). Both academia and general society employ methods to visualize quantitative data. Examples include pie charts, histograms, and scatterplots. Although visualizing categorical data has attracted attention recently, different visualization techniques from those for quantitative data are necessary (see, e.g., Blasius and Greenacre, 1998; Friendly and Meyer, 2015; Kateri, 2014). Visualization of categorical data has two main objectives: revealing the characteristics of the data and intuitively understanding analysis results (Friendly and Meyer, 2015). Methods for the former include the “mosaic plot” and “sieve diagram” (see e.g., Friendly, 1995; Hartigan and Kleiner, 1981, 1984; Riedwyl and Schüpbach, 1983, 1994). Methods for the latter include the “fourfold display” for odds ratios and the “observer agreement chart” for Cohen’s κ (see e.g., Bangdiwala 1985, 1987; Fienberg, 1975; Friendly, 1994). 
Although the visualization objectives for categorical data may vary, they share common techniques: (i) separating data by categorical levels and (ii) adjusting the size of figure objects based on the frequency of each cell. Our research aims to realize a visualization for an intuitive understanding of the analysis results for MH. To date, such a visualization has yet to be proposed. Although the “mosaic plot” and “sieve diagram” can be applied to square contingency tables, they are not suitable for examining the structure of MH. These visualizations are designed to observe the data itself and identify features or patterns without making hypotheses before analyzing the data. Therefore, our proposed visualization provides an intuitive understanding of the structure of MH using categorical data visualization techniques (i) and (ii). This paper conducts a comprehensive analysis of the degree and directionality of departure from MH for square contingency tables with ordered categories. Our approach has two components: (i) measures to quantify the degree of departure of MH using information divergence satisfying distance postulates and (ii) a visualization technique designed for categorical data. The rest of this paper is organized as follows. Section 2 defines the proposed measure and visualization. Section 3 derives an approximated confidence interval for the proposed measure. Section 4 provides examples of the utility for the proposed measure and visualization. Section 5 presents the discussion. Finally, Section 6 closes with concluding remarks. 2. Proposed measure and visualization Here, we detail the proposed measure and visualization. Section 2.1 explains the probability structure of the MH model using formulas. Section 2.2 defines the sub-measures and single-summary-measure expressed using weights for the sub-measures at each categorical level along with the properties of the proposed measure. Section 2.3 details the visualization of the proposed measures. 2.1. MH model Consider an r × r square contingency table with the same row and column ordinal classifications. Let X and Y denote the row and column variables, respectively, and let Pr(X = i , Y = j) = p_ij for i = 1, … , r; j = 1, … , r. The MH model can be expressed with various formulas. For example, the MH model is expressed as p_i · = p_· i for  i = 1, … , r, where p_i · = ∑^r_t=1p_it and p_· i = ∑^r_s=1p_si. See e.g., Stuart (1955) and Bishop, Fienberg and Holland (1975, p.294). This indicates that the row marginal distribution is identical to the column marginal distribution. To consider ordered categories, the MH model can be expressed using the marginal cumulative probability as F_1(i) = F_2(i) for  i = 1, … , r-1, where F_1(i) = ∑^i_s=1 p_s · = Pr(X ≤ i) and F_2(i) = ∑^i_t=1 p_· t = Pr(Y ≤ i). The MH model can also be expressed as G_1(i) = G_2(i) for  i = 1, … , r-1, where G_1(i) = ∑^i_s=1∑^r_t=i+1 p_st = Pr(X ≤ i, Y ≥ i+1) and G_2(i) = ∑^r_s=i+1∑^i_t=1 p_st = Pr(X ≥ i+1, Y ≤ i). Furthermore, the MH model can be expressed as G^c_1(i) = G^c_2(i)( = 1/2) for  i = 1, … , r-1, where G^c_1(i) = G_1(i)/G_1(i) + G_2(i), G^c_2(i) = G_2(i)/G_1(i) + G_2(i). The MH model states that the conditional probability of X ≤ i is given if either X or Y ≤ i and the other ≥ i+1 is equal to the conditional probability that Y ≤ i for the same conditions. 2.2. Measure of departure from MH Several measures have been proposed for various formulas of the MH model. 
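Before turning to the proposed measure, the quantities defined in Section 2.1 can be made concrete with a small numerical sketch. The code below is illustrative only and is not part of the paper; the example table of cell probabilities is an arbitrary assumption. It computes the marginals, the cumulative marginals F_k(i), and G_1(i), G_2(i) together with their conditional versions G^c_k(i).

```python
import numpy as np

def marginal_quantities(p):
    """For an r x r table of cell probabilities p[i, j] (rows: X, cols: Y), return
    the marginals, the cumulative marginals F_1(i), F_2(i), and G_1(i), G_2(i),
    G^c_1(i), G^c_2(i) for i = 1, ..., r-1 (stored at 0-based index i-1)."""
    p = np.asarray(p, dtype=float)
    r = p.shape[0]
    p_row = p.sum(axis=1)                 # p_{i.}
    p_col = p.sum(axis=0)                 # p_{.i}
    F1 = np.cumsum(p_row)[:-1]            # F_1(i) = Pr(X <= i)
    F2 = np.cumsum(p_col)[:-1]            # F_2(i) = Pr(Y <= i)
    G1 = np.array([p[:i, i:].sum() for i in range(1, r)])   # Pr(X <= i, Y >= i+1)
    G2 = np.array([p[i:, :i].sum() for i in range(1, r)])   # Pr(X >= i+1, Y <= i)
    denom = G1 + G2
    Gc1 = np.divide(G1, denom, out=np.full_like(G1, np.nan), where=denom > 0)
    Gc2 = np.divide(G2, denom, out=np.full_like(G2, np.nan), where=denom > 0)
    return p_row, p_col, F1, F2, G1, G2, Gc1, Gc2

# Illustrative 4 x 4 table of probabilities (an assumption, not data from the paper).
p = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.03, 0.15, 0.06, 0.02],
              [0.02, 0.04, 0.20, 0.05],
              [0.01, 0.02, 0.04, 0.18]])
p = p / p.sum()
_, _, F1, F2, G1, G2, Gc1, Gc2 = marginal_quantities(p)
print("F1:", F1.round(3), "F2:", F2.round(3))
print("G1:", G1.round(3), "G2:", G2.round(3), "Gc1:", Gc1.round(3))
```

Under the MH model every G^c_1(i) would equal 1/2; values away from 1/2 indicate in which direction the two marginal distributions differ at level i.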
Here, we consider a measure that is independent of the diagonal probabilities because the MH model does not have constraints on the main-diagonal cell probabilities. For instance, Tomizawa et al. (2003) and Yamamoto et al. (2011) proposed measures that do not depend on the diagonal probabilities. First, we consider a sub-measure satisfying the distance postulates. Assuming that G_1(i) + G_2(i)≠ 0, the degree of departure from MH at each categorical level i (i=1, …, r-1) is given as γ_i = [ 2+√(2)/2( υ_1(i)^2 + υ_2(i)^2 ) ]^1/2, where υ_1(i) = √(G^c_1(i)) - √(1/2), υ_2(i) = √(G^c_2(i)) - √(1/2). The sub-measure γ_i has the following characteristics: (i) 0 ≤γ_i ≤ 1 (ii) γ_i = 0 if and only if G^c_1(i) = G^c_2(i) (= 1/2) (iii) γ_i = 1 if and only if G^c_1(i) =1 (then G^c_2(i) = 0) or G^c_1(i) =0 (then G^c_2(i) = 1) The sub-measure γ_i is the Matusita distance between ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2), and satisfies all three distance postulates. When the value of the sub-measure is 0, it means the marginal cumulative probabilities are equivalent until categorical level i. The value of the sub-measure increases as the separation between the marginal cumulative distributions increases. The separation is maximized when the value of the sub-measure is 1. Noting that a distance d is defined on a set W if for any two elements x, y ∈ W, a real number d(x, y) is assigned that satisfies the following postulates: (i) d(x, y) ≥ 0 with equality if and only if x=y; (ii) d(y, x) = d(x, y); (iii) d(x, z) ≤ d(x, y) + d(y, z) for x, y, z ∈ W (the triangle inequality). See also Read and Cressie (1988, p.111). Then the power-divergence I^(λ) (especially, the Kullback-Leibler divergence I^(0)) does not satisfy postulates (ii) and (iii). The Matusita distance, which is the square root of I^(-1/2), satisfies all three postulates. Assuming that { G_1(i) + G_2(i)≠ 0 }, we consider a measure using sub-measure γ_i to represent the degree of departure from MH, which is given as Γ = ∑^r-1_i=1( G^∗_1(i) + G^∗_2(i)) γ_i, where Δ = ∑^R-1_i=1( G_1(i) + G_2(i)), and G^∗_1(i) = G_1(i)/Δ, G^∗_2(i) = G_2(i)/Δ, for i=1, …, r-1. The measure Γ has the following characteristics: (i) 0 ≤Γ≤ 1 (ii) Γ = 0 if and only if the MH model holds (iii) Γ = 1 if and only if the degree of departure from MH is a maximum, in the sense that G^c_1(i)=1 (then G^c_2(i)=0) or G^c_1(i)=0 (then G^c_2(i)=1), for i = 1, …, r-1 Thus, this measure is the weighted sum of the Matusita distance for the two distributions ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2). 2.3. Visualization of the proposed measure To visualize the proposed measure, we used the techniques for visualizing categorical data. First, for the fixed i (i=1, …, r-1), γ_i, which represents the relationship between G^c_1(i) and G^c_2(i), is defined by the following steps: (i) Plot the x-axis is G^c_1(i) and the y-axis is G^c_2(i) point for each ( G^c_1(i), G^c_2(i)) coordinate (ii) Adjust the point size according to the weight ( G^∗_1(i) + G^∗_2(i)) (iii) Display the value of γ_i as a text label at each ( G^c_1(i), G^c_2(i)) point (iv) Color the points red when (G^c_1(i) < G^c_2(i)) and blue when ( G^c_1(i)≥ G^c_2(i)) (v) Draw the dashed line within the diagonal point’s range of movement and color the dashed line using the same rules Therefore, the top-left side is red, while the bottom-right side is blue with respect to the point ( 1/2, 1/2) in the visualization. Table 2 shows a visualization image. 
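To complement the definitions above, the sub-measures γ_i and the single-summary measure Γ can be computed directly from a table of (sample) cell probabilities. The following sketch is illustrative only; the example table is an assumption and is not data from the paper.

```python
import numpy as np

def G_quantities(p):
    """G_1(i), G_2(i) and their conditional versions G^c_k(i), i = 1..r-1."""
    r = p.shape[0]
    G1 = np.array([p[:i, i:].sum() for i in range(1, r)])
    G2 = np.array([p[i:, :i].sum() for i in range(1, r)])
    Gc1, Gc2 = G1 / (G1 + G2), G2 / (G1 + G2)    # assumes G_1(i) + G_2(i) != 0
    return G1, G2, Gc1, Gc2

def gamma_sub_measures(p):
    """Sub-measures gamma_i: rescaled Matusita distance between (Gc1, Gc2) and (1/2, 1/2),
    so that 0 <= gamma_i <= 1."""
    _, _, Gc1, Gc2 = G_quantities(p)
    u1 = np.sqrt(Gc1) - np.sqrt(0.5)
    u2 = np.sqrt(Gc2) - np.sqrt(0.5)
    return np.sqrt((2 + np.sqrt(2)) / 2 * (u1 ** 2 + u2 ** 2))

def Gamma_measure(p):
    """Single-summary measure: weighted sum of gamma_i with weights G*_1(i) + G*_2(i)."""
    G1, G2, _, _ = G_quantities(p)
    weights = (G1 + G2) / (G1 + G2).sum()
    return float((weights * gamma_sub_measures(p)).sum())

# Illustrative table of sample proportions (an assumption, not data from the paper).
p_hat = np.array([[0.12, 0.08, 0.03, 0.01],
                  [0.02, 0.18, 0.09, 0.03],
                  [0.01, 0.03, 0.17, 0.08],
                  [0.01, 0.01, 0.03, 0.10]])
p_hat /= p_hat.sum()
print("gamma_i:", gamma_sub_measures(p_hat).round(3))
print("Gamma  :", round(Gamma_measure(p_hat), 3))
```

The visualization of Section 2.3 then amounts to plotting each point (G^c_1(i), G^c_2(i)) with a size proportional to its weight G*_1(i)+G*_2(i) and labelling it with γ_i, which can be drawn with any standard plotting library.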
Table 3 presents the necessary information to visualize Table 2, including G^c_1(i) and G^c_2(i) used for the coordinates of the point, the weight used for the point size, and the sub-measure γ_i used for the text label. Step 1 visualizes each level i. As an example, Figure 1 depicts how γ_i is visualized at level i=1. Next, we provide additional definitions to integrate each γ_i in step 1 into one figure: (i) Consider the x-axis as i for G^c_1(i) and the y-axis as i for G^c_2(i) (ii) Place the figure of γ_i on the diagonal Figure 2 shows the integrated figure using the example from Table 2 in step 2 according to the definition of the proposed visualization. The visualization of the proposed measure using the categorical data methods has the following benefits. First, the visualization provides information about each i, allowing trends in MH to be identified in a square contingency table. Since the figure visualizes each γ_i, points do not overlap even if their coordinates are close. Thus, points are easily identifiable. It is important to visualize each γ_i separately since each one is assumed to be nearly the same value. Ando et al. (2021) used a Kullback-Leibler divergence-type measure, but the Kullback-Leibler divergence does not satisfy the distance postulates. To naturally interpret the point distances in the figure, the distance postulates must be satisfied. (Section 4.1.1. gives a specific example). Additionally, the proposed visualization can be considered as utilizing sub-measures. 3. Approximate the confidence interval for the measure Let n_ij denote the observed frequency in the ith row and jth column of a table (i =1, …, r; j = 1, …, r). The sample version of Γ (i.e., Γ̂) is given by Γ in which {p_ij} is replaced by {p̂_ij}, where p̂_ij = n_ij/n and n = ∑∑ n_ij. It should be noted that the sample version of G^c_k(i), γ_i and F_k(i), which are Ĝ^c_k(i), γ̂_i and F̂_k(i), respectively, are given in a similar manner (i=1, …, r-1; k=1, 2). Given that {n_ij} arises from a full multinomial sampling, we can estimate the standard error for Γ̂ and construct a large-sample confidence interval for Γ. The delta method can approximate the standard error. √( n)(Γ̂ - Γ) has an asymptotic (as n →∞) normal distribution with mean zero and variance σ^2[ Γ ]. See Appendix 2 for the details of σ^2[ Γ ]. Let σ̂^2[ Γ ] denote σ^2[ Γ ] where {p_ij} is replaced by {p̂_ij}. Then σ̂ [ Γ ]/√( n) is the estimated approximate standard error for Γ̂, and Γ̂± z_p/2σ̂ [ Γ ]/√( n) is an approximate 100(1-p) percent confidence interval for Γ, where z_p/2 is the 100 (1-p/2)th percentile of the standard normal distribution. The asymptotic normal distribution may not be applicable when estimating measures on small sample datasets. In small dataset, the sample proportion of (i, j) cell may fall 0 (i.e., p̂_ij = 0). Thus, we consider Bayesian methods. Although the sample proportion is typically used to estimate the approximate standard error for Γ̂, herein we consider the Bayes estimator derived from the uninformed prior probability. To have a vague prior, the Haldane prior is used for the prior information (see Haldane 1932; Berger 1985, p.89). We set all parameters of the Dirichlet distribution to 0.0001 when estimating the approximate variance of the proposed measure. 4. Data analysis 4.1. Artificial data 4.1.1. 
Role of distance postulates for visualization To illustrate the concept of visualization, we used artificial datasets in two scenarios: one that satisfies the structure of MH and one that has location-shifted marginal distributions. The visualization in Table 4(a) shows that all values of sub-measure γ̂_i are equal to zero, and the value of the proposed measure Γ̂ is zero (i.e., the MH model holds). In terms of information divergences, the two marginal distributions can be interpreted as the same. Therefore, the values of the label, which is the sub-measure using the Matusita distance, are zero, and points are drawn at ( 1/2, 1/2) in the visualization (Figure 3(a)). The visualization in Table 4(b) shows that all values of sub-measure γ̂_i are equal to 0.341 because the assumed structure shows location-shifted marginal distributions. Since we estimated (Ĝ^c_1(i) < Ĝ^c_2(i)), the point on the graph is drawn from ( 1/2, 1/2) to the upper left (Figure 3(b)). Because the label values are sub-measures using the Matusita distance that satisfies distance postulate (ii), it can be interpreted as the distance between ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2). However, the direction is crucial when using the Kullback-Leibler divergence (see Appendix 1). When using the Kullback-Leibler divergence in Table 4(b), the distance from ( 1/2, 1/2) to ( G^c_1(i), G^c_2(i)) and the distance from ( G^c_1(i), G^c_2(i)) to ( 1/2, 1/2) differ (Table 5). Therefore, the label value must be selected carefully because this divergence may hinder an intuitive interpretation. In addition, it can be evaluated appropriately in indirect comparisons between two points for the distance from a reference since the proposed measure satisfies the triangular inequalities. Thus, the visualization must use a divergence that satisfies the distance postulates. In addition, the proposed visualization gives a natural and intuitive interpretation because we can understand the degree of departure from MH for each level i, and the sub-measure calculated by Ĝ^c_1(i) and Ĝ^c_2(i) compares the marginal cumulative distributions ( F̂_1(i)  and F̂_2(i)). This section shows the visualization in monotonic differences of the marginal cumulative distributions, but the next section illustrates the relationship between marginal cumulative distributions and visualizations in several patterns. 4.1.2. Perception of different behaviors between categorical levels Our visualization can interpret the relationships between the marginal cumulative distributions, which is difficult using a single-summary-measure. Here, we treat artificial data where the values of the single-summary-measure are the same, but the visualizations of the sub-measures behave differently. Tables 6(a)–(d) show the artificial data, which are setup so that the value of the measure is 0.341. Figures 4(a)–(d) show the visualizations of Tables 6(a)–(d). Table 6(a) illustrates a scenario where the marginal cumulative distribution is location-shifted constantly. This structure would be expected based on the value of the measure. In a clinical study, assuming such a situation implies a constant treatment effect from pre-treatment to post-treatment. In contrast, Table 6(b) represents a scenario where the marginal cumulative distribution spreads as the categorical level i increases. Moreover, Tables 6(c)–(d) show situations where the marginal cumulative distribution differs at the categorical level i. 
In a clinical study, assuming such a situation suggests that the treatment effect depends on the pre-intervention condition. 4.2. Simulation studies Monte Carlo simulations were performed to theoretically derive the coverage probabilities of the approximate 95% confidence intervals assuming random sampling of an underlying bivariate normal distribution. Here, we considered random variables Z_1 and Z_2 with means E(Z_1) = 0 and E(Z_2) = d, variances Var(Z_1) = Var(Z_2) = 1, and correlation Corr(Z_1, Z_2 ) = 0.2. Assuming a 6 × 6 table is formed using the cutoff points for each variable at -1.2, -0.6, 0, 0.6, 1.2, we evaluated several simulation scenarios where d = 0.00  to 4.00 by 0.25 and n = 36, 180, 360, 3600 (sparseness index=1, 5, 10, 100). The simulation studies were performed based on 100,000 trials per scenario. Figure 5 plots the mean of random variable Z_2 along with the true value of the measure based on a bivariate normal distribution. When d=0, the true value of the measure is observed as 0 because there is no difference in the means whose condition is stronger than the structure of the MH. Although the true value increases monotonically for d=0, …, 1, a large mean difference between random variables is necessary for the true value to reach 1. Figure 6 shows the coverage probability according to the true values. For a small sample size, it is difficult to obtain a nominal coverage probability, whereas the coverage probability is maintained at a 95% confidence interval for a sufficient sample size. 4.3. Example As an example, consider the data in Table 1. In the original work (Sugano et al., 2012), the proportion of improvement or deterioration for the esomeprazole group (drug group) and placebo group were described. Table 7 shows the results of applying the proposed measure Γ to these data to statistically consider the treatment effects for the drug or placebo. The estimate of asymptotic variance using the sample proportion cannot be calculated because Ĝ^c_1(4)=0 in Table 1(b). Hence, a Bayes estimator is used to estimate the asymptotic variance. The 95% confidence intervals do not cross zero, suggesting that both groups have a higher degree of deviation from MH. That is, the marginal distribution after the treatment shifts compared to that before the treatment. For an intuitive understanding, Figure 7 plots the trend, where blue indicates an improving trend and red a deteriorating one. The drug group shows an improving trend (Ĝ^c_1(i)≥Ĝ^c_2(i)), while the placebo group displays a deteriorating trend (Ĝ^c_1(i) < Ĝ^c_2(i)). For the drug group, i=1, 2, 3 show an improvement trend, while i=4 shows a deteriorating trend although the circle is small (i.e., the proportion of observed frequencies comprising Ĝ^c_1(4) and Ĝ^c_2(4) is small relative to the total). These results imply that there might be differences in treatment effects between i levels. 5. Discussion In the proposed measure, sub-measures are used in the visualization to capture features overlooked by a single summary measure. Previous studies have adopted similar approaches, except that the sub-measures are not used for interpretation (Tomizawa et al., 2003; Yamamoto et al., 2011). This study demonstrates that sub-measures allow two kinds of marginal inhomogeneities to be visualized, providing a more detailed interpretation of the single-summary-measure. The proposed visualization is analyzed using Table 1. 
First, because the Matusita distance satisfies the distance postulates, the visualization that draws points on two-dimensional coordinates can give a natural and intuitive interpretation. In particular, the values of existing measures based on the power-divergence (Kullback-Leibler divergence) that do not satisfy distance postulate (ii) would give different values if the distance from the start point to the end point is swapped. That is, the data in Table 1 would create two visualization patterns. In contrast, for the Matusita distance, the same value is obtained even if the distance from the start point to the end point is swapped. Hence, a special annotation is unnecessary for a visual interpretation. Furthermore, the point in Figure 7(a) where i=1, 2, 3 and i=4 show different directions is difficult to discern using the existing measure proposed by Yamamoto et al. (2011) because it is a single-summary-measure. However, the different directions can be considered intuitively through visualization by level i. The proposed visualization does not draw the points on one coordinate because the degree of departure from MH is likely the same for each level in real data analysis (Figure 7). This is because identifying which level i of points is drawn is difficult. Hence, it is important to satisfy the distance postulates and to consider methods for visualizing categorical data of square contingency tables. The visualization program was implemented in the R programming language (R Core Team, 2023). Noting that a graphical layout in package “ggplot2” is defined by “gtable” (and also “grid”). In addition, the arrangement of multiple figure objects can be set by package “gridExtra”. We used “grid” and “gridExtra” packages for visualization purposes. We referenced the function “agreementplot()” by the “vcd” package, which is the categorical data visualization package for the “observer agreement chart”. 6. Conclusion The proposed measure Γ is the weighted sum of the sub-measures that satisfy all three distance postulates. Here, we demonstrate the approximated confidence interval for Γ. The proposed visualization using the Matusita distance provides a natural visual interpretation of MH in a square contingency table. In addition, we show that the visualization can provide useful interpretations using an example. 99 Agresti, A. (2013). Categorical Data Analysis, 3rd edition. Wiley, Hoboken, New Jersey. Ando, S., Noguchi, T., Ishii, A. and Tomizawa, S. (2021). A two-dimensional index for marginal homogeneity in ordinal square contingency tables . SUT Journal of Mathematics 57, 211–224. Bangdiwala, S.I. (1985). A graphical test for observer agreement. Proceeding of the International Statistics Institute 1, 307–308. Bangdiwala, S.I. (1987). Using SAS software graphical procedures for the observer agreement chart. Proceedings of the SAS User’s Group International Conference 12, 1083–1088. Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, 2nd edition. Springer, New York. Bishop, Y.M.M., Fienberg, S.E. and Holland, P.W. (1975). Discrete Multivariate Analysis: Theory and Practice. The MIT Press, Cambridge, Massachusetts. Blasius, J. and Greenacre, M. (1998). Visualization of Categorical Data. Academic Press, San Diego, California. Cressie, N. and Read, T.R.C. (1984). Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society, Series B 46, 440–464. Fienberg, S.E. (1975). Perspective Canada as a social report. Social Indicators Research 2, 153–174. Friendly, M. (1994). 
A fourfold display for 2 by 2 by k tables. Technical Report 217, York University, Psychology Department. Friendly, M. (1995). Conceptual and visual models for categorical data. The American Statistician 49, 153–160. Friendly, M. and Meyer, D. (2015). Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data. CRC Press, Boca Raton, Florida. Haldane, J.B.S. (1932). A Note on Inverse Probability. Mathematical Proceedings of the Cambridge Philosophical Society 28, 55–61. Hartigan, J.A. and Kleiner, B. (1981). Mosaics for contingency tables. Computer Science and Statistics: Proceedings of the 13th Symposium on the Interface, 268–273. Hartigan, J.A. and Kleiner, B. (1984). A mosaic of television ratings. The American Statistician 38, 32–35. Kateri, M. (2014). Contingency Table Analysis: Methods and Implementation Using R. Birkhäuser/Springer, New York. Matusita, K. (1954). On the estimation by the minimum distance method. Annals of the Institute of Statistical Mathematics 5, 59–65. Matusita, K. (1955). Decision rules based on the distance, for problems of fit, two samples, and estimation. Annals of the Institute of Statistical Mathematics 26, 631–640. R Core Team (2023). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/ Read, T.R.C. and Cressie, N. (1988). Goodness-of-Fit Statistics for Discrete Multivariate Data. Springer, New York. Riedwyl, H. and Schüpbach, M. (1983). Siebdiagramme: Graphische Darstellung von Kontingenztafeln. Technical Report 12, Institute for Mathematical Statistics. University of Bern, Bern, Switzerland. Riedwyl, H. and Schüpbach, M. (1994). Parquet Diagram to Plot Contingency Tables. In F. Faulbaum, ed., Softstat ’93: Advances in Statistical Software, 293–99. New York: Gustav Fischer. Sugano, K., Kinoshita, Y., Miwa, H. and Takeuchi, T. (2012). Randomised clinical trial: esomeprazole for the prevention of nonsteroidal anti-in ammatory drug-related peptic ulcers in Japanese patients. Alimentary Pharmacology and Therapeutics 36, 115–125. Stuart, A. (1955). A test for homogeneity of the marginal distributions in a two-way classification. Biometrika 42, 412–416. Tomizawa, S. (1995). Measures of departure from marginal homogeneity for contingency tables with nominal categories. Journal of the Royal Statistical Society, Series D 44, 425–439. Tomizawa, S., Miyamoto, N. and Ashihara, N. (2003). Measure of departure from marginal homogeneity for square contingency tables having ordered categories. Behaviormetrika 30, 173–193. Yamamoto, K., Ando, S. and Tomizawa, S. (2011). A measure of departure from average marginal homogeneity for square contingency tables with ordered categories. Revstat 9, 115–126. Appendix 1 Assuming that { G_1(i) + G_2(i)≠ 0 }, the power-divergence-type measure representing the degree of departure from MH proposed by Tomizawa, Miyamoto and Ashihara (2003) for λ > -1 is given as Φ^(λ) = λ(λ+1)/2^λ-1∑^r-1_i=1( G^∗_1(i) + G^∗_2(i))                × I^(λ)_i ( { G^c_1(i), G^c_2(i)} ; {1/2, 1/2}), where I^(λ)_i (·, ·) = 1/λ(λ+1)[ G^c_1(i){( G^c_1(i)/1/2)^λ - 1 }                       + G^c_2(i){( G^c_2(i)/1/2)^λ - 1 }], and the value at λ=0 is taken to the limit as λ→ 0. Note that I^(λ)_i (·, ·) is the power-divergence between two distributions (see Cressie and Read, 1984; Read and Cressie, 1988, p.15). Namely, I^(0)_i (·, ·) = G^c_1(i)log( G^c_1(i)/1/2) + G^c_2(i)log( G^c_2(i)/1/2). 
This measure has the following characteristics: (i) Φ^(λ) = 0 if and only if the MH model holds (ii) Φ^(λ) = 1 if and only if the degree of departure from MH is a maximum, in the sense that G^c_1(i)=1 (then G^c_2(i)=0) or G^c_1(i)=0 (then G^c_2(i)=1), for i = 1, …, r-1 Second, assuming that { G_1(i) + G_2(i)≠ 0 }, the measure representing two kinds of marginal inhomogeneities proposed by Yamamoto, Ando and Tomizawa (2011) is given as Ψ = 4/π∑^r-1_i=1( G^∗_1(i) + G^∗_2(i)) ( θ_i - π/4), where θ_i = cos^-1( G_1(i)/√( G^2_1(i) + G^2_2(i))). This measure has the following characteristics: (i) Ψ = -1 if and only if there is a structure of maximum upper-marginal inhomogeneity (ii) Ψ = 1 if and only if there is a structure of maximum lower-marginal inhomogeneity (iii) If the MH model holds then Ψ = 0, but the converse does not hold Yamamoto et al. (2011) defined this structure (Ψ = 0) as the average MH model. Third, assuming that { G_1(i) + G_2(i)≠ 0 }, the two-dimensional measure that can simultaneously analyze the degree and directionality of departure from MH proposed by Ando, Noguchi, Ishii and Tomizawa (2021) is given as τ = [ Φ^(0); Ψ ]. This two-dimensional measure has the following characteristics: (i) τ = (0, 0)^t if and only if the MH model holds (ii) τ = (1, -1)^t if and only if there is a structure of maximum upper-marginal inhomogeneity (iii) τ = (1, 1)^t if and only if there is a structure of maximum lower-marginal inhomogeneity Appendix 2 Using the delta method, √( n)(Γ̂ - Γ) has an asymptotic variance σ^2[ Γ ], which is given as σ^2[ Γ ] = ∑^r-1_k=1∑^r_l=k+1( p_kl D^2_kl + p_lk D^2_lk), where D_kl = 1/Δ√(2+√(2)/2)∑^r-1_i=1 I(k ≤ i, l ≥ i+1) A_i - (l-k)/ΔΓ, D_lk = 1/Δ√(2+√(2)/2)∑^r-1_i=1 I(k ≤ i, l ≥ i+1) B_i - (l-k)/ΔΓ, A_i = 1/2√(C_i)( 2C_i + υ_1(i)G^c_2(i)/√(G^c_1(i)) - υ_2(i)√(G^c_2(i))), B_i = 1/2√(C_i)( 2C_i - υ_1(i)√(G^c_1(i)) + υ_2(i)G^c_1(i)/√(G^c_2(i))), C_i = υ_1(i)^2 + υ_2(i)^2, and I(·) is indicator function.
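For comparison with Γ, the existing single-summary measures recalled in Appendix 1 can also be transcribed into code. The sketch below implements the Kullback-Leibler-type measure Φ^(0) and the directional measure Ψ; the normalizing constant 1/log 2 corresponds to reading the prefactor of Φ^(λ) as λ(λ+1)/(2^λ-1) and taking the limit λ→0, which is consistent with the stated property that Φ^(λ)=1 at maximum departure. The example table is an assumption, not data from the paper.

```python
import numpy as np

def G_quantities(p):
    r = p.shape[0]
    G1 = np.array([p[:i, i:].sum() for i in range(1, r)])
    G2 = np.array([p[i:, :i].sum() for i in range(1, r)])
    return G1, G2, G1 / (G1 + G2), G2 / (G1 + G2)    # assumes G_1(i) + G_2(i) != 0

def phi_0(p):
    """KL-type measure of Tomizawa et al. (2003): the lambda -> 0 limit of Phi^(lambda),
    normalized so that Phi^(0) = 1 at maximum departure from MH."""
    G1, G2, Gc1, Gc2 = G_quantities(p)
    w = (G1 + G2) / (G1 + G2).sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        I0 = (np.where(Gc1 > 0, Gc1 * np.log(2 * Gc1), 0.0)
              + np.where(Gc2 > 0, Gc2 * np.log(2 * Gc2), 0.0))
    return float((w * I0).sum() / np.log(2))

def psi(p):
    """Directional measure of Yamamoto et al. (2011), lying in [-1, 1]."""
    G1, G2, _, _ = G_quantities(p)
    w = (G1 + G2) / (G1 + G2).sum()
    theta = np.arccos(G1 / np.sqrt(G1 ** 2 + G2 ** 2))
    return float((4 / np.pi) * (w * (theta - np.pi / 4)).sum())

# Illustrative table (an assumption, not data from the paper).
p_hat = np.array([[0.12, 0.08, 0.03, 0.01],
                  [0.02, 0.18, 0.09, 0.03],
                  [0.01, 0.03, 0.17, 0.08],
                  [0.01, 0.01, 0.03, 0.10]], dtype=float)
p_hat /= p_hat.sum()
print("Phi^(0) =", round(phi_0(p_hat), 3), " Psi =", round(psi(p_hat), 3))
print("tau =", (round(phi_0(p_hat), 3), round(psi(p_hat), 3)))   # Ando et al. (2021)
```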
http://arxiv.org/abs/2307.01664v1
20230704115323
Unified Conversational Models with System-Initiated Transitions between Chit-Chat and Task-Oriented Dialogues
[ "Ye Liu", "Stefan Ultes", "Wolfgang Minker", "Wolfgang Maier" ]
cs.CL
[ "cs.CL" ]
Unified Conversational Models with System-Initiated Transitions between Chit-Chat and Task-Oriented Dialogues 0009-0005-4578-769X Mercedes-Benz AG & Ulm University Sindelfingen & Ulm Germany [email protected] 0000-0003-2667-3126 University of Bamberg Bamberg Germany [email protected] 0000-0003-4531-0662 Ulm University Ulm Germany [email protected] 0000-0001-6396-4956 Mercedes-Benz AG Sindelfingen Germany [email protected] Spoken dialogue systems (SDSs) have been developed separately under two different categories, task-oriented and chit-chat. The former focuses on achieving functional goals and the latter aims at creating engaging social conversations without special goals. Creating a unified conversational model that can engage in both chit-chat and task-oriented dialogue has been a promising research topic in recent years. However, the potential "initiative" that occurs when there is a change between dialogue modes within one dialogue has rarely been explored. In this work, we investigate two kinds of dialogue scenarios: one starts from chit-chat that implicitly involves task-related topics and finally switches to task-oriented requests; the other starts from task-oriented interaction and eventually changes to casual chat after all requested information is provided. We contribute two efficient prompt models which can proactively generate a transition sentence to trigger system-initiated transitions in a unified dialogue model. One is a discrete prompt model trained with two discrete tokens; the other is a continuous prompt model using continuous prompt embeddings automatically generated by a classifier. We furthermore show that the continuous prompt model can also be used to guide proactive transitions between particular domains in a multi-domain task-oriented setting. CCS Concepts: Human-centered computing → Human computer interaction (HCI). § INTRODUCTION The discussion about the initiative during interaction in spoken dialogue systems (SDSs) dates back to the 1990s. Under a more expansive definition, taking the initiative means "taking the conversational lead" <cit.>. For task-oriented dialogues, the initiative tends to represent "driving the task" <cit.>. <cit.> introduces mixed-initiative interaction and shows that initiative is a multi-factor concept, which includes choice of task, choice of speaker and choice of outcome. Hence, the term "initiative" may be interpreted differently in different dialogue scenarios. In more recent years, <cit.> elucidates the challenges of proactiveness in dialogue systems. <cit.> introduces mixed-initiative topic transitions in an open-domain dialogue system. <cit.> elaborates on three different types of proactive transitions initiated by a unified dialogue agent. In this work, we investigate the initiative in a unified conversational model. Unified SDSs that can reply to both task-oriented and chit-chat requests have been proposed recently <cit.>, but potential initiatives that emerge as dialogue modes change in one dialogue have not been explored.
It is desired that a unified model can not only reply to both chit-chat and task-oriented requests, but also proactively initiate the transition when the user implicitly shows the willingness to switch between chit-chat and task-oriented services. Like dialogue examples shown in Figure <ref>, without the bold transition sentences, it is the user who controls the dialogue flow and the dialogue agent is merely providing services mechanically. To enable the proactive capability, the unified SDSs should be developed to recognize the dialogue mode transition and initiate this transition by generating a transition sentence (please see the bold transition sentences in Figure <ref>). To be more specific, the following two dialogue scenarios are studied in this paper and the potential benefits are also explained as follows: * The human-machine interaction starts with chit-chat implicitly involving task-related topics and eventually switches to task-related requests, as the first dialogue of Figure <ref>. The initiative dialogue system consciously anticipates in advance the user need for a specific task-related service, such as “booking a train ticket”, while kindly asking the user “If you want, I could help you book a train ticket.”. This is beneficial for commercial dialogue systems to proactively offer or sell their task-related services <cit.> at the right time. * The human-machine interaction starts with task-oriented requests and eventually switches to casual chat after providing all user needed information, as the second example in Figure <ref>. Users may have the feeling that they are talking to an acquaintance if the initiative dialogue agent naturally switches to chit-chat interaction after getting all needed task-related information. This can highly improve user interaction experience <cit.>, even provide mental support <cit.>. In summary, the main contributions of this paper are as follows: * We utilize the original FusedChat dataset, where each dialogue includes two dialogue modes with chit-chat and task-oriented parts being interdependent (please see dialogue examples in Figure <ref>) and adopt the pre-trained GPT-2 <cit.> to build a unified dialogue model (section <ref>), which can generate both chit-chat and task-oriented responses. This is the basis for the subsequent work. * We artifically augment transition sentences (section <ref>) for 592 FusedChat <cit.> dialogues, which include Appended FusedChat and Prepended FusedChat. These augmented transition sentences are further applied for the discrete and continuous prompt learning. * We leverage discrete and continuous prompt learning (section <ref>) to efficiently extend the proactiveness capability to the unified model for generating a transition sentence at the transition turn. The remainder of this paper is structured as follows: Section <ref> shows the related works of our research. Section <ref> introduces the human annotation task on transition sentences collection. Section <ref> briefly describes the unified model that can reply to both chit-chat and task-oriented requests. Section <ref> mainly introduces the discrete and continuous prompt learning to activate the initiative transitions of the unified model. Section <ref> elaborates on the performance of our proposed models via automatic metric scores and use case study. Section <ref> concludes this work and outlines future research. § RELATED WORK There are several works on using chit-chat utterances to augment task-oriented dialogue datasets. 
For instance, <cit.> artificially augmented a task-oriented dataset with randomly sampled open-domain utterances and demonstrated that a model trained on the augmented data can improve the out-of-domain recovery performance of the task-oriented system. <cit.> introduced a novel data augmentation approach to automatically create MR-to-Text data from open-domain texts. <cit.> proposed a method for adding contextually relevant chit-chat to enhance the engagingness and social character of task-oriented dialogues. However, these models only concern the performance of the underlying task-oriented dialogue model. Several previous works combined chit-chat and task-oriented datasets to train a unified dialogue model. For instance, <cit.> introduced the dodecaDialogue task to assemble important aspects of an engaging conversational agent into a single collection by leveraging 12 tasks. The Adapter-Bot presented in <cit.> utilized multiple adapter layers with the pre-trained DialoGPT model to activate new response skills and styles. <cit.> proposed a dialogue model for training chit-chat and task-oriented dialogues in a unified data schema, in which both include belief states, representations of dataset results, and system acts. However, these models simply fuse chit-chat dialogue and task-oriented dialogue into one model and do not consider the dependency between the different types of dialogues in the multi-turn setting. In contrast, all dialogues in FusedChat, released in <cit.>, include both chit-chat and task-oriented turns, which are treated as parallel dialogue modes of equal importance. While the contextual dependency between these two dialogue modes is thus taken into account, the potential system initiative that occurs during the switchover has not been discussed. Our work, on the other hand, leverages FusedChat to explore proactive transitions between these two dialogue modes guided by the dialogue system rather than the user. <cit.> recently proposed SalesBot and investigated conversations starting from open-domain social chatting and then gradually transitioning to task-oriented purposes. However, our work not only explores the transition from chit-chat to task-oriented, but also from task-oriented to chit-chat. <cit.> recently also discussed three kinds of initiative transitions in a unified dialogue system. Recent achievements in prompt learning <cit.> enable researchers to efficiently adapt a given task to pre-trained models rather than modifying the structure of the models <cit.>, which also inspired our work. Prompt learning is generally categorized into discrete prompt learning, where the prompt is a natural-language string, as used in GPT-3 <cit.>, and continuous/soft prompt learning, where the prompt is encoded in a continuous embedding space. Our work utilizes both discrete and continuous prompt learning to efficiently activate the system-initiated transitions in the unified conversational model.
Hence, every dialogue in FusedChat includes two dialogue modes with chit-chat and task-oriented parts being interdependent. As the dialogue examples in Figure <ref>, most FusedChat dialogues do not take the dialogue mode transitions initiated by system into account, so we manually augment transition sentences at transition turns (highlighted in bold in Figure <ref>) for 592 FusedChat dialogues and then utilize these augmented dialogues to train the initiative models (section <ref>). In the human augmentation task, we employ two Master students with computational linguistics background as annotators. To validate and improve the annotation schemes, we have defined the following annotation guidelines: * When creating a transition sentence, both chit-chat and task-oriented contextual segments have to be considered. Therefore, augmented transition sentences must be semantically and contextually reasonable. * In Appended FusedChat dialogues (from task-oriented to chit-chat), sentences with patterns like “Do you need anything else?”, “What else can I do for you?” and “Is there anything else I can do for you?” are commonly occurring in the last system turn of task-oriented interaction. To better reflect system-initiated transitions to the chit-chat interaction, these generic responses have to be removed firstly and an informative transition sentence is rewritten. The chit-chat in Appended FusedChat dialogues usually starts with a declarative sentence containing user’s personal thoughts. In this case, using an interrogative as transition sentence can better induce to the chit-chat interaction (see the second dialogue in Figure <ref>). * In Prepended FusedChat dialogues (from chit-chat to task-oriented), since the prepended chit-chat already includes one slot that exists in the first user utterance of the succeeding task-oriented interaction, the annotated transition sentences must be more task-oriented rather than open-domain sentences for smooth switching to task-oriented interaction (see the transition sentence of the fist dialogue in Figure <ref>). At the same time, the annotators are instructed to avoid generic transition sentences which offer advice or provide further help without specific intention, such as “do you need some recommendations?”. To increase reliability and establish agreement among annotators, two annotators independently augmented 50 common samples in the first step and conclude annotation guidelines to follow in the follow-up annotation section. We finally annotate a total of 592 dialogues, which include both Prepended and Appended FusedChat examples, cover different task domains and span over train/test/valid dataset. Some Appended dialogues need to be relabelled due to ambiguous dialogue mode labels. Hence, we re-label these dialogue modes and write a transition sentence at the transition turn for those dialogues. That is why the number of annotated Appended FusedChat is larger than Prepended in Table <ref>, which shows statistics of the human augmented dialogues. § UNIFIED GENERATION MODEL We build a unified generation model in the first step and the initiative systems are adapted from this unified model. We tackle the unified generation problem through fine-tuning conditional GPT-2 <cit.>. 
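The formal objective and the implementation details of this fine-tuning follow below; as a rough preview, a self-contained sketch of such a setup with the HuggingFace transformers library might look as follows. This is not the authors' released code: the [USER]/[SYSTEM] tokens and the top-k/top-p decoding values anticipate the description in the next paragraphs, while the serialization of the dialogue actions, the example prompt and the omitted training loop are assumptions of this sketch.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Speaker tokens used to serialize dialogues; a padding token is added for batching.
tokenizer.add_special_tokens({"additional_special_tokens": ["[USER]", "[SYSTEM]"],
                              "pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))

def serialize_turn(user_utterance, dialogue_acts, system_response):
    """Serialize one task-oriented training example; for chit-chat examples the
    dialogue_acts string would simply be empty.  The exact serialization of the
    dialogue actions d is an assumption of this sketch."""
    return f"[USER] {user_utterance} {dialogue_acts} [SYSTEM] {system_response}"

example = serialize_turn("i need a train to cambridge on friday .",
                         "Train-Inform(dest=cambridge, day=friday)",   # illustrative act
                         "there are several trains , what time would you like to leave ?")
batch = tokenizer(example, return_tensors="pt")
# ... fine-tune with the usual causal language-modelling loss on such sequences ...

# Decoding with mixed top-k / top-p sampling (values as reported for task-oriented turns).
prompt = tokenizer("[USER] i need a train to cambridge on friday . [SYSTEM]",
                   return_tensors="pt")
with torch.no_grad():
    out = model.generate(**prompt, do_sample=True, top_k=10, top_p=0.5,
                         max_new_tokens=40, pad_token_id=tokenizer.pad_token_id)
print(tokenizer.decode(out[0]))
```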
Given the FusedChat dataset 𝒟 = {(u_n, d_n, r_n)_n=1^N, (u_m, r_m)_m=1^M} with N task-oriented samples and M chit-chat samples, the goal is to build a unified generation model parameterized by θ to be able to respond to both chit-chat and task-oriented requests, as shown in the Equation <ref>, p_θ(r) = ∏_t=1^T p_θ(r_t|r_<t, u, d) if task-oriented ∏_t=1^T p_θ(r_t|r_<t, u) if chit-chat where r_<t indicates all tokens before t. The u represents the dialogue context; d means the dialogue actions only exist in the task-oriented data and r is the system response which includes (r_1, ... r_t, ...) tokens with length T. The θ is optimized via maximizing the loglikelihood (MLE) of the conditional probabilities in Equation <ref> over the entire task-oriented and chit-chat dataset: ℒ_θ(𝒟) = ∑_n=1^N∑_t=1^T_nlogp_θ(r_t,n|r_<t,n, u_n, d_n) + ∑_m=1^M∑_t=1^T_mlogp_θ(r_t,m|r_<t,m, u_m) We utilize the FusedChat for the training of unified generation model. The train/validation/test split of FusedChat dataset is aligned with MultiWOZ dataset, hence we also use the same split in this paper. During the GPT-2 fine-tuning (please see the unified generation model in Figure <ref>), we add [USER] and [SYSTEM] to the GPT-2 tokenizer to distinguish user utterances from system responses. Only one preceding user utterance is used as the dialogue context for response generation. The behind reasons are, firstly, the context size analysis in <cit.> demonstrated that the immediately preceding user utterance contains most useful information for contextual response generation; secondly, using a smaller context is more memory-efficient for training. During training, the learning rate is 5e-5, batch size is 16. The best model is saved at epoch 5 with early stopping. We mix top-K sampling and top-p (nucleus) sampling <cit.> for decoding. We apply top-K of 5 and top-p of 0.9 for chit-chat response generation and top-K of 10 and top-p of 0.5 for task-oriented response generation respectively[In our experiment, we found that decoding with lower top-p can generate task-oriented responses that have better automatic metric scores.]. The discrete and continuous prompt model (section <ref>) follow with the same decoding strategy as the unified model. § SYSTEM-INITIATED UNIFIED MODEL We further extend the initiative capability of the unified generation model that is able to distinguish between chit-chat and task-oriented responses, while proactively initiating transitions between these two dialogue modes through generating transition sentences. Given the efficient performance of prompt learning <cit.>, we utilize only TWO tokens as the prompt to enable the proactive feature. The first prompt token indicates that the dialogue system should generate a chit-chat or task-oriented response, the second prompt token indicates whether the system should generate a transition sentence to proactively initiate the dialogue mode transition. We propose two system-initiated generation models. One is utilizing two discrete tokens as the prompt, the other one is using two continuous embeddings as the prompt. The former discrete prompt model and the latter continuous prompt model are introduced in detail in this section. §.§ Discrete Prompt Model Adapted from the unified generation model, we first add the following special tokens to the GPT-2 tokenizer for discrete prompt learning. 1) The [CHIT-CHAT] token implies the system should generate a chit-chat response. 2) The [TASK-ORIENTED] implies the system should generate a task-oriented response. 
3) The [TRANSITION-TURN] token implies the dialogue system to generate a transition sentence for proactively initiating the dialogue mode transition. 4) The [NORMAL-TURN] token implies the system to stay in the original dialogue mode and generate the same type of dialogue as preceding context without transition sentences. 5) The [TRANSITION][In addition to, [TRANSITION] is also used to evaluate the performance of initiative models for generating transition sentences in section <ref>.] token is a special token inserted into the response at the transition turn to separate normal response and transition sentence. During discrete prompt learning, we prepend two special tokens[The number of discrete prompt tokens, TWO, is aligned with the continuous prompt model, where TWO outputs of RoBERTa classifiers are converted to the continuous prompt through LSTM bridge layer (section <ref>).] as discrete prompt to the input to indicate what kind of response the dialogue system should generate, a chit-chat or task-oriented response, with or without a transition sentence to initiate the dialogue mode transitions (see the discrete prompt model in Figure <ref>). The four prompt combinations representing different types of response generation are shown in Table <ref>. The CCTO is an acronym for Chit-Chat and Task-Oriented; while TTNT stands for Transition-Turn and Normal-Turn in this paper. As the prompt tokens in Table <ref> are newly added in this step, we need to activate them through continually training the unified model with human augmented dialogues (section <ref>). For the discrete prompt learning, only dialogue generations at transition turns as training dataset, which includes a normal response without a transition sentence activated by [NORMAL-TURN] at the second prompt, and a normal response with transition sentence activated by [TRANSITION-TURN]. The Figure <ref> visualizes input examples during discrete prompt training. For the system-initiated unified model, we try to take more dialogue context into account for generating a contextual transition sentence and finally maximal 3 dialogue turns are used for memory-efficient training. Adapted from the unified GPT-2 model (section <ref>), batch size for training of the discrete prompt model is 16, learning rate is 5e-5, and the best discrete prompt model is saved at epoch 4 with early stopping. §.§ Continuous Prompt Model In discrete prompt learning, we have to artificially add two prompt tokens to each dialogue input to trigger different generation modes. To realize a fully automated process, we propose a continuous prompt model where the RoBERTa <cit.> classifiers predict generation modes and the outputs are then automatically converted to two continuous prompts by a long-short term memory network (LSTM) <cit.> bridge layer (see the continuous prompt model in Figure <ref>). Therefore, the continuous prompt model learns proactive transitions directly from dialogue context and the generated latent high-dimensional continuous prompts can also help to discover other transition possibilities besides dialogue mode transitions. §.§.§ RoBERTa Classifiers To achieve full automation and to learn initiative transitions directly from dialogue context, we utilize the pre-trained RoBERTa[We also tried the pre-trained BERT for this classification task. The experiment results show similar performance between BERT and RoBERTa. However, the works in <cit.> and <cit.> demonstrate that the combination of pre-trained RoBERTa and GPT-2 works better. 
Hence, we finally opted for the pre-trained RoBERTa.] to predict generation modes given dialogue history. The outputs of the trained RoBERTa classifiers are converted into two continuous prompts to replace the word embeddings of the two discrete prompt tokens in the discrete prompt model. The same FusedChat dataset as section <ref> is used for fine-tuning RoBERTa classifiers. We define the training dataset 𝒟^' = {(h_l, y_l^ccto, y_l^ttnt)_l=1^L} with L data samples here, where h represents dialogue history, y^ccto and y^ttnt are the ground-truth label of CCTO and TTNT classifiers respectively. Given dialogue context h, the pre-trained RoBERTa <cit.> returns a sequence of contextualized vectors: v_[CLS], v_h_1, ... = RoBERTa(h) Then two classifier layers, CCTO classifier in Equation <ref> and TTNT classifier in Equation <ref>, are separately added on the pooled output of [CLS] token, v_[CLS]. p^ccto = 𝐖_0^cctov_[CLS] + 𝐛_0^ccto ŷ^ccto = 𝐖_1^cctoDropout(p^ccto) + 𝐛_1^cctos p^ttnt = 𝐖_0^ttntv_[CLS] + 𝐛_0^ttnt ŷ^ttnt = 𝐖_1^ttntDropout(p^ttnt) + 𝐛_1^ttnt The binary CCTO classifier is responsible for predicting the dialogue mode, i.e., that the dialogue system should generate a chit-chat or task-oriented response (same meaning as the fist prompt token in discrete prompt model, see Table <ref>); the binary TTNT classifier is responsible for predicting whether the dialogue model should generate a transition sentence to trigger the system-initiated transition, i.e., whether the generation is at a transition turn or normal turn (same meaning as the second prompt token in discrete prompt model). In Equation <ref> and <ref>, the p^ccto and p^ttnt are the pooled output of classifiers and will be fed into LSTM bridge layer (section <ref>) for generating two continuous prompts[We choose the pooled outputs rather than binary classifier outputs to feed into LSTM bridge layer because the pooled outputs have the same embedding size as GPT-2 embedding layer, 768.]. The dropout layer in classifier layers has the same dropout rate as RoBERTa model 0.1. The ŷ^ccto and ŷ^ttnt are the output of binary CCTO and TTNT classifier respectively. The 𝐖 and 𝐛 are trainable parameters and updated during the training of RoBERTa classifiers. All parameters of RoBERTa classifiers θ^' are jointly trained via minimizing cross-entropy losses of both CCTO and TTNT, as the Equation <ref> shown: ℒ_θ^'(𝒟^') = - ∑_l=1^L∑_c=1^2y_lc^cctologŷ_lc^ccto - ∑_l=1^L∑_c=1^2y_lc^ttntlogŷ_lc^ttnt c is in [1, 2], because CCTO and TTNT classifier are binary classifier. The RoBERTa is trained independently from the unified GPT-2 generation model. The input for RoBERTa is the complete dialogue history with maximal 256 tokens limitation. The [CLS] token is inserted to the first position and [SEP] token is used to separate user utterances and system responses in the input. The RoBERTa is fine-tuned using AdamW optimizer <cit.> with a learning rate of 5e-5. The batch size is 60 and the best model is saved at epoch 4 with early stopping. Table <ref> shows the performance of fine-tuned RoBERTa for the binary CCTO and TTNT classifier. §.§.§ LSTM Bridge Layers After RoBERTa fine-tuning, we further project the high-dimensional outputs of RoBERTa classifiers to the GPT-2 decoder space. 
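Before continuing with the bridge layers, the two-headed classifier just described can be sketched compactly in PyTorch. This is an illustrative reconstruction rather than the authors' code: the pooling layers correspond to p^ccto and p^ttnt above, the classification heads to ŷ^ccto and ŷ^ttnt, and the example input string and dummy labels are assumptions. Whether the extra tanh pooler of RoBERTa is used is not specified in the text; this sketch takes the raw [CLS] state.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class CctoTtntClassifier(nn.Module):
    def __init__(self, hidden=768, dropout=0.1):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.ccto_pool = nn.Linear(hidden, hidden)   # p^ccto = W0 v_[CLS] + b0
        self.ttnt_pool = nn.Linear(hidden, hidden)   # p^ttnt
        self.drop = nn.Dropout(dropout)
        self.ccto_head = nn.Linear(hidden, 2)        # chit-chat vs task-oriented
        self.ttnt_head = nn.Linear(hidden, 2)        # transition turn vs normal turn

    def forward(self, input_ids, attention_mask):
        v_cls = self.roberta(input_ids,
                             attention_mask=attention_mask).last_hidden_state[:, 0]
        p_ccto = self.ccto_pool(v_cls)
        p_ttnt = self.ttnt_pool(v_cls)
        y_ccto = self.ccto_head(self.drop(p_ccto))
        y_ttnt = self.ttnt_head(self.drop(p_ttnt))
        return p_ccto, p_ttnt, y_ccto, y_ttnt        # pooled outputs later feed the bridge

tok = RobertaTokenizerFast.from_pretrained("roberta-base")
clf = CctoTtntClassifier()
batch = tok("i have been thinking about a trip to cambridge . do you like travelling ?",
            return_tensors="pt", truncation=True, max_length=256)
p_ccto, p_ttnt, y_ccto, y_ttnt = clf(**batch)
# Joint cross-entropy objective over both heads (dummy labels for illustration).
loss = (nn.CrossEntropyLoss()(y_ccto, torch.tensor([1]))
        + nn.CrossEntropyLoss()(y_ttnt, torch.tensor([0])))
```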
The two pooled outputs of the CCTO and TTNT classifiers are first converted to two continuous prompt embeddings by the LSTM bridge layers and then used to replace the word embeddings of the two discrete prompt tokens of the discrete prompt model to activate different generation modes. In addition, the latent high-dimensional continuous prompts can learn other potential transition possibilities beyond the four types of generation modes (see Table <ref>) in the discrete prompt model. Inspired by <cit.>, we choose a unidirectional LSTM[We also tried a bidirectional LSTM as the bridge layer in this step. However, with the bidirectional LSTM, the continuous prompt model performs worse. A possible reason is that the CCTO prompt and the TTNT prompt are independent, and these two discrete prompt tokens are also processed unidirectionally in the discrete prompt model given the auto-regressive nature of GPT-2.] with a ReLU-activated two-layer multilayer perceptron (MLP) to convert the classifier outputs to the continuous prompts. As shown in Equation <ref>, the pooled outputs of the CCTO classifier and the TTNT classifier are fed into the LSTM bridge layers to generate the CCTO and TTNT continuous prompts. The cp^ccto and cp^ttnt continuous prompts are used to replace the word embeddings of the CCTO and TTNT prompt tokens from discrete prompt learning, respectively, and are then followed by the word embeddings of the other input tokens to guide the response generation. Because the hidden size of RoBERTa and the embedding size of GPT-2 are both 768, the input size and output size of the LSTM layer are also set to 768, with a dropout rate of 0.1. [cp^ccto, cp^ttnt] = MLP(LSTM([p^ccto, p^ttnt])) §.§.§ Continuous Prompt Model In the continuous prompt model, the fine-tuned RoBERTa classifiers (section <ref>) and the discrete prompt GPT-2 (section <ref>) are connected through the LSTM bridge layers (section <ref>). The architecture of the continuous prompt model is shown in Figure <ref>. The RoBERTa classifiers automatically predict the generation modes, then the LSTM layers project the classifier outputs to the continuous prompts and feed them into the word embedding layer of the discrete prompt GPT-2. Because the RoBERTa classifiers and the discrete prompt GPT-2 have already been trained separately, we freeze both of them and only train the LSTM bridge layers in this step. In order to better train the parameters θ^” of the LSTM bridge layers, the first and second prompt predictions are also included besides response generation during training. As shown in Equation <ref>, the ccto prediction can be the [CHIT-CHAT] or [TASK-ORIENTED] token, and the ttnt prediction can be the [TRANSITION-TURN] or [NORMAL-TURN] token. In this step, we train the LSTM bridge layers with all turns of the human-augmented dialogues. The input of RoBERTa is a maximum of 256 dialogue history tokens, which is aligned with the fine-tuned RoBERTa classifiers. The input of GPT-2 is the past 3 turns of dialogue context, which is the same as in the training of the discrete prompt model. The batch size is 16, the learning rate is 5e-5, and the best model is saved at epoch 10 with early stopping. p_θ^”(ccto, ttnt, r) = ∏_t=1^T p_θ^”(r_t|r_<t, cp_ccto, cp_ttnt, u, d) if task-oriented ∏_t=1^T p_θ^”(r_t|r_<t, cp_ccto, cp_ttnt, u) if chit-chat § EXPERIMENTAL RESULTS This section elaborates on the performance of our models from different perspectives, including an automatic metrics comparison and use case studies.
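Before turning to those results, the LSTM bridge described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, namely a single-layer unidirectional LSTM followed by a dropout and a two-layer ReLU MLP with 768-dimensional inputs and outputs; the class and variable names are ours, not the authors'.

import torch
import torch.nn as nn

class LSTMBridge(nn.Module):
    # Maps the pooled classifier outputs (p^ccto, p^ttnt) to continuous prompts.
    def __init__(self, dim=768, dropout=0.1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)
        self.mlp = nn.Sequential(nn.Dropout(dropout), nn.Linear(dim, dim),
                                 nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, p_ccto, p_ttnt):
        seq = torch.stack([p_ccto, p_ttnt], dim=1)   # (batch, 2, 768), CCTO prompt before TTNT prompt
        hidden, _ = self.lstm(seq)
        cp = self.mlp(hidden)                        # continuous prompts cp^ccto, cp^ttnt
        return cp[:, 0], cp[:, 1]

The two returned continuous prompts then replace the word embeddings of the two discrete prompt tokens before being concatenated with the embedded dialogue context and passed to the frozen discrete prompt GPT-2; only the bridge parameters are updated in this step.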
§.§ Automatic Metrics Comparison This section discusses the comparison of automatic metrics between the different generation models shown in Table <ref>. To evaluate generated chit-chat responses, Distinct-1 and Distinct-2 <cit.> are used to measure the proportion of distinct unigrams and bigrams in all generated results as an indicator of diversity. To evaluate generated task-oriented responses, the N-gram matching metrics BLEU-4 <cit.> and Meteor <cit.> are used to evaluate the overall generation quality. In addition, the machine-learned automatic metric BERTScore <cit.> is also utilized to evaluate the task-oriented responses. Compared with the unified model, our proposed system-initiated generation models, namely the discrete prompt model and the continuous prompt model, both suffer a small but acceptable loss in automatic metrics for normal chit-chat and task-oriented response generation. This is expected: the unified model can only generate normal chit-chat and task-oriented responses, whereas the discrete and continuous prompt models activate a new capability, transition sentence generation, along with updated parameters. Therefore, how to extend the initiative capability in a unified conversational model while maintaining its original performance will be our future research direction. For transition sentence generation, we propose an automatic metric, transition accuracy, which detects whether the responses generated at transition turns include the [TRANSITION] special token, indicating that the proactive transitions in the initiative dialogue models are activated. Adapted from the unified generation model, the discrete prompt model can generate a normal task-oriented or chit-chat response along with a transition sentence at the transition turn with 98.98% transition accuracy. To further verify the effectiveness of the prompt tokens in this work, we also trained a discrete prompt model without the two prompt tokens. This leads to an extremely low transition accuracy of only 3.05%. This is good evidence that our proposed prompt learning method works and helps the model to be sensitive to dialogue mode transitions. The proposed continuous prompt model has a lower transition accuracy compared to the discrete prompt model, even though it can predict the generation modes automatically. A possible reason is that the dataset is seriously imbalanced between normal turns and transition turns, i.e., there is only one transition turn in each multi-turn FusedChat dialogue. However, the latent high-dimensional continuous prompts show us another transition possibility between task-oriented domains, such as a proactive transition from the restaurant domain to the taxi domain, as in the third dialogue example in Table <ref>. This also inspires our future work on system-initiated transitions in a task-oriented multi-domain setting. §.§ Use Cases Study In addition to the automatic metric evaluation, we also illustrate in this section several dialogue examples with transition sentences generated by our proposed models. The first and second dialogue cases in Table <ref> show that our proposed discrete prompt and continuous prompt models can both generate a reasonable transition sentence at the transition turn to realize system-initiated transitions between chit-chat and task-oriented dialogue.
In dialogue mode transitions from chit-chat to task-oriented, the prepended chit-chat generally includes a slot (e.g., “Indian food” in the first case in Table <ref>) that leads into the succeeding task-oriented dialogue, and this information can potentially guide transition sentence generation. However, it is more difficult for the system to realize a proactive transition from task-oriented to chit-chat, where there is no explicit transition topic for the succeeding chit-chat dialogue. Furthermore, the use case study also shows that the generation of transition sentences toward chit-chat interaction is highly dependent on the human-augmented dataset. The third dialogue case in Table <ref> shows another transition possibility between task domains (a transition from the restaurant domain to the train or taxi domain) in the continuous prompt model. Because most task-oriented dialogues in MultiWOZ are multi-domain, the continuous prompt model learns this possibility, domain transitions, from the latent high-dimensional continuous prompts. This is also an additional advantage of continuous prompt learning compared with discrete prompt learning and points us to a future direction for this research. § CONCLUSION AND OUTLOOK In this work, we explore system-initiated transitions between dialogue modes in dialogue systems. Adapted from pre-trained GPT-2, we first build a unified model that can reply to both chit-chat and task-oriented requests. Afterwards, we extend the proactive transitions of the unified model through efficient prompt learning. By prepending only two special discrete tokens, we build a discrete prompt model adapted from the unified model. Furthermore, we utilize pre-trained RoBERTa to predict dialogue modes directly from dialogue history and generate continuous prompts through LSTM bridge layers for the continuous prompt model. Both initiative models show promising performance in proactive switching between chit-chat and task-oriented dialogues. We did not compute inter-rater reliability scores for the human augmentation task (section <ref>), nor did we perform a human evaluation (section <ref>) for performance comparison. These limitations will be addressed in future work. The study of proactive transitions in a unified conversational model is a new problem. The promising performance of this work motivates us to continue this research. The proposed discrete prompt model can better control the transitions initiated by the system, while the continuous prompt model can learn more potential system-initiated possibilities from a latent high-dimensional space. However, both suffer from the problem of insufficient training data, especially the continuous prompt model. Based on the current experimental results, we are also interested in exploring the proactive transitions from chit-chat to task-oriented and from task-oriented to chit-chat in a unified dialogue model separately, because transition sentence generation in these two scenarios is quite different. In FusedChat, only one turn in each multi-turn dialogue is labelled as the transition turn. In real dialogue scenarios, it might be much more complicated. Therefore, handling multiple rounds of proactive transitions in one dialogue is a more challenging research topic.
http://arxiv.org/abs/2307.03758v1
20230707021446
Federated Learning over a Wireless Network: Distributed User Selection through Random Access
[ "Chen Sun", "Shiyao Ma", "Ce Zheng", "Songtao Wu", "Tao Cui", "Lingjuan Lyu" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.NI" ]
Federated Learning over a Wireless Network: Distributed User Selection through Random Access Chen Sun, Senior Member, IEEE, Shiyao Ma, Ce Zheng, Songtao Wu, Tao Cui, Lingjuan Lyu Abstract User selection has become crucial for decreasing the communication costs of federated learning (FL) over wireless networks. However, centralized user selection causes additional system complexity. This study proposes a network intrinsic approach of distributed user selection that leverages the radio resource competition mechanism in random access. Taking the carrier sensing multiple access (CSMA) mechanism as an example of random access, we manipulate the contention window (CW) size to prioritize certain users for obtaining radio resources in each round of training. Training data bias is used as a target scenario for FL with user selection. Prioritization is based on the distance between the newly trained local model and the global model of the previous round. To avoid "excessive contribution" by certain users, a counting mechanism is used to ensure fairness. Simulations with various datasets demonstrate that this method can rapidly achieve convergence similar to that of the centralized user selection approach. random access, carrier sensing multiple access (CSMA), federated learning (FL), user selection, data bias, fairness § INTRODUCTION Artificial intelligence (AI) has gained widespread popularity as a technology that is used to enhance various aspects of our daily lives and augment human capabilities. For instance, in vehicles, AI engines can identify objects such as cars or pedestrians with front-facing cameras, even in low-light conditions where the human eye may fail. However, training an AI model for such tasks requires the collection of extensive image data from diverse locations. Collaborative training of an AI model on a cloud server by multiple users can facilitate training with a wider range of scenarios, but this requires uploading significant amounts of image data, which raises privacy concerns <cit.>. To address the concerns related to user privacy and mitigate the heavy traffic load associated with uploading training images, federated learning (FL) through wireless networks has emerged and is a widely used approach for autonomous driving scenarios <cit.>.
In this system, each user trains a model locally and transmits only the model parameters, rather than the images, to the cloud via a wireless network. An updated model that incorporates the local updates from all the users is then generated and sent back through a downlink channel. This process iterates until the model's performance converges. However, the application of FL over wireless networks faces the challenge of limited wireless resources for model uploading. One solution to this issue is to choose a subset of users in each round of the FL process. Consequently, user selection has become a crucial area of research in FL. User selection is normally based on certain rules to improve the performance of FL in the following ways. First, training data are critical to the FL process. When cars that are located close to each other use their front cameras to gather training data, the resulting data may have correlations due to the overlapping camera views. Using these data can lead to a decrease in the training performance. To address this issue, it was recommended in <cit.> that users who are sufficiently distant from each other be selected in order to reduce correlation in the data. Moreover, the assumption that training data collected by all users are independent and identically distributed (IID) may not always be accurate. This is because the training data come from a diverse group of users, each with unique preferences, such as preferences for driving on highways or city roads. Consequently, these preferences may result in biases within each user's data. In their work <cit.>, Cho et al. put forth a user selection strategy where the server chooses users with the highest loss out of all users. The second point pertains to the uniqueness of each user's device involved in the training process, including its computational power and the algorithm employed in local training. Users with varying computational capabilities complete their local training at different times during the FL process, and waiting for the slowest user to finish their training is not an efficient approach. To mitigate this issue, researchers proposed a solution in <cit.>; in this approach, users with similar completion times are selected in each round of training. Third, the success of FL relies heavily on the accessibility and reliability of radio resources, particularly when transferring models from users to the server. As a result, FL researchers are interested in user selection schemes that consider user radio resources. In <cit.>, the authors investigate the convergence of FL with existing wireless network scheduling policies for users. In <cit.>, users with high communication capacity and computing capacity were encouraged to participate in the FL process. With the digital twin modeling of these physical parameters, incentive schemes for the FL process were designed. Motivated by the findings mentioned above, the telecommunication industry has undertaken the development of cellular network standards to enable the widespread adoption of FL over wireless networks. These standards facilitate the selection of users based on predetermined rules by providing an application programming interface (API) to core network functions for the FL server, thereby creating an open platform for FL <cit.>. As an illustration, the 5G core network can provide an FL application server with users' locations, enabling the server to handpick certain users.
Additionally, the FL server can stipulate specific user selection criteria at network functions, and subsequently, the 5G core network functions, e.g., the network exposure function (NEF), can present a roster of users filtered by the selection criteria. These core network functions were originally designed to allow cellular network operators to control and manage the wireless connections of users, including access control, roaming, quality of service (QoS) management, etc. Improving these functions to support FL applications over wireless networks is limited by the inherent design philosophy of legacy wireless communication protocols, which prioritize the efficiency of information transportation rather than computational objectives. Selecting users for FL involves collecting their training characteristics, which can result in increased communication and computing overhead due to additional parameter uploads and radio resource control. Additionally, user selection and resource allocation of wireless resources typically rely on a centralized mechanism. With the increasing demand for widespread connectivity, leveraging unlicensed bands and deploying wireless networks more flexibly and densely is becoming increasingly crucial. Communication then relies on a random access mechanism by which each user competes for radio resources once he or she has data to transmit. This trend emphasizes the importance of decentralizing radio resource management. To enable the deployment of FL over these wireless networks, distributed user selection becomes necessary. In this regard, we propose manipulating the wireless resource competition mechanism based on the unique characteristics of each user's local learning process to achieve user selection with random access. As an example, we modify the carrier sensing multiple access (CSMA) mechanism by reducing the contention window (CW) size for selected users. This adjustment provides such users with a greater chance to gain access rights and upload their local AI models to a server. Taking data bias as a target scenario for user selection, an issue arises: how to determine the CW size so as to select users while mitigating the training data bias issue. § PRELIMINARIES §.§ FL with User Selection FL is a machine learning framework that operates in a distributed manner; it involves two key entities: the server and users. This approach enables the collaborative training of a global model for completing common tasks without directly accessing a user's data. The training process involves three key steps: * Broadcast the global model: At the start of each training round, the server, which is normally deployed in the cloud or on an edge server, selects a subset of users 𝒦^t to participate in the training process. The server then broadcasts the aggregated global model to these users through a wireless network, with the exception of the first round, during which the server broadcasts the initialized model to all users. * Local training: Upon receiving the global model, users proceed to train it on their respective local datasets, resulting in the creation of local models. This process can be expressed as ω_k^t = ω^t - η∇ F(ω^t), where ω^t is the global model in the t-th round. Here, ω_k^t is the local model of the k-th user in the t-th round and F is the objective function. Subsequently, the users upload their local models to the server over wireless connections.
* Aggregating the global model: The server aggregates the local models by using FedAvg <cit.> to generate the new global model, which is expressed as follows: ω^t+1 = ∑_k ∈𝒦^t|D_k|/∑_j ∈𝒦^t|D_j|ω_k^t, where |D_k| is the size of the local dataset D_k of the k-th user. This process is repeated until convergence of the global model is achieved. §.§ FL with Random Access The evolution of wireless connectivity requires flexible use of the radio spectrum, such as the use of unlicensed bands, for example in the WiFi system, where the radio resources serving as the connection medium are obtained through a competition mechanism. A device must first determine whether the medium is being used by someone else before sending signals. If the medium is in use, the device enters a backoff state and waits for a specific period before attempting to transmit again <cit.>. This backoff time is a randomly selected number of time slots (each of 20 μs duration), and the random number must be within the CW value. Consider the scenario where FL is implemented over a WiFi network and the server is implemented on the access point (AP) or in the cloud connected to this AP. Local training is performed by users who are connected to the FL server through a WiFi network. The wireless channel resource by which each user uploads their local model in each round depends on a competition mechanism. In the implementation described in <cit.>, FL is executed on Raspberry Pi devices connected via a WiFi network. Each user has an equal opportunity to upload their local model, but in a random fashion. The Third Generation Partnership Project (3GPP) cellular system also defines a mode for sidelinks by which devices that communicate directly with one another need to obtain radio resources from a region-based resource pool. In such a system, a device performs signal strength measurements on radio resources over a certain time window. If the energy is higher than a predetermined threshold, a channel is considered busy. The transmission parameters are subsequently set based on the channel busy ratio (CBR) <cit.>. A typical use scenario, where FL is implemented for autonomous driving applications, has an edge-based FL server communicating with cars via cellular-based sidelink communication protocols. When implementing FL over an operator's 5G network, radio resource allocation can be centrally controlled for model transfer. However, when FL applications are executed such that users and the server communicate with a random access mechanism, the subset of users participating in each round of the learning process, denoted as 𝒦^t, becomes randomly dependent on this autonomous resource allocation for each user. To address user selection and avoid the system complexity of centralized resource allocation, we propose studying user selection for FL with random access. We suggest prioritizing certain users by adjusting their resource competition mechanism based on learning properties to increase the probability of obtaining available channels. For example, in the WiFi system we can modify the CW size, and in the sidelink we can modify the threshold in the signal strength measurement, thus changing the CBR of the resources. The priority level information of each user that is broadcasted to other users competing for resources in the sidelink control information (SCI) can also be adjusted. § PRIORITIZED RADIO RESOURCE COMPETITION This paper uses WiFi as a case study to demonstrate how to adjust the resource competition according to the priority level of each user.
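For reference, the aggregation rule in the preliminaries above (FedAvg) can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation; it assumes each selected user returns a PyTorch state dict of local parameters together with its local dataset size |D_k|.

import torch

def fedavg(local_states, dataset_sizes):
    # Weighted average of the local models, with weights proportional to |D_k|.
    total = float(sum(dataset_sizes))
    aggregated = {}
    for name in local_states[0]:
        aggregated[name] = sum((size / total) * state[name]
                               for state, size in zip(local_states, dataset_sizes))
    return aggregated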
The data bias scenario is taken as a target application area where user selection is critical to FL performance. Compared to the centralized approach, the proposed method only requires the transmission of locally trained models to the server without extra parameters related to the training process by the users. The aggregating step, which involves collecting knowledge from each user, is a crucial part of the FL training process, as highlighted in Sec.<ref>. The level of contribution from each user is determined by the distance between the newly trained local model and the aggregated model, with a greater distance indicating a more significant contribution. To leverage this insight, we propose a user selection approach that prioritizes users with greater distances from the global model for the data bias scenario. Note that other learning characteristics such as computation power and battery life can be utilized to adapt the resource competition mechanism in different application scenarios. Here, we adopt the distance calculation method proposed by <cit.> to compute the distance between the two models, i.e., the newly trained local model and the global model. We define the priority of a user using the following formula: priority = ∏_l=1^L (1 + ∥ω_k,l^t-ω^t_l ∥_2 /∥ω^t_l ∥_2), where L is the number of layers in neural network, ω^t_l is the l-th layer in the global model of the t-th training round, and ω_k, l^t is the l-th layer in the local model of the k-th user. Note that we have checked the values of the priority level in a few experiments and found that it is normally within [1, 1.2] and is not affected by different models. We adjust the CW size, W, through the priority of users as follows. Then, we generate a random number, R, that is uniformly distributed within (0, 1) to multiply the window size W to obtain the backoff time, T_backoff as follows: W = N/priority, T_backoff = R * W, where N is the hyperparameter to control the range of common CW sizes as the basis for adjustment. After completing their local model training, users upload their models to the server through CSMA with their own backoff time T_backoff. Those users with lower values of W are likely to upload their models earlier than those with higher values of W. When the FL server is configured to merge the uploads of the first few users and broadcast the updated model, user selection is achieved through this backoff mechanism. It is worth noting that this user selection method is influenced by the order in which users upload their local models. It is possible that user selection will become biased, with the server more likely to aggregate with users that have higher priority in the upload process. This phenomenon is particularly apparent in the non-IID scenario, where some categories of data are difficult to train or exclusively owned by certain users. As a result, there is a significant variation between the newly trained local models of these users and the global model, causing the server to select these users with higher frequency or probability. This bias toward certain users can cause the global model to become biased as well. To mitigate this, we propose that each user maintains a counter to adjust the distribution of user participation in training, thus reducing the impact of this bias on the final global model. To adjust the distribution of user participation in training, before uploading their local models to the server, users first check their selected frequency using their counters. 
If the number of times a user has been selected exceeds a certain threshold, they do not upload their local models. This process limits the frequency of user selection via the use of their counters. The counter for each user is calculated as the percentage of the number of times that a particular user has contributed to the global model divided by the total number of user contributions to the global model. Note that if the server of the FL is configured to merge |𝒦^t| local models in each round, after T rounds the server has merged models contributed by users ∑_t=1^T|𝒦^t| times. Suppose one user has obtained channels and uploaded their local model k times. In this case, the counter value for this user would be k/∑_t=1^T|𝒦^t| in the T-th round. In summary, the process of FL with distributed user selection through random access is shown in Fig.<ref>. There are five steps: * Step 1: The server broadcasts the global model that was aggregated in the previous round of training to the users. * Step 2: The users train the global model on their local datasets to generate local models. * Step 3: The users calculate their priority with Eq. (<ref>), and then obtain the backoff time T_backoff with Eq. (<ref>). * Step 4: Users check their counter values k/∑_t=1^T|𝒦^t|, and if their values exceed the threshold, they refrain from uploading their local models. Otherwise, they upload their local models according to the backoff time T_backoff obtained in Step 3. * Step 5: After receiving a certain number |𝒦^t| of local models, the server applies the FedAvg algorithm to aggregate the local models and generates the updated global model. Subsequently, the server broadcasts this global model to all users, which also indicates that the server will no longer receive any more local models in this round. Then, each user updates their local counter value. Finally, after uploading, the users update their counter values as follows: if the upload was successful, they increase the numerator by one and the denominator by |𝒦^t|; if the upload was not successful, they only increase the denominator by |𝒦^t| in their counter. The framework repeats these steps until the model converges. § NUMERICAL RESULTS §.§ Experiment setup In this section, we present the results of our experiments on two widely used datasets, Fashion-MNIST and CIFAR-10, to assess the performance of our proposed framework. §.§.§ Data partition We conducted experiments under two scenarios: IID and non-IID. In the IID scenario, we randomly partitioned the datasets into equal parts, with each user receiving an equal share. In the non-IID scenario, we adopted the partitioning method proposed in <cit.>, where the dataset is sorted by real label and partitioned into 200 shards of size 300, with each user receiving two shards. This results in each user having examples of only two types. §.§.§ Training settings We set the number of users to K=10, and in each round, the server selects two users to participate in the training process. We use two neural networks, a multilayer perceptron (MLP) and a convolutional neural network (CNN), for image classification, and these networks are commonly used by researchers <cit.>. The MLP consists of one hidden layer with 200 nodes, and its size is d_input× 200 × 10, where d_input is the input dimension (784 for Fashion-MNIST and 3072 for CIFAR). The CNN has two convolution layers with 5 × 5 kernels (128 channels and 256 channels, respectively) and a fully connected layer with 4096 nodes for Fashion-MNIST and 6400 nodes for CIFAR.
Its size is C_input× 5 × 5 × 128 + 128 × 5 × 5 × 256 + d_fl× 10, where C_input is the input image channel (1 for Fashion-MNIST and 3 for CIFAR) and d_fl is the dimension of the fully connected layer (1024 for Fashion-MNIST and 3072 for CIFAR). We use stochastic gradient descent (SGD) to train the model parameters. The learning rate is set to 10^-2, the batch size for training and testing is 32, and the local epoch is 1. We use FedAvg as the aggregation algorithm <cit.>. Furthermore, we set N to 2048 as the base to control the user window size based on priority levels and set the threshold to restrict the selected frequency of users to 16% of the total selected frequency. §.§.§ Baseline To assess the efficacy of our proposed technique for enhancing framework performance, we selected the conventional user selection method, namely random selection, as our baseline. Additionally, we considered both centralized and distributed user selection. The performance on the IID dataset is also shown as a reference. §.§ User selection with the IID dataset Initially, we examined our proposed technique in the IID scenario. In Fig. <ref>, it is evident that the performance of the random selection and priority-based strategies is comparable on both datasets. This is explained by the similarity in data distribution among users, which results in comparable distances between each local model and the global model. Consequently, the likelihood of each user participating under the priority-based strategy is similar to that under the random selection strategy. As a result, the performance of the four user selection strategies is comparable. §.§ User selection with non-IID dataset To evaluate the effectiveness of the proposed method in the non-IID scenario, we conducted experiments on the two datasets with the two networks, as shown in Fig. <ref>. It is evident that the performance of centralized user selection and decentralized user selection is similar when users are selected randomly. However, by utilizing the priority strategy proposed in this paper, we observe an improvement in performance compared to that achieved by random user selection. This is because users are selected based on the distance between local models and the global model, with users that are farther away from the global model contributing more knowledge than others. Additionally, decentralized user selection with random access is inferior to the centralized priority approach, since backoff time control only alters the likelihood of a selected user acquiring the channel before others. There remains some uncertainty in winning the channel during the competition process. Notably, in Fig. <ref>(a), selection with decentralized resource allocation can achieve performance similar to that of the centralized priority approach with low system overhead. The simulation shows that while the trend aligns with our expectations, the performance of FL with the CNN varies greatly. This can be explained by the fact that our training data are highly biased and we select a small number of users in each round of training. Furthermore, the substantial volume of parameters of the CNN results in significant changes in model distance values and corresponding variation in the knowledge contributed by each user. As a result, we have decided to focus on the MLP architecture with the Fashion-MNIST dataset for our subsequent experiment.
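For concreteness, the user-side selection logic simulated in these experiments (the priority product of Eq. (<ref>), the contention window W = N/priority with N = 2048, the uniform backoff draw, and the counter check of Step 4 with a 16% threshold) can be sketched as follows. The function and variable names are ours, and the 20 μs slot duration is the CSMA slot time mentioned earlier, so this is an illustrative reconstruction rather than the authors' code.

import random
import torch

def priority(local_state, global_state):
    # Product over layers of (1 + relative L2 distance to the global model).
    p = 1.0
    for name, w in global_state.items():
        p *= 1.0 + (torch.norm(local_state[name] - w) / torch.norm(w)).item()
    return p

def backoff_slots(local_state, global_state, N=2048):
    W = N / priority(local_state, global_state)   # contention window size W = N / priority
    return random.random() * W                    # T_backoff = R * W, in 20-microsecond slots

def should_upload(times_selected, total_merged, threshold=0.16):
    # Step 4: skip the upload if this user's share of past contributions exceeds the threshold.
    return total_merged == 0 or times_selected / total_merged <= threshold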
§.§ Fairness In this subsection, we focus on the fairness of the user selection process, with a specific emphasis on studying the effect of using a counter in the process of user selection for non-IID datasets. To isolate the effect of the uncertainty introduced by random access, we utilize the centralized approach to assess the impact of the counter. We included ten users, each with two randomly selected shards of data, in the FL process, and two users were selected in each round. Fig. <ref> displays the number of times each user is selected. For random selection, each user is selected in relatively equal measure. In contrast, when users are selected based on priority without a counter, the selection is biased toward certain users, such as users 1, 2, and 3. We conducted several experiments and discovered that the most frequently selected users are generally associated with training data with numeric labels such as 2, 5, 8, and 9, corresponding to the pullover, sandal, bag, and ankle boot image classes, respectively. Additionally, the biasing phenomenon is more pronounced when the training datasets of a user correspond to labels 2 and 9 than in other situations. We speculate that the features of these training images differ significantly from those of others and have been learned by the AI model, even though these features might be difficult for the human eye to detect. As a result, these models have different knowledge from other models and have a large model distance. The biased selection of certain users decreases the detection performance, as shown in Fig. <ref>. If the counter is used, we can see that the number of times each user is selected becomes more balanced in Fig. <ref>, which demonstrates the effectiveness of our proposed method and mitigates biasing of the global model toward a few users. Consequently, the corresponding performance in Fig. <ref> improves significantly and outperforms the random selection approach. The threshold value of the counter also influences the performance, and we carried out experiments with different values. Based on the results, we set the threshold to 16%. §.§ Effect of CW size Finally, we conducted experiments to investigate the impact of the CW hyperparameter N in Eq. (<ref>) on the performance of the system. We varied the base of the CW size N from 512 to 2048 and observed that increasing the window size improves the performance. This can be attributed to the fact that a smaller window size often results in users producing similar backoff times, reducing the effectiveness of the prioritization mechanism for radio access. A larger window size promotes greater diversity in backoff times across users, making the prioritization mechanism more effective. Furthermore, the optimal CW size depends on the threshold value of the counter used in the selection process. These two parameters should be adjusted for different scenarios. § CONCLUSION The aim of this study is to investigate the user selection process in FL, where users upload their locally trained models to the server with random access over a wireless channel. We introduced a user selection scheme that manipulates wireless resource competition to prioritize certain users. Different user selection criteria can be used to set user priority levels. In this paper, we considered the user selection problem in the context of training data bias and used CSMA as the random access scheme.
Users are selected based on the knowledge they can contribute to the global model, which is measured by the distance between their local model and the global model. Prioritization was realized by reducing the CW size for selected users. To prevent biasing of the global model toward certain users, we also implemented a counter for each user to limit the number of their contributions. Our results demonstrated that the proposed prioritization strategy in the resource competition process, along with a counter, can improve the performance of FL. This allows FL with random access over wireless channels to achieve satisfactory performance while avoiding the complexity issue of the centralized approach.
http://arxiv.org/abs/2307.00471v1
20230702043633
Unveiling Stable One-dimensional Magnetic Solitons in Magnetic Bilayers
[ "Xin-Wei Jin", "Zhan-Ying Yang", "Zhimin Liao", "Guangyin Jing", "Wen-Li Yang" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
School of Physics, Northwest University, Xi'an 710127, China Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China [email protected] School of Physics, Northwest University, Xi'an 710127, China Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China School of Physics, Peking University, Beijing, 100871, China [email protected] School of Physics, Northwest University, Xi'an 710127, China Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China Institute of Physics, Northwest University, Xi'an 710127, China We propose a novel model which efficiently describes the magnetization dynamics in a magnetic bilayer system. By applying a particular gauge transformation to the Landau-Lifshitz-Gilbert (LLG) equation, we successfully convert the model into an exactly integrable framework. The obtained analytical solutions thus allow us to predict a 1D magnetic soliton pair whose existence is controlled by tuning the thickness of the spacing layer between the two ferromagnetic layers. The decoupling-unlocking-locking transition of soliton motion is determined at various interaction intensities. Our results have implications for the manipulation of magnetic solitons and the design of magnetic soliton-based logic devices. Unveiling Stable One-dimensional Magnetic Solitons in Magnetic Bilayers Wen-Li Yang August 1, 2023 Introduction.— The intricate interplay of multiple interactions in magnetic materials generates a large class of localized spin textures — magnetic solitons <cit.>. These solitons exhibit distinct and varied configurations in different dimensions and hold great promise as candidates for the next generation of magnetic storage devices <cit.>. Beyond static magnetic interactions, dynamic magnetic interactions <cit.> have recently been predicted and observed via current-induced torque or non-equilibrium spin pumping <cit.>. Through this dynamic magnetic coupling, two magnets can be coherently and tunably coupled over macroscopic distances, presenting a novel avenue for the coherent transfer of magnon excitation between distinct magnetic systems <cit.>. Furthermore, these developments also raise an intriguing question about the existence and regulation of attractive magnetic solitons in magnetic bilayer structures <cit.>. Extensive efforts have been dedicated to the quest for stable magnetic solitons in theory, experiments, and micromagnetic simulations <cit.>. The dynamics of magnetic solitons are described by the Landau-Lifshitz-Gilbert (LLG) equation <cit.>. However, for decades, due to the intricate nature of these highly nonlinear coupled equations with multiple interactions, finding analytical solutions has been extremely challenging and remains a long-standing problem <cit.>. The lack of comprehensive analytical solutions hinders progress, necessitating time-consuming and labor-intensive experiments and simulations without the guidance of a solid theoretical framework. The dynamic coupling magnetic interaction not only unveils a host of fresh physical phenomena but also amplifies the complexity of solving the coupled LLG equations from a theoretical standpoint. In this letter, we establish an exchange-coupled magnetic bilayer structure, ferromagnetic/normal/ferromagnetic (F/N/F), as a model system. From the coupled LLG equations governing the magnetization dynamics in the ferromagnetic bilayers, a theoretical model under the small-amplitude approximation is developed.
A gauge transformation is proposed allowing us to convert the problem into an integrable model, which is applicable when the intermediate layer thickness is appropriately chosen. Thereafter, the exact solution of the governing equation is achieved, and the analytical magnetic soliton solutions are subsequently obtained. By adjusting the strength of the dynamic magnetic coupling, we find that the magnetic soliton pairs in the ferromagnetic bilayer undergo a decoupling-unlocking-locking transition. We also examine the influence of Gilbert damping in materials on the design of practical devices. These results illustrate practical ways to control one-dimensional magnetic solitons, in which three motion states are realized: anti-parallel moving, splitting oscillation, and the locking soliton pair. Modeling.— We consider a magnetic bilayer system as illustrated in Fig. <ref>, which consists of two coupled ferromagnetic (FM) films and a nonmagnetic interlayer with thickness s. The FM layers are assumed to be parallel to each other with equal thicknesses d_1=d_2=d. The dynamics of the unit magnetization vector m_i in the parallel coupled ferromagnetic layers can be described by the Landau-Lifshitz-Gilbert equation ∂m_i/∂ t=-γ_im_i×H^i_eff+α_i(m_i×∂m_i/∂ t)-γ_iJ/s M_s,im_i×m_j. where γ_i is the gyromagnetic ratio, α_i>0 denotes the Gilbert damping parameter of each FM layer, M_s,i is the saturation magnetization, and J represents the coupling strength between m_i and m_j with i,j=1,2. Moreover, the effective field of the two FM layers can be obtained from the free energy density of the system as H^i_eff=-1/μ_0δ E/δm_i. We assume the total energy incorporates the contributions from the Zeeman energy due to an applied magnetic field H_0=(0,0,h), the exchange interaction parametrized by an exchange constant A_i, and the perpendicular magnetic anisotropy energy. Thus, it takes the form H^i_eff=H_0+(2A_i/M_s,i)∇^2m_i+(2K_i/M_s,i)(m_i·n)n, where n=(0,0,1) is the unit vector directed along the anisotropy axis. For simplicity, we transform the coupled LLG equation (<ref>) to the dimensionless form ∂m_i/∂τ=-m_i×∂^2/∂ζ^2m_i-κm_i×(m_i·n)n-J'm_i×m_j, by rescaling the space and time into ζ=λ^-1_ex· x, τ=γμ_0M_s· t. Here λ_ex=√(2A_i/(μ_0M_s,i^2)) is the exchange length, and κ=2K_i/(μ_0M_s,i^2) and J'=J/(μ_0sM_s,i^2) denote the dimensionless anisotropy constant and dimensionless coupling strength, respectively. Table <ref> summarizes the realistic physical constants and parameters used for the structure under our consideration. Taking into account the fact that the magnitude of the magnetization satisfies m_i^2=1 at temperatures well below the Curie temperature, we introduce a stereographic transformation Φ_j=m_j^x+im_j^y, (m_j^z)^2=1-|Φ_j|^2. Furthermore, let us consider small deviations of the magnetization m_i from the equilibrium direction (along the anisotropy axis), which corresponds to (m_j^x)^2+(m_j^y)^2≪(m_j^z)^2 (or |Φ_j|^2≪ 1) and therefore m_j^z≈1-|Φ_j|^2/2. As a result, the dynamics of the spinor Φ=(Φ_1, Φ_2)^T can be expressed as i∂/∂τΦ=∂^2/∂ζ^2Φ+(J'σ_1-Δ)Φ+S⊙(ΦΦ^†)Φ, where we have defined S=κ (σ_3)^2/2+J'σ_1/2 and Δ=J'+h+2κ, with σ_1,2,3 the Pauli matrices. The symbol ⊙ represents the Hadamard product for matrices. A noteworthy remark here is that by maintaining a suitable separation between the two ferromagnetic layers (s=J/(2K)), it becomes possible to introduce a gauge transformation Φ_1,2=1/√(2)(Ψ_1e^i(h+κ)τ±Ψ_2e^i(h+3κ)τ), making the dynamic model (<ref>) entirely integrable.
Then, the new spinor Ψ=(Ψ_1, Ψ_2)^T is determined by the Manakov equation with arbitrary constant coefficients: iΨ_τ=Ψ_ζζ+κ(ΨΨ^†)Ψ. A diversity of solutions of this equation can be constructed using the methods of exactly integrable systems. One can also easily obtain the formulations of the three components of the magnetization via the inverse transformation of Eq. (<ref>). Magnetic soliton solutions.— Non-degenerate soliton solutions of (<ref>) can be constructed with the help of the Hirota bilinear formalism <cit.>, and the first- and second-order non-degenerate soliton solutions are presented in the Supplementary Materials. The final fundamental nondegenerate soliton solutions are characterized by four arbitrary complex parameters, describing the velocity and the amplitude of the magnetic soliton in both FM layers, as well as the nonlinear interaction of magnetic solitons between the two FM layers. From the non-degenerate soliton solution of Eq. (<ref>), the formulations of the non-degenerate magnetic solitons are constructed. These derived solutions represent several categories of magnetic solitons in this magnetic bilayer system. Through analyzing these solutions, it becomes apparent that the magnetic bilayer system possesses diverse spin textures, manifested as dynamical magnetic solitons. As far as we know, experimental observations of these magnetic soliton pairs resulting from interlayer dynamic interactions are currently lacking. With this theoretical prediction, in the following we discuss the possible generation mechanisms and practical applications of these magnetic soliton pairs in magnetic bilayer structures. Linear stability analysis.— The analytical solutions above confirm that magnetic solitons are allowed in this system; another important aspect to be considered is their stability. For practical applications of magnetic solitons as memory units or driven objects in spintronics, it is crucial to maintain the stability of solitons in the presence of interference. The stability property is usually analyzed by way of linear stability analysis <cit.>. For this purpose, we consider solitary wave solutions of the form Ψ=Ψ'exp(ibτ), with b being the propagation constant; Eq. (5) then becomes -bΨ'=Ψ'_ζζ+κ(Ψ'Ψ'^†)Ψ'. To analyze the linear stability of the solitary wave, we perturb the relevant wave function as Ψ_i={Ψ'_0i+[v_i(ζ)+w_i(ζ)]e^λτ+[v_i^*(ζ)-w_i^*(ζ)]e^λ^*τ}e^ibτ, where Ψ'_0i is the general complex-valued unperturbed wave function calculated from Eq. (<ref>), and v_i and w_i (i=1,2) are small perturbations for a given eigenvalue λ. Inserting this perturbed solution in Eq. (<ref>) and linearizing, we obtain the following linear-stability eigenvalue problem: i L· W=λ· W, where the matrix W=(v_1,w_1,v_2,w_2)^T denotes the normal-mode perturbations. The matrix L, which contains the magnetic soliton solution Ψ'_0i, represents the linear stability operator. The matrix elements and calculation details of the matrix L are presented in the Supplementary Materials. In general, two separate regions can be defined based on the linear-stability spectrum. The non-degenerate soliton wave is linearly unstable when the spectrum contains eigenvalues with positive real parts, which give an exponential growth rate of the perturbations, while the soliton is regarded as stable if the spectrum contains only purely imaginary discrete eigenvalues <cit.>. The whole spectrum of the linear-stability operator L is computed numerically by the Fourier collocation method.
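To make this procedure concrete, the following sketch reproduces the workflow for the simpler single-component reduction of the Manakov equation above: it builds a Fourier-collocation second-derivative matrix, assembles the linearization around a bright-soliton profile, and reads off the growth rate from the spectrum. This is our own illustration under stated assumptions (grid size, domain length, and parameter values are arbitrary); the full 4N×4N operator L for the two-component case follows the block structure given in the Supplementary Material.

import numpy as np

def spectral_d2(N, L):
    # Fourier-collocation second-derivative matrix on a periodic grid of length L.
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    I = np.eye(N)
    return np.real(np.stack([np.fft.ifft(-(k ** 2) * np.fft.fft(I[:, j]))
                             for j in range(N)], axis=1))

N, L, kappa, kz = 256, 40.0, 2.0, 1.0
z = (np.arange(N) - N // 2) * (L / N)
b = -kz ** 2                                         # propagation constant of the profile
phi = kz * np.sqrt(2.0 / kappa) / np.cosh(kz * z)    # bright-soliton profile (scalar case)
D2 = spectral_d2(N, L)
Lp = D2 + np.diag(b + 3.0 * kappa * phi ** 2)        # block acting on v
Lm = D2 + np.diag(b + kappa * phi ** 2)              # block acting on w
M = np.block([[np.zeros((N, N)), Lm], [Lp, np.zeros((N, N))]])
mu = np.linalg.eigvals(M)                            # eigenvalues mu = i * lambda
print(np.max(mu.imag))                               # max growth rate Re(lambda); ~0 indicates stability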
To verify the predictions of the linear stability analysis obtained from the numerical solution of the spectral problem (<ref>), we proceed to numerically simulate the nonlinear propagation of the magnetic solitons. The evolutions of stable non-degenerate magnetic solitons and unstable non-degenerate magnetic solitons are illustrated in Figs. <ref>. The initial conditions for both simulations are taken in the form of a soliton solution perturbed by 10% random noise. The upper panels of Fig. <ref>(a) depict the stability regions in the parameter space ((k_1),(l_1)) of the magnetic soliton and provide an exemplary illustration of a stable soliton solution. The center panel plots the shape of the m^z component in the two ferromagnetic layers at t=0 and t=30. The whole stability spectrum of this non-degenerate soliton is shown in the upper right corner panel. It can be seen that these flat-bottom-double-hump magnetic solitons propagate stably and the flat-bottom structure in the first FM layer is maintained, which agrees with the results of the linear stability analysis. On the other hand, Fig. <ref>(b) shows the unstable propagation of the asymmetric single-double-hump soliton. Stronger instabilities cause the splitting and diffusion of the solitons at relatively short times. Coupling and Gilbert-damping.— The successful stabilization of non-degenerate solitons motivates us to design bilayer ferromagnetic spin-electronic devices based on stable magnetic solitons. Here, we numerically investigate the propagation behavior of stable magnetic solitons in FM bilayers with various coupling strengths (which correspond to the thickness of the nonmagnetic spacer). Our first step is to construct a stable magnetic soliton in each layer, with opposing velocities. When the two ferromagnetic layers are far apart from each other, their interaction becomes very weak, and the two layers are decoupled (J'=0). The two solitons propagate in opposite directions, as depicted in Figs. <ref>(a) and <ref>(b). An increase of the coupling strength leads to soliton separation in both FM layers (as shown in Figs. <ref>(c) and <ref>(d)). The interlayer interaction causes the solitons to oscillate and propagate towards both ends at a constant velocity. This observation can be explained as follows. As the thickness of the intermediate layer decreases, the long-range dynamic interaction between the two FM layers, induced by adiabatic spin pumping, starts to come into play. The dynamic magnetization, which arises from the moving magnetic solitons in the ferromagnetic layer, causes the formation of a non-equilibrium spin flow between the two layers. This ultimately triggers the bidirectional oscillatory transmission of the magnetic solitons. We highlight that as the two ferromagnetic layers continue to approach each other, the interlayer dynamic interaction exceeds a certain threshold and becomes sufficient to rapidly synchronize the motion of the magnetic solitons and balance the spin current. The two solitons thereby become trapped at a stationary position (see Figs. <ref>(e) and <ref>(f)). This dynamic region of soliton immobilization is henceforth referred to as the locking region. These simulation results over a wider range of J' are summarized in Fig. <ref>(g), which clearly shows the decoupling-to-unlocking-to-locking transition. The black and red lines in the figure represent the minimum values of the soliton signals received by the signal receiving devices placed at both ends of the first FM layer under different coupling strengths.
The different behaviors of magnetic solitons in FM bilayers under varying coupling strengths inspire us to design a logic signal generator. By adjusting the spacing between the two ferromagnetic layers, which is highly controllable in practice, it is possible to achieve different outputs of logical signals (the decoupling state corresponds to "10" and "01", the unlocking state corresponds to "11", and the locking state corresponds to "00"). This suggests a new possibility of utilizing spintronic devices for logic operations. In practical applications, the signal attenuation caused by Gilbert damping in ferromagnetic materials must be considered. Through numerical simulation, we find that the damping effect has a significant impact on the magnetic solitons in the unlocking state. Fig. <ref>(a) shows the propagation of magnetic solitons in the unlocking state in the upper FM layer with Gilbert-damping constant α_1=0.05, where ℒ_ max represents the maximum distance at which the signal attenuates to an unrecognizable state (assuming that the m^z component is greater than 0.8). The dependence of ℒ_ max on the damping constant α for the FM layer is shown in Fig. <ref>(b). It can be observed that opting for materials featuring low damping coefficients can significantly increase the separation between signal receivers. Discussion and Conclusion.— To sum up, we have derived a model under the small-amplitude approximation to describe the nonlinear dynamics of magnetization in a bilayer ferromagnetic system. When the intermediate layer takes the characteristic thickness s=J/(2K) (i.e., 2 nm for the system considered here, with the dynamic coupling parameter and magnetic anisotropy taken as 2 mJ/m^2 and 5×10^5 J/m^3), it is possible to introduce a gauge transformation that converts the equation into a fully integrable constant-coefficient Manakov system. The first-order and second-order non-degenerate magnetic soliton solutions are obtained, as well as their respective stability regions. The numerical simulation results of magnetic soliton transmission are consistent with the predictions of the linear stability analysis. These theoretical and numerical results confirm the existence of stable one-dimensional magnetic soliton pairs in the magnetic bilayer system. To generate such magnetic solitons in an F/N/F bilayer system, the magnetization texture based on the above magnetic soliton solution must be imprinted into the two ferromagnetic layers. Such soliton excitation can be achieved, for example, by a local magnetic field or spin-polarized electric currents. On the other hand, the intensity of the interlayer long-range dynamic interaction, induced by adiabatic spin pumping, can be tailored by manipulating the spacing between the two FM layers. Through the manipulation of the intermediate layer's thickness, we unveiled three distinct transport states of magnetic solitons: soliton decoupling, unlocking, and locking. With a gradual increment in dynamic interactions, we demonstrated the progression of magnetic soliton motion from decoupling to unlocking, and ultimately to locking. Note that the dynamic exchange coupling strength J is related to the thickness of the spacing layer. We postulate an inverse square root relationship between the two parameters <cit.>, i.e., J∝ 1/√(s). Through calculations based on the parameters we have considered, it is determined that when the thickness of the intermediate layer is less than 0.45 nm, magnetic solitons initiate a transition towards the locking state.
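As a back-of-the-envelope reading of this scaling argument, anchoring J ∝ 1/√(s) at the reference point used above (J = 2 mJ/m^2 at s = 2 nm, where s = J/(2K)), the quoted 0.45 nm locking thickness corresponds to an interlayer coupling of roughly 4 mJ/m^2. This numerical value is our own estimate, not one reported in the text.

import math

J_ref, s_ref = 2.0, 2.0            # reference point: J = 2 mJ/m^2 at s = 2 nm (s = J/2K)

def coupling(s_nm):
    # inverse-square-root scaling J(s) = J_ref * sqrt(s_ref / s)
    return J_ref * math.sqrt(s_ref / s_nm)

print(round(coupling(0.45), 2))    # ~4.22 mJ/m^2 implied at the 0.45 nm locking thickness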
Note that the thickness at which this transition occurs is related to the selection of the ferromagnetic layer and insulating spacer layer materials. For the same material, various material properties such as the saturation magnetization M_s,i and the interfacial dynamic coupling of the synthetic layers can be controlled with leading-edge material fabrication and deposition techniques <cit.>. Finally, we examine the impact of Gilbert damping in different ferromagnetic materials on this transitional process. Our findings reveal that damping predominantly results in the attenuation of magnetic solitons in the unlocking state. Furthermore, we have established a correlation between the damping coefficient and the maximum separation distance between distinguishable magnetic soliton signals. These findings present new possibilities for developing spintronic devices for logic computing based on magnetic solitons, and have ignited extensive research on these systems to refine their design according to specific application requirements. The authors thank Prof. H. M. Yu and Prof. C. P. Liu for their helpful discussions. This work was supported by the National Natural Science Foundation of China (Nos. 12275213, 12174306, 12247103), and Natural Science Basic Research Program of Shaanxi (2023-JC-JQ-02, 2021JCW-19).
http://arxiv.org/abs/2307.03172v2
20230706175411
Lost in the Middle: How Language Models Use Long Contexts
[ "Nelson F. Liu", "Kevin Lin", "John Hewitt", "Ashwin Paranjape", "Michele Bevilacqua", "Fabio Petroni", "Percy Liang" ]
cs.CL
[ "cs.CL" ]
*Work partially completed as an intern at Samaya AI. While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze language model performance on two tasks that require identifying relevant information within their input contexts: multi-document question answering and key-value retrieval. We find that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts. Furthermore, performance substantially decreases as the input context grows longer, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context models. § INTRODUCTION Language models have become an important and flexible building block in a variety of user-facing language technologies, including conversational interfaces, search and summarization, and collaborative writing. These models perform downstream tasks primarily via prompting: all relevant task specification and data to process is formatted as a textual context, and the model returns a generated text completion. These input contexts can contain thousands of tokens, especially when using language models on lengthy inputs (e.g., legal or scientific documents, conversation histories, etc.) or augmenting them with external information (e.g., relevant documents from a search engine, database query results, etc; ). Handling these use-cases requires language models to successfully operate over long sequences. Language models are generally implemented with Transformers, which scale poorly to long sequences (e.g., since self-attention complexity is quadratic in the input sequence length). As a result, language models are typically trained with relatively small context windows. Recent improvements in hardware (e.g., faster GPUs with more memory) and algorithms <cit.> have resulted in language models with larger context windows, but it remains unclear how these extended-context language models make use of their input contexts when performing downstream tasks. We empirically investigate this question via controlled experiments with a variety of state-of-the-art open (, ) and closed (OpenAI's and Anthropic's ) language models in settings that require accessing and using information within an input context. We first experiment with multi-document question answering, which requires models to reason over provided documents to find relevant information and use it to answer a given question; this task mimics the retrieval-augmented generation setup underlying many commercial generative search and question answering applications (e.g., Bing Chat). We make controlled changes to the input context size and the position of the relevant information within the input context and study their effects on model performance.
In particular, we can increase the input context length by adding more documents to the input context (akin to retrieving more documents in retrieval-augmented generation), and modify the position of the relevant information within the context by changing the order of the documents in the input context to place the relevant document at the beginning, middle or end of the context. We observe a distinctive U-shaped performance curve, clearly visualized in Figure <ref>, as we vary the position of the relevant information: language model performance is highest when relevant information occurs at the very beginning or end of its input context, and performance significantly degrades when models must access and use information in the middle of their input context (<ref>). For example, when relevant information is placed in the middle of its input context, 's performance on the multi-document question answering task is lower than its performance when predicting without any documents (i.e., the closed-book setting; 56.1%). In addition, we find that model performance steadily degrades on longer contexts (<ref>), and that extended-context models are not necessarily better at using their input context (<ref>). Given that language models struggle to retrieve and use relevant information in the multi-document question answering task, to what extent can language models even retrieve from their input contexts? We study this question with a synthetic key-value retrieval task, which is designed to be a minimal testbed for the basic ability to retrieve matching tokens from the input context. In this task, models are given a collection of JSON-formatted key-value pairs, and must return the value associated with a specific key. Similar to the multi-document QA task, the key-value retrieval task also admits controlled changes to the input context length (adding more key-value pairs) and the position of relevant information. We observe a similar U-shaped performance curve in this setting; many models struggle to simply retrieve matching tokens that occur in the middle of their input context. To better understand why language models struggle to access and use information in the middle of their input contexts, we conduct preliminary investigations into the role of model architecture (decoder-only vs. encoder-decoder), query-aware contextualization, and instruction fine-tuning (<ref>). We find that encoder-decoder models are relatively robust to changes in the position of relevant information within their input context when evaluated on sequences within their training-time sequence length, but they show a U-shaped curve when evaluated on sequences longer than those seen during training (<ref>). In addition, query-aware contextualization (placing the query before and after the documents or key-value pairs) enables models to perform the synthetic key-value task perfectly, but minimally changes trends in multi-document QA (<ref>). Finally, even base language models (i.e., without instruction fine-tuning) show a U-shaped performance curve as we vary the position of relevant information in the input context.
Lastly, we perform a case study with retriever-reader models on open-domain question answering to better understand the trade-off between adding more information to an input context and increasing the amount of content that the model must reason over (<ref>)—in contrast to our controlled multi-document QA task, where the context always contains exactly one document that answers the question, none or many of the top k documents may contain the answer in the open-domain QA setting. When retrieving from Wikipedia to answer queries from NaturalQuestions-Open, we find that model performance saturates long before retriever recall levels off, indicating that models fail to effectively use additional retrieved documents—using more than 20 retrieved documents only marginally improves performance (∼1.5% for and ∼1% for claude-1.3). Our analysis provides a better understanding of how language models use their input context and introduces new evaluation protocols for future long-context models. To facilitate further work on understanding and improving how language models use their input context, we release our code and evaluation data.[https://nelsonliu.me/papers/lost-in-the-middle] § LANGUAGE MODELS We study language models as functions that take a textual input context and return a textual output. Modern language models are most commonly implemented with Transformers <cit.>. Transformer language models encode input contexts with self-attention, whose time and memory complexity is quadratic in the length of the input, limiting their application to very long sequences. As a result, they are generally pre-trained with a relatively small amount of prior context (their context window), which accordingly also limits the maximum length of the language model's input contexts. Increasing language model maximum context length. Recent advances in hardware (e.g., faster GPUs with more memory) and algorithms (e.g., FlashAttention; ) have driven a rapid increase in language model maximum context length. OpenAI's model (released in March 2023) has a maximum context window of 32K tokens; in May 2023, Claude's context window was expanded from 8K tokens to 100K tokens. In June 2023, OpenAI announced an extended-context version of its model, increasing its context from 4K to 16K tokens. A variety of open-source long context language models have also been recently released: MPT-30B has a maximum context length of 8K tokens, and LongChat-7B has a maximum context length of 16K tokens. Finally, a variety of recently-proposed architectures model sequences with millions of tokens, raising the potential of further dramatic increases in language model maximum context length <cit.>. § MULTI-DOCUMENT QUESTION ANSWERING Our goal is to better understand how language models use their input context. To this end, we analyze model performance on multi-document question answering, which requires models to find relevant information within an input context and use it to answer the question. In particular, we make controlled changes to the length of the input context and the position of the relevant information and measure changes in task performance. §.§ Experimental Setup Our multi-document question answering task closely parallels the retrieval-augmented generation setup underlying commercial search and question answering applications (e.g., Bing Chat).
In these experiments, the model inputs are (i) a question to answer and (ii) k documents (e.g., passages from Wikipedia), where exactly one of the documents contains the answer to the question and the k - 1 “distractor” documents do not. Performing this task requires the model to access the document that contains the answer within its input context and use it to answer the question. Figure <ref> presents an example. We instantiate this task with data from the NaturalQuestions benchmark <cit.>, which contains historical queries issued to the Google search engine and human-annotated answers extracted from Wikipedia. Specifically, we first take queries from NaturalQuestions-Open <cit.>, an open domain question answering benchmark that is derived from NaturalQuestions. We use passages (chunks of at most 100 tokens) from Wikipedia as documents within our input contexts. For each of these queries, we need a document that contains the answer and k-1 distractor documents that do not contain the answer. To obtain a document that answers the question, we use the Wikipedia paragraph that contains the answer from the NaturalQuestions annotations. To collect k-1 distractor documents that do not contain the answer, we use the Contriever retrieval system <cit.> to retrieve the k-1 Wikipedia chunks that are most relevant to the question and do not contain any of the NaturalQuestions-annotated answers.[Ambiguity in NaturalQuestions-Open means that a small number of distractor passages may contain a reasonable answer. We additionally run experiments on a subset of unambiguous questions, finding similar results and conclusions; see Appendix <ref>.][We also explored using random documents as distractors; see Appendix <ref> for more details.] In the input context, the distractor documents are presented in order of decreasing relevance.[Since there might be a prior over “search results” appearing in ranked order, we explored randomly ordering the k-1 distractor documents and mentioning that the documents are randomly ordered in the task description, but found the same trends. See Appendix <ref> for more details.] To modulate the input context length in this task, we increase or decrease the number of retrieved documents that do not contain the answer (Figure <ref>). To modulate the position of relevant information within the input context, we adjust the order of the documents in the input context to change the position of the document that contains the answer (Figure <ref>). Following <cit.> and <cit.>, we use accuracy as our primary evaluation metric, judging whether any of the correct answers (as taken from the NaturalQuestions annotations) appear in the predicted output. To prevent models from exploiting the metric by simply copying the documents from the input context, we strip model output beyond the first generated newline character. In practice, model responses are generally a single sentence or paragraph; generation is terminated (via producing an end-of-sequence token) without producing any newline characters. Our experimental setup is similar to the needle-in-a-haystack experiments of <cit.>, who compare question answering performance when the relevant paragraph is placed (i) at the beginning of the input context or (ii) at a random position within the input context. They find that encoder-decoder models have significantly higher performance when relevant information is placed at the start of the input context. In contrast, we study finer-grained changes in the position of relevant information.
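A minimal sketch of this setup is given below (in Python); it is not the released evaluation code, and the instruction wording is a placeholder, but it illustrates how an input context can be assembled with the answer-bearing document at a chosen position and how the accuracy judgment described above can be computed.

```python
from typing import List

def build_context(question: str, gold_doc: str, distractors: List[str], gold_position: int) -> str:
    """Assemble the input context with the answer-bearing document at `gold_position`
    (0-indexed); the distractors otherwise keep their order of decreasing relevance."""
    docs = list(distractors)
    docs.insert(gold_position, gold_doc)
    numbered = "\n".join(f"Document [{i + 1}] {doc}" for i, doc in enumerate(docs))
    # Placeholder instruction wording; the paper's exact prompt template differs.
    return (
        "Write a high-quality answer for the given question using only the provided "
        f"search results.\n\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def is_correct(prediction: str, gold_answers: List[str]) -> bool:
    """Accuracy judgment described above: keep only the text before the first newline,
    then check whether any annotated answer appears in it."""
    first_line = prediction.split("\n")[0].lower()
    return any(answer.lower() in first_line for answer in gold_answers)
```

Sweeping `gold_position` from the first to the last slot while holding the question and distractors fixed reproduces the position-modulation experiments, and varying the number of distractors reproduces the context-length experiments.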
§.§ Models We analyze several state-of-the-art open and closed models. We use greedy decoding when generating outputs and leave exploration of other decoding methods to future work. We use a standard set of prompts for each model (depicted in Figure <ref>). Appendix <ref> tabulates input context lengths (number of tokens) for each model and experimental setting. Open models. We experiment with , which has a maximum context length of 8192 tokens. The model was initially pre-trained on 1 trillion tokens using 2048-token sequences, followed by an additional sequence length adaptation pre-training phase on 50B tokens using 8192-token sequences. uses ALiBi <cit.> to represent positional information. We also evaluate <cit.>, which builds on LLaMA-13B (original maximum context window of 2048 tokens; ) and extends its context window to 16384 tokens by using condensed rotary positional embeddings before fine-tuning with 16384-token sequences. Closed models. We use the OpenAI API to experiment with and .[We use the model revisions for all OpenAI API experiments.] has a maximum context length of 4K tokens, and is a version with an extended maximum context length of 16K tokens. We evaluate and with the Anthropic API; has a maximum context length of 8K tokens, and has an extended context length of 100K tokens.[We also evaluate on a subset of multi-document QA experiments, finding similar results and trends as other models (though GPT-4 has higher absolute performance). Evaluating on the full multi-document QA and key-value retrieval experiments would cost upwards of $6000. See Appendix <ref> for GPT-4 results and discussion.] §.§ Results and Discussion We experiment with input contexts containing 10, 20, and 30 documents (2.7K examples each). Figure <ref> presents multi-document question answering performance as the position of relevant information within the input context is varied. To better understand the realistic lower and upper bounds on performance, we also evaluate performance in the closed-book and oracle settings. In the closed-book setting, models are not given any documents in their input context, and must rely on their parametric memory to generate the correct answer. On the other hand, in the oracle setting, language models are given the single document that contains the answer and must use it to answer the question. and have the highest closed-book (55%) and oracle (88%) performance; see Appendix <ref> for full closed-book and oracle results on all models. Model performance is highest when relevant information occurs at the beginning or end of its input context. As the position of relevant information is changed, we see a distinctive U-shaped curve in model performance—models are much better at identifying and using relevant information that occurs at the very beginning and very end of contexts, and suffer degraded performance when forced to use information within the middle of their input context. For example, 's multi-document QA performance can drop by more than 20%—at its nadir, performance in the 20- and 30-document settings is lower than performance without any input documents (i.e., closed-book performance; 56.1%). These results indicate that current models cannot effectively reason over their entire context window when performing downstream tasks, and that models have an easier time retrieving and using information at the very start or end of their input contexts. Model performance substantially decreases as input contexts grow longer.
On both tasks, model performance degrades as the contexts grow longer, indicating that models struggle to retrieve and use relevant information from long input contexts (Figure <ref>). This trend continues when comparing models with their corresponding extended-context versions. For example, 's lowest performance in the 20-document setting is 52.9% (when the document containing the answer is positioned 10th out of 20). The input contexts of the 30-document setting are too long for , but using its extended-context counterpart also results in a performance decrease (49.5% when the relevant document is positioned 10th out of 30)—although extended-context models can process longer input contexts, they may not be better at reasoning over the information within their context windows. Extended-context models are not necessarily better at using input context. In settings where the input context fits in the context window of both a model and its extended-context counterpart, we see that performance between them is nearly identical. For example, the results for and are nearly superimposed (solid purple series and dashed brown series, respectively). These results indicate that models with longer maximum context windows are not necessarily better at using this extended context. § HOW WELL CAN LANGUAGE MODELS RETRIEVE FROM INPUT CONTEXTS? Given that language models struggle to retrieve and use information from the middle of their input contexts in the multi-document question answering task, to what extent can they simply retrieve from input contexts? We study this question with a synthetic key-value retrieval task to isolate and study the basic ability of matching and retrieving relevant information from input contexts. §.§ Experimental Setup In our synthetic key-value retrieval task, the inputs are (i) a string-serialized JSON object with k key-value pairs, where each of the keys and values is a unique, randomly-generated UUID, and (ii) a particular key within the aforementioned JSON object. The goal is to return the value associated with the specified key. Thus, each JSON object contains one relevant key-value pair (where the value is to be retrieved), and k-1 irrelevant “distractor” key-value pairs. Figure <ref> provides an example input context and its corresponding desired output. We use accuracy as our evaluation metric, assessing whether the correct value appears in the predicted output. Our synthetic key-value retrieval task is designed to provide a minimal testbed for the basic ability to retrieve matching tokens from an input context. This task shares similar goals with the Little Retrieval Test of <cit.> and the closely-related fine-grained line retrieval task of <cit.>, but we explicitly seek to distill and simplify the task by removing as much natural language semantics as possible (using random UUIDs instead), since language features may present potential confounders (e.g., because Transformer language models may have varying sensitivity to different linguistic features in their input context; ). To modulate the input context length in this task, we change the number of input JSON key-value pairs k by adding or removing random keys, changing the number of distractor key-value pairs (Figure <ref>). To modulate the position of relevant information within the input context, we change the position of the key to retrieve within the serialized JSON object (Figure <ref>).
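As a concrete illustration of this setup, the sketch below generates one key-value retrieval example with the relevant pair placed at a chosen position; the exact formatting and instruction wording are assumptions for illustration rather than the released evaluation code.

```python
import json
import random
import uuid

def make_kv_example(num_pairs: int, gold_index: int, seed: int = 0):
    """Build a string-serialized JSON object of `num_pairs` UUID key-value pairs,
    with the key to retrieve placed at position `gold_index` (0-indexed)."""
    rng = random.Random(seed)
    pairs = [(str(uuid.UUID(int=rng.getrandbits(128))),
              str(uuid.UUID(int=rng.getrandbits(128)))) for _ in range(num_pairs)]
    gold_key, gold_value = pairs[gold_index]
    data = dict(pairs)  # insertion order is preserved, so the gold pair sits at gold_index
    prompt = (
        "Extract the value corresponding to the specified key in the JSON object below.\n\n"
        f"JSON data:\n{json.dumps(data)}\n\n"
        f'Key: "{gold_key}"\nCorresponding value:'
    )
    return prompt, gold_value

# Example: 75 pairs with the relevant pair in the middle of the context.
prompt, expected_value = make_kv_example(num_pairs=75, gold_index=37)
```

Changing `num_pairs` modulates the input context length, and sweeping `gold_index` modulates the position of the relevant information, mirroring the two controlled manipulations described above.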
§.§ Results and Discussion Figure <ref> presents key-value retrieval performance; we experiment with input contexts containing 75, 140, and 300 key-value pairs (500 examples each). We use the same set of models as the multi-document question answering experiments; see <ref> for more details. Although the synthetic key-value retrieval task only requires identifying an exact match within the input context, not all models achieve high performance—claude-1.3 and claude-1.3-100k perform nearly perfectly on all evaluated input context lengths, but other models struggle, especially when retrieving keys from 140 or more key-value pairs. The results on the key-value retrieval task have largely similar trends to the results on the multi-document question-answering task (excepting models with perfect performance on the key-value retrieval task). In particular, we see the U-shaped performance curve again; model performance is lowest when models must access key-value pairs in the middle of their input context. Furthermore, model performance in this setting generally also decreases on longer input contexts. in the 140 key-value setting is a notable outlier; when the relevant information is at the start of the input context, it tends to generate code to retrieve the key, rather than outputting the value itself. § WHY DO LANGUAGE MODELS STRUGGLE TO USE THEIR ENTIRE INPUT CONTEXT? Our multi-document question answering and key-value retrieval results show that language model performance degrades significantly when they must access relevant information in the middle of long input contexts. To better understand why, we perform some preliminary investigations into the role of model architecture (e.g., decoder-only vs. encoder-decoder), query-aware contextualization, and the effects of instruction fine-tuning. §.§ Effect of Model Architecture The open models we evaluate in <ref> and <ref> are all decoder-only models—at each timestep, they may only attend to prior tokens. To better understand the potential effects of model architecture on how language models use context, we compare decoder-only and encoder-decoder language models. We experiment with Flan-T5-XXL <cit.> and Flan-UL2 <cit.>. Flan-T5-XXL is trained with sequences of 512 tokens (encoder and decoder). Flan-UL2 is initially trained with sequences of 512 tokens (encoder and decoder), but is then pre-trained for an extra 100K steps with 1024 tokens (encoder and decoder), before instruction-tuning on sequences with 2048 tokens in the encoder and 512 tokens in the decoder. However, since these models use relative positional embeddings, they can (in principle) extrapolate beyond these maximum context lengths; <cit.> find that both models can perform well with sequences of 8K tokens. Figure <ref> juxtaposes the performance of decoder-only and encoder-decoder models. When Flan-UL2 is evaluated on sequences within its 2048 training-time context window, its performance is relatively robust to changes in the position of relevant information within the input context. When evaluated on settings with sequences longer than 2048 tokens, Flan-UL2 performance begins to degrade when relevant information is placed in the middle. Flan-T5-XXL shows a similar trend, where longer input contexts result in a greater performance degradation when placing relevant information in the middle of the input context.
We speculate that encoder-decoder models may make better use of their context windows because their bidirectional encoder allows processing each document in the context of future documents, potentially enhancing relative importance estimation between documents. §.§ Effect of Query-Aware Contextualization Our experiments in <ref> and <ref> place the query (i.e., question to answer or key to retrieve) after the data to process (i.e., the documents or the key-value pairs). As a result, decoder-only models cannot attend to query tokens when contextualizing documents or key-value pairs, since the query only appears at the end of the prompt and decoder-only models can only attend to prior tokens at each timestep. On the other hand, encoder-decoder models use a bidirectional encoder to contextualize input contexts, and seem to be more robust to changes in the position of relevant information in their input context—can we use this intuition to also improve the performance of decoder-only models by placing the query before and after the data, enabling query-aware contextualization of documents (or key-value pairs)? We find that query-aware contextualization dramatically improves performance on the key-value retrieval task. For example, (with query-aware contextualization) achieves perfect performance when evaluated with 300 key-value pairs. In contrast, without query-aware contextualization, its lowest performance in the same setting is 45.6% (Figure <ref>). However, query-aware contextualization minimally affects performance trends in the multi-document question answering task. In particular, it improves performance when the relevant information is located at the very beginning of the input context, but slightly decreases performance in other settings. §.§ Effect of Instruction-Tuning All of the models that we evaluated in <ref> and <ref> are instruction-tuned—after their initial pre-training, they undergo supervised fine-tuning on a dataset of instructions and responses. In this supervised instruction-tuning data, the task specification and/or instruction is commonly placed at the beginning of the input context, which might lead instruction-tuned language models to place more weight on the start of the input context. To better understand the potential effects of instruction-tuning on how language models use long input contexts, we compare the multi-document question answering performance of against its base model (i.e., before instruction fine-tuning) . We use the same experimental setup as <ref>. Figure <ref> compares the multi-document QA performance of and as a function of the position of the relevant information in the input context. Surprisingly, we see that both and exhibit a U-shaped performance curve, where performance is highest when relevant information occurs at the very beginning or very end of the context. Although the absolute performance of is uniformly higher than that of , their overall performance trends are quite similar. These observations complement prior work, which found that (non-instruction-tuned) language models are biased towards recent tokens (i.e., the end of the input context; ). This recency bias has been observed in past work when evaluating models on next-word prediction of contiguous text, a setting where language models minimally benefit from long-range information <cit.>.
In contrast, our results show that language models are capable of using longer-range information (i.e., the beginning of the input context) when prompted with instruction-formatted data. We hypothesize that language models learn to use these contexts from similarly-formatted data that may occur in webtext seen during pre-training, e.g., StackOverflow questions and answers. § IS MORE CONTEXT ALWAYS BETTER? A CASE STUDY WITH OPEN-DOMAIN QA In practical settings, there is often a trade-off when increasing the input context length—providing the instruction-tuned language model with more information may help improve downstream task performance, but also increases the amount of content that the model must reason over. Even if a language model can take in 16K tokens, is it actually beneficial to provide 16K tokens of context? The answer to this question is downstream task-specific since it depends on the marginal value of the added context and the model's ability to effectively use long input contexts, but we perform a case study with open-domain question answering on NaturalQuestions-Open to better understand this trade-off. We use models in a standard retriever-reader setup. A retrieval system (Contriever, fine-tuned on MS-MARCO) takes an input query from NaturalQuestions-Open and returns k documents from Wikipedia. To condition instruction-tuned language models on these retrieved documents, we simply include them in the prompt. We evaluate retriever recall and reader accuracy (whether any of the annotated answers appear in the predicted output) as a function of the number of retrieved documents k. We use a subset of NaturalQuestions-Open where the long answer is a paragraph (as opposed to a table or a list). Figure <ref> presents open-domain QA results. We see that reader model performance saturates long before retriever performance levels off, indicating that readers are not effectively using the extra context. Using more than 20 retrieved documents only marginally improves reader performance (∼1.5% for and ∼1% for ), while significantly increasing the input context length (and thus latency and cost). These results, coupled with the observation that models are better at retrieving and using information at the start or end of their input contexts, suggest that effective reranking of retrieved documents (pushing relevant information closer to the start of the input context) or ranked list truncation (returning fewer documents when necessary; ) may be promising directions for improving how language-model-based readers use retrieved context. § RELATED WORK §.§ Long-context language models There is a rich line of work in designing performant language models with cheaper scaling than Transformers in the context length. Many lines of work pursue Transformer variants with attention modifications like recurrence <cit.>, factorizing attention into computationally less intensive approximations <cit.>, or low-rank approximations <cit.>; see <cit.> for a comprehensive overview. <cit.> instead provide faster exact attention via a carefully-crafted IO-aware CUDA kernel. Separately, there are attempts to do away with attention entirely to remove quadratic sequence length complexity, often through convolution and/or linear RNNs, e.g., in RWKV <cit.>, S4 <cit.>, or Hyena <cit.>. Many prior efforts evaluate perplexity on a diverse web corpus as a proxy for the ability to process long contexts; this work shows that precise knowledge access on long contexts may be an added challenge.
However, a variety of work has proposed benchmarks for long-text understanding. <cit.> propose the Long Range Arena, which evaluates long-context models on a variety of natural language, visual reasoning, and synthetic tasks. However, only two of its constituent tasks involve natural language, which limits its applicability to evaluating long-context capabilities of pre-trained language models. In contrast, the SCROLLS benchmark <cit.> and its zero-shot extension ZeroSCROLLS <cit.> evaluate model performance on a variety of NLP tasks that require understanding long input contexts (e.g., summarization and question answering over long documents). §.§ How do language models use context? The pioneering work of <cit.> showed that small LSTM language models make increasingly coarse use of longer-term context; <cit.> found similar results in dialogue models. In a similar vein, <cit.> find that attentive LSTM language models tend to mainly use recent history. <cit.> were among the first to demonstrate the potential of combining context from an information retrieval system with a pretrained language model for unsupervised question answering. <cit.> found that many information-destroying operations had marginal effects on Transformer LMs' predictions. <cit.> found that long-context neural generation in modestly-sized Transformer language models degenerates because models fail to properly condition on long context. Finally, studying long-context models, <cit.> found that longer contexts improve prediction of only a few tokens, an empirical finding consistent with the theory of <cit.>, who showed that sequence distributions with bounded mutual information necessarily lead to marginal average prediction benefits from increasingly long context. <cit.> analyze how efficient Transformers perform on a variety of long-context downstream NLP tasks, finding that long-context transformers are recency-biased and do not effectively use long-range context. Furthermore, they also observe that query-aware contextualization can improve performance, although their analysis focuses on fine-tuned models with bidirectional encoders (while we primarily study zero-shot prompting with decoder-only language models). §.§ The serial-position effect The U-shaped curve we observe in this work has a connection in psychology known as the serial-position effect <cit.>, which states that in free-association recall of elements from a list, humans tend to best remember the first and last elements of the list. The serial-position effect plays a role in understanding how humans develop short- and long-term memory. Observing a serial-position-like effect in LLMs is perhaps surprising, since the self-attention mechanism underlying Transformer LLMs is technically equally capable of retrieving any token from the context. § CONCLUSION We empirically study how language models use long input contexts via a series of controlled experiments on two tasks that require identifying and using relevant information in-context: multi-document question answering and key-value retrieval. We find that language models often struggle to use information in the middle of long input contexts, and that performance decreases as the input context grows longer. We conduct a preliminary investigation of the role of (i) model architecture, (ii) query-aware contextualization, and (iii) instruction-tuning to better understand how each of these factors might affect how language models use context.
Finally, we conclude with a practical case study of open-domain question answering, finding that the performance of language model readers saturates far before retriever recall levels off. Our results and analysis provide a better understanding of how language models use their input context and provide new evaluation protocols for future long-context models. § ACKNOWLEDGMENTS We thank Sewon Min for her help with the AmbigQA dataset. In addition, we thank Eric Wallace and Sang Michael Xie for feedback and discussions that helped improve this work. This work was supported by the Stanford Center for Research on Foundation Models (CRFM), by OpenAI via an API credits grant to the Stanford CRFM, and by Anthropic via the Claude academic access program. § AMBIGUITY IN MULTI-DOCUMENT QA DISTRACTOR DOCUMENTS Following past work on NaturalQuestions-Open <cit.>, we use a Wikipedia dump from late 2018 as our retrieval corpus. However, this standard Wikipedia dump has a small amount of temporal mismatch with the data in NaturalQuestions. For example, consider the question “what nfl team does robert griffin iii play for”. The NaturalQuestions annotated answer is “currently a free agent”. However, the Wikipedia retrieval corpus contains the information that he plays for the “Baltimore Ravens”, since he was released from the team between the Wikipedia dump's timestamp and the NaturalQuestions annotation process. We use the ambiguity annotations of <cit.> to create a subset of unambiguous questions. Experiments on this unambiguous subset of the data show similar results and conclusions as the experiments on the full questions collection (Figure <ref>). § RANDOM DISTRACTORS IN MULTI-DOCUMENT QA We also run multi-document question answering experiments with random Wikipedia documents as distractors, which allows us to ablate the impact of retrieved distractors (hard negatives). Note that in this setting, the document containing the answer can often be identified with simple heuristics (e.g., lexical overlap with the query). Figure <ref> presents the results of this experiment. Although all models have higher absolute accuracy in this setting, they surprisingly still struggle to reason over their entire input context, indicating that their performance degradation is not solely due to an inability to identify relevant documents. § RANDOMIZING DISTRACTOR ORDER IN MULTI-DOCUMENT QA Our prompt instructs the language model to use the provided search results to answer the question. There may be a prior in the pre-training or instruction-tuning data to treat search results as sorted by decreasing relevance (i.e., the documents near the beginning of the input context are more likely to be useful than those at the end). To validate that our conclusions are not simply a byproduct of this bias, we run experiments with the modified instruction “Write a high-quality answer for the given question using only the provided search results (some of which might be irrelevant). The search results are ordered randomly.” In addition, we randomly shuffle the k-1 distractor documents. Figure <ref> presents the results of this experiment. We continue to see a U-shaped performance curve, with performance degrading when language models must use information in the middle of their input contexts.
Comparing the results in <ref> with those when randomizing the distractor order and mentioning such in the prompt, we see that randomization slightly decreases performance when the relevant information is at the very beginning of the context, and slightly increases performance when using information in the middle and end of the context. § PERFORMANCE We evaluate on a subset of 500 random multi-document QA examples with 20 total documents in each input context (Figure <ref>). GPT-4 achieves higher absolute performance than any other language model, but still shows a U-shaped performance curve—its performance is highest when relevant information occurs at the very start or end of the context, and performance degrades when it must use information in the middle of its input context. § CLOSED-BOOK AND ORACLE PERFORMANCE Table <ref> presents language model performance on the closed-book and oracle settings for multi-document question answering. In the closed-book setting, language models are not given any documents in their input context, and must rely on their parametric memory to generate the correct answer. In the oracle setting, language models are given the single document that contains the answer, and must use it to answer the question. This represents an upper-bound on task performance. § TOKEN COUNTS Table <ref>, Table <ref>, and Table <ref> present the average and maximum number of tokens in each of the input contexts for all experimental settings. Note that and use the same tokenizer, and use the same tokenizer, and and use the same tokenizer. Furthermore, the tokenizer is the same as the tokenizer, modulo some additional special tokens that do not appear in our data. As a result, the token counts for these two model families is the same in our experimental settings. § FULL MULTI-DOCUMENT QUESTION ANSWERING RESULTS This section tabulates model performance when evaluated on the multi-document QA task with varying numbers of documents (Figure <ref>). “Index n” indicates performance when the document with the answer occurs at position n + 1, where lower indices are closer to the start of the input context. For example, index 0 refers to performance when the document with the answer is placed at the very start of the context (i.e., first amongst all documents). §.§ 10 Total Retrieved Documents §.§ 20 Total Retrieved Documents §.§ 30 Total Retrieved Documents
http://arxiv.org/abs/2307.02018v1
20230705041401
Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues
[ "Dollaya Hirunyasiri", "Danielle R. Thomas", "Jionghao Lin", "Kenneth R. Koedinger", "Vincent Aleven" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.HC" ]
Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues. § INTRODUCTION Tutoring is among the most highly personalized and consistently impactful interventions known to improve student learning <cit.>. Despite the known positive impacts of tutoring on student learning, there is a shortage of trained tutors, with many available tutors lacking experience and the necessary competency skills to be successful in the field <cit.>. In recent years, although tutor training programs have been developed, most do not provide tutors with specific formative feedback during training, and little research exists on tutors receiving specific feedback on their actual tutoring practices. Recent advances in pre-trained large language models, such as the well-known AI-chatbot ChatGPT, have made it possible to provide specific and explanatory feedback to learners <cit.>. We propose that the use of large language models to provide tutors with effective formative feedback on their actual tutoring is a promising use case. The ability of GPT-4 to accurately evaluate components of praise given to students, which can be determined by comparing it to human expert evaluations, is a critical component of providing effective feedback, and as such, serves as our starting point. Moreover, the accuracy of AI-generated tutor feedback for the purpose of improving tutor learning and performance has not been well researched, if at all.
In this work-in-progress, we used simulated dialogues to assess the capability of GPT-4 in providing accurate feedback to human tutors regarding their delivery of effective praise to students. To this end, the primary research question addressed is: RQ: Can GPT-4 accurately assess components of effective praise given by human tutors to students, and, in particular, what is the comparative accuracy of zero-shot and few-shot chain of thought prompting approaches? § RELATED WORK §.§ High-Quality Feedback Feedback is one of the most powerful influences on student achievement and can significantly impact learning outcomes and performance <cit.>. Effective feedback is described as having many characteristics, particularly: 1) being targeted, linked to specific goals and learning objectives; 2) being progress-oriented and constructive, focusing on the learning process and supporting a growth mindset; 3) being timely, as providing immediate and frequent feedback often benefits students' academic performance <cit.>. However, providing learners with timely, explanatory feedback, or in this case, offering timely feedback to online tutors while they are actively tutoring students, is laborious and expensive when using human evaluators <cit.>. To facilitate the feedback provision process, Demszky et al. <cit.> provide automated, individualized feedback to over one thousand instructors on their teaching sessions within an online computer course. Instructors received the feedback via email within 2-4 days. This automatic, formative feedback tool improved instructors' uptake of student contributions by 27%, with preliminary evidence suggesting it also increased students' satisfaction with assignments and the course itself <cit.>. These promising findings underscore the potential of more timely feedback, occurring either in real time or shortly after a tutoring session, to enhance student contribution and performance. Despite the known positive impact of feedback on educators' performance and the global interest in leveraging large language models (LLMs) for communicative tasks, there is currently a lack of research on the use of LLMs for generating feedback on tutoring. §.§ Tutoring Competencies & Giving Effective Praise There is limited research on the key competencies and components of effective tutoring <cit.>, with many qualities of impactful tutoring challenging to measure or assess (e.g., building a relationship with the student) in practice. The National Student Support Accelerator (2021), a think tank emanating from the Annenberg Institute at Brown University that focuses on disseminating research and advancing developments in tutoring, has created a rubric for evaluating the effectiveness of tutors in facilitating sessions. The rubric contains three main criteria for assessing a tutoring session: 1) The tutor effectively employs tutoring facilitation strategies; 2) The tutor identifies and addresses potential student misconceptions or confusions; and 3) The tutor explains content clearly and correctly. Each criterion is measured on a 1-5 Likert-like scale, from “lacking” to “exemplary”, respectively <cit.>. Our recent research, surveying 18 partnering members across several tutoring organizations, determined that the most important perceived tutoring skills were the ability to engage and motivate students and build successful relationships with them <cit.>.
From this research, we developed a super competency framework called SMART, standing for Social-emotional learning, Mastering content, Advocacy, Relationships, and Technology. Mastering Content, which pertains to a tutor's ability to comprehend mathematical pedagogy and apply effective tutoring skills, was identified as a crucial element of effective tutoring. Within this dimension, there are multiple scenario-based lessons covering a range of content. We selected the lesson titled Giving Effective Praise as our starting point, given its critical role in fostering and maintaining student motivation and engagement. The lesson objectives from Giving Effective Praise state that upon completion of the lesson, tutors will be able to: explain how to increase student motivation by giving effective praise; identify features of effective praise; and apply strategies by responding to students through praise <cit.>. Tutors should strive to praise students for their effort, acknowledging the learning process, and not necessarily the outcome, such as getting the problem correct <cit.>. The five key criteria for productive, process-focused praise used as a rubric in this work state that praise is: 1) sincere, earned, and truthful; 2) specific by giving details of what the student did well; 3) immediate, with praise given right after the student’s action; 4) authentic, not repeated often; and 5) focused on the learning process, not ability <cit.>. Given the known importance of effective praise on student motivation and performance, can large language models like GPT-4 pick up on the use of these strategies when analyzing tutor-student interaction data (i.e., tutor-student chat logs or transcripts)? If so, this would open the door to using large language models, such as GPT-4, to generate timely, impactful, and formative feedback to tutors during their actual tutoring sessions. §.§ Using Large Language Models to Give Feedback Large language models (LLMs) are trained using deep learning to produce text that resembles human writing. Trained on a vast array of sources, such as Wikipedia, webpages, written materials, and practically anything curated on the internet, the text generated by neural LLMs often mirrors the written language of most humans. We focus on ChatGPT using GPT-4, a general pre-trained large multimodal model capable of accepting both image and text inputs. OpenAI <cit.> asserts, “while less capable than humans in real-world scenarios, [GPT-4] exhibits human-level performance on various professional and academic benchmarks.” This current investigation seeks to determine if identifying tutors’ ability to give effective praise to students is an academic benchmark within GPT-4's capabilities. The application of LLMs to provide feedback is a growing research area within education <cit.>, with researchers striving to identify the limits of these models' capabilities. The use of LLMs to provide direct feedback to students, rather than tutors, has been explored by many researchers using various pre-trained models. For example, Jai et al. <cit.> used BART and found that AI-generated feedback was near-human in performance, while Li and Xing <cit.>, employing GPT-based models, concluded that providing emotional support via contextual replies to learners in massive open online courses (MOOCs) was comparable to humans. In a study more closely aligned with our current work, Dai et al. 
<cit.> demonstrated that ChatGPT was more capable than human instructors at generating detailed feedback that fluently summarizes students’ performance. Despite these promising findings involving LLMs' ability to provide feedback to students, there exists very little research on its application to tutor feedback. Thomas et al. <cit.> leveraged ChatGPT to generate synthetic tutor responses from real-life tutoring scenarios within the previously discussed lesson, Giving Effective Praise. Thomas et al. <cit.> found that human-created training sets outperformed AI-generated training sets for creating automated short answer grading systems, with ChatGPT-generated tutor responses often lacking the nuance and variety evident within human-sourced tutor responses. Nevertheless, leveraging ChatGPT to evaluate human tutors' effectiveness in giving praise to students represents an interesting and novel use case. §.§ Prompt Engineering Prompt engineering, also known as in-context prompting, is the strategic creation and fine-tuning of prompts aimed at guiding a language model's behavior to yield specific outcomes. This process is achieved without the necessity of modifying the model's inherent architecture or parameters. As an empirical field, prompt engineering necessitates extensive experimentation and testing, considering the variations in the outcomes generated by identical prompts across different models <cit.>. Chain-of-Thought (CoT) prompting is a technique that breaks down complex, multi-step problems into more manageable, intermediate steps. This process aids language models in following a logical sequence, where each subsequent prompt builds upon the prior one, thus stimulating reasoning. Within the context of CoT prompting, two key methodologies exist: zero-shot and few-shot prompting. Zero-shot CoT prompting is a standalone approach that relies solely on the instructions embedded within the prompt. Conversely, few-shot CoT prompting incorporates examples to instruct the model on generating appropriate responses. Zero-shot and few-shot prompting are two fundamental approaches often championed in numerous large language model (LLM) studies, commonly employed for benchmarking LLM performance <cit.>. § METHOD §.§ Creation of Synthetic Tutoring Dialogues Due to the limited availability of real-life tutor-student dialogues, we used synthetic chat logs generated by prompting GPT-4. While we acknowledge the necessity of validating our findings with real-life dialogues, the current study is useful as a proof of concept and serves as a simulation or model, pending access to real-life tutor-student dialogues. We used GPT-4 to generate 30 synthetic tutor-student dialogues. Among these dialogues, the average number of words per dialogue was 253 (SD = 45.0); the tutor's words per dialogue averaged 180 (SD = 38.6); and the student's words per dialogue averaged 56.8 (SD = 23.7). Due to limited space, we include other prompting strategies and synthetic tutoring dialogues in the digital appendix[<https://github.com/DollayaDollayaDollaya/AIEDWorkshop>]. An example of a tutor-student dialogue generated by GPT-4 is shown in Example 1. §.§ Human Grader Identification of Praise Criteria To evaluate the accuracy of GPT-4, we initially recruited three human graders, each with over five years of teaching experience.
These graders were tasked with identifying effective praise within synthetic tutoring dialogues. Before beginning this task, they each completed a lesson titled Giving Effective Praise. This lesson clearly defines effective praise and trains learners on how to apply it. Additionally, the human graders were provided with a rubric that includes five criteria for identifying the different aspects of praise. This rubric, proposed by <cit.> (introduced in Section 2.2), includes five key criteria, whose notations are given in parentheses, as follows. Praise is: 1) sincere, earned, and truthful (Sincere); 2) specific by giving details of what the student did well (Specific); 3) immediate, with praise given right after the student’s action (Immediate); 4) authentic, not repeated often (Authentic); and 5) focused on the learning process, not ability (Process-focused). To arrive at the final grading for each dialogue, we used majority voting among the human graders. For instance, if two or more graders assessed that a particular chat log did not meet criterion 1 (Sincere), we followed their consensus and regarded that as the ground truth. Finally, we employed Fleiss' Kappa <cit.> to measure the inter-rater reliability among the three human graders (shown in Table 1). §.§ Prompting GPT-4 to Identify Praise Criteria We prompted GPT-4 to identify instances of praise in the dialogues based on the specific criteria provided. Recognizing that the effectiveness of GPT-4 is largely influenced by the prompt engineering strategies used, we implemented two approaches: zero-shot and few-shot Chain of Thought (CoT) prompting. This generated two sets of results. These results were then compared to the assessments made by human graders, using precision, recall, and F1 scores as metrics. Due to space constraints, we have included the zero-shot CoT and few-shot CoT prompts in the digital appendix[<https://github.com/DollayaDollayaDollaya/AIEDWorkshop>]. § RESULTS §.§ Comparison of GPT-4 and Human Grader Performance We compared the results from GPT-4, using both zero-shot CoT and few-shot CoT prompting, with the consensus results from the human graders. The results are presented in Table 2. Both the zero-shot CoT and few-shot CoT approaches performed well in detecting elements of specific praise (i.e., detailing what the student did well) and immediate praise (i.e., given right after the student's action). We posit that the relative straightforwardness and clear nature (i.e., the tutor either delivered praise immediately after the student's action or they did not) of criteria 2 and 3, specific and immediate praise respectively, make them easier to detect by GPT-4 and human graders when present, compared to the remaining criteria. For detecting sincere praise, the zero-shot and few-shot CoT prompting methods showed the lowest agreement between GPT-4 and the human graders, with F1 scores of 0.54 and 0.67, respectively. §.§ Comparison of Zero-shot and Few-shot Prompting The performance of zero-shot and few-shot CoT prompting methods showed a significant degree of similarity. To quantitatively assess the inter-rater agreement between these two approaches, we utilized Cohen's kappa statistical measure. The analysis in Table 3 showed a substantial level of agreement between the zero-shot and few-shot CoT prompting techniques, suggesting a strong consistency in their performance.
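The agreement statistics used throughout this section can be reproduced with standard tooling. The following minimal sketch (in Python, using scikit-learn and statsmodels) shows how the majority-vote consensus, Fleiss' kappa among the graders, the F1 comparison against the consensus, and Cohen's kappa between the two prompting conditions can be computed; the binary per-criterion data layout and the example values below are illustrative assumptions, not the study's actual annotations.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical binary judgments (1 = criterion met) for one praise criterion:
# rows are dialogues, columns are the three human graders.
human = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [1, 1, 1],
                  [1, 0, 1]])

# Majority vote across graders gives the consensus ("ground truth") label per dialogue.
consensus = (human.sum(axis=1) >= 2).astype(int)

# Inter-rater reliability among the three graders (Fleiss' kappa, as in Table 1).
counts, _ = aggregate_raters(human)  # dialogues x categories count table
print("Fleiss' kappa:", fleiss_kappa(counts))

# GPT-4 judgments under the two prompting conditions (also hypothetical).
gpt4_zero_shot = np.array([1, 0, 1, 1])
gpt4_few_shot = np.array([1, 0, 1, 0])

# Agreement of GPT-4 with the human consensus, reported as precision/recall/F1.
p, r, f1, _ = precision_recall_fscore_support(
    consensus, gpt4_zero_shot, average="binary", zero_division=0)
print(f"zero-shot vs. consensus: P={p:.2f} R={r:.2f} F1={f1:.2f}")

# Agreement between the two prompting conditions (Cohen's kappa, as in Table 3).
print("Cohen's kappa (zero- vs. few-shot):",
      cohen_kappa_score(gpt4_zero_shot, gpt4_few_shot))
```

Returning to the agreement observed in the study itself: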
Specifically, there was nearly perfect agreement (93.33%) in identifying authentic and process-focused praise criteria, with substantial agreement in recognizing sincere and specific praise. §.§ Strengths and Weaknesses of GPT-4 Across Praise Criteria In reference to Table 2, it is evident that GPT-4 performed well in identifying specific and immediate types of praise, as indicated by the favorable performance measures (F1 > 0.80). Examples 2, 3, and 4 illustrate selected tutor dialogues and responses generated by GPT-4 using few-shot CoT prompting that align with the majority decision of the human graders. It is worth noting that we chose to highlight criteria 2 (Specific), 3 (Immediate), and 4 (Authentic), which have relatively high F1 scores. Then, we examined instances where GPT-4 disagreed with the majority of human graders, underperforming in its ability to detect different praise criteria. We focused on criteria 1 (Sincere) and 5 (Process-focused), for which GPT-4 received an F1 score of 0.67, lower than the other criteria. In Example 5 below, after the student provided three incorrect responses before eventually arriving at the correct answer, human graders interpreted the subsequent praise as insincere (criterion 1), contending that the student's achievement did not entirely warrant the commendation. In contrast, GPT-4 failed to incorporate this context into its evaluation. It seemingly focused solely on the immediate conversation, noting that the student had provided a correct answer, and concluded that the praise was therefore sincere and deserved. In Example 6, GPT-4 misinterpreted the tutor's praise for the student's efforts. The tutor's compliment, i.e., "You're showing a keen ability to recollect and apply important mathematical principles," was interpreted by GPT-4 as praise for ability, due to the inclusion of the term "ability". However, human graders perceived this compliment as being directed towards the learning process. In this regard, GPT-4's interpretation deviated from the human graders' consensus. § DISCUSSION GPT-4 exhibited proficiency in detecting specific and immediate praise, but it struggled to recognize sincerity. We hypothesize that GPT-4's superior performance in detecting specific and immediate praise is due to the relatively straightforward criteria for these types, while assessing sincerity in praise statements demands more nuanced judgment and perhaps a greater level of social-affective understanding (e.g., politeness <cit.>), which human graders possess. We noticed that it was particularly challenging for GPT-4 to identify sincerity, especially during zero-shot CoT prompting. By including nuanced and varied examples of tutor praise statements, deemed sincere by human graders, in few-shot prompting strategies, we might enhance GPT-4's performance in recognizing this type of praise. Both zero-shot and few-shot CoT prompting exhibited comparable performance.
Zero-shot and few-shot learning methods demonstrated similar results, with both falling short in detecting sincerity in praise (with F1 scores of 0.54 and 0.67, respectively) compared to their performance on other praise criteria. Various techniques for fine-tuning language models exist, particularly for zero-shot learning, such as instruction tuning <cit.>. Therefore, further research into enhancing zero-shot and few-shot learning methods is necessary to improve the performance of both prompting approaches. §.§ Limitations The current study has several limitations. First, the lack of availability of real-life tutor-student conversations is a considerable limitation. Synthetic dialogues, while useful for preliminary investigation, do not entirely capture the complexity and nuances of authentic tutor-student interactions. Second, the sample size of the dialogues used in this study may limit the generalizability of the findings. We used only 30 synthetic dialogues for this study, and increasing this number would likely improve the reliability and robustness of our findings. Third, the few-shot prompts we utilized were relatively simple and included a limited variety of examples. By integrating a wider range of nuanced examples, we might boost GPT-4's capability to match human graders' discernment of praise criteria that are more nuanced and socially sensitive. §.§ Implications for Future Work The present work sets a precedent for potential expansions. Firstly, we aim to address existing limitations by incorporating real-life dialogues, increasing the volume of chat logs, and enhancing the effectiveness of zero-shot and few-shot prompting methods. Secondly, the scope could be broadened by evaluating dialogues using a more comprehensive, high-level tutoring rubric. This would move away from focusing solely on specific tutoring skills such as delivering effective praise to students. As previously discussed, and recommended by the National Student Support Accelerator <cit.> for adoption by tutoring organizations, the holistic tutoring rubric could lay the groundwork for future efforts in crafting LLM prompts. These prompts could then provide timely feedback to tutors regarding their overall performance. Thirdly, apart from investigating the accuracy of GPT-4's performance, we could delve into other aspects, such as its reliability in synthesizing such feedback. § CONCLUSION In this study, we assigned GPT-4 the task of identifying five distinct components of effective praise from synthetic tutor-student dialogues, according to past research determining criteria of effective praise. Our results suggest that GPT-4 performs moderately well in identifying two of these criteria: specific praise (which provides detail on what the student did well) and immediate praise (which is delivered right after the student's action). Conversely, GPT-4 had less success in recognizing instances of process-focused and sincere praise from the tutor. Overall, zero-shot and few-shot chain-of-thought prompting methods performed similarly. However, we anticipate that enhancements to few-shot chain-of-thought prompting techniques, in particular more nuanced and socially responsive examples of the sincere praise criterion, will improve GPT-4's ability to detect praise more closely in line with human graders. Acknowledgments. This work is supported with funding from the Richard King Mellon Foundation (Grant #10851) and the Heinz Endowments (E6291). Any opinions, findings, and conclusions expressed in this material are those of the authors.
Additionally, we thank Sorawit Saengkyongam, Can Udomcharoenchaikit, and Maim Hoque for contributing their thoughts on this research.
http://arxiv.org/abs/2307.01229v1
20230703055429
EmoGen: Eliminating Subjective Bias in Emotional Music Generation
[ "Chenfei Kang", "Peiling Lu", "Botao Yu", "Xu Tan", "Wei Ye", "Shikun Zhang", "Jiang Bian" ]
cs.SD
[ "cs.SD", "cs.AI", "cs.LG", "cs.MM", "eess.AS" ]
EmoGen: Eliminating Subjective Bias in Emotional Music Generation
Chenfei Kang (Shanghai Jiao Tong University, China; equal contribution), Peiling Lu (Microsoft Research Asia; equal contribution), Botao Yu (Nanjing University, China), Xu Tan (Microsoft Research Asia), Wei Ye (National Engineering Research Center for Software Engineering, Peking University, China), Shikun Zhang (National Engineering Research Center for Software Engineering, Peking University, China), Jiang Bian (Microsoft Research Asia). Code: <https://github.com/microsoft/muzic>. Correspondence to: Xu Tan ([email protected]). Keywords: music generation, music emotion, music attributes, supervised clustering, self-supervised learning.
Music is used to convey emotions, and thus generating emotional music is important in automatic music generation. Previous work on emotional music generation directly uses annotated emotion labels as control signals, which suffers from subjective bias: different people may annotate different emotions on the same music, and one person may feel different emotions under different situations. Therefore, directly mapping emotion labels to music sequences in an end-to-end way would confuse the learning process and hinder the model from generating music with general emotions. In this paper, we propose EmoGen, an emotional music generation system that leverages a set of emotion-related music attributes as the bridge between emotion and music, and divides the generation into two stages: emotion-to-attribute mapping with supervised clustering, and attribute-to-music generation with self-supervised learning. Both stages are beneficial: in the first stage, the attribute values around the clustering center represent the general emotions of these samples, which help eliminate the impacts of the subjective bias of emotion labels; in the second stage, the generation is completely disentangled from emotion labels and thus free from the subjective bias. Both subjective and objective evaluations show that EmoGen outperforms previous methods on emotion control accuracy and music quality, respectively, which demonstrates our superiority in generating emotional music. Music samples generated by EmoGen are available via this link[<https://ai-muzic.github.io/emogen/>], and the code is available at this link[<https://github.com/microsoft/muzic/>]. § INTRODUCTION With the development of deep learning, automatic music generation is developing rapidly and attracting more and more interest <cit.>. Due to the importance of emotions for music, emotional music generation is an important and practical task, yet it is still under-explored. Previous work, according to the way of applying emotion signals, can be divided into two types. The first type is to convert emotion labels as embeddings and take them as model input <cit.>. The second type is to train an emotion classifier and apply it at either the model output to guide the decoding process <cit.>, or the latent space of variational autoencoders <cit.> and generative adversarial networks <cit.> to constrain the distribution of latent vectors. However, both of the above two types of work directly use emotion labels as the control signals to generate music sequences in an end-to-end way, which is suboptimal. Emotion labels given by data annotators can be influenced by both objective and subjective factors. Objective factors like tempo and note density are highly associated with music emotions. As for subjective factors, the perceived emotions are highly related to social identities, personalities, instant emotional states of listeners, etc. For example, it is highly possible that a listener thinks a happy song is sad when he/she is in an upset state.
Due to this subjectivity of human emotions, different data annotators may give different emotion labels to the samples with the same emotion, which results in subjective bias in emotion labels. With the inconsistent emotion labels, it is hard for those end-to-end methods to learn the relationship between emotion and music sequences, and accordingly, the models can be deficient in generating music that exactly matches the desired emotion. In this paper, we propose , an emotional music generation system that can eliminate the impacts of subjective bias of emotion labels. Instead of directly mapping emotion labels to music sequences in an end-to-end way, we leverage a set of music attributes that are highly correlated with emotions as a bridge and break down this task into two stages: emotion-to-attribute mapping with supervised clustering, and attribute-to-music generation with self-supervised learning. Specifically, to bridge the gap between emotions and music, the attributes need to be highly correlated with emotions. We design the attribute set by training an emotion classifier on a labeled dataset and selecting those attributes whose feature importance is high. In the emotion-to-attribute mapping stage, we map the emotion to the attribute values of a sample closest to the clustering center, which is obtained by clustering samples with emotion labels and calculating the average attribute values from each cluster. This clustering process is supervised since we use emotion labels to cluster samples into emotion categories. By this supervised clustering, mapped attribute values can represent the general emotion from the samples around the clustering center. Thus, the problem of subjective bias from emotion labels can be eliminated. In the attribute-to-music generation stage, we extract the attribute values from music sequences and train a Transformer-based model with these attributes as the control signals in a self-supervised way. The values of the attributes can be directly extracted from music sequences, so this generative model can learn the relationship between control signals and music without requiring any labeled data. Since this process is completely disentangled from emotion labels, it is not influenced by the subjective bias of emotion labels. With the benefits of the two stages based on supervised clustering and self-supervised learning on avoiding the subjective bias, can achieve a more precise emotion control in emotional music generation. The main contributions of this work are as follows: * We propose , an emotional music generation system that can eliminate subjective bias from emotion labels, which leverages emotion-related attributes as a bridge to generate music with desired emotions by two stages: emotion-to-attribute mapping with supervised clustering and attribute-to-music generation with self-supervised learning. * Experimental results show that outperforms previous methods on emotion control accuracy and music quality. Experiments also demonstrate the ability of to eliminate subjective bias in emotion labels. § RELATED WORK §.§ Emotional Music Generation Emotion-conditioned music generation is developing rapidly in the age of deep learning. According to the way of applying emotion signals, previous work can be divided into two types. The first type is to convert emotion labels as embeddings and take them as model input <cit.>. generate emotional music based on the one-hot emotion label. 
Some work <cit.> add extra emotion tokens into MIDI events to generate music with specific emotions. control music generation conditioned on continuous-valued valence and arousal labels. The second type is to train an emotion classifier and apply it at either model output through heuristic search methods guiding the decoding process <cit.>, or latent space of variational autoencoders <cit.> or generative adversarial networks <cit.> to constrain the distribution of latent vectors. use Genetic Algorithm to optimize the weights of the Long Short-Term Memory (LSTM) to generate music with desired emotions. Some work <cit.> apply search algorithm (e.g., beam search and tree search) to direct music generation with desired emotions. However, both of the above two types directly use emotion labels as the control signals to guide music generation, which ignores the impact of subjective bias of emotion labels as discussed in <ref>. Therefore, it is difficult for existing methods to generate music that matches the desired emotion. §.§ Attribute-Based Controllable Music Generation Music attributes are extracted from music sequences and can be manipulated to control music generation. Previous work attempt to leverage these attributes for controlling the music generation process. These works <cit.> extract attributes like rhythm density, pitch and rhythm variability, and chords and apply a VAE-based framework to control music generation by music attributes. A discriminator is used to control the hidden space distribution to satisfy the attribute conditions. propose MuseMorphose, which adds rhythmic and polyphony intensity into the latent space of VAE to control music generation. propose FIGARO, which designs expert description (such as note density, mean pitch, etc.) and learned description (latent representation) to control music generation with a VQ-VAE system. Directly using these attributes for emotional music generation is not enough, since they either do not consider building relationships between emotions and attributes, or fail to construct a concrete correlation between attributes and emotions, which may result in poor controlling accuracy. § METHOD <ref> shows the pipeline of , which contains two stages: emotion-to-attribute mapping, and attribute-to-music generation, with a set of designed attributes as a bridge. In attribute design, we enumerate and select the set of attributes that are highly correlated with emotions, which can help build a consistent relationship between emotions and music. For the emotion-to-attribute mapping stage, we get the mapped attribute values by choosing the values closest to the clustering center. The mapped attribute values can well represent the general emotions, so as to alleviate the subjective bias from emotion labels. For the attribute-to-music generation stage, the attribute values directly extracted from music sequences are used as control signals for training an autoregressive Transformer based model to generate corresponding music sequences in a self-supervised way. By disentangling the generation process from emotion labels, we can avoid the subjective bias from emotion labels to achieve better control in emotional music generation. We discuss the merits of our system in <ref>. §.§ Emotion-Related Attribute Design Instead of generating music sequences with the conditions of emotion labels, where subjective bias exists, we introduce emotion-related attributes to bridge the gap between emotions and the corresponding music. 
Compared with emotion labels, these objective attributes tell exactly what the corresponding music should be. For example, the tempo value tells what the duration of a beat is, while the key scale states what a set of notes can be used. By directly extracting the values of these attributes from music sequences, we can help build an explicit relationship between emotions and music. Specifically, we collected music attributes from low-level features like pitch statics, chords and vertical intervals, rhythm, and dynamics to high-level features like melodic intervals, instrumentation, and musical texture <cit.>. However, since many of them are irrelevant to emotions, directly using all of them would introduce a lot of noise. Thus, we select attributes that are highly correlated with emotions by training a Random Forest (RF) <cit.> classifier on an emotion-annotated dataset, then picking up the top-k attributes according to the ranking of feature importance as the final attributes set. Through this process, the designed attributes can represent emotional information and help control the music generation. Please refer to <ref> for details of these designed attributes. §.§ Emotion-to-Attribute Mapping To generate music with the desired emotion based on the emotion-related attributes, the emotion label is mapped to the attribute values that represent the general emotion with supervised clustering as shown in <ref>. Specifically, we first extract the values of the selected attributes for each sample in an emotion-annotated dataset. Based on the emotion labels given by the dataset, among the samples of each emotion label, we calculate the mean value for each attribute to obtain the center. Then, the attribute values of the sample that is the closest to the center are used to represent the features of the emotion. This process is supervised since emotion labels are used as clustering guidance to group samples into categories. This is also a clustering process since the samples are grouped in such a way that samples in the same group share similar emotional information, while samples in different groups convey distinct emotions. Through this supervised clustering method, the obtained attribute values should be able to represent the general emotion given this emotion label and avoid the subjective bias coming from emotion labels. §.§ Attribute-to-Music Generation Attribute values can be easily extracted from music sequences, which is much more precise for controlling music generation. The training process is shown in <ref>, we extract the values of the emotion-related attributes from the target music sequence and represent them in a d-dimension vector, we then take this vector as control signals into an autoregressive Transformer based model for generating the corresponding music sequence. The model is trained with the mapped attribute vectors as supervisory signals in a self-supervised way. Through this self-supervised learning step, the learned Transformer model is able to generate music whose attributes are controlled by the input attribute values. When inference, the attribute values mapped in the emotion-to-attribute mapping stage are leveraged as the control signals to guide the music generation process. Since the generation process is completely disentangled from emotion labels, it is not affected by the subjective bias from emotion labels. 
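As a hedged illustration of the two stages just described (not the released implementation), the sketch below first ranks candidate attributes by Random Forest feature importance on an emotion-labelled set and then maps each emotion label to the attribute vector of the real sample closest to that emotion's attribute centroid; array shapes, function names, and hyperparameters such as the number of trees are assumptions made here for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_top_k_attributes(X, y, k=100):
    """X: (n_samples, n_attributes) attribute values extracted from music;
    y: emotion labels (e.g., the four 4Q classes). Returns the indices of the
    k attributes with the highest Random Forest feature importance."""
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1][:k]

def emotion_to_attributes(X_top, y):
    """For each emotion, return the attribute vector of the real sample nearest
    to that emotion's mean attribute vector (the clustering center), i.e. the
    closest-to-center mapping described above, which keeps the mapped values
    on the data manifold rather than averaging them."""
    mapping = {}
    for emotion in np.unique(y):
        cluster = X_top[y == emotion]
        center = cluster.mean(axis=0)
        mapping[emotion] = cluster[np.argmin(np.linalg.norm(cluster - center, axis=1))]
    return mapping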
§.§ Merits of This proposed framework is beneficial for generating music with desired emotion in the following aspects: * Ability to eliminate subjective bias. By leveraging the supervised clustering and self-supervised learning paradigm in the two stages, can eliminate subjective bias from emotion labels to achieve better emotion control accuracy. Specifically, by mapping emotions to emotion-related attributes with supervised clustering, we obtain the values of attributes on behalf of the general emotion. By training the autoregressive Transformer based model with attribute values as control signals in a self-supervised way, we disentangle emotion labels from the generation process and build an explicit relationship between control signals and music sequences. The emotion labels are not directly used in the whole generation process so that we can avoid the subjective bias that exists in emotion labels. * Ability to precisely control generation process. Music attributes are good tools to concretely direct the generation. A single emotion label is too ambiguous to define what the corresponding music should be. For example, it is hard to define what a piece of happy music should be like. In contrast, music attributes are concrete to designate specific aspects of music <cit.>. For example, the tempo value tells exactly the duration of one beat, and the type of scale determines what sets of notes are used in generated music. By simply manipulating the values of the music attributes, we can precisely control the generated music. * Ability to be free from labeled data. can generate emotion-conditioned music without requiring any emotion annotation. Manual annotation is expensive, and there are only a few datasets <cit.> that contain emotion annotations. Unlike the previous methods that require emotion-music paired data for training the generative model, in , emotion labels are only used to determine the emotion-related attributes and the mapped attribute values that represent the general emotion. Once they are determined, they will not be changed. After that, we can simply extract the attribute values on an arbitrary dataset to train the generative model on it with self-supervised learning. Therefore, can be used to generate emotion-conditioned music even if the dataset has no emotion annotations. § EXPERIMENT In this section, we first introduce the experiment setup (<ref>), followed by the comparison with previous methods. After that, we give a comprehensive discussion on how eliminates the subjective bias of emotion labels. Then we show the comprehensive analysis of . Finally, we show the results of applying the framework of to other arbitrary datasets with no annotations. §.§ Experiment Setup Datasets We use altogether three datasets including one emotion-labeled dataset namely EMOPIA <cit.>, and two unlabeled datasets namely Pop1k7 <cit.> and LMD-Piano, where LMD-Piano is constructed by using the samples that only contain piano tracks from the Lakh MIDI (LMD) dataset <cit.>. The information of these datasets is shown in <ref>. EMOPIA uses Russell's 4Q model <cit.> as the emotion classification criterion, which is also leveraged in our evaluation process. EMOPIA with emotion labels is used to determine the designed attributes and the mapped attribute values in the emotion-to-attribute stage. Once they are determined, they are kept unchanged and emotion labels will not be used. It is also used in the fine-tuning stage when compared with previous methods. 
Pop1k7 and LMD-Piano are used for the pre-training stage when compared with previous methods. We randomly split each dataset by 8/1/1 for training/validation/test, respectively. System Configurations We use a REMI-like <cit.> representation method to convert MIDI into token sequences. We apply jSymbolic <cit.> to extract attribute values from music, and train the Random Forest classifier on EMOPIA, then select the top-100 attributes that are most related to emotions as described in <ref>. In the emotion-to-attribute mapping stage (<ref>), supervised clustering is implemented on EMOPIA. In the attribute-to-music generation stage (<ref>), we binarize the mapped attribute vector with the median, which is then fed into a 2-layer feed-forward network to obtain the attribute embedding. It is then added onto the token embedding at the input of an autoregressive Transformer model. We leverage Linear Transformer <cit.> as the backbone model, which consists of 6 Transformer layers with causal attention and 8 attention heads. The attention hidden size is 512 and FFN hidden size is 2048. The max sequence length of each sample is 1280. During training, the batch size is set to 8. We use Adam optimizer <cit.> with β_1 = 0.9, β_2=0.98 and ϵ=10^-9. The learning rate is 1 × 10^-4 with warm-up step 16000 and an inverse-square-root decay. The dropout rate is 0.1. During inference, we apply top-p sampling with ratio p=0.9 and temperature τ=1.0. Compared Methods We compare with two representative methods of different emotion control manners. The first one is Conditional Sampling (CS) <cit.>, which uses extra emotion tokens in the model input as the emotion conditions. The other one is Predictor Upper Confidence for Trees (PUCT) <cit.>, which uses an emotion classifier and a music discriminator trained on labeled data to direct the inference process. Evaluations and Metrics We conduct both subjective and objective evaluations to evaluate . Each model is applied to generate 1000 music pieces, with 250 for each of the four emotions. In subjective evaluation, human scorers are asked to rate each music piece. We report the following subjective metrics: 1) Subjective accuracy: Whether the perceived emotion by subjects is consistent with the emotion label. It represents emotion controllability. 2) Humanness: How similar it sounds to the music composed by a human. It represents the music quality. 3) Overall: an overall score. In objective evaluation, following <cit.>, we use a Linear Transformer-based emotion classifier trained on EMOPIA to predict the emotion label of each generated music piece. Then we calculate objective accuracy by comparing the emotion input for generating music with the predicted emotion class by this classifier. We report the objective accuracy of the classification, which functions as a supplementary metric to the subjective accuracy. For more details about the human rating process and the evaluation metrics, please refer to Appendix <ref>. §.§ Comparison with Previous Methods In order to conduct thorough evaluations, we design two training settings: 1) Setting 1 (): To ensure the music quality of generated music, following previous work <cit.>, we pre-train the models on Pop1k7+LMD-Piano before fine-tuning on EMOPIA. For , we first pre-train the language model with designed attributes as control signals, then fine-tune it on EMOPIA with attributes as control signals. 
For CS, following the work of <cit.>, we pre-train the language model with the control of the emotion token setting to “” as a placeholder, followed by attributes. After pre-training, we finetune the model on EMOPIA with emotion tokens assigned to the placeholder, followed by attributes as control signals. For PUCT, we first pre-train the language model, then fine-tune it on EMOPIA with an extra classification head to get the emotion classifier. We train the music discriminator by fine-tuning the language model with an extra classification head to classify real/fake samples. All of the methods are pre-trained on Pop1k7 and LMD-Piano. 2) Setting 2 (): The training methods in the above setting have their limitations in that they can only generate music similar to the dataset used in the fine-tuning stage. This constrains the ability of , which can naturally leverage arbitrary datasets for emotional music generation. To test this ability, we train the generative model on Pop1k7+LMD-Piano+EMOPIA in the attribute-to-music generation stage, and use the designed and mapped attributes for generating corresponding music with given emotions. Please note that since CS and PUCT require only labeled data in training, they cannot work in this setting. Therefore, we compare with the ground truth, the EMOPIA dataset. The results are shown in <ref>. We can observe that: 1) Compared with CS and PUCT, achieves better performance on all the metrics in . Particularly, has much better emotion controllability on both the subjective and objective accuracy. It demonstrates the superiority of in generating music with designated emotion. Besides, the higher humanness and overall score of indicate that is capable of improving the music quality. 2) In , achieves higher performance on all the metrics than CS and PUCT in . It demonstrates that can not only leverage an arbitrary dataset for emotional music generation but also have fairly good controllability and humanness. We will also show its ability on a dataset of more diverse and multi-instrument music in <ref>. 3) The accuracy of is even higher than that of the ground truth (EMOPIA). This shows that, on the one hand, there are samples with ambiguous emotion labels in the labeled dataset that affect the judgment of their emotions. On the other hand, the two-stage framework of , especially the mapped attribute values from emotions determined by supervised clustering in the emotion-to-attribute mapping stage, can help avoid the subjective bias from emotion labels. Thus, we choose as our basic framework and use it to do further analysis of . §.§ Verification on Eliminating Subjective Bias To demonstrate that can eliminate subjective bias in emotion labels, we conduct experiments to show 1) the existence of subjective bias in a labeled dataset and 2) 's ability to eliminate subjective bias. We use the subjective and objective accuracy described above as the metrics. Existence of Subjective Bias in Emotion Labels Subjective bias exists in emotion labels, which can result in poor controlling performance for end-to-end methods. To prove the existence of the subjective bias in emotion labels, we compare the emotion accuracy of the center samples and that of the boundary samples. Specifically, all the samples of EMOPIA are firstly clustered by emotion labels to get four emotion clusters. Then, we calculate the attribute average in each emotion cluster to get the clustering center. We choose 50 samples that are closest to the clustering center (i.e. 
center samples) and 50 samples that are distant from the clustering center (i.e., boundary samples). We ask 10 listeners to classify the samples into four emotion categories to get subjective accuracy and use the classification model to get objective accuracy. As shown in <ref>, both the subjective and objective accuracy in classifying the center samples is higher than that in classifying the boundary samples, which indicates that subjective bias exists in the dataset, especially in the labels of the boundary samples, and this subjective bias can hinder classification performance. We further validate this by performing t-SNE visualization of the samples in EMOPIA with central mapping and distance analysis of attribute vectors of samples from EMOPIA. The t-SNE visualization shown in <ref> reveals that the samples in EMOPIA fail to be grouped separately. As shown in <ref>, the wider area between two curves indicates better performance in differentiating different groups, however, there is not much distance between the curves calculated from samples of EMOPIA. The above results further prove that subjective bias exists in emotion labels. Subjective Bias Elimination To prove that can eliminate subjective bias in emotion labels, we compare the classification accuracy of the generated center samples and that of the generated boundary samples. Specifically, the center samples are generated by using the attribute values of the center samples extracted in the emotion-to-attribute mapping stage. Similarly, the generated boundary samples use attribute values of the boundary samples. As shown in <ref>, both the subjective and objective accuracy of the center samples are higher than those of the boundary samples, which indicates the effectiveness of the supervised clustering in the emotion-to-attribute mapping stage in eliminating subjective bias. We further validate this by performing t-SNE visualization of the samples generated by with central mapping and distance analysis of attribute vectors of samples generated by . As shown in <ref>, the samples generated by can be clustered into four distinct groups. As shown in <ref>, the intra-class distance of is smaller than that of EMOPIA, which indicates that samples generated by are more similar in emotion expression in each emotion category. Besides, the area between the two curves is much wider than that of EMOPIA, which indicates better performance in differentiating samples from different emotion categories. The above results both show that can eliminate subjective bias from emotion labels. §.§ Comprehensive Analysis In this subsection, we conduct analysis experiments on different modules: 1) Emotion-to-attribute mapping methods; 2) Attribute design methods; 3) Top-k attributes. More experiment implementation details can refer to Appendix <ref>. Emotion-to-Attribute Mapping Methods In the emotion-to-attribute mapping stage, we need to determine a set of attribute values as the mapped attribute values for each emotion category, for which we consider the following methods: 1) Closest: Directly using the attribute values of the sample whose attribute values are closest to the average attribute values of all the samples of the emotion. It is the default method of . 
2) Center: Directly using the average attribute values of all the samples of the emotion as the mapped attribute values; 3) K-Means: Clustering the samples of each emotion with the K-Means clustering algorithm <cit.> and selecting the attribute values of the center of the largest cluster as the mapped ones. From the evaluation results shown in <ref>, we can see that Closest achieves better subjective and objective accuracy than Center and K-Means. Since Closest obtains attribute values out of a real sample in the dataset, it can maintain the original attribute distribution, and thus can achieve higher accuracy. On the contrary, the attribute values obtained by Center and K-Means are not from a real sample, so the value distribution deviates from a real one, which may result in poor control accuracy. As for the music quality, although the humanness score of Center and K-Means is higher than Closest, the difference is not significant. Therefore, we choose Closest as the default mapping method in the emotion-to-attribute mapping stage. Attributes Design We compare altogether four alternatives of the attribute design module: 1) Top-100: Using the top-100 attributes according to feature importance, which is the default method of ; 2) Random: Selecting 100 attributes randomly according to attribute groups described in <ref>. The details of how to select these attributes are described in Appendix <ref>. 3) Manual: Following previous work <cit.>, we use 17 manually designed music attributes that are related to emotions. The results are shown in <ref>. We can observe that: 1) (Top-100) improves subjective accuracy and objective accuracy by more than 35.7% and 16.5% separately compared with Random and Manual, which shows better controllability for our attribute design methods. The humanness score of is also superior to Random and Manual, which demonstrates the better performance of to generate high-quality music. 2) The subjective and objective emotion accuracy of Random is lower than Top-100, which indicates that the randomly selected attributes may not be enough to convey emotion-related information. This is reasonable since without the design to build the relationship between attributes and emotions, it is hard for the model to be trained with the controlling of emotional information. 3) The subjective and objective accuracy of Manual is lower than Top-100, which indicates that designing attributes through prior knowledge cannot well model the relationship between emotions and music. Therefore, we choose the top-100 attributes as the designed attributes of . Different Top-k Attributes We further analyze the influence of the number of designed attributes (i.e., k) on the model performance. Specifically, we vary k in (10,50,100,300,500) and evaluate the model with respect to both controllability and music quality. The evaluation results are shown in <ref>. We can observe that: 1) Top-100 achieves the highest subjective accuracy. With k increasing from 10 to 500, the subjective accuracy increases first and then decreases, which indicates that more attributes can help improve control accuracy, yet too many can harm the controllability and this may be because they have introduced more noises; 2) The objective accuracy of top-300 and top-500 is higher than top-100. 
The reason may be that a large number of attributes can cause the mapping relationship to overfit the labeled datasets and let the model generate music very similar to the ground truth, to which the emotion classifier tends to give a more correct prediction. Due to this matter, we believe that subjective accuracy is more credible than objective one. 3) As k increases, the humanness score generally decreases, which indicates that more attributes would cause lower music quality. This is reasonable because if there are much more attributes, it would be more difficult and more data-scarce for the generative model to learn the mapping from the attributes to the corresponding music, and accordingly cause the loss of music quality. However, top-100 can still have relatively good quality. Combing the performances on both subjective accuracy and music quality, we set k=100 as the default value of . §.§ Application on Multi-Instrument Datasets To evaluate 's ability to generate emotional music on the arbitrary dataset, we conduct experiments of on TopMAGD <cit.>, which is a multi-instrument dataset containing 22535 samples with no emotion annotations. Specifically, since the mapped attributes in the emotion-to-attribute mapping stage are determined, we only need to train the attribute-to-music generation model on TopMAGD. We use subjective accuracy, humanness, and overall as subjective metrics. For details of the subjective experiment, please refer to Appendix <ref>. We compare with the results in <ref>. The subjective accuracy of on TopMAGD is 0.433, and the humanness and overall score are 3.72±0.17 and 3.67±0.15, respectively. We can observe that: Compared with the results in <ref>, training on TopMAGD performs better than CS and PUCT in in control accuracy and music quality. In conclusion, is able to generate music with desired emotion on multi-instrumental datasets. Generated samples are available via this link[<https://emo-gen.github.io/>]. § CONCLUSION In this paper, we propose , an emotional music generation system that leverages a set of emotion-related music attributes as the bridge between emotion and music. divides emotional music generation into two stages: in music-to-attribute mapping stage, map the emotion label to attribute values that can represent the general emotion by supervised clustering; in attribute-to-music generation stage, train the generative model via self-supervised learning without emotion labels. Benefiting from two stages, eliminates the subjective bias in emotion labels, so as to achieve better control accuracy. Experiment results show that is able to generate music with better emotion control accuracy and music quality compared to the previous methods. In the future, we will consider improving or extending in the following aspects: First, selects the sample closest to the attribute center in the emotion-to-attribute mapping stage, which may ignore the diversity of emotions. It is worth exploring how to cluster attribute vectors in fine-grained emotion classes to get more diverse emotional mapping. Second, controls the music generation with song-level attributes globally, we will further explore how to control this process dynamically to achieve emotion transitions between bar, phrase, and section levels. Finally, we expect to extend to more tasks and domains, such as emotion/style-controlled text generation. icml2023 § SELECTED ATTRIBUTES LIST We extract 1495 music attributes from jSymbolic. 
The definition of these music attributes can be found at <https://jmir.sourceforge.net/manuals/jSymbolic_manual/home.html>. We first train a Random Forest classifier on EMOPIA, then select 100 attributes according to their feature importance. The first 10 attributes are shown in <ref>. For more details about selected attributes, please refer to <https://emo-gen.github.io/>. § DETAILS OF EXPERIMENTS §.§ Comparison with Previous Methods We invite 15 participants to evaluate 32 songs which consist of 4 emotion categories for each setting and method. The participant needs to rate music samples on a five-point scale with respect to 1) Valence: Is the music piece negative or positive; 2) Arousal: Is the music piece low or high in arousal; 3) Humanness: How similar it sounds to a piece composed by a human; 4) Overall: An overall score. For objective metrics, we apply the emotion classifier to classify the generated 1000 samples for each method, then we calculate objective accuracy by comparing the emotion input for generating music with the predicted emotion class by this classifier. To further compare the results of each method, we calculated the average valence and arousal scores, respectively. Detail evaluation results for valence and arousal are shown in <ref>. As we can see: * In , outperforms CS and PUCT in hv,lv,ha and la. Thus, controls emotion better than CS and PUCT under this setting. * in Setting 2 controls emotion better than CS and PUCT in Setting 1. Therefore, benefiting from the two-stage framework, can achieve good performance in arbitrary datasets with no annotations. §.§ Verification on Eliminating Subjective Bias Considering that each emotion in EMOPIA contains approximately 250 samples, we select 50 samples from the center and boundary respectively for each emotion category. And for , we generate 1000 samples for the center and boundary separately. Each emotion category contains 250 samples. We invite 10 participants to classify the music samples into 4 emotional quadrants according to Russell's 4Q model<cit.>. Each participant evaluates 16 samples randomly sampled from 1) Center and boundary of EMOPIA; 2) Center and boundary generated by . Samples are evenly distributed in four emotion categories. For objective metrics, we apply the emotion classifier to predict the emotion label of samples from EMOPIA and , then the objective accuracy can be obtained. §.§ Comprehensive Analysis For each compared module and method, we generate 1000 samples with 250 for each emotion category. Then we invite 7 participants to 1) Classify the sample into four emotional categories based on Russell's 4Q model; 2) Rate the humanness score of the sample on a five-point scale. The higher the score is, the more realistic the sample is to a human-composed one. Each participant receives 44 samples evenly distributed in 4 emotion categories, which are divided into 3 groups: 1) Group for 3 compared methods in emotion-to-attribute mapping, which consists of 12 samples; 2) Group for 3 compared methods in attribute design, which consists of 12 samples; 3) Group for 5 values of top-k, which consists of 20 samples. For objective metrics, we apply the emotion classifier to predict the emotion labels of 1000 generated samples and objective accuracy is calculated similarly to Appendix <ref>. 
Details of Attribute Design We present details about another two attribute design methods in <ref>: * Random: Since jSymbolic divides attributes into seven groups (please refer to <https://jmir.sourceforge.net/manuals/jSymbolic_manual/home.html> for detail), we select 100 attributes on average according to the 7 groups. * Manual: Following previous work <cit.>, we choose pitch class histogram, note density, rhythm density, and mean pitch/duration/velocity as manually designed attributes related to music emotion, which form a 17-dimensional attribute vector. §.§ Application on Multi-Instrument Datasets We keep the emotion-to-attribute mapping stage unchanged and train the attribute-to-music generation stage on TopMAGD. We apply to generate 1000 samples with 250 for each emotion category. And we invite 15 participants to rate each sample by Valence, Arousal, Humanness, and Overall score similar to <ref>. Each participant evaluates 4 samples which consist of 4 emotion categories. Then we report subjective accuracy (the same as the calculation rule in <ref>), humanness, and overall score as subjective evaluation metrics. §.§ Discussion on Output Diversity uses only one set of attributes to represent each emotion, which will limit the diversity. The emotion of music contains two levels, one is the emotion felt by most people (i.e., general emotions), and the other is the emotion felt by individuals (i.e., personalized emotions). General emotions are not as diverse as personalized emotions. In this paper, we mainly consider controlling the music generation with general emotions not influenced by subjective bias. However, if needed, EmoGen can also achieve more emotion diversity by mapping other emotions into more sets of attribute values in the emotion-to-attribute mapping stage.
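The multi-set mapping mentioned above can be sketched as follows under stated assumptions (it is not part of the described system): each emotion's samples are clustered with K-Means, and for every cluster the attribute vector of the real sample nearest to the cluster center is kept, so that a single emotion can be rendered with several distinct attribute sets; the number of clusters per emotion is an arbitrary choice here.

import numpy as np
from sklearn.cluster import KMeans

def emotion_to_attribute_sets(X_top, y, n_sets=3, seed=0):
    """Return several representative attribute vectors per emotion by clustering
    that emotion's samples with K-Means and picking, for each cluster, the real
    sample closest to its center (keeping the attribute distribution realistic)."""
    mapping = {}
    for emotion in np.unique(y):
        cluster = X_top[y == emotion]
        km = KMeans(n_clusters=n_sets, n_init=10, random_state=seed).fit(cluster)
        picks = []
        for c in km.cluster_centers_:
            picks.append(cluster[np.argmin(np.linalg.norm(cluster - c, axis=1))])
        mapping[emotion] = np.stack(picks)
    return mapping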
http://arxiv.org/abs/2307.03151v2
20230706172623
Epicyclic frequencies in the equatorial plane around stationary and axially symmetric wormhole geometries
[ "Vittorio De Falco" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "hep-th" ]
Epicyclic frequencies in the equatorial plane around stationary and axially symmetric wormhole geometries
Vittorio De Falco^1,2 ([email protected])
^1 Scuola Superiore Meridionale, Largo San Marcellino 10, 80138 Napoli, Italy, ^2 Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Via Cintia Edificio 6, 80126 Napoli, Italy
August 1, 2023
Epicyclic frequencies are usually observed in X-ray binaries and constitute a powerful astrophysical means to probe the strong gravitational field around a compact object. We consider them in the equatorial plane around a general stationary and axially symmetric wormhole. We first search for the wormholes' existence, distinguishing them from a Kerr black hole. Once observational data on wormholes become available, we present a strategy to reconstruct the related metrics. Finally, we discuss the implications of our approach and outline possible future perspectives.
§ INTRODUCTION A wormhole (WH) is an exotic compact object, characterized by a non-trivial topology featuring no horizons and no physical singularities. Furthermore, it presents a traversable bridge, dubbed the WH neck, connecting two distinct universes or two different regions of the same spacetime <cit.>. This topic is frequently studied both in General Relativity (GR) and in Alternative/Extended Theories of gravity, where the related works can be classified in two macro-research areas: (1) proposing new WH solutions in different gravity frameworks by employing disparate mathematical methods (see e.g., Refs. <cit.>); (2) providing original astrophysical strategies based on current or near-future observational data to look for the detection of WH existence (see e.g., Refs. <cit.>). The fact that these exotic objects have never been observed so far could be related to the existence of particular WHs that reproduce all observational properties of a BH with arbitrarily high accuracy, also known in the literature as black hole (BH) mimickers <cit.>. In order to reveal their existence, it would be very useful to provide tests of gravity in the strong field regime. To this purpose, a helpful astrophysical tool of investigation is represented by the epicyclic frequencies. The term epicyclic derives from the Greek and means beyond the circle. Indeed, such frequencies {ν_r,ν_φ,ν_θ} are physically obtained by linearly perturbing the motion of a test particle in a circular orbit along the radial, azimuthal, and polar directions, respectively. The epicyclic frequencies entail several advantages, because they closely depend on the underlying geometrical background, are produced in the strong field regime, and are frequently found in BH systems <cit.>.
In the literature, it is already possible to find some works on epicyclic frequencies applied to WHs, whose objectives are: (1) understanding the behaviour of a test gyroscope moving towards a Teo rotating traversable WH <cit.>; (2) analysis of quasi-periodic oscillations (QPOs) from an accretion disk around Teo rotating traversable WHs <cit.>; (3) testing observationally the presence of BH mimicker solutions via QPOs <cit.>; (4) investigations of the epicyclic frequencies around Simpson-Visser regular BHs and WHs <cit.>; (5) application of epicyclic orbits in the field of Einstein-Dirac-Maxwell traversable WHs to the QPOs observed in microquasars and active galactic nuclei <cit.>; (6) studies on the epicyclic frequencies around traversable phantom WHs in Rastall gravity <cit.>. In a previous work, we studied the epicyclic frequencies in general static and spherically symmetric WH spacetimes, where we showed the strategy to distinguish between a BH and a WH, and how to reconstruct a WH solution once one is detected <cit.>. In this work, we aim at extending the aforementioned approach to general stationary and axially symmetric WH geometries. Therefore, the paper is structured as follows: in Sec. <ref> we describe the epicyclic frequencies around stationary and axially symmetric WHs; in Sec. <ref> we first describe the procedure to detect possible metric deviations from a Kerr BH and then explain how it is possible to reconstruct the WH solution from the observational data; finally in Sec. <ref> we draw the conclusions. § EPICYCLIC FREQUENCIES IN STATIONARY AND AXIALLY SYMMETRIC WORMHOLES In this section, we first introduce general stationary, axially symmetric, and traversable WH geometries described by the Teo-like metric (see Sec. <ref>) and then we present the formulas of the epicyclic frequencies in the equatorial plane of such spacetimes (see Sec. <ref>). From this section onward, we use geometrical units G=c=1 and the distances will be measured in units of M, being the total mass-energy of the considered compact object generating the underlying gravitational field. §.§ Teo-like wormholes General stationary, axially symmetric, and traversable WHs can be described in spherical-like coordinates (t,r,θ,φ) employing the following Teo-like metric <cit.> ds^2 = -N^2(r,θ) dt^2 + dr^2/[1-b(r,θ)/r] + r^2K^2(r,θ)[dθ^2+sin^2θ(dφ-ω(r,θ) dt)^2], where N(r,θ), b(r,θ), K(r,θ), ω(r,θ) are four unknown functions, which determine the WH spacetime. In particular, we have: N(r,θ) is the redshift function and describes the time properties of the WH; b(r,θ) is the shape function, delineating the WH form when it is embedded in a Euclidean space[In the case of Morris-Thorne (static and spherically symmetric) WHs <cit.>, these solutions are embedded in a three-dimensional Euclidean space, since the metric is invariant with respect to the θ coordinate. Instead, Teo-like WHs should be embedded in a four-dimensional Euclidean space, after having fixed a time instant, which does not spoil the final WH shapes due to their stationary and axial symmetry properties.]; K(r,θ) is the proper radial distance factor, which permits to define the proper radial distance R=rK(r,θ) from the origin of the coordinate system, endowed with the property that ∂ R/∂ r>0; ω(r,θ) is the rotational function devoted to characterizing the frame-dragging effect around the WH.
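For concreteness, the sketch below encodes one illustrative equatorial-plane (θ=π/2) realization of the line element above in symbolic form; the particular choices N=e^{-r_0/r}, b=r_0^2/r, K=1, and ω=2J/r^3 are assumptions made purely for illustration (chosen to respect the asymptotic-flatness and flaring-out requirements recalled next) and are not a solution advocated in this work.

import sympy as sp

r, r0, J = sp.symbols('r r_0 J', positive=True)

# Illustrative (assumed) metric functions on the equatorial plane theta = pi/2.
N     = sp.exp(-r0/r)          # redshift function, N -> 1 as r -> infinity
b     = r0**2/r                # shape function, b(r_0) = r_0 and b <= r for r >= r_0
K     = sp.Integer(1)          # proper radial distance factor
omega = 2*J/r**3               # frame-dragging function, omega -> 0 at infinity

# Equatorial metric components read off from the Teo-like line element above.
g_tt   = -N**2 + r**2*K**2*omega**2
g_tph  = -r**2*K**2*omega
g_phph = r**2*K**2
g_rr   = 1/(1 - b/r)

# Quick checks: asymptotic flatness and the flaring-out combination b - r b' > 0.
print(sp.limit(N, r, sp.oo), sp.limit(omega, r, sp.oo))   # -> 1, 0
print(sp.simplify(b - r*sp.diff(b, r)))                   # -> 2*r_0**2/r, positive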
Equation (<ref>) reduces to the Morris-Thorne metric <cit.> in the limit of zero rotation (i.e., ω(r,θ)→0) and spherical symmetry, which in formulas translates in requiring N(r,θ)→ e^Φ(r), b(r,θ)→ b(r), K(r,θ)→1. Such WHs must fulfill the following properties <cit.>: * for having no horizons the θ-derivatives of N(r,θ), b(r,θ), K(r,θ) evaluated in θ=0, π have to vanish on the rotation axis; * defined r_0>0 as the WH throat, no essential singularities occur if N(r,θ), b(r,θ), K(r,θ), ω(r,θ) are smooth functions everywhere finite for r≥ r_0; * the shape function fulfills: b≤ r, ∂_θ b(r_0,θ)=0 for all θ∈[0,π], b>r∂_r b (flaring-out condition); * asymptotically flatness, i.e., for r→∞, we have N→1,b/r→0, K→1, ω→0; * the metric (<ref>) is valid both in GR and Extended/Alternative theories of gravity. It generally depends on the Arnowitt-Deser-Misner (ADM) mass (or total mass-energy of the system contained in the whole spacetime <cit.>) M, the (dimensionless Kerr spin-like) total angular momentum a, and sometimes from other parameters, which come from the gravity theory to which it belongs and also the employed stress-energy tensor to construct it; * the traversability is achieved by either resorting to quantum mechanical effects, produced by ad hoc exotic stress-energy tensors (see e.g., Refs. <cit.>), or topological arguments, based on standard and (gravitational) curvature fluid stress-energy tensors (see e.g., Refs. <cit.>). The former approach is generally employed in GR, whereas the latter in modified gravity frameworks, presenting more degrees of freedom with respect to GR. §.§ Epicyclic frequencies in the equatorial plane The epicyclic frequencies {ν_r,ν_φ,ν_θ} are normally calculated in terms of the epicyclic angular velocities {Ω_r=2πν_r,Ω_φ=2πν_φ,Ω_θ=2πν_θ}, whose explicit formulas can be obtained in the equatorial plane θ=π/2 by exploiting one of the following equivalent strategies: * employing the conserved specific energy ℰ and angular momentum ℓ along the test particle trajectory, it is possible to write the following expressions ṫ=ṫ(ℰ,ℓ), φ̇=φ̇(ℰ,ℓ), where dot stays for the derivative with respect to an affine parameter along the test particle trajectory. Using the normalization condition for timelike four-velocities g_μνẋ^μẋ^ν=-1, we have g_rrṙ^2+g_θθθ̇^2=𝒱_ eff(r,θ,ℰ,ℓ). For stable circular orbits in the equatorial plane we have ṙ=θ̇=0, which implies 𝒱_ eff=0, whereas r̈=0 and θ̈=0 entails ∂_r𝒱_ eff=0, ∂_θ𝒱_ eff=0. We then have (see Sec. 10.3.2 in Ref. <cit.>) ( r/ t)^2 =1/g_rrṫ^2𝒱_ eff, (θ/ t)^2 =1/g_θθṫ^2𝒱_ eff. We derive Eqs. (<ref>) with respect to the coordinate time t and then consider r=r_0+δ r and θ=π/2+δθ, where δ r and δθ are small perturbations. We linearize the system and obtain harmonic oscillator equations, which provides the expressions of Ω_r,Ω_θ,Ω_φ in terms of the metric <cit.>; * starting from the timelike geodesic equations, we can employ the relativity of observer splitting formalism <cit.>[This technique permits to clearly distinguish between gravitational and inertial contributions. It encompasses a direct connection with the classical description and allows us to reveal the physics behind the symbols we algebraically manipulate.] and the zero angular momentum observers (ZAMOs). 
Therefore, the test particle's position (r,φ) is expressed in spherical-like coordinates, whereas its spatial velocity vector ν is split in the ZAMO frame {e_t̂,e_r̂,e_θ̂,e_φ̂} through (ν,α), where ν=||ν|| is the magnitude of the spatial velocity and α is the azimuthal angle of the vector ν in the e_r̂-e_φ̂ plane measured clockwise from the positive e_φ̂ direction. We finally obtain ν/ t =f_1(ν,α,r), α/ t =f_2(ν,α,r), r/ t =f_3(ν,α,r). We perturb the above dynamical system around a stable circular orbit of radius r_0 endowed with Keplerian velocity (i.e., α_0=0 and ν_0=ν_K(r)[To calculate ν_K, we can employ ν_K=rΩ_K(r,θ), with Ω_k(r,θ) being the Keplerian angular velocity (cf. Eq. (<ref>).]) via a small parameter ε≪1, namely ν=ν_K+εν_1, α=εα_1, r=r_0+ε r_1, Linearizing the dynamical system (<ref>), we have ν_1/ t =f̃_1(α_1,r_0), α_1/ t =f̃_2(ν_1,r_1,r_0), r_1/ t =f̃_3(α_1,r_0). Now, we consider ^2 α_1/ t^2 =∂f̃_2/∂ν_1ν_1/ t+∂f̃_2/∂ r_1 r_1/ t, and substituting Eqs. (<ref>) and (<ref>), it leads to the harmonic oscillator equation ^2 α_1/ t^2+Ω_r^2α_1=0, where we obtain the explicit expression of Ω_r. The azimuthal epicyclic frequency is calculated through the Keplerian angular velocity Ω_K, i.e., Ω_φ≡Ω_K=φ/ t. For determining Ω_θ, we should first introduce the polar angle ψ in the ZAMO frame, measured from the e_θ̂ direction, and then following a similar procedure outlined above for determining Ω_r <cit.>. The epicyclic angular velocities' formulas, evaluated at the angle θ=π/2 and radius r=r_0, are <cit.> Ω_φ =-∂_r g_tφ±√((∂_r g_tφ)^2-(∂_r g_tt)(∂_r g_φφ))/∂_r g_φφ, Ω_r^2 =(g_tt+Ω_φ g_tφ)^2/2g_rr[∂_rr^2(g_φφ/Y)+2ℓ∂_rr^2(g_tφ/Y) +ℓ^2∂_rr^2(g_tt/Y)], Ω_θ^2 =(g_tt+Ω_φ g_tφ)^2/2g_θθ[∂_θθ^2(g_φφ/Y)+2ℓ∂_θθ^2(g_tφ/Y) +ℓ^2∂_θθ^2(g_tt/Y)], where Y =g_ttg_φφ-g_tφ^2, ℓ =-g_tφ+Ω_φ g_φφ/g_tt+Ω_φ g_tφ. § SEARCHING FOR WORMHOLE'S EXISTENCE AND METRIC RECONSTRUCTION The epicyclic frequencies can be normally found in several X-ray binaries, composed by a BH (or a neutron star) and a companion donor star. These systems are characterized by the presence of an accretion disk, strongly emitting in the X-ray energy band, and by frequently flux variabilities on short timescales <cit.>. The latter effects are studied within the Fourier analysis via power-density spectra, which features very fast aperiodic and quasi-periodic variabilities showing (generally) the existence of narrow peaks with a distinct centroid frequencies, also known as QPOs (see Refs. <cit.>, for reviews). Although their origin is still not clear, the cause of their production is associated with the strong gravity's interaction with the motion of the matter around massive compact objects. QPO models share an extensive use of the epicyclic frequencies framed within different theoretical patterns <cit.>. Therefore, once we detect them, we should choose the appropriate theoretical model in order to infer the right values of the epicyclic frequencies. This represents the main criticality of this procedure, because sometimes it could be difficult to pinpoint the right QPO model, or more than one model could be employed (producing a model-degeneracy), or in the worst case no model could be exploited for the available data <cit.>. We stress that this is a promising approach for the availability of actual and also near-future more accurate observational data (see e.g., Refs. <cit.>). In this section, we first describe how to distinguish between a BH and the presence of a WH (see Sec. <ref>). 
If a WH is detected, we propose a methodology to reconstruct the related solution from the observations (see Sec. <ref>). §.§ Method to distinguish between a Teo-like wormhole and a Kerr black hole The technique to distinguish between a Kerr BH and a Teo-like WH consists in detecting metric-departures from the BH geometries in GR. Therefore, if we are able to fit the data on epicyclic frequencies via the Kerr model, then no WH is present; otherwise, a WH may exist. Since there are no epicyclic frequencies' data associated to WHs, we select some WH solutions from the literature. We would like to clarify that differently from the static and spherically symmetric case, where a WH solution can be found relatively easy, since only two unknown functions (i.e., g_tt(r) and g_rr(r)) must be determined, in the stationary and axially symmetric situation more functions and a dependence also from the polar angle θ are involved (i.e., g_tt(r,θ), g_rr(r,θ), g_φφ(r,θ), g_θθ(r,θ), and g_tφ(t,θ)). Therefore, some ansätze are generally invoked in order to restrict the functional space of the solutions. The proposed WH geometries, reported in Table <ref>, are all exact solutions of the field equations in GR, obtained by resorting to different models of exotic fluids. Once we have fixed all free parameters of each WH solution (see Sec. <ref>, where we provide more details on the employed methodology and the displayed simulations), in Table <ref> we calculate the related WH throat r_0, innermost stable circular orbit (ISCO) radius r_ ISCO, and epicyclic frequencies (in the equatorial plane) Ω_φ and Ω_r. Regarding the ISCO radius, it can be computed via the radial geodesic equation <cit.>, which could be very demanding for our WH solutions (similarly as it is done in the Kerr metric). An alternative and simpler manner to calculate r_ ISCO can be achieved by determining the minimum value for which the radial epicyclic angular velocity Ω_r is defined. The knowledge of the ISCO radius is very important, because it permits to preliminarily understand the geometrical properties of a WH spacetime. To distinguish between a Kerr BH and Teo-like WH, we plot in Fig. <ref> the epicyclic angular velocities of the WH solutions reported in Table <ref>. The following comments are in order. For Ω_r, we see that all the WH solutions exhibit the same trend for r≳10M, due to the asymptotically flat condition; whereas for r≲10M, it is evident the presence of large deflection from the GR case, with particular relevance around the Kerr ISCO radius. We note that the WH solutions #1 and #10 behave not adequately for all r-range, while the WH geometry #8 is an example of a BH mimicker solution. We can eventually claim that measurements around the ISCO radius are fundamental to identify possible metric-departures and thus hints for the possible existence of WHs. Instead, looking at the Ω_φ profiles, we immediately recognize that all WH solutions, except #1 and #10, behave similarly. Also in this case, the only way to catch a WH solution can be performed via analyses carried out around the Kerr ISCO radius. Although this astrophysical method is very efficient, there could be the unfortunate case, where slight deviations from BH solutions in GR may occur, but the described procedure may fail in its objective. In this situation, alternative astrophysical methods must be employed. However, the last eventuality should be always taken into account in order to robustly cross check the achieved results. 
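As a concrete illustration of this comparison, a minimal Python sketch is reported below: it evaluates the Kerr epicyclic angular velocities in the equatorial plane, estimates the ISCO as the smallest radius at which Ω_r is defined, and averages the discrepancy with respect to a candidate WH curve between the ISCO and 20M (anticipating the three-step procedure detailed in the next subsection). The sketch relies on the well-known closed-form Kerr expressions for prograde equatorial orbits in units G=c=M=1, rather than on a symbolic evaluation of Eqs. (<ref>)–(<ref>), and the "WH" curve at the bottom is a fictitious BH mimicker, not one of the solutions of Table <ref>.

```python
import numpy as np

def kerr_omega_phi(r, a):
    # Prograde Keplerian angular velocity in the equatorial plane of the Kerr metric (G=c=M=1).
    return 1.0 / (r**1.5 + a)

def kerr_omega_r2(r, a):
    # Squared radial epicyclic angular velocity for prograde equatorial orbits.
    return kerr_omega_phi(r, a)**2 * (1.0 - 6.0/r + 8.0*a/r**1.5 - 3.0*a**2/r**2)

def kerr_isco(a, r_grid=np.linspace(1.01, 20.0, 400000)):
    # ISCO estimated as the smallest radius at which Omega_r^2 >= 0,
    # i.e. the minimum radius where Omega_r is defined.
    return r_grid[kerr_omega_r2(r_grid, a) >= 0.0].min()

def mean_abs_discrepancy(omega_wh, omega_kerr, r_isco_wh, r_isco_kerr, r_max=20.0, n=100):
    # Average |Omega_WH - Omega_Kerr| over n equally spaced radii between the
    # smaller of the two ISCO radii and r_max (radii in units of M).
    r = np.linspace(min(r_isco_wh, r_isco_kerr), r_max, n)
    return np.mean(np.abs(omega_wh(r) - omega_kerr(r)))

if __name__ == "__main__":
    a = 0.3
    r_isco = kerr_isco(a)
    print(f"Kerr ISCO for a = {a}: r_ISCO ~ {r_isco:.3f} M")      # about 4.98 M
    # Fictitious BH-mimicker curve with a mild deviation near the ISCO (illustrative only).
    omega_wh = lambda r: kerr_omega_phi(r, a) * (1.0 + 0.05 * np.exp(-(r - r_isco)))
    disc = mean_abs_discrepancy(omega_wh, lambda r: kerr_omega_phi(r, a), r_isco, r_isco)
    print("mean |Delta Omega_phi| =", disc)
```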
§.§.§ Digression on our methodology and simulations This section better illustrates the methodology pursued in the previous section to distinguish between a Kerr BH and a Teo-like WH, and clarifies some aspects of the simulations shown in Fig. <ref>, which are based on the values reported in Table <ref>. It is important to note that the free parameters of each WH solution have been chosen so that it mimics the Kerr BH geometry as closely as possible. However, they have been calibrated and displayed only for the spin value a=0.3, so it is natural to ask whether this choice remains valid for other spin values. To this end, in Fig. <ref> we have produced two plots, for Ω_r and Ω_φ. The procedure used to generate them consists of three steps, performed for each spin value a and for each WH solution: (1) we calculate the WH ISCO radius, compare it with that of the Kerr BH, and select the minimum of the two; (2) we compute the absolute discrepancy between the epicyclic angular velocities of the WH solution and of the Kerr BH, evaluated at 100 equally spaced points in the interval going from the ISCO radius selected in step (1) to 20M; (3) we take the mean of the values collected in step (2). Figure <ref> is useful because it summarizes the behaviour of the selected WH solutions as a function of the spin. If we generate plots similar to Fig. <ref> for several spin values covering the range [0,1], we find that the WH solutions able to mimic the Kerr BH for a=0.3 do so also for generic values of a. We stress that this is a fortunate circumstance: in the most general case, the parameter sets of the selected WH solutions would have to be re-tuned to mimic the Kerr BH solution for each fixed value of the spin. Even in that worst case, however, our methodology does not fail in its objective, since from an astrophysical perspective any given gravitational system is characterized by one definite value of the spin a. The present broad discussion, which contemplates disparate configurations, is aimed rather at theoretically exploring how the selected WH solutions change in terms of their parameters. §.§ Reconstruction of wormhole solutions from the observational data Once a WH is detected, it is fundamental to have a strategy for reconstructing the WH solution from the observational data. We have seen that in the static and spherically symmetric case there is a balance between the available equations (i.e., Ω_r and Ω_φ) and the unknown functions (i.e., g_tt(r) and g_rr(r)) <cit.>. In our case, instead, there are more unknown functions than available equations. To simplify the problem, we have already restricted the metric to the equatorial plane θ=π/2, so that the independent metric components reduce to four functions of r only. Therefore, we need to complement the two constraints provided by the epicyclic frequencies with two extra conditions. Before starting, we adopt the following definitions: g_tφ =-r^2ω(r), g_tt =-N^2(r)-g_tφω(r), g_rr =[1-b(r)/r]^-1, g_φφ =r^2K^2(r). We first present a procedure to reconstruct ω(r) (see Sec. <ref>) and K(r) (see Sec. <ref>) via some astrophysical techniques. Then, N(r) and b(r) can be determined by exploiting the reconstructed ω(r) and K(r) together with the data on the epicyclic frequencies (see Sec. <ref>). 
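As a numerical illustration of these definitions, the sketch below maps a candidate quadruple (N, b, ω, K) to the equatorial metric components and evaluates Ω_φ through Eq. (<ref>), replacing the analytic radial derivatives with central finite differences; the functions chosen at the bottom are purely illustrative and do not correspond to any of the solutions of Table <ref>.

```python
import numpy as np

def metric_equatorial(r, N, b, omega, K):
    """Equatorial (theta = pi/2) metric components written in terms of the four
    free functions N(r), b(r), omega(r), K(r), following the definitions above."""
    g_tphi = -r**2 * omega(r)
    g_tt = -N(r)**2 - g_tphi * omega(r)
    g_rr = 1.0 / (1.0 - b(r) / r)
    g_phiphi = r**2 * K(r)**2
    return g_tt, g_tphi, g_rr, g_phiphi

def omega_phi(r, N, b, omega, K, h=1e-6, prograde=True):
    """Orbital angular velocity Omega_phi from the radial derivatives of the metric,
    with central finite differences standing in for the analytic derivatives."""
    gtt = lambda x: metric_equatorial(x, N, b, omega, K)[0]
    gtp = lambda x: metric_equatorial(x, N, b, omega, K)[1]
    gpp = lambda x: metric_equatorial(x, N, b, omega, K)[3]
    d = lambda f: (f(r + h) - f(r - h)) / (2.0 * h)
    s = 1.0 if prograde else -1.0
    return (-d(gtp) + s * np.sqrt(d(gtp)**2 - d(gtt) * d(gpp))) / d(gpp)

# Purely illustrative choices (asymptotically flat, slowly rotating), not a solution of Table <ref>.
M_, a_ = 1.0, 0.3
N_ = lambda r: np.exp(-M_ / r)
b_ = lambda r: 2.0 * M_ * np.ones_like(np.asarray(r, dtype=float))
om_ = lambda r: 2.0 * M_ * a_ / r**3
K_ = lambda r: np.ones_like(np.asarray(r, dtype=float))
print(omega_phi(np.linspace(3.0, 20.0, 5), N_, b_, om_, K_))
```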
§.§.§ Reconstruction of ω(r) Astrophysically, the data points on the ω(r) function can be acquired by measuring the frame-dragging effect at different radii. The sampled nodes could be gathered by adopting, for example, these strategies: line emission from an accretion disk <cit.>, QPOs <cit.>, and comparison between the numerical simulations of an accretion disk and the image provided by the Event Horizon Telescope (EHT) <cit.>. Once, we collect them, we need to postulate a fitting function for reconstructing ω(r). To this end, it is useful to list some acceptable requirements: (1) ω(r)>0 (ω(r)<0) for positive (negative) values of a; (2) in modulus, it is a monotone decreasing function; (3) in the weak field limit, it behaves like ω(r)≈ 2Ma/r (as it also occurs in the Kerr metric). A reasonable and handy functional form of ω(r), meeting the aforementioned conditions, could be ω(r)=2Ma[r^α+∑_j=0^α-1a_j r^j-1/r^α+1+∑_k=0^α b_k r^k], where the coefficients a_j and b_k are real numbers (encoding the dependence from the WH mass, WH spin, and possibly other parameters). We could assume that α∈[1,3]⊂ℕ, taking inspiration from the WH solutions provided in Table <ref>. Let us choose as general form that for α=3, which contains seven free parameters. A simpler way to reduce the complexity of the problem could be to Taylor-expand Eq. (<ref>) for r→∞, having thus ω(r)=2Ma[1/r+c_0/r^2+c_2/r^3], which involves only two free parameters (i.e., c_0,c_1). A strategy could be to first fit the data with Eq. (<ref>) in order to have a first rough estimation. Then, the analysis could be refined by employing Eq. (<ref>). We clarify that for the lack of observational data, we are just assuming a functional form of ω. Equation (<ref>) contains six parameters, which can be further reduced depending on the available data and how they distribute. For example, one can truncate the series to lower orders or fixing some coefficients to certain numerical values. However, in this theoretical speculation we prefer to keep the general form, which could be further handled. §.§.§ Reconstruction of K(r) The scheme to reconstruct K(r) is more complicate, because this function does not have a direct physical effect as ω(r). In this case, it is not straightforward to determine the data points to be fitted. However, they could be constructed as follows: (1) measuring some proper radial distances {R_i}_i=1^ℳ with ℳ>1, like for example the photonsphere, the ISCO radius, and other regions obtained via the techniques already outlined for ω(r) <cit.>; (2) we can associate to each distance R_i from point (1) the related radius r_i, obtained by considering that the compact object is described by the Schwarzschild metric, whose mass can be estimated already at point (1); (3) we gather together the steps carried out in (1) and (2) to eventually build up the nodes {r_i,K_i≡ R_i/r_i}_i=1^ℳ. For wisely restricting the functional space to search for K(r), we remind that it is related to the proper radial distance R(r,θ)=rK(r,θ) and must fulfill the following properties: (1) since R(r)>0 and r>0, we have K>0; (2) K(r)→ 1 for r→+∞; (3) K(r) must be finite, positive, and monotone decreasing everywhere outside the WH throat; (4) ∂ R(r)/∂ r>0 implies 0>K'(r)>-K(r)/r, where from now on the prime will stay for the derivative with respect to radial coordinate r. We emulate the functional form of the K(r) function from the Kerr metric, which reads as K_ Kerr(r)=√(r^3+a^2(r+2M)/r^3). 
Indeed, we hypothesize that a possible general form of K(r) for Teo-like WHs could be K(r)=(r^β+d_0/r^β+d_1+∑_i=0^𝒩e_i/r^β+1+i)^1/(2γ). This expression has 𝒩+5 free parameters, namely {β,d_0,d_1,γ,e_0,…,e_𝒩} with d_0>d_1, β>1, and γ≥1. Equation (<ref>) can be further simplified by setting: γ=1 to reduce the complexity of the ensuing fitting function and also the number of free parameters; 𝒩=2, because higher-order terms strongly decrease, giving just tiny contributions. These further assumptions entail K(r)=(r^β+d_0/r^β+d_1+e_0/r^β+1+e_1/r^β+2+e_2/r^β+3)^1/2. In this way, we are left with only six free parameters. A further helpful simplification could be in considering an asymptotic expansion of Eq. (<ref>), namely K(r)=1+A/r+B/r^2+C/r^3, where we reduce to four free parameters. We use the same approach devised for ω(r), namely we first fit the data via the function (<ref>). Then, we ameliorate our analysis by exploiting Eq. (<ref>) to obtain more precise results. As discussed at the end of Sec. <ref>, also in this case, we prefer to keep the general form of Eq. (<ref>), which could be useful for eventual further manipulations. §.§.§ Reconstruction of N(r) and b(r) Once we know ω(r) and K(r), we are able to reconstruct N(r) and b(r) via the epicyclic angular velocities {Ω_φ,Ω_r}. We assume they are sampled in n values {x̅_i}_i=1^n contained in the interval [r_1,r_2], which we split in n+1 equally spaced points (i.e., r_1≡ x_0<…<x_n-1<x_n≡ r_2) such that x̅_i∈[x_i-1,x_i] for every i=1,…,n. Therefore, from Eq. (<ref>) we obtain (N^2/2)' =(g_φφ')^2Ω_φ^2+2g_tφ'g_φφ'Ω_φ/2g_φφ' -(g_tφ'ω+g_tφω'/2). Discretizing this equation, we have N^2(x̅_i)/2 =N^2(r_1)/2+∑_i=1^N[(g_φφ'(x̅_i))^2Ω_φ^2(x̅_i)/2g_φφ'(x̅_i) +2g_tφ'(x̅_i)g_φφ'(x̅_i)Ω_φ(x̅_i)/2g_φφ'(x̅_i)-g_tφ'(x̅_i)ω(x̅_i)/2 -g_tφ(x̅_i)ω'(x̅_i)/2](x_i-x_i-1). The above expression can be entirely calculated, since we know the functional form of ω(r) and K(r). The only unknown value is N(r_1), which can be estimated by following the same scheme devised in Ref. <cit.>. From this first step, we have the following points {x̅_i,N(x̅_i)}_i=1^n to be fitted in order to reconstruct N(r). From Eq. (<ref>), we have Z =(g_tt+Ω_φ g_tφ)^2/2[∂_rr^2(g_φφ/Y)+2ℓ∂_rr^2(g_tφ/Y) +ℓ^2∂_rr^2(g_tt/Y)], b(x̅_i) =∑_i=1^N x̅_i(1-Ω_r(x̅_i)/Z(x̅_i)). By fitting the points {x̅_i,b(x̅_i)}_i=1^N, we reconstruct b(r). In this case, we do not provide some general expressions for both the fitting functions, since they can be, in general, of any form. In addition, they are important for characterizing the WH solution and the gravity theory from which they come. Therefore, we list only some general constraints, which these functions must fulfill: * N(r) must be a positive monotone decreasing function, which asymptotically tends to 1. For recovering the Newtonian theory in the weak field limit, we have that for large radii N(r)→√(1-2M/r); * b(r) must be a positive monotone increasing function such that b(r)< r and asymptotically it should behave like b(r)/r→0. Finally, the flaring out condition imposes that the derivative must satisfy b'(r)<b(r)/r<1. For recovering the Newtonian theory in the weak field limit, we have that for large radii b(r)→2M. §.§ Conclusions In this paper, we have considered the epicyclic frequencies in the equatorial plane around general stationary, axially symmetric, and traversable WHs, modeled by the Teo-like metric. 
We have first described the general properties of this class of WHs and then we have written the formulas of the epicyclic frequencies in terms of the metric components (see Sec. <ref>). Subsequently, we have used the formulas of the epicyclic frequencies for detecting the eventual presence of a WH (see Sec. <ref>). Since we do not have yet data on WHs, we have considered some WH solutions proposed in the literature, see Table <ref>. For each WH, we have also calculated the WH throat r_0 and the ISCO radius r_ ISCO, as well as the explicit expressions (in the equatorial plane) of Ω_φ and Ω_r (once the free parameters have been fixed), see Table <ref>. In Fig. <ref> we have shown the profiles of the epicyclic frequencies compared to those obtained in the Kerr metric. From these plots we deduce that analyses carried out around the Kerr ISCO radius are fundamental to highlight possible metric-departures from GR. Our study has been carried out for a fixed value of the spin. However, we have verified also that by changing the spin values, the selected WH solutions behave similarly to the displayed case (see Fig. <ref> and Sec. <ref>, for details). Finally, in Sec. <ref> we present a strategy to reconstruct the WH solution once the observational data on WHs will be available. Since there are four unknowns {N(r),b(r),ω(r),K(r)} and only two equations {Ω_φ,Ω_r}, we need two extra constraints. We propose some procedure to reconstruct ω(r) and K(r). Regarding the function ω(r), we first construct the observational data via the measurement of the frame-dragging effect in some radii and then we fit them via some selected functions (see Sec. <ref>). Instead, for the function K(r) the reconstruction process is more complex, especially for the assembly of the observational data. Also in this case, we are able to select some general functional forms of K(r) for fitting the data (see Sec. <ref>). In the last part, we use the data on epicyclic frequencies and the explicit expressions of Ω_φ(r) and Ω_r(r), together with the analytical expressions ω(r) and K(r), to reconstruct also N(r) and b(r). We would like to stress that we have proposed a general strategy, which could be improved in terms not only of the construction of the data, but also in terms of the fitting functions (depending on the given nodes). This work can be applied not only to WHs, but also to investigate other compact objects. Furthermore, the capacity to detect metric-departures around the ISCO radius is extremely important for providing tests of gravity within GR or Extended Theories of gravity. In particular, the epicyclic frequencies permits to easily reconstruct from the data either a WH metric or also a BH solution framed in another gravity theory different from GR. As remarked also in this paper, sometimes it could be difficult to detect the WH solution or to reconstruct its metric by only exploiting the epicyclic frequencies. Therefore, it is always useful to complement this approach with other astrophysical methods in order to have more solid results. As future perspectives, we aim at extending this strategy to the whole three-dimensional space around stationary and axially symmetric WH geometries, where the role of the polar epicyclic angular velocity becomes extremely useful. 
We envisage the following issues to be addressed: (1) the Teo metric (<ref>) must be modified, since five unknown functions (one per independent metric component) are needed to faithfully model WHs in the full three-dimensional space; (2) Ω_θ(r,θ) must be determined outside the equatorial plane, since its formula does not coincide with Eq. (<ref>), which is valid only in the equatorial plane; (3) all metric components become functions of (r,θ), so samples must be collected along both the r and θ directions; (4) the fitting procedures take place in the three-dimensional space, where the nodes must be interpolated by two-dimensional surfaces. § ACKNOWLEDGEMENTS V.D.F. thanks Gruppo Nazionale di Fisica Matematica of Istituto Nazionale di Alta Matematica for the support. V.D.F. acknowledges the support of INFN sez. di Napoli, iniziative specifiche TEONGRAV.
http://arxiv.org/abs/2307.00743v1
20230703041620
Joint Power Allocation and Beamforming for Active IRS-aided Directional Modulation Network
[ "Rongen Dong" ]
cs.IT
[ "cs.IT", "math.IT" ]
Joint Power Allocation and Beamforming for Active IRS-aided Directional Modulation Network Rongen Dong Rongen Dong is with the School of Information and Communication Engineering, Hainan University, Haikou, 570228, China. August 1, 2023 ====================================================================================================================================================== To boost the secrecy rate (SR) of the conventional directional modulation (DM) network and overcome the double fading effect of the cascaded channels of passive intelligent reflecting surface (IRS), a novel active IRS-assisted DM system is investigated in this paper. Aiming to maximize the SR, two power allocation (PA) strategies, called maximizing SR based on fractional programming (FP) (Max-SR-FP) and maximizing SR based on derivative operation (DO) (Max-SR-DO), are proposed by jointly designing the PA factors, beamforming vector, and phase shift matrix of IRS, subject to the power constraint at IRS. The former with higher performance employs the FP and successive convex approximation (SCA) algorithms to design the confidential message PA factor and the total PA factor at the base station, and the SCA algorithm is also utilized to design the beamforming vector and the phase shift matrix of the IRS. The latter with lower complexity adopts the DO, and equal amplitude reflecting (EAR) and general power iterative (GPI) methods to solve them, respectively. The simulation results show that compared with the benchmark PA schemes, both the proposed PA schemes achieve a significant SR performance improvement. Moreover, the SR gap between two proposed schemes decreases gradually with the increases of the number of IRS phase shift element. Directional modulation, secrecy rate, active intelligent reflecting surface, power allocation § INTRODUCTION The broadcast nature of wireless communication makes the confidential message vulnerable to eavesdropping by the illegal users. Directional modulation (DM), as an advanced and promising physical layer security technology, has attracted the research interest of a wide range of researchers<cit.>. DM provides security via directive and is suitable for line-of-sight (LoS) channels such as millimeter wave, unmanned aerial vehicle, intelligent transportation, and satellite communication<cit.>. The main ideas of DM are as follows: in the LoS channel, DM transmits confidential message to legitimate user along the desired direction via beamforming vector, and interferes with illegal user eavesdropping by sending artificial noise (AN) in the undesired direction, hence enhancing the secure performance of the system<cit.>. So far, the research for DM technology is mainly focused on the radio frequency frontend and baseband. To enhance the secrecy rate (SR) of the DM network with a eavesdropper, in <cit.>, in accordance with the convex optimization method, a sparse array of DM was synthesized, and the proposed approach achieved better flexibility in terms of control security performance and power efficiency. A DM network with hybrid active and passive eavesdroppers was considered in <cit.>, and a scheme using frequency division array with assisted AN technique at the transmitter to achieve secure transmission with angle-range dependence was proposed. 
Unlike the single legitimate user networks above, the authors in <cit.> investigated a multi-legitimate user DM network and designed a security-enhancing symbol-level precoding vector, which outperformed the benchmark method in terms of both the power efficiency and security enhancement. The multi-beam DM networks were investigated in <cit.> and <cit.>, and a generalized synthesis method and an AN-aided zero-forcing synthesis method were proposed by the former and the latter to enhance the system performance, respectively. However, the above mentioned works mainly focus on the scenario where the legitimate user and the eavesdropper have different directions. To ensure secure transmission of the system when the eavesdropper was in the same direction as the legitimate user, the secure precise wireless transmission DM system were investigated in <cit.> and <cit.>, which sent confidential message to a specific direction and distance to ensure the secure wireless transmission. With the development of wireless communication, the demand for network increases dramatically<cit.>. Using a large number of active devices will lead to serious energy consumption problems, and the emergence of intelligent reflecting surface (IRS) provides a novel paradigm to solve this problem. IRS is a planar array of large numbers of passive electromagnetic elements, each of which is capable of independently adjust the amplitude and phase of the incident signal<cit.>. Thanks to this ability, the signal strength at the receiver can be significantly enhanced by properly tuning the reflected signal. Recently, various wireless communication scenarios assisted by IRS have been extensively investigated, including the multicell communications <cit.>, unmanned aerial vehicles communications<cit.>, simultaneous wireless information and power transfer (SWIPT) network<cit.>, non-orthogonal multiple access network<cit.>, and wireless-powered communication network<cit.>. Given that the advantages of IRS in wireless communication, in recent years, the IRS-assisted DM network has also been investigated. With the help of IRS, the DM can overcome the limitation of being able to transmit only one bit stream and significantly enhance the SR performance. In <cit.>, an IRS-aided DM system was considered, and two confidential bit streams were transmitted from Alice to Bob at the same time. Based on the system model of <cit.>, in <cit.>, to enhance the SR performance, two low-complexity algorithms were proposed to jointly design the active and passive beamforming vectors of the IRS-assisted DM network. Moreover, a scenario in which Bob and Eve with the same direct-path and different reflect-path was considered in <cit.>, and the genetic algorithm and alternating optimisation criterion were employed to solve the maximum SR non-convex problem when the Eve's location information was available. The above works showed that the passive IRS can boost the SR performance of the conventional DM network. However, the “double fading” effect that accompanies passive IRS is inevitable, which is caused by the fact that the signal reflected through the IRS needs to pass through the transmitter-IRS and IRS-receiver cascade links<cit.>. To overcome this physical limitation, an emerging IRS structure, named active IRS, has been proposed. In recent years, researchers have investigated various wireless communication scenarios with the help of active IRS<cit.>. 
For example, to maximize the rate of IRS-aided downlink/uplink communication system, the placement of the active IRS was investigated in <cit.>, which revealed that the system rate was optimal when the active IRS was placed close to the receiver. An active IRS-assisted single input multiple output network was considered in <cit.>, an alternating optimization approach was proposed to obtain the IRS reflecting coefficient matrix and received beamforming, which achieved the better performance compared to the passive IRS-assisted network with the same power budget. An active IRS-aided SWIPT network was proposed in <cit.>, an alternating iteration method was employed to maximize the weighted sum rate, and the higher performance gain was achieved. The above works presented the benefits of active IRS for wireless network performance gains. Motivated by the discussions above, to further enhance the SR performance of the passive IRS-assisted DM system, an active IRS-assisted DM network with an eavesdropper is considered in this paper. Given that the beamforming and AN powers of the base station (BS) and IRS power are subject to the system's total power constraint, to investigate the impact of the power allocation (PA) among them on the system performance, we focus on maximizing the SR by jointly optimizing the PA factors, transmit beamforming vector, and phase shift matrix of active IRS, which is different from the PA strategy of the conventional DM network without IRS in <cit.>. To the best of the author's knowledge, This is the first work to investigate PA between BS and IRS in the active IRS-assisted wireless network. The main contributions of this paper are summarized as follows. * To enhance the SR performance of the conventional DM network, a novel DM network with the introduction of active IRS is proposed in this paper. We formulate a SR maximization problem by jointly optimizing the PA factors, beamforming vector, and the IRS phase shift matrix for the active IRS-aided secure DM system in the presence of an Eve, subject to the power constraint at IRS. Simulation results show that optimizing the PA factors can significantly enhance the SR. When the number of phase shift elements of IRS is 64, the proposed active IRS-assisted DM schemes can harvest up to 11.9% SR gain over the equal PA scheme with no PA optimization. * To maximize the SR and explore the impact of PA on performance improvement, a high-performance alternating optimization PA scheme, called maximum SR based on fractional programming (FP) (Max-SR-FP), is proposed to tackle the formulated optimization problem by jointly designing the PA factors, transmit beamforming, and phase shift matrix of IRS. In this scheme, we decompose the original problem into four tractable subproblems firstly. Next, the FP and successive convex approximation (SCA) algorithms are employed to derived the PA factors, and the beamforming vector and phase shift matrix of IRS are also derived by the SCA algorithm. Finally, these subproblems are optimized alternately until convergence. * Given the high computational complexity of the proposed Max-SR-FP scheme, a low-complexity alternating iteration scheme, named maximum SR based on derivative operation (DO) (Max-SR-DO), is proposed to maximize the SR. First of all, by utilizing the DO criterion, the closed-form expressions of the PA factors are derived. Then, we divide the phase shift matrix of IRS into two parts, i.e., amplitude and phase, and solve them separately. 
The amplitude of the active IRS is computed based on the criteria of equal amplitude reflection (EAR), and the phase of the active IRS is addressed by the general power iterative (GPI) approach. From the simulation results, it is clear that the SRs harvested by both the proposed schemes are higher than those of the benchmark PA schemes. In addition, when the number of active IRS phase shift elements tends to large-scale, the difference in terms of SR between these two proposed schemes is trivial, and they both outperform the benchmark PA schemes. The remainder of this paper is organized as follows. We describe the system model of active IRS-assisted DM network and formulate the maximum SR problem in Section <ref>. Section <ref> introduces the proposed Max-SR-FP scheme. The proposed Max-SR-DO scheme is described in Section <ref>. The numerical simulation results and conclusions are given in Section <ref> and Section <ref>, respectively. Notations: in this work, the scalars, vectors and matrices are marked in lowercase, boldface lowercase, and uppercase letters, respectively. Symbols (·)^T, (·)^*, (·)^H, ∂(·), Tr(·), {·}, diag{·}, and blkdiag{·} refer to the transpose, conjugate, conjugate transpose, partial derivative, trace, real part, diagonal, and block diagonal matrix operations, respectively. The sign |·| stands for the scalar's absolute value or the matrix's determinant. The notations I_N and ℂ^P× Q refer to the identity matrix of N× N and complex-valued matrix space of P× Q, respectively. § SYSTEM MODEL As illustrated in Fig. 1, we consider an active IRS-assisted secure DM network, where the BS (Alice) sends confidential message to the legitimate user (Bob) with the assistance of active IRS, but there is a risk that it may be eavesdropped by the illegal user (Eve). There are N antennas at the BS and single antenna at both Bob and Eve, there are M reflection elements on the active IRS with tunable amplitude and phase. In this paper, it is assumed that the active IRS reflects signal only once and there exists the line-of-sight channels. Moreover, all channel state information is assumed to be available owing to the channel estimation. The transmitted signal at Alice is expressed as s=√(β l P)vx+√((1-β) l P)T_ANz, where P stands for the total power, β∈(0, 1] and (1-β) refer to the PA parameters of the confidential message and AN, l∈ (0, 1) means the PA factor of the total power allocated to the BS, v∈ℂ^N× 1 and x refer to the beamforming vector and confidential message intent to Bob, they satisfy v^Hv=1 and 𝔼[|x|^2]=1, respectively, T_AN∈ℂ^N× N and z∈ℂ^N× 1 represent the projection matrix and vector of AN, they meet Tr(T_ANT^H_AN)=1 and z∼𝒞𝒩 (0, I_N), respectively. Given the existence of path loss, the received signal at Bob is formulated as y_b =(√(g_ab)h^H_ab+√(g_aib)h^H_ibΨH_ai)s+ √(g_ib)h^H_ibΨn_r+n_b =√(β l P) (√(g_ab)h^H_ab+√(g_aib)h^H_ibΨH_ai)vx+    √((1-β) l P)(√(g_ab)h^H_ab+√(g_aib)h^H_ibΨH_ai) T_ANz+    √(g_ib)h^H_ibΨn_r+n_b, where g_ab and g_ib stand for the path loss parameters of Alice-to-Bob and IRS-to-Bob channels, g_aib=g_aig_ib means the equivalent path loss parameter of Alice-to-IRS and IRS-to-Bob channels, Ψ=diag{ψ_1, ⋯, ψ_m, ⋯, ψ_M}∈ℂ^M× M and ψ=[ψ_1, ⋯, ψ_m, ⋯, ψ_M]^H refer to the reflection coefficient matrix and vector of IRS, ψ_m=α_ie^jϕ_i, and α_i and ϕ_i are the amplitude and phase, respectively. 
n_r ∼𝒞𝒩 (0, σ^2_rI_M) and n_b ∼𝒞𝒩 (0, σ^2_b) mean the complex additive white Gaussian noise (AWGN) at IRS and at Bob, respectively, h^H_ab∈ℂ^1× N, h^H_ib∈ℂ^1× M, and H_ai=h_iah^H_ai∈ℂ^M× N denote the Alice-to-Bob, IRS-to-Bob, and Alice-to-IRS channels, respectively. It is assumed that h_tr=h(θ_tr) for simplicity, and the normalized steering vector is h(θ)Δ=1/√(N)[e^j2πΦ_θ(1), …, e^j2πΦ_θ(n), …, e^j2πΦ_θ(N)]^T, where Φ_θ(n)=-(n-N+1/2)d cosθ/λ, n=1, 2, …, N, θ represents the direction angle of the signal departure or arrival, n stands for the antenna index, d indicates the distance between adjacent transmitting antennas, and λ refers to the wavelength. Similarly, the received signal at Eve is cast as y_e =(√(g_ae)h^H_ae+√(g_aie)h^H_ieΨH_ai)s+ √(g_ie)h^H_ieΨn_r+n_e =√(β l P) (√(g_ae)h^H_ae+√(g_aie)h^H_ieΨH_ai)vx+    √((1-β) l P)(√(g_ae)h^H_ae+√(g_aie)h^H_ieΨH_ai) T_ANz+    √(g_ie)h^H_ieΨn_r+n_e, where g_ae and g_ie stand for the path loss parameters of Alice-to-Eve and IRS-to-Eve channels, g_aie=g_aig_ie means the equivalent path loss parameter of Alice-to-IRS and IRS-to-Eve channels, n_e ∼𝒞𝒩 (0, σ^2_e) represents the AWGN at Eve, h^H_ae∈ℂ^1× N and h^H_ie∈ℂ^1× M refer to the Alice-to-Eve and IRS-to-Eve channels, respectively. It is assumed that AN is only transmitted to Eve for jamming and not to Bob, based on the criterion of null-space projection, T_AN should meet H_aiT_AN=0_M× N, h^H_abT_AN=0_1× N. Let us define a equivalent virtual channel matrix of confidential message as follows H_CM=[ [ H_ai; h^H_ab ]]_(M+1)× N. Then, T_AN can be designed as T_AN=I_N-H_CM^H[H_CMH_CM^H]^†H_CM. At this point, (<ref>) and (<ref>) can be rewritten as y_b =√(β l P)(√(g_ab)h^H_ab+√(g_aib)h^H_ibΨH_ai)vx+ √(g_ib)h^H_ibΨn_r    +n_b and y_e =√(β l P)(√(g_ae)h^H_ae+ √(g_aie)h^H_ieΨH_ai)vx+    √((1-β) l P)√(g_ae)h^H_aeT_ANz+ √(g_ie)h^H_ieΨn_r+n_e, respectively. Based on (<ref>) and (<ref>), the achievable rates at Bob and Eve are respectively given by R_b=log_2(1+β l P|(√(g_ab)h^H_ab+ √(g_aib)h^H_ibΨH_ai)v|^2/σ^2_r√(g_ib)h^H_ibΨ^2+σ^2_b) and R_e =log_2(1+ β l P|(√(g_ae)h^H_ae+ √(g_aie)h^H_ieΨH_ai)v|^2/(1-β) l P √(g_ae)h^H_aeT_AN^2+ σ^2_r√(g_ie)h^H_ieΨ^2+σ^2_e). Moreover, the transmitted power at active IRS can be formulated as follows P_r=Tr(Ψ(g_aiβ l PH_aivv^HH_ai^H+σ^2_rI_M)Ψ^H). The SR of active IRS-assisted secure DM network is expressed as R_s=max{0, R_b-R_e}. In this paper, we maximize the SR by jointly deriving the PA factors β and l, beamforming vector v, and active IRS phase shift matrix Ψ. The overall optimization problem is formulated as follows max_β, l, v, Ψ  R_s    s.t.    v^Hv=1,           |ψ(m)|≤ψ^max,           P_r≤ (1-l)P,           0<β≤ 1,           0<l< 1, where ψ^max means the amplification gain threshold of the active IRS elements, and (1-l)P is the maximum transmit power of IRS. It is obvious that this problem is non-convex and the optimization variables are coupled with each other, which makes it a challenge to address it directly in general. Hence, the alternating iteration algorithm is taken into account for solving this optimization problem in what follows. § PROPOSED MAX-SR-FP SCHEME In this section, we propose an alternating optimization algorithm, named Max-SR-FP, to address the coupling variables PA factors β and l, beamforming vector v, and active IRS phase shift matrix Ψ in problem (<ref>). Below, aimed at maximizing SR, we decompose the problem (<ref>) into four subproblems, and alternately update β, l, v, and Ψ while fixing the other variables. 
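Before detailing the four subproblems, the following NumPy sketch shows how the secrecy rate R_s and the quantities entering the IRS power constraint can be evaluated for a candidate tuple (β, l, v, Ψ) in the LoS model above. The system sizes, total power, and Alice-side angles follow the simulation setup, while the IRS-side angles, the IRS-to-user distances, the noise powers, the beamforming vector, and the IRS phases are assumed illustrative values and are not optimized; only the steering vector, the null-space AN projector T_AN, and the rate expressions follow the formulas given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def steer(theta, n, d_over_lam=0.5):
    # Normalized ULA steering vector h(theta), Phi_theta(k) = -(k-(n+1)/2) d cos(theta)/lambda.
    k = np.arange(1, n + 1)
    return np.exp(-2j * np.pi * (k - (n + 1) / 2) * d_over_lam * np.cos(theta)) / np.sqrt(n)

N, M = 8, 32
P = 10 ** (35 / 10) * 1e-3                                  # 35 dBm in watts
theta_ai, theta_ab, theta_ae = 11 * np.pi / 36, np.pi / 3, 19 * np.pi / 36
theta_ia, theta_ib, theta_ie = np.pi / 4, np.pi / 5, np.pi / 7        # assumed IRS-side angles
d_ai, d_ab, d_ae, d_ib, d_ie = 110.0, 126.0, 130.0, 30.0, 35.0        # d_ib, d_ie assumed
path_loss = lambda d: 1e-2 / d**2                           # (lambda/(4*pi))^2 = 1e-2
g_ai, g_ab, g_ae, g_ib, g_ie = map(path_loss, (d_ai, d_ab, d_ae, d_ib, d_ie))
sigma_b2 = sigma_e2 = 1e-9                                  # assumed noise powers
sigma_r2 = 2 * sigma_b2

# LoS channels and the cascaded matrix H_ai = h_ia h_ai^H.
h_ab, h_ae, h_ai = steer(theta_ab, N), steer(theta_ae, N), steer(theta_ai, N)
h_ia, h_ib, h_ie = steer(theta_ia, M), steer(theta_ib, M), steer(theta_ie, M)
H_ai = np.outer(h_ia, h_ai.conj())

# Null-space AN projector T_AN = I - H_CM^H (H_CM H_CM^H)^dagger H_CM.
H_CM = np.vstack([H_ai, h_ab.conj()[None, :]])
T_AN = np.eye(N) - H_CM.conj().T @ np.linalg.pinv(H_CM @ H_CM.conj().T) @ H_CM

def secrecy_rate(v, Psi, beta, l):
    hb = np.sqrt(g_ab) * h_ab.conj() + np.sqrt(g_ai * g_ib) * (h_ib.conj() @ Psi @ H_ai)
    he = np.sqrt(g_ae) * h_ae.conj() + np.sqrt(g_ai * g_ie) * (h_ie.conj() @ Psi @ H_ai)
    R_b = np.log2(1 + beta * l * P * abs(hb @ v)**2 /
                  (sigma_r2 * np.linalg.norm(np.sqrt(g_ib) * h_ib.conj() @ Psi)**2 + sigma_b2))
    an = (1 - beta) * l * P * np.linalg.norm(np.sqrt(g_ae) * h_ae.conj() @ T_AN)**2
    R_e = np.log2(1 + beta * l * P * abs(he @ v)**2 /
                  (an + sigma_r2 * np.linalg.norm(np.sqrt(g_ie) * h_ie.conj() @ Psi)**2 + sigma_e2))
    return max(0.0, R_b - R_e)

# One non-optimized evaluation: MRT-like v, random IRS phases, amplitude set by the power budget.
beta, l = 0.8, 0.5
v = h_ab / np.linalg.norm(h_ab)
phases = np.exp(1j * rng.uniform(0.0, 2 * np.pi, M))
denom = np.real(np.trace(g_ai * beta * l * P * H_ai @ np.outer(v, v.conj()) @ H_ai.conj().T
                         + sigma_r2 * np.eye(M)))
Psi = np.sqrt((1 - l) * P / denom) * np.diag(phases)
print("R_s =", secrecy_rate(v, Psi, beta, l))
```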
§.§ Optimization of the PA factor β In this subsection, the beamforming vector v and IRS phase shift matrix Ψ are given for the sake of simplicity, we re-arrange the IRS power constraint (<ref>) as β l Tr(Ψ(g_ai PH_aivv^HH_ai^H)Ψ^H)+ Tr(σ^2_rΨΨ^H)≤ (1-l)P. For the sake of simplicity, let us define A_b=P|(√(g_ab)h^H_ab+ √(g_aib)h^H_ibΨH_ai)v|^2, A_e=P|(√(g_ae)h^H_ae+ √(g_aie)h^H_ieΨH_ai)v|^2, B_b=σ^2_r√(g_ib)h^H_ibΨ^2+σ^2_b, B_e=σ^2_r√(g_ie)h^H_ieΨ^2+σ^2_e, C_e=P √(g_ae)h^H_aeT_AN^2. Then, (<ref>) and (<ref>) can be degenerated to R_b=log_2(β l A_b+B_b/B_b) and R_e=log_2(β l A_e+(1-β) l C_e+B_e/(1-β) l C_e+B_e), respectively. Correspondingly, the objective function of the optimization problem (<ref>) can be simplified as R_s=R_b-R_e =log_2((β l A_b+B_b)[(1-β) l C_e+B_e]/β l A_e+(1-β) l C_e+B_e)-log_2B_b =log_2β(1-β)l^2A_bC_e+β l A_bB_e+(1-β)l B_bC_e+B_bB_e/β l A_e+(1-β) l C_e+B_e   -log_2B_b. In what follows, we handle the optimization of the PA parameters β and l successively. Given l, in accordance with (<ref>) and (<ref>), the optimization problem with respect to β can be simplified as follows max_β  1/β (l A_e- l C_e)+l C_e+B_e(-β^2l^2A_bC_e+     β(l^2A_bC_e+l A_bB_e-l B_bC_e)+l B_bC_e+B_bB_e)   s.t.   (<ref>), 0<β≤ 1, which can be re-arrange as max_β  -β^2A_1+β B_1+C_1/β D_1+E_1   s.t.   β F_1≤ G_1, 0<β≤ 1, where A_1=l^2A_bC_e, B_1=l^2A_bC_e+l A_bB_e-l B_bC_e, C_1=l B_bC_e+B_bB_e, D_1=l A_e- l C_e, E_1=l C_e+B_e, F_1=l Tr(Ψ(g_ai PH_aivv^HH_ai^H)Ψ^H), G_1= (1-l)P-Tr(σ^2_rΨΨ^H). It can be found that this problem is non-convex. Notice that this is a FP problem, and the denominator of (<ref>) is β D_1+E_1=β l A_e+(1-β)lC_e+B_e>0. To transform (<ref>) into a convex optimization problem, based on the Dinkelbach's transform in <cit.>, we introduce a auxiliary parameter τ_1 and recast the problem (<ref>) as follows max_β, τ_1  -β^2A_1+β B_1+C_1-τ_1(β D_1+E_1)   s.t.   β F_1≤ G_1, 0<β≤ 1. The optimal solution to this problem is the root of -β^2A_1+β B_1+C_1-τ_1(β D_1+E_1)=0. At this point, the optimization problem (<ref>) is convex, and we can address it by CVX directly <cit.>. §.§ Optimization of the PA factor l Fixed v and Ψ, given that β has been found, we transfer the focus to solving for l. In accordance with (<ref>) and (<ref>), by neglecting the constant terms, the optimization problem with respect to l can be simplified as follows max_l  l^2β(1-β)A_bC_e+l(β A_bB_e+(1-β)B_bC_e)+B_bB_e/l(β A_e+(1-β)C_e)+B_e   s.t.   (<ref>), 0<l< 1, which yields max_l  l^2A_2+lB_2+C_2/lD_2+E_2   s.t.   l F_2≤ G_2, 0<l< 1, where A_2=β(1-β)A_bC_e, B_2=β A_bB_e+(1-β)B_bC_e, C_2=B_bB_e, D_2=β A_e+(1-β)C_e, E_2=B_e, F_2=βTr(Ψ(g_ai PH_aivv^HH_ai^H)Ψ^H)+P, G_2=P-Tr(σ^2_rΨΨ^H). It is noticed that lD_2+E_2>0, and this is a non-convex fractional optimization problem, in accordance with the FP method, we introduce a auxiliary parameter τ_2 and recast the problem (<ref>) as max_l, τ_2  l^2A_2+lB_2+C_2-τ_2(lD_2+E_2)   s.t.   l F_2≤ G_2, 0<l< 1, The optimal solution to this problem is the root of l^2A_2+lB_2+C_2-τ_2(lD_2+E_2)=0. However, the problem (<ref>) is still non-convex and requires further transformation. In accordance with the first-order Taylor approximation of l^2A at feasible point l̅, we have l^2A_2≥ 2l̅A_2 l-l̅^2A_2. Then, (<ref>) can be converted to max_l, τ_2  2l̅A_2 l-l̅^2A_2+lB_2+C_2-τ_2(lD_2+E_2)   s.t.   l F_2≤ G_2, 0<l< 1, which is a convex optimization problem and can be addressed directly by the convex optimizing toolbox. 
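Both scalar subproblems above reduce to maximizing a single ratio over an interval. As an illustration, the Dinkelbach update for the β-subproblem can be carried out with each inner maximization solved in closed form (the clipped vertex of a concave parabola) instead of the CVX call used in the text; the sketch below assumes A_1 ≥ 0 and a positive denominator on the feasible set, as guaranteed above.

```python
import numpy as np

def dinkelbach_beta(A1, B1, C1, D1, E1, F1, G1, tol=1e-9, max_iter=50):
    """Dinkelbach iteration for  max_{0 < b <= beta_max} (-A1*b^2 + B1*b + C1) / (D1*b + E1),
    with beta_max = min(G1/F1, 1) coming from the IRS power constraint.  Assumes A1 >= 0 and a
    positive denominator on the feasible set; each inner maximization is the clipped vertex of a
    concave parabola rather than a CVX call."""
    beta_max = min(G1 / F1, 1.0) if F1 > 0 else 1.0
    f = lambda b: (-A1 * b**2 + B1 * b + C1) / (D1 * b + E1)
    beta = beta_max                       # feasible starting point
    tau = f(beta)
    for _ in range(max_iter):
        if A1 > 0:                        # maximize -A1*b^2 + (B1 - tau*D1)*b + (C1 - tau*E1)
            beta = float(np.clip((B1 - tau * D1) / (2 * A1), 1e-12, beta_max))
        else:                             # linear in beta: the maximum sits at an endpoint
            beta = beta_max if (B1 - tau * D1) >= 0 else 1e-12
        gap = -A1 * beta**2 + B1 * beta + C1 - tau * (D1 * beta + E1)
        tau = f(beta)                     # updated fractional objective
        if abs(gap) < tol:                # Dinkelbach stopping rule
            break
    return beta, tau

# Example with made-up nonnegative coefficients:
print(dinkelbach_beta(A1=2.0, B1=5.0, C1=1.0, D1=0.5, E1=1.0, F1=3.0, G1=1.5))
```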
§.§ Optimization of the beamforming vector v Given β, l, and Ψ, we reformulate the IRS power constraint (<ref>) as follows P_r=v^H(g_aiβ l PH^H_aiΨ^HΨH_ai)v+ Tr(σ^2_rΨΨ^H)≤(1-l)P. With ignoring the constant term, (<ref>) can be re-arranged as the optimization problem with respect to v as follows max_v  v^HAv/v^HBv   s.t.   v^Hv=1, (<ref>), where A =β l P(√(g_ab)h^H_ab+ √(g_aib)h^H_ibΨH_ai)^H(√(g_ab)h^H_ab+    √(g_aib)h^H_ibΨH_ai) +(σ^2_r√(g_ib)h^H_ibΨ^2+σ^2_b)I_N, and B =β l P(√(g_ae)h^H_ae+ √(g_aie)h^H_ieΨH_ai)^H(√(g_ae)h^H_ae+    √(g_aie)h^H_ieΨH_ai) +((1-β) l P √(g_ae)h^H_aeT_AN^2+    σ^2_r√(g_ie)h^H_ieΨ^2+σ^2_e)I_N. In accordance with the first order Taylor approximation, we have |y|^2/z≥ -y̅^*y̅/z̅^2z+2{y̅^*y}/z̅. Then, the problem (<ref>) can be recast as max_v  -v̅^HAv̅/(v̅^HBv̅)^2v^HBv+2{v̅^HAv}/v̅^HBv̅   s.t.   v^Hv=1, (<ref>), where v̅ stands for the given vector. This is a convex optimization problem that can be tackled directly with CVX. §.§ Optimization of the IRS phase shift matrix Ψ In this subsection, we turn our target to optimize Ψ with given v, β, and l. For the sake of derivation, let us define ψ=[ [ ψ; 1 ]]_(M+1)× 1, h_j=[ [ √(g_aij)diag{h^H_ij}H_aiv; √(g_aj)h^H_ajv ]]_(M+1)× 1, j=b, e, H_j=[ [ √(g_ij)diag{h^H_ij}; 0^H ]]_(M+1)× M, j=b, e. Based on the fact that diag{p}q=diag{q}p, the power constraint (<ref>) can be re-arranged as follows P_r =Tr(Ψ(g_aiβ l PH_aivv^HH_ai^H+ σ^2_rI_M)Ψ^H) =ψ^T(g_aiβ l Pdiag{v^HH_ai^H}diag{H_aiv} +σ^2_rI_M)ψ^* =ψ^Tblkdiag{g_aiβ l Pdiag{v^HH_ai^H}diag{H_aiv}+     σ^2_rI_M, 0}ψ^* ≤ (1-l)P. In addition, the achievable rates (<ref>) and (<ref>) can be rewritten as R_b=log_2(1+β l P|ψ^Hh_b|^2/σ^2_rψ^HH_b^2+σ^2_b) and R_e = log_2(1+ β l P|ψ^Hh_e|^2/σ^2_rψ^HH_e^2+(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e) =log_2(1+β l P|ψ^Hh_e|^2+ σ^2_rψ^HH_e^2/(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e)-    log_2(1+σ^2_rψ^HH_e^2/(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e), respectively. At this point, the optimization problem with respect to Ψ is given by max_ψ  log_2(1+β l P|ψ^Hh_b|^2/σ^2_rψ^HH_b^2+σ^2_b)+         log_2(1+σ^2_rψ^HH_e^2/(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e)-         log_2(1+β l P|ψ^Hh_e|^2+ σ^2_rψ^HH_e^2/(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e)   s.t.   |ψ(m)|≤ψ^max,  ψ(m+1)=1,  (<ref>). This problem is non-convex and further transformation is required. Based on the result in <cit.>, for fixed points e̅_1, e̅_2, and e̅_3, the inequalities In(1+|e_1|^2/e_2)≥In(1+|e̅_1|^2/e̅_2)-|e̅_1|^2/e̅_2+ 2{e̅_1e_1}/e̅_2-                      |e̅_1|^2/e̅_2(e̅_2+|e̅_1|^2)(e_2+|e_1|^2) and -In(1+e_3)≥ -In(1+e̅_3)-1+e_3/1+e̅_3+1 are valid. Therefore, by omitting the constant term, the optimization problem (<ref>) can be degenerated to max_ψ  2{a̅a^H}/b̅-|a̅|^2(b+|a|^2)/b̅(b̅+|a̅|^2)+         2{c̅^Hc}/d̅-|c̅|^2(d+|c|^2)/d̅(d̅+|c̅|^2)-1+e/1+e̅   s.t.   |ψ(m)|≤ψ^max,  ψ(m+1)=1,   (<ref>), where a=√(β l P)ψ^Hh_b, b=σ^2_rψ^HH_b^2+σ^2_b, c=(√(σ^2_r)ψ^HH_e)^H, d=(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e, e=β l P|ψ^Hh_e|^2+ σ^2_rψ^HH_e^2/d, a̅, b̅, c̅, d̅, and e̅ mean the solutions obtained at the previous iteration. Then, the optimization problem (<ref>) degenerate towards the following problem min_ψ  ψ^HWψ-2{ψ^Hu},   s.t.   |ψ(m)|≤ψ^max,  ψ(m+1)=1,   (<ref>), where W=|a̅|^2/b̅(b̅+|a̅|^2) (β l Ph_bh^H_b+σ^2_rH_bH^H_b)+ |c̅|^2/d̅(d̅+|c̅|^2)σ^2_rH_eH^H_e+ 1/1+e̅β l Ph_eh^H_e+ σ^2_rH_eH^H_e/d, u=1/b̅β l Ph_bh^H_bψ_t+ 1/d̅σ^2_rH_eH^H_eψ_t, and ψ_t stands for the solution obtained at the previous iteration. It is noted that the problem (<ref>) is convex, which can be derived directly with CVX. 
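The surrogate problem (<ref>) for ψ is solved with CVX in the text. Purely to expose its structure, the rough sketch below takes the unconstrained stationary point W^{-1}u of the surrogate and pushes it back into the feasible set by fixing the reference entry, capping the amplitudes, and rescaling to the IRS power budget; this projection heuristic is not equivalent to the convex solve and is shown only for illustration (q_diag denotes the diagonal of the block-diagonal matrix in the power constraint, whose last entry is zero).

```python
import numpy as np

def surrogate_psi_step(W, u, psi_max, q_diag, power_budget, ridge=1e-12):
    """Crude stand-in for  min_psi  psi^H W psi - 2 Re{psi^H u}  under the amplitude,
    reference-entry and IRS power constraints: take the unconstrained stationary point
    W^{-1} u and push it back into the feasible set.  This projection heuristic only
    exposes the structure of the surrogate; it is NOT equivalent to the CVX solve."""
    n = W.shape[0]
    psi = np.linalg.solve(W + ridge * np.eye(n), u)          # stationary point of the surrogate
    psi[-1] = 1.0                                            # reference entry psi(M+1) = 1
    mag = np.abs(psi[:-1])
    psi[:-1] *= np.minimum(1.0, psi_max / np.maximum(mag, 1e-16))   # amplitude cap
    p_r = np.sum(q_diag * np.abs(psi)**2)                    # psi^T Q psi^*, Q diagonal, last entry 0
    if p_r > power_budget:                                   # shrink the reflecting part if needed
        psi[:-1] *= np.sqrt(power_budget / p_r)
    return psi
```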
§.§ Overall scheme and complexity analysis Up to now, we have completed the derivation of the PA factors β and l, beamforming vector v, and IRS phase shift matrix Ψ. To make the process of this scheme clearer, we summarize the entire proposed Max-SR-FP algorithm in Algorithm 1 below. The objective value sequence {R_s(β^(k), l^(k), v^(k), Ψ^(k))} obtained in each iteration of the alternate optimization method is non-decreasing. Specifically, it follows R_s(β^(k), l^(k), v^(k), Ψ^(k)) (a)≤R_s(β^(k+1), l^(k), v^(k), Ψ^(k)) (b)≤R_s(β^(k+1), l^(k+1), v^(k), Ψ^(k)) (c)≤R_s(β^(k+1), l^(k+1), v^(k+1), Ψ^(k)) (d)≤R_s(β^(k+1), l^(k+1), v^(k+1), Ψ^(k+1)), where (a), (b), (c), and (d) are due to the update in (<ref>), (<ref>), (<ref>), and (<ref>), respectively. Moreover, R_s(β^(k), l^(k), v^(k), Ψ^(k)) has a finite upper bound since the limited power constraint. Therefore, the proposed Max-SR-FP algorithm is convergent. The computational complexity of the overall Max-SR-FP algorithm is 𝒪{L_FP[24√(2)M^2In(1/δ)+L_v(N^3+NM^2)+L_Ψ(2√(2)(M+1)^3+N(M+1)^2)]} float-point operations (FLOPs), where L_FP refers to the maximum number of alternating iterations, δ stands for the given accuracy tolerance of the FP, L_v and L_Ψ mean the iterative numbers of the subproblems (<ref>) and (<ref>), respectively. § PROPOSED MAX-SR-DO SCHEME In the previous section, the Max-SR-FP scheme has been proposed to tackle the problem (<ref>), which derives the optimization variables via the FP and SCA approaches. However, given the fact that the complexity of the Max-SR-FP scheme is high, a low-complexity power allocation scheme, called Max-SR-DO, is proposed in this section, which optimizes β, l, v, and Ψ jointly. §.§ Optimization of the PA factor β Given l, v, and Ψ, when F_1=0, β F_1=0≤ G_1 holds and this constraint in problem (<ref>) can be omitted. When F_1≠0, by re-arranging (<ref>), we can obtain the optimization problem with respect to β as follows min_β  f_1(β)=β^2A_1-β B_1-C_1/β D_1+E_1   s.t.   0<β≤β^max, where β^maxΔ=min{G_1/F_1,1 }. Given that the denominator β D_1+E_1>0, we can obtain that the objective function of problem (<ref>) is continuous and differentiable in the interval (0, β^max]. Then, we take its partial derivative and make it equal to 0 yields ∂ f_1(β)/∂β=β^2A_1D_1+2β A_1E_1-B_1E_1+C_1D_1/(β D_1+E_1)^2=0, which can degenerate to β^2A_1D_1+2β A_1E_1-B_1E_1+C_1D_1=0. At this point, we divide the discussion into two situations in the following: §.§.§ When A_1D_1≠ 0 the equation (<ref>) is a quadratic. Let us define Δ_β=(2A_1E_1)^2-4A_1D_1(-B_1E_1+C_1D_1). if Δ_β≥ 0, based on the formula for the roots of a quadratic function, we can get its roots as β_1=-2A_1E_1+√(Δ_β)/2A_1D_1,  β_2=-2A_1E_1-√(Δ_β)/2A_1D_1. §.§.§ When A_1D_1= 0 the equation (<ref>) can be degraded to 2β A_1E_1-B_1E_1+C_1D_1=0, which yields β_3=B_1E_1-C_1D_1/2A_1E_1. The detailed procedures for deriving the PA factor β is shown in Algorithm 2. §.§ Optimization of the PA factor l Given β, v, and Ψ, two scenarios are taken into consideration. §.§.§ When β=1 at this point, we have A_2=β(1-β)A_bC_e=0. Then, the problem (<ref>) can be reduced to min_l  -lB_2+C_2/lD_2+E_2   s.t.   0<l≤ l^max. The objective function can be transformed into -lB_2+C_2/lD_2+E_2 =-lA_b+B_b/lA_e+B_eB_e =-(1+A_eB_b-A_bB_e/lA_bA_e+A_bB_e)A_bB_e/A_e. At this point, this objective function is monotonic, and the optimal l is obtained easily, i.e., l=0. 
§.§.§ When β≠1 in accordance with (<ref>) and F_2>0, we can obtain the optimization problem with respect to l as follows min_l  f_2(l)=-l^2A_2+lB_2+C_2/lD_2+E_2   s.t.   0<l≤ l^max, where l^maxΔ=min{G_2/F_2, 1}. Based on the fact that D_2=β A_e+(1-β)C_e≥ 0 and E_2=B_e> 0, then, the denominator lD_2+E_2≠ 0. Hence, the objective function is continuous and differentiable in the interval (0, l^max]. At this point, we take its partial derivative and set it equal to 0 yields ∂ f_2(l)/∂ l=-l^2A_2D_2+2l A_2E_2+B_2E_2-C_2D_2/(l D_2+E_2)^2=0, which yields l^2A_2D_2+2l A_2E_2+B_2E_2-C_2D_2=0. Since A_2D_2≠ 0, the function (<ref>) is a quadratic. Defining that Δ_l=(2A_2E_2)^2-4A_2D_2(B_2E_2-C_2D_2), when Δ_l≥ 0, we can obtain the candidates for the PA factors as follows l_1=-2A_2E_2+√(Δ_l)/2A_2D_2,  l_2=-2A_2E_2-√(Δ_l)/2A_2D_2. The detailed procedure for solving the PA factor l is shown in Algorithm 3. §.§ Optimization of the beamforming vector v and IRS phase shift matrix Ψ Given β and l, Section <ref> has completed the derivation of the beamforming vector v, we shall not dwell on it here for brevity. In this subsection, we turn the focus of the design to the IRS phase shift matrix Ψ. Given that Ψ consists of amplitude and phase, we will derive Ψ by solving for them separately in the following. Firstly, the derivation of the magnitude is taken into account. For the sake of derivation, we assume that |ψ(m)|≤ψ^max in (<ref>) always holds and the amplitudes of the IRS phase shift elements are the same, noted as |ψ(m)|=α_i=α, and Ψ= αdiag{e^jϕ_1, ⋯, e^jϕ_m, ⋯, e^jϕ_M}. Let us define ϕ=[e^jϕ_1, ⋯, e^jϕ_m, ⋯, e^jϕ_M]^H and Φ=diag{ϕ}. Based on the IRS power constraint (<ref>) and the fact that allocating all power for IRS is optimal, i.e., Tr(αΦ(g_aiβ l PH_aivv^HH_ai^H+ σ^2_rI_M)αΦ^H)= (1-l)P, which yields α =√((1-l)P/Tr(Φ(g_aiβ l PH_aivv^HH_ai^H+σ^2_rI_M)Φ^H)) =√((1-l)P/Tr(g_aiβ l PH_aivv^HH_ai^H+σ^2_rI_M)). In the following, we focus on solving the phase ψ. Let us define ψ=[ αϕ; 1]∈ℂ^(M+1)× 1, in accordance with (<ref>) and (<ref>), the objective function R_s can be recast as R_s=R_b-R_e =log_2(1+β l P|ψ^Hh_b|^2/σ^2_rψ^HH_b^2+σ^2_b)- log_2(1+β l P|ψ^Hh_e|^2/σ^2_rψ^HH_e^2+(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e) =log_2(ψ^HQ_1ψ/ψ^HQ_2ψ∙ψ^HQ_3ψ/ψ^HQ_4ψ), where Q_1=β l Ph_bh_b^H+σ^2_rH_bH_b^H+ σ^2_b/α^2M+1I_M+1, Q_2=σ^2_rH_bH_b^H+σ^2_b/α^2M+1I_M+1, Q_3=σ^2_rH_eH^H_e+(1-β) l P √(g_ae)h^H_aeT_AN^2+σ^2_e/α^2M+1I_M+1, Q_4=β l Ph_eh^H_e+Q_3. Due to the fact that the logarithmic function is monotonically increasing, the problem of maximizing the logarithm of a function can be equivalent to that of maximizing itself. Then, the optimization problem with respect to ψ can be degraded as max_ψ  ψ^HQ_1ψ/ψ^HQ_2ψ∙ψ^HQ_3ψ/ψ^HQ_4ψ   s.t.   ψ^Hψ=α^2M+1. At this point, this problem can be addressed with GPI method in <cit.>, and the IRS phase shift matrix Ψ can be obtained based on (<ref>) and (<ref>). §.§ Overall scheme and complexity analysis So far, we have completed the design of the power allocation factors β and l, transmit beamforming vector v, and active IRS phase shift matrix Ψ. For clarity of this scheme procedure, we summarize the entire proposed Max-SR-DO algorithm in Algorithm 4. Similar to the convergence analysis of the Max-SR-FP algorithm proposed previously, the proposed Max-SR-DO algorithm is also convergent. 
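As a concrete illustration of the derivative-based updates collected in Algorithm 4, the closed-form selection of the PA factor l for the β≠1 case can be sketched as follows (the β-update of Algorithm 2 is analogous); since the exact candidate set used in Algorithm 3 is not spelled out here, the sketch simply compares the feasible stationary points with the endpoint l^max.

```python
import numpy as np

def max_sr_do_l(A2, B2, C2, D2, E2, F2, G2):
    """Closed-form selection of the PA factor l for the beta != 1 case: gather the
    stationary points l_1, l_2 of f_2(l) together with the endpoint l_max, keep the
    candidates lying in (0, l_max], and return the one minimizing f_2 (i.e. maximizing
    the fractional objective).  The exact candidate set of Algorithm 3 is assumed."""
    l_max = min(G2 / F2, 1.0)
    f2 = lambda l: -(A2 * l**2 + B2 * l + C2) / (D2 * l + E2)
    candidates = [l_max]
    disc = (2 * A2 * E2)**2 - 4 * A2 * D2 * (B2 * E2 - C2 * D2)
    if A2 * D2 != 0 and disc >= 0:
        candidates += [(-2 * A2 * E2 + s * np.sqrt(disc)) / (2 * A2 * D2) for s in (1.0, -1.0)]
    candidates = [l for l in candidates if 0 < l <= l_max]
    return min(candidates, key=f2)

# Example with made-up coefficients (no interior stationary point, so l = l_max is returned):
print(max_sr_do_l(A2=1.0, B2=2.0, C2=0.5, D2=1.5, E2=1.0, F2=4.0, G2=2.0))
```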
The computational complexity of the overall Max-SR-DO algorithm is given by 𝒪{L_DO[2M^2+L_v(N^3+NM^2)+((M+1)^3+N(M+1)^2)In(1/ζ)]} FLOPs, where L_DO and ζ stand for the maximum number of alternating iterations and the given accuracy tolerance of the GPI method, respectively. § SIMULATION RESULTS To verify the performance of the proposed two PA schemes, we perform a simulation comparison in this section. Unless otherwise noted, the parameters of the simulation are listed as follows: P=35dBm, N=8, M=32, d_ai=110m, d_ab=126m, d_ae=130m, θ_ai=11π/36, θ_ab=π/3, θ_ae=19π/36, σ^2_b=σ^2_e, σ^2_r=2σ^2_b, and ϵ=10^-3. The path loss model is modeled as g=λ^2/(4π d)^2<cit.>, where λ and d stand for the wavelength and reference distance, respectively. For the sake of convenience, let us set (λ/(4π))^2=10^-2. To evaluate the performance of the proposed schemes, several benchmark PA schemes that are compared are listed as follows. 1) l=0.5: Set the PA factor l to 0.5, we only optimize β, v, and Ψ alternatively. 2) β=0.5: Fixing the PA factor β to 0.5, we only have to alternately optimize l, v, and Ψ. 3) β=l=0.5: Both the PA factors β and l are fixed at 0.5, we only need to optimize v and Ψ alternately. 4) No-IRS: Set all the active IRS related channel vectors and matrix to zero vectors and zero matrix, i.e., h^H_ib=0, h^H_ie=0, and H_ai=0. Then, we only have to optimize β, l, and v alternatively. Firstly, we show the convergence of both the proposed alternating optimization schemes in Fig. <ref>, where the total power P=30, 35dBm. It can be seen from the figure that the SRs of both proposed schemes increase rapidly with the number of iterations and finally converge to a value after a finite number of iterations. In addition, the SRs of both proposed schemes increase with the increases of P, and the SR of the proposed Max-SR-FP scheme is slightly better than that of the proposed Max-SR-DO scheme, regardless of P=30dBm or P=35dBm. Fig. <ref> shows the computational complexity of the proposed two methods versus the number M of the IRS phase shift elements. This simulation plots present that with the increase of M, the computational complexity of the proposed Max-SR-FP and proposed Max-SR-DO schemes increases gradually. Compared with the Max-SR-FP scheme, the computational complexity of the Max-SR-DO scheme has decreased by about one order of magnitude. Fig. <ref> plots the curves of the SR versus the number M of active IRS phase shift elements of the proposed schemes and benchmark schemes. Observing this figure, it can be found that the SRs of both the proposed schemes and benchmark schemes gradually increase with the increases of M, they have a decreasing order in terms of SR performance: proposed Max-SR-FP, proposed Max-SR-DO, l=0.5, β=0.5, β=l=0.5, and no IRS. The SR gap between the two proposed schemes is trivial with the increases of M. When M=64, the SR performance enhancements achieved by both proposed PA schemes over the schemes of l=0.5, β=0.5, β=l=0.5, and no IRS are above 2.1%, 5.0%, 11.9%, and 64.9%, respectively. These further explain the motivation for researching PA algorithms. Fig. <ref> depicts the curves of the SR versus the signal-to-noise ratio (SNR) ranging from 0dB to 25dB, where the total power P=30dBm. From this figure we can learn that the SRs of two proposed schemes and four benchmark schemes increase with the increases of SNR. Compared to the benchmark scheme of no IRS, the SRs achieved by the both proposed schemes and the remaining benchmark schemes are remarkable. 
Moreover, the difference of the SRs among two proposed schemes, l=0.5 scheme, β=0.5 scheme, and β=l=0.5 scheme gradually decreases with the increases of the SNR. Fig. <ref> demonstrates the curves of the SR versus the noise ratio η, where η=σ^2_r/σ^2_b and σ^2_b remains constant, i.e., the increase of η is equivalent to that of the noise power at the active IRS. This figure shows that apart from the scheme of no IRS, the SRs of two proposed schemes and the remaining three benchmark schemes decrease gradually with the increases of η. This is due to the fact that the active IRS helps to transmit the confidential information to Bob and also reflects the noise generated at the IRS to him. When η increases, the noise received by Bob also increases, which leads to a decrease in the SR performance. To investigate the impact of the Bob's location on SR performance, with fixed positions of Alice, IRS, and Eve, we assume that Bob moves only on the straight line L_ab for simplicity of analysis. At this point, the Bob's location only depends on the distance d_ab of Alice-to-Bob link. The diagram of Bob's movement as shown in Fig. <ref>. Based on the model of Bob's position movement in Fig. <ref>, Fig. <ref> presents the curves of the SR versus the distance d_ab ranging from 60m to 130m, respectively. It reveals that as Bob's position moves away from Alice along L_ab and closer to the IRS, the SR of the no-IRS scheme gradually decreases with the increase of d_ab. For the proposed Max-SR-FP and Max-SR-DO schemes, first, when Bob is between Alice and IRS and away from them, their energy received from Alice gradually decreases and their SRs gradually decreases with increasing d_ab. Then, as Bob moves away from Alice and closer to the IRS, their energy received from the IRS gradually increases and their SRs gradually increase and reach a peak when Bob is at the bottom of the IRS. Finally, with Bob moving away from Alice and IRS, their energy from Alice and IRS gradually decreases and the SRs gradually decrease. § CONCLUSION In this paper, we introduced an active IRS into secure DM system and investigated the optimization of the PA factors, transmit beamforming vector, and phase shift matrix of IRS in the active IRS-assisted DM network. Firstly, to achieve the purpose of maximizing SR with AN only interfering with eavesdropper Eve, based on the criterion of null-space projection, the projection matrix of AN was designed on the null-space of the Alice-to-IRS and Alice-to-Bob channels and the corresponding closed-form expression was derived. Then, based on the criterion of maximum the SR, two alternating iteration PA schemes, namely Max-SR-FP and Max-SR-DO, were proposed. To address the formulated optimization problem, the proposed Max-SR-FP scheme employed the FP and SCA methods to find the optimal PA factors, beamforming vector, and IRS phase shift matrix. While the proposed Max-SR-DO scheme got the PA factors by taking derivative operation, and derived the phase shift matrix of active IRS by the criteria of EAR and GPI. The former offered higher SR performance, while the latter had lower computational complexity. The simulation results showed that the SR performance of the DM network was dramatically enhanced with the help of active IRS compared to the no IRS scheme. Moreover, the SRs achieved by both proposed PA schemes were better than that of the benchmark PA schemes. IEEEtran
http://arxiv.org/abs/2307.01533v2
20230704073648
Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations
[ "Anil Osman Tur", "Nicola Dall'Asen", "Cigdem Beyan", "Elisa Ricci" ]
cs.CV
[ "cs.CV" ]
U-VAD with Diffusion Models Conditioned on Compact Motion Rep. Tur and Dall'Asen et al. University of Trento, Trento, Italy Fondazione Bruno Kessler, Trento, Italy University of Pisa, Pisa, Italy Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations Anil Osman Tur1,2^* Nicola Dall'Asen1,3^* Cigdem Beyan1 Elisa Ricci1,2 August 1, 2023 ======================================================================================================== This paper aims to address the unsupervised video anomaly detection (VAD) problem, which involves classifying each frame in a video as normal or abnormal, without any access to labels. To accomplish this, the proposed method employs conditional diffusion models, where the input data is the spatiotemporal features extracted from a pre-trained network, and the condition is the features extracted from compact motion representations that summarize a given video segment in terms of its motion and appearance. Our method utilizes a data-driven threshold and considers a high reconstruction error as an indicator of anomalous events. This study is the first to utilize compact motion representations for VAD and the experiments conducted on two large-scale VAD benchmarks demonstrate that they supply relevant information to the diffusion model, and consequently improve VAD performances w.r.t the prior art. Importantly, our method exhibits better generalization performance across different datasets, notably outperforming both the state-of-the-art and baseline methods. The code of our method is available https://github.com/AnilOsmanTur/conditioned_video_anomaly_diffusionHERE^*These authors contributed equally.. § INTRODUCTION Detecting anomalous events in videos automatically is a crucial task of computer vision that has relevance to numerous applications, including but not limited to intelligent surveillance and activity recognition <cit.>. Video anomaly detection (VAD) can be particularly difficult because abnormal events in the real world are infrequent and can belong to an unbounded number of categories. As a result, traditional supervised methods might not be suitable for this task since balanced normal and abnormal samples are typically unavailable for training. Moreover, VAD models are challenged by the contextual and often ambiguous nature of abnormal events, despite their sparsity and diversity <cit.>. As a result, VAD is commonly carried out using a one-class learning approach, in which only normal data are provided during training <cit.>. However, given the dynamic nature of real-world applications and the wide range of normal classes, it is not practical to have access to every type of normal training data. Therefore, when using a one-class classifier, there is a high risk of misclassifying an unseen normal event as abnormal because its representation might be significantly different from the representations learned from normal training data <cit.>. To address the aforementioned challenge of data availability, some researchers have implemented weakly supervised VAD that do not require per-frame annotations but instead leverage video-level labels <cit.>. In weakly supervised VAD, unlike its one-class counterpart, a video is considered anomalous if even a single frame within it is labeled as anomalous. Conversely, a video is labeled as normal only when all frames within it are labeled as normal. 
However, such approaches cannot localize the abnormal portion of the video, which can be impractical when dealing with long videos. Also, it is important to note that labeling a video as normal still requires the inspection of all of its frames <cit.>. A more recent approach to VAD is unsupervised learning, in which unlabelled videos are used as input and the model learns to classify each frame as normal or anomalous, allowing the abnormal frames to be localized. Unlike a one-class classifier, unsupervised VAD does not make any assumptions about the distribution of the training data and does not use any labels during model training. However, it is undoubtedly more challenging to match the performance of other VAD approaches that use labeled training data <cit.>. This study focuses on performing unsupervised VAD in complex surveillance scenarios by relying solely on the reconstruction capability of probabilistic generative models, namely diffusion models <cit.>. The usage of generative models (e.g., autoencoders) is common for one-class VAD <cit.>. However, as shown in <cit.> for unsupervised VAD, the autoencoders might require an additional discriminator to be trained collaboratively to reach a desired level of performance. Instead, our study reveals that diffusion models constitute a more effective category of generative models for unsupervised VAD, displaying superior results when compared to autoencoders, and in some cases, even exceeding the performance of Collaborative Generative and Discriminative Models. Furthermore, we explore the application of compact motion representations, namely, star representation <cit.> and dynamic images <cit.>, within a conditional diffusion model. This study marks the first attempt at utilizing these motion representations to address the VAD task. The experimental evaluation conducted on two large-scale datasets indicates that using the aforementioned compact motion representations as a condition of diffusion models is beneficial for VAD. We also explore the transferability of unsupervised VAD methods by assessing their generalization performance when trained on one dataset and tested on another. When performing cross-dataset analysis, it becomes apparent that incorporating compact motion representations as the condition of diffusion models leads to vastly superior performance. This represents a crucial feature of the proposed method in comparison to both the state-of-the-art (SOTA) and baseline models, making it highly valuable for practical applications. The main contributions can be summarized as follows. (1) We propose an effective unsupervised VAD method, which uses compact motion representations as the condition of the diffusion models. We show that compact motion representations supply relevant information and further improve VAD performance. (2) Our method leads to enhanced generalization performance across datasets. Its transferability is notably better than that of the baseline methods and the SOTA. (3) We conduct a hyperparameter analysis for diffusion models, which yields insights into using them for VAD. § RELATED WORK Anomaly Detection. An anomaly refers to an entity that is rare and deviates significantly from normality. Automated anomaly detection models face challenges when detecting abnormal events from images or videos due to their sparsity, diversity, ambiguity, and contextual nature <cit.>.
Automated anomaly detection is a well-researched subject that encompasses various tasks, e.g., medical diagnosis, defect detection, animal behavior understanding, and fraud detection <cit.>. For a review of anomaly detection applications in different domains, interested readers can refer to the survey paper <cit.>. VAD, the task at hand, deals with complex surveillance scenarios. Zaheer et al. <cit.> categorized relevant methodologies into four groups: (a) fully supervised approaches requiring normal/abnormal annotations for each video frame in the training data, (b) one-class classification requiring only annotated training data for the normal class, (c) weakly supervised approaches requiring video-level normal/abnormal annotations, and (d) unsupervised methods that do not require any annotations. Labeling data is a costly and time-consuming task, and due to the rarity of abnormal events, it is impractical to gather all possible anomaly samples for fully-supervised learning. Consequently, the most common approach to tackling VAD is to train a one-class classifier that learns from the normal data <cit.>. Several of these approaches utilize hand-crafted features <cit.>, while others rely on deep features that are extracted using pre-trained models <cit.>. Generative models, e.g., autoencoders and GANs, have also been adapted for VAD <cit.>. One-class classifiers often cannot prevent anomalous test inputs from being reconstructed well, resulting in the misclassification of abnormal instances as normal. Moreover, an unseen normal instance could be misclassified as abnormal because its representation may differ significantly from the representations learned from normal training data. Evidently, data collection is still a problem for the one-class approach because it is not practical to have access to every variety of normal training data <cit.>. Therefore, some researchers <cit.> have turned to weakly supervised VAD, which does not rely on fine-grained per-frame annotations but instead uses video-level labels. Consequently, a video is labeled as anomalous even if one frame is anomalous, and normal if all frames are normal. This setting is not optimal because labeling a video as normal requires inspecting all frames, and it cannot localize the abnormal portion. On the other hand, VAD methods that use unlabelled training data are quite rare in the literature. It is important to note that several one-class classifiers <cit.> have been referred to as unsupervised, even though they use labeled normal data. Unsupervised VAD methods analyze unlabelled videos without prior knowledge of normal or abnormal events to classify each frame as normal or anomalous. The only published method addressing this definition is <cit.>, which presents Generative Cooperative Learning between a generator (an autoencoder) and a discriminator (a multilayer perceptron) with a negative learning paradigm. The autoencoder reconstructs the normal and abnormal instances while the discriminator estimates the probability of being abnormal. Through negative learning, the autoencoder is constrained not to learn the reconstruction of anomalies using the pseudo-labels produced by the discriminator. That approach <cit.> follows the idea that anomalies occur less frequently than normal events, such that the generator should be able to reconstruct the abundantly available normal representations. Besides, it promotes temporal consistency while extracting relevant spatiotemporal features.
Our method differs from <cit.> in that it relies solely on a generative architecture, specifically a conditional diffusion model. The baseline unconditional diffusion model, in some cases, surpasses the full model of <cit.>, while in all cases it achieves better performance than the autoencoder of <cit.>. On the other hand, the proposed method improves upon the unconditional diffusion model thanks to the use of compact motion representations, and importantly, it presents the best generalization results across datasets. Diffusion Models. They are a family of probabilistic generative models that progressively destroy data by injecting noise and then learn to reverse this process for sample generation <cit.>. Diffusion models have emerged as a powerful new family of deep generative models with SOTA performance in many applications, including image synthesis, video generation, and discriminative tasks like object detection and semantic segmentation <cit.>. Given that diffusion models have emerged as SOTA generative models for various tasks, we are motivated to explore their potential for VAD through our proposed method. Star Representation <cit.>. It aims to represent the temporal information existing in a video in such a way that the channels of a single output RGB image convey the summarized temporal information by associating the color channels with consecutive moments of the video clip. Such a representation is suitable to be the input of any CNN model; so far in the literature, it has been used for dynamic gesture recognition <cit.>, while this is the first time it is being used for VAD. Dynamic Image <cit.>. It refers to a representation of an input video sequence that summarizes the appearances of objects and their corresponding motions over time by encoding the temporal ordering of the pixels from frame to frame. This can be seen as an early fusion technique since the frames are combined into a single representation before any further processing. It has been used for action and gesture recognition <cit.> and visual activity modeling <cit.>; however, it has never been used for VAD. § PROPOSED METHOD We design a method that uses diffusion models to tackle unsupervised VAD, i.e., to classify each frame in a video as normal or abnormal without using the labels. To provide a frame-based prediction, we classify a video clip of N consecutive frames and then slide this window along the video. We build our model on top of diffusion models, in particular, k-diffusion <cit.>, which has shown better performance w.r.t. DDPM <cit.>. To overcome the heavy computational burden of dealing with video clips, we operate in the latent space of a pre-trained network that extracts clip-level features. We then leverage the generative capabilities of diffusion models to reconstruct noised clip features and, based on the reconstruction error, decide whether the clip is normal or abnormal with a data-driven threshold. While this formulation leads to SOTA performance, we further condition the diffusion process with compact motion information coming from the video clip (see Sec. <ref>) to better guide the reverse process and achieve better performance. An overview of our method is provided in Fig. <ref>. §.§ Diffusion Model Diffusion models apply a progressive addition of Gaussian noise ϵ_t of standard deviation σ_t to an input data point x_T sampled from a distribution p_data(x) for each timestep t ∈ [0, T].
The noised distribution p(x,σ) becomes isotropic Gaussian and allows efficient sampling of new data points x_0 ∼𝒩(0, σ^2_max𝐈). These data are gradually denoised with noise levels σ_0 = σ_max > σ_1 > … > σ_T-1 > σ_T = 0 into new samples. Diffusion models are trained by minimizing the expected L_2 error between the predicted and ground truth added noise <cit.>, i.e.: ℒ_simple = ‖ϵ_t - ϵ̂‖_2. In this work, we use the diffusion formulation of <cit.>, which allows the network to perform either ϵ or x_0 prediction, or something in between, depending on the noise scale σ_t, nullifying the error amplification that happens in DDPM <cit.>. The denoising network D_θ is formulated as follows: D_θ(x; σ_t) = c_skip(σ_t)  x + c_out(σ_t)  G_θ( c_in(σ_t)  x;  c_noise(σ_t) ), where G_θ becomes the effective network to train, c_skip modulates the skip connection, c_in(·) and c_out(·) scale input and output magnitudes, and c_noise(·) scales σ_t to become suitable as input for G_θ. Formally, given a video clip C of N frames, i.e. C ∈ℝ^N× 3× H× W, we first extract features from a pre-trained 3D-CNN ℱ to obtain a feature vector fea ∈ℝ^f, with f the latent dimension of the network. We then use this latent representation in the diffusion process to reconstruct it without using any label. We leverage the fact that denoising does not necessarily have to start from noise with variance σ^2_max, but can start at any arbitrary timestep t ∈ (0, T], as shown in <cit.>. We can therefore sample fea_t ∼𝒩(fea, σ^2_t) and run the diffusion reverse process on it to reconstruct fea_T. The choice of t allows balancing the amount of information destroyed in the forward process, and we exploit this fact to remove the frequency components associated with anomalies. We then measure the reconstruction goodness in terms of MSE, with a higher reconstruction error possibly indicating that the clip is anomalous. When deciding whether a video frame is anomalous or not, we adopt the data-driven thresholding mechanism of <cit.>. The decision for a single video frame is made by keeping the distribution of the reconstruction loss (MSE) of each clip over a batch. The feature vectors resulting in higher reconstruction error refer to anomalous clips and vice versa. This decision is made through the data-driven threshold L_th, defined as L_th = μ_p + k σ_p, where k is a constant, and μ_p and σ_p are the mean and standard deviation of the reconstruction error for each batch. §.§ Compact Motion Representations We further extend the described diffusion model to incorporate compact motion representations in the process to provide rich motion information. We compute this representation using two different approaches: Star representation <cit.> or Dynamic Image <cit.>. Visual examples of these two representations are presented in Fig. <ref> and a complete description is presented as follows. Star RGB Images. The objective of using the star representation is to depict the time-based data present in an input RGB video <cit.>. The star representation matrix M is computed as given in Eq. <ref>, where I_k(i,j) represents the RGB vector of the pixel at position (i,j) in the k-th frame and λ is the cosine similarity of the RGB vectors of consecutive frames. By using such a cosine similarity, the star representation also captures changes in hue and saturation. M(i,j) = ∑_k=2^N (1-λ)/2 · |∥ I_k(i,j)∥_2 - ∥ I_k-1(i,j)∥_2|, where N is the length of the video clip.
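Before describing how the three color channels are assembled from this map (next paragraph), the per-pixel accumulation of the equation above can be sketched in a few lines of NumPy. This is only an illustrative reading of the formula under the stated definitions of I_k and λ; the function name and array layout are our own choices and are not taken from the released code.

import numpy as np

def star_matrix(clip):
    # clip: float array of shape (N, H, W, 3) holding the RGB frames of one (sub-)clip
    eps = 1e-8
    M = np.zeros(clip.shape[1:3], dtype=np.float64)
    for k in range(1, clip.shape[0]):
        prev, curr = clip[k - 1], clip[k]
        # cosine similarity lambda between the RGB vectors of consecutive frames
        num = (prev * curr).sum(axis=-1)
        den = np.linalg.norm(prev, axis=-1) * np.linalg.norm(curr, axis=-1) + eps
        lam = num / den
        # accumulate the change in RGB magnitude weighted by (1 - lambda) / 2
        diff = np.abs(np.linalg.norm(curr, axis=-1) - np.linalg.norm(prev, axis=-1))
        M += 0.5 * (1.0 - lam) * diff
    return M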
To create an RGB image as the output, each video segment is divided equally into three sub-videos such that each sub-video is used for generating one of the RGB channels. Thus, the resulting image channels convey the summarized information of consecutive moments. Dynamic Image Computation. A dynamic image presents a summary of object appearances and their motions throughout an input video sequence by encoding the sequential order of pixels from one frame to another. Dynamic image computation uses the RGB images directly by multiplying the video frames by the coefficients α_k and summing them to generate the output image with the formula d^* = ∑_k=1^N α_k I_k ,  α_k = 2k - N - 1, where I_k is the k-th image of the video segment and N is the number of frames in the video segment. Conditioning on Compact Motion Representation. After extracting the compact motion representation of a clip C, we obtain the conditioning feature vector cond through a pre-trained 2D-CNN F_cond. We inject this both in the encoder and in the decoder part of our network G by summing it with the input features. To deal with the different dimensionality of the two blocks, we use two linear projections to obtain vectors of the same size as the inputs. § EXPERIMENTAL ANALYSIS AND RESULTS The evaluation metric employed in this study is the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC), which is determined using frame-level annotations of the test videos within the datasets, consistent with established VAD methodologies. In order to evaluate and compare the effectiveness of the proposed approach, the experiments were carried out on two mainstream large-scale unconstrained datasets: UCF-Crime <cit.> and ShanghaiTech <cit.>. The UCF-Crime dataset <cit.> was obtained from diverse CCTV cameras with varying fields of view. It consists of a total of 128 hours of videos, with annotations for 13 distinct anomalous events, e.g., road accidents, theft, and explosions. To ensure fair comparisons with the SOTA, we utilized the standardized training and testing splits of the dataset, which consist of 810 abnormal and 800 normal videos for training, and 130 abnormal and 150 normal videos for testing, without utilizing the labels. On the other hand, the ShanghaiTech dataset <cit.> was recorded using 13 distinct camera angles under challenging lighting conditions. For our study, we utilized the training split, which comprises 63 abnormal and 174 normal videos, as well as the testing split, consisting of 44 abnormal and 154 normal videos, in accordance with SOTA conventions. §.§ Implementation Details Architecture. In line with <cit.>, we use 16 non-overlapping frames to define a video clip, and we use a pre-trained 3D-ResNext101 or 3D-ResNet18 as the feature extractor F <cit.>. After computing the compact motion representation, we extract a single conditioning vector with F_cond, a pre-trained ResNet50 or ResNet18, due to their widespread use together with such motion representations <cit.>. We use an MLP with an encoder-decoder structure as the denoising network G; the encoder comprises three layers with sizes of {1024, 512, 256}, while the decoder has hidden dimensions of {256, 512, 1024}. The timestep information σ_t is transformed via Fourier embedding and integrated into the network by FiLM layers <cit.>, while the conditioning on the compact motion representation is applied after timestep integration by summation to the inputs.
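A schematic PyTorch sketch of this conditioned denoiser, together with the batch-level threshold L_th = μ_p + k σ_p defined earlier, is given below. The layer sizes follow the Architecture paragraph above, but the feature and condition dimensions, the activation, the exact placement of the FiLM modulation and of the two condition projections, and the value of k are illustrative assumptions; the actual model is implemented on top of the k-diffusion codebase and may differ in these details.

import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    # Encoder-decoder MLP with FiLM timestep integration and motion conditioning
    # summed to the encoder and decoder inputs through two linear projections.
    def __init__(self, feat_dim=2048, cond_dim=2048, time_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.SiLU(),
            nn.Linear(1024, 512), nn.SiLU(),
            nn.Linear(512, 256), nn.SiLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(256, 512), nn.SiLU(),
            nn.Linear(512, 1024), nn.SiLU(),
            nn.Linear(1024, feat_dim),
        )
        # FiLM: scale and shift of the input features from the noise-level embedding
        self.film = nn.Linear(time_dim, 2 * feat_dim)
        # two projections so the condition matches the encoder / decoder input sizes
        self.cond_enc = nn.Linear(cond_dim, feat_dim)
        self.cond_dec = nn.Linear(cond_dim, 256)

    def forward(self, x, t_emb, cond):
        scale, shift = self.film(t_emb).chunk(2, dim=-1)
        h = x * (1 + scale) + shift          # timestep integration
        h = h + self.cond_enc(cond)          # condition summed to the encoder input
        z = self.encoder(h)
        z = z + self.cond_dec(cond)          # condition summed to the decoder input
        return self.decoder(z)

def batch_threshold(errors, k=1.0):
    # data-driven decision: flag clips whose MSE exceeds mu_p + k * sigma_p
    mu, sigma = errors.mean(), errors.std()
    return errors > (mu + k * sigma)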
Training and sampling. The learning rate scheduler and EMA of the model are set to the default values of k-diffusion, which include an initial learning rate of 2×10^-4 and InverseLR scheduling. The weight decay is set at 1×10^-4. Training is conducted for 30 epochs with a batch size of 256, while testing is performed on 8192 samples as in previous literature <cit.>. Several hyperparameters affect the diffusion process in k-diffusion, and given the novelty of the task at hand, we do not rely on parameters from prior literature. We, therefore, conduct an extensive exploration of the effects of training and testing noise. Training noise is distributed according to a log-normal distribution with parameters (P_mean, P_std), while sampling noise is controlled by σ_min and σ_max, and below, we investigate their role. For the diffusion reverse process, we use the LMS sampler with the number of steps T set to 10. §.§ Results We first compare our method's results with those of the SOTA and baseline methods. Then, we report the results of the cross-dataset evaluation, where the training and validation sets are from a different domain than the test split. Finally, we analyze how the hyperparameters of the diffusion models affect VAD performance. Performance Comparisons. The performance of the proposed method, together with the results of the SOTA and baseline methods (i.e., the unconditional diffusion model), is given in Tables <ref> and <ref> for the ShanghaiTech <cit.> and UCF-Crime <cit.> datasets, respectively. These tables also include an ablation study such that the condition of the diffusion models is changed between star representation, dynamic images, and spatiotemporal features, in addition to changing the feature backbone between 3D-ResNext101 and 3D-ResNet18, and the motion representation backbone between ResNet50 and ResNet18. As seen in Table <ref>, the proposed method outperforms all others on the ShanghaiTech <cit.> dataset, achieving the best results by surpassing the SOTA autoencoder <cit.> by 14.45%, the SOTA collaborative generative and discriminative model <cit.> by 4.77%, and the SOTA <cit.> by 20.71%. The proposed method also improves upon the unconditional diffusion models (i.e., baselines) by 1.08%. It is worth noting that other conditional diffusion models, i.e., using spatiotemporal features as the condition and compact motion representation as input, are occasionally less effective than our method, with the proposed method surpassing them by 10.52%. The best performance is achieved by using a 3D-ResNet18 as the feature backbone, star representation as the condition, and ResNet50 as the corresponding backbone. On the other hand, for the UCF-Crime dataset <cit.> (Table <ref>), the proposed method achieves the second-highest score after the more complex model of <cit.>, which employs a generator, a discriminator and negative learning. Nonetheless, our method outperforms the SOTA autoencoder <cit.> by 10.53% and the SOTA <cit.> by 14.85%. It also demonstrates superior performance compared to the baseline and the other conditional diffusion models by 1.63% and 1.79%, respectively. Furthermore, the optimal performance of the proposed method for this dataset is achieved by utilizing 3D-ResNet18 as the feature backbone, star representation as the condition, and ResNet50 as the condition backbone. Cross-dataset Analysis. When performing this analysis, we take into consideration the results presented in Tables <ref> and <ref> such that we select the combinations of input feature and condition backbone that yield the best results.
Table <ref> shows that our methods achieve significantly better results in cross-dataset analysis, regardless of which compact motion representation is used as the condition, compared to all other baselines and SOTA methods. Notably, the performance of the proposed method is remarkable (8.6-17.06% better) in comparison to both the generative model and full model proposed by <cit.>. On the other hand, the baseline unconditional diffusion model that utilizes spatiotemporal features outperforms the baseline unconditional diffusion model that uses compact motion representations. The relative effectiveness of the proposed method is of significant practical importance, as in most cases, the deployment domain differs from the domain on which the model is trained. Hyperparameter Analysis. We study the effect of the training noise on the learning process, and we find that baseline diffusion and our method both achieve higher results with smaller values of noise, meaning a lower P_mean. Importantly, our method generally achieves better performance than the baseline, given the same parameters, for a wider choice range of training noise parameters, making it less sensitive to this choice. We explore the effect of P_mean∈ [-5, -0.5] and P_std∈ [0.5, 2.]. On the other hand, recalling that t closer to zero indicates a point closer to an isotropic Gaussian distribution, we explore the effect of different t as the starting point of the reverse process. While the baseline unconditional diffusion achieves its best performance with t=4 and t=6, we find that our method achieves better performance in high-noise areas (t=1, t=2), effectively allowing the removal of more information from the clip vector, and proving the effectiveness of conditioning on motion representation for the task at hand. § CONCLUSIONS We have presented a novel approach for unsupervised VAD, which can accurately identify and locate anomalous frames by utilizing only the reconstruction capabilities of diffusion models. Our conditional diffusion model uses features extracted from compact motion representations as the condition while it takes the spatiotemporal features extracted from pre-trained networks as the input. By doing so, we show the contribution of the compact motion representations, i.e., our method succeeded in improving the SOTA VAD results while also demonstrating remarkable transferability across domains. Note that the unsupervised nature of our approach allows for an anomaly detection system to begin identifying abnormalities based solely on observed data, without any human intervention. If no abnormal events have occurred, the system may mistakenly identify rare normal events as abnormal. However, it is expected that such anomaly systems operate for a longer period of time, thus, the likelihood of having no abnormal events decreases significantly. In the future, we aim to modify our method in a way that it can operate on edge devices with near real-time capabilities. § ACKNOWLEDGMENT The project is partially funded by the European Union (EU) under NextGenerationEU. We acknowledge the support of the MUR PNRR project FAIR - Future AI Research (PE00000013) funded by the NextGenerationEU. E.R. is partially supported by the PRECRISIS, funded by the EU Internal Security Fund (ISFP-2022-TFI-AG-PROTECT-02-101100539). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the EU or The European Research Executive Agency. 
Neither the EU nor the granting authority can be held responsible for them. The work was carried out in the Vision and Learning joint laboratory of FBK and UNITN.
http://arxiv.org/abs/2307.05516v1
20230707020846
Investigation of the $N\bar{D}$ system in quark delocalization color screening model
[ "Xuejie Liu", "Yue Tan", "Dianyong Chen", "Hongxia Huang", "Jialun Ping" ]
hep-ph
[ "hep-ph", "nucl-th" ]
^1School of Physics, Southeast University, Nanjing 210094, P. R. China ^2School of Mathematics and Physics, Yancheng Institute of Technology, Yancheng, 224051, P. R. China ^3Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, P. R. China ^4Department of Physics, Nanjing Normal University, Nanjing 210023, P. R. China In this work we systematically calculate the pentaquark systems with quark content qqqqc̅ for the total spin and parity quantum numbers J^P=1/2^-, J^P=3/2^- and J^P=5/2^-, in the I=0, I=1 and I=2 isospin channels. The effective potentials between baryon and meson clusters are given, and the possible bound states are investigated. In addition, the scattering processes of the open channels are studied to look for possible resonance states. Our estimations indicate that several possible bound states and narrow baryon-meson resonances emerge from the corresponding calculations. 13.75.Cs, 12.39.Pn, 12.39.Jh § INTRODUCTION Hadron physics has seen a renewed interest in multi-hadron systems. In the past years, many new hadronic states have been observed experimentally, and they carry exotic quantum numbers that cannot be reached by the standard quark model, in which mesons are qq̅ and baryons are qqq states. This significant progress in experiments has triggered plenty of theoretical interest and made the study of these exotic states an intriguing topic in hadronic physics <cit.>. Among these states, one can highlight the three new hidden-charm pentaquark candidates observed in 2019 by the LHCb collaboration <cit.> in the J/ψ p invariant mass spectrum of Λ_b^0⟶ J/ψ K^-p decays; they were named P_c(4312), P_c(4440) and P_c(4457), respectively. In fact, the hidden-charm pentaquarks can be traced back to 2015, when two exotic states, P_c(4380) and P_c(4450), were found in the same decay process by the same collaboration <cit.>. In principle, since the P_c states have been found in experiments, the P_cs states, as their strangeness partners, should also exist.
Indeed, the LHCb Collaboration recently released their results on the Ξ_b→ J/ψ K^-Λ decay, which indicate a new resonance structure named P_cs(4459). Besides the exotic states mentioned above, the existence of flavor-exotic states, in which quarks and antiquarks cannot annihilate through strong and electromagnetic interactions, has also been predicted. Therefore, searching for these flavor-exotic states has become increasingly important both theoretically and experimentally. The nature of these exotic signals has been discussed within various theoretical approaches. For instance, the three newly announced hidden-charm pentaquarks, P_c(4312), P_c(4440) and P_c(4457), are most often interpreted as molecular states of Σ_cD̅^* in effective field theories <cit.>, QCD sum rules <cit.>, phenomenological potential models <cit.>, heavy quark spin symmetry <cit.> and heavy hadron chiral perturbation theory <cit.>. Moreover, their photo-production <cit.> and decay properties <cit.> have also been discussed. Other types of pentaquarks are genuinely exotic states with no quark-antiquark annihilation, although the existence of several such states has not been experimentally confirmed. For example, the pentaquark state composed of uudus̅ triggered a lot of theoretical work on multi-quark systems. In Ref. <cit.>, the authors used the standard nonrelativistic quark model of Isgur-Karl to investigate the NK scattering problem, and the NK scattering phase shift showed no resonance in the energy region 0-500 MeV above the NK threshold. In Refs. <cit.>, the NK phase shifts were calculated within a constituent quark model by numerically solving the RGM equation. Wang et al. <cit.> studied NK elastic scattering in a quark potential model, and their results are consistent with the experimental data. In Ref. <cit.>, Barnes and Swanson used the quark-Born-diagram method to derive the NK scattering amplitudes and obtained reasonable results for the NK phase shifts, but they were limited to the S-wave. In Ref. <cit.>, the NK interaction was studied in the constituent quark model and the numerical results for different partial waves were in good agreement with the experimental data. Now that the uudus̅ pentaquark has been extended to the heavy-quark sector, there have been not only analogous discussions but also new approaches that are not accessible in the light-flavor sector. There, an interesting observation is that there is a sufficiently strong attraction between a D̅(D̅^*) meson and a nucleon N. As a matter of fact, many theoretical studies have been devoted to the ND̅ system. Since the hadron-nucleon interaction is the basic quantity for hadronic molecules and for exotic nuclei, the ND̅ interaction, which is subtle, has been investigated <cit.>. Bound states of a nucleon and an open heavy-flavor meson have been discussed with respect to heavy quark symmetry <cit.>, and the results indicated that the ND̅ system with I(J^P)=0(1/2^-) is a bound state. In Ref. <cit.>, the bound state found previously in the I(J^P)=0(1/2^-) channel survives when the short-range interaction is included, and a resonance state with I(J^P)=0(3/2^-) appears as a Feshbach resonance dominated by a heavy vector meson and a nucleon (ND̅^∗). In the heavy quark symmetry limit, the coupling to the ND̅^∗ channel brings additional attraction to the ND̅ interaction and generates an S-wave ND̅ bound state with a binding energy of about 1 MeV <cit.>.
However, some models suggested no significant attraction in the ND̅ interaction <cit.>. Hence, it is worthwhile to make a systematic study of the ND̅ system by using different methods, which will deepen our understanding of possible pentaquarks. In the heavy-flavor sector one has to admit that there is a complete lack of experimental information at low energies. Therefore the conclusions extracted from generalizations of models that have been successful for some pentaquarks may be helpful. In this paper, the quark delocalization color screening model is used, in which the predicted P_c states were consistent with the P_c states reported by the LHCb collaboration <cit.>. So within the quark delocalization color screening model formalism, we systematically study herein the possibility of having either bound or resonance states in the pentaquark sector with only one heavy antiquark, qqqqQ̅, with quantum numbers J^P=1/2^-, J^P=3/2^-, and J^P=5/2^-, and in the I=0, 1 and 2 isospin sectors. This five-body bound state problem is solved by means of the resonating group method (RGM). The plan of this paper is the following. In the next section the QDCSM is briefly presented. By using the QDCSM, the results for the effective potentials, the bound state calculation, and the scattering phase shifts of the open channels are presented and discussed in detail in Sec. <ref>. Section <ref> is devoted to the summary and concluding remarks of this study. § THE QUARK DELOCALIZATION COLOR SCREENING MODEL (QDCSM) In this work, the main purpose is to detect the presence of possible bound states or resonance states in the ND̅ system. We use the quark delocalization color screening model to calculate the spectra of the ND̅ system. Besides, we employ the resonating group method (RGM) to calculate the baryon-meson scattering phase shifts and to look for resonance states. The quark delocalization color screening model (QDCSM) is an extension of the naive quark cluster model <cit.> and was developed with the aim of addressing multiquark systems. The details of the QDCSM can be found in Refs. <cit.>. Here, the general form of the five-body Hamiltonian is given by H = ∑_i=1^5(m_i+p_i^2/2m_i)-T_CM+∑_j>i=1^5V(r_ij), where the center-of-mass kinetic energy, T_CM, is subtracted without losing generality since we mainly focus on the internal relative motions of the multiquark system. The interaction is a two-body potential which includes the color-confining potential, V_CON, the one-gluon exchange, V_OGE, and the Goldstone-boson exchange, V_χ: V(r_ij) = V_CON(r_ij)+V_OGE(r_ij)+V_χ(r_ij). Note herein that the potential could contain central, spin-spin, spin-orbit, and tensor contributions; in this work, only the first two are considered, given the goal of the present calculation and for clarity in our discussion. The potential V_OGE(r_ij) can be written as V_OGE(r_ij) = 1/4α_s λ^c_i ·λ^c_j [1/r_ij-π/2δ(r_ij)(1/m^2_i+1/m^2_j +4σ_i·σ_j/3m_im_j)], where m_i and σ are the quark mass and the Pauli matrices, respectively. λ^c is the SU(3) color matrix. The QCD-inspired effective scale-dependent strong coupling constant, α_s, offers a consistent description of mesons from the light to the heavy quark sector and can be written as α_s(μ)=α_0/ln[(μ^2+μ_0^2)/Λ_0^2]. Similarly, the confining interaction V_CON(r_ij) can be expressed as V_CON(r_ij) = -a_cλ^c_i·λ^c_j[f(r_ij)+V_0_ij], where f(r_ij) = r_ij^2 if quarks i and j are in the same cluster, and f(r_ij) = (1 - e^-μ_ij r_ij^2)/μ_ij if they are in different clusters.
The color screening parameter μ_ij is determined by fitting the deuteron properties and the NN and NY scattering phase shifts <cit.>, with μ_qq = 0.45, μ_qs = 0.19 and μ_ss = 0.08, satisfying the relation μ_qs^2=μ_qqμ_ss, where q represents the u or d quark. When extending to the heavy-quark case, we found that the dependence on the parameter μ_cc is not significant in the calculation of the P_c states <cit.> when it is varied from 10^-4 to 10^-2 fm^-2. The typical size of the multiquark system is a few femtometres; thus the value of μ_ij r^2 is rather small, and in this case the exponential function can be approximated as e^-μ_ijr_ij^2 = 1-μ_ijr_ij^2+𝒪(μ_ij^2 r_ij^4). Accordingly, the confinement potential between two clusters is approximated as V_CON(r_ij) = -a_cλ^c_i·λ^c_j ((1-e^-μ_ij r_ij^2)/μ_ij+ V_0_ij) ≈ -a_cλ^c_i·λ^c_j (r_ij^2+ V_0_ij), which is the same as the expression for two quarks in the same cluster. Thus, when the value of μ_ij is very small, the screened confinement returns to the quadratic form, which is why the results are insensitive to the value of μ_cc. In the present work, we take μ_cc=0.01. Then μ_sc and μ_uc are obtained by the relations μ_sc^2=μ_ssμ_cc and μ_uc^2=μ_uuμ_cc, respectively. The Goldstone-boson exchange interactions between light quarks appear because of the dynamical breaking of chiral symmetry. For the ND̅ system, the K exchange interactions do not appear because there is no s quark herein. Only the following π and η exchange terms act between the chiral quark-(anti)quark pairs: V_χ(r_ij) = v^π_ij(r_ij)∑_a=1^3λ_i^aλ_j^a+v^η_ij(r_ij) [(λ_i^8·λ_j^8)cosθ_P-(λ_i^0·λ_j^0) sinθ_P] with v^B_ij = g_ch^2/4πm_B^2/12m_im_jΛ_B^2/Λ_B^2-m_B^2 m_B {(σ_i·σ_j) [ Y(m_B r_ij)-Λ_B^3/m_B^3 Y(Λ_B r_ij)] }, B=π, η, where Y(x)=e^-x/x is the standard Yukawa function. The λ^a are the SU(3) flavor Gell-Mann matrices. The masses of the η and π mesons are taken from the experimental values <cit.>. Finally, the chiral coupling constant, g_ch, is determined from the πNN coupling constant through <cit.> g_ch^2/4π=(3/5)^2g_πNN^2/4πm_u,d^2/m_N^2, which assumes that flavor SU(3) is an exact symmetry, only broken by the different mass of the strange quark. The model parameters and the masses of the ground-state mesons are listed in Tables <ref> and <ref>, respectively. In the QDCSM, quark delocalization is realized by specifying the single-particle orbital wave functions as linear combinations of left and right Gaussians, the single-particle orbital wave functions used in the ordinary quark cluster model: ψ_α(s_i,ϵ) = (Φ_α(s_i) +ϵΦ_β(s_i))/N(ϵ), ψ_β(s_i,ϵ) = (Φ_β(s_i) +ϵΦ_α(s_i))/N(ϵ), N(ϵ) = √(1+ϵ^2+2ϵ e^-s^2_i/4b^2), Φ_α(s_i) = (1/π b^2)^3/4 e^-1/2b^2(r_α-2/5s_i)^2, Φ_β(-s_i) = (1/π b^2)^3/4 e^-1/2b^2(r_β+3/5s_i)^2. The s_i, i=1,2,..., n, are the generating coordinates, which are introduced to expand the relative motion wave function <cit.>. The mixing parameter ϵ(s_i) is not an adjusted one but is determined variationally by the dynamics of the multi-quark system itself. This assumption allows the multi-quark system to choose its favorable configuration in the interacting process. It has been used to explain the cross-over transition between the hadron phase and the quark-gluon plasma phase <cit.>. § THE RESULTS AND DISCUSSIONS In the present calculation, we investigate the possible lowest-lying and resonance states of the uuduc̅ pentaquark systems by taking into account the (uud)(uc̅) configurations.
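Before presenting these results, we note, as a quick numerical illustration of the screening function f(r) introduced in the previous section and of the claim that the results are insensitive to μ_cc, that (1 - e^-μ r^2)/μ can be compared directly with its quadratic small-μ limit r^2 over separations of a few fm. The grid of separations and the printed diagnostic in the sketch below are purely illustrative and are not part of the model fit.

import numpy as np

def f_screened(r, mu):
    # screened confinement factor between quarks in different clusters
    return (1.0 - np.exp(-mu * r**2)) / mu

r = np.linspace(0.5, 3.0, 6)             # separations in fm (illustrative grid)
for mu in (0.45, 0.01):                  # mu_qq and the adopted mu_cc, in fm^-2
    rel_dev = np.abs(f_screened(r, mu) - r**2) / r**2
    print(f"mu = {mu:4.2f} fm^-2, max relative deviation from r^2: {rel_dev.max():.3f}")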
For the uuduc̅ pentaquark systems, the considered baryons always have positive parity and the mesons are either pseudoscalars (0^-) or vectors (1^-), so a pentaquark state with negative parity has L=0. Accordingly, the total angular momentum, J, coincides with the total spin, S, and can take the values 1/2, 3/2 and 5/2. The possible baryon-meson channels which are under consideration in the computation are listed in Table <ref>; they are grouped according to total spin and parity J^P, and isospin I. Our purpose is to explore whether there is any other pentaquark state, and to see whether those pentaquark states can be explained as molecular pentaquarks. Since an attractive potential is necessary for forming bound states, as a first step the effective potentials of all channels are studied. Then, a dynamic calculation including both the single-channel and the channel-coupling cases is carried out in order to check whether there is any bound state. Finally, the scattering processes of the open channels are studied to search for any resonance states. §.§ The effective potentials To search for possible pentaquark states, we first estimate the effective potentials between these hadron pairs. Here the definition of the potential can be written as E(S_m) = ⟨Ψ_5q(S_m)|H|Ψ_5q(S_m)⟩/⟨Ψ_5q(S_m)|Ψ_5q(S_m)⟩, where S_m stands for the distance between the two clusters. Ψ_5q(S_m) represents the wave function of a certain channel. Besides, ⟨Ψ_5q(S_m)|H|Ψ_5q(S_m)⟩ and ⟨Ψ_5q(S_m)|Ψ_5q(S_m)⟩ are the Hamiltonian matrix and the overlap of the states. So the effective potential between two colorless clusters is defined as V(S_m)=E(S_m)-E(∞), where E(∞) stands for the energy at a sufficiently large separation of the two clusters. The estimated effective potentials of the ND̅ systems with I=0, I=1 and I=2 are shown in Fig. <ref>. For the I(J^P)=0(1/2^-) system, there are two physical channels: ND̅ and ND̅^*. From Fig. <ref>(a), the effective potential of the ND̅ channel is attractive, so it is possible for the ND̅ channel to form a bound state, although the attraction is weak. For the ND̅^* channel, the effective potential is repulsive. For the I(J^P)=0(3/2^-) system, one can see that the ND̅^* channel is attractive, so this channel may form a bound state, and a dynamic calculation of the ND̅ system is needed. For the I(J^P)=1(1/2^-) system, we can see that the effective potentials are attractive for the ND̅^∗ and ΔD̅^∗ channels, while the one for the ND̅ channel is repulsive. The attraction of the ΔD̅^∗ channel is the largest, followed by that of the ND̅^∗ channel, so the ΔD̅^∗ channel is the most likely to form a bound state. For the I(J^P)=1(3/2^-) system, from Fig. <ref>(b), the potentials for the ΔD̅ and ΔD̅^* channels are attractive, while the potential for the ND̅^∗ channel is repulsive, so no bound state or resonance state can be formed in that channel. However, bound states or resonance states are possible for the ΔD̅ and ΔD̅^* channels due to the attractive nature of the interactions between the two clusters. For the I(J^P)=1(5/2^-) system, only the ΔD̅^* channel is present; the interaction between Δ and D̅^* shows a strong attraction, which implies a possible bound state or resonance state in this channel. For the I(J^P)=2(1/2^-) and I(J^P)=2(5/2^-) systems, there is only one channel, ΔD̅^*, but the effective potentials of the ΔD̅^* channel in these two systems are exactly opposite: the effective potential of the former is attractive while that of the latter is repulsive.
So it is interesting to explore the possibility of forming a bound state or resonance state there. For the I(J^P)=2(3/2^-) system, the effective potential of the ΔD̅ channel is repulsive, while that of the ΔD̅^* channel is attractive, and a dynamic calculation is needed here to check for the existence of bound or resonance states. §.§ Possible bound states To check whether possible bound states exist, a dynamic calculation is essential. It is important to note that only the S-wave is taken into account for the ND̅ system. Moreover, a very efficient method to solve the bound state problem of a few-body system is the resonating group method (RGM) <cit.>. In the RGM, the relative motion wave function between the two clusters is first expanded in Gaussians; the integro-differential RGM equation is then transformed into an algebraic equation, a generalized eigenvalue equation. Finally, the intrinsic energy of the ND̅ system is obtained by solving the Schrödinger equation. The details of the RGM are given in Appendix A. In the calculation, the separation of the two clusters (|S_m|) is taken to be less than 6 fm (to keep the matrix dimension manageably small). For the ND̅ system, all possible physical channels for each set of I(J^P) quantum numbers are listed in Table <ref>. Table <ref> summarizes our calculated results for the lowest-lying uuduc̅ pentaquark states. In this table, E_sc and E_cc are the eigenenergies of the ND̅ system from the single-channel and the coupled-channel estimations, respectively. E_th^Model and E_th^Exp stand for the theoretical estimations and the experimental values of the thresholds of the channels. The binding energy E_B is obtained as the difference between E_sc and E_th^Model. Considering the uncertainties of the QDCSM estimations, the corrected single-channel eigenenergies E_sc^' are obtained as E_th^Exp+E_B. In a very similar way, taking the lowest threshold of the involved channels as a scale, the corrected coupled-channel eigenenergy E_cc^' can be obtained. For the I(J^P)=0(1/2^-) system, the single-channel calculation shows that the ND̅ channel, whose attraction is too weak to bind the two particles, is unbound. The ND̅^∗ channel is also unbound because of the repulsive nature of the interaction between N and D̅^∗. However, when we consider the effect of multichannel coupling, the lowest energy obtained is approximately 2802 MeV, which is below the ND̅ threshold with a binding energy of -1 MeV. So there is a bound state for the I(J^P)=0(1/2^-) system. This result is consistent with those of Refs. <cit.>, in which the stability of the ND̅ system with J^P=1/2^- and I=0 was discussed. For the I(J^P)=0(3/2^-) system, only the ND̅^∗ channel exists. Although the effective potential of the ND̅^∗ channel is attractive, the bound state calculation in Table <ref> shows that the ND̅^∗ channel does not form a bound state because the attraction is not strong enough. For the I(J^P)=1(1/2^-) system, there are three channels: ND̅, ND̅^*, and ΔD̅^∗. The single-channel bound state calculations for ND̅ and ND̅^∗ indicate that these two channels are both unbound. This is reasonable, since, as shown in Fig. <ref>(b), the effective potential of the ND̅ channel is repulsive and the ND̅^* channel has too weak an attraction to form a bound state.
However, the ΔD̅^* channel can form a bound state through the stronger attraction between Δ and D̅^∗; compared with the theoretical threshold of the ΔD̅^∗ channel, the lowest energy of this state is 3203 MeV, corresponding to a binding energy of -36 MeV. After the multichannel coupling calculation, the lowest energy is still above the threshold of the ND̅ channel, which means that there is no bound state for the I(J^P)=1(1/2^-) system. However, for the closed ΔD̅^∗ channel, when the scattering processes of the open channels ND̅ and ND̅^∗ are considered, the ΔD̅^∗ may appear as a resonance state. So in the next section, we will study the scattering processes of these open channels to determine whether the ΔD̅^∗ can form a resonance state. The I(J^P)=1(3/2^-) system includes three channels: ND̅^∗, ΔD̅ and ΔD̅^∗. The ΔD̅ and ΔD̅^∗ channels are bound, and the lowest energies of these channels are below their theoretical thresholds; from Table <ref>, the binding energies are -14 MeV and -26 MeV, respectively. For the ND̅^∗ channel, no bound state can be found due to the repulsive interaction between N and D̅^∗. When considering the three-channel coupling calculation of ND̅^∗, ΔD̅ and ΔD̅^∗, it turns out that the lowest eigenvalue obtained is still higher than the theoretical threshold of the corresponding physical channel, which means that there are no bound states. However, we should check whether the ΔD̅ and ΔD̅^∗ channels are resonance states by coupling them to the open channel ND̅^∗, which is done in the next section. For the I(J^P)=1(5/2^-) system, a stable energy is obtained in the bound state calculation, lying 28 MeV below the threshold of ΔD̅^∗, so it is possibly a bound state. Given the very strong attraction between Δ and D̅^∗, this result is reasonable. In Ref. <cit.>, the only bound state was obtained in the ΔD̅^∗ (T,S)=(1,5/2) system, which is similar to our result. For the I=2 system, the ΔD̅^∗ with J^P=1/2^- is a bound state with a binding energy of -10 MeV, while the ΔD̅^∗ with J^P=5/2^- is unbound because of the repulsive interaction between the two particles. For J^P=3/2^-, there is no bound state in either the single-channel or the multichannel coupling calculation. §.§ Possible resonance states In this section, to confirm whether or not these bound states can survive as resonance states after coupling to the open channels, the scattering phase shifts of all the open channels in the QDCSM are investigated. The details of the calculation method are given in Appendix A. Resonances are unstable particles usually observed as bell-shaped structures in the scattering phase shifts. The results for the possible resonances are shown in Fig. <ref> to Fig. <ref>. Here, we should note that the horizontal axis M in Fig. <ref> to Fig. <ref> is the sum of the corresponding theoretical threshold of the open channel and the incident energy. In addition, the resonance masses and decay widths of these states are presented in Table <ref>, where M_r is the resonance mass, Γ_i is the partial decay width of the resonance state, and Γ_total is the total decay width. For the whole system, it is noted that all of the states that we study here are in the S-wave. Although an S-wave bound state can decay to D-wave channels through the tensor force, this decay is almost negligible due to its small size, so the total decay widths listed here should be regarded as lower limits. We now proceed to describe our theoretical findings in detail.
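As a side note on how a mass and width can be read off such a bell-shaped phase shift, a common convention places the resonance mass at the energy where the S-wave phase shift rises through π/2 and estimates Γ ≈ 2/(dδ/dE) at that point. The short sketch below applies this convention to a synthetic Breit-Wigner-like curve; the numbers are purely illustrative and are not the phase shifts of our calculation, where the masses and widths are read from the shapes of the resonances in the figures.

import numpy as np

def resonance_from_phase_shift(energies, deltas):
    # resonance mass: where delta rises through pi/2; width: Gamma ~ 2 / (d delta / dE)
    slope = np.gradient(deltas, energies)
    idx = np.argmin(np.abs(deltas - np.pi / 2) + 1e9 * (slope <= 0))
    return energies[idx], 2.0 / slope[idx]

E = np.linspace(3100.0, 3160.0, 601)            # MeV, synthetic grid
delta = np.arctan2(5.0 / 2, 3130.0 - E)         # fake resonance at 3130 MeV with width 5 MeV
mass, width = resonance_from_phase_shift(E, delta)
print(f"resonance mass ~ {mass:.1f} MeV, width ~ {width:.1f} MeV")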
From the bound state calculation shown above, for the I(J^P)=1(1/2^-) system, the ΔD̅^∗ channel is bound, while the ND̅ and ND̅^∗ channels are unbound and can be identified as the open channels. So the bound state ΔD̅^∗ can decay to two open channels: ND̅ and ND̅^∗. Firstly, we calculate the two-channel coupling. The phase shifts of all scattering channels are shown in Fig. <ref>. The phase shifts of the ND̅ channel clearly show a resonance state, which means that the bound state ΔD̅^∗ appears as a resonance state by coupling to the scattering channel ND̅, while there is no indication of the presence of any resonance state in the scattering phase shift of the open channel ND̅^∗. Besides, we also consider the channel coupling calculation of ΔD̅^∗ and the two open channels ND̅ and ND̅^∗. From Fig. <ref>, no resonance states are found, because the effect of the channel coupling raises the energy of the ΔD̅^∗ above its threshold. Moreover, in order to minimize the theoretical errors and to compare the calculated results with future experimental data, we shift the mass of the resonance state to M_r=M-E_th^Model+E_th^Exp. Taking the resonance state ΔD̅^∗ in the ND̅ channel as an example, the resonance mass is about 3123 MeV as shown in Fig. <ref>, the theoretical threshold is E_th^Model=3132 MeV, and the experimental threshold is E_th^Exp=3239 MeV. So the final resonance mass is M_r=3123-3132+3239=3230 MeV. The resonance mass and decay width are listed in Table <ref>. For the I(J^P)=1(3/2^-) system, two types of channel coupling are taken into account in the calculation. The first is the two-channel coupling with a single bound state and a related open channel, while the other is the three-channel coupling with two bound states and a corresponding open channel. Firstly, the two-channel coupling is considered, and the phase shifts of all scattering channels are shown in Fig. <ref>. The phase shifts of the ND̅^∗ channel clearly show two resonance states, which means that the bound states ΔD̅ and ΔD̅^∗ appear as resonance states by coupling to the scattering channel ND̅^∗. The resonance mass and decay width of each resonance state can be obtained from the shape of the resonance; they are listed in Table <ref>, which shows that the ΔD̅ and ΔD̅^∗ are both resonance states with masses of 3089 MeV and 3233 MeV and decay widths of 20 MeV and 4 MeV, respectively. In addition, to investigate the effect of the coupling between the bound-state channels, we also perform the three-channel coupling calculation. The phase shifts of all scattering channels of the I(J^P)=1(3/2^-) system are shown in Fig. <ref>, which show a multi-resonance behavior. There are two resonance states in the ND̅^∗ scattering phase shifts, corresponding to ΔD̅ and ΔD̅^∗, respectively. The resonance masses and decay widths of the resonance states with the three-channel coupling are also listed in Table <ref>. The resonance mass of the ΔD̅ state is 3090 MeV and its decay width is about 14 MeV, while the ΔD̅^* state has a mass of 3230 MeV and a decay width of 1 MeV. § SUMMARY In the present work, we have systematically studied the ND̅ system composed of uuduc̅ in the framework of the quark delocalization color screening model (QDCSM). By solving the RGM equation, we explore whether there are any possible bound and resonance states.
The computed results show that several possible bound and resonance states are found for the ND̅ system within the scanned quantum numbers: I(J^P)=0(1/2^-), I(J^P)=1(1/2^-), I(J^P)=1(3/2^-), I(J^P)=1(5/2^-), and I(J^P)=2(1/2^-). These are characterized by the following features: (1) There are bound states of ΔD̅^∗ with I(J^P)=1(1/2^-), ΔD̅ with I(J^P)=1(3/2^-) and ΔD̅^∗ with I(J^P)=1(3/2^-) in the single-channel calculation. But when the channel-coupling calculations are applied to the entire ND̅ system, the ND̅ with I(J^P)=0(1/2^-) turns into a bound state, and the ΔD̅^∗ in the I(J^P)=1(5/2^-) and I(J^P)=2(1/2^-) channels are also found to be bound states. The presence of those states is also a sharp prediction of quark exchange dynamics because in a hadronic model the attraction appears in different channels. (2) Narrow baryon-meson resonance states are obtained in the coupled-channel cases, which are ΔD̅^∗ with I(J^P)=1(1/2^-) (M_r=3220 MeV, Γ=0.01 MeV) and I(J^P)=1(3/2^-) (M_r= 3230∼3233 MeV, Γ=1∼4 MeV), and ΔD̅ with I(J^P)=1(3/2^-) (M_r= 3089∼3090 MeV, Γ=14∼20 MeV). Last but not least, we hope our calculations of the pentaquarks may provide valuable information for future experimental searches. This work is supported partly by the National Natural Science Foundation of China under Contract No. 12175037, No. 11775050, No. 11775118 and No. 11535005. This work is also supported by the China Postdoctoral Science Foundation funded projects No. 2021M690626 and No. 1107020201. § RESONATING GROUP METHOD FOR BOUND-STATE AND SCATTERING PROBLEMS In the present work, we perform bound state calculations as well as scattering calculations for the ND̅ system by using the RGM in the QDCSM. The key issue of this approach is how to deal with the two-cluster problem. In this method, when dealing with the two-cluster system, one considers only the relative motion between the clusters, while the internal structure of each cluster is frozen. So the wave function of the baryon-meson system is ψ = ∑_L𝒜[[ϕ̂_A(ρ_A,λ_A)ϕ̂_B(ρ_B)]^[σ]IS⊗χ_L(R_AB)]^J, where L stands for the orbital angular momentum and the symbol 𝒜 is the antisymmetry operator, which can be defined as 𝒜 = 1-P_14-P_24-P_34, where 1, 2, and 3 stand for the quarks in the baryon cluster, and 4 stands for the quark in the meson cluster. ϕ̂_A and ϕ̂_B are the internal cluster wave functions of the baryon A and the meson B: ϕ̂_A = (2/3π b^2)^3/4(1/2π b^2)^3/4 e^-(ρ_A^2/4b^2+λ_A^2/3b^2)η_I_AS_Aξ_A^c , ϕ̂_B = (1/2π b^2)^3/4 e^-ρ_B^2/4b^2η_I_BS_Bξ_B^c , where η_IS and ξ^c represent the flavor-spin and internal color parts of the cluster wave functions, respectively. ρ_A and λ_A are the internal coordinates for the baryon cluster A and ρ_B is the internal coordinate for the meson cluster B. The Jacobi coordinates are defined as follows: ρ_A = r_1-r_2, ρ_B=r_4-r_5, λ_A = r_3-1/2(r_1+r_2), R_A = 1/3(r_1+r_2+r_3), R_B=1/2(r_4+r_5), R_AB = R_A-R_B, R_G=3/5R_A+2/5R_B. From the variational principle, after variation with respect to the relative motion wave function χ(R)=∑_Lχ_L(R), one obtains the RGM equation, ∫ H(R, R^')χ(R^')dR^'=E ∫ N(R, R^') χ(R^')dR^', where H(R, R^') and N(R, R^') are the Hamiltonian and norm kernels, respectively. The eigenenergy E and the wave functions are obtained by solving the RGM equation.
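Numerically, once the kernels are projected onto a finite set of Gaussians (the generator-coordinate expansion described next), solving the RGM equation amounts to a generalized eigenvalue problem of the form H C = E N C. A minimal SciPy sketch is given below; the small random matrices are only placeholders for the projected Hamiltonian and norm kernels, which in the actual calculation come from integrating H(R,R') and N(R,R') over the basis.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8                                    # number of Gaussian basis functions (illustrative)

# placeholder symmetric Hamiltonian kernel and positive-definite norm kernel
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0
B = rng.normal(size=(n, n))
N = B @ B.T + n * np.eye(n)

# generalized eigenvalue problem H C = E N C
E, C = eigh(H, N)
print("lowest eigenenergy of the basis problem:", E[0])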
Generally, one can introduce generator coordinates S_m to expand the Lth relative motion wave function χ_L(R) (in the present estimation, only the S-wave bound state is considered, i.e., L=0) by χ_L(R) = 1/√(4π)(6/5π b^2)^3/4∑_m=1^nC_m ∫exp[-3/5b^2(R-S_m)^2]Y^L(Ŝ_m)dŜ_m = ∑_m=1^nC_mu_L(R,S_m)/RY^L(R̂), with u_L(R,S_m) = √(4π)(6/5π b^2)^3/4R e^-3/5b^2(R^2+S_m^2) i^L j_L(-i6RS_m/5b^2), where the C_m are expansion coefficients, n is the number of Gaussian bases, which is determined by the stability of the results, and j_L is the Lth spherical Bessel function. Then the relative motion wave function χ(R) is χ(R) = 1/√(4π)∑_L(6/5π b^2)^3/4 ∑_m=1^nC_m∫ e^-3/5b^2(R-S_m)^2Y^L(Ŝ_m)dŜ_m. After the inclusion of the center-of-mass motion, Φ_G(R_G) = (5/π b^2)^3/4e^-5R_G^2/2b^2, the total wave function in Eq. (<ref>) can be rewritten as Ψ_5q = 𝒜∑_m,LC_m,L∫1/√(4π)∏_α=1^3Φ_α(S_m)∏_β=4^5Φ_β(-S_m) [ [η_I_AS_Aη_I_BS_B]^ISY^L(Ŝ_m)]^J[ξ_c(A)ξ_c(B)]^[σ]dŜ_m, where Φ_α(S_m) and Φ_β(-S_m) are the single-particle orbital wave functions with different reference centers, whose specific forms are given in Eq. (<ref>). With the reformulated ansatz as shown in Eq. (<ref>), the RGM equation becomes an algebraic eigenvalue equation, ∑_j,LC_j,LH_i,j^L,L^' = E∑_jC_j,L^'N_i,j^L^', where N_i,j^L^' and H_i,j^L,L^' are the overlap of the wave functions and the matrix elements of the Hamiltonian, respectively. By solving this generalized eigenvalue problem, we can obtain the energies E of the pentaquark states and the corresponding expansion coefficients C_j,L. Finally, the relative motion wave function between the two clusters can be obtained by substituting the C_j,L into Eq. (<ref>). For a scattering problem, the relative wave function is expanded as χ_L(R) = ∑_m=1^nC_mũ_L(R,S_m)/RY^L(R̂), with ũ_L(R,S_m) = α_m u_L(R,S_m) for R ≤ R_C, and ũ_L(R,S_m) = [h_L^-(k,R)-s_mh_L^+(k,R)]R for R ≥ R_C, where u_L is given in Eq. (<ref>), h^±_L are the Lth spherical Hankel functions, k is the momentum of the relative motion with k=√(2μ E_ie), μ is the reduced mass of the two hadrons of the open channel, and E_ie is the incident energy of the relevant open channel, which can be written as E_ie=E_total-E_th, where E_total denotes the total energy and E_th represents the threshold of the open channel. R_C is a cutoff radius beyond which all the strong interaction can be disregarded. Besides, α_m and s_m are complex parameters that are determined by the smoothness condition at R=R_C, and the C_m satisfy ∑_m=1^nC_m=1. After performing the variational procedure, an Lth partial-wave equation for the scattering problem can be deduced as ∑_m=1^nℒ^L_imC_m = ℳ_i^L (i=0, 1,..., n-1), with ℒ^L_im = 𝒦_im^L-𝒦_i0^L-𝒦_0m^L+𝒦_00^L, ℳ_i^L = 𝒦_00^L-𝒦_i0^L, and 𝒦_im^L = <ϕ̂_Aϕ̂_Bũ_L(R^',S_i)/R^'Y^L(R̂^')| H-E |𝒜[ϕ̂_Aϕ̂_Bũ_L(R,S_m)/RY^L(R̂)] >. By solving Eq. (<ref>), we can obtain the expansion coefficients C_m; then the S-matrix element S_L and the phase shift δ_L are given by S_L=e^2iδ_L=∑_m=1^nC_ms_m. Finally, the cross section can be obtained from the scattering phase shifts by the formula σ_L =4π/k^2(2L+1) sin^2δ_L. Klempt:2007cp E. Klempt and A. Zaitsev, Phys. Rept. 454 (2007), 1-202 doi:10.1016/j.physrep.2007.07.006 [arXiv:0708.4016 [hep-ph]]. Brambilla:2010cs N. Brambilla, S. Eidelman, B. K. Heltsley, R. Vogt, G. T. Bodwin, E. Eichten, A. D. Frawley, A. B. Meyer, R. E. Mitchell and V. Papadimitriou, et al. Eur. Phys. J. C 71 (2011), 1534 doi:10.1140/epjc/s10052-010-1534-9 [arXiv:1010.5827 [hep-ph]]. Hosaka:2016pey A. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai and S.
http://arxiv.org/abs/2307.00312v2
20230701120012
Upper bounds for the number of isolated critical points via Thom-Milnor theorem
[ "Vladimir Zolotov" ]
math-ph
[ "math-ph", "math.AG", "math.AT", "math.CA", "math.MP", "31B05" ]
We apply the Thom-Milnor theorem to obtain upper bounds on the number of isolated (1) critical points of a potential generated by several fixed point charges (Maxwell's problem on point charges), (2) critical points of SINR, (3) critical points of a potential generated by several fixed Newtonian point masses augmented with a quadratic term, (4) central configurations in the n-body problem. In particular, we get an exponential bound for Maxwell's problem and a polynomial bound for the case of an "even-dimensional" potential in Maxwell's problem. Upper bounds for the number of isolated critical points via Thom-Milnor theorem Vladimir Zolotov =============================================================================== § INTRODUCTION In the present paper, we use the Thom-Milnor theorem to give upper bounds for the number of isolated * critical points of a potential generated by several fixed point charges (Maxwell's problem on point charges), * critical points of SINR, * critical points of a potential generated by several fixed Newtonian point masses augmented with a quadratic term, * central configurations in the n-body problem. Surprisingly, the direct application of the Thom-Milnor theorem allows us to obtain tighter or more general bounds than specialized methods. In particular, we get an exponential bound for (<ref>), which was previously only known for the 2-dimensional case, see <cit.>. Additionally, for the case of an "even-dimensional" potential in (<ref>) we get a polynomial bound. §.§ Thom-Milnor theorem Our main tool is the following theorem by R. Thom <cit.> and J. Milnor <cit.>. Let m,k,p ∈{1,2,3,…} and let f_1,…,f_p be real polynomials in m variables. Let V be the zero set of the system f_1(x_1,…,x_m) = … = f_p(x_1,…,x_m) = 0. Suppose that each f_i has degree ≤ k. Then the sum of the Betti numbers of V is ≤ k(2k - 1)^m-1. In the above formulation, by the qth Betti number of V we mean the rank of the Čech cohomology group H^q(V), using coefficients in some fixed field F. All we need from the Betti numbers are the following two properties: * they are all non-negative, since they are ranks of groups, * the 0th Betti number b_0 is the number of connected components of V. These two properties directly imply the following corollary. Under the assumptions of Proposition <ref>, the number of connected components of V does not exceed k(2k - 1)^m-1, and in particular the number of isolated zeroes of the system (<ref>) does not exceed k(2k - 1)^m-1. §.§ The structure of the paper Each of Sections <ref>-<ref> is devoted to a single problem from (<ref>)-(<ref>). At the beginning of each section, we give a proper introduction to the problem and provide some historical remarks. Then we derive the bounds by applying Corollary <ref>. Finally, at the end of each section, we discuss the (non-)existence of non-isolated solutions. For problems (<ref>), (<ref>) and (<ref>), for almost all sets of parameters there are no degenerate critical points and in particular there are no non-isolated critical points, see Proposition <ref>, Proposition <ref> and the discussion in Subsection <ref>. No such result is known for (<ref>), and establishing one would be a major result, since the question of whether non-isolated central configurations exist is a long-standing open problem known as Smale's 6th problem, see <cit.>.
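Since the corollary above is applied repeatedly below with different choices of the degree bound k and the number of variables m, a one-line helper makes the resulting counts easy to evaluate. This is only an illustrative sketch; the particular values of k and m for each application are derived in the corresponding sections.

def thom_milnor_bound(k: int, m: int) -> int:
    """Upper bound k(2k-1)^(m-1) on the number of connected components of the
    zero set of a system of real polynomials of degree <= k in m variables."""
    return k * (2 * k - 1) ** (m - 1)

# Example: polynomials of degree at most 4 in m variables (a degree bound that
# appears in several of the systems considered later).
for m in (3, 5, 10):
    print(m, thom_milnor_bound(4, m))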
§.§ Previously known connections between problems (<ref>) - (<ref>) The author is not the first person who noticed that problems (<ref>) - (<ref>) are connected to each other. The problem (<ref>) originates in the study of central configurations, see <cit.>, and thus is connected to (<ref>). The similarity between (<ref>) and (<ref>) is noted in <cit.>. The previously known bound for (<ref>) by A. Gabrielov, D. Novikov, and B. Shapiro <cit.> relies on the Khovanskii's theory of fewnomials <cit.>. This paper inspired the usage of the theory of fewnomials in other works. In particular A. Albouy and Ya. Fu <cit.> apply it in the context of counting central configurations. § MAXWELL'S PROBLEM §.§ The statement Fix d ∈{1,2,3,…}. Let ||·|| denotes the the standard Euclidean norm. Fix m ∈{0,1,2,…}. Suppose that we have points x_1,…,x_n ∈^d. And numbers q_1,…,q_n ∈∖{0} which symbolize point charges located in those points. Consider a function V = V_m^(x_1,q_1),…,(x_n,q_n):^d→ given by V(p) = ∑_i = 1^nq_i/|| p - x_i ||^m, for m ≠ 0, V(p) = ∑_i = 1^nq_ilog|| p - x_i ||, for m = 0. Maxwell's problem asks for an upper bound on the number of critical points of V. If m = d - 2 then V is the potential of electrostatic field created by point charges q_1,…,q_n (maybe up to a constant) as it is considered in mathematical physics. If in addition d = 3 then V is the "real life" potential of electrostatic field from physics. J.C. Maxwell <cit.> argued that in the case d = 3, m = 1 the upper bound on the number of isolated critical points of V is (n - 1)^2, but his proof contains an unproven claim, and thus is considered to be incomplete (see <cit.>). §.§ Rough summary of our results If m is even then the critical points of V coincide with solutions of a system of polynomial equations with d independent variables. Combining this with the Thom-Milnor estimate on the number of connected components of a set of solutions of a polynomial system we get that the number of isolated critical points of V is bounded from above by (1 + (n-1)(m+2))(1 + 2(n-1)(m+2))^d-1, see Theorem <ref>(<ref>). For the general case critical points of V correspond to zeroes of a polynomial system with n + d independent variables. Once again by the use of the Thom-Milnor estimate we get that the number of isolated critical points of V can not exceed (m + 4)(2m + 7)^d + n, see Theorem <ref>(<ref>). §.§ Previous results §.§.§ The only previously known result for d = 3 and m = 1 and arbitrary n. The work <cit.> by A. Gabrielov, D. Novikov, and B. Shapiro is the only one which deals with arbitrary d,m and n and even the only one which addresses the case d = 3 and m = 1 for arbitrary n. Authors of <cit.> represent the set of critical points of V as a set of solutions of a system of quasi-polynomial equations. From that by application of Khovanskii's theory of fewnomials <cit.> they deduce that for any m ∈{0,1,2,…} and any d ∈{1,2,…} if all the critical points of V are non-degenerate then their total number does not exceed 4^n^2(3n)^2n. In general, there exist configuration where critical points are not isolated: consider a square with point charges 1, -1, 1, -1 in its vertices. Then every point on the line through the center of the square and orthogonal to the plane of the square will be critical. It is unknown if non-isolated critical points could exist if all the charges have the same sign, see <cit.>. §.§.§ Other results which work for arbitrary n. In <cit.> K. 
Killian considers the case d = 2, m = 1 and shows that if all critical points of V are isolated then their total number does not exceed 2^2n-2(3n-2)^2. For the case d = 2, m = 1 it is unknown if non-isolated critical points are possible. T. Erdélyi, J. Rosenblatt and R. Rosenblatt <cit.> show that there are no isolated critical points in the case if all point charges are on the same line. The study a of a case d = 2, m = 0 goes back to K. F. Gauss, see <cit.>. In this case using the identification ^2 ≅ we can write (∇ V(z))^* = c ∑_i = 1^nq_i/( z - x_i ), where * denotes the complex conjugation and c ≠ 0 is an absolute constant. Thus the zeroes of ∇ V coincide with zeroes of a complex polynomial of a degree at most (n - 1) and V has at most (n - 1) critical points. §.§.§ Modeling of m = m' inside m = m' + 1. Another phenomena mentioned in <cit.> is that one can generate V = V_m^(x_1,q_1),…,(x_n,q_n):^d→ using V_m+1 by adding additional dimension and substituting point charges by charged lines. More precisely consider V̅ = V̅_m + 1^(l_1,q_1),…,(l_n,q_n):^d×{0}→, where l_i is a line (x_i,*) and V̅ is given by V̅(p) = ∑_i = 1^n∫_x ∈ l_iq_i dx/|| p - x ||^m+1. Then V = c(m) V̅, where c(m) > 0 is a constant depending only on m. Theorem <ref> gives a polynomial upper bound for the number of isolated critical points for the case of even m. Thus, if one could model m = m' inside m = m' + 1 using say 100 point charges instead of each charged line then he would likely be able to get a polynomial upper bound for the case of odd m too. But the author fails to figure out how to do a reduction of this nature. §.§.§ Results for a specific n. T.-L. Lee and Y.-L. Tsai <cit.> give an example with 9 equilibrium points for d = 2, m = 1, and n = 4 which is the claimed upper bound of the Maxwell conjecture. §.§.§ Lower bounds for the number of isolated critical points. Lower bounds on the number of critical points of V are given by M. Morse and S. Cains <cit.> and T. Kiang <cit.>. §.§.§ Critical points of polynomials Khavinson et al. <cit.> formulated conjectures on location of critical points of polynomials related to Maxwell's problem. §.§ Our results for Maxwell's problem The following theorem formalizes the informal summary given in Subsection <ref>. Let x_1 = (x_11,…,x_1d),…,x_n = (x_n1,…,x_nd) be distinct points in ^d, p = (p_1,…,p_d) ∈^d ∖{x_1,…, x_n}, m ∈{0,1,2,3,…}, q_1,…,q_n ∈∖{0} and V = V_m^(x_1,q_1),…,(x_n,q_n). Then, Point p is a critical point of V iff ∑_i=1^n( q_i(p - x_i) ∏_ 1 ≤ j ≤ n j ≠ i (∑_1≤ k ≤ d(p_k - x_jk)^2)^m+2/2) = 0. If m is even then V has at most (1 + (n-1)(m+2))(1 + 2(n-1)(m+2))^d-1 isolated critical points. Point p is a critical point of V iff there exist ß_1,…,ß_n > 0 satisfying ß_j^2 ∑_1≤ k ≤ d(p_k - x_jk)^2 = 1 , for every 1≤ j ≤ n and ∑_i=1^n( q_i(p - x_i) ß_i^m+2 ) = 0. V has at most (m + 4)(2m + 7)^d + n isolated critical points. We remind that V(p) = ∑_i = 1^nq_i/|| p - x_i ||^m, for m ≠ 0, V(p) = ∑_i = 1^nq_ilog|| p - x_i ||, for m = 0. We differentiate V: V'_p_k = c(m)∑_i = 1^nq_i(p_k - x_ik)/|| p - x_i ||^m + 2, where c(m) ≠ 0 is a constant depending only on m, ∇ V = c(m)∑_i = 1^nq_i(p - x_i)/|| p - x_i ||^m+2, which is (maybe up to a constant) the force given by Coulomb's law. Thus, ∇ V(p) = 0 is equivalent to ∑_i=1^n( q_i(p - x_i) ∏_ 1 ≤ j ≤ n j ≠ i|| p - x_j ||^ m+2 ) = 0. Which is equivalent to (<ref>). When m is even (<ref>) is a polynomial system in d variables: p_1,…,p_d. Each polynomial of the system has degree ≤ (1 + (n-1)(m+2)). 
Thus, by Thom-Milnor theorem (see Corollary <ref>) we have the number of isolated critical points of V does not exceed (1 + (n-1)(m+2))(1 + 2(n-1)(m+2))^d-1. Theorem <ref>(<ref>) is just a reformulation of Theorem <ref>(<ref>). Denote N = (m + 4)(2m + 7)^d + n. We will argue by contradiction. Suppose that V has at least N + 1 isolated critical points. Then by Theorem <ref>(<ref>) the zero set of the system (<ref>) (considered as a polynomial system in d + n variables p_1,…,p_k,ß_1,…,ß_n) has at least N + 1 connected components. The degree of every polynomial in this system is ≤max{4, m+3}≤ m + 4. Thus, by Thom-Milnor theorem (see Corollary <ref>) we have that the total number of the connected components of the zero set does not exceed (m + 4)(2m + 7)^d + n = N, so we have a contradiction. §.§ Existence of non-isolated critical points As we already mentioned in subsubsections <ref> and <ref> non-isolated critical points do exist but it is unknown if they exist in dimension 2 or for potentials generated by charges of the same sign. It is also known that in almost all configurations of charges there are no non-isolated critical points. More precisely M. Morse and S. Cains <cit.> give the following theorem. Let d ∈{1,2,3,…} and m = {0,1,2,3,…}. Let x_1 = (x_11,…,x_1d),…,x_n - 1 = (x_(n - 1)1,…,x_(n - 1)d) be distinct points in ^d, and q_1,…,q_n ∈∖{0}. Then, for almost all x_n = (x_n1,…,x_nd) ∈^d ∖{x_1,…,x_n-1} the potential V = V_m^(x_1,q_1),…,(x_n,q_n) has no degenerate critical points. The above proposition is a corollary of the following theorem <cit.>. Let d,∈{1,2,3,…}. Let ^d+ be a Euclidean space with Cartesian coordinates x_1,…,x_d,a_1,…,a_. Let W be an open non-empty subset of ^d+ and U:W→ be a C^2-mapping such that for every point p ∈ W satisfying U'_x_i(p) = 0, for every 1 ≤ i ≤ d, we have [ U”_x_1x_1 U”_x_1x_2 … U”_x_1x_d U”_x_1a_1 U”_x_1a_2 … U”_x_1a_; U”_x_2x_1 U”_x_2x_2 … U”_x_2x_d U”_x_2a_1 U”_x_2a_2 … U”_x_2a_; 8; U”_x_dx_1 U”_x_dx_2 … U”_x_dx_d U”_x_da_1 U”_x_da_2 … U”_x_da_ ] = d. Then for almost all a ∈{å∈^|∃ x ∈^d: (x,å) ∈ W} the map u^a: {x ∈^d | (x, a) ∈ W}→ given by u^a(x) = U(x,a) does not have degenerate critical points (i.e., if the gradient of u^a is 0 at some point then Hessian of u^a has the maximal rank at this point). We will not give the proof of Proposition <ref> since it can be found in <cit.>. But we will give the proof of Proposition <ref> because <cit.> only states Proposition <ref> for d = 3, m = 1 and only sketches the proof. We take W = {(p,a)| p,a ∈^d, p ≠ a, p ≠ x_1,…, p ≠ x_n-1} and we define U:W → by U(p,a) = V_m^(x_1,q_1),…,(x_n-1,q_n-1), (a,q_n)(p). Consider a matrix d × d matrix M given by M = [ U”_p_1a_1 U”_p_1a_2 … U”_p_1a_d; U”_p_2a_1 U”_p_2a_2 … U”_p_2a_d; 4; U”_p_da_1 U”_p_da_2 … U”_p_da_d ] and we are going to show that (M) = d. For 1 ≤ k ≤ d have U'_a_k(p) = c(m) q_n(p_k - a_k)/|| p - a ||^m+2, where c(m) = m, for m ≠ 0 and c(0) = -1. And thus for 1 ≤ j ≠ k ≤ d we have U”_a_kp_j(p) = -c(m)(m+2) q_n(p_k - a_k)(p_j - a_j)/|| p - a ||^m+4, and U”_a_kp_k(p) = -c(m)(m+2) q_n(p_k - a_k)^2/|| p - a ||^m+4 +c(m) q_n/|| p - a ||^m+2. Thus M = (c(m) q_n/|| p - a ||^m+2)(I - (m+2)vv^T), where v = (p - a) / || p - a ||. Suppose that (M) < d then there exists w ≠ 0 such that Mw = 0, from (<ref>) we see that w = ≪ v for some ≪≠ 0. But then Mw = M≪ v = (c(m)q_n/|| p - a ||^m+2)(≪ v - ≪ v (m+2)) ≠ 0. So we have a contradiction and thus (M) = d. 
Now we can apply Proposition <ref> and get that for almost all a the potential V_m^(x_1,q_1),…,(a,q_n) does not have degenerate critical points. § CRITICAL POINTS OF §.§ The statement In wireless communications the signaltointerferenceplusnoise ratio () is used as a way to measure the quality of wireless connection (see <cit.>). Given d ∈{2,3}, points x_1,…,x_n ∈^d, ψ_1,…,ψ_n > 0, α > 0 and N ≥ 0 a function (x_i, ·):^d ∖{x_1,…,x_n}→ is defined by (x_i, p) = ψ_i || x_i - p ||^-α/∑_j≠ i(ψ_j || x_j - p ||^-α) + N. In this model a receiver at a point p successfully receives a message from sender x_i, if and only if (x_i, p) ≥β, where β is a constant ≥ 1. Numbers ψ_1,…,ψ_n represent transmitting powers of concurrently transmitting stations at points x_1,…,x_n. N represents the environmental noise. The pass-loss parameter α is typically taken from the interval [2,4], with å = 2 being the most common. The reception threshold is commonly taken to be ≈ 6. Interest in the critical points of is motivated by the point location problem, see <cit.>. Here is the description of the problem from <cit.>: "Given a query point p, it is required to identify which of the n transmitting stations is heard at p, if any, under interference from all other n-1 transmitting stations and background noise N. Obviously, one can directly compute (x_i, p) for every i ∈{1,…, n} in time Θ(n) and answer the above question accordingly. Yet, this computation may be too expensive, if the query is asked for many different points p." §.§ Rough summary of our results for For the case when å is an even integer the critical points of (x_i, ·) coincide with solutions of a system of polynomial equations with d independent variables. From the Thom-Milnor Theorem (see Corollary <ref>) we get that the number of isolated critical points is bounded from above by (å (2n - 1) - 1)(2å(2n - 1) - 3)^d-1, see Theorem <ref>(<ref>). §.§ Our results for Now we give a more detailed version of the above statement. Let d ∈{2,3}, let x_1,…,x_n be points in ^d, p = (p_1,…,p_d) ∈^d ∖{x_1,…, x_n}, å∈{2,4,6,8…}, ψ_1,…,ψ_n > 0, N ≥ 0, i ∈{1,…, n} and (x_i, ·) be a function defined as in (<ref>). Then, Point p is a critical point of (x_i, ·) iff for every m = 1,…,d we have f'_m(p) g(p) - f(p) g'_m(p) = 0, where f,g are real polynomials in variables (p̅_1,…,p̅_d) := p̅ given by f = ψ_i ∏_j ≠ i|| x_i - p̅||^å, g = ∑_j ≠ i(ψ_j ∏_k ≠ j|| x_k - p̅||^å) + N ∏_1 ≤ k ≤ n|| x_k - p̅||^å. (x_i, ·) has at most (å (2n - 1) - 1)(2å(2n - 1) - 3)^d-1 isolated critical points. Note that (x_i, p̅) = f(p̅)/g(p̅). Thus, '_m(x_i, p) = f'_m(p) g(p) - f(p) g'_m(p)/g^2(p) and the claim follows. (Note that since ψ_1,…,ψ_n > 0 the denominator is always positive and thus can not create any problems.) Since å is even natural number, (<ref>) is a polynomial system in d variables: p_1,…,p_d. Each polynomial of the system has degree ≤ (å (2n - 1) - 1). Thus, by Thom-Milnor theorem (see Corollary <ref>) we have the number of isolated critical points of V does not exceed (å (2n - 1) - 1)(2å(2n - 1) - 3)^d-1. §.§ Existence of non-isolated critical points It is unclear whether non-isolated critical points of (x_i, ·) could exist in the general case. But for almost all of the selections of locations of transmitters there are no non-isolated critical points. The following proposition has the same nature as Proposition <ref> but the proof is a bit more messy. Let d ∈{2,3}, n ∈{d+2, d+3, d+4,…}, i ∈{1,…, n}, let x_i be a point in ^d, å∈{2,4,6,8,…}, ψ_1,…,ψ_n > 0, N ≥ 0. 
Then, for almost all (x_1 = (x_11,…,x_1d),…,x_i-1 = (x_(i-1)1,…,x_(i-1)d), x_i+1 = (x_(i+1)1,…,x_(i+1)d), …,x_n= (x_n1,…,x_nd)) ∈^d(n-1) the function (x_i, ·) has no degenerate critical points. Instead of working of working with (x_i, ·) we prefer to work with 1/(x_i, ·) = ∑_j≠ i(ψ_j || x_j - p ||^-α) + N/ψ_i || x_i - p ||^-α. By computing the gradient and the Hessian one can see that for any open Ω⊂^d and any C^2-function f:Ω→ (0,∞) a point p ∈Ω is a critical point of f iff p is a critical point of 1/f, a point p ∈Ω is a degenerate critical point of f iff p is a degenerate critical point of 1/f. Thus, it suffices to show that for almost all (x_1 ,…,x_i-1, x_i+1 , …,x_n) a function 1/(x_i, ·) has no degenerate critical points. We take W = {(p,(x_1,…,x_i-1,x_i+1,…,x_d)) | | p,x_1,…,x_i-1,x_i+1,…,x_d are distinct points in ^d ∖{x_i}, and x_1,…,x_i-1,x_i+1,…,x_d do not lie on all on the same line} and we define U:W → by U(p,(x_1,…,x_i-1,x_i+1,…,x_n)) = 1/(x_i, p). For h ∈{1,…,i-1,i+1,…,n} consider a matrix d × d matrix M_h given by M_h = [ U”_p_1x_h1 U”_p_1x_h2 … U”_p_1x_hd; U”_p_2x_h1 U”_p_2x_h2 … U”_p_2x_hd; 4; U”_p_dx_h1 U”_p_dx_h2 … U”_p_dx_hd ] By Proposition <ref> it suffices to show that the matrix [M_1,…,M_i-1,M_i+1,…,M_n] has rank d everywhere in W. Suppose that's not true. Then for some (p,(x_1,…,x_i-1,x_i+1,…,x_n)) ∈ W there exists ∈^d∖{0} such that w^TM_h = 0, for every h ∈1,…,i-1,i+1,…,n. Next, we need to explain what happens to a row vector when it gets multiplied by M_h from the right. For every h ∈{1,…,i-1,i+1,…,n} and k ∈{1,…,d} U'_x_hk(p) = åψ_h || x_h - p ||^-(α + 2)/ψ_i || x_i - p ||^-α(p_k - x_hk). We denote Q_h := åψ_h || x_h - p ||^-(α + 2)/ψ_i || x_i - p ||^-α, so we have U'_x_hk(p) = (p_k - x_hk)Q_h. For every h ∈{1,…,i-1,i+1,…,n} and k,j ∈{1,…,d} we have U”_x_hkp_j(p) = P^h,k,j_1 + P^h,k,j_2 + P^h,k,j_3, where P^h,k,j_1 = 0 if j ≠ k Q_h if j = k , P^h,k,j_2 = -(å + 2)(p_k - x_hk)(p_j - x_hj)|| x_h - p ||^-2Q_h, P^h,k,j_3 = å(p_k - x_hk)(p_j - x_ij)|| x_i - p ||^-2 Q_h. Thus M_h can be presented as M_h = M_h1 + M_h2 + M_h3, such that for every v = (v_1,…,v_d)^T ∈^d we have v^T M_h1 = v^TQ_h, v^T M_h2 = -(å + 2)|| x_h - p ||^-2v^T(p-x_h)(p-x_h)^TQ_h, v^T M_h3 = å|| x_i - p ||^-2v^T(p-x_i)(p-x_h)^TQ_h. Note that for every v ∈^d we have that v^T M_h2 = ≪_2(v) (p-x_h)^T, v^T M_h3 = ≪_3(v) (p-x_h)^T, for some ≪_2(v), ≪_3(v) ∈. Thus from ^TM_h = 0 we have that ^T M_h1 = -(≪_2(w) + ≪_3(w)) (p-x_h)^T, and we can conclude that for every h ∈1,…,i-1,i+1,…,n there exists c(h) ∈∖{0} such that = c(h) (p-x_h). This implies that all x_1,…,x_i-1,x_i+1,…,x_d all lie on the same line. But that's the opposite of what is stated in the definition of W. § NEWTONIAN POINT MASSES WITH A CENTRAL FORCE This model is similar to Maxwell's problem for the case m = 1 with all the charges having the same sign except there is an additional quadratic term. More precisely, let d ∈{1,2,3,…}. Suppose that we have points x_1,…,x_n ∈^d. And numbers m_1,…,m_n > 0 which symbolize point masses located in those points. Consider a function F = F^(x_1,m_1),…,(x_n,m_n):^d→ given by F(p) = 1/2||p||^2 + ∑_i = 1^nm_i/|| p - x_i ||. Once again we are interested in an upper bound on the number of critical points of F. This model arises in the study of the restricted (n + 1)-body problem. We refer the reader to <cit.> for details and the relevant bibliography. 
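For concreteness, critical points of F can also be located numerically: by direct differentiation, ∇F(p) = p - ∑_i m_i (p - x_i)/||p - x_i||^3, and any standard root finder can be applied to this gradient. The sketch below uses a toy planar configuration of three masses (chosen purely for illustration) together with SciPy's root finder; it is not tied to any particular result of this section.

import numpy as np
from scipy.optimize import root

# Toy configuration: three point masses in the plane (d = 2).
masses = np.array([1.0, 2.0, 1.5])
positions = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.9]])

def grad_F(p):
    # grad F(p) = p - sum_i m_i (p - x_i) / ||p - x_i||^3
    diff = p - positions                       # shape (n, d)
    dist = np.linalg.norm(diff, axis=1)        # shape (n,)
    return p - np.sum((masses / dist**3)[:, None] * diff, axis=0)

def F(p):
    dist = np.linalg.norm(p - positions, axis=1)
    return 0.5 * p @ p + np.sum(masses / dist)

# Search for a critical point from a random starting guess away from the masses.
rng = np.random.default_rng(1)
sol = root(grad_F, rng.normal(scale=2.0, size=2))
if sol.success:
    print("critical point:", sol.x)
    print("F =", F(sol.x), "  |grad F| =", np.linalg.norm(grad_F(sol.x)))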
§.§ Rough summary of our results for Newtonian Point Masses with a Central Force We give an exponential upper bound for the number of isolated critical points in any dimension d. More precisely, the number of isolated critical points of F does not exceed (1 + 3n)(1 + 6n)^d + n. §.§ Previous results Arustamyan et al. <cit.> claim an exponential upper bound for the number of isolated critical points for the planar case of this problem. In astrophysics, there is a related open problem of finding upper bounds on the number of possible images in gravitational microlensing, see <cit.>. S. Perry <cit.> gives the only known upper bound for the general case. §.§ Our results for Newtonian Point Masses with a Central Force The following theorem formalizes the informal summary given in Subsection <ref>. Let d ∈{1,2,3,…}. Let x_1 = (x_11,…,x_1d),…,x_n = (x_n1,…,x_nd) be points in ℝ^d, p = (p_1,…,p_d) ∈ℝ^d ∖{x_1,…, x_n}, m_1,…,m_n ∈ℝ∖{0} and let F = F^(x_1,m_1),…,(x_n,m_n) be the function given by (<ref>). Then, Point p is a critical point of F iff there exist ß_1,…,ß_n > 0 satisfying ß_j^2 ∑_1≤ k ≤ d(p_k - x_jk)^2 = 1, for every 1≤ j ≤ n, and p - ∑_i=1^n( m_i(p - x_i) ß_i^3) = 0. F has at most 4 · 7^d + n isolated critical points. We remind that F(p) = 1/2||p||^2 + ∑_i = 1^n m_i/|| p - x_i ||. We differentiate F: F'_p_k = p_k - ∑_i = 1^n m_i(p_k - x_ik)/|| p - x_i ||^3, ∇ F = p - ∑_i = 1^n m_i(p - x_i)/|| p - x_i ||^3. Thus, ∇ F(p) = 0 is equivalent to (<ref>). Denote N = 4 · 7^d + n. We will argue by contradiction. Suppose that F has at least N + 1 isolated critical points. Then by Theorem <ref>(<ref>) the zero set of the system (<ref>) (considered as a polynomial system in d + n variables p_1,…,p_d,ß_1,…,ß_n) has at least N + 1 connected components. The degree of every polynomial in this system is ≤ 4. Thus, by the Thom-Milnor theorem (see Corollary <ref>) we have that the total number of connected components of the zero set does not exceed 4 · 7^d + n = N, so we have a contradiction. §.§ Existence of non-isolated critical points There is only one known example of a configuration with non-isolated critical points: we take n = 1 and place the single point mass at the origin. It is also known that for almost all configurations there are no degenerate critical points, see <cit.> (the authors of <cit.> state their result only for d = 2, but their proof works for every d). § CENTRAL CONFIGURATIONS Let d,n ∈{1,2,3,…}, m_1, …, m_n > 0 and let x_1,…,x_n be distinct points in ℝ^d. We say that a system of point masses (x_1, m_1),…,(x_n, m_n) is a central preconfiguration if there exists λ > 0 such that for every i = 1,…,n, λ x_i = ∑_ 1 ≤ j ≤ n j ≠ i(m_j|| x_i - x_j||^-3(x_i - x_j)). We say that a central preconfiguration is normalized if λ in the above system is equal to 1. (Every central preconfiguration can be normalized by scaling.) We say that normalized central preconfigurations (x_1, m_1),…,(x_n, m_n) and (x'_1, m_1),…,(x'_n, m_n) are equivalent if there exists an orientation- and origin-preserving isometry F:ℝ^d →ℝ^d such that F(x_i) = x'_i for every i = 1,…,n. A central configuration is an equivalence class of normalized central preconfigurations. We say that a central configuration K is isolated iff there exists ε > 0 such that for every normalized central preconfiguration (x_1, m_1),…,(x_n, m_n) from K and every normalized central preconfiguration (y_1, m_1),…,(y_n, m_n) which does not belong to K we have max_1 ≤ i ≤ n|| x_i - y_i ||≥ε.
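As a sanity check of these definitions, one can verify numerically that the classical equilateral-triangle configuration of Lagrange with three equal masses m, centered at the origin and scaled so that the side length equals (3m)^1/3, is a normalized central preconfiguration. The following sketch (equal masses only, chosen so that the computation is elementary) evaluates the right-hand side of the defining system and checks that it reproduces x_i.

import numpy as np

m = 1.0
s = (3.0 * m) ** (1.0 / 3.0)           # side length that makes the configuration normalized
R = s / np.sqrt(3.0)                    # circumradius of the equilateral triangle

angles = np.deg2rad([90.0, 210.0, 330.0])
x = R * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # vertices, center of mass at 0

def rhs(i):
    # sum over j != i of m * ||x_i - x_j||^(-3) * (x_i - x_j); with equal masses
    # the mass index inside the sum is immaterial
    total = np.zeros(2)
    for j in range(3):
        if j != i:
            d = x[i] - x[j]
            total += m * d / np.linalg.norm(d) ** 3
    return total

for i in range(3):
    print(i, "x_i =", x[i], "rhs =", rhs(i), "residual =", np.linalg.norm(x[i] - rhs(i)))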
§.§ Our results for central configurations The following theorem is very similar to the result of R. Kuzmina <cit.>. R. Moeckel's estimates <cit.> for the number of central configurations also utilize the Thom-Milnor theorem. Let d,n ∈{1,2,3,…} and m_1, …, m_n > 0. There are at most 4 · 7^n(n-1)/2 + nd - 1 isolated central configurations with these parameters. Let x_1 = (x_11,…,x_1d),…,x_n = (x_n1,…,x_nd) ∈ℝ^d. A system of point masses (x_1, m_1),…,(x_n, m_n) is a normalized central preconfiguration iff there exist {ß_ij}_1 ≤ i ≠ j ≤ n with ß_ij = ß_ji > 0 satisfying ß_ij^2 ∑_1≤ k ≤ d(x_ik - x_jk)^2 = 1, for every 1≤ i < j ≤ n, and x_i = ∑_ 1 ≤ j ≤ n j ≠ i(m_jß_ij^3(x_i - x_j)). Denote N = 4 · 7^n(n-1)/2 + nd - 1. We will argue by contradiction. Suppose that there are at least N + 1 isolated central configurations. Then the zero set of the system (<ref>) (considered as a polynomial system in n(n-1)/2 + nd variables {ß_ij}_1 ≤ i < j ≤ n, {x_ij}_1 ≤ i ≤ n, 1 ≤ j ≤ d) has at least N + 1 connected components. The degree of every polynomial in this system is ≤ 4. Thus, by the Thom-Milnor theorem (see Corollary <ref>) we have that the total number of connected components of the zero set does not exceed 4 · 7^n(n-1)/2 + nd - 1 = N, so we have a contradiction. §.§ Existence of non-isolated central configurations It is unknown if non-isolated central configurations exist. This is a high-profile open problem known as Smale's 6th problem, see <cit.>. M. Hampton and R. Moeckel <cit.> showed that all central configurations are isolated for n = 4. A. Albouy and V. Kaloshin <cit.> proved the same for d = 2 and n = 5 for almost all sets of masses. For more information on the field we refer the interested reader to the introduction in <cit.>. §.§ Acknowledgements The author is thankful to Andrei Alpeev and Pasha Galashin for discussing early versions of the paper and pointing out a serious bug. I am grateful to Mathoverflow users Wille Liou, Gro-Tsen, and user43326 for helping me with my questions on algebraic geometry. The author is thankful to Alain Albouy for pointing me to the works of Kuzmina and Moeckel.
http://arxiv.org/abs/2307.03015v1
20230706142417
Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance
[ "Hongzhan Yu", "Chiaki Hirayama", "Chenning Yu", "Sylvia Herbert", "Sicun Gao" ]
cs.RO
[ "cs.RO", "cs.AI" ]
Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao Received: date / Accepted: date ======================================================================== empty empty There are two major challenges for scaling up robot navigation around dynamic obstacles: the complex interaction dynamics of the obstacles can be hard to model analytically, and the complexity of planning and control grows exponentially in the number of obstacles. Data-driven and learning-based methods are thus particularly valuable in this context. However, data-driven methods are sensitive to distribution drift, making it hard to train and generalize learned models across different obstacle densities. We propose a novel method for compositional learning of Sequential Neural Control Barrier models (SN-CBFs) to achieve scalability. Our approach exploits an important observation: the spatial interaction patterns of multiple dynamic obstacles can be decomposed and predicted through temporal sequences of states for each obstacle. Through decomposition, we can generalize control policies trained only with a small number of obstacles, to environments where the obstacle density can be 100x higher. We demonstrate the benefits of the proposed methods in improving dynamic collision avoidance in comparison with existing methods including potential fields, end-to-end reinforcement learning, and model-predictive control. We also perform hardware experiments and show the practical effectiveness of the approach in the supplementary video. § INTRODUCTION Dynamic obstacle avoidance poses longstanding challenges for mobile robots. Consider the case of autonomous driving in populated areas: the ego-robot needs to quickly predict the movement of the pedestrians and infer control actions that can avoid collision accordingly, while maintaining progress towards its goal. Existing approaches typically use known dynamics of both the obstacles (i.e. pedestrians) and the ego-robot to compute control actions, using methods such as artificial potential fields (APFs) <cit.>, dynamic windows <cit.>, and model-predictive control (MPC) <cit.>. Control barrier functions (CBFs) <cit.> provide a new approach <cit.> that combines the benefits of potential fields and MPC. CBFs reduce the complexity of online optimization by enforcing a value landscape that maintains forward invariance of safe behaviors of the ego-robot. They still require full knowledge of the dynamics of the system, and can be hard to design in complex environments. CBFs can also encounter the issue of “freezing robots" when used for ensuring collision avoidance with multiple dynamic obstacles <cit.>. A major difficulty with dynamic obstacles, such as humans, is that the analytic modeling of their dynamics is inherently hard <cit.>. For specific applications, it is often viable to collect data to train black-box models that make accurate predictions, in the form of neural networks <cit.> or Gaussian processes <cit.>. However, they have two drawbacks: 1) Hard to Scale and Generalize. The interaction patterns of the dynamic obstacles grow exponentially in the number of obstacles, which affects both training and inference. Training is expensive because of the need to sample the combinatorial space of possible patterns of all dynamic obstacles, and distribution drift becomes a major challenge <cit.>. 
If we train a control policy in an environment with a small number of pedestrians, then the policy will struggle in environments with a large number of pedestrians that exhibit a very different distribution in the obstacle dynamics (Figure <ref>). 2) Hard to Optimize for Predictive Control. Although high-capacity learning-based models can fit the collected data with high accuracy, they are extremely nonlinear functions that can not be easily used to form online optimization problems, such as for MPC. They can be used through forward-unrolling and sampling, which often becomes inefficient and unreliable for real-time inference of the control actions. In this paper, we propose a new approach to alleviate both limitations of learning-based methods for dynamic obstacle avoidance at scale. The key technique is based on the following observation: the collective dynamics of the dynamic obstacles can be approximately inferred from the sequential patterns in the trajectories of each individual obstacle. For instance, when we observe that one pedestrian is slowing down or changing directions, it is most likely because of other pedestrians or obstacles nearby. That allows us to directly infer the next state of the pedestrian, without the need of explicitly using the spatial information of the other obstacles. In this way, the collective spatial interaction dynamics of a group of dynamic obstacles can be inferred by aggregating the predictions from the sequential patterns of each obstacle. Such inference can be hard to formulate analytically, but high-capacity neural network models may capture such implicit patterns through data. We will first examine the validity of such decomposition in detail in Section <ref>, and show that it is central to achieving scalable modeling and control. Given the benefits of compositional learning with sequential models, we propose the design of sequential neural control barrier functions (SN-CBFs) to achieve compositional learning and inference for scalable dynamic collision avoidance. Note that the design does not rely on the direct use of sequential models to predict the movement of the obstacles. Instead, by learning SN-CBF models, we can directly infer safe control actions for the current state of the ego-robot, without the need of unrolling the complex predictive models. Moreover, the highly nonlinear SN-CBF models can produce value landscapes that are significantly more complex than manually-designed simpler forms of potential fields or barrier functions, as illustrated in Figure <ref>(c). In this way, the SN-CBF alleviates well-known issues, such as the narrow-corridor effects in APF, and can be used on ego-robots with highly nonlinear dynamics (details in Section <ref>). Importantly, although the SN-CBF models are first applied to each dynamic obstacle individually, the control action is always computed after aggregating the value landscapes for all obstacles at every step. As illustrated in Figure <ref>(b-c) , we aggregate the SN-CBF values from all obstacles into one unified landscape to infer the control actions for the ego-robot (red dot in the figure). Doing so alleviates the common issue of “freezing robots,” where simply computing the ego-robot control with respect to each dynamic obstacle can easily lead to conflicting control decisions <cit.>. In contrast, every control action that we successfully obtain from SN-CBF models avoids all obstacles simultaneously. 
We analyze the performance of our method in Section <ref>, showing that it maintains a significantly lower failure rate compared to existing methods, especially as the obstacle density increases. We will describe our contributions in the following order. We will first formalize and evaluate the sequential decomposability of the collective interaction patterns of dynamic obstacles in Section <ref>. We will then describe the model-free learning procedures for the SN-CBF models in Section <ref>, and then the online inference procedures in Section <ref>. We evaluate the proposed methods in simulation environments and hardware experiments in Section <ref>. We demonstrate scalable performance in collision avoidance that generalizes well from sparse to dense environments. We analyze how the new methods can address common issues in potential fields, reinforcement learning, and model-predictive approaches. § RELATED WORK Dynamic Collision Avoidance. Existing methods for dynamic collision avoidance typically require the known dynamics of both the ego-robot and the obstacles. Artificial potential fields (APFs) methods  <cit.> design repulsive/attractive potential fields and use the gradient of this function to inform a feedback controller. They typically require that the ego-robot and the obstacles have known and simple dynamics such that the gradient directions can be directly followed. Under such assumptions, APFs can be used at large scales <cit.>, but capturing human movements using simple potential fields requires strong assumptions. Model-predictive control (MPC) <cit.> is another main framework for dynamic collision avoidance. It formulates online optimization problems that involve unrolling the system dynamics of both the ego-robot and the obstacles over bounded time horizons, to compute optimizing control actions. MPC can have high computational complexity, and additional efforts are required for handling disturbances and modeling error <cit.>. The dynamic window approach <cit.> is a special form of MPC that reduces the search space to admissible controls of the ego-robot, which has also been extended to use learned dynamics models based on the collected data <cit.>. The prediction error can quickly accumulate, and we will show the advantage of our proposed methods compared with such methods in the experiments. Learning-based Approaches. Deep reinforcement learning (DRL) approaches have been proposed for dynamic obstacle avoidance in many forms, including CADRL <cit.>, MRCA <cit.>, and GA3C-CADRL <cit.>. These methods focus on formulating the avoidance problems as Markov Decision Processes (MDPs) or Partially-Observed MDPs (POMDPs) to perform model-free learning of the control policies. CADRL <cit.> encodes social interactions into reward shaping for RL training to achieve safe navigation in pedestrian-rich environments. MRCA <cit.> performed collision avoidance through information on LIDAR measurements without directly detecting the dynamics objects. GA3C-CADRL <cit.> introduced sequential models to support a varying number of pedestrian states. GCBF-MBPO <cit.> proposed model-based enhancement to achieve faster training. In general, existing DRL methods are sensitive to distribution drift and lack generalizability from sparse to dense environments. We will compare with DRL baselines in the experiment section. Control Barrier Functions. 
Control barrier functions (CBFs) <cit.> impose (typically manually-designed) value landscapes to ensure forward invariance of the safe set with control actions computed by efficient online optimization (as quadratic programs). While well-designed CBFs can provide formal guarantee for control systems with static obstacles and known dynamics, its direct application in dynamic obstacle avoidance <cit.> has several challenges. Applying CBFs between every pair of agents lead to feasibility issues where avoiding one agent inevitably leads to collisions with another, while synthesizing valid CBFs for arbitrary numbers of agents is challenging. To mitigate the issue of feasibility and scalability, several recent works have proposed compositional CBFs. They can be constructed through temporal logic <cit.>, or piecewise CBFs <cit.>. Learning-based approaches have been introduced for constructing CBFs from sensory data with linear functions <cit.>, support vector machines <cit.>, and neural networks <cit.>. The work in <cit.> shows the benefits of jointly learning CBFs as safety certificates and the control policies. The work in <cit.> uses neural network CBFs to achieve safe decentralized control in multi-agent systems, assuming known nonlinear dynamics. The work in <cit.> generalizes CBFs to new configurations of static obstacles, while we consider generalization from sparse to dense environments of dynamic obstacles. We focus on learning sequentially decomposable value landscapes, instead of reactive control policies, for dynamic obstacles without known dynamics, such that safe control action can be efficiently performed online at scale. § PRELIMINARIES We consider ego-robots with underlying dynamics ẋ(t)=f(x(t), u(t)) where x(t) takes values in an n-dimensional state space X⊆ℝ^n, u(t)∈ U⊆ℝ^m is the control vector, and f:X× U →ℝ^n is a Lipschitz-continuous vector field. We allow f to be generally nonlinear and not control-affine, unlike typically assumed in CBF methods. Safety properties, such as collision avoidance, can be specified by declaring an unsafe region of the state space X_u⊆ X. We say the system is safe if none of its trajectories intersects with X_u. To ensure safety properties of a system, we can construct a forward invariant set for the system that is disjoint from the unsafe set. We say a subset of the state space Inv⊆ X is forward invariant for the agent under control, if for any initial state x(0)∈Inv and any t≥ 0, we have x(t)∈Inv. Namely, any trajectory that starts in the invariant Inv stays in Inv forever. Consequently, a system is safe if we can find a forward invariant set Inv such that Inv∩ X_u=∅. CBFs are scalar functions whose zero-superlevel set is a forward invariant set in the safe region of the space, and whose spatial gradients can be used to enforce this invariance. Consider a dynamical system defined by vector field f:X× U → X where X⊆ℝ^n is the state space and U⊆ℝ^m the control space. Let B: X→ℝ be a continuously differentiable function with zero-superlevel set 𝒞={x∈ X: B(x)≥ 0}. We say B is a control barrier function, and 𝒞 is forward invariant, if for any state x ∈ X: max_u∈ UḂ(x) = max_u∈ U⟨∇ B(x), f(x, u)⟩≥ -α (B(x)) Here Ḃ(x) is the Lie derivative of B. ⟨·,·⟩ denotes inner product. α (·) is an extended class-𝒦_∞ function. We often choose α(B(x))=κ B(x) for some parameter κ∈ℝ^+. In this paper, we consider model-free training in stochastic environments, and do not attempt to globally satisfy the standard CBF conditions (<ref>). 
Instead, we encode the conditions as loss functions, and use the idea of CBFs to reduce collision rate with statistical evaluation of its effectiveness, rather than to prove the complete absence of collision. § COMPOSITIONAL SEQUENTIAL MODELING OF SPATIAL INTERACTION DYNAMICS Our approach builds on a key observation: the collective dynamics of the dynamic obstacles, such as how a group of pedestrians interacts with each other, can be approximately inferred by aggregating the prediction for each individual obstacle based on the sequential patterns in their own trajectories. We now formalize and evaluate this claim first. Suppose the state of each dynamic obstacle can be fully described as a vector in ℝ^q, such that for m such obstacles their joint state is h=(h_1, ..., h_m)∈ℝ^mq, where h_i is the state vector for the i-th obstacle. The spatial interaction dynamics of such k obstacles is the vector field defined over the space of the joint states. To differentiate the dynamics of the obstacles from the dynamics of the ego-robot, we write it as a discretized mapping over consecutive states as G:ℝ^mq→ℝ^mq, h^(t+Δ t )=G(h^(t)) where h^(t) is the joint state of all obstacles at time t and Δ t is a small time step. The difficulty with modeling G through sampling state pairs in the joint state space ℝ^mq is two-fold. First, the sample complexity over the ℝ^mq grows exponentially in the number of obstacles m. Second, the dynamics and distribution learned for any fixed m may not be applicable to a different m' number of obstacles. We assert that for each individual h_i, their dynamics should have certain regularity in the sense that they typically react to similar observations of other agents in the same way, which is identified by the state trajectories of h_i itself. For instance, in Figure <ref>(Left), by observing that the agent on the left (colored in orange) slows down quickly, we can infer that its immediate next state should continue to slow down. We know this without directly observing the agent's state on the right (colored in blue). The same principle applies to this other agent: from its sequence of states, we can infer that it is picking up speed while curving a little bit to avoid another agent. Thus, by only observing the two separate sequences of each agent, we can aggregate the individual predictions, and infer their joint next state, which bypasses the need to learn the collective state transition. Formally: Let {h_i}_i∈ [m] be the state vectors describing m dynamic obstacles, where each h_i∈ℝ^q. Let the collective dynamics of the joint state be defined by G:ℝ^qm→ℝ^qm. We say G is sequentially decomposable in k∈ℤ^+ steps up to ε∈ℝ^+, if there exists Ĝ_i^k:(ℝ^q)^k→ℝ^q for each h_i of the form ĥ_i^(t+Δ t)=Ĝ_i^k(h_i^t,h_i^(t-Δ t),...,h_i^(t-kΔ t)) where t≥ kΔ t, such that (ĥ_1^(t+Δ t),...,ĥ_m^t+Δ t)-G(h_1^(t),...,h_m^(t))_∞≤ε. In words, the approximate prediction of the next state for all obstacles predicted by Ĝ_i^k is within ε-error from the ground truth interaction dynamics G in the max norm. Importantly, Ĝ_i^k only considers the states of an individual h_i as its inputs. While we can not directly prove the sequential decomposability without precise analytic models of the dynamic obstacles, we can empirically evaluate its validity for given systems. For pedestrian dynamics, we simulate the interaction dynamics of the pedestrians using the widely-adopted ORCA model <cit.>. 
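The decomposability property above can be probed empirically with a per-obstacle sequence predictor playing the role of Ĝ_i^k. The sketch below is a generic LSTM regressor in PyTorch, not the exact model variants compared in the next paragraphs (its dimensions and hyperparameters are assumptions made for illustration); it maps the last k states of a single obstacle to a predicted next state, and the max-norm gap over all obstacles gives an empirical estimate of ε in the definition.

import torch
import torch.nn as nn

class SeqPredictor(nn.Module):
    """Predict an obstacle's next state from its own last k states (a stand-in
    for the per-obstacle model; the architecture details are illustrative)."""
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, seq):                 # seq: (batch, k, state_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])        # predicted next state, (batch, state_dim)

def decomposition_error(model, seqs, next_states):
    """Max-norm gap between aggregated per-obstacle predictions and ground truth.
    seqs: (m, k, state_dim) histories of m obstacles; next_states: (m, state_dim)."""
    with torch.no_grad():
        pred = model(seqs)
    return (pred - next_states).abs().max().item()

# Training on recorded trajectories would minimize a plain regression loss:
model = SeqPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seqs = torch.randn(32, 8, 4)                # placeholder batch of length-8 histories
targets = torch.randn(32, 4)                # placeholder next states
loss = nn.functional.mse_loss(model(seqs), targets)
opt.zero_grad()
loss.backward()
opt.step()
print("epsilon estimate on this batch:", decomposition_error(model, seqs, targets))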
We train the sequential models of individual obstacles to perform coordinate-wise safety classification in an environment with 6 pedestrians and test it in higher numbers of obstacles, and compare with baselines as follows. We experiment with several designs of generalizing neural network models from sparse to dense environments. First, we consider the approach of using a permutation-invariant encoder over pedestrians with sequences of all states, so that it can be applied to arbitrary number of pedestrians, but can not handle the inherent distribution drift when the obstacle density changes from training to tests. We call this first design the Collective Sequential Model (CoSM). The second design, called Compositional Sequential Model (CSM) uses a sequential model with individual pedestrian states but does not condition the learning with interaction among pedestrians. This design achieves better prediction and generalization. The third design, named Interaction-based Compositional Sequential Model (ICSM), corresponds to our main approach in SN-CBF, taking into account both the sequential data and the interaction of the nearby agents. Figure <ref> (Middle-Right) demonstrates that the sequential decomposition plus interaction of nearby obstacles produces the best accuracy and generalizability. Note that the SN-CBF model will not directly predict the next states of the obstacles, but will generate value landscapes that aim to capture both state sequence patterns from individual obstacles, and also the implicit interaction patterns of the nearby obstacles exhibited in training data. § TRAINING PROCEDURES FOR SN-CBF MODELS §.§ Model Architecture We design the SN-CBF models to allow an implicit parameter space H⊆ℝ^k× q, where H contains length-k sequences of the obstacle states, relative to the ego-robot, where each relative state h^(t)∈ℝ^q. The SN-CBF model can then be conditioned on such sequential information, and still produce scalar values over the ego-robot state x∈ X⊆ℝ^n. Namely, the models are functions B: X× H→ℝ, with B(x,h) giving a scalar value on the robot state x given the observation h^(t),...,h^(t-kΔ t) of the obstacle's state sequence. SN-CBF models are constructed using the architecture shown in Figure <ref>(a), where we encode h with a standard long short-term memory (LSTM) neural network for handling sequential inputs <cit.>, and the ego-state x is embedded through a multilayer perceptron (MLP). We concatenate the encoded vectors as d, and feed d to another MLP that computes the CBF value B(x, h) ∈ℝ. This architecture is important for the generalizability of the learned model. §.§ Training Procedures We train SN-CBF models in two steps: initial training, and boundary refinement. The first step uses trajectory samples to roughly mark the safe and unsafe regions, and the second step focuses on sampling around the safety boundary from the first step, to refine it and improve its invariance properties. Both steps are important, as shown in Figure <ref>. The first step proposes safety boundaries from demonstrations to reduce the sampling space, and the second one corrects the values of misclassified states around the safety boundary. Both steps are performed in environments with a small number of obstacles, but will be deployed in much denser environments. 1) Initial Training. We first collect a set of random trajectories of the robot interacting with the dynamic obstacles. 
This step can use a nominal simple controller with a high collision rate, such as a simple potential-field controller or an RL-trained reactive control policy. From these trajectories, we collect the initial labeling of safe states and unsafe states between the ego-agent and an obstacle based on whether collision occurs. For each state, we keep track of h∈ H that encodes the sequence of relative states between the robot and one obstacle. Thus we obtain an initial safe set D_s⊆ X× H of collision-free samples, and an initial unsafe set D_u⊆ X× H of samples in collision. These samples are sparse, and the initial training only relies on this weak supervision to approximately separate safe and unsafe regions. Using the safe set D_s and unsafe set D_u of pairs (x,h) collected through the demonstrations, we train the SN-CBF model by minimizing the following loss function, which encodes the standard CBF conditions (Definition <ref>), with an error margin parameterized by γ∈ℝ^+: L_B,D = 1/N_s∑_ (x, h) ∈ D_sϕ_γ(-B(x, h)) + 1/N_u∑_(x, h) ∈ D_uϕ_γ(B(x, h)) + 1/N∑_(x, h) ∈ Dϕ_γ(-Ḃ(x, h)-α(B(x, h))) where ϕ_γ(x) = max(γ+x, 0). The first term enforces that the the value of B(x,h) for any safe (x,h)∈ D_s should be greater than γ, because a positive loss is only incurred when γ-B(x,h)>0. The second term enforces B(x,h) to take sufficiently negative values on unsafe pairs. The third term enforces the Lie derivative condition Ḃ(x,h)≥ -α(B(x,h))+γ, where α is chosen to be a positive constant as an extended class 𝒦_∞ function. Because of the unknown interaction dynamics, the Lie derivative Ḃ can not be analytically computed, but can be approximated by the finite difference between two consecutive pairs, i.e., Ḃ(x,h)=̇(B(x',h') - B(x,h))/Δ t. The margin γ is used to enforce the invariance conditions of CBFs. 2) Boundary Refinement. After the initial training, the SN-CBF models may violate the control barrier conditions in Definition <ref> at many states near the safety boundary (i.e., the zero-levelset of the model). We then refine the model by focusing the training at this boundary between the safe and unsafe regions in the following steps. We first collect from the demonstrations from the previous step, as the initial set D^θ of (x,h) pairs that are close to the safety boundary and currently classified as “safe” by the SN-CBF model obtained from the initial training. We then examine all elements in D^θ. First, if some (x,h) pair is already in collision and thus wrongly classified by the initial model, we remove it from D_s and add it to D_u. Second, we examine the invariance condition on each pair by sampling control actions and take one that maximizes the predicted next state. This operation is an approximation of the max_u∈ U operator in the CBF conditions in Definition <ref>. We then inspect if the next state x' under the best sampled action can be in collision. If so, we add both (x,h) and (x',h') into D_u, where h' is the corresponding new state sequence of the obstacle induced by this control action. After updating the D_s and D_u, we retrain the SN-CBF models, still using (<ref>). We iteratively perform this refinement until convergence. § ONLINE INFERENCE WITH SN-CBF After training the SN-CBF models for individual obstacles, we can apply them to an arbitrary number of obstacles individually, and the aggregate all values as follows: ℬ(x)=∏_i=1^q max(1/bmin(B(x,h_i),b),0) where b∈ℝ^+ is a threshold parameter. This aggregated ℬ(x) value defines the total value landscape for the state x of the ego-robot. 
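A minimal sketch of this aggregation, together with the sampling-based action selection described below, is given here. The SN-CBF model, the ego-robot dynamics model, and the goal are placeholders supplied by the caller; the 2-D control space and the position-based goal distance are assumptions made purely for illustration.

import numpy as np

def aggregate_B(B_values, b=1.0):
    """Aggregate per-obstacle SN-CBF values into one landscape value in [0, 1]:
    prod_i max(min(B_i, b)/b, 0)."""
    clipped = np.maximum(np.minimum(B_values, b) / b, 0.0)
    return float(np.prod(clipped))

def choose_action(x, obstacle_histories, snc_bf, dynamics, goal, n_samples=64, b=1.0):
    """Sample candidate actions, keep those whose predicted next state has a
    nonzero aggregated value, and pick the one closest to the goal.
    snc_bf(x, h) -> scalar and dynamics(x, u) -> next state are placeholders."""
    best_u, best_dist = None, np.inf
    for _ in range(n_samples):
        u = np.random.uniform(-1.0, 1.0, size=2)          # assumed 2-D control space
        x_next = dynamics(x, u)
        B_vals = np.array([snc_bf(x_next, h) for h in obstacle_histories])
        if aggregate_B(B_vals, b) > 0.0:                   # feasible: predicted safe w.r.t. all obstacles
            dist = np.linalg.norm(x_next[:2] - goal)       # assumed: first two entries are position
            if dist < best_dist:
                best_u, best_dist = u, dist
    return best_u                                          # None signals failure: stop the robot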
This aggregation rule ensures that if B(x,h_i)≤ 0 for any obstacle i, then ℬ(x)=0 and the state x is considered unsafe. On the other hand, B(x,h_i) is clipped at b for all i, so obstacles that are far from the ego-robot will not affect ℬ(x). Overall, x is unsafe with respect to any obstacle if and only if ℬ(x)=0, and ℬ(x) is always within [0,1]. Using the aggregated ℬ values, we compute control actions at each state x of the ego-robot. We sample from the control action space U for a fixed number of candidate control actions u_1,...,u_l. We then use the (learned) dynamics model of the ego-robot to predict its next state x'_i=π(x,u_i) for each sampled action u_i, and evaluate the predicted next states x'_i by ℬ(x'_i). Any u_i that corresponds to a nonzero ℬ(x'_i) is considered a feasible action that can avoid collision. We then choose u_i that corresponds to the next state that minimizes the distance between x'_i and the goal. When no feasible action is available, we declare failure, and stop the robot. § EXPERIMENTS We evaluate the proposed SN-CBF methods both in simulation and in hardware experiments. In simulation, we consider a robot navigation around pedestrians environment that can be easily scaled, as well as a highway lane-changing environment. In hardware experiments, we use SN-CBF to control directly an ego-robot car navigating around densely distributed pedestrians. The hardware experiment setting is shown in Figure <ref> as well as the supplementary video. In the simulation environments, the pedestrians are modeled using the optimal reciprocal collision avoidance (ORCA) model <cit.> and the vehicles on highway are modeled with the intelligent driver model (IDM) <cit.>. These underlying models are unknown to the learning agents. We test the methods with different densities of obstacles and different dynamics of the ego-robot, including single and double integrator, the Dubins car model, and the bicycle model. Baselines Methods. We adapt various existing methods into data-driven and sampling-based forms, and maintain their core approaches. We consider the following baselines: - Sampling-based potential field methods (S-PFM): a standard potential field method <cit.> with repulsive fields around each obstacle and attractive field around the goal based on Euclidean distance. In each step we sample actions and evaluate the predicted next states on these actions. - Gradient-based potential field methods (G-PFM): a similar potential field method that uses gradient-based control based on the gradients of the potential fields. Note that it requires full knowledge of the dynamics of the ego-robot. - Sampling-based MPC (S-MPC): a method that learns a neural dynamics model, unrolls the model online to construct a tree of future states, then selects the first action that leads to the best predicted outcome <cit.>. - Black-box multi-agent-CBF (B-MA-CBF): a method for safe multi-agent control that learns decentralized CBFs using known system dynamics <cit.>. The approach uses the max pooling layer design in neural network architecture instead of sequential modeling. - Proximal policy optimization (PPO) and deep Q-Learning (DQN): two deep reinforcement learning methods, we use PPO <cit.> for the continuous action space in the navigation environment, and DQN <cit.> for the discrete actions in the highway lane-changing environment. Simulation Experiment Setup. In all evaluation experiments, we randomly initialize the agent, obstacles, and the goal configurations. 
We label a full trajectory as collision-free only when the agent successfully reaches the goal, with no collision or failure of finding control online at any step. Otherwise we consider the full trajectory as a failure. We define the collision rate to be the ratio of failed trajectories over the total. All experiments use 5 different random seeds. Hardware Experiment Setup. We train SN-CBF models for controlling a car robot to avoid pedestrians in an indoor environment. We first collect data from a small number of pedestrians, and adapt the ORCA model to provide a simulation model of the pedestrians. We perform the training procedures of SN-CBF models in simulation, and then deploy the SN-CBF models in the hardware car robot to infer control actions in real-time. We deploy the car in test environments with 3 times the pedestrian density compared to the data collection phase, as shown in Figure <ref>. We demonstrate the success of the methods in the supplementary video. Overall Performance Compared to Baselines. First, Figure <ref> compares the performance of SN-CBF for reducing collision in the simulation environment of navigation. The training is performed in simulation environments with only 6 obstacles, and the results show how the performance of the learned models scale as the density of obstacles increases up to 100 times of the training environment. The results confirm that sampling-based control outperforms gradient-based control (which assumes additional knowledge of the dynamics), especially when the environment becomes dense. When the ego-robot has simple dynamics that are easy to control, such as in the case of the single-integrator, the sampling-based potential field methods can perform quite well, but the gap with neural CBF becomes much larger in non-holonomic cases such as the Dubins car model. In all environments, SN-CBF reduces the collision rate by more than 50% from the best performing potential field methods. Across all environments, SN-CBF methods are able to maintain collision rate under 10% up to 60x more obstacles, and only reach 15% in the bicycle model case with 600 obstacles. The comparison with B-MA-CBF confirms the importance of the sequential modeling choice. Note that this method is an adaptation of the original MA-CBF <cit.> to the model-free setting, so its generalizability becomes worse than the original training with known dynamics. The main factor for the performance difference is that MA-CBF uses an aggregation model on the spatial patterns of the adjacent obstacles, which enables it to handle a varying number of obstacles but the distribution drift in the spatial interaction patterns restricts generalization of the learned models. Comparison with End-to-End Reinforcement Learning. Figure <ref> shows the comparison with standard RL methods. In the navigation environment, the policy trained with PPO can perform reasonably in the training environment, but almost always fails in denser environments (collision rate reaching 100% quickly). We use a version of SN-CBF methods that uses the control policy learned in PPO to provide the nominal control action for fair comparison, and we see that the collision reduction is still significant. In the lane-changing environment, we discretize the action space so that the comparison can be made with Deep Q-learning. This environment can not be made arbitrarily dense, and we still observe significant collision reduction. Alleviating Narrow Corridors in APF. 
The narrow corridor problem is a well-known issue in potential field methods <cit.>. When the ego-robot enters an area where the adjacent obstacles create repulsive fields that point in conflicting directions, the robot can be misguided into collision or oscillation loops. In Figure <ref> we illustrate this problem, where the collision case follows from gradient-based potential field control. In contrast, SN-CBF methods generate more accurate and dynamics-aware force fields to improve online control. In Figure <ref> we show the level sets of the learned models for both the Dubins car model and the bicycle model. The different dynamics induce very different landscapes. In particular, the SN-CBF model in the bicycle case induces a much wider gap between the level sets, which reflects the need to initiate collision avoidance much farther away from the obstacles. In both cases, the dynamics-aware SN-CBF enables online control that maintains efficient movement to pass the corridor. Comparison with Model-Predictive Approaches. The standard setting of MPC requires the use of analytic dynamics of both the ego-robot and the obstacles, and thus cannot be directly applied to the model-free setting. Instead, we can compare with a sampling-based adaptation of MPC by sampling control actions, forward predicting the future states, and then selecting control actions based on the potential field values of the predicted states. This comparison allows us to understand the effectiveness of the SN-CBF models in capturing the dynamics without multi-step unrolling. In Figure <ref> (Plot 4), we observe the benefits of CBF models in capturing the dynamic nature of the interactions through the barrier landscapes while avoiding expensive online computation. It also allows us to avoid the accumulation of model-prediction errors that are inherent in learned models of dynamics. § CONCLUSION We proposed novel learning-based control methods for scalable dynamic obstacle avoidance through compositional learning of SN-CBF models. We exploit the important observation that the spatial interaction patterns of multiple obstacles can be decomposed and predicted through sequential modeling of individual obstacles. We design SN-CBF models that incorporate sequential modeling of individual obstacles, so that they can be composed in environments with an arbitrary number of obstacles. The online inference composes SN-CBF models of all the dynamic obstacles simultaneously to reduce the “freezing the robot” problem. We evaluated the methods by training in environments with a small number of obstacles, and tested the effectiveness of online composition and control in environments where the obstacle density is up to 100x higher. We have demonstrated the benefits in comparison with potential field methods, reinforcement learning, and sampling-based model-predictive approaches. We believe SN-CBF methods can provide a powerful framework for tackling many challenging problems in robot control in model-free settings. One direction for future work is the analysis of the probabilistic safety properties of the methods under certain assumptions on the environments. Acknowledgement. The work is supported by NSF Career CCF 2047034, NSF AI Institute CCF 2112665, Amazon Research Award, and ONR YIP N00014-22-1-2292.
http://arxiv.org/abs/2307.00593v1
20230702152054
LLM4CBI: Taming LLMs to Generate Effective Test Programs for Compiler Bug Isolation
[ "Haoxin Tu", "Zhide Zhou", "He Jiang", "Imam Nur Bani Yusuf", "Yuxian Li", "Lingxiao Jiang" ]
cs.SE
[ "cs.SE" ]
Compiler bugs pose a significant threat to safety-critical applications, and promptly and effectively isolating these bugs is crucial for assuring the quality of compilers. However, the limited availability of debugging information on reported bugs complicates the compiler bug isolation task. Existing compiler bug isolation approaches typically convert the problem into a test program mutation problem, but they are still limited by ineffective mutation strategies or high human effort requirements. Drawing inspiration from the recent progress of pre-trained Large Language Models (LLMs), such as ChatGPT, in code generation, we propose a new approach named LLM4CBI to tame LLMs to generate effective test programs for compiler bug isolation. However, using LLMs directly for test program mutation may not yield the desired results due to the challenges associated with formulating precise prompts and selecting specialized prompts. To overcome the challenges, three new components are designed in LLM4CBI. (1) LLM4CBI utilizes a program complexity-guided prompt production component, which leverages data and control flow analysis to identify the most valuable variables and locations in programs for mutation. (2) LLM4CBI employs a memorized prompt selection component, which adopts reinforcement learning to select specialized prompts for mutating test programs continuously. (3) A test program validation component is proposed to select specialized feedback prompts to avoid repeating the same mistakes during the mutation process. Compared with the state-of-the-art approaches (DiWi and RecBi) over 120 real bugs from the two most popular C open-source compilers, namely GCC and LLVM, our evaluation demonstrates the advantages of LLM4CBI: It isolates more bugs, ranging from 13.6% to 90.9% in various settings, than the other approaches. Additionally, we demonstrate that LLM4CBI is extensible, allowing for easy integration with other LLMs. Software Debugging, Bug Isolation, Compilers, GCC, LLVM, Reinforcement Learning, Large Language Models (LLMs) LLM4CBI: Taming LLMs to Generate Effective Test Programs for Compiler Bug Isolation Haoxin Tu, Zhide Zhou, He Jiang^*,* He Jiang is the corresponding author. Imam Nur Bani Yusuf, Yuxian Li, Lingxiao Jiang H. Tu is with the School of Software, Dalian University of Technology, Dalian, China. H. Tu is also with the School of Computing and Information Systems, Singapore Management University, Singapore. E-mail: [email protected]. Z. Zhou and H. Jiang are with the School of Software, Dalian University of Technology, Dalian, China, and Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province. H. Jiang is also with DUT Artificial Intelligence, Dalian, China. E-mail: [email protected], [email protected]. I. Yusuf, Y. Li, and L. Jiang are with the School of Computing and Information Systems, Singapore Management University, Singapore. E-mails: [email protected], [email protected], [email protected]. 
August 1, 2023 =========================================================================================================================== § INTRODUCTION Compilers serve a fundamental role in building reliable software systems, and bugs in compilers can have catastrophic consequences <cit.>. To mitigate such threats, a crucial task is to isolate the bugs promptly and effectively. However, isolating compiler bugs poses significant challenges due to the limited debugging information. A prevalent direction for resolving the problem is to transform the bug isolation problem into a test program mutation problem <cit.>. The core idea behind such approaches is first to generate a set of passing test program mutants by mutating the given failing test program and then collect the passing and failing spectrum. 
Finally, combined with notable Spectrum-based Fault Localization (SBFL) techniques, suspicious files are ranked. However, existing program mutation strategies in DiWi <cit.> and RecBi <cit.> exhibit limitations in terms of effectiveness and demand substantial human effort. First, these strategies are limited in generating diverse test programs. DiWi only supports the local mutation, such as changing the type of a variable. Although RecBi supports more mutation strategies, such as structural mutation, i.e., changing the control flow of the program, it can only synthesize statement conditions ingredients without statement body (see more details in Section <ref>). Moreover, the random selection of variables and locations for mutation in both DiWi and RecBi is limited by the diversity of the generated test programs. Second, the mutation process in DiWi and RecBi requires significant human effort. Before mutation, DiWi and RecBi require manual code additions to collect necessary context information, and RecBi necessitates the construction of ingredients from existing test programs. Additionally, these strategies pay insufficient attention to the validity of the generated programs that contain undefined behaviors, thereby reducing their effectiveness of bug isolation. Overall, these limitations in ineffective mutation and the associated human effort emphasize the need for improved program mutation strategies in bug isolation. Inspired by the recent progress of pre-trained Large Language Models (LLMs) (e.g., ChatGPT <cit.>), in code generation, we propose , i.e., pre-trained Large Language Models for Compiler Bug Isolation. Our key insights are that (1) LLMs were trained by ultra-large-scale code, so the test programs produced by LLMs tend to be diverse; (2) LLMs have a good learning & reflection mechanism to help generate better outputs based on the users' feedback following a prompt-response dialog paradigm; (3) prompts used by LLMs can be expressed in a natural language, easier for users to use and reducing the human effort to finish a task. Thus, adapting LLMs to generate effective test programs could be promising. However, directly using LLMs to generate effective test programs for compiler bug isolation presents several difficulties, raising the following two challenges that need to be addressed. Challenge 1: Formulation of Precise Prompts. The quality of prompts plays a crucial role in unleashing the program mutation capabilities of LLMs <cit.>, but using existing natural mutation descriptions as prompts may not be effective. For example, the mutation rule “insert an if statement” is a mutation description used in existing work RecBi <cit.>. Such description lacks precision on which variables to use and where to insert the statement, limiting the possibility of mutating failing test programs (i.e., those that can trigger the bug) into passing ones (i.e., those that can not trigger the bug). Due to the difficulties in selecting precise variables and locations, the existing approaches DiWi <cit.> and RecBi <cit.> randomly select them, which is shown to be ineffective (see more evaluation results in Section <ref>). Therefore, it is necessary to formulate precise mutation prompts to assist LLMs in generating effective test programs. Challenge 2: Selection of Specialized Prompts. When several prompts are collected, selecting the specialized ones for mutating specific failing test programs is important and challenging. The reasons are two-fold. 
First, compiler bugs tend to be different and have different language features. One prompt may be useful for mutating one particular failing test program but may not be helpful for another failing test program. Randomly selecting prompts to mutate test programs may be ineffective (see more evaluation results in Section <ref>). Second, LLMs may make different mistakes when mutating different test programs, and different feedback prompts should also be given to different test programs. Therefore, a new prompt selection strategy is needed to select specialized prompts. To overcome the above challenges, three new components are designed in to tame LLMs for generating effective test programs for compiler bug isolation. First, a precise prompt production component is designed to address the first challenge. A precise prompt pattern that could accurately represent the desired mutation operations is first introduced. Then, program complexity metrics measured data and control flow analysis are utilized to identify the most relevant variables and optimal insertion locations. Second, two new components, i.e., a memorized prompt selection component and a lightweight test program validation component, are proposed for selecting specialized prompts. In the prompt selection component, incorporates memorized search via reinforcement learning to track and accumulate rewards based on the performance of LLMs, which allows to continually select the specialized prompts for mutating specific test programs. In the test programs validation component, leverages a static analysis to detect and filter out potential invalid test programs that may contain undefined behaviors, thus mitigating the risks associated with invalid programs. Empirical evaluations over 120 real bugs from the two most popular C open-source compilers, i.e., GCC and LLVM, are conducted to demonstrate the effectiveness of . First, we have compared with two state-of-the-art approaches (i.e., DiWi <cit.> and RecBi <cit.>) in terms of compiler bug isolation capabilities. The results show can isolate 90.91%/35.14% and 50.00%/13.64% more bugs than DiWi and RecBi within Top-1/Top-5 ranked results, respectively. Second, we have evaluated the effectiveness of three new components of , and the results show that all the components contribute to the effectiveness of . Third, we show that is extensible to integrate with other LLMs. Contributions. We make the following contributions: * To our knowledge, is the first work aiming to leverage the capabilities of LLMs for compiler bug isolation tasks in the field. * Three new components, i.e., precise prompt production, memorized prompt selection, and lightweight test program validation, are proposed to guide LLMs to generate effective test programs for compiler bug isolation. * Empirical evaluations are conducted to demonstrate the effectiveness of . The results show that is effective for compiler bug isolation and is extensible to other LLMs. * [We plan to release the source code of in <https://github.com/haoxintu/LLM4CBI> after the acceptance of this paper.] paves the way for future research in compiler bug isolation, opening exciting opportunities to further explore and leverage the capabilities of LLMs for more efficient and effective bug isolation techniques. Organizations. Section <ref> gives the background and our motivation. Sections <ref> describe the design of . Section <ref> presents the implementation and the evaluation results. 
Sections <ref> and <ref> discuss the limitations of our approach and threats to validity. Section <ref> describes related work, and Section <ref> concludes with future work. § BACKGROUND AND MOTIVATION In this section, we first give the background about test program mutation for compiler bug isolation and Large Language Models (LLMs). Then, we use an example to illustrate the limitations of existing approaches and highlight the advantages of our approach. §.§ Background §.§.§ Test Program Mutation for Compiler Bug Isolation Fig. <ref> exemplifies the prevalent workflow of existing compiler bug isolation approaches <cit.>. Given a failing test program that can trigger the compiler bug, existing approaches first use different mutation strategies to produce a passing test program that does not trigger any bug. Then, both the failing and passing test programs are subjected to compilation, enabling the collection of code coverage information of the compiler source files. All the compiler files that are touched by a failing test program during compilation are considered suspicious. Conversely, the passing test program serves to mitigate suspicions regarding innocent files that may have been implicated. To eventually isolate the buggy files, following the well-established principles of Spectrum-Based Fault Localization (SBFL) <cit.>, they compare the execution traces (or spectra) between failing test programs and passing test programs using a formula such as Ochiai <cit.>. Two recent studies, i.e., DiWi <cit.> and RecBi <cit.>, follow the same strategy, and their goal is to generate a set of programs that have slightly different control- and data-flow information compared with the failing test program to flip the compiler execution results (i.e., from failing to passing). We aim to achieve the same goal by leveraging a new approach based on LLMs in this study. §.§.§ Large Language Models (LLMs) Recently, pre-trained Large Language Models (LLMs) such as ChatGPT <cit.> have become ubiquitous and have exhibited remarkable performance in numerous tasks, such as machine translation <cit.>, text summarization <cit.>, classification <cit.>, and code generation <cit.>. Technically, LLMs can be directly employed to tackle specific downstream tasks by providing the task description to the model, which is known as a prompt, without fine-tuning on specialized datasets. This is achieved through a technique known as prompt engineering <cit.>. Prompt engineering aims to find the prompt that yields the best performance on specific tasks. Prior studies show that prompt engineering can achieve state-of-the-art performance on various downstream tasks <cit.>. Benefiting from the huge potential of LLMs, there are increasing recent works showing that LLMs can be used for solving different software engineering tasks <cit.>. In this study, we aim to unleash the power of LLMs in the field of test program generation. Many existing LLMs adopt the decoder of the Transformer architecture <cit.>. Given a prompt containing the task description, the decoder generates the test program Y as a sequence of tokens, token by token, following Equation <ref>: y_t = arg max_y P(y | p, y_<t), where y_t is the current token to be predicted, y_<t refers to all the previously predicted tokens, and p is the input prompt. 
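As a toy illustration of this greedy decoding rule, the sketch below picks, at each step, the token that maximizes a conditional score; the vocabulary and the scoring function are entirely made up and merely stand in for a real LLM's probabilities.

```python
# Toy greedy decoding: at each step pick the token maximizing a (made-up) score.
VOCAB = ["int", "a", "=", "0", ";", "if", "(", ")", "{", "}"]

def toy_score(token, prompt, generated):
    # Stand-in for an LLM's conditional probability P(y | p, y_<t); deterministic and arbitrary.
    return ((len(prompt) + 3 * len(generated) + len(token)) % 7) / 7.0

def greedy_decode(prompt, max_tokens=6):
    generated = []
    for _ in range(max_tokens):
        # y_t = arg max_y P(y | p, y_<t)
        y_t = max(VOCAB, key=lambda y: toy_score(y, prompt, generated))
        generated.append(y_t)
    return " ".join(generated)

print(greedy_decode("insert an if statement"))
```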
The equation states that the current token y_t to be predicted is determined by selecting the token y that maximizes the conditional probability P(y | p, y_<t), given the prompt p and the previously generated tokens y_<t. Because a generated test program Y depends on the input prompt p and the search space for the input prompt p is huge, finding the best prompt p to generate the effective test program Y can be challenging. In this work, we propose to automatically find the prompt p that can generate more effective test programs Y for the compiler bug isolation task. Two main categories of LLMs are available to the community for code generation tasks: infiling and general <cit.>. Infilling models (e.g., CodeGen <cit.>, Incoder <cit.>, and PolyCoder <cit.>) are used to insert the most natural code based on bi-directional context (e.g., in the middle of a code snippet), while general models (e.g., LLaMA <cit.>, Alpaca <cit.>, ChatGPT <cit.>, Vicuna <cit.>, and GPT4ALL <cit.>) target to generate a complete code snippet given the left context only by a natural language description. In this study, we consider general models mainly due to the fact that general models follow the prompt-response dialog paradigm, which involves minimal human effort and fits our objective. §.§ Motivating Example Fig. <ref> showcases a failing test program that exposes a bug in the LLVM-3.4 compiler at the -O3 optimization level. The program introduces a division by zero in the induction variable elimination optimization pass in LLVM, causing the miscompilation bug. Notably, Fig. <ref> represents a passing test program generated by our proposed solution (), which can not trigger the bug in the LLVM compiler. Next, we show the limitation of existing approaches and the advantages of our approach in generating the passing test program. §.§.§ Limitations in Existing Approaches Existing approaches, such as DiWi <cit.> and RecBi <cit.>, face limitations in generating the above passing test programs due to three primary reasons. First, the mutation strategies employed in DiWi and RecBi exhibit certain limitations. For instance, DiWi only supports local mutation operators that do not alter the control flow of the failing test program. Consequently, it cannot generate crucial elements such as the highlighted gray statement “if (a == 0)” shown in Fig. <ref>. Similarly, RecBi allows the insertion of structural conditions, like “if (a == 0)”, but lacks the capability to generate the corresponding statement bodies, such as “s = v;”. Moreover, both DiWi and RecBi rely on a random selection strategy for determining which variables to use and where to insert them during the mutation process, leaving the transformation outcomes largely dependent on chance. Second, the mutation process involved in existing approaches necessitates substantial human effort. Prior to mutation, additional code must be written to collect essential contextual information (e.g., the names of variables like and their defined types) and extract relevant elements (e.g., statements) from the existing test programs. Manual coding is required to randomly determine suitable locations for inserting the new code snippets into the failing test programs during the mutation. Third, they pay little attention to the validity of generated test program, which can have the side effect of compiler bug isolation. 
In summary, the aforementioned limitations underscore the ineffectiveness and high demand of human efforts of existing approaches in generating high-quality test programs. These challenges highlight the need for a new solution that overcomes these shortcomings and effectively generates passing test programs. §.§.§ Advantages of Our Approach Compared to DiWi <cit.> and RecBi <cit.>, excels in generating effective passing test programs by taming the capabilities in LLMs. First, a precise prompt such as “Please generate a variant program P of the input program F by inserting an if statement and reusing the variables in the list {a, s, v} between lines 12-18” is produced to guide LLM in mutating the given program following certain requirements. Instead of using a vague prompt, considers more detailed information in the program that can increase the likelihood of flipping from failing to passing to construct a precise prompt. In this way, the new test program is generated by via inserting the new if-statement shown in gray in Fig. <ref>. Notably, supports the insertion of bodies (such as “s = v;”) in structural mutation, which increases the diversity of the generated test program. Second, benefiting from the LLMs' prompt-response dialog paradigm, the whole mutation process only involves little human effort compared with previous studies. In addition, detects and filters potential invalid test programs that contain undefined behaviors, further boosting the capabilities of in terms of compiler bug isolation. § THE DESIGN OF Overview. Fig. <ref> illustrates the general workflow of , which address the two main challenges and leverages the capabilities of LLMs to generate effective test programs for compiler bug isolation. first generates precise prompts and collects all the generated prompts (1) in the precise prompt production component. Then, selects a prompt (2) via a memorized prompt selection component and utilizes it as input for the LLMs (3). Next, the produces a new test program (4), which undergoes a test program validation component (5). If the generated program is valid, it is compiled (6), and coverage information of compiler source files is collected. Also, the quality of the generated test program will be measured with similarity and diversity metrics in (7), which are served as the input of memorized prompt selection component to help select better prompts. However, if the program is invalid due to semantic errors, provides the feedback (8) prompts to the LLMs, guiding them not to make the same mistakes again. Ultimately, upon reaching the termination condition (e.g., 1 hour), employs SBFL along with the failing and passing spectra to rank suspicious files (9). Specifically, the precise prompt production component is designed to address the first challenge of the formulation of precise prompts, and two other components, i.e., memorized prompt selection component and lightweight test program validation component, are utilized to tackle the second challenge of the selection of specialized prompts. We provide further details regarding this in the subsequent sections. §.§ Precise Prompt Production This section presents the precise prompt production component, which addresses the first challenge of the formulation of precise prompts. We first outline the design of the prompt production pattern for LLMs and then show how to utilize program complexity metrics, e.g., data-flow and control-flow complexity, to populate the pattern. 
§.§.§ Prompt Pattern for Program Mutation The following is the pattern designed in LLM4CBI for constructing effective prompts for LLMs. Pattern: Please generate a variant program P of the input program F by ⟨mutation rule⟩ and reusing the ⟨variables⟩ between ⟨location⟩. In the pattern, P refers to the newly generated test program, and F refers to the given failing test program. In the rest, ⟨mutation rule⟩ means the actual mutation operation; ⟨variables⟩ and ⟨location⟩ describe the specific requirements when conducting the mutation. We reuse the existing mutation rules (cf. Table <ref>) in the pattern. Since compilers use different strategies to optimize programs that have different data or control flow <cit.>, our intuition is that, instead of randomly mutating programs as in existing approaches, mutating the most complex part of the failing program is more likely to flip the failing result into a passing one. To this end, LLM4CBI measures the most complex part by two means: the variable that holds the most complex data flow and the location that involves the most complex control flow. §.§.§ Data-flow Complexity-guided Variable Selection The data-flow analysis aims to output the most complex variables defined and used in the given failing test program. Because we aim to investigate the complexity of a variable by examining what this variable can affect and to what extent, we deem that the more a variable is defined (or assigned), the more complex the dependence held on the variable, thus contributing to the data-flow complexity. Following the existing definition of data-flow complexity <cit.> and prior work <cit.>, we opt for the variable def-use chain <cit.> to analyze the data-flow complexity of a program. Specifically, we calculate the complexity of a variable (Comp_var) by the following Equation: Comp_var = N_def + N_use, where N_def counts the number of times that a variable is defined, including redefining or assigning. Note that Def keeps track of the changes in a variable, so the data dependency analysis is included. N_use counts the number of times that a variable is used, i.e., the value of the variable is read in some computation with no modification to the variable's value. Under the above Equation, the data-flow analysis upon the failing test program in Fig. <ref> will produce a variable list {a, s, v}, which represents the Top-3 most complex variables used in the program. §.§.§ Control-flow Complexity-guided Location Selection The aim of the control-flow analysis is to output the location of the most complex statement in the program. We leverage the control-flow graph of the input failing program, where each node represents a statement and each edge indicates the execution flow. When the Control-Flow Graph (CFG) is available, we calculate the complexity of each statement using the following Equation based on cyclomatic complexity <cit.>: Comp_control = N_edge - N_node + 2, where N_edge and N_node represent the number of edges and nodes in the CFG, respectively. Since the cyclomatic complexity is not designed to measure complexity at the statement level, we count the complexity of each statement by obtaining the complexity values during the cyclomatic complexity calculation. Note that we intentionally ignore the statements that include the test oracle (i.e., the oracle function calls). 
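To illustrate how these two scores can be computed, here is a small self-contained sketch that operates on pre-extracted def/use counts and CFG sizes (in the actual toolchain these come from srcSlice and OClint); the class and function names, and the numbers in the example, are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class VariableInfo:
    name: str
    n_def: int   # times the variable is defined or assigned
    n_use: int   # times its value is read

def variable_complexity(v):
    # Comp_var = N_def + N_use
    return v.n_def + v.n_use

def top_k_variables(variables, k=3):
    return [v.name for v in sorted(variables, key=variable_complexity, reverse=True)[:k]]

@dataclass
class StatementCFG:
    lines: tuple     # (start_line, end_line) of the statement or block
    n_edges: int
    n_nodes: int
    has_oracle: bool = False

def control_complexity(s):
    # Comp_control = N_edge - N_node + 2 (cyclomatic complexity)
    return s.n_edges - s.n_nodes + 2

def most_complex_location(statements):
    candidates = [s for s in statements if not s.has_oracle]  # skip oracle statements
    return max(candidates, key=control_complexity).lines

# Example with invented counts, loosely modeled on the running example:
variables = [VariableInfo("a", 4, 5), VariableInfo("s", 2, 3), VariableInfo("v", 1, 3), VariableInfo("t", 1, 1)]
statements = [StatementCFG((12, 18), n_edges=9, n_nodes=8), StatementCFG((20, 21), 1, 2, has_oracle=True)]
print(top_k_variables(variables))          # -> ['a', 's', 'v']
print(most_complex_location(statements))   # -> (12, 18)
```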
The reason for excluding oracle statements is that changing the code block that contains the test oracle is more likely to break the oracle, meaning that a fake passing test program would probably be generated <cit.>. Taking the code example shown in Fig. <ref> again, the control-flow analysis designed in LLM4CBI will indicate that the most complex control flow lies in the for loop between Lines 12 - 18. In this way, after getting the variable list and the desired location to be inserted, LLM4CBI generates certain prompts based on the designed pattern. For example, one of the generated prompts is: “Please generate a variant program P of the input program F by inserting an if statement and reusing the variables in the list {a, s, v} between lines 12-18”. As aforementioned in Section <ref>, not every prompt contributes equally to a specific failing test program. Furthermore, LLMs may make different mistakes when mutating a program: e.g., LLMs may incur a syntax error in one failing test program but a semantic error in another program. Giving all the error information as feedback prompts is not necessary. Hence, a specialized prompt selection strategy is supported and developed in LLM4CBI. §.§ Memorized Prompt Selection This subsection and the next subsection present the memorized prompt selection and lightweight test program validation components to address the second challenge of selecting specialized prompts. In this subsection, we first give the background of reinforcement learning and then detail the memorized prompt selection. §.§.§ Reinforcement Learning Reinforcement learning (a kind of memorized search) is a field of study that aims to teach an agent how to take actions within an environment in order to maximize its cumulative reward over the long term <cit.>. Key roles in reinforcement learning include (1) Agent: the learner and decision-maker; (2) Environment: everything outside the agent that the agent interacts with; (3) Action: the behavior that the agent can take; (4) State: the information that the agent obtains from the environment; (5) Reward: feedback from the environment about an action. Reinforcement learning can generally be classified into value-based algorithms (e.g., the Deep Q Learning algorithm <cit.>) and policy-based algorithms (e.g., the Policy Gradients algorithm <cit.>). With the advancement of reinforcement learning, a class of algorithms known as Actor-Critic (AC) algorithms has been proposed <cit.>, combining elements of value-based and policy-based strategies. In this study, we employ the Advantage Actor-Critic (i.e., A2C <cit.>) framework to address the challenge of compiler bug isolation, as it is both effective (in comparison to AC <cit.>) and suitable for single-thread and multi-thread systems (compared to Asynchronous Advantage Actor-Critic, i.e., A3C <cit.>). Specifically, LLM4CBI adopts the A2C framework <cit.> to learn the effects of prompts, enabling the generation of effective passing test programs for a specific compiler bug. We opt for the A2C framework mainly because it has demonstrated practical effectiveness, efficiency, and stability with low variance <cit.>, making it a suitable choice. §.§.§ Reinforcement Learning for Prompt Selection The effectiveness of randomly applying mutations upon a given failing test program can be limited <cit.>. 
Consequently, we do not randomly select prompts for LLMs to generate program variants; we need to efficiently generate more effective passing test programs within a given time tailored to a specific compiler bug. To accomplish this goal, we employ reinforcement learning in , where the quality of the generated passing test programs serves as the reward metric of a prompt. Fig. <ref> provides an overview of the reinforcement learning-based strategy for selecting prompts in . Following the A2C framework, first initializes two neural networks: the Actor Neural Network (ANN) and the Critic Neural Network (CNN) in the agent. The ANN predicts the probability distribution of actions based on historical knowledge, enabling the subsequent selection of an optimal action by . CNN predicts the potential reward that can be accumulated from the current state to a future state after executing the selected action, incorporating future knowledge. chooses an action a_t to select a prompt (randomly chosen for the first time) and measure the quality (Q_t) of the newly generated test program. To facilitate learning, employs an advantage loss function (A_loss) based on the predicted potential reward (PR) and the actual reward (R) obtained from the selected prompt. Finally, updates the status of all the ANN, CNN, and states to the agent. repeatedly selects a prompt to generate test programs until the termination condition (e.g., 1-hour limit is reached or 10 program variants are generated) is reached. Consistent with existing A2C-based approaches <cit.>, both ANN and CNN in are with a single hidden layer to ensure lightweight and fast convergence. The following provides detailed explanations of the most important parts, including the actual reward and advantage loss function calculations. Measuring Actual Reward. An essential factor contributing to the effectiveness of the A2C-based approach lies in determining the actual reward subsequent to applying a prompt. Drawing the inspiration from previous research <cit.>, a collection of proficient passing test programs needs to meet both similarity and diversity criteria. The similarity entails that each passing test program should exhibit a comparable compiler execution trace to that of the given failing test program. Consequently, following the principles of SBFL, the suspicion associated with a greater number of buggy-free files can be diminished. The diversity requires that different passing test programs possess distinct compiler execution traces from one another to reduce suspicion regarding various buggy-free files. By doing so, aggregating a set of passing test programs that have undergone mutation facilitates the effective isolation of genuinely faulty files by circumventing bias. Both similarity and diversity rely on the distance metric, which is defined as: dist(a,b) = 1 - Cov_a ∩ Cov_b/Cov_a ∪ Cov_b where the dist(a,b) represents the coverage distance between two test programs a and b, which is determined using the Jaccard Distance measurement <cit.>. Cov_a and Cov_b represent the sets of statements covered in compilers by test programs a and b, respectively. Denoting the set of generated passing test programs as p = {p_1, p_2, ..., p_n} and the failing test program as f, we formalize the metrics of similarity and diversity achieved by the set of passing test programs, represented as Equation <ref> and Equation <ref>, respectively. 
sim = ∑_i = 1^N (1 - dist(p_i, f))/N div = ∑_i = 1^N - 1∑_j = i+1^Ndist(p_i, p_j)/N(N-1)/2 where N is the number of passing test programs. At time step t, once a passing test program is generated, evaluates the effectiveness of the current set of passing test programs by linearly combining the attained measures of similarity and diversity in Equation <ref>. Q_t = n (α× div_t + (1 - α)× sim_t) where the coefficient α represents the weighting factor for the linear combination of diversity and similarity in Equation <ref>. Additionally, following the existing study <cit.>, Equation <ref> also incorporates another coefficient n, which corresponds to the size of the set of passing test programs. Subsequently, determines whether to accept the generated passing test program based on whether it can enhance the overall quality of the set of passing programs compared to the previous time step denoted as t-1. The Equation <ref> outlines the computation of the enhanced quality relative to the previous time step. Q_t = Q_t - Q_t-1 Nevertheless, in each state, only one prompt is chosen to generate a passing test program, and the performance of a prompt can vary significantly depending on different bugs. To balance the influence of the diverse performance of prompts, incorporates both the current time step improvement and the historically accumulated improvement attributed to the current prompt. This combined value serves as the actual reward obtained at the current time step, which is defined as follows: R_t = ∑_i = 1^t Q_i/T(m_j) Here, T(m_j) represents the count of times the m_j (a prompt) has been chosen to mutate the failing test program. Q_i is defined as zero (Q_i = 0) if the selected prompt at the i^th time step is not m_j; if the selected prompt is m_j, Q_i is computed using Equation <ref>. Whether selecting a new prompt m_j or previously selected prompt m_j depends on the decision made by ANN in the agent. Calculating Advantage Loss. While the actual reward (R_t) is obtained at the current time step t, also utilizes CNN to predict the potential reward (PR_t+i). To effectively consider future factors, A2C incorporates an advantage loss function. This function is designed to address the issue of high variance in the two neural networks and prevent convergence towards local optima <cit.>. The advantage loss function is expressed as follows in Equation <ref>: A_loss(t) = ∑_i=t^t+u(γ^i-t R_i) + γ^u+1PR_t+u-PR_t the variable u indicates that the CNN considers the future u consecutive states and actions when predicting the potential reward. γ represents the weight assigned to the actual future reward. PR_t+u and PR_t denote the predicted potential rewards at the (t+u)^th and t^th time steps, respectively, as determined by CNN. Notably, repeats this process for u times within a time step to approximate the actual future reward. Using the loss computed by the advantage function in Equation <ref>, proceeds to update the weights of both the ANN and CNN following Equation <ref>. w = w + β∂ (log P_w(a_t|s_t)A_loss(t))/∂ w where s_t represents the current state, while a_t denotes the corresponding action. P_w(a_t|s_t)A_loss(t) represents the probability of performing action a_t at state s_t based on the parameters w in both the ANN and CNN. β represents the learning rate of the weight updates. §.§ Lightweight Test Program Validation Existing studies <cit.> pay little attention to the validity of the mutated test programs. First, the test programs generated by these approaches may contain undefined behaviors. 
As demonstrated in our evaluation results in Section <ref>, such test programs reduce the effectiveness of compiler bug isolation. Second, existing approaches may generate test programs that do not have a test oracle, which can also affect the effectiveness of bug isolation. Third, existing studies are unaware of the errors they made during the mutation process, so they frequently make the same mistakes when mutating test programs. To address the above limitations, designs a semantic validation to filter away semantically invalid test programs and utilizes a test oracle validation to fix the test programs that violate test oracles. Additionally, collects all the validation errors in the test programs generated by LLMs and gives such information as feedback prompts to LLMs to avoid repeating the same mistakes. Next, we detail the validation processes. §.§.§ Program Semantic Validation We chose static analysis checks on the newly generated test program that may contain any undefined behavior, which proved to be lightweight compared with dynamic analysis approaches <cit.>. As shown in step 5 in Fig. <ref>, the newly generated test program is transferred to this component for checking the semantic validity. Based on the analysis capabilities from Frama-C<cit.>, we opt for five different categories of undefined behaviors in : * : invalid memory access, such as out-of-bound read or out-of-bound write. * : invalid RHS (Right-Hand Shift) or LHS (Left-Hand Shift) operand for right or left shift operation. * : accessing out-of-bounds index of an array. * : accessing uninitialized left-value, i.e., use a variable before it was uninitialized. * : a number is divided by zero. If one of the above undefined behaviors is detected, the semantic error information will be updated to LLMs to avoid repeating the same mistakes. For example, if a new test program contains an undefined behavior, an additional prompt, “The above program contains an undefined behavior , please do not generate such test programs again.”, will be fed to LLMs in step 8. §.§.§ Test Oracle Validation Test Oracles. It is also required to check whether a generated test program is passing or failing <cit.>. According to the types of compiler bugs (i.e., crash bugs and wrong-code bugs), following the existing work <cit.>, considers two types of test oracles: (1) Regarding crash bugs (i.e., the compiler crashes when using some compilation options to compile a test program), the test oracle is whether the compiler still crashes when using the same compilation options to compile a generated test program. (2) Regarding wrong-code bugs (i.e., the compiler mis-compiles a test program without any failure messages, causing the test program to have inconsistent execution results under different compilation options), the test oracle is whether a generated test program still produces inconsistent execution results under the compilation options producing the previous inconsistencies. Similar to DiWi <cit.> and RecBi <cit.>, LLMs may not put in the test oracles in a generated test program; so, we apply another heuristic validation to check for missing oracles. E.g., we check if the generated test program contains the same number of or statements as the given failing test program. If a newly generated test program passes the above validations, the buggy compiler is used to compile the generated test program in step 6. 
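A lightweight sketch of this validation step is shown below. The list of undefined-behavior categories mirrors the five checks above, but the helper names, the oracle marker used for counting (printf here), and the second feedback message are assumptions introduced for illustration, since the text leaves the exact oracle statements unspecified and the real semantic check is delegated to an external tool such as Frama-C.

```python
import re

# Hypothetical labels for the five undefined-behavior categories checked externally.
UB_CATEGORIES = ["invalid memory access", "invalid shift", "out-of-bounds index",
                 "uninitialized read", "division by zero"]

def oracle_preserved(failing_src, candidate_src, oracle_marker="printf"):
    """Heuristic oracle check: the mutant should keep the same number of oracle
    statements as the failing program. The marker is an assumed stand-in."""
    count = lambda src: len(re.findall(rf"\b{oracle_marker}\s*\(", src))
    return count(failing_src) == count(candidate_src)

def validate(candidate_src, failing_src, detected_ub):
    """detected_ub: categories reported by an external checker (e.g., Frama-C)."""
    if detected_ub:
        feedback = ("The above program contains an undefined behavior %s, "
                    "please do not generate such test programs again." % detected_ub[0])
        return False, feedback
    if not oracle_preserved(failing_src, candidate_src):
        return False, "The above program drops the test oracle, please keep it."
    return True, None
```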
LLM4CBI collects the semantic error or test oracle information and uses it as feedback prompts to guide LLMs not to generate programs with the same kinds of errors. It is worth noting that since different test programs may contain different kinds of errors, the feedback prompts can be different. §.§ Suspicious Files Ranking LLM4CBI utilizes the concept of SBFL (Spectrum-Based Fault Localization) to identify potentially buggy compiler files by comparing the coverage of failing and passing tests in step 9, as outlined in previous research <cit.>. To be specific, since Ochiai <cit.> performs well, LLM4CBI employs the Ochiai Equation score(s) = ef_s/√((ef_s + nf_s)(ef_s + ep_s)) to calculate the suspicious score of each statement. In the Equation, ef_s and nf_s represent the number of failing tests that execute and do not execute statement s, and ep_s represents the number of passing tests that execute statement s. In LLM4CBI, since there is only one given failing test program, the number of failing tests that execute statement s (ef_s) is fixed at 1. Additionally, LLM4CBI focuses solely on the statements executed by the given failing test program, implying that the number of failing tests that do not execute statement s (nf_s) is 0. As a result, the Ochiai Equation is simplified to: score(s) = 1/√((1 + ep_s)). Once the suspicious score of each statement is obtained, LLM4CBI proceeds to calculate the suspicious score of each compiler file. Similar to previous research <cit.>, LLM4CBI aggregates the suspicious scores of the statements executed by the given failing test program within a compiler file to determine the suspicious score of the file: SCORE(f) = ∑_i = 1^n_f score(s_i)/n_f. The number of statements that the failing test program executes in the compiler file f is denoted as n_f. LLM4CBI utilizes this information to calculate the suspicious score of each compiler file. By arranging the compiler files based on their suspicious scores in descending order, LLM4CBI yields a ranking list. § EVALUATION §.§ Experimental Setup Implementation of LLM4CBI. LLM4CBI was implemented utilizing OClint <cit.> (v22.02), srcSlice <cit.> (v1.0), Gcov <cit.> (v4.8.0), and PyTorch <cit.> (v1.10.1+cu113). OClint is used to calculate the cyclomatic complexity of each statement; srcSlice is used to get the data-flow complexity of the variables defined and used in the program; Gcov is applied for collecting compiler coverage information; PyTorch supports the A2C framework. For A2C, we set the hyperparameters with the default settings in the previous study <cit.>. For the implementation of test program validation, we adopt Frama-C <cit.> (Phosphorus-20170501) to check the semantic validity of the test program, and we write Python scripts (Python 3.8.5) to check if there are any test oracle violations. We adopt GPT-3.5 as the default LLM in LLM4CBI, with the temperature parameter set to 1.0 (see more detailed discussions on temperature settings in Section <ref>). Study Subjects. We use GCC and LLVM as our subjects to assess the effectiveness of LLM4CBI. Both compilers are widely used in the existing literature <cit.> and therefore constitute a comprehensive evaluation. On average, a GCC buggy version contains 1,758 files with 1,447K source lines of code (SLOC), while an LLVM buggy version comprises 3,265 files with 1,723K SLOC. Benchmark. We utilize a benchmark consisting of 120 real compiler bugs, with an equal distribution of 60 GCC and 60 LLVM bugs, which includes all bugs studied in prior works <cit.>. 
Specifically, each bug is accompanied by relevant buggy details, including the faulty compiler version, the failing test program, the buggy compilation options, and the faulty files which serve as the ground truth. Running Platform. We conduct all the experiments on a workstation equipped with a 12-core CPU, Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz, 64G RAM, and Ubuntu 18.04 operating system, without GPU support. Evaluation Metrics. Each approach for isolating compiler bugs generates a list of suspicious compiler files. To evaluate the effectiveness of each approach, we measure the position of each buggy file in the ranking list with the help of the ground truth. In cases where multiple compiler files had the same suspicious scores, we follow the precedent set by prior research <cit.> and assign the worst ranking. Specifically, we compute the following metrics commonly used in compiler bug isolation studies <cit.>. * Top-N. This metric denotes the number of bugs that are effectively isolated and contained within the Top-N position, where N is a member of the set 1, 5, 10, 20 as specified in our study. A higher value of Top-N indicates a better performance of an approach. * Mean First Ranking (MFR). This metric represents the average rank of the first faulty file within the ranking list for each bug. The objective of MFR is to promptly isolate the initial defective element in order to expedite the debugging process. A smaller value is better, as it indicates that developers could localize the corresponding bug as quickly as possible. * Mean Average Ranking (MAR). This metric measures the average of the mean rank of every faulty file within the ranking list for each bug. The MAR metric is intended to isolate all faulty elements accurately. Similar to MFR, the approach with a smaller value of MAR is better. §.§ Research Questions In this study, we aim to answer the following main research questions (RQs): * RQ1: Can outperform state-of-the-art approaches (i.e., DiWi <cit.> and RecBi <cit.>) in terms of the effectiveness and efficiency of compiler bug isolation? * RQ2: Can each main component, i.e., precise prompt production, memorized prompt selection, and lightweight test program validation, contribute to ? * RQ3: Can be easily extended with other LLMs for the compiler bug isolation task? §.§ Answers to RQ1 §.§.§ Experimental Settings We set up the following two settings to investigate RQ1: * Setting-1: terminate with the same running time (i.e., one hour). This is the standard comparison strategy used in existing compiler bug isolation studies <cit.> for evaluating their effectiveness. * Setting-2: terminate when generating the same number (i.e., 10) of passing test programs. This setting aims to demonstrate the efficiency of further. We opt for the number suggested by the existing empirical studies <cit.>. We gave the timeout of 2 hours if an approach can not generate the desired number of test programs of a bug. We add this setting because during the experiments in Setting-1, we find the number of generated passing test programs by comparative approaches is different, and could generate a larger number of passing test programs. Thus, whether the superior performance on benefited from the more test programs or the quality of the newly generated programs is unknown. We conduct Setting-2 to investigate it further. Comparison Strategies. 
Comparison Strategies. For both settings, we repeatedly run all the comparative approaches three times and report the median results of the Top-N, MFR, and MAR metrics to reduce the influence of randomness. Additionally, for Setting-2, we compare the execution time of each approach and the speedups achieved by . §.§.§ Experimental Results Results for Setting-1. The comparison results for Setting-1 are presented in Table <ref>. The first column lists the subject compilers, and the second column shows the different approaches. Columns 3-10 provide the Top-N metrics derived from the median values of the three runs of each approach, including the number of Top-N bugs (i.e., Num. Top-N) and the improvements (i.e., ⇑_Top-N (%)) made by . Columns 11-14 report the MFR (Mean First Ranking) and MAR (Mean Average Ranking) metrics, as well as the improvements achieved by . Notably,  successfully isolates 21, 48, 74, and 98 compiler bugs (out of the 120 compiler bugs in GCC and LLVM) within the Top-1, Top-5, Top-10, and Top-20 files, respectively, i.e., 17.50%, 41.67%, 61.67%, and 81.67% of the benchmark bugs. Further analysis across the subject compilers reveals interesting findings. Despite the larger number of compiler files in LLVM compared to GCC,  exhibits slightly better results on GCC. For instance, it achieves MFR and MAR values of 12.90 and 13.70 for GCC, while the corresponding values for LLVM are 13.73 and 13.90. Compared with DiWi and RecBi, the evaluation reveals that  outperforms both across all metrics and for both GCC and LLVM. Notably, it demonstrates significant improvements of 90.91% and 35.14% over DiWi in terms of Top-1 and Top-5, and it isolates 50.00% and 13.64% more bugs than RecBi in terms of Top-1 and Top-5. The practical significance of the Top-5 metric is highlighted by previous research, which indicates that developers often discontinue the use of automated debugging tools if the faulty elements cannot be localized within the Top-5 positions. Consequently,  shows better practicality than DiWi and RecBi by substantially improving the effectiveness of compiler bug isolation, specifically with respect to the Top-5 metric. Furthermore, in terms of MFR and MAR,  achieves considerable improvements of 46.36% and 44.79% over DiWi and 39.91% and 38.43% over RecBi, respectively. The improvements in MFR and MAR demonstrate that  can isolate more compiler bugs promptly and accurately than DiWi and RecBi. Results for Setting-2. From Table <ref>, over all 120 bugs,  again outperforms the comparative approaches: it isolates more bugs and achieves the lowest MFR and MAR. Regarding efficiency, Fig. <ref> presents the detailed performance results. The x-axis of the box plots in Fig. <ref> shows the different approaches on GCC and LLVM, and the y-axis represents the time spent isolating each bug. From the figures, we observe that  takes less time to isolate most of the bugs. The time taken on LLVM is generally larger than on GCC, which is justifiable since LLVM has more files than GCC, and more files means more time spent on computing coverage. We also calculate the speedups achieved by .
Specifically, we follow the equation below to calculate the speedups: (T_baseline - T_) / T_baseline × 100, where T_baseline represents the average time spent by the baselines (i.e., DiWi and RecBi) to generate 10 passing test programs, and T_ denotes the average time our proposed approach spends generating the same number of passing test programs. Fig. <ref> shows the results:  achieves speedups of 53.19% and 64.54% over DiWi and of 47.31% and 63.19% over RecBi, on GCC and LLVM, respectively. It is interesting to note that the overall results of  are better in Setting-1 than in Setting-2. This is reasonable, as  can generate more passing test programs in one hour, which improves the results. Therefore, we suggest that developers run  for a longer time to obtain better isolation results. Summary for RQ1. The evaluation clearly demonstrates that  significantly outperforms the two state-of-the-art approaches in two different settings: it generates passing test programs for compiler bug isolation more effectively and efficiently than DiWi and RecBi. §.§ Answers to RQ2 §.§.§ Experimental Settings We use the following variants of , aiming to differentiate the effects of its main components: * uses a simple prompt without data-flow and control-flow analysis. That is, it only keeps the mutation rule and does not rely on the variables and location information to fill in the designed pattern. * utilizes a specific prompt by referring to "the most complex data and control flow". That is, it replaces the variables with "the most complex variables" and the location with "in the most complex statements", relying on the program-understanding capabilities of LLMs to fill in the pattern. * randomly selects a prompt without the memorized prompt selection, i.e., it performs a random selection during the prompt selection process. * removes the test program validation component from the prompt selection process in : it does not take test program validity into account for compiler bug isolation. Among these variants, the first two aim to investigate whether the new program complexity-guided prompt production pattern is effective, while the latter two target whether the memorized prompt selection and the test program validation contribute to , respectively. Comparison Strategies. We run the four variants with the same strategy as Setting-1 in RQ1. We then compare the Top-N, MFR, and MAR metrics of each approach to articulate the contribution of these components. §.§.§ Experimental Results The comparison results between  and the variant approaches are presented in Table <ref>, where each column and row has the same meaning as in Table <ref>. Contribution of Prompt Production. As shown in Table <ref>,  is better than both the variant with the simple prompt and the variant with the explicit "most complex data and control flow" prompt in terms of all metrics over all 120 bugs in GCC and LLVM. Notably,  isolates 61.54% and 13.64% more bugs within the Top-1 and Top-5 files and also improves MFR and MAR by 31.17% and 32.34%, respectively. Compared with both variants,  shows better results on the Top-N metrics; in addition, it achieves smaller MFR and MAR values, indicating a better compiler bug isolation capability.
The above results indicate that the program complexity metrics are helpful for producing precise prompts, and that removing the data-flow and control-flow analysis or replacing it with explicit descriptions for the LLMs is not effective. This is reasonable, as LLMs have been shown to be limited in semantic understanding <cit.>. Therefore, with a limited prompt description, LLMs may randomly select interesting variables and locations when mutating the failing test program, making it difficult to flip the execution from failing to passing and thus to generate effective test programs for compiler bug isolation. Contribution of Prompt Selection. We run the variant that randomly selects a prompt against  to investigate the contribution of the memorized prompt selection component. Table <ref> presents the overall results, which indicate that  outperforms this variant as well. Specifically,  demonstrates better performance, with substantial improvements of 50.00%, 2.04%, 10.45%, 18.07%, 22.83%, and 21.41% across the Top-1, Top-5, Top-10, Top-20, MFR, and MAR metrics, respectively. These results underscore the superiority of our memorized prompt selection based on reinforcement learning over the random strategy. Contribution of Test Program Validation. Compared with the approaches shown in rows 4 and 5 of Table <ref>,  outperforms the variant that removes the test program validation component on all metrics for both GCC and LLVM. Over the whole benchmark, it isolates 31.25% and 19.05% more bugs within the Top-1 and Top-5 files, respectively. In terms of MFR and MAR, it also exhibits significant improvements of 22.20% and 20.00%. These results provide compelling evidence that incorporating the test program validation component enhances the effectiveness of compiler bug isolation, confirming its valuable contribution to . To further understand why filtering out semantically invalid test programs is important, we showcase two examples with undefined behaviors in Fig. <ref> and describe their side effects on compiler bug isolation. The first example, shown in Fig. <ref>, includes an undefined behavior that is generated by the mutation rule replacing a constant value (from "1" to "-1") in Line 12. RecBi ranked this bug at 23rd due to the interference of the undefined behavior. The second example exposes an undefined behavior that is produced by the mutation rule changing the modifier of a variable in Line 2. Because the size of the mutated type (2 bytes on a 64-bit system) is shorter than that of the original type (4 bytes on a 64-bit system), there is an out-of-bound read from the variable, which makes RecBi rank this bug at 28th. Benefiting from the test program validation process,  filters out such examples before ranking the suspicious files and finally ranks the LLVM bug and the GCC bug at 9th and 5th, respectively. This is reasonable: as demonstrated in previous studies <cit.>, modern compilers can have unexpected consequences if the program is semantically invalid. In summary, semantically invalid test programs can have side effects on the effectiveness of compiler bug isolation, and filtering them out helps improve the ranking results. Summary for RQ2. All three components, i.e., precise prompt production, memorized prompt selection, and lightweight test program validation, contribute to the effectiveness of .
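Before turning to RQ3, the sketch below illustrates what the lightweight validation step discussed above could look like in practice. It is not the authors' script: the Frama-C command line, the alarm-string matching, and the differential oracle (here for a wrong-code bug, comparing the buggy optimization level against -O0) are our own simplifying assumptions.

import subprocess

FRAMA_C = ["frama-c", "-val"]   # value-analysis invocation; assumed, adjust to the installed version

def semantically_valid(c_file: str) -> bool:
    # Reject mutants for which the analyzer reports alarms (potential undefined behavior,
    # e.g., negative shift amounts or out-of-bound reads as in the two examples above).
    proc = subprocess.run(FRAMA_C + [c_file], capture_output=True, text=True)
    log = (proc.stdout + proc.stderr).lower()
    return proc.returncode == 0 and "alarm" not in log   # heuristic string match

def compile_and_run(compiler: str, opt: str, c_file: str) -> str:
    subprocess.run([compiler, opt, c_file, "-o", "a.out"], check=True)
    return subprocess.run(["./a.out"], capture_output=True, text=True).stdout

def is_passing_mutant(buggy_cc: str, buggy_opt: str, c_file: str) -> bool:
    # Keep a mutant only if it is semantically valid AND no longer triggers the wrong-code
    # symptom, i.e., the buggy optimization level now agrees with the -O0 reference run.
    if not semantically_valid(c_file):
        return False
    return compile_and_run(buggy_cc, buggy_opt, c_file) == compile_and_run(buggy_cc, "-O0", c_file)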
§.§ Answers to RQ3 §.§.§ Experimental Settings Table <ref> presents the LLMs evaluated in this study, with column Sizes reflecting the model sizes in billions of parameters, Release-Date showing when the LLM was released, and Popularity indicating the number of GitHub stars or users counted by 1st June 2023. We evaluate three of the most representative and popular LLMs. We choose these LLMs mainly because (1) they are very popular, with the number of stars soaring to 20k+ within a few months, and (2) they have been shown to be effective in code generation tasks <cit.>. Based on the selected LLMs, we design three new variant approaches by replacing the LLM in  to answer this RQ. Setting Up Different LLMs. For the three LLMs, we download the GGML[<https://github.com/ggerganov/ggml> ] format of these models from the HuggingFace website[<https://huggingface.co/>] and use the Python binding [<https://github.com/abetlen/llama-cpp-python>] of [The goal of  is to run LLMs using 4-bit integer quantization on a MacBook with only a CPU.] to run these models on a machine that only has CPUs. Specifically, we use a web server supported by , which acts as a drop-in replacement for the OpenAI API. This feature allows us to use compatible models with any OpenAI-compatible client (language libraries, services, etc.). Note that supporting any other model of interest in  is easy and requires little human effort: users only need to provide the GGML-format model for the web server, either downloaded directly from the HuggingFace website or produced by following the detailed and actively maintained tutorials on 's homepage. Comparison Strategies. We run these variants within a one-hour limit and then compare the Top-N metrics of each approach. We do not run these models repeatedly, mainly because we aim to demonstrate the extensibility of replacing different LLMs in . §.§.§ Experimental Results Table <ref> shows that all the designed variant approaches contribute to the compiler bug isolation task. The results indicate that different LLMs can have different capabilities for generating test programs for compiler bug isolation, and GPT-3.5 (used as the default model in ) achieves the best performance compared with the other LLMs. Several reasons explain why the other LLMs do not perform as well as the GPT-3.5 model used in : * Suggestion-only responses. Many outputs from those models are suggestions telling users how to change the code rather than the actual code. For example, when isolating LLVM #17388 with the prompt "replacing a binary operator with another valid one on the variables in the list ['a', 'c']", the model responds, "You can replace the binary operator `||` with the logical operator `&&`". Such output does not directly contain a complete test program, although the information is still useful for helping users manually write a good passing test program. * Limited code generation capabilities. Although the other LLMs target performance comparable to GPT-3.5 or GPT-4, they may still be limited by the scale of their training data. For example, Alpaca used the text-davinci-003 model to generate 52K instruction examples as training data, which may still be limited compared with the training data scale of GPT-3.5 (although OpenAI did not disclose the actual number). * Performance issues. Running LLMs typically requires a powerful GPU to obtain good performance. Since our experiments run on CPUs only, without GPUs, the response time of the other LLMs is larger. Based on our observations, obtaining an answer from those LLMs takes approximately 300 seconds, which significantly reduces the effectiveness of those approaches. In contrast, GPT-3.5 is only invoked through its API for code generation, and the response time is less than 10 seconds, which makes  more capable in this setting.
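To illustrate how such locally served models are swapped in through the OpenAI-compatible interface described above, the following minimal sketch posts a chat-completion request to a local llama-cpp-python web server; the endpoint, port, model name, and prompt text are placeholders rather than values taken from the paper.

import requests

API_URL = "http://localhost:8000/v1/chat/completions"   # local OpenAI-compatible server (port assumed)

def mutate_program(mutation_prompt: str, failing_program: str, temperature: float = 1.0) -> str:
    payload = {
        "model": "local-model",   # placeholder name; the local server decides which GGML model it serves
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You mutate C programs for compiler debugging."},
            {"role": "user", "content": mutation_prompt + "\n\n" + failing_program},
        ],
    }
    resp = requests.post(API_URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]   # the mutated test program (or a suggestion)

Because the request format follows the OpenAI chat-completion schema, the same function can target GPT-3.5 by changing only the URL and supplying an API key.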
Although the results from the other LLMs are not as good as those of GPT-3.5, they demonstrate how easily new LLMs can be adopted in . With the rapid development of more powerful LLMs, such as Falcon <cit.>, we believe advanced techniques can mitigate the above limitations and further assist the compiler bug isolation task in the near future. Summary for RQ3.  is extensible, allowing for easy integration with other LLMs. § DISCUSSION In this section, we first discuss the comparison results with GPT-4, then evaluate the impact of different temperature values used in , and finally discuss some limitations of . §.§ Comparison with GPT-4 In this study, we are unable to conduct large-scale experiments using GPT-4 in  due to its higher API cost[<https://openai.com/pricing>] compared to GPT-3.5. However, we did run two cases using a variant of  that incorporates GPT-4. Interestingly, the results show that this variant outperforms the GPT-3.5-based version, with both bugs, previously ranked beyond the Top-20, now ranked within the Top-5. The improved performance can be attributed to two main factors. First, GPT-4 generates fewer test programs with syntax errors, indicating higher-quality output. Second, GPT-4 exhibits a better understanding of the prompt, leading to improved results <cit.>. As LLMs continue to advance rapidly, we anticipate the availability of more capable and user-friendly models that can be integrated into , further enhancing its compiler bug isolation capabilities. §.§ Different Temperature Settings in LLMs The temperature parameter in LLMs, such as GPT-3.5, plays a crucial role in controlling the randomness of the generated output. A lower temperature value makes the output more deterministic, while a higher temperature value increases randomness. In the context of , it is therefore important to investigate the impact of different temperature values on the performance of test program generation for compiler bug isolation. To this end, we conducted experiments using temperature values of 0.4, 0.6, 0.8, 1.0, and 1.2. The bug isolation capabilities of  were compared under these different temperature settings, and the results are shown in Fig. <ref>. The results show that the default temperature value of 1.0 performs best for . This finding aligns with expectations, since OpenAI has tuned this parameter and set it as the default value. §.§ Limitations of  The limitations of  are inherited from SBFL techniques and from LLMs. First, SBFL suffers from tie issues <cit.>; a possible solution is to incorporate commit history information <cit.>. Second, LLMs may generate syntactically invalid programs, which may reduce the effectiveness of . However, based on the experimental results, the number of invalid test programs is acceptable and does not affect the results much. A more effective approach could leverage program repair techniques <cit.> to automatically repair the code generated by LLMs. We leave further investigation of these two directions as future work. § THREATS TO VALIDITY This section discusses the internal, external, and construct threats of .
The internal validity concerns stem from the implementation of and comparative approaches (DiWi <cit.> and RecBi<cit.>). To mitigate this threat, we have adopted the implementation provided by the previous studies <cit.>. As for , we have meticulously developed its implementation by leveraging well-established libraries, as expounded upon in Section <ref>, and have conducted thorough code checking of the code. For the tool used in test program validation, Frama-C <cit.> can not detect all the undefined behaviors in the programs as the identification of undefined behavior in the programs is typically a challenging problem <cit.>. We plan to leverage more advanced techniques to detect potential undefined behaviors in the test programs. The external validity of our study can be influenced by two key factors: the selection of compilers and the presence of bugs. To ensure the reliability of our findings, we have followed the established practices in prior research on compiler bug isolation <cit.> when selecting compilers. Following these approaches, we have employed two widely used open-source C compilers, GCC and LLVM, which are renowned for their popularity and extensive usage within the community. Regarding the bugs used in our study, we chose a comprehensive set of 120 real compiler bugs, encompassing all known bugs from prior investigations in the field of compiler bug isolation. To further bolster the external validity of our research and address potential threats, we are committed to expanding our bug dataset by incorporating additional real compiler bugs. The construct validity threats are subject to potential threats related to randomness, evaluation metrics, and parameter settings during the evaluation process. We address these concerns by (1) repeating experiments three times and calculating median results to account for randomness, thereby reducing the impact of random variations; (2) employing widely-used bug localization metrics to assess the effectiveness of to ensure the reliability and comparability of our evaluation results; (3) providing explicit parameter configurations and thoroughly investigating their impact in Section <ref>. By doing so, we enhance transparency and enable a deeper understanding of the influence of different parameter settings. § RELATED WORK This section surveys the most related works in this study, namely compiler debugging and LLMs for code generation. Compiler Debugging. Our study is primarily related to two recent works: DiWi <cit.> and RecBi <cit.>. Both are general solutions for isolating all kinds of bugs in compilers. These works address the problem of isolating compiler bugs by transforming them into the generation of passing test programs. DiWi and RecBi achieve this by first utilizing mutation operators (DiWi focuses on local mutation operators while RecBi supports structural mutation operators) to generate a set of passing test programs similar to the failing test program, but without triggering the bug. Then, they employ spectrum-based bug localization techniques <cit.> to rank the buggy compiler files by comparing execution traces between the generated passing test programs and the failing test program. LocSeq <cit.> focuses on isolating optimization bugs only in LLVM, while ODFL <cit.> aims to isolate bugs only in GCC. We did not compare with either LocSeq or ODFL mainly because they are not general enough and can only be applicable to specific middle-end compiler optimization bugs. 
In contrast, is capable of isolating all kinds of bugs in compilers, including syntax analyzer, semantic analyzer, and back-end bugs. In addition, Zeller <cit.> introduces a method to facilitate GCC debugging by calculating the cause-effect chain through a comparison of program states between a passing run and a failing run. Holmes and Groce <cit.> propose a method to localize compiler bugs by comparing a set of compiler mutants. Regarding compiler debugging techniques for other programming languages, Ogata et al. <cit.> present an approach for debugging the just-in-time compiler in a Java virtual machine. HeuiChan et al. <cit.> leverage dynamic analysis to locate bugs in JIT compilers. Different from those works, we follow the existing strategy to generate a set of effective test programs for compiler bug isolation. In contrast, we propose a new structural mutation strategy to mutate failing test programs effectively: we support complete control-flow statements mutation that includes both statement conditions and bodies. Besides, only involves little human effort to accomplish the test program mutation process. LLMs for Code Generation. The emergence of LLMs has sparked significant interest in their application to code generation tasks. Ling et al. <cit.> follow an encoder-decoder design and use a sequence-to-sequence LSTM model with attention and a copy mechanism to generate Java and Python codes. Iyer et al. <cit.> use a grammar-aware decoder to generate syntactically valid Java parse trees followed by Java codes using a two-step attention mechanism. CodeBERT <cit.> and GraphCodeBERT <cit.> inherit the design of BERT <cit.> which are encoder-only models to generate codes. Moreover, CodeT5 <cit.> and PLBART <cit.> leverages encoder-decoder architecture for generating codes. GLAsT <cit.> and VGen <cit.> apply decoder-only LLMs for generating Verilog RTL Code. Other recent LLMs, such as ChatGPT <cit.>, Vicuna <cit.>, WizardLM <cit.>, and StableLM <cit.>, all follow decoder-only architecture to generate various codes in Python, Java, C/C++, and SQL. In this study, we opt for the decoder-only models that follow the prompt-response dialog paradigm to generate the whole test program for compiler bug isolation tasks. Different from existing approaches, we design a new pattern for precisely producing prompts in . The program's data and control flow complexity are measured to fill in the designed pattern. Furthermore, a memorized selection and a test program validation strategy are proposed to select the proper prompt for effectively taming LLMs for test program mutation. We also demonstrate can be extensible for adopting different LLMs for compiler bug isolation. § CONCLUSION AND FUTURE WORK This paper presents , a new approach to tame LLMs for generating effective test programs for compiler bug isolation. In , three new components, i.e., precise prompt production, memorized prompt selection, and lightweight test program validation, are designed to tackle the two main challenges of formulating precise prompts and selecting specialized prompts. Empirical evaluation using 120 real-world bugs from GCC and LLVM demonstrates the effectiveness of over state-of-the-art approaches. Notably, achieves significant improvements ranging from 13.6% to 90.9% in various settings than DiWi and RecBi, respectively. Furthermore, we demonstrate that is extensible, highlighting its ease of extension to other LLMs for the compiler bug isolation task. Future Work. 
In addition to the promising results presented in this paper, there are several exciting directions for further enhancing the capabilities of . We are actively exploring the following directions: * Interactive bug isolation. Taking inspiration from interactive fault localization techniques that leverage user feedback <cit.>, we may incorporate interactive elements into . In detail, by treating LLMs (e.g., ChatGPT) as a user, we can enable them to learn continuously from the feedback received during the bug isolation process, thereby improving the isolation capability. * Intelligent bug isolation. To empower LLMs with more detailed information, such as coverage data, we can enable them to make intelligent decisions regarding potentially buggy files. Specifically, we plan to guide LLMs to employ different evaluation strategies (e.g., different formulas used in SBFL) and fine-tune the parameters of LLMs to facilitate compiler bug isolation.
http://arxiv.org/abs/2307.01874v1
20230704183457
Non-relativistic spatiotemporal quantum reference frames
[ "Michael Suleymanov", "Ismael L. Paiva", "Eliahu Cohen" ]
quant-ph
[ "quant-ph" ]
[email protected] Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 5290002, Israel H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 5290002, Israel Quantum reference frames have attracted renewed interest recently, as their exploration is relevant and instructive in many areas of quantum theory. Among the different types, position and time reference frames have captivated special attention. Here, we introduce and analyze a non-relativistic framework in which each system contains an internal clock, in addition to its external (spatial) degree of freedom and, hence, can be used as a spatiotemporal quantum reference frame. Among other applications of this framework, we show that even in simple scenarios with no interactions, the relative uncertainty between clocks affects the relative spatial spread of the systems. Non-relativistic spatiotemporal quantum reference frames Eliahu Cohen ======================================================== § INTRODUCTION In classical mechanics, space and time are absolute entities, existing independently of any physical object. Time serves as a parameter according to which a physical system evolves, and it is the same in every reference frame. Investigation of electrodynamics, followed by the formulation of Maxwell's equations, caused some friction with this notion. Maxwell's equations are invariant not under Galilean transformations but under Lorentz transformations, which do not keep time as a universal parameter. Taking the operational view that time is what is measured by clocks and clocks are also subject to the laws of physics, Einstein resolved the dispute between these two conflicting ideas by showing that mechanical systems should also transform according to Lorentz transformations. As a consequence, time is not the same in every reference frame. In non-relativistic quantum mechanics, time serves as a Newtonian universal evolution parameter, while space coordinates become operators associated with measurable quantities, thus creating tension with the theory of special relativity where time and space are treated on equal footing in Minkowski spacetime. Relativistic versions of quantum mechanics (Klein-Gordon and Dirac), where the number of described particles, and, hence the number of degrees of freedom, is fixed, suffer from various problems and inconsistencies. So far, the most successful and powerful theory that combines both principles of quantum mechanics and special relativity is relativistic quantum field theory (RQFT). In RQFT space and time coordinates are parameters and the fields become operators describing an infinite number of degrees of freedom, while the particles are field excitations. In quantum mechanics, however, there have been discussions about the introduction of a time observable since the early days of the theory, including the known objection by Pauli <cit.>, which can be resolved with the use of positive operator-valued measures (POVMs) <cit.>. The possibility of such an operator was discussed in terms of Heisenberg cut in Ref. <cit.>. Alternative approaches attempt to combine quantum mechanics with special relativity, e.g., the study of relativistic quantum dynamics, introduced by Steuckelberg in 1941 <cit.> and further developed after that <cit.>. 
In these approaches, time becomes an operator associated with a measurable quantity, called coordinate time, just like space coordinates. However, in addition, these approaches introduce an extra invariant and universal time parameter, which is sometimes called historical or universal time and serves as an evolution parameter like the one in classical mechanics and non-relativistic quantum mechanics. Using this framework, several systems were described, e.g., relativistic Coulomb-like and harmonic oscillator potentials <cit.> and relativistic spacetime string <cit.>. In this work, the relevant approach for time is the Page and Wootters framework <cit.>, which introduces relational dynamics from the constraint given by the Wheeler-DeWitt equation with the use of clock systems. This framework has gained considerable attention lately <cit.> in the larger context of quantum reference frames <cit.>. We combine it with the concept of spatial quantum reference frames. Specifically, we consider a system of N particles, each with an internal clock degree of freedom. The remainder of this work is structured as follows. In Section <ref> we briefly review the basics of spatial and temporal reference frames. With this, we introduce our framework in Section <ref>, following up with a study of some consequences implied by it in Section <ref>. Finally, we conclude our work and present some prospective ideas in Section <ref>. § PRELIMINARIES The fundamental concept behind this work is the notion of a frame of reference. In studies related to it, we typically have a number of systems, some of which can be used as a reference frame to describe the physics of the others. Before any further consideration is made, the Hilbert space ℋ^kin, called the kinematic Hilbert space, can be constructed as a tensor product of the Hilbert space associated with each individual system. However, at this stage, a reference frame has not been chosen yet, and the system must satisfy a constraint (or a list thereof). We will consider in this work the case with first-class constraints. Then, not every vector in ℋ^kin will represent a possible physical state <cit.>. This is the case because, in the presence of a constraint Ĉ, we are interested in a subset of vectors |Ψ⟩^phys of the total space satisfying Ĉ|Ψ⟩^phys = 0. The resultant space of states that satisfy the constraint is the physical Hilbert space ℋ^phys. Observe that, given |Ψ⟩^kin∈ℋ^kin, the vector δ(Ĉ) |Ψ⟩^kin, where δ(Ĉ) 1/2π∫ ds e^i sĈ, satisfies the constraint Ĉ and, if no further constraints apply, belongs to ℋ^phys. Within ℋ^phys, there still exists a multiplicity of descriptions of the systems since a frame of reference is not chosen yet. It turns out that this a gauge freedom, and choosing a frame is equivalent to fixing a gauge <cit.>. In this work, we combine spatial and temporal quantum reference frames. To better organize the presentation and make the discussion less abstract, we first briefly review some basics of each of them. §.§ Temporal quantum reference frames: Page-Wootters framework The framework introduced by Page and Wootters was proposed as a solution to a time issue that arises in the canonical quantization of general relativistic systems <cit.>. Specifically, the quantization leads to the constraint Ĥ_T|Ψ⟩^phys=0, known as Wheeler-DeWitt equation <cit.>. This constraint implies that the total system (supposedly the entire universe) is an energy eigenstate and, hence, does not evolve in time. 
A question then surfaces: How to reconcile this with the standard notion of time evolution? Approaching this issue, Page and Wootters showed that the standard dynamics could indeed emerge from this static scenario in a relational way if the entire system was split into two parts: The first, associated with ℋ_C, is a clock and the other, associated with ℋ_R, is the system whose dynamics will be analyzed. Moreover, they assumed that Ĥ_T = ŵ + Ĥ_R, where ŵ is the Hamiltonian of the clock and Ĥ_R is the Hamiltonian of system R. In ℋ_C, time states are defined as |t⟩ e^-iŵ(t-t')|t'⟩ with the condition that ∫ dt |t⟩⟨ t| = 1_C. In this expression, as in this entire work, we use units such that ħ=1. With this, it is possible to define the time operator t̂∫ dt t |t⟩⟨ t|, and it holds that [t̂,ω̂] = i1_C and _t |t⟩ = -i ŵ |t⟩ <cit.>. Observe that t̂ and ω̂ are not necessarily canonical conjugate. This is the case because t̂ needs not be Hermitian. In fact, t̂ is Hermitian only if ⟨ t|t'⟩ = δ(t-t') for every t and t', in which case the clock is referred to as an ideal clock. When this condition is not matched, t̂ is a positive operator-valued measure <cit.>. For simplicity, we assume ideal clocks throughout this work. With this set and defining |ψ(t)⟩√(2π)⟨ t| Ψ⟩^phys, Eq. (<ref>) implies that ⟨ t| Ĥ_T|Ψ⟩^phys = 0, which leads to i_t|ψ(t)⟩= Ĥ_R|ψ(t)⟩, which is the standard Schrödinger equation. The factor √(2π) is typically not introduced in the definition of |ψ(t)⟩. As it will be clear soon, it has been included here for symmetry with the spatial reference frames. The inner product between |Ψ⟩^phys and |Φ⟩^phys is defined as <cit.> (Ψ^phys,Φ^phys)_phys kin⟨Ψ|(Ĥ_T)|Φ⟩^kin = ⟨ψ(t)|ϕ(t)|,⟩ where |ϕ(t)⟩⟨t|Φ|^⟩phys It is noteworthy that, while the original formulation of the framework assumed no interactions with the clock, this framework has been extended to the interacting case <cit.>. §.§ Spatial quantum reference frames Consider a system of 3 particles, each with position x̂_I and momentum p̂_I, where I∈ℑ and ℑ{A,B,C}. The kinematic space associated with the joint system is ℋ^kin = ℋ_A ⊗ℋ_B ⊗ℋ_C. Since the idea is to use any of the particles as a position reference frame, it is, in a sense, natural to impose the momentum constraint <cit.> P̂_T|Ψ⟩^phys = 0, where P̂_T ∑_I∈ℑp̂_I. The inclusion of this constraint implies that the coordinates x̂_I do not have a meaning by themselves. Instead, only their relative values with respect to each other have a meaning. An arbitrary state in ℋ^kin can be written as |Ψ⟩^kin = ∫ dp_A dp_B dp_C Ψ(p_A,p_B,p_C) |p_A, p_B, p_C⟩ = ∫ dPΨ(P) |P⟩, where we have introduced P{p_I; I ∈ℑ} and dP=∏_I∈ℑdp_I. However, because of the constraint in Eq. (<ref>), a physical state is of the form |Ψ⟩^phys =∫ dp_A̅ψ_(A)(p_A̅) |-p_A̅, p_A̅⟩_ABC =∫ dp_B̅ψ_(B)(p_B̅) |-p_B̅, p_B̅⟩_BAC =∫ dp_C̅ψ_(C)(p_C̅) |-p_C̅, p_C̅⟩_CAB, where p_S̅∑_I∈ℑ∖{S} p_I for every S∈ℑ, p_S̅P∖{S}, and ψ_(A)(p_A̅) Ψ(-p_A̅, p_B, p_C), ψ_(B)(p_B̅) Ψ(p_A, -p_B̅, p_C), ψ_(C)(p_C̅) Ψ(p_A, p_B, -p_C̅). Each of the three expressions in Eq. (<ref>) includes a degree of freedom that is now redundant, which is associated with expressing |Ψ⟩^phys from the perspective of different particles. In case one of these perspectives is chosen, say, A's perspective in Eq. (<ref>), this redundancy can be removed with a projection onto |x_A⟩. The resulting reduced state |ψ_x_A⟩√(2π)⟨x_A | Ψ|^⟩phys is |ψ_x_A⟩ = ∫ dp_A̅ e^i p_A̅ x_Aψ_(A)(p_A̅) |p_A̅⟩, which belongs to ℋ_B ⊗ℋ_C, can be constructed. 
It is noteworthy that x_A is a fixed parameter that can be arbitrarily chosen. It is common to fix x_A=0 (see, e.g., Ref. <cit.>). Here, however, we keep an arbitrary x_A for a more symmetrical representation of the expressions emerging from the spatiotemporal frames we study. The inner product between |Ψ⟩^phys and |Φ⟩^phys can be defined as <cit.> (Ψ^phys,Φ^phys)_physkin⟨Ψ|(P̂_T)|Φ⟩^kin This construction is consistent with the usual requirement of having a normalized wave function (in a given frame) and, moreover, is independent of the choice of perspective. Indeed, in momentum representation, the physical inner product can be written as (Ψ^phys,Ψ^phys)_phys = ∫ dp_A̅[ψ_(A)(p_A̅)]^*ψ_(A)(p_A̅) =∫ dp_B̅[ψ_(B)(p_B̅)]^*ψ_(B)(p_B̅) =∫ dp_C̅[ψ_(C)(p_C̅)]^*ψ_(C)(p_C̅). Now, suppose the joint system composed of A, B, and C has a dynamics governed by the Hamiltonian Ĥ_T= ∑_I p̂_I^2/2m_I + ∑_I>J V_IJ(x̂_I - x̂_J). In A's reference frame, it becomes Ĥ_x_A = 1/2∑_I ≠ A(1/m_A + 1/m_I) p̂_I^2 + 1/m_Ap̂_B p̂_C + V(x̂_B,x̂_C; x_A), where V(x̂_B,x̂_C; x_A) V_BA(x̂_B - x_A) + V_CA(x̂_C - x_A) + V_CB(x̂_C - x̂_B). In fact, it can be shown <cit.> that Ĥ_x_A governs the dynamics of |ψ_(A)⟩, i.e., i_t|ψ(t;x_A)⟩ = Ĥ_x_A|ψ(t;x_A)⟩, where |ψ(0;x_A)⟩ = |ψ_x_A⟩. § 1+1 SPATIOTEMPORAL QUANTUM REFERENCE FRAMES We now combine the two frameworks presented in the previous section. More precisely, we consider a composite system of 3 particles, each with degrees of freedom associated with time (i.e., a clock) and space, with their respective Hilbert spaces ℋ_C_I and ℋ_R_I, where I∈ℑ. Each particle I has external degrees of freedom associated with the position operator x̂_I and its canonical conjugate momentum p̂_I. Moreover, the internal clock of each particle I has a Hamiltonian ω̂_I and a time operator t̂_I, built according to Page and Wootters' description. The full Hilbert space under consideration is, then, ℋ^kin≡⊗_I (ℋ_C_I⊗ℋ_R_I), and an arbitrary element |Ψ⟩^kin∈ℋ^kin is |Ψ⟩^kin=∫ dΩ dP |Ω,P⟩ψ^kin(Ω,P) where Ω{ω_I; I∈ℑ}. We assume the entire joint system satisfies the momentum constraint in Eq. (<ref>) and the Hamiltonian constraint in Eq. (<ref>). In the latter, Ĥ_T denotes the total Hamiltonian of the system, assumed to be of the form Ĥ_T ∑_I (ŵ_I + p̂_I^2/2m_I). Hence, [P̂_T,Ĥ_T] vanishes everywhere in ℋ^kin. As a consequence, δ(P̂_T) and δ(Ĥ_T) also commute[This would also hold if the Hamiltonian was of the form Ĥ_T ∑_I (ŵ_I + p̂_I^2/2m_I) + ∑_I>J V_IJ(x̂_I - x̂_J). However, for the argument to be more direct, we consider the noninteracting case.]. Then, unequivocally, we can consider the map from ℋ^kin into ℋ^phys |Ψ⟩^kin↦|Ψ⟩^phys(P̂_T) (Ĥ_T) |Ψ⟩^kin. Then, we can write |Ψ⟩^phys= =∫ dω_A̅ dp_A̅|ω_A̅, p_A̅⟩Ψ^kin(-H_A̅,_B,_C,-p_A̅,p_B,p_C) =∫ dω_B̅ dp_B̅|ω_B̅, p_B̅⟩Ψ^kin(_A,-H_B̅,_C,p_A,-p_B̅,p_C) =∫ dω_C̅ dp_C̅|ω_C̅, p_C̅⟩Ψ^kin(_A,_B,-H_C̅,p_A,p_B,-p_C̅) where ω_I̅Ω∖{ω_I} and H_I̅ is the “scalar version” of the operator Ĥ_I̅ = ∑_J ≠ Iω̂_J + p̂_I̅^2/2m_I + ∑_J Ip̂_J^2/2m_J. As will be seen soon, this is the effective Hamiltonian of the systems from I's perspective. By scalar version, we mean the eigenvalue of the operator associated with the eigenstate |ω_A̅, p_A̅⟩. Observe that Ψ^kin(-H_A̅,_B,_C,-p_A̅,p_B,p_C)= Ψ^kin(_A,-H_B̅,_C,p_A,-p_B̅,p_C)= Ψ^kin(_A,_B,-H_C̅,p_A,p_B,-p_C̅), i.e., they describe the same physical state in ℋ^phys. However, they correspond to different perspectives before the removal of the redundant degrees of freedom. 
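As an illustrative aside not spelled out in the text, the structure of Ĥ_I̅ is easiest to see in a hypothetical two-particle version of the model, obtained by dropping particle C. Conditioning on A, one has p̂_A̅ = p̂_B by definition, so that

\hat{H}_{\bar{A}} \;=\; \hat{\omega}_B + \frac{\hat{p}_{\bar{A}}^{\,2}}{2m_A} + \frac{\hat{p}_B^{\,2}}{2m_B}
\;=\; \hat{\omega}_B + \frac{1}{2}\Big(\frac{1}{m_A}+\frac{1}{m_B}\Big)\hat{p}_B^{\,2}
\;=\; \hat{\omega}_B + \frac{\hat{p}_B^{\,2}}{2\mu_{AB}},
\qquad
\mu_{AB} := \Big(\frac{1}{m_A}+\frac{1}{m_B}\Big)^{-1}.

That is, from A's perspective the remaining particle moves as a free particle with the reduced mass μ_AB, while the additive term ω̂_B simply advances B's internal clock relative to A's; the same reduced mass μ_AB reappears in the Gaussian example of the applications section below.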
With this, we follow the procedure discussed in the previous section and define the state of the system conditioned on the spatiotemporal state of one of the particles, say, A, which will serve as a reference frame as a “ruler” and a “clock.” Such a reduced state is |ψ_A̅(t_A; x_A)⟩ 2πt_A,x_A|Ψ^phys, where ψ_A̅ represents the state of systems B and C from A's perspective. In this definition, while t_A is a quantity to be varied for the study of the time evolution of the systems from the perspective of A, x_A is a fixed parameter used as a reference to give relational meaning to the position of the other particles. Applying the constraint in Eq. (<ref>) and projecting it onto |t_A, x_A⟩, we obtain the Schrödinger equation i_t_A|ψ_A̅(t_A; x_A)⟩ = Ĥ_A̅|ψ_A̅(t_A; x_A)⟩. Observer that, from the initial conditions, we have |ψ_A̅(t_A; x_A)⟩ = e^-i t_A Ĥ_A̅|ψ(0; x_A)⟩ = 2π⟨0_C_A,x_A| e^-it_A Ĥ_A̅|Ψ⟩^phys. Then, from Eq. (<ref>), we obtain |ψ_A̅(t_A; x_A)⟩ = ∫ dω_A̅ dp_A̅|ω_A̅, p_A̅⟩ e^-i (t_AH_A̅ + x_Ap_A̅) Ψ^kin(-H_A̅,ω_B,ω_C, -p_A̅,p_B,p_C). Projecting this expression onto |ω_A̅, x_A̅⟩, we obtain ψ_A̅(t_A, ω_A̅, x_A̅; x_A) = 1/2π∫ dp_A̅ e^i(x_BAp_B+x_CAp_C-t_AH_A̅) Ψ^kin(-H_A̅,ω_A̅, -p_A̅,p_A̅), where ψ_A̅(t_A, ω_A̅, x_A̅; x_A) ⟨ω_A̅, x_A̅|ψ_A̅(t_A;x_A)|$⟩, which implies that Ψ^kin(-H_A̅,ω_A̅, -p_A̅,p_A̅ ) = =1/2π∫ dx_A̅ e^-i(p_Bx_BA+p_Cx_CA) e^it_AH_A̅ψ_A̅(t_A, ω_A̅, x_A̅; x_A) =1/2π∫ dx_A̅ e^-i(p_Bx_BA+p_Cx_CA) ψ_A̅(0, ω_A̅, x_A̅; x_A). Observe that, from the definition in Eqs. (<ref>) and (<ref>), applying the constraints on a kinematical state and reducing the resulting state to a perspective is equivalent to solving the Schrödinger equation in Eq. (<ref>). Repeating the above steps, one may obtain a description with respect to any subsystem. For example, the state from B's perspective is of the form |ψ_B̅(t_B;x_B)⟩ = ∫ dω_B̅ dp_B̅|ω_B̅,p_B̅⟩ e^-i(t_BH_B̅ + x_B p_B̅) Ψ^kin(ω_A,-H_B̅,ω_C,p_A,-p_B̅,p_C) which obeys the following Schrödinger equation i_t_B|ψ_B̅(t_B,x_B)⟩^phys = Ĥ_B̅|ψ_B̅(t_B,x_B)⟩^phys. A few words regarding the inner product are in order. We construct the inner product generalizing the formulation in <cit.>, where the authors showed the equivalence between different approaches of relational quantum dynamics to include both Hamiltonian and momentum constraints, i.e., (Ψ^phys, Φ^phys)_physkinΨδ(Ĥ_T) δ(P̂_T) Φ^kin. Observe that(Ψ^phys, Ψ^phys)_phys = ψ(t_I; x_I) | ψ(t_I; x_I)for everyI∈ℑ, i.e., the normalization of reduced states is independent of perspective. § APPLICATIONS OF THE FRAMEWORK In this section, we explore a few consequences of the formalism introduced in the previous section. To start, we consider an effect that exists independently of the temporal systems: the relativity of entanglement, already familiar from previous studies in the literature on reference frames. After this, we will examine how the temporal parts of the state affect the spatial distribution of the particles which should be less familiar. Assume the initial state of the system in a given frame, sayA, is of the formψ_A̅(t_A=0; x_A) = |ξ_B⟩ ⊗|λ_B⟩ ⊗|ξ_C⟩ ⊗|λ_C⟩, where|ξ_I⟩ ∫dω_Iξ_I(ω_I)|ω_I⟩and|λ_I⟩ ∫dx_I λ_I(x_IA)|x_I⟩forI=B,C. Then, we identify ψ_A̅(t_A=0, ω_A̅, x_A̅; x_A) = Π_I≠ Aξ_I(ω_I) λ_I(x_IA). Together with Eq. (<ref>), this leads to Ψ^kin(-H_A̅,ω_A̅, -p_A̅,p_A̅) = Π_I≠ Aξ_I(ω_I) λ̃_I(p_I), where λ̃_I(p_I) 1/√(2π)∫ dx_I e^-ip_I x_IAλ_I(x_IA) Then, the state of the systems for any instant of timet_AfromA's perspective is obtained by Eq. 
(<ref>) ψ_A̅(t_A, ω_A̅, x_A̅; x_A) =1/2π∫ dp_A̅ e^i(x_BAp_B+x_CAp_C-t_AH_A̅) ξ_B(ω_B)ξ_C(ω_C) λ̃_B(p_B)λ̃_C(p_C). Similarly, using Eq. (<ref>), we conclude that, from a different perspective, say,B's perspective, we have ψ_B̅(t_B, ω_B̅, x_B̅; x_B) = 1/2π∫ dp_B̅ e^i(x_ABp_A+x_CBp_C-t_BH_B̅) ξ_B(-H_B̅) ξ_C(_C) λ̃_B(-p_B̅) λ̃_C(p_C). This expression will be the basis for our analysis in this section. §.§ Frame dependency of entanglement The form of the initial conditions just discussed reveals that, while there was no entanglement between the systems fromA's perspective, particlesAandBmay be entangled fromC's perspective. This can be seen from the factorλ̃_B(-p_B̅)in Eq. (<ref>) since it suggests that the state of systemsAandCwill generally be mixed for an arbitraryλ_B. To give a concrete example, consider the special case in whichξ_Bandξ_Care constant functions,λ_B(x_BA) = e^ik_B x_BA, andλ_C(x_CA) = δ(x_CA-x_0), wherek_Bandx_0are real constants. Then, following the analysis presented at the beginning of this section, it can be checked that ψ_B̅(t_B=0,ω_B̅, x_B̅; x_B) = e^-ik_Bx_ABδ(x_CB - x_AB - x_0). Observe that this function can be equivalently written ase^-ik_B(x_CB - x_0) δ(x_CB - x_AB - x_0). This means thatAandCbehave individually as plane waves. However, when one is measured, the other is localized a distancex_0away from the location of the measured particle. In the above scenario we see that particles which were not entangled in one frame become entangled in the other. This fits the spatial quantum reference frame treatment without using a dynamical time <cit.>. §.§ Clock states and spatial localizability of systems We now turn our attention to the termξ_B(-H_B̅)in Eq. (<ref>). The HamiltonianH_B̅depends on spatial and temporal quantities of systemsAandC. As a result, depending onξ_Bthe various subsystems might be correlated inB's perspective. Another way to put it is by saying that the state of a clock in a given perspective affects even the spatial distribution of particles in the perspective of the system to which that clock belongs. To illustrate this, consider the case in whichλ_B(x_BA) = (2/πΔ^2)^1/4 e^-x_BA^2/Δ^2e^iq_0x_BA,λ_C(x_CA) = e^ik_C x_CA, andξ_I(ω_I) ∝e^-α_I |ω_I|, wherek_C,q_0 ∈ℝ,Δ>0,α_I>0, andI=B,C. According to Eq. (<ref>), the wave function describing particlesBandCfromA's perspective at an arbitrary instant of timet_Ais ψ_A̅(t_A,ω_A̅,x_A̅;x_A) ∝ e^-|ω_B|(α_B - i sgn(ω_B) t_A)e^-|ω_C|(α_C - i sgn(ω_C) t_A) exp{-it_A[(k_C+q_0)^2/2m_A+q_0^2/2m_B + k_C^2/2m_C]} e^ik_C x_CA √(π/Δ^2/4 + i t_A/2μ_AB)exp{-[ x_BA-t_A(k_C+q_0/m_A+q_0/m_B)]^2/4(Δ^2/4 + i t_A/2μ_AB)} e^iq_0x_BA, where μ_AB = (1/m_A + 1/m_B)^-1 is the reduced mass of particles A and B. Using Eq. (<ref>), the state of the system in B's perspective can be obtained. We analyze here the resulting state for ω_A,ω_C>0. A similar analysis can be made for other relevant regions of the spectrum associated with ω_A and ω_C. In the aforementioned domain, the wave function describing particles A and C from B's perspective at an arbitrary instant of time t_B is ψ_B̅(t_B,ω_B̅,x_B̅;x_B) ∝ e^-(it_B + α_B) ω_A e^-(it_B + α_B + α_C) ω_C exp{-(it_B + α_B) [(k_C+q_0)^2/2m_A + q_0^2/2m_B + k_C^2/2m_C]} e^ik_C x_CB √(π/Δ^2/4 + α_B/2μ_AB + it_B/2μ_AB)exp{-[x_AB+(t_B-iα_B) (k_C+q_0/m_A+q_0/m_B)]^2/4(Δ^2/4 + α_B/2μ_AB + it_B/2μ_AB)} e^-i(q_0+k_C)x_AB. 
Observe that even in the limitΔ→0, when, inA's perspective, particleBis localized att_A=0[sinceλ_B(x_BA) →δ(x_B-x_A)], the terms associated with the spatial distribution of particleAfromB's perspective (i.e., the terms in the last line of the above equation) reveal that it has the form of a Gaussian for anyt_B. This is due to the relative uncertainty of clockBinA's perspective. In fact, in the limitα_B→0, where the relative uncertainty between clocksAandBvanishes, it is possible to obtain a localized state for systemAatt_B=0inB's perspective. Similarly, we may choose a different normalized dependence of the clocks inω_I. Consider the caseλ_B(x_BA)=δ(x_BA-x_0),λ_C(x_CA)=e^ik_Cx_CA, andξ_I(ω_I) = rect_W_I(ω_I), wherex_0,k_C, andW_Iare positive real constants,I=B,C, and rect_W_I(z)= 1/W_I, -W_I/2 < z < W_I/2 0, elsewhere. Leta = 1/2μ_AB,b=k_C/m_B, andc = ω_A + ω_C + k_C^2/2μ_CB. If-W_B/2<c-b^2/4a<W_B/2, the initial state of particlesAandCfromB's perspective is ψ_B̅(t_B=0 ,ω_B̅,x_B̅;x_B) = ξ_C(_C) e^ik_C(x_CB+x_0) 2e^-ib/2a(x_AB+x_0) sin[p^eff(x_AB+x_0)]/p^eff(x_AB+x_0), wherep^eff=√(b^2/4a^2-1/a(c-W_B/2)). In the limitW_B∞,p^eff →√(W_B/2a), the result does not depend on anyω_I. Precisely, lim_W_B∞e^-ib/2a(x_AB+x_0)sin[p^eff(x_AB+x_0)]/p^eff(x_AB+x_0)≃(x_AB+x_0). This is the case because, in this limit, the relative uncertainty between the clocks vanishes. Observe that the results discussed here hold even though we do not apply any relativistic correction and, moreover, no interaction between the clocks and the spatial coordinates takes place. In fact, this can be seen as a consequence of the physical space not having the tensor product structure from the kinematic space, and the tensor product notation being used simply as a label for the systems <cit.>. Moreover, because of the relative uncertainty between the clocks, the well-defined momentt_A=0inA's perspective becomes “fuzzy” inB's perspective <cit.>, hence the influence of the relative uncertainty between the clocks in the spatial distribution of systems. § DISCUSSION AND OUTLOOK In this work, we have introduced a simple framework for 1+1 non-relativistic spatiotemporal quantum reference frames, which combines position reference frames with the Page-Wootters approach. In our model, we have assumed each physical system contains, in addition to an external degree of freedom associated with its position, an internal degree of freedom that can be used as a clock. With this, each system can be used as a spatiotemporal reference frame and, moreover, the remaining systems satisfy the Schrödinger equation in the chosen perspective. Below we situate these results in a wider context. Recently, relativistic and non-relativistic spatiotemporal reference frames have been considered in the literature <cit.>. While Ref. <cit.>, like the present work, investigates a non-relativistic framework, the authors consider a scenario with a single clock system to be used as a reference for time. One may think that this should indeed be enough for a non-relativistic treatment, in particular in scenarios without interactions with the clock. However, as we have shown, the clock states in one perspective influence the spatial degrees of freedom of the systems when changing perspectives. For example, when only the spatial frame is considered, if a particleBis localized inA's perspective, then particleAis also localized inB's perspective <cit.>. However, in the spatiotemporal framework investigated here, this is not always the case. 
Depending on the state of clockBinA's perspective, particleAmight have a spatial spread inB's reference frame even ifBis localized inA's perspective. This happens despite the notion of time in every perspective being, in a sense, the same in our study. In fact, it can be observed that the Heisenberg dynamics of the time operatort̂_Jin systemI's perspective is dt̂_J/dt_I = i [Ĥ_I̅, t̂_J] = 1_J for everyJ≠I. There are several directions that can be approached with the framework studied here. For instance, one could consider more general forms of the HamiltonianĤ_Tthat include interactions between the different parts of the system, including the clocks. Even more generally, one can consider Hamiltonian whose interacting terms are such that the Hamiltonian and momentum constraints do not commute everywhere in the kinematic Hilbert space. In this scenario, the physical solutions should live in a subdomain of space where these constraints commute, and second-class constraints should arise. Furthermore, one can extend this discussion to the 3+1 spatiotemporal scenario. In this case, the translational invariance constraint becomes effectively three individual constraints, one for each spatial direction. Additionally, one should also consider rotational invariance <cit.>, which also introduces one constraint per rotational direction. Interestingly, ifP̂_T^r = ∑_K∈ℑ p̂_K^ris the total momentum in the directionr∈ℜ{x,y,z}andL̂_T^u = ∑_J∈ℑ ℓ̂_J^uis the total angular momentum about the axisu∈ℜ, it follows that [L̂_T^u, P̂_T^r] = ∑_J,K∈ℑ [ℓ̂_J^u, p̂_K^r] = ∑_J,K∈ℑ∑_v,s∈ℜϵ_uvs [v̂_J p̂_J^s, p̂_K^r] = -i ∑_J∈ℑ∑_s∈ℜϵ_ursp̂_J^s = -i ∑_s∈ℜϵ_ursP̂_T^s, whereϵ_uvsis the Levi-Civita symbol. Then, although these operators do not commute everywhere in the kinematic Hilbert space, they do commute everywhere in the hypersurface characterized by the translational invariance constraint. As a result, the rotational invariance constraint can be added without bringing further commutation issues. Finally, one could build the relativistic version of the framework we have considered here. In other words, by considering a system in which each subsystem contains external (i.e., spatial) degrees of freedom and an internal degree of freedom that serves as a clock, one should aim at modifying the constraints applied here in order to obtain the dynamics and frame transformations in the relativistic regime. I.L.P. acknowledges support from the ERC Advanced Grant FLQuant. E.C. is supported by the Israeli Innovation Authority under Project 73795, by the Pazy Foundation, by the Israeli Ministry of Science and Technology, and by the Quantum Science and Technology Program of the Israeli Council of Higher Education.
http://arxiv.org/abs/2307.02195v1
20230705104205
Optimum-Preserving QUBO Parameter Compression
[ "Sascha Mücke", "Thore Gerlach", "Nico Piatkowski" ]
quant-ph
[ "quant-ph" ]
QUBO Compression]Optimum-Preserving QUBO Parameter Compression [1]Sascha Mü[email protected] 2]Thore Gerlach 2]Nico Piatkowski *[1]TU Dortmund, AI Group, Dortmund, 44227, Germany [2]Fraunhofer IAIS, Sankt Augustin, 53757, Germany Quadratic unconstrained binary optimization () problems are well-studied, not least because they can be approached using contemporary quantum annealing or classical hardware acceleration. However, due to limited precision and hardware noise, the effective set of feasible parameter values is severely restricted. As a result, otherwise solvable problems become harder or even intractable. In this work, we study the implications of solving problems under limited precision. Specifically, it is shown that the problem's dynamic range has a crucial impact on the problem's robustness against distortions. We show this by formalizing the notion of preserving optima between instances and explore to which extend parameters can be modified without changing the set of minimizing solutions. Based on these insights, we introduce techniques to reduce the dynamic range of a given instance based on theoretical bounds of the minimal energy value. An experimental evaluation on random instances as well as -encoded and problems show that our theoretical findings manifest in practice. Results on quantum annealing hardware show that the performance can be improved drastically when following our methodology. [ [ August 1, 2023 ================== § INTRODUCTION Over the past decades, intensive research has been conducted on the exploitation of quantum effects for problem solving and faster computing <cit.>. In the past years in particular, quantum machine learning (QML) emerged as a crossover field between machine learning (ML) and quantum computing (QC) <cit.>. One of several QC paradigms is adiabatic quantum computing (AQC), which exploits quantum tunneling effects to approximate the ground state of a parametric Hamiltonian, commonly of an Ising model <cit.>: In contemporary hardware implementations of AQC, the Hamiltonian can be written as quadratic function over binary vectors ∈{ 0,1}^n, for which it tries to find the minimum energy eigenstate |^*⟩. This procedure is equivalent to solving the quadratic unconstrained binary optimization () problem, which has been investigated since the 1960s (see <cit.>). Its value lies in its applicability to a wide range of combinatorial optimization problems, from economics <cit.> over satisfiability <cit.>, resource allocation and routing problems <cit.> to machine learning <cit.>—just to name a few. Solving is, in general, -hard <cit.>. Despite the promise of quantum speedup, currently available devices that claim to perform AQC could not be proven to be faster than classical computing resources yet <cit.>. Classical hardware-based solvers have been developed as alternatives to imperfect quantum devices <cit.> to facilitate both, research and practical applications. A problem that hardware solvers have in common is the limited parameter precision: While in theory is defined with parameters in ℝ, digital devices rely on finite number representations that necessarily sacrifice some precision, as B bits can only encode 2^B distinct values. The well-known solution to this problem in the classical realm is floating-point arithmetic, e.g., IEEE 754 <cit.>. A similar problem specific to D-Wave's quantum annealers is “integrated control errors” (ICE) <cit.>, which randomly distort the Hamiltonian, leading to a skewed energy landscape. 
Depending on the particular problem instance, this distortion may change the optimum (see <ref>). Classical solutions like IEEE 754 are not applicable, due to the physical, analogue representation of the Hamiltonian. Finally, similar effects occur also in digital annealing hardware, due to rounding of problem parameters, , when converting between floating point formats. To the best of our knowledge, no research has been conducted on the effect that rounding or otherwise distorting or Ising model parameters has on the energy landscape and the distribution of optima. However, we find that rounding can drastically deteriorate the probability to find the global optimum, depending on the parameter distribution: <Ref> shows that a low number of parameter bits and a high dynamic range (which we explain in <ref>) leads to a higher probability of changing the global optimum. In this work, we study the implications of solving problems with limited parameter precision. More specifically, we analyze the dynamic range, , the number of bits required to encode the parameters faithfully: The smaller the dynamic range, the more robust the instance is against distortion. We formalize the notion of optimum preservation between instances based upon their set of minimizing bit vectors and explore to which extend parameters can be modified without changing said set. Finally, we introduce techniques to reduce the dynamic range of a given instance based on theoretical bounds on the optimal energy value, which we demonstrate experimentally. The results demonstrate that the performance of quantum annealing hardware can be improved drastically when following our methodology. §.§ Notation and Background We define 𝔹{ 0,1} and [n]{ 1,…,n} for all n∈ℕ. The elements of are called bits, and ^n is the set of bit vectors (or binary vectors) of length n for any n∈ℕ. Let 1≤ m≤ n, I⊆ [n], x⃗∈𝔹^n, and z⃗∈𝔹^m. The m-dimensional sub-vector of x⃗ that contains only the variables indexed by I is denoted via x⃗_I(x_i)^⊤_i∈ I. The set of all n-dimensional bit vectors in which the variables indexed by I are fixed to the values in z⃗ is denoted as 𝔹^n_I←z⃗{x⃗':  x⃗'∈𝔹^n, x⃗'_I=z⃗}. As a shorthand in later sections we may write 𝔹^n_ij← 10 instead of 𝔹^n_{ i,j}← (1,0)^⊤. Special binary vectors are 0⃗^n (0,0,…,0) and 1⃗^n (1,1,…,1). The superscript denotes their length, but we omit it when clear from context. Given any function f:A→ B and a set M⊆ A, we define f(M)f(a):  a∈ M as the image set. Rounding a number a∈ℝ to the nearest integer is denoted by a. By convention, a number exactly halfway between two integers (with fractional part 0.5) is rounded up. Additionally, we write A to denote element-wise rounding for any real-valued vector or matrix A. The object of study throughout this paper is the quadratic unconstrained binary optimization () problem, which is defined as follows: Let _n denote the set of all upper triangular matrices in ℝ^n× n. A matrix Q∈_n gives rise to a pseudo-boolean function f_Q:𝔹^n→ℝ defined via f_Q(x⃗)x⃗^⊤Qx⃗=∑_i=1^n∑_j=i^n Q_ijx_ix_j . This function is sometimes called energy. The set of vectors which minimize f_Q is denoted by Q := min_x⃗ f_Q(x⃗). Now, the quadratic unconstrained binary optimization () problem is to determine x⃗^*∈Q. We drop the explicit dependence of on Q whenever it is clear from the context. This problem is known to be -hard <cit.>, and its exact solution requires, in the worst case, an exhaustive search of an exponentially large candidate space. 
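To make the definition above concrete (and to illustrate why exact solution requires an exponential search), the following minimal Python sketch (ours, not part of the paper) enumerates all 2^n binary vectors and returns the set of minimizers; it is only feasible for small n:

from itertools import product
import numpy as np

def qubo_energy(Q: np.ndarray, x) -> float:
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)            # f_Q(x) = x^T Q x

def minimizers(Q: np.ndarray):
    # brute-force argmin over all 2^n binary vectors (exponential in n)
    n = Q.shape[0]
    energies = {x: qubo_energy(Q, x) for x in product((0, 1), repeat=n)}
    best = min(energies.values())
    return {x for x, e in energies.items() if np.isclose(e, best)}

# toy usage with a 2x2 upper triangular instance
Q = np.array([[-1.0, 2.0],
              [ 0.0, -2.0]])
print(minimizers(Q))                   # {(0, 1)}: f_Q(0,1) = -2 is the minimum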
A range of solution techniques has been developed over past decades. Some obtain the exact result <cit.> with worst-case exponential running time. Faster approximate techniques range from linear constraint solvers to heuristics such as simulated annealing <cit.>, tabu search <cit.>, and genetic programming <cit.> – see <cit.> for a comprehensive overview. Perhaps most remarkably, can be mapped to an Ising model <cit.> and solved through quantum annealing, which exploits quantum tunneling effects <cit.>. Let Q∈_n and α>0. The problem is linear w.r.t. Q, , α f_Q(x⃗) = f_αQ(x⃗) and f_Q+Q'(x⃗) = f_Q(x⃗) + f_Q'(x⃗),  ∀x⃗∈^n . The lemma tells us that the optimization problems described by αQ are equivalent for all α>0. As a consequence, the set of minimizing vectors remains unchanged, , Q = αQ, as well as the relative function value differences between all binary vectors. We will expand further on such types of relations between instances in <ref>. § PARAMETER PRECISION Even though the entries of a matrix are real-valued in theory, on any real-world computing device there is a limit to the precision with which numbers can be represented. Typically, binary representations are used, where floating-point numbers or 2-complement integers can be represented with a fixed number of bits. For instance, a register of B bits using 2-complement can represent all integers in { -2^B-1,…,2^B-1-1} (the first bit represents the sign). Values that have a fractional part must be rounded to the nearest integer in order to be represented in 2-complement. This leads to an important practical observation, which follows from <ref>: Any real-world computing device with finite floating-point precision can only solve instances with integer parameters. Any number with finite floating-point precision is, in fact, rational, therefore we can assume Q_ij=p_ij/q_ij for any i≤ j∈ [n] in a given instance Q∈_n. Let mlcm{ q_ij}_i≤ j the least common multiciple of all denominators, then m has integer parameters. In this section we investigate how the precision of the parameter representation, specifically rounding, affects the energy landscape of f_Q, and how we can evaluate the minimum number of bits necessary to represent the parameters of Q faithfully. Further we will define a relation between instances Q and Q' that holds if some or all minimizing vectors are preserved between f_Q and f_Q'. Using this concept, we show that there is a theoretical lower bound on a scaling factor that preserves at least one of the optima after rounding the parameters. §.§ Preserving Optima Every instance has at least one binary vector with minimum energy, which is the global optimum of the optimization problem. As known from Lemma <ref>, scaling a instance with a positive factor α does not change the optima, therefore Q=α Q. Note that Q cannot be the empty set, as in a non-empty finite set of real numbers a smallest element always exists. Having a choice between multiple optima has several benefits. For one, when there is a variety of equally good solutions, we can choose one according to other criteria that are not encoded in the optimization problem (, the solution with fewest 1-bits). Moreover, certain iterative optimization strategies may converge more quickly when there are multiple global optima scattered through the search space, as the distance to the nearest optimum across all binary vectors decreases. 
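The scale-invariance lemma and the integer-parameter corollary above can be checked directly. The following sketch (reusing qubo_energy and brute_force_minima from the previous snippet) builds an instance with rational parameters, rescales it by the least common multiple of the denominators, and verifies that the set of minimizers is unchanged; the concrete fractions are arbitrary.

from fractions import Fraction
from math import lcm
import numpy as np
# assumes qubo_energy and brute_force_minima from the previous sketch

# A small instance with rational parameters (values chosen arbitrarily).
Q_frac = np.array([
    [Fraction(-3, 4), Fraction(2, 5), Fraction(1, 2)],
    [Fraction(0),     Fraction(1, 3), Fraction(-5, 6)],
    [Fraction(0),     Fraction(0),    Fraction(-7, 10)],
], dtype=object)

# Least common multiple of all denominators, as in the corollary above.
m = lcm(*[f.denominator for f in Q_frac.flatten()])
Q_real = np.array([[float(f) for f in row] for row in Q_frac])
Q_int  = np.array([[float(int(f * m)) for f in row] for row in Q_frac])

# Scaling by the positive integer m preserves the set of minimizers (lemma above),
# so the integer-valued instance m*Q is equivalent to the original one.
_, minima_real = brute_force_minima(Q_real)
_, minima_int  = brute_force_minima(Q_int)
assert set(minima_real) == set(minima_int)
print("scale factor m =", m)
print(Q_int)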
However, when we are interested in just any optimal solution, we have to accept that, when modifying the parameters of Q, some optima may get lost. To this end, we define the notion of optimum inclusion on instances: Let Q, Q'∈_n. We say that Q includes the optima of Q', written as Q Q', when the set of optima of Q is a subset of the optima of Q': Q Q'  ⟺  Q⊆ Q' The relation induces a preorder on _n. It is not antisymmetric, as Q Q' and Q' Q does not imply Q= Q', but it is reflexive and transitive. Informally, if Q Q', we know that Q and Q' share at least one global optimum, and that Q has no optima that Q' does not have as well. Therefore, a binary vector that minimizes f_ Q is guaranteed to also minimize f_ Q'. §.§ Dynamic Range To make the degree of precision we need to faithfully represent parameters measurable, we borrow the notion of dynamic range from signal processing. Let X⊂ℝ be a finite set. First, we define the set of absolute differences between all elements as X{x-y: x,y∈ X, x≠ y}, and we write XminX and XmaxX. The dynamic range (DR) of X is defined as (X) log_2(X/X) Its unit is bits. The DR of a matrix Q is defined as the DR of the set of its entries: (Q) ( Q), where   Q { Q_ij: i,j∈[n]} . Note that always 0∈ Q, because Q is triangular, thus Q_ij=0 when i>j. In the following, we will simply write Q, Q and Q instead of Q, Q and Q. A large dynamic range implies that Q needs many bits to represent all parameters faithfully in binary, because its parameters both cover a large value range and require small gradations. Note that scaling has no effect on dynamic range. Consider the following 2× 2 upper triangular matrices: Q = [ -1 14380; -2 ] Q' =[ -1 3; -2 ] Matrix Q defines the problem f_Q()=-x_1+14380x_1x_2-2x_2. The global optimum is x⃗^*=(0,1)^⊤ with value f_Q(x⃗^*)=-2. As we see at a glance, the value Q_12 is positive and very large, acting as a penalty weight between bits x_1 and x_2. However, a much smaller value has the same effect, as we see with Q', which is identical to Q except for Q'_12=3. The global optimum of f_Q' is still x⃗^*=(0,1)^⊤ with f_Q'(x⃗^*)=-2, therefore Q Q'. When we compare the dynamic ranges of Q and Q', we find that (Q) =log_2(14380-(-2)/0-(-1)) (Q') =log_2(3-(-2)/0-(-1)) ≈ 13.812 ≈ 2.322, which is a tremendous reduction: While we need 14 bits to encode the elements of Q, we only need 3 for Q'. This demonstrates that, in principle, it is possible to reduce the dynamic range while preserving the minimizing vectors of the problem. One way to enforce a reduction of dynamic range is scaling and rounding, which sets the smallest meaningful difference between values to 1. It is easy to see that this gives us an upper bound on the DR: Let Q∈_n; w.l.o.g. we assume that Q is scaled such that Q=1. For any α>0 the DR of α Q is bounded above by (α Q) ≤log_2(1+α) . Due to the normalization we have (α Q)=log_2(α/β) with β≤α. By rounding we enforce α Q≥ 1, therefore (α Q) = log_2(ϵ+α/ϵ'+1) (-1≤ϵ<1) ≤log_2(1+α/ϵ'+1) (0≤ϵ') ≤log_2(1+α) . Rounding a instance Q does not, in general, preserve Q, as it skews the values of f_ Q. We can model this perturbation caused by rounding after scaling with some α>0 as f_α() = ^⊤α = ^⊤(α+(,α)) = α^⊤+^⊤(,α) = α f_()+f_(,α)() . Here, (,α)∈(-1/2,1/2]^n× n denotes the matrix of difference between the real values and their nearest integers in Q after scaling with α>0. 
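Before continuing with the rounding-error analysis, note that the dynamic range defined above is easy to compute from the set of distinct parameter values. The following sketch reproduces the roughly 13.81 and 2.32 bits of the 2x2 example; the helper names are ours, and dynamic_range is reused in later snippets.

from itertools import combinations
import numpy as np

def parameter_set(Q):
    """Distinct parameter values of an upper-triangular QUBO matrix.
    0 is always included, since the strictly lower triangle is zero."""
    return sorted(set(np.asarray(Q, dtype=float).flatten()) | {0.0})

def dynamic_range(Q):
    """DR(Q) = log2(largest pairwise difference / smallest pairwise difference), in bits."""
    vals = parameter_set(Q)
    diffs = [abs(a - b) for a, b in combinations(vals, 2)]
    return float(np.log2(max(diffs) / min(diffs)))

Q_ex  = np.array([[-1.0, 14380.0], [0.0, -2.0]])
Q_ex2 = np.array([[-1.0,     3.0], [0.0, -2.0]])
print(round(dynamic_range(Q_ex), 3))   # ~13.812 bits
print(round(dynamic_range(Q_ex2), 3))  # ~2.322 bits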
When we reverse the scaling after rounding, the error on each entry in Q is bounded in (-1/2α,1/2α], and the error on the total function value of any bit vector is exactly ϵ_ Q,α(x⃗) f_(,α)()/α . This matrix sum representation of rounding bridges the gap to the precision of the D-Wave quantum annealers, where the ICEs can be modeled by adding random noise to the parameters of the underlying Ising model. §.§ Optimal Rounding The question remains how to ensure that the rounding error does not lead to false optima. As <ref> shows, the magnitude of the error generally decreases as α increases, because E( Q, α)_ij≤ 0.5, ∀ i,j∈ [n], regardless of Q and α. This allows us to bound the error between f_α Q/α and f_ Q within an interval of ±C/α for some constant C>0. If we choose α such that the rounding errors cannot bridge the gap between the lowest and second to lowest value of f_ Q, we can guarantee that no non-optimal vector's value is rounded down to the optima's value. Let Q∈_n, and let y_1 min f_Q(^n) its lowest and y_2 min(f_Q(^n)\y_1) its second-to lowest value w.r.t. f_ Q, then the spectral gap γ_Q is defined as γ_Q y_2-y_1 . Originally, the spectral gap is defined as the difference between the lowest and second to lowest eigenvalue of a Hamiltonian operator in physics, however the concept can be extended to any optimization problem with a real-valued objective function. In general, computing the spectral gap is at least as difficult as solving the problem itself, i.e., intractable for large n. However, it gives us a theoretical lower bound for α that allows for optimum-preserving rounding: Let Q∈_n and γ_Q its spectral gap. Then ∀α≥α^*:  αQQ where α^*n^2+n/4γ_ Q . Recall <ref>. Each entry of E( Q,α) is bounded in (-1/2,1/2]. It is clear that the worst-case rounding error is proportional to the norm of the binary vector, as f_ Q(x⃗) contains more summands than f_ Q(x⃗') if x⃗_1>x⃗'_1. Accordingly, the overall worst-case rounding error would occur if each entry of E was +1/2 or close to -1/2 respectively, one of the optimal or second to optimal vectors was 1 and the other 0. Assume that 1 is a global minimum and 0 has the second to minimal value, then we find -n^2+n/4=-1/2n(n+1)/2< 1^⊤ E( Q,α) 1≤1/2n(n+1)/2= n^2+n/4 , and f_α Q( 0)=f_ Q( 0)=0. From <ref> we find that γ_αQ=αγ_Q for any α>0. To ensure that the combined rounding error cannot cause f_α Q( 0) -f_α Q( 1)≤ 0, we need 0 !≤ f_α( 0) - f_α( 1) = f_α( 0)-f_α( 1)- 1^⊤ E( Q,α) 1 =αγ_ Q- 1^⊤ E( Q,α) 1 ≤αγ_ Q-n^2+n/4 ⇔ α !≥n^2+n/4γ_ Q . § PARAMETER COMPRESSION As we have seen in <ref>, scaling and rounding reduces the dynamic range. However, in general, reducing the DR leads to coarser energy gradations. E.g., a instance where parameters are encoded with only 2 bits can only have values in -2,-1,0,1, which may be insufficient to accurately preserve the value function and, consequently, the minimizing binary vectors. In this section we develop strategies to balance these competing objectives and reduce the DR while keeping (some of) the optimal vectors intact. To this end, we modify parameter values while trying to stay within bounds which guarantee that a minimal solution stays minimal, and a non-minimal solution does not become minimal. For now, assume a instance Q∈_n has a unique global minimum x⃗^* of value y^* f_Q(x⃗^*). Our two objectives can be formulated as a constrained optimization problem, where we want to find a matrix A∈_n such that _A∈_n (Q+A) s.t. Q+AQ . 
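The theorem above can be illustrated numerically on a small instance: compute the spectral gap by brute force (which is, as noted, intractable in general), form alpha* = (n^2+n)/(4*gamma_Q), and check that scaling with alpha* and rounding keeps a true optimum optimal. The sketch reuses qubo_energy and brute_force_minima; the check tests the guaranteed property that at least one global optimum of Q remains a global optimum of the rounded instance.

import itertools
import numpy as np
# assumes qubo_energy and brute_force_minima from the earlier sketches

def spectral_gap(Q):
    """gamma_Q: difference between the lowest and second-to-lowest energy value."""
    energies = sorted({round(qubo_energy(Q, x), 12)
                       for x in itertools.product((0, 1), repeat=Q.shape[0])})
    return energies[1] - energies[0]

def shares_optimum(Q_mod, Q_ref):
    """True iff some global optimum of Q_ref is also a global optimum of Q_mod."""
    _, minima_mod = brute_force_minima(Q_mod)
    _, minima_ref = brute_force_minima(Q_ref)
    return bool(set(minima_mod) & set(minima_ref))

rng = np.random.default_rng(7)
n = 6
Q = np.triu(rng.uniform(-0.5, 0.5, size=(n, n)))

gamma = spectral_gap(Q)
alpha_star = (n * n + n) / (4.0 * gamma)   # bound from the theorem above
print("gamma_Q =", gamma, " alpha* =", alpha_star)

# Scaling with alpha >= alpha* and rounding keeps an original optimum optimal.
assert shares_optimum(np.round(alpha_star * Q), Q)
# For much smaller alpha there is no such guarantee (the outcome may vary).
print(shares_optimum(np.round(alpha_star / 50.0 * Q), Q))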
To approach this problem, we update the parameters Q_kl↦ Q_kl+ sequentially. Deciding which parameter to update and how to choose the right value for will be the focus of the following subsections. For now, let the indices k,l∈ [n], k≤ l of some parameter within Q be arbitrary but fixed. For conciseness, we write x⃗^*_ab and y_ab instead of x^*_kl← ab and y_kl← ab. §.§ Bounding the Optimal Value Recall our definition of 𝔹^n_I←z⃗ from <ref>. Fixing one or more bits in a binary vector to constants induces subspaces of ^n, one for each possible assignment of variables indexed by I, which is 2^ I in total. Each subspace has its own set of minimizing binary vectors w.r.t. f_Q. Given Q∈_n, indices I⊆ [n] and an assignment z⃗∈^ I, the subspace optima are defined as I←z⃗ Q{x⃗^*:  x⃗^*∈^n_I←z⃗, f_ Q(x⃗^*)≤ f_ Q(x⃗)  ∀x⃗∈^n_I←z⃗} . Note that I←z⃗ Q∩I←z⃗' Q=∅ for any z⃗≠z⃗'. Therefore we can choose an arbitrary (but fixed) element x⃗^*_abkl← ab Q as a representative for all ab∈^2. Their respective values are denoted by y^*_ab f_Q(x⃗^*_ab). Naturally, y^*_ab are just as hard to compute as solving itself. Therefore, we work with upper and lower bounds on the true values, which are much easier to compute. We will use them to determine the update parameter . Upper (lower) bounds for y^*_ab are denoted by ŷ_ab (y̌_ab), such that y̌_ab≤ y^*_ab≤ŷ_ab, ∀ ab∈^2 . Further, let ^- min{0,min{ŷ_00,ŷ_01,ŷ_10}-y̌_11} , ^+ max{0, min{y̌_00,y̌_01,y̌_10}-ŷ_11} , if k≠l. Otherwise, when k=l, let ^-=min{0,ŷ_00 - y̌_11} and ^+ = max{0,y̌_00 - ŷ_11}. Let Q∈_n, and assume we perform an update Q_kl↦ Q_kl+. Given ^- and ^+ as defined in <ref>, then an optimum is preserved as long as ^-≤≤^+ . We focus on the case k≠l, the case k=l is analogous. A global minimum y^* must be equal to exactly one of the four subspaces' optima y^*_00,y^*_01,y^*_10,y^*_11. Notice that changing Q_kl by affects only y^*_11. Assume x⃗^*≠x⃗^*_11, then an optimum is preserved if y^*_11+≥ y^* ⇔ y^*_11+≥min{y^*_00,y^*_01,y^*_10} ⇐ y̌_11+≥min{ŷ_00,ŷ_01,ŷ_10} . Furthermore, can take any positive value, since y^*_11> y^*. Combining this observation with <ref>, we end up with the lower bound ≥min{0,min{ŷ_00,ŷ_01,ŷ_10}-y̌_11}=^- . If x⃗^*= x⃗^*_11, we can similarly deduce an upper bound ≤max{0,min{y̌_00,y̌_01,y̌_10}-ŷ_11}=^+ . Combining <ref> we obtain <ref>. <ref> uses bounds (y̌_ab and ŷ_ab) on the true optima y^*_ab to give us an interval for if we want to preserve an optimum. These bounds can also be used for determining optimality of x⃗^*_ab. The following implications hold: y̌_ab>min( {ŷ_00,ŷ_01,ŷ_10,ŷ_11}∖{ŷ_ab}) ⇒x⃗^*≠x⃗^*_ab ŷ_ab<min( {y̌_00,y̌_01,y̌_10,y̌_11}∖{y̌_ab}) ⇒x⃗^*=x⃗^*_ab We focus on the case k≠l, the case k=l is analogous. Assume that <ref> holds, , y̌_ab>min( {ŷ_00,ŷ_01,ŷ_10,ŷ_11}∖{ŷ_ab}) ⇒ y^*_ab > min( {y^*_00,y^*_01,y^*_10,y^*_11}∖{y^*_ab}) ⇔ y^*_ab > y^*  ⇔ x⃗^*≠x⃗^*_ab . The result in <ref> follows analogously. See <ref> for a visualization. If we find the inequality in <ref> to be true for some two-bit assignment ab, the dimension of the search space can be reduced by fixing x_k=a and x_l=b. Similar reduction techniques can be found in <cit.>. Knowing that x⃗^*≠x⃗^*_11 we can get rid of the upper bound <ref> (cf. proof of <ref>). The questions remains how to obtain lower and upper bounds on y^*_ab. Simple, but weak bounds can be computed in Θ(n^2). Let k,l be two indices with 1≤ k≤ l≤ n and ab∈^2 a variable assignment. An upper bound for y^*_ab is given by y^*_ab≤ f_ Q(0⃗_kl← ab) . 
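For small instances, the subspace optima y*_ab and the admissible interval [epsilon^-, epsilon^+] of the lemma above can be computed exactly by enumeration, with the exact minima serving as both the lower and the upper bounds. The following sketch does this for the off-diagonal case k != l, reusing qubo_energy and brute_force_minima; the 0-based index convention and the function names are ours.

import itertools
import numpy as np
# assumes qubo_energy and brute_force_minima from the earlier sketches

def subspace_minimum(Q, k, l, a, b):
    """Exact minimum of f_Q over all x with x_k = a and x_l = b (brute force)."""
    n = Q.shape[0]
    return min(qubo_energy(Q, bits)
               for bits in itertools.product((0, 1), repeat=n)
               if bits[k] == a and bits[l] == b)

def admissible_interval(Q, k, l):
    """[eps_minus, eps_plus] for an update Q_kl -> Q_kl + eps with k != l,
    using the exact subspace minima as (tightest possible) bounds."""
    y = {(a, b): subspace_minimum(Q, k, l, a, b) for a in (0, 1) for b in (0, 1)}
    other = min(y[0, 0], y[0, 1], y[1, 0])
    return min(0.0, other - y[1, 1]), max(0.0, other - y[1, 1])

rng = np.random.default_rng(3)
n = 5
Q = np.triu(rng.uniform(-0.5, 0.5, size=(n, n)))

eps_minus, eps_plus = admissible_interval(Q, 1, 3)
Q_new = Q.copy()
Q_new[1, 3] += eps_plus                      # maximal admissible increase
_, minima = brute_force_minima(Q)
_, minima_new = brute_force_minima(Q_new)
assert set(minima) & set(minima_new)         # an optimum is preserved (lemma above)
print(eps_minus, eps_plus)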
To obtain better upper bounds, we can invest more computational effort and, for instance, perform a local search in the space ^n_kl← ab, or perform rejection sampling, and record the lowest observed value of f_ Q s.t. x⃗_kl=ab. In order to derive simple lower bounds we can take only the negative entries of Q and compute the lowest possible sum that can be formed from them. Let k,l and ab as before. Define Q^- such that Q^-_ij=min0,Q_ij, , the matrix containing only the negative values of Q. Then a lower bound for y^*_ab is given by y^*_ab≥ f_ Q^-(1⃗_kl← ab) . The lower bound can be improved by exploiting roof duality <cit.> at the cost of higher (yet polynomial) computational complexity. §.§ Reducing the Dynamic Range We have established intervals within which the parameters of a problem can be modified while an optimum is preserved. The question remains how to choose the values in a way that reduces the dynamic range. In this subsection we show multiple approaches to achieve this goal. Let m n^2 be the number of entries of an n by n square matrix. For any Q∈_n there is an ordering π:[m]→ [n]^2 of values in Q such that i≤i+1,  i≡ Q_π(i), ∀ i∈[m] Using this notation, 1=min Q, m=max Q, and further Q=m-1 and ∃ j∈[m-1]:  Q=q_j+1-q_j. Note that about half of all q_i are 0, as Q is upper triangular. Let Q∈_n and π(ℓ)=(k,l) with k≤ l and Q_kl≠ 0. When adding a value ∈ℝ to Q_kl, the dynamic range does not increase, , (Q)≥(Q+e⃗_ke⃗_l^⊤), if the following two conditions hold: * is bounded: 1-ℓ +δ_mℓ(m-1- m)- ℓ_^-≤≤m-ℓ +δ_1ℓ(2- 1)+ ℓ_^+ , * does not decrease the minimal parameter distance: |ℓ+-i| ≥ Q, ∀ i∈ [m]∖{ℓ} ∨ ℓ+ =i, ∃ i∈[m] . Here, δ_· is the Kronecker delta with δ_uv 1 if u=v, else 0, and ℓ is defined as ℓ Q(q_u:  u∈ [m]\ℓ/ Q-1) . We only consider an increase of the parameter Q_kl, >0, since the results for decreasing Q_kl can be deduced analogously. Firstly, consider the parameters ℓ>1. If ≤m - ℓ , then Q is not increased and thus to avoid an increase of the , Q should not be decreased (see <ref>). This can be achieved by maintaining a distance of at least Q to all other parameters, , |ℓ + - i|≥ Q, ∀ i∈ [m]∖{ℓ} , or “landing” on an already existing parameter, , ℓ + =i, ∃ i∈[m] . If the current maximum value is overshot, , >m-ℓ , Q is increased and thus to reduce the , Q has also to be increased. This can only be the case if ℓ is unique and is part of the minimum distance, , Q∖{ℓ}> Q. The change can then be bounded by (Q) ≥(Q+e⃗_ke⃗_l^⊤) ⇔ Q/ Q ≥ Q+ℓ+-m/minq_u:  u∈ [m]\ℓ, Q+ ≥ Q+ℓ+-m/q_u:  u∈ [m]\ℓ ⇔ ≤m-ℓ + Q(q_u:  u∈ [m]\ℓ/ Q-1) . Secondly, consider ℓ=1. If the smallest value 1 is not unique, we can also deduce bounds <ref>. On the other hand, when the smallest value is unique (2-1>0) the increase 1+>m does not necessarily increase Q. If >2-1, 1 + is not the minimum value anymore but 2 is. Thus, the difference 2-1 can be added to the bound in <ref> ≤(m - 1) + (2- 1)=m - 21 + 2 . Additionally, if 1 is part of the unique minimum distance, we can add 2-1 to the bound in <ref> ≤m-1 + 2-1+ Q(q_u:  u∈ [m]\ℓ/ Q-1) . Similar bounds can be obtained for a negative change <0. <ref> give us very loose bounds on the parameter changes. To determine the exact value, we present a few heuristic approaches. §.§.§ A Greedy Strategy The first heuristic is a greedy strategy which we denote by . The parameter Q_kl is increased if ℓ<0 and decreased otherwise, where π(ℓ)=(k,l). For increasing (decreasing) Q_kl we choose ^ maximally (minimally), , ^=^+ (^=^-). 
If the updated parameter lays too close to some other parameter, , |ℓ + ^± - i|≥ Q, ∃ i∈ [m]∖{ℓ} , we would set it equal to the next smaller (larger) parameter, that is, ℓ + ^=i,  j≤ℓ + ^+, ∀ j≤ i  (j≥ℓ + ^-, ∀ j≥ i) . Again, recall that there is always a q_u=0 for some u∈[m], and thus we may set parameters to 0. For certain target platforms, such as quantum annealers, this is particularly beneficial, as setting a parameter to 0 allows to discard the coupling between the qubits indexed by k and l, which saves hardware resources. As an alternative version to <ref>, we choose ^_0 such that ℓ + ^_0=0,  0≤ℓ + ^+,  (0≥ℓ + ^-) . We henceforth call this alternative version _0. §.§.§ Maintaining the Parameter Ordering With the preceding methods, we allowed parameters to cross over each other, changing their ordering. However, another heuristic approach is to maintain the ordering of the elements in Q, which should intuitively help to preserve the optimum. This heuristic is denoted by . For this, define bounds on a certain parameter q_ℓ ℓ+ min{t : t> ℓ,t∈[m]} , ℓ+ max{t : t≤ℓ,t∈[m]∖{ℓ}} , ℓ- min{t : t≥ℓ,t∈[m]∖{ℓ}} , ℓ- max{t : t< ℓ,t∈[m]} . If all entries of are unique, then ℓ+=ℓ- and ℓ+=ℓ-, so only if duplicate values exist, these bounds on ℓ differ. An example clarifying these bounds is given in <ref>. The idea is now that ℓ is changed in such a way that it lies exactly in the middle between ℓ± and ℓ±. For 1<ℓ<m we increase ℓ if ℓ-ℓ-<ℓ+-ℓ and decrease otherwise. The weight ℓ is thus changed by ^=ℓ+-ℓ+/2-min{ℓ+-ℓ,ℓ-ℓ+}, if ℓ-ℓ-<ℓ+-ℓ, ℓ--ℓ-/2+min{ℓ--ℓ,ℓ-ℓ-}, else. For the edge case ℓ=m, ^ is given by ^=m--m+ Q, if q_u:  u∈ [m]\m= Q, q_u:  u∈ [m]\m- Q, else. Similarly, for ℓ=1 ^=1+-1- Q, if q_u:  u∈ [m]\1= Q, Q-q_u:  u∈ [m]\1, else. Having the heuristic change ^· at hand, we can determine the final change as =min{max{^·,^-},^+} , which ensures that ∈[^-,^+], such that the optimum is preserved. Consider the following exemplary matrix with n=3: Q[ -1 0.4 1; 0.4 -0.8; -1.5 ], with (Q)=log_2 (2.5 / 0.2) = 3.64. The ordering 1,…,9 is given by (1,…,9)=(-1.5,-1,-0.8, 0,0,0, 0.4, 0.4, 1) and is visualized in <ref>. The bounds from <ref> for two specific parameters can be found in <ref>. We want to increase the value Q_23≡3=-0.8, because this would decrease Q and thus decrease the dynamic range. We fix k=2, l=3 and find that x⃗^*=(0, 1, 1)^⊤. For maintaining the optimum x⃗^* when changing Q_23, we need to obey <ref>. Computing accurate bounds, , the exact values, we obtain y̌_00=0, y̌_01=-1.5, y̌_10=0.4, ŷ_11=-1.9, and thus ^+=min{0,-1.5,0.4}- 1.9=0.4. In words, we can maximally increase Q_23 by 0.4 to maintain the optimum state, which is depicted in Fig. <ref>. For decreasing the we have a look at the three heuristics , and _0. The values ^=0.3, ^=1.6 and ^_0=0.8 are also depicted in <ref>. We observe that changes Q_23 to lie in the middle between its neighbors, maximally increases Q_23 to maintain the , while _0 sets Q_23 to 0. In <ref> the final changes are shown, using the three heuristics. Following <ref>, the changes are given by 0.3 for and 0.4 for and _0, respectively. Both result in a doubling of Q leading to a new dynamic range decreased by one bit, i.e., 2.64. §.§ Choosing the Next Parameter Since the changes of the parameters are carried out in a successive fashion, it remains to decide which k,l∈ [n], k≤ l to pick next. A very simple approach is to pick a random pair of indices, or iterate over all index pairs in sequence. 
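As a brief aside before turning to the choice of the next parameter: the numbers in the worked example above can be reproduced with the helpers from the earlier sketches (brute_force_minima, dynamic_range, admissible_interval). The update below is the maximal admissible increase of Q_23, i.e., the greedy-type choice.

import numpy as np
# assumes brute_force_minima, dynamic_range and admissible_interval from earlier sketches

Q = np.array([[-1.0, 0.4,  1.0],
              [ 0.0, 0.4, -0.8],
              [ 0.0, 0.0, -1.5]])

print(round(dynamic_range(Q), 2))        # 3.64 bits
print(brute_force_minima(Q))             # optimum (0, 1, 1) with value -1.9

# Admissible interval for changing Q_23 (0-based indices k=1, l=2):
eps_minus, eps_plus = admissible_interval(Q, 1, 2)
print(round(eps_plus, 6))                # 0.4, as in the example

# Apply the maximal admissible increase and re-check DR and optimum.
Q_new = Q.copy()
Q_new[1, 2] += eps_plus
print(round(dynamic_range(Q_new), 2))    # 2.64 bits -- one bit less
_, minima = brute_force_minima(Q)
_, minima_new = brute_force_minima(Q_new)
assert set(minima) & set(minima_new)     # (0, 1, 1) is still a global optimum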
Using this method, many – or, with growing n, most – iterations will not lead to a DR improvement, as only a few distinct parameters directly determine the DR, namely those closest together and furthest apart (cf. <ref>). Conversely, changing such an "inactive" parameter can never lead to a decrease in DR, only to an increase. This realization leads to a better strategy: choosing only among those index pairs whose parameters determine the DR. In our experiments, we compute their respective update values and greedily choose the one that leads to the maximal DR reduction, breaking ties randomly. § EXPERIMENTS In this section we conduct experiments to show the effectiveness of our proposed method. §.§ Results for Random Instances We conduct experiments with the parameter update methods presented in <ref>, namely (i) the greedy strategy, which chooses the parameter update that leads to the greatest DR decrease; (ii) its variant that additionally prefers to set parameters to 0 where possible; and (iii) the order-preserving strategy, which restricts the bounds such that the parameter ordering remains intact. The methods described in this article are implemented as part of our Python package qubolite [<https://github.com/smuecke/qubolite>]. Using 1000 random matrices with entries sampled uniformly in the interval [-0.5, 0.5], we follow the different heuristics from <ref> for 1000 iterations. For preserving the optimum, the upper bounds ŷ_ab are computed using a local search and the lower bounds y̌_ab are computed using the roof-dual algorithm <cit.>. In addition, we compare the two methods described in <ref> for selecting the next parameter, namely choosing the next weight randomly or according to the largest impact on the decrease of the DR. The 95%-confidence intervals are indicated. <ref> depicts the dynamic range ratio between the matrices arising from following the different heuristics and the original matrix, that is, DR(Q+A)/DR(Q). We observe this ratio to be monotonically decreasing, indicating that our proposed heuristics fulfil their purpose of reducing the DR. The achievable reduction shrinks with larger n, i.e., it is harder to reduce the dynamic range for larger n. Generally, choosing the next weight according to its myopic impact on the DR is better than a random choice (except for n=4), but also comes with a higher computational cost. With increasing n, the two basic heuristics drift apart in performance; however, the greedy variant that sets weights to 0 where possible turns out to be the most effective of the three. For further evaluation, we first define a metric for comparing different rankings. Let Q∈_n, and let x⃗^1,x⃗^2,…,x⃗^2^n be the binary vectors of 𝔹^n in lexicographic order. The induced ranking of Q is a permutation π_Q: [2^n]→[2^n] such that f_Q(x⃗^π_Q(i)) ≤ f_Q(x⃗^π_Q(i+1))  ∀ i∈[2^n-1]. Let π,π':[K]→[K] be two permutations for some K>1. The normalized Kendall τ distance between π and π' is given by K_d(π,π') = 1/2 - 1/(K(K-1)) ∑_{i,j∈[K], i<j} sgn( (π(i)-π(j))·(π'(i)-π'(j)) ). Intuitively, this distance measures the proportion of disagreement between rankings over all pairs of indices, i.e., the percentage of pairs (i,j) such that π(i)<π(j) but π'(i)>π'(j), and vice versa. If K_d(π,π')=0, then π and π' are identical, and if K_d(π,π')=1, then π' is the reverse of π. Therefore, the Kendall τ distance gives us a measure of how much the DR reduction "scrambles" the value landscape of f_Q, obtained by computing K_d(π_Q,π_{Q+A}), which is relevant, e.g., for the performance of local search heuristics. <ref> shows the Kendall τ distance between the induced rankings of the instances on 𝔹^n before and after their reduction. 
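The induced ranking and the normalized Kendall τ distance translate directly into code. The sketch below is our own (reusing qubo_energy from the first snippet); it uses the sign-based form of the definition and, purely as an illustration, measures how much rounding to quarter-steps scrambles the energy landscape of a random instance.

import itertools
import numpy as np
# assumes qubo_energy from the first sketch

def induced_ranking(Q):
    """Permutation ordering all 2^n binary vectors by ascending energy."""
    n = Q.shape[0]
    vectors = list(itertools.product((0, 1), repeat=n))
    energies = [qubo_energy(Q, x) for x in vectors]
    return np.argsort(energies, kind="stable")

def kendall_tau_distance(pi, pi_prime):
    """Normalized Kendall tau distance: 0 = identical rankings, 1 = reversed."""
    K = len(pi)
    s = sum(np.sign((pi[i] - pi[j]) * (pi_prime[i] - pi_prime[j]))
            for i, j in itertools.combinations(range(K), 2))
    return float(0.5 - s / (K * (K - 1)))

rng = np.random.default_rng(5)
n = 6
Q = np.triu(rng.uniform(-0.5, 0.5, size=(n, n)))
pi = induced_ranking(Q)
pi_rounded = induced_ranking(np.round(4.0 * Q) / 4.0)   # keep two fractional bits
print(kendall_tau_distance(pi, pi_rounded))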
We observe that and _0 “scramble” the ordering of binary vectors more strongly than . This is exactly what we expected, as 's objective is to preserve the ordering of parameters, which in turn leads to more conservative parameter modifications. This reduces the overall “noise” that is added to the energy landscape. Instead of considering the values themselves, we depict the change of the weight ordering in <ref>. Again, the Kendall τ distance is used and we observe similar effects to <ref>. We can see that maintains the weight ordering. Finally, <ref> displays the unique weight ratio, that is the number of unique weights of the current iteration divided by the number of unique weights of the original . never changes the uniqueness of the weights, while the ratio is monotonically reduced for ^0, since many weights are set to 0. With using , the number of unique weights is first reduced but then starts to increase when the to be changed weights are chosen randomly. §.§ Results on Quantum Hardware Lastly, we want to investigate to what extend our DR reduction method can help to improve the performance of actual quantum hardware. To this end, we follow these steps: * Generate a instance Q * Apply our DR reduction method to obtain Q' * Apply quantum annealing to both Q and Q', performing multiple readouts * Compare the occurrence probabilities of the global minimum As two exemplary problems, we perform and . stands for “binary clustering” and is an unsupervised ML task, where data points are assigned to one of two classes (“clusters”). consists of finding a subset from a list of values that sum up to a given target value. Both have well-established embeddings <cit.>. To generate data for , we sample i.i.d. n=20 2-dimensional points from an isotropic standard normal distribution. Then we create two clusters by applying (x_1,x_2)↦ (x_1-4,x_2) to the first ten points, and (x_1,x_2)↦ (x_1+4,x_2) to the last ten. As a last step, we choose the points 1 and 19 and multiply their coordinates by 20, which leads to a data set containing two outliers, shown in <ref>. From this data we derive a instance Q using the method from <cit.>. We use a linear kernel, which leads to a vanilla 2-means clustering based on Euclidean distance. To ensure this problem has only one optimal solution, we assign class 0 to point 20 and only optimize over the remaining 19 points (otherwise there would be two symmetrical solutions). The resulting parameter values are shown in <ref>. As is apparent from the color scale, the DR is very high (22.7921 bits), which makes most values near 0 completely indistinguishable. Now, we apply our _0 DR reduction method to Q, since it was the most promising one apparent from <ref>. Again we use a local search and the roof-dual algorithm for computing optimum-preserving bounds. To limit execution time, we give our algorithm a fixed budget of 100 iterations. The resulting instance Q' is shown in <ref>; clearly, much more detail is visible, as the color scale is much narrower, already hinting at the lower DR. Indeed, we find that (Q')=11.0427, which is a reduction by more than half. Similarly, we sample a problem, using the same approach as for <ref> described in <ref>. We use the same algorithm configuration as before, but allow for 150 iterations. The parameter matrices are shown in <ref>. The DR is 25.6765 before and 9.8882 after. Next, we attempt to solve all four instances on a D-Wave quantum annealer. 
To this end, we use the Advantage system 4.1, which we access through the Python interface. The D-Wave annealers have a fixed connectivity structure, i.e., only a subset of qubit pairs can be assigned a weight. Therefore, dense problems (like ours) must be embedded into this connectivity graph structure through redundant encoding and additional constraints. To improve comparability, we have this embedding computed once for the original and re-use it for the compressed one. This is possible because they have the same size, differing only in parameter magnitudes. We perform 1000 reads for each instance and record their energy values. Recall that our proposed DR reduction method only keeps the solution vectors intact, but changes the parameter values, therefore the energy values are no longer comparable after reduction. For this reason, we compute the ground-truth minimal energies v^* for both Q and Q', and plot the relative deviation of the measured energies v w.r.t. these values, |v^*-v/v^*| , along the y-axis in <ref>. As is apparent from the figure, the DR-reduced instances yield the global optimum much more frequently than the original instances of high DR. We verified that the vector of minimal energy in the compressed instances indeed yield the minimal energy of the original problems, confirming that our method preserves the optimum correctly. To quantify the improvement, we consider the energy values of samples obtained from the compressed instances w.r.t. the original (uncompressed) . By inspecting the total number of samples with an energy equal to or lower than the lowest sampled energies obtained from the original instances, we find that the prevalence of low-energy solutions is 17.76 (237) higher for the compressed () instances. We conclude that our method helps a quantum annealer to find the optimum more reliably, since the ICEs limit the dynamic range of the hardware parameters <cit.>. § CONCLUSION While is arguably one of the most prominent and well-studied problem classes, many of their properties are not yet well understood. Our results fall into the cutset of theory and practice by providing a formalization of practical limitations w.r.t. rounding-induced perturbations of the problem parameters. Based on our formalization, we propose a completely novel methodology for reducing the precision required to encode instances. More precisely, we employ the notion of dynamic range to characterize instances w.r.t. the number of bits needed to faithfully encode their parameters. We show that a high dynamic range indicates that the parameters simultaneously cover a large value range while also containing fine gradations. Further, we defined the optimum inclusion relation , which formalizes the idea that the set of minimizing solutions of a instance Q is a subset of those minimizing Q'. This implies that a minimizing solution of Q is guaranteed to also minimize Q'. Based on scale-invariance properties, we derive a theoretical bound on the minimal scaling factor required such that, after rounding, the optimal solution remains unchanged. Applying our findings in a naive manner results in an intractable method due to the unavailability of the true minimal values. Nevertheless, we establish upper and lower bounds, that allow us to compute intervals in which parameter values can be modified without affecting the minimizing vectors. Hence, leaving optimal solutions intact. This opens a space of instances over which we can try to minimize the dynamic range while preserving the minima. 
To this end, a greedy algorithm is devised that iteratively modifies parameter values while reducing the problem's dynamic range. On top of the bare algorithm, we investigate different selection strategies for choosing the next entry of the parameter matrix. In a series of experiments we put our theoretical framework into practice. First, we randomly sample instances and apply our algorithm with different parameter selection strategies. We observe that all of them lead to a decrease of DR, the most effective one being the greedy strategy which tries to set parameters to zero, if possible (_0). In two more experiments, we took one binary clustering problem and one SubsetSum problem, reduced the resulting matrices' DR, and solved both the original and the compressed problem on a D-Wave quantum annealer. We find that our compression method greatly improves the performance of QA, leading to a significantly higher probability to observe the global optimum. We uncovered numerous intriguing open questions while working on this article, which may serve to inspire further research. The algebraic notion of optimum inclusion can be naturally extended to an equivalence relation, which induces a partitioning on _n. The number of equivalence classes must necessarily be finite and bounded above by 2^2^n-1, which implies there is only a finite number of meaningfully different instances in terms of their set of minimizing bit vectors. A fascinating research endeavor is to find, for each equivalence class, the representative of lowest DR, or—even better—an algorithm to do so. In the context of QA, the spectral gap, which we used for our theoretical bound, is crucial to the duration of the annealing process. It has been proven that for certain random problem instances, annealing time becomes exponentially long, rendering QA just as inefficient as brute force <cit.>. As we have shown in this article that DR has an impact on the result quality as well, it would be interesting to investigate the interrelation between the two quantities. The algorithm we proposed for reducing DR contains heuristic elements worth improving upon. We are currently investigating AI-based parameter selection strategies, , putting the DR reduction task in a Reinforcement Learning framework. In summary, we provide an efficient pre-processing technique that can be applied out-of-the-box to arbitrary instances to improve their feasibility. § STATEMENTS AND DECLARATIONS §.§ Author Contributions All authors contributed to the conception and design. The experiments were conducted by Sascha Mücke and Thore Gerlach. The first draft was written by Sascha Mücke and Thore Gerlach. All authors commented on and revised previous versions of the manuscript, and approved the final version. §.§ Competing Interests The authors of this article declare that there are no competing interests. §.§ Funding This research has been funded by the Federal Ministry of Education and Research of Germany and the state of North-Rhine Westphalia as part of the Lamarr-Institute for Machine Learning and Artificial Intelligence. 41 [Altshuler et al(2009)Altshuler, Krovi, and Roland]altshuler2009 Altshuler B, Krovi H, Roland J (2009) Adiabatic quantum optimization fails for random instances of NP-complete problems. 
arXiv:09082782 [quant-ph] https://arxiv.org/abs/0908.2782https://arxiv.org/abs/arXiv:0908.2782 [quant-ph] [Bauckhage et al(2018)Bauckhage, Ojeda, Sifa, and Wrobel]bauckhage2018 Bauckhage C, Ojeda C, Sifa R, et al (2018) Adiabatic quantum computing for kernel k = 2 means clustering. In: LWDA, pp 21–32 [Biamonte et al(2017)Biamonte, Wittek, Pancotti, Rebentrost, Wiebe, and Lloyd]biamonte2017 Biamonte J, Wittek P, Pancotti N, et al (2017) Quantum machine learning. Nature 549:195–202 [Biesner et al(2022)Biesner, Gerlach, Bauckhage, Kliem, and Sifa]biesner2022 Biesner D, Gerlach T, Bauckhage C, et al (2022) Solving subset sum problems using quantum inspired optimization algorithms with applications in auditing and financial data analysis. In: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, pp 903–908 [Boros et al(2006)Boros, Hammer, and Tavares]boros2006 Boros E, Hammer PL, Tavares G (2006) Preprocessing of unconstrained quadratic binary optimization. Tech. rep., RRR 10-2006, RUTCOR [Boros et al(2008)Boros, Hammer, Sun, and Tavares]boros2008 Boros E, Hammer PL, Sun R, et al (2008) A max-flow approach to improved lower bounds for quadratic unconstrained binary optimization (QUBO). Discrete Optimization 5(2):501–529. 10.1016/j.disopt.2007.02.001 [Brush(1967)]brush1967 Brush SG (1967) History of the lenz-ising model. Reviews of modern physics 39:883 [D-Wave Systems(2023)]dwaveice2023 D-Wave Systems (2023) Error Sources for Problem Representation. <https://docs.dwavesys.com/docs/latest/c_qpu_ice.html> [D-Wave Systems(2021)]d-wavesystems2021 D-Wave Systems (2021) Technical Description of the D-Wave Quantum Processing Unit [Date et al(2020)Date, Arthur, and Pusey-Nazzaro]date2020 Date P, Arthur D, Pusey-Nazzaro L (2020) QUBO Formulations for Training Machine Learning Models. arXiv:200802369 [physics, stat] https://arxiv.org/abs/2008.02369https://arxiv.org/abs/arXiv:2008.02369 [physics, stat] [Dunjko et al(2016)Dunjko, Taylor, and Briegel]dunjko2016 Dunjko V, Taylor JM, Briegel HJ (2016) Quantum-enhanced machine learning. Physical review letters 117:130,501 [Farhi et al(2000)Farhi, Goldstone, Gutmann, and Sipser]farhi2000 Farhi E, Goldstone J, Gutmann S, et al (2000) Quantum computation by adiabatic evolution. arXiv preprint quant-ph/0001106 https://arxiv.org/abs/quant-ph/0001106https://arxiv.org/abs/arXiv:quant-ph/0001106 [Farhi et al(2014)Farhi, Goldstone, and Gutmann]farhi2014 Farhi E, Goldstone J, Gutmann S (2014) A quantum approximate optimization algorithm. arXiv preprint arXiv:14114028 https://arxiv.org/abs/1411.4028https://arxiv.org/abs/arXiv:1411.4028 [Glover and Laguna(1998)]glover1998 Glover F, Laguna M (1998) Tabu search. Springer [Glover et al(2018)Glover, Lewis, and Kochenberger]glover2018 Glover F, Lewis M, Kochenberger G (2018) Logical and inequality implications for reducing the size and difficulty of quadratic unconstrained binary optimization problems. European Journal of Operational Research 265:829–842. 10.1016/j.ejor.2017.08.025 [Goldberg and Kuo(1987)]goldberg1987 Goldberg DE, Kuo CH (1987) Genetic algorithms in pipeline optimization. Journal of Computing in Civil Engineering 1(2):128–141 [Grover(1996)]grover1996 Grover LK (1996) A fast quantum mechanical algorithm for database search. 10.48550/arXiv.quant-ph/9605043, quant-ph/9605043 [Hammer and Shlifer(1971)]hammer1971 Hammer PL, Shlifer E (1971) Applications of pseudo-Boolean methods to economic problems. 
Theory and decision 1(3):296–308 [Hammer et al(1984)Hammer, Hansen, and Simeone]hammer1984 Hammer PL, Hansen P, Simeone B (1984) Roof duality, complementation and persistency in quadratic 0–1 optimization. Mathematical programming 28:121–155 [Kadowaki and Nishimori(1998)]kadowaki1998 Kadowaki T, Nishimori H (1998) Quantum annealing in the transverse Ising model. Physical Review E 58:5355 [Kirkpatrick et al(1983)Kirkpatrick, Gelatt Jr, and Vecchi]kirkpatrick1983 Kirkpatrick S, Gelatt Jr CD, Vecchi MP (1983) Optimization by simulated annealing. science 220(4598):671–680 [Kochenberger et al(2005)Kochenberger, Glover, Alidaee, and Lewis]kochenberger2005 Kochenberger G, Glover F, Alidaee B, et al (2005) Using the unconstrained quadratic program to model and solve Max 2-SAT problems. International Journal of Operational Research 1(1-2):89–100 [Kochenberger et al(2014)Kochenberger, Hao, Glover, Lewis, Lü, Wang, and Wang]kochenberger2014 Kochenberger G, Hao JK, Glover F, et al (2014) The unconstrained binary quadratic programming problem: A survey. Journal of Combinatorial Optimization 28:58–81 [Laughhunn(1970)]laughhunn1970 Laughhunn D (1970) Quadratic binary programming with application to capital-budgeting problems. Operations research 18(3):454–461 [Lewis and Glover(2017)]lewis2017 Lewis M, Glover F (2017) Quadratic unconstrained binary optimization problem preprocessing: Theory and empirical analysis. Networks 70(2):79–97 [Lloyd et al(2013)Lloyd, Mohseni, and Rebentrost]lloyd2013 Lloyd S, Mohseni M, Rebentrost P (2013) Quantum algorithms for supervised and unsupervised machine learning. arXiv preprint arXiv:13070411 https://arxiv.org/abs/1307.0411https://arxiv.org/abs/arXiv:1307.0411 [Matsubara et al(2017)Matsubara, Tamura, Takatsu, Yoo, Vatankhahghadim, Yamasaki, Miyazawa, Tsukamoto, Watanabe, Takemoto et al]matsubara2017 Matsubara S, Tamura H, Takatsu M, et al (2017) Ising-model optimizer with parallel-trial bit-sieve engine. In: Conference on Complex, Intelligent, and Software Intensive Systems. Springer, pp 432–438 [Mücke et al(2019)Mücke, Piatkowski, and Morik]mucke2019b Mücke S, Piatkowski N, Morik K (2019) Hardware acceleration of machine learning beyond linear algebra. In: Cellier P, Driessens K (eds) Workshops of the ECML-PKDD, Communications in Computer and Information Science, vol 1167. Springer, pp 342–347, 10.1007/978-3-030-43823-4_29 [Mücke et al(2019)Mücke, Piatkowski, and Morik]mucke2019a Mücke S, Piatkowski N, Morik K (2019) Learning Bit by Bit: Extracting the Essence of Machine Learning. In: Proceedings of the Conference on "Lernen, Wissen, Daten, Analysen" (LWDA), pp 144–155 [Mücke et al(2023)Mücke, Heese, Müller, Wolter, and Piatkowski]mucke2023 Mücke S, Heese R, Müller S, et al (2023) Feature selection on quantum computers. Quantum Machine Intelligence 5. 10.1007/s42484-023-00099-z [Narendra and Fukunaga(1977)]narendra1977 Narendra PM, Fukunaga K (1977) A branch and bound algorithm for feature subset selection. IEEE Trans Computers 26(9):917–922. 10.1109/TC.1977.1674939 [Neukart et al(2017)Neukart, Compostella, Seidel, von Dollen, Yarkoni, and Parney]neukart2017 Neukart F, Compostella G, Seidel C, et al (2017) Traffic Flow Optimization Using a Quantum Annealer. Frontiers in ICT 4 [Pardalos and Jha(1992)]pardalos1992 Pardalos PM, Jha S (1992) Complexity of uniqueness and local search in quadratic 0–1 programming. 
Operations research letters 11(2):119–123 [Peruzzo et al(2014)Peruzzo, McClean, Shadbolt, Yung, Zhou, Love, Aspuru-Guzik, and O'Brien]peruzzo2014 Peruzzo A, McClean J, Shadbolt P, et al (2014) A variational eigenvalue solver on a photonic quantum processor. Nature Communications 5:4213. 10.1038/ncomms5213 [Rehfeldt et al(2023)Rehfeldt, Koch, and Shinano]rehfeldt2023 Rehfeldt D, Koch T, Shinano Y (2023) Faster exact solution of sparse maxcut and qubo problems. Mathematical Programming Computation 10.1007/s12532-023-00236-6 [Rønnow et al(2014)Rønnow, Wang, Job, Boixo, Isakov, Wecker, Martinis, Lidar, and Troyer]ronnow2014 Rønnow TF, Wang Z, Job J, et al (2014) Defining and detecting quantum speedup. Science 345(6195):420–424. 10.1126/science.1252319 [Schuld et al(2015)Schuld, Sinayskiy, and Petruccione]schuld2015 Schuld M, Sinayskiy I, Petruccione F (2015) An introduction to quantum machine learning. Contemporary Physics 56:172–185 [Shor(1997)]shor1997 Shor PW (1997) Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM Journal on Computing 26:1484–1509. 10.1137/S0097539795293172, https://arxiv.org/abs/quant-ph/9508027https://arxiv.org/abs/arXiv:quant-ph/9508027 [International Organization for Standardization(2020)]iso2020 International Organization for Standardization (2020) ISO/IEC JTC 1/SC 25 60559:2020 pp 1–74. <https://www.iso.org/standard/80985.html> [Stollenwerk et al(2019)Stollenwerk, Lobe, and Jung]stollenwerk2019 Stollenwerk T, Lobe E, Jung M (2019) Flight Gate Assignment with a Quantum Annealer. In: Quantum Technology and Optimization Problems. Springer International Publishing, Lecture Notes in Computer Science, pp 99–110, 10.1007/978-3-030-14082-3_9 [van Dam et al(2001)van Dam, Mosca, and Vazirani]vandam2001 van Dam W, Mosca M, Vazirani U (2001) How powerful is adiabatic quantum computation? In: Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pp 279–287, 10.1109/SFCS.2001.959902 § METHODOLOGY FOR <REF> In this section we describe the experimental setup used to obtain the results shown in <ref>. We used random instances of the SubsetSum problem. For this problem, we are given a list A=(a_1,…,a_n) of n integers and a target value T. Our task is to find a subset S⊆ [n] such that ∑_i∈ Sa_i=T. This problem lends itself naturally to , where we use n binary variables which indicate if i∈ S for each i. The energy function is then simply f(x⃗)=(∑_ia_ix_i-T)^2 ∝∑_i,j a_ia_jx_ix_j -2T∑_i a_ix_i , which yields the parameters Q_ij=2a_ia_j if i≠ j a_i^2-2Ta_i otherwise. For our experiment, we set n=16 and sampled the elements of A i.i.d. as 10· Z, where Z follows a standard Cauchy distribution. This leads to occasional outliers with large magnitudes, which in turn yields instances with large DR. Next, we sample the number of summands k from U where U follows a triangular distribution with parameters a=n/5, b=n/2 and c=4n/5, so that, on average, half of the elements of A contribute to the sum. Finally, we sample k indices from [n] without replacement to obtain S, and set T=∑_i∈ Sa_i. This way, we handily generate problems where we already know the global optimum. Following this process we generate N=100,000 instances 𝒟={Q^i}_i∈[N] and compute their DR. From the empirical distribution of DR values, we compute the i/5-quantiles, which we label β_i, for i∈{ 0,…,5}. For each i∈[4], we define bins B_i=[β_i-1,β_i), and B_5=[β_4,β_5]. 
Now, we partition our data set by sorting the instances into the 5 bins by their DR, 𝒟_i{ Q^j: j∈[N], (Q^j)∈ B_i}. For each 𝒟_i, we scale and round each instance to the number of bits indicated by the x-axis. The y-axis shows “optimum correctness”, which indicates the proportion of rounded instances that retained a global optimum after scaling and rounding. To compute this value, firstly let correct(Q' Q) 1 if Q' Q and 0 otherwise. We check this by obtaining v^*=min_x⃗ x⃗^⊤ Qx⃗ and x̃⃗̃^*∈ Q' by brute force; if f_ Q(x̃⃗̃^*)=v^*, then necessarily Q' Q. For each number of bits n_b and bin index i, the y-value we plot is 1/20,000∑_ Q∈𝒟_icorrect(Q̃^(n_b) Q), where Q̃^(n_b) denotes the version of Q scaled and rounded to a bit precision of n_b. Intuitively, this tells us the approximate probability that, for a randomly sampled SubsetSum problem, we still obtain the correct solution when parameter precision is reduced to the given number of bits. Clearly, this probability increases when we use more bits to encode the parameters. This is expected, as higher precision leads to more accurate representation of the energy landscape. Further, the result shows that the probability decreases with higher DR, which we also expected: A high DR indicates that there are fine distinctions between parameter values, making it more likely for rounding to collapse parameters and, consequently, lose energy distinctions between solution vectors.
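The data-generation and evaluation pipeline described in this appendix can be sketched end-to-end as follows. The SubsetSum embedding follows the parameter formula above; the exact convention for "scaling and rounding to n_b bits" is not spelled out in the text, so the version below (scale such that the largest absolute parameter fits the signed n_b-bit integer range, then round) is our assumption, as is the reduced problem size n = 10 used to keep the brute-force check cheap. qubo_energy and brute_force_minima are reused from the earlier sketches.

import numpy as np
# assumes qubo_energy and brute_force_minima from the earlier sketches

def subset_sum_qubo(a, T):
    """QUBO embedding of SubsetSum, f(x) = (sum_i a_i x_i - T)^2 up to a constant:
    Q_ij = 2 a_i a_j for i < j and Q_ii = a_i^2 - 2 T a_i."""
    a = np.asarray(a, dtype=float)
    Q = 2.0 * np.triu(np.outer(a, a), k=1)
    np.fill_diagonal(Q, a ** 2 - 2.0 * T * a)
    return Q

def quantize(Q, n_bits):
    """Scale so that all parameters fit a signed n_bits integer range, then round
    (assumed convention)."""
    alpha = (2 ** (n_bits - 1) - 1) / np.max(np.abs(Q))
    return np.round(alpha * Q)

rng = np.random.default_rng(11)
n = 10
a = np.round(10.0 * rng.standard_cauchy(n))              # heavy-tailed integer values
k = int(round(rng.triangular(n / 5, n / 2, 4 * n / 5)))  # number of summands
S = rng.choice(n, size=k, replace=False)
T = a[S].sum()                                           # target with known solution

Q = subset_sum_qubo(a, T)
v_star, _ = brute_force_minima(Q)
for n_bits in (4, 8, 12, 16):
    _, minima_q = brute_force_minima(quantize(Q, n_bits))
    retained = any(abs(qubo_energy(Q, x) - v_star) <= 1e-6 for x in minima_q)
    print(n_bits, "bits -> optimum retained:", retained)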
arXiv:2307.01118v1 [hep-ph] (3 July 2023)
Pinning down the leptophobic Z^' in leptonic final states with Deep Learning
Tanumoy Mandal, Aniket Masaye, Subhadip Mitra, Cyrin Neeraj, Naveen Reule, Kalp Shah
Indian Institute of Science Education and Research Thiruvananthapuram, Vithura, Kerala, 695 551, India; Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad 500 032, India
A leptophobic Z^' that does not couple with the Standard Model leptons can evade the stringent bounds from the dilepton-resonance searches. In our earlier paper [T. Arun et al., Search for the Z' boson decaying to a right-handed neutrino pair in leptophobic U(1) models, Phys. Rev. D 106 (2022) 095035, http://dx.doi.org/10.1103/PhysRevD.106.095035; arXiv:2204.02949, http://arxiv.org/abs/2204.02949], we presented two gauge anomaly-free U(1) models where a heavy leptophobic Z' is present along with right-handed neutrinos (N_R). We pointed out the interesting possibility of a correlated search for Z' and N_R at the LHC through the pp→ Z'→ N_R N_R channel. This channel can probe a part of the leptophobic Z' parameter space that cannot be otherwise probed using the standard dijet resonance searches. In this paper, we analyse the monolepton final state arising from the decay of the N_R pair. We show that a leptophobic Z' as heavy as 7 TeV and with a gauge coupling of the order of the electroweak coupling is discoverable through this channel at the high-luminosity LHC. § INTRODUCTION Many extensions of the Standard Model (SM), like the grand unified theories (GUTs) <cit.>, contain an electrically neutral colour-singlet heavy gauge boson, Z'. To get such a particle in the spectrum, one can simply add an extra local U(1) symmetry spontaneously broken by a complex scalar field with a TeV-scale vacuum expectation value in a bottom-up approach. The phenomenology of a TeV-scale Z' is well-explored in the literature (see, e.g., <cit.>). One of the current programs of the LHC is to look for the signature of the Z^'. However, so far, the focus has been only on the cases where it decays exclusively to the SM particles <cit.>. The failure to find the Z' in the SM-decay modes motivates us to look for other possibilities with non-standard decay modes of the Z', i.e., where it can decay to beyond-the-SM (BSM) particles as well. One interesting possibility is when the Z' decays to a pair of right-handed neutrinos (RHNs, N_R) <cit.>. While appending an additional U(1) gauge symmetry to the SM group, one must ensure the cancellation of all gauge anomalies to maintain gauge invariance and renormalizability of the theory. To cancel the gauge anomalies, one can include RHNs in the particle spectrum <cit.> or rely on some special mechanism like the Green-Schwarz (GS) mechanism <cit.>. Therefore, many anomaly-free U(1) extensions of the SM contain RHNs since they can also help generate light neutrino masses through various seesaw mechanisms. 
The strongest exclusion limits on the Z' parameter space usually come from the dilepton resonance searches <cit.>. For instance, the current mass exclusion limit on a sequential Z' is about 5 TeV <cit.>. The stringent dilepton exclusion limits can be evaded if the Z' is leptophobic, i.e., it does not couple (or couples very feebly) to the SM leptons. In the leptophobic case, the nonstandard decay modes of the Z' become especially important if the branching ratios (BRs) to the new modes are high. In our previous paper <cit.>, we presented two theoretically well-motivated leptophobic Z' scenarios where the Z' dominantly decays to a pair of RHNs. Being leptophobic, these scenarios can easily bypass the spoilsport dilepton exclusion limits. At the same time, a large BR of the Z'→ N_RN_R decay and the subsequent decays of N_R to Wℓ, Zν, and Hν final states open up many interesting leptonic final states (see, e.g., Fig. <ref>). The process pp→ Z'→ N_RN_R is not only important from the Z'-search point of view but for the RHN searches as well. Since RHNs are SM singlets, producing them at the LHC is difficult as their production cross sections are small, suppressed by the light-heavy neutrino mixing angles. However, they can be copiously produced if they come from the decays of another BSM particle <cit.>. The RHN pair production from the decay of the Z' has been previously discussed in some phenomenology papers <cit.>. We considered this channel in a leptophobic setup and estimated the high-luminosity LHC (HL-LHC) reach with a simple cut-based analysis in Ref. <cit.>. Interestingly, we found that a large chunk of the parameter space beyond the reach of pp→ Z'→ jj can be probed through the pp→ Z'→ N_RN_R channel at the HL-LHC. The decay of the RHN pair can lead to multilepton final states. There we considered the dilepton channel (which is easy to handle in a cut-based analysis and has a good sensitivity) to estimate the HL-LHC reach. The monolepton mode is complimentary but more challenging because of the difficulty in the background reduction. In this paper, we estimate the reach in the monolepton final state with a deep neural network (DNN) model. The RHNs we consider here are around the TeV scale, much lighter than the standard type-I seesaw scale ∼ 10^14 GeV. To naturally realise the TeV-scale RHNs, we consider inverse-seesaw mechanism (ISM) for neutrino mass generation <cit.>. As a result, unlike the Majorana type RHNs in the type-I seesaw, the RHNs here are pseudo-Dirac in nature. Earlier, in Refs. <cit.>, different U(1) extensions using the ISM were considered in various contexts. Also, the future lepton colliders could be the best testing ground for the heavy neutrinos <cit.>. § LEPTOPHOBIC Z' MODELS We look at the two examples of anomaly-free gauge extensions of the SM from Ref. <cit.>, where a leptophobic Z' with substantial Z'→ N_RN_R branchings is present. In one example, the RHNs cancel the mixed gauge-gravity anomaly while the GS mechanism cancels the rest, while in the other, the leptophobia of Z' arises in a GUT framework <cit.>. Below, for completeness, we revisit the essential details of these two examples of leptophobic constructions. §.§ GS mechanism and leptophobia In this case, the particle spectrum includes three RHNs—one for every generation—equally charged under the extra U(1)_z. The light-neutrino masses are generated through the ISM for which there is also an extra chiral sterile fermion S_L introduced for every generation. 
We show the U(1)_z charges of various fields in Table <ref>. The charge assignment is generation independent. The SM leptons are uncharged under U(1)_z to make the Z' leptophobic. The nonzero U(1)_z charges of the quarks ensure the Z' can be produced at the LHC through quark-quark fusion. Here, U(1)_z quark charges are kind of arbitrary, and other choices are also possible. The SM Higgs doublet is chargeless under U(1)_z to minimise the mixing between Z and Z'. The scalar ϕ with unit U(1)_z charge is the flavon field introduced to write down the Yukawa interactions of the SM fermions as shown in Eq. (<ref>). The fermion mass terms arise after the ϕ and H get vacuum expectation values. The above charge assignment leads to six anomalous triangle diagrams proportional to the traces of the product of the generators as follows: * [SU(3)_C]^2[U(1)_z]: tr[𝒯^a𝒯^b_z] = 12z_q, * [SU(2)_L]^2[U(1)_z]: tr[T^aT^b_z] = 6z_q, * [U(1)_z]^3: tr[z^3] = 12z_q^3 - z_N^3, * [U(1)_Y]^2[U(1)_z]: tr[Y^2z] = 22z_q/3, * [U(1)_Y][U(1)_z]^2: tr[Yz^2] = 0, * [R]^2[U(1)_z]: tr[z] = 12z_q - z_N. We assume that mixed gauge-gravity anomaly vanishes at low energy and set 12z_q - z_N = 0 , by hand. Thus, we only have one free charge in our model. To cancel the other anomalies with the Green-Schwarz (GS) mechanism <cit.>, we add new gauge-dependent terms in the Lagrangian with carefully chosen coefficients. The Peccei-Quinn terms for the mixed anomalies and the [U(1)_z]^3 anomaly are written as ℒ_PQ = 196π^2(ΘM)ε_μνρσ[g_z^2 C_zzz F_z^μν F_z^ρσ+g_z g' C_zzy F_z^μν F_Y^ρσ +g'^2 C_zyy F_Y^μν F_Y^ρσ + g^2 D_2tr(F_W^μν F_W^ρσ) +g_S^2 D_3tr(F_S^μν F_S^ρσ)]. Under the new U(1)_z, the pseudoscalar (Goldstone) axion Θ transforms as Θ→Θ + Mg_zθ_z and the new gauge field transforms as B^μ_z → B^μ_z -∂^μθ_z. Here, M is the scale at which U(1)_z breaks through the Stückelberg mechanism, F_z, F_Y, F_W, F_S are the field strengths and g_z, g', g, g_S are the coupling constants associated with the U(1)_z, U(1)_Y, SU(2)_L, SU(3)_c gauge groups, respectively. The generalised Chern-Simons (GCS) terms for other mixed anomalies are given as ℒ_GCS= 1/48 π^2ε_μνρσ[ g'^2 g_z E_zyy B_Y^μ B_z^ν F_Y^ρσ+g' g_z^2 E_zzy B_Y^μ B_z^ν F_z^ρσ + g^2 g_z K_2 B_z^μΩ_W^νρσ+g_S^2 g_z K_3 B_z^μΩ_S^νρσ] with Ω_G,W^νρσ = 1/3tr[A_G,W^ν(F_G,W^ρσ-[A_G,W^ρ,A_G,W^σ]) +(cyclic permutations of ρ, σ, v)] and A_X = [ G,W_X ]. The coefficients C, D, E, and K can be solved in terms of the U(1)_z charges of the fermions as C_zzz = -(12z_q^3 - z_N^3), C_zyy = -22z_q = 3/2 E_zyy, C_zzy = 0 = E_zzy, D_2 = -18z_q = -3/2 K_2, D_3 = -36z_q = -3/2 K_3. so that the anomalies are cancelled. The U(1)_z invariant Yukawa interactions are written as ℒ_Y ⊃ -λ_u ( ϕ/Λ)^2z_qQ_LH u_R - λ_d (ϕ/Λ)^2z_qQ_L H d_R + h.c., where H = iσ_2 H^*, and λ_u and λ_d are the Yukawa couplings. After the spontaneous breaking of the U(1)_z symmetry, ϕ acquires a vacuum expectation value v_ϕ = ⟨ϕ⟩ leading to the SM Yukawa terms. The neutrino masses are generated by the ISM through the following higher-dimensional operators, ℒ_Y ⊃ -λ^ν_ij( ϕ^†/Λ)^z_N L^iH N^j_R - M_R_ij( ϕ/Λ)^z_N N_R^i S_L^j - μ/2 S^c_L S_L + h.c. We get the mass matrix for the neutrinos after the sequential breaking of the U(1)_z and electroweak symmetries in the (ν _L^c N_R S_L^c )^T basis as, 𝐌_ν = [ 0 𝐦_𝐃 0; 𝐦_𝐃^𝐓 0 𝐌_𝐑; 0 𝐌_𝐑^𝐓 μ; ], where 𝐦_𝐃 = λ^ν v_h, μ, and 𝐌_𝐑 are 3×3 mass matrices in the generation space in general. 
The parameter μ is typically associated with lepton number violation and restores the lepton number symmetry in the μ→ 0 limit. After diagonalising the matrix in Eq. (<ref>), we get the active (light) neutrino mass matrix as <cit.>, 𝐦_ν = 𝐦_𝐃 𝐌^-1_R μ (𝐌_𝐑^𝐓)^-1𝐦_𝐃 + 𝒪(1/𝐌^4_𝐑). There is a double suppression of the light neutrino masses from the large 𝐌_𝐑 (taken around the TeV scale) and the small μ. §.§ Leptophobia in E_6 GUT models In some E_6 GUT models, leptophobia can be realised through gauge-boson kinetic mixing <cit.> even though the fermion couplings to the gauge bosons are not arbitrary free parameters. In such models, the U(1)_z group can come from a symmetry-breaking chain like, E_6 → S O(10) × U(1)_χ → S U(5) × U(1)_χ× U(1)_ψ → S U(2)_L × U(1)_Y × U(1)_z → S U(2)_L × U(1)_Y. The U(1)_z appears as a linear combination of U(1)_ψ and U(1)_χ. The U(1)_z charge Q_z can be expressed as Q_z = Q_ψcosθ - Q_χsinθ where θ is the E_6 mixing angle. However, there is no solution for θ where the lepton couplings vanish. Therefore, we look at the gauge-kinetic mixing terms allowed by the gauge invariance, ℒ_kin⊃ -1/4F_Y^μνF_Y μν - 1/4F_z^μνF_z μν - sinχ/2F_Y μνF^μν_z. The kinetic mixing, parametrised by sinχ, can be removed by a unitary transformation <cit.>: B̃^μ_Y = B_Y^μ - tanχ B_z^μ, B̃_z^μ = B_z^μ/cosχ. This transformation makes the kinetic mixing vanish only at a particular scale, implying that the mixing term can regenerate at higher orders. This makes the couplings energy-dependent. Hence, they must be evaluated at the TeV scale before relating to the experiments. After the rotation, the couplings of the gauge bosons with the fermions are given as <cit.>, ℒ_int⊃ -ψγ_μ[ g' Y_SMB^μ_Y + g_z( Q_z+√(3/5)δ Y_SM)B_z^μ]ψ, where δ = -g_Ysinχ/g_z. Now the fermion couplings are functions of two free parameters θ and δ, we can in general make the couplings of any two fields vanish. For the standard embedding (Table <ref>), leptophobia corresponds to θ = tan^-1√(3/5) and δ = -1/3 <cit.>. However, as shown in Table <ref>, different combinations of θ and δ work in different embeddings. § COLLIDER ANALYSIS For the collider analysis, we use the following simplified Lagrangian L⊃g_z2(z_u u̅_i^μ u_i + z_u d̅_i^μ d_i + z_N N̅_R^μ N_R)Z^'_μ , where z_u,d are the effective U(1)_z charges of up and down type quarks where for any quark q it is defined as z_q^2=z_qL^2 + z_qR^2. We use FeynRules <cit.> to build the above model and obtain the Universal FeynRules Output (UFO). We simulate the hard scattering in MadGraph5 <cit.>, showering and hadronisation in Pythia <cit.> and the LHC detector environment in Delphes3 <cit.>. The events are generated at √(s) = 14 TeV. Jets are clustered using the anti-k_T algorithm <cit.> with R=0.4 (we call them as AK-4 jet). For the electron, the parameter in the Delphes card (distance between the other isolated objects and the identified electron) for isolation has been modified to 0.2 from 0.5. §.§ Event selection criteria We show the signal topology in Fig. <ref>. In this paper we focus on the monolepton signature: p p → Z' → N_R N_R →ℓ_μ + E + jets which arises when one of the RHNs decays to a hadronically decaying W boson and a lepton, while the other decays to a Z/H boson and a neutrino. We look at Z' masses between 3-7 TeV and RHNs ranging from 0.5 to 3.5 TeV. 
From the topology and the distributions of the final state, we design the following basic selection criteria for the signal events: ℭ_1: H_T > 750 GeV, where H_T is the scalar sum of the transverse momenta of all hadronic objects in an event. ℭ_2: Exactly one muon, with p_T > 120 GeV with |η| < 2.5. ℭ_3: At least three AK-4 jets. We also demand that the leading jet has p_T > 120 GeV. ℭ_4: Exactly two fatjets, clustered using the anti-k_T algorithm with R=0.7 and p_T>200 GeV with mass M_J ∈ [40,165]. ℭ_5: The invariant mass between the reconstructed muon and missing transverse energy (E_T) must be greater than 140 GeV, i.e., M_μ E_T > 140 GeV. We pick the fatjet radius R=0.7 after optimising to tag a boosted two-pronged jet. The constraints on fatjet mass (ℭ_4) are chosen to include reconstructed W, Z and H fatjets. Since the missing energy and muon come from the decay of W-boson in all of the backgrounds, we put a high cut on the invariant mass of reconstructed lepton and missing transverse energy pair in ℭ_5. §.§ Background processes The major backgrounds considered for the analysis are listed in Table <ref>, along with the cut efficiencies and the total number of events at the HL-LHC luminosity, 3 ab^-1. To reduce the computation time, we generate the background events with some generation-level cuts. The W^± + jets process is the leading background before the cuts; the demand of a high-p_T jet (ℭ_3) and two fatjets (ℭ_4) reduces this background significantly. The contributions from the W^+ W^- + jets process drop drastically with the demand of two fatjets (ℭ_4). The fatjet criterion (ℭ_4) also cuts the tt̅ + jets background as most of these events have just one fatjet. The hadronically-decaying top is sometimes reconstructed as the fatjet in a fraction of tt̅ + jets events, the demand on the fatjet mass cuts down these events. After the event selection criteria are enforced, W^+ W^- + jets and tt̅ + jets become the leading backgrounds. §.§ Kinematic picture We use the kinematic information of the identified objects, i.e., one muon, three AK-4 jets, and two fatjets, along with some derived quantities to create the input set of variables for the multivariate analysis—the full list of variables is given in Table <ref> where J is used to denote a fatjet whereas j is used to identify an AK-4 jet. We show the distributions of the key features of the signal and the background for three benchmark parameters in Fig. <ref>. The H_T (scalar sum of all hadronic objects) distributions show a clear separation between the signals and background due to the boosted nature of the final state, see Fig. <ref>. Moreover, for the signals, the transverse momentum distributions of the tagged muon, AK-4 jets, and the fatjet and the E_T distribution peak at considerably higher values than those of the background, see Figs. <ref>, <ref>, <ref>, <ref>. We also consider the n-subjettiness variables <cit.> which check for two-pronged (τ_2/τ_1) and three-pronged (τ_3/τ_2) nature of a fatjet for various values of β (β∈{ 1,2,3}). Since our fatjet parameters are tuned to identify a W-like (two-pronged) jet, the τ_2/τ_1 ratios are the key variables. Of these, the τ_2/τ_1 for β=2 gives a clear separation between the signals and the background [Fig. <ref>]. The τ_3/τ_2 ratios do not offer clear separations but, nevertheless, we include them due to the inclusive nature of the analysis cuts on the number of fatjets. 
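As a rough guide to how the criteria ℭ_1–ℭ_5 above translate into analysis code, the following sketch (ours, not the authors' pipeline) filters a single event. The event dictionary and its field names are hypothetical stand-ins for parsed Delphes output; H_T is approximated here by the scalar p_T sum over the AK-4 jets, and the muon–E_T mass in ℭ_5 is approximated by a transverse mass since the longitudinal component of the missing momentum is not measured.

```python
import math

def passes_baseline_selection(event):
    # `event` is a hypothetical dictionary of reconstructed objects with
    # pt [GeV], eta, phi and mass fields (an assumption, not an actual API).
    jets    = sorted(event["ak4_jets"], key=lambda j: j["pt"], reverse=True)
    fatjets = event["ak7_fatjets"]     # R = 0.7 fatjets, assumed pre-filtered to pT > 200 GeV
    muons   = event["muons"]
    met     = event["met"]             # {"pt": ..., "phi": ...}

    # C1: H_T > 750 GeV (approximated by the AK-4 jet pT sum)
    if sum(j["pt"] for j in jets) <= 750.0:
        return False

    # C2: exactly one muon with pT > 120 GeV and |eta| < 2.5
    good_muons = [m for m in muons if m["pt"] > 120.0 and abs(m["eta"]) < 2.5]
    if len(good_muons) != 1:
        return False

    # C3: at least three AK-4 jets, leading jet pT > 120 GeV
    if len(jets) < 3 or jets[0]["pt"] <= 120.0:
        return False

    # C4: exactly two fatjets with mass inside the [40, 165] GeV window
    if sum(1 for J in fatjets if 40.0 <= J["mass"] <= 165.0) != 2:
        return False

    # C5: M(mu, MET) > 140 GeV, taken here as the transverse mass (assumption)
    mu = good_muons[0]
    mt = math.sqrt(2.0 * mu["pt"] * met["pt"] * (1.0 - math.cos(mu["phi"] - met["phi"])))
    return mt > 140.0
```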
The variable m_J_i ℓ (invariant mass of a fatjet and the tagged muon) reconstructs the mass of the RHN very well—the trend can be clearly seen in Fig. <ref>. The Δ R_J_i E_T distributions in Fig. <ref> can also distinguish the signals and the background well, as the final state objects come from the decay of an on-shell Z'. From the full set of kinematic variables, we drop the features that offer little separation between the signals and the background. For instance, we ignore the features Δ R_J_1 j_1 and Δ R_J_2 j_2 as we see that the leading (subleading) jet gets clustered inside the leading (subleading) fatjet for both signal and backgrounds. We also drop Δ R_j_2 j_3, m_j_1 j_3 and m_j_2 j_3 as they show overlapping signal and background distributions. The final set of input features to the DNN model has 46 kinematic features. §.§ Deep Learning model To perform the classification task, we use a fully-connected DNN model which has an input layer, four hidden layers (with 50, 35, 30, and 10 nodes in the layers, respectively) and a single output layer. We implement the network using the module <cit.> and use a binary cross-entropy loss and the optimiser <cit.> with a learning rate 0.001 to train the network. We pick the benchmark point (M_Z', M_N_R) = (5.0, 2.0) TeV to optimise the hyperparameters using a grid search. The performance metric is the signal significance (Z score), 𝒵 = √(2(N_S + N_B)ln(N_S+N_B/N_B) - 2N_S), where N_S and N_B are the numbers of signal and background events allowed by the network at 3 ab^-1 luminosity. § RESULTS In Fig. <ref>, we show the values of the U(1)_z gauge coupling, g_z [Eq. (<ref>)], needed to achieve the discovery (5σ) and exclusion (∼ 2σ) significance for a typical choice of parameters on the M_Z'-M_N_R plane. To calculate the number of events, N_S and N_B, we use the final efficiencies of the DNN model at the working point with DNN response = 0.95. The g_z contours are broadly insensitive to the mass of the RHN. However, because of the high boosts, the event selection efficiency around M_N_R=0.5 TeV (or less) is lower than those at larger values. Hence, one needs a larger g_z to achieve a high signal significance. This can be seen from the slightly deformed contours between M_N_R=1.0 TeV and M_N_R=0.5 TeV. § CONCLUSIONS In this paper, we studied the prospects of a correlated search of a heavy neutral gauge boson Z' and the right-handed neutrino N_R in leptophobic U(1) extensions. In particular, we analysed an experimentally unexplored channel pp→ Z'→ N_RN_R where the N_R pair decays to a monolepton final state. Unlike the dilepton signature considered in our previous paper <cit.>, probing the monolepton signature is highly challenging due to the huge background from SM processes which are hard to reduce with a cut-based analysis. Hence, in this paper, we used a DNN model to isolate the signal from the SM background, providing orders of magnitude gains in significance scores. We found that for g_z∼ 0.7, a Z' with mass up to 7 TeV can be discovered at the 14 TeV HL-LHC. § ACKNOWLEDGEMENTS C. N. acknowledges the Department of Science and Technology (DST)-Inspire for his fellowship. We also thank Jai Bardhan for helping us speed up the input-feature generation pipeline and for helpful discussions. JHEPCust
http://arxiv.org/abs/2307.00659v1
20230702202746
Parameterized Density Functional Models for Block Copolymer Melts
[ "Sulin Wang", "Yuan Chen", "Zengqiang Tan", "Keith Promislow" ]
cond-mat.soft
[ "cond-mat.soft" ]
http://arxiv.org/abs/2307.02966v1
20230706130510
Diagnostics for categorical response models based on quantile residuals and distance measures
[ "Patrícia Peres Araripe", "Idemauro Antonio Rodrigues de Lara", "Gabriel Rodrigues Palma", "Niamh Cahill", "Rafael de Andrade Moral" ]
stat.ME
[ "stat.ME", "q-bio.QM" ]
Original Research Article Diagnostics for categorical response models based on quantile residuals and distance measures Patrícia Peres Araripe a, Idemauro Antonio Rodrigues de LaraaCorresponding author: [email protected], Gabriel Rodrigues Palmab, Niamh Cahillb and Rafael de Andrade Moralb a“Luiz de Queiroz” College, Postgraduate Program in Statistics and Agronomic Experimentation, University of São Paulo, Brazil; bDepartment of Mathematics and Statistics, Maynooth University, Ireland ========================================================================================================================================================================================================================================================================================================================================================================================= Polytomous categorical data, in a nominal or ordinal scale, are frequent in many studies in different areas of knowledge. Depending on experimental design, these data can be obtained with an individual or grouped structure. In both structures, the multinomial distribution may be suitable to model the response variable and, in general, the generalized logit model is commonly used to relate the covariates' potential effects on the response variable. After fitting a multi-categorical model, one of the challenges is the definition of an appropriate residual and choosing diagnostic techniques to assess goodness-of-fit, as well as validate inferences based on the model. Since the polytomous variable is multivariate, raw, Pearson, or deviance residuals are vectors and their asymptotic distribution is generally unknown, which leads to potential difficulties in graphical visualization and interpretation. Therefore, the definition of appropriate residuals, as well as the choice of the correct analysis in diagnostic tools is very important, especially for nominal categorical data, where a restriction of methods is observed. This paper proposes the use of randomized quantile residuals associated with individual and grouped nominal data, as well as Euclidean and Mahalanobis distance measures associated with grouped data only, as an alternative method to reduce the dimension of the residuals and to study outliers. To show the effectiveness of the proposed methods, we developed simulation studies with individual and grouped categorical data structures associated with generalized logit models. Parameter estimation was carried out by maximum likelihood and half-normal plots with simulation envelopes were used to assess model performance using residuals and distance metrics. These studies demonstrated a good performance of the quantile residuals and, also, the distance measurements allowed a better interpretation of the graphical techniques. We illustrate the proposed procedures with two applications to real data, for which the employed techniques validated the model choice. Generalized logit model; maximum likelihood; model selection; half-normal plot; normality. § INTRODUCTION Nominal polytomous variables are defined by a finite set of categories (more than two), being of interest in experimental design in many disciplines, especially in biological and agricultural sciences. The subjects can be an individual (a plant, an insect, or an animal) or groups of individuals (a stall with animals, a cage with insects, or a plant with its branches), in which the categorical responses are observed. 
Therefore, the categorical data structure can be individual or grouped, depending on experimental design and goals of the study. Independent of structure, in general, the multinomial distribution (or an extension of it) is assumed to model the response variable, and the generalized logit model is used to describe the relationship between the polytomous response and covariates <cit.>. Additionally, the assumptions of the fitted model must be verified to validate statistical inference. In this process, residual analysis is fundamental. The first step involves an appropriate definition for the residuals and, after that, formal (hypothesis tests) and informal (exploratory plots) techniques can be used to assess goodness-of-fit and model assumptions. According to <cit.>, residual analyses are essential to identify discrepancies between the model and the data, detecting outliers and influential points. For example, the deviance and Pearson statistics are quantitative measures widely used to test the goodness-of-fit of generalized linear models (GLMs), however they can only be applied to multinomial data in the grouped structure and are not reliable for small sample sizes <cit.>. In fact, residual analysis is still a challenge for the multinomial case. Since the polytomous response is multivariate, the ordinary residual, defined by the difference between the observed response and the estimated probabilities, is a vector for each individual, with a dimension defined by the number of categories. In addition, these residuals have an unknown asymptotic distribution, making them difficult to interpret in diagnostic plots <cit.>. It is important, therefore, to find or adapt diagnostic techniques to overcome these limitations. A few alternatives have been proposed: the first one is to reduce the number of categories (grouping into two) and to do residual analysis by means of logistic regression, whose techniques are well consolidated (e.g. , , ). However, grouping categories leads to the loss of information. Another would be to fit the generalized logit models (for pairs of variables) separately, and to define residuals for each sub-model and apply diagnostic tools, as proposed by <cit.> with three categories. However, the maximum likelihood estimates from the separate fitted models differ from those obtained in the simultaneous fit, and their standard errors tend to be larger <cit.>. For ordinal responses, <cit.> defined a continuous residual vector for the individual structure, with three categories, based on the methodology of <cit.>. Also, they presented the deviance and Pearson residual vectors and plots of residuals versus covariates. However, this technique is not suitable to the nominal case. For nominal data with grouped structure, <cit.> defined a vector of residuals based on the projected residuals presented by <cit.>, and the Pearson residuals were presented by <cit.> to detect influential points. However, these methodologies require theoretical development and are not implemented in statistical software. A residual defined for a broad class of models that can be easily implemented is the randomized quantile residual <cit.>. For discrete data, these residuals are an extension of the quantile residuals for continuous data and they follow approximately a normal distribution if the estimated parameters are consistent, but it is important to investigate their properties in small sample sizes <cit.>. <cit.>. 
Therefore, the quantile residuals are an alternative for multinomial case associated to generalized logit models, but there is a lack of investigation of their performance. In addition to the adoption of quantile residuals, another alternative is to use distance metrics, such as Euclidean and Mahalanobis distances, to reduce the dimension of the ordinary residuals for diagnostic analyses. These metrics are widespread in the literature on multivariate analysis, when calculating how far two individuals are in the original variable space (e.g. by using principal components and cluster analysis) <cit.>. In the context of diagnostics, these distances have already been used to detect outliers in linear regression ( and ). However, there are no records of their use in models for nominal data. Here, we propose the adoption of quantile residuals and the use of multivariate distance metrics. Our objectives are: (i) to assess the normality of randomized quantile residuals for nominal categorical models; and (ii) to reduce the dimension of ordinary residuals associated with nominal data using Euclidean and Mahalanobis distances for grouped structures. We review models and residuals for nominal polytomous data in Section <ref>. Then, we present the randomized quantile residual and the distance metrics in Sections <ref> and <ref>, respectively. The framework based on randomized quantile residuals and distances for nominal responses, which are the contributions of this work, are presented in Section <ref>. We present results of simulation studies in Section <ref>, and illustrate with two applications from the literature in Section <ref>. Finally, we present concluding remarks in Section <ref>. § MODELS AND RESIDUALS FOR NOMINAL POLYTOMOUS DATA Statistical models for polytomous data (nominal or ordinal) are based on the multinomial probability distribution <cit.>. The definition of the linear predictor structure is essential when defining the model, and influences the construction of residuals, as well as the diagnostic techniques. §.§ Nominal data structures It is important to distinguish between individual and grouped data structures. To establish the notation consider a sample of subjects, i= 1, 2, …, n and the set of J categories A={1,2, …, J}. In the individual case, each subject is a single individual, which is classified in some category of set A. Then, the random vector referring to the individual i is given by 𝐘_i = (Y_i1,…,Y_iJ)', where Y_ij=1 if the response of individual i is in category j, j=1,2,…,J, and Y_ij=0 otherwise, with ∑_j = 1^J Y_ij = 1. For the grouped case, each subject is composed of a group of m_i individuals. Then, the random variable Y_ij represents the number of times category j was observed in m_i individuals, with ∑_j = 1^J Y_ij = m_i. In both cases, we have a multinomial trial, that is, it is assumed that the random vector 𝐘_i follows a multinomial distribution, Y_i ∼(m_i,_i), with parameters m_i and _i=(π_i1,…,π_iJ)', restricted to ∑_j = 1^J π _ij = 1. §.§ Generalized logit model The multinomial distribution belongs to the canonical multi-parametric exponential family, with a vector of canonical parameters = [ log( π _1/π _J), … ,log( π _J- 1/π _J)]', where θ _j = log( π _j/π _J), j=1,…,J-1, use canonical link functions. 
Considering a random sample of subjects of dimension n, i = 1,2,…, n, the generalized logit model is defined as logit[ π _ij(x_i)] = log[ π _ij(x_i)/π _iJ(x_i)] = α _j + ∑_k = 1^p β _jkx_ik= α _j +_j'x_i, j = 1, … ,J - 1, where J is the number of categories, π _j(x_i) is the probability of an individual response in the j-th category, x_i = (x_i1,x_i2,… ,x_ip)' is the vector of covariates, _j= (β_j1,β_j2,… ,β_jp)' represents the vector of parameters, and α _j is the intercept. According to <cit.>, the covariates can be quantitative, factors (using dummy variables) or both. Model <ref> compares each category with one chosen as a reference, generally the first or last category, but this choice can be arbitrary <cit.> <cit.>. Also, from equation (<ref>), we have: π_ij(x_i) = exp( α_j +_j'x_i)/1 + ∑_j = 1^J - 1exp( α_j +_j'x_i), j = 1, … ,J - 1, and the probability for the reference category: π_iJ(x_i) = 1 - [ π _i1(x_i) + ⋯ + π _i(J - 1)(x_i)]=1/1 + ∑_j = 1^J - 1exp( α _j +_j'x_i). The parameter estimation process is done by maximum likelihood, which consists of maximizing π_ij(x_i) to simultaneously satisfy the J-1 equations that specify the model. We present here a brief summary of the process, distinguishing the individual and grouped structures. First, consider the data with individual structure with the observed vector y_i = (y_i1, … ,y_iJ) satisfying ∑_j = 1^J y_ij = 1, with mean (Y_ij|𝐱_i)=π_ij(𝐱_i), j = 1, 2,…, J. Then, the log-likelihood function is given by l = log∏_i = 1^n {∏_j = 1^J [ π _ij( x_i)]^y_ij} = log∏_i = 1^n {∏_j = 1^J - 1[ π _ij( x_i)]^y_ij[ π _iJ( x_i)]^y_iJ}. Using (<ref>) and (<ref>) we have [ l = ∑_i = 1^n {∑_j = 1^J - 1y_ij( α _j + _j'x_i) - log[ 1 + ∑_j = 1^J - 1exp (α _j + _j'x_i)]}. ] Now, considering the grouped data where the observed vector y_i = (y_i1, … ,y_iJ) satisfies ∑_j = 1^J y_ij = m_i, with mean (Y_ij|𝐱_i)=m_iπ_ij(𝐱_i), j=1,…,J, the log-likelihood is given by [ l^* = ∑_i = 1^n {∑_j = 1^J y_ijlog[ π_ij( x_i)] + log[ m_i!/y_i1! …y_iJ!]}; = ∑_i = 1^n {∑_j = 1^J - 1y_ijlog[ π _ij( x_i)] + y_iJlog[ π _iJ( x_i)] + log[ m_i!/y_i1! …y_iJ!]} . ] and, similarly to the individual process, by successive substitutions one finds: [ l^* = ∑_i = 1^n {∑_j = 1^J - 1y_ij(α _j +_j'x_i) - m_ilog[ 1 + ∑_j = 1^J - 1exp (α _j +_j'x_i )] + log[ m_i!/y_i1! …y_iJ!]}. ] An iterative method such as Newton-Raphson can be used to maximize l and l^* to obtain the maximum likelihood estimates <cit.>. More details can be found in <cit.>. §.§ Residuals associated with models for nominal categorical data An important step in model diagnostic checking is residuals analysis, used to validate model assumptions and detect outliers or influential points <cit.>. The definition of the residuals as well as the analytical techniques are essential tools that contribute to this. §.§.§ Residuals for individual data The ordinary residuals measure the deviations between the observed values and the predicted probabilities. For model (<ref>) they are vectors of dimension J× 1 per individual, i=1,2,…,n, given by <cit.> r̂_i =y_i-_i=( y_i1 - π̂_i1,y_i2 - π̂_i2, … ,y_iJ - π̂_iJ)', where y_i=(y_i1,y_i2,…,y_iJ)' is the vector of observations with y_ij=1 if the individual response i belongs to category j and y_ij=0, otherwise, and _i=(π̂_i1,π̂_i2,…,π̂_iJ)' is the vector of predicted probabilities. These residuals do not follow a multivariate normal distribution, and when used in diagnostic plots, they may not be informative, since visual interpretation is not straightforward. 
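As a concrete illustration of these J-dimensional ordinary residuals, the short sketch below simulates individual nominal data from a generalized logit model, fits it by maximum likelihood and forms r̂_i = y_i − π̂_i. This is our own Python sketch (the paper's implementation is in R), and the use of statsmodels' MNLogit and the chosen parameter values are assumptions made only for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2023)
n, J = 200, 3
x = rng.normal(size=n)

# Simulate from a generalized logit model with category 1 (label 0) as reference
alpha = np.array([0.0, 1.5, 3.0])            # alpha_1 fixed at 0 for the reference category
beta  = np.array([0.0, -3.0, -5.0])
eta   = alpha + np.outer(x, beta)            # n x J matrix of linear predictors
pi    = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y     = np.array([rng.choice(J, p=p) for p in pi])   # labels 0, ..., J-1

# Maximum likelihood fit and n x J matrix of estimated probabilities
X      = sm.add_constant(x)
fit    = sm.MNLogit(y, X).fit(disp=False)
pi_hat = np.asarray(fit.predict(X))

# One-hot responses and the J-dimensional ordinary residual vectors r_i = y_i - pi_hat_i
y_onehot  = np.eye(J)[y]
residuals = y_onehot - pi_hat
print(residuals[:5].round(3))
```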
The Pearson and deviance residuals for model (<ref>) are given, respectively, by the vectors r_i^P = [ r_i1^P,r_i2^P, …, r_iJ^P]^' and r_i^D = [ r_i1^D,r_i2^D, …, r_iJ^D]^', whose elements are obtained by <cit.> r_ij^P = ( y_ij - π̂_ij)/√(π̂_ij(1 - π̂_ij)) and r_ij^D = ( y_ij - π̂_ij)√(2[ ( y_ij-1)log( 1 - π̂_ij) - y_ijlog( π̂_ij)]), where j=1,2,…,J. These definitions are extensions of the residuals used in logistic regression. Specifically for variables on the ordinal scale, <cit.> proposed the surrogate residuals, that are based on the methodology presented by <cit.>. As the scope of this work is centered on the nominal measurement scale, we leave it to the interested readers to consult <cit.> for more details. §.§.§ Residuals for grouped data The J-dimensional ordinary residuals vector for model (<ref>) per subject i, i=1,2,…,n, each with m_i individuals, according to <cit.> is defined by r̂_i = y_i-m_i ×_i/m_i = 1/m_i( y_i1 - m_iπ̂_i1,y_i2 - m_i π̂_i2, … ,y_iJ - m_iπ̂_iJ)', where y_i=(y_i1,y_i2,…,y_iJ)' is the vector of observed counts, such that ∑_j = 1^J y_ij = m_i, and _i=(π̂_i1,π̂_i2,…,π̂_iJ)' is the vector of predicted probabilities. The J-dimensional vector of Pearson residuals is given by r_i^P = [ r_i1^P,r_i2^P, …, r_iJ^P]' with elements <cit.> r_ij^P = ( y_ij - m_i π̂_ij)/√(m_iπ̂_ij(1 - π̂_ij)), where i=1,2,…,n and j=1,2,…,J. § RANDOMIZED QUANTILE RESIDUALS The quantile residual was proposed by <cit.> for continuous variables. For a continuous response, y_i, the quantile residual is defined by r_i^Q = Φ ^ - 1{F(y_i;θ̂_i,ϕ̂) }, i = 1, … ,n, where Φ ^ - 1 is the inverse of the cumulative distribution function (CDF) of the standard normal distribution, F(y_i;θ̂_i,ϕ̂) is the CDF associated with the response variable, θ̂_i is the maximum likelihood estimate of parameter θ_i and the ϕ̂ is the estimated dispersion parameter. If the response y_i is discrete, we introduce randomization through a uniform random variable in the CDF for each individual, obtaining the randomized quantile residual r_i^Q = Φ ^ - 1{F(u_i) }, i = 1, … ,n, where u_i represents a uniform random variable between a_i = lim _y →y_iF(y;θ̂_i,ϕ̂) and b_i = F(y_i;θ̂_i,ϕ̂). Under a well-fitting model, these residuals follow, approximately, a normal distribution. The quantile residuals have received little attention in the literature as model diagnostic tools until recently. For example, <cit.> used the standardized quantile residuals in goodness-of-fit tests for generalized linear models with inverse Gaussian and gamma variables. <cit.> investigated the performance of the quantile residual for diagnostics of the beta regression model and <cit.> used the standardized randomized quantile residuals to examine the goodness-of-fit of models applied to count data. Here, we introduce their use with polytomous data associated with generalized logit models. § DISTANCES Consider having n individuals denoted by the random vectors z_i = ( z_i1,z_i2, … ,z_iq)', i=1,2,…,n. Each individual is represented by a point in q-dimensional space, with each dimension representing a variable <cit.>. Distance metrics can quantify how far two individuals are by a scalar which measures their proximity. The Euclidean and Mahalanobis distances are widely known (see <cit.> and <cit.>) and can be calculated in the original scale of the response variable <cit.>. 
The Euclidean distance between individuals i and t is defined by d_it^E = √((z_i - z_t)'(z_i - z_t)) = √(∑_k = 1^q ( z_ik - z_tk)^2) , where z_.k is the k-th variable, with k=1,2,…, q, and i, t=1,2,…, n. According to <cit.>, this measure is the most popular to calculate the distance between individuals in q-dimensional space. If the individuals are correlated, the covariance or correlation between them can be considered when calculating the distance <cit.>. In this case, the Mahalanobis distance is useful, and is expressed by d_it^M = (z_i - z_t)^'C^ - 1(z_i - z_t), where C^ - 1 is the inverse of the q × q variance-covariance matrix. In the case where C=I, with I representing the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If C is a diagonal matrix, then it results in the standardized Euclidean distance <cit.>. The Euclidean distance yields quicker calculations than the Mahalanobis distance, but considering the covariances between variables can be important <cit.>. However, <cit.> reported that some issues must be observed when using Mahalanobis distances, such as problems that may lead to singular covariance matrices and the restriction that the sample size that must be greater than the number of variables. § METHODS Here, we describe the methodological procedures associated with residual analysis of generalized logit models fitted to nominal polytomous data. We propose the use of quantile residuals for individual data, and a new methodology to reduce the dimension of ordinary residuals associated with grouped data using distance metrics. §.§ Individual data For individual data, we obtain the standardized randomized quantile residuals considering the cumulative distribution function (CDF), F(y_i;_i,ϕ̂), for the response vector 𝐲_i given the vector 𝐱_i, i=1,2,…,n. The CDF for the multinomial distribution follows from its relationship with an independent Poisson sum, given a fixed total, i.e., the multinomial CDF is computed as the convolution of J truncated Poisson random variables, as shown by <cit.> and implemented in R through the package (<cit.>). Now let _i=( π̂_i1(x_i),π̂_i2(x_i), … ,π̂_iJ(x_i))' be the vector of estimated probabilities. Consider the probability mass function f(y_i;_i), corresponding the response of individual i in category j, y_ij = 1, and y_ij = 0 otherwise. Then, the estimated CDF for individual i is F^*(y_i,u_i;_i)=F(1-y_i; _i)+u_i × f(y_i;_i), where 1 is a J×1 unit vector, and u_i is a realization of a random variable with uniform distribution, i.e. U_i∼ (0,1). The randomized quantile residual for a polytomous response y_i is given by r_i^Q= Φ^-1[F^*(y_i,u_i;_i)], where Φ^-1 is the quantile function of the standard normal distribution. We have therefore a scalar value for each i, and these residuals are approximately normal under the null hypothesis that the model was correctly specified. Here, we used a standardized version of the randomized quantile residuals, given by r_i^S = r_i^Q - r̅^Q/s_r^Q, where r̅^Q and s_r^Q are the mean and standard deviation of the residuals r^Q_i, respectively. For individual data, distance measurements do not effectively contribute to the analysis of residuals, since regardless of the number of individuals in the sample, the individual structure always leads to J unique distance measurements, under the assumption that the model is correctly specified, i.e, (r_i|x_i)=0. §.§ Grouped data For grouped data we can use a similar procedure to construct the randomized quantile residuals (eq. 
<ref>), with the following modification in the estimated CDF (eq. <ref>): F^*(y_i,u_i;_i)=F(𝐦-y_i; _i)+u_i × f(y_i;_i), where 𝐦 represents a J×1 vector for group size, y_i represents the counts vector for each category in the group i, in which the sum is m. Additionally, unlike the individual case, we reduce the dimension of the vector of ordinary residuals using distance metrics, namely the Euclidean and Mahalanobis distances. Under the assumption that the model is specified correctly, we have that (r_i|x_i)=0, which is a null vector of dimension J. We have that the Euclidean and Mahalanobis distances between the residual vector i and the null vector are, respectively, written as d_i^E = √((r_i - 0)'(r_i - 0))= √(∑_j = 1^J r^2_ij) and d^M_i = (r_i - 0)'C^ - 1(r_i - 0)= r_i'C^ - 1r_i, where C is the J × J covariance matrix of the residuals. §.§ Residual analytic tools Once the randomized quantile residuals and distance measures are defined, formal (tests) and informal (plots) techniques are employed for diagnostics. Formally, a powerful and widely known test for detecting deviations from normality due to asymmetry or kurtosis (or both) is the Shapiro-Wilk test <cit.>. Informally, one can first visualize the distribution of residuals through a histogram, comparing its shape with that of the normal distribution. In the plot of residuals versus fitted values, it is possible to observe the existence of variance heterogeneity or the presence of outliers. The expected pattern in this plot is the zero-centered distribution of residuals with constant amplitude <cit.>. Additionally, the half-normal plot with a simulated envelope can be used to assess whether the observed data are a plausible realization of the fitted model. The absolute values of a given diagnostic measure (residuals or distances) are compared to the expected order statistics of the half-normal distribution obtained by Φ ^ - 1[ ( i + n - 1 / . - 8)/2n + 1 / . - 2], where Φ ^ - 1 is the standard normal quantile function. Here, we follow the steps established by <cit.> for the construction of these graphs. Given a well-fitted model, we expect most points to lie within the simulated envelope. § SIMULATION STUDIES We carried out simulation studies to evaluate the performance of the standardized quantile residuals for individual and grouped data, as well as distance measures for grouped data only. §.§ Models and scenarios We simulated from generalized logit models with 3, 4 and 5 response categories for both data structures (individual and grouped). We used two types of linear predictors: one with an intercept and a single continuous covariate effect (eq. <ref>), and one also including the effect of a factor with two levels (eq. <ref>). In the data simulation process, we considered sample sizes of 50, 100 and 200, and for grouped data we used the group dimensions m∈{5,10,15}. For model 1 the response variables were simulated from: log( π _ij/π _i1) = α _j + β _jx_i, j=2,…,J, where x_i are realizations of a standard normal random variable, and J=3,4,5 according to the number of categories. 
The true parameter values were set as: θ_(J=3) = (α_2, α_3, β_2, β_3) = (1.5, 3.0, -3.0, -5.0) θ_(J=4) = (α_2, α_3, α_4, β_2, β_3, β_4) = (1.5, 3.0, 2.0, -3.0, -5.0, -4.0) θ_(J=5) = (α_2, α_3, α_4, α_5, β_2, β_3, β_4, β_5) = (1.5, 3.0, 2.0, 4.0, -3.0, -5.0, -4.0, -7.0) For model 2, the linear predictor was: log( π _ij/π _i1) = α _j +β_1jx_i1+β_2jx_i2, j=2, …,J, where x_i1 are realizations of a standard normal random variable, x_i2 is a dummy variable (factor with two levels), and J=3,4,5 according to the number of categories. The true values used were: θ_(J=3) = (α_2, α_3, β_12, β_13, β_22, β_23) = (1.5, 3.0, -3.0, -5.0, 1.5, 2.5) θ_(J=4) = (α_2, α_3, α_4, β_12, β_13, β_14, β_22, β_23, β_24) = (1.5, 3.0, 2.0, -3.0, -5.0, -4.0, 1.5, 2.5, 3.0) θ_(J=5) = (α_2, α_3, α_4, α_5, β_12, β_13, β_14, β_15, β_22, β_23, β_24, β_25) = (1.5, 3.0, 2.0, 4.0, -3.0, -5.0, -4.0, -7.0, 1.5, 2.5, 3.0, 3.5) All simulations were implemented in software <cit.>, using the package to fit the multinomial models <cit.>, and the package <cit.> ( function) to generate the half-normal plots with a simulated envelope. §.§ Results for individual data We first compare residuals obtained from fitting model 1 to the data generated by model 1 itself, and from fitting the null model (intercept only; scenario 1). The distribution of the p-values of the Shapiro-Wilk test is presented in Figure <ref>, in which we observe that the residuals under the null model are mostly deemed non-normal, while a uniform pattern is seen for the p-values of the residuals obtained from the correct model. Similar patterns are observed for the scenario where model 2 was considered (scenario 2; Figure <ref>). It should also be noted, in both scenarios (model 1 and model 2), that the number of categories has no influence on the residual analysis, unlike the sample size, but this is also related to the sensitivity of the Shapiro-Wilk test. Specifically, the normality of residuals was rejected by the Shapiro-Wilk test (p < 0.05) in most simulations considering the null model. As an illustration, with J=3 and N=50, normality was rejected 86.6% of the time when considering model 1, and 92.7% of the time when considering model 2. However, when considering the correct linear predictors, normality was rejected only for 4.0% and 5.7% of the simulated datasets for models 1 and 2, respectively (i.e., close to 5%, as expected). This shows that we may identify lack-of-fit of a multinomial model fitted to individual data by analysing the normality of the randomized quantile residuals. §.§ Results for grouped data The results for the grouped data were, in general, similar for all m values, indicating that the group dimension was not a source of variation for the standardized randomized quantile residuals nor for the Euclidean and Mahalanobis distance measures in this study. Therefore, we present here the results for m=10, with the remaining results available at <https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse>. Initially, we present the distribution of p-values of the Shapiro-Wilk test applied to the quantile residuals for grouped data, considering scenario 3 (model 1 versus null) and scenario 4 (model 2 versus null). Just as in the individual case, the results were satisfactory, i.e., the normality of residuals was rejected by the Shapiro-Wilk test (p < 0.05) in most simulations considering the null model, as can be observed from Figures <ref> (scenario 3) and <ref> (scenario 4).
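For concreteness, the individual-data normality checks reported above can be reproduced in outline with the sketch below (our Python illustration; the study's own R code is linked in the supplementary material). For m_i = 1, the construction F^*(y_i,u_i;π̂_i) = F(1 − y_i;π̂_i) + u_i f(y_i;π̂_i) given earlier reduces to (1 − π̂_ij) + u_i π̂_ij, where j is the observed category; this simplification is ours.

```python
import numpy as np
from scipy.stats import norm, shapiro
import statsmodels.api as sm

def randomized_quantile_residuals(y, pi_hat, rng):
    # For m_i = 1: F* = (1 - pi_hat_ij) + u_i * pi_hat_ij, j the observed category
    p_obs  = pi_hat[np.arange(len(y)), y]
    u      = rng.uniform(size=len(y))
    f_star = (1.0 - p_obs) + u * p_obs
    r      = norm.ppf(np.clip(f_star, 1e-12, 1 - 1e-12))
    return (r - r.mean()) / r.std(ddof=1)        # standardized version

# Simulate from "model 1" with J = 3 and the parameter values listed above
rng = np.random.default_rng(1)
n, J = 100, 3
x   = rng.normal(size=n)
eta = np.column_stack([np.zeros(n), 1.5 - 3.0 * x, 3.0 - 5.0 * x])
pi  = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y   = np.array([rng.choice(J, p=p) for p in pi])
X   = sm.add_constant(x)

for label, design in [("correct model", X), ("null model", np.ones((n, 1)))]:
    fit    = sm.MNLogit(y, design).fit(disp=False)
    pi_hat = np.asarray(fit.predict(design))
    r      = randomized_quantile_residuals(y, pi_hat, rng)
    print(f"{label}: Shapiro-Wilk p-value = {shapiro(r).pvalue:.3f}")
```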
Next, we present the results for the distance measures, considering model 1 versus the null model using the Euclidean distance (scenario 5) and the Mahalanobis distance (scenario 6). For both scenarios, it was possible to distinguish the true model from the null model by using half-normal plots with a simulated envelope for the distances (Figure <ref>). The median of the percentage of points outside the envelope is less than 5% for model 1, considering both distances, as opposed to almost 100% for the null model. Also, the distribution of these values within each level appears to be symmetric and has approximately the same variability (Figures <ref> and <ref>). Similar conclusions can be drawn for model 2 for the Euclidean (Figure <ref>) and Mahalanobis distances (Figure <ref>), given that the median of the percentage of points outside the envelope is less than 5% for model 2 and close to 100% for the null model using both distances. This confirms that the proposed diagnostics are useful for identifying well-fitting multinomial models for grouped nominal data. § APPLICATIONS Here, two motivating studies available in the literature are considered to illustrate the procedures presented in Sections <ref> and <ref>. §.§ Wine Classification The first dataset (individual structure) arises from a study carried out by <cit.>, involving wine classification techniques (<cit.>). In this study, a chemical analysis of 178 wines from three grape cultivars of the Liguria region in Italy was carried out at the Institute of Pharmaceutical and Food Analysis and Technologies, with the objective of classifying the different cultivars. The response variable represents the type of cultivar, assuming values {1,2,3}. In the analysis, the amounts of 13 chemical constituents of each cultivar were determined, among which are magnesium and phenols, which can be considered good indicators of wine origin <cit.>. Further details as well as the dataset are available in the <cit.> package for software <cit.>. We define the following linear predictors: M1: intercept only (null model); M2: intercept + phenols; M3: intercept + magnesium + phenols (additive model) and M4: intercept + magnesium * phenols (interaction model). The final model was selected by applying likelihood-ratio (LR) tests to a sequence of nested models, and we obtained: M1 × M2 LR = 123.98 (p<0.01); M2 × M3 LR = 13.14 (p<0.01) and M3 × M4 LR = 1.25 (p=0.54), all statistics associated with 2 degrees of freedom. Therefore, model M3 was selected. The Akaike Information Criterion (AIC) was also used to compare models, and the lowest AIC value (261.50) was for model M3, but this measure does not verify the goodness-of-fit of the model or validate the distributional assumption. The histogram of the randomized quantile residuals (Figure <ref>(a)) indicates that the residuals of model M3 are normally distributed. This is confirmed by the Shapiro-Wilk test (p=0.167). Also, in the plot of residuals versus fitted values (Figure <ref>(b)), the residuals vary mainly between -2 and 2 and no pattern is evident, which also suggests that model M3 is well-fitted to the data. The half-normal plots with a simulated envelope for the standardized randomized quantile residuals are shown in Figure <ref> for model M3: intercept + magnesium + phenols (a) and the null model (b). It appears that the model fits the data well, since no point is outside the envelope. §.§ Student preference The second dataset (grouped structure) refers to the choice made by high school students among different programs.
This sample of 200 individuals was made available in 2013 by the statistical consulting group at the University of California at Los Angeles (UCLA), being used in studies involving polytomous data (e.g. <cit.>, <cit.> and <cit.>). The response variable is the choice by a program (1: academic, 2: general, 3: vocational). There are 11 covariates available in this study, including socioeconomic status, gender, and scores in specific subjects (mathematics, social studies, writing, among others). Here, we consider the maths score as a continuous covariate, to verify if the score contributed to the student’s decision. The data were organized in N=34 including groups varying from m=2 to m=13. For more details, see <cit.>. Considering the null hypothesis that program choice is independent of maths score, we employed a LR test to compare a null model (intercept only) with a model including the maths score in the linear predictor (model M1). We obtained a test statistic of 51.97, on 2 degrees of freedom (p<0.01). Model M1 also presented a lower AIC (182.81) when compared to the null model (230.77). Based on this result, it is concluded that maths score is significant to explain program choice. The half-normal plots with a simulated envelope indicate that model M1 (intercept + maths score) is suitable to analyse the data, for both Euclidean (Figure <ref> and Mahalanobis (Figure <ref>) distances. § CONCLUSION In this work we presented alternatives to residual analysis for nominal data with individual and grouped data structures using randomized quantile residuals and distance measures, respectively. The simulation studies showed that these residuals and the proposed distances presented good performance in assessing model goodness-of-fit with continuous and categorical covariates. Therefore, the randomized quantile residuals and the distances may be potential tools for checking diagnostics of generalized logit models. However, the analysis of residuals for polytomous data has many challenges yet to be explored. Studies focusing on small sample sizes are necessary to assess the fit of the model, which could lead to sampling uncertainty in the residuals and distances. Venues for future work also include simulation studies focusing on longitudinal designs. § ACKNOWLEDGMENTS This work derived from the thesis entitled “Residuals and diagnostic methods in models for polytomous data” with support from the Brazilian Foundation, Coordenação de “Coordenação de Aperfeiçoamento de Pessoal de Nível Superior” (CAPES) process number 88882.378344/2019-01. This publication also had the additional support from Brazilian Fundation-CAPES process number 88887.716582/2022-00 and from Science Foundation Ireland under grant number 18/CRT/6049. § SUPPLEMENTARY MATERIAL All R code, including the implementations of the proposed methods, are available at <https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse>. tfs
http://arxiv.org/abs/2307.01305v1
20230703192205
Efficient Communication for Pursuit-Evasion Games with Asymmetric Information
[ "Dipankar Maity" ]
eess.SY
[ "eess.SY", "cs.GT", "cs.SY" ]
Social Impressions of the NAO Robot and its Impact on Physiology Ruchik Mishra Department of Electrical and Computer Engineering University of Louisville Louisville, USA [email protected] Karla Conn Welch Department of Electrical and Computer Engineering University of Louisville Louisville, USA [email protected] Received ; accepted ============================================================================================================================================================================================================================================================================================================== We consider a class of pursuit-evasion differential games in which the evader has continuous access to the pursuer's location but not vice-versa. There is an immobile sensor (e.g., a ground radar station) that can sense the evader's location and communicate that information intermittently to the pursuer. Transmitting the information from the sensor to the pursuer is costly and only a finite number of transmissions can happen throughout the entire game. The outcome of the game is determined by the control strategies of the players and the communication strategy between the sensor and the pursuer. We obtain the (Nash) equilibrium control strategies for both the players as well as the optimal communication strategy between the static sensor and the pursuer. We discuss a dilemma for the evader that emerges in this game. We also discuss the emergence of implicit communication where the absence of communication from the sensor can also convey some actionable information to the pursuer. § INTRODUCTION Pursuit-Evasion games <cit.> have been applied to investigate a wide class of civilian and military applications involving multi-agent interactions in adversarial scenarios <cit.>. While several variations ranging from complex dynamic models for the players (e.g., <cit.>) to complex geometry of the environment (e.g., <cit.>) to limited visibility of the players (e.g., <cit.>) have been considered, one of the prevailing assumptions have been the continuous sensing capability for the players, with the exception of <cit.> among few others. By `continuous sensing' we refer to the capability that enables the players to keep their sensors turned on continuously for the entire duration of the game. Extensions of pursuit-evasion games in the context of sensing limitations have mainly considered limited sensing range e.g., <cit.>, limited field of view e.g., <cit.>, and noisy measurements e.g., <cit.>, but the challenges associated with the lack of continuous sensing remains unsolved. In this paper, we revisit the classical linear-quadratic pursuit-evasion game where the pursuer does not have a continuous sensing capability. In particular, the pursuer relies on a remotely located sensor (e.g., a radar station) to sense the evader's position. Upon request, the remote sensor can perfectly sense the location of the evader and share it with the pursuer.[ One may alternatively also consider a scenario where the pursuer has an onboard sensor to measure the evader's location, however, it can only use the sensor intermittently due to, for example, energy and computational constraints. ] The communication channel between the pursuer and the remote sensor is assumed to be noiseless, instantaneous (i.e., no delay), and perfectly reliable (i.e., no packet losses). The pursuer intermittently requests the evader's location to update its pursuit strategy. 
Due to the resource (e.g., energy) constraints, the pursuer can only make a finite number of requests and it aims to minimize the number of communications. On the other hand, the evader is able to sense the pursuer continuously and is aware of the sensing limitation of the pursuer. The objective of this work is to analyze the game under this asymmetric sensing limitation and obtain: (i) the optimal communication times between the sensor and the pursuer and, (ii) the equilibrium control strategies for the pursuer and the evader. It is noteworthy that the majority of the existing work not only considers continuous sensing (or no sensing at all) but also often assumes that the sensing capability is superior for the pursuer; e.g., perfect measurement for the pursuer and noisy measurement for the evader <cit.>. To the best of our knowledge, some of the earliest works involving intermittent measurements for linear quadratic differential games were discussed in <cit.> where both the players had access to only intermittent measurements. These players, however, had to come to an agreement on the sensing instances, which was solved by an optimization problem. This restriction was later relaxed in <cit.>. These works were also extended to discrete time <cit.>, infinite horizon <cit.>, and recently to asset defense scenarios <cit.>. In this paper, we consider a three-player problem involving a pursuer, an evader, and a remote sensor, where the remote sensor is a passive player that does not make any decision about the sensing times. The pursuer intermittently communicates with the remote sensor which results in an intermittent sensing capability for the pursuer. This problem has similarity with the sampled-data H_∞ optimal control problem studied in <cit.>. In this paper, we not only consider an intermittent sensing (equivalent to the sampled-data) framework, but also we obtain the optimal time instances for the sensing/communication. Intermittent sensing/communication has been a subject of active research in networked control systems. Efficient frameworks such as event- and self-triggered controls have been developed to reduce the communication/sensing needs. These methods have also been used for studying games, e.g., <cit.>. Although these methods use intermittent sensing/communication, they do not study the optimality of the sensing/communication strategy, except in some special cases, e.g., <cit.>. This is primarily due to the fact that intermittent sensing/communication makes the problem analytically and computationally intractable. In contrast to those works, our objective in this paper is to find the optimal number of required communications as well as the optimal communication times. The rest of the paper is organized as follows: In Section <ref>, we describe the linear-quadratic pursuit-evasion game problem. In Section <ref>, we discuss the closed- and open-loop equilibria for this game. The pursuer has continuous access to the game state in the closed-loop case whereas, in the open-loop case, the pursuer only knows the initial state of the game. Next, we study the game with intermittent communication in Section <ref>. We derive the optimality conditions for the communication as well as the equilibrium control strategies for the pursuer and the evader. Finally, we discuss some open problems and conclude the paper in Section <ref>. 
§ PROBLEM FORMULATION We consider a linear quadratic pursuit-evasion game where the state of the game follows the dynamics ẋ(t) = A x(t) + B (t) + C (t), x(t_0) = x_0, where x(t)∈^n_x is the state at time t, (t)∈^n_p and u_e(t) ∈^n_e are the inputs of the pursuer and the evader, respectively, at time t. The initial state x_0 is known to both the players at the beginning of the game. The payoff function for this game is J̅ = ∫_t_0^t_f( x(t)^2_Q + (t)^2_R_p - (t)^2_R_e) ṭ + x(t_f)^2_Q_f, where R_p, R_e ≻ 0, and Q, Q_f ≽ 0 are symmetric matrices. Although the game is deterministic, the payoff may not necessarily be deterministic if the players adopt randomized strategies for their inputs. Therefore, in the subsequent section, wherever it is appropriate, we will consider the expected value of the payoff function, i.e., J ≜_∼(, )[J̅], where μ_p and μ_e denote the strategies of the pursuer and the evader, respectively. _∼(, )[·] denotes expectation with respect to the randomization induced by and . The strategies and are measurable functions of the information sets of the players. To describe the information sets of the players, we first denote m(t) to be the total number of sensing requests upto time t and (t) ≜{t_0, t_1, …, t_m(t)} be the set of sensing instances upto time t, where t_0 is the initial time, and t_i< t_i+1 for all i, and t_m(t) is the latest sensing time such that t_m(t)≤ t. Furthermore, we denote _e(t) ≜{x(s), (t)  |  s≤ t } to be the information available to the evader at time t and _p(t) ≜{x(s'), (t) | s'∈(t)} to be the pursuer's available information. In the subsequent sections we will suppress the time argument, as much as possible, for notational brevity, e.g., we shall use x instead of x(t). § CLOSED-LOOP AND OPEN-LOOP EQUILIBRIA In this section, we revisit the classical results on close-loop equilibrium and also discuss the open-loop case for the pursuer. To that end, let matrix P follow the Riccati equation Ṗ + A P + PA + Q + P(CR_e^-1C - BR_p^-1 B)P = 0, P(t_f) = Q_f. The solution to the Riccati equation is well defined for all t≤ t_f if the assumption CR_e^-1C - BR_p^-1 B≺ 0 is satisfied. Without this condition, the Riccati equation may have a finite escape time (conjugate point) in the interval [t_0, t_f] and the cost J̅ becomes infinity. The matrices are chosen such that CR_e^-1C - BR_p^-1 B≺ 0. Loosely speaking, the above assumption implies that the pursuer has more controllability than the evader, as also discussed in <cit.>. Provided the solution to (<ref>) is well defined for the interval [t_0, t_f] –which is now guaranteed by Assumption <ref>– one may verify that J̅ in (<ref>) can be rewritten as (e.g., see <cit.>) J̅ = x_0^2_P_0 + ∫_t_0^t_f + R_p^-1B P x^2_R_pṭ- ∫_t_0^t_f - R_e^-1C P x^2_R_eṭ, where we defined P_0 ≜ P(t_0). When both the players have access to the state for all t, we observe from (<ref>) that the (Nash) equilibrium inputs for the players are = - R_p^-1B P x ≜(_p) , = R_e^-1C P x ≜(_e). Substituting these strategies in the dynamics (<ref>) yields x(t) = Φ(t) x_0 where the state transition matrix Φ satisfies Φ̇= (A- BR_p^-1B P + CR_e^-1C P ) Φ, with the initial condition Φ(t_0) = I, where I is the identify matrix. Let us define the `open-loop' input pair (, ): = - R_p^-1B P Φ x_0 , = R_e^-1C P Φ x_0. Using (<ref>) and (<ref>) we notice that (t) = (t) and (t) = (t) for all t. Therefore, it appears from (<ref>) that the players only need to know x_0 since everything else can be computed offline. 
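To make the preceding remark concrete, the sketch below (ours, not part of the paper) carries out that offline computation numerically: it integrates the Riccati equation for P backward from Q_f, propagates Φ forward, and assembles the open-loop pursuer input; the system matrices are illustrative assumptions chosen to satisfy Assumption 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system data (assumed, not from the paper), chosen so that
# C Re^{-1} C' - B Rp^{-1} B' is negative definite (Assumption 1).
n = 2
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.eye(n)
C  = 0.5 * np.eye(n)
Q, Qf  = np.eye(n), np.eye(n)
Rp, Re = 0.25 * np.eye(n), 0.5 * np.eye(n)
t0, tf = 0.0, 1.0
S = C @ np.linalg.inv(Re) @ C.T - B @ np.linalg.inv(Rp) @ B.T

# Backward integration of  P' = -(A'P + PA + Q + P S P),  P(t_f) = Q_f.
def riccati_rhs(t, p_flat):
    P = p_flat.reshape(n, n)
    return -(A.T @ P + P @ A + Q + P @ S @ P).ravel()

sol_P = solve_ivp(riccati_rhs, (tf, t0), Qf.ravel(), dense_output=True, rtol=1e-8)
P_of = lambda t: sol_P.sol(t).reshape(n, n)

# Forward integration of  Phi' = (A - B Rp^{-1} B' P + C Re^{-1} C' P) Phi,  Phi(t_0) = I.
def phi_rhs(t, phi_flat):
    Phi, P = phi_flat.reshape(n, n), P_of(t)
    Acl = A - B @ np.linalg.inv(Rp) @ B.T @ P + C @ np.linalg.inv(Re) @ C.T @ P
    return (Acl @ Phi).ravel()

sol_Phi = solve_ivp(phi_rhs, (t0, tf), np.eye(n).ravel(), dense_output=True, rtol=1e-8)

# Open-loop pursuer input for a given initial state x_0.
x0 = np.array([1.0, 0.0])
def u_p_open(t):
    return -np.linalg.inv(Rp) @ B.T @ P_of(t) @ sol_Phi.sol(t).reshape(n, n) @ x0

print(u_p_open(0.5))
```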
If (, ) is an equilibrium pair (producing the same payoff as the pair (,)), then in our problem, it would imply that there is no need for communication between the remote sensor and the pursuer. However, (, ) is not necessarily an equilibrium pair, even though it is derived from the equilibrium one (, ). To best illustrate this, we consider the following example. Consider the pursuit-evasion game with x_p ∈^2 and x_e ∈^2 denoting the states of pursuer and the evader, respectively. They have the dynamics ẋ_p = and ẋ_e = u_e. The state of the game is x = [x_p, x_e] and the objective function is J̅ = ∫_0^1 (1/4^2 - 1/2u_e^2) ṭ + x_p(1) - x_e(1)^2. The initial states for the players are x_p(0) = [0, 0] and x_e(0) = [1, 0]. For this example, we will explicitly write I_n and 0_n to denote, respectively, the identity and the zero matrices of dimension n. In this example, A = 0_4, B = [ I_2; 0_2 ] and C = -[ 0; I_2 ], and Q_f = [ I_2 - I_2; -I_2 I_2 ], R_e = 1/2, R_p = 1/4, t_0=0, and t_f=1. By solving (<ref>), we obtain P(t) = 1/3-2t[ I_2 -I_2; -I_2 I_2 ]. The open-loop inputs, defined in (<ref>), are found to be = [ 4/3,   0], = [ 2/3,   0]. The resulting payoff from the strategy pair (, ) is J̅ = 1/3. When the pursuer's strategy is fixed to , the evader has an incentive to choose a strategy different from . To see this, let us pick an arbitrary strategy = [-c,   0] where c>0 is some constant. Therefore, x_p(1) = [4/3,  0] and x_e(1) = [1-c,   0], and the payoff from this strategy is J̅ = ∫_0^1 (1/4(4/3)^2 - 1/2c^2)ṭ + (c + 1/3)^2 = 1/2c^2 + 2/3c + 5/9. The resulting payoff is not bounded from above as a function of c. Since the evader is aiming to maximize J̅, it can drive the payoff to infinity. Example <ref> demonstrates that the pursuer may not commit to the open-loop equivalent of the equilibrium strategy without sacrificing performance. Moreover, as shown in this example, the payoff resulting from such an open-loop execution can be arbitrarily high (i.e., payoff diverging to infinity). While the difference in the payoffs resulting from and is infinite in the chosen example, one might ask whether it is always the case for any such pursuit-evasion games represented by (<ref>)-(<ref>). Furthermore, one might be interested in the payoff when the pursuer has intermittent access to the state x(t). These are some of the questions that we investigate in the rest of the paper. More precisely, we answer the following fundamental questions: (i) Does there exist an intermittent communication strategy between the pursuer and the remote sensor that can ensure the same payoff as the one obtained from continuous communication? (ii) If such an intermittent communication strategy exists, what is the minimum number (N_min) of required communications, and what are the optimal time instances (t_i's) for the communications? and finally, (iii) If the available number of communications is less than N_min, how much will the payoff degrade? § GAME UNDER INTERMITTENT COMMUNICATION Since the communication between the remote sensor and the pursuer is intermittent instead of continuous, and the total number of communications is bounded, the evader may have an incentive to deviate from its earlier equilibrium strategy of = R_e^-1C Px. Without loss of generality, let us assume that the evader follows = R_e^-1C Px + w, where w(t) ∈^n_e is an input vector that is chosen by the evader strategy . The input w(t) can depend on the state and it is also allowed to be a random variable. 
Allowing w(t) to be a random variable implies that the evader can employ randomized/mixed strategies. Our first objective in this section is to investigate the existence of an intermittent communication strategy with the same payoff as the continuous communication strategy. To that end, let {t_i}_i=1^N denote the N communication instances where t_i < t_i+1 < t_f for all i=1,…, (N-1). For national brevity, we further introduce t_N+1 = t_f and the 0-th communication instance is defined to be the initial time of the game t_0. §.§ Existence of an Intermittent Communication Strategy To show the existence of an intermittent strategy that performs equally good as the continuous one, we first compute the payoff for a given set of communication instances = {t_i}_i=0^N+1. For a given , let the pursuer follow the strategy ẋ̂̇ = (A-BR_p^-1B P + CR_e^-1C P)x̂, x̂(t_i) = x(t_i), = - R_p^-1B P x̂. Later we will prove that (<ref>) is indeed the optimal strategy for the pursuer. Given the evader strategy to be (<ref>) and the pursuer strategy to be (<ref>), we have ẋ = A_1x - BR_p^-1B P x̂ + C w, where A_1 ≜ A+C R_e^-1C P. Let e(t) ≜ x(t) - x̂(t) denote the difference between the true state x and the pursuer's estimate x̂ at time t. Using (<ref>) and (<ref>), one may verify that, for all t∈ [t_i, t_i+1 ) and for all i = 0, …, N, ė = A_1 e + C w t∈ [t_i, t_i+1 ) e(t_i) = 0. Substituting (<ref>) and (<ref>) in (<ref>) yields J̅ = x_0^2_P_0 + ∫_t_0^t_f( R_p^-1B P e^2_R_p - w^2_R_e) ṭ. Since w is potentially a random process, we consider the expected payoff J = x_0^2_P_0 + ∫_t_0^t_f_∼μ_e(e^2_P B R_p^-1 B P - w^2_R_e) ṭ, where the expectation is with respect to the randomness introduced by the evader strategy . The expected cost (<ref>) can be rewritten as J = x_0^2_P_0 + ∑_i=0^N ∫_t_i^t_i+1_∼μ_e(e^2_P B R_p^-1 B P - w^2_R_e) ṭ. Since, e(t_i) resets to zero regardless of the evader's strategy, the choice of w in the interval [t_i, t_i+1) does not affect the costs ∫_t_j^t_j+1_∼μ_e(e^2_P B R_p^-1 B P - w^2_R_e) ṭ for any j i. Therefore, (<ref>) is more amenable than (<ref>) for computing the optimal w that maximizes J. Let us now consider the evader's optimal control problem for the interval [t_i, t_i+1), which we denote by OP_i: OP_i max∫_t_i^t_i+1_∼μ_e(e^2_P B R_p^-1 B P - w^2_R_e) ṭ, subject to (<ref>). In the following theorem we characterize the optimal solution and the optimal value for OP_i. Let M satisfy the following Riccati equation Ṁ + A_1 M + MA_1 - PBR_p^-1 B P - MCR_e^-1 C M = 0, M(t_i+1) = 0, where A_1 = A + CR_e^-1 C P. The optimal value of OP_i is zero if and only if M(t) does not have an escape time in the interval [t_i, t_i+1), and the corresponding optimal input is w(t) = 0. On the other hand, if M(t) has an escape time in the interval [t_i, t_i+1), then the optimal value of OP_i is unbounded from above. Let us first consider the case that M(t) does not have an escape time in the interval [t_i, t_i+1). In this case, one may use 0 = ∫_t_i^t_i+1/ṭ(e^2_M)ṭ = ∫_t_i^t_i+1( e (Ṁ + A_1 M + M A_1) e + 2 e M C w ) ṭ to rewrite ∫_t_i^t_i+1 (e^2_P B R_p^-1 B P - w^2_R_e) ṭ = - ∫_t_i^t_i+1w - R_e^-1C M e^2_R_eṭ  ≤ 0. Since w must be chosen to maximize the optimal value of OP_i, we conclude that w = R_e^-1C M e is the optimal choice according to the last equation. Substituting w = R_e^-1C M e in (<ref>), we obtain e(t) = 0 and consequently w(t) = 0. 
On the other hand, when M(t) has a finite escape time during the interval [t_i, t_i+1), one may construct an input w to show that the optimal value is unbounded. We omit this construction of w due to page limitations. Theorem <ref> implies that, as long as the communication instances t_i's are chosen such that M is well defined in each interval, the optimal choice for the evader is w = 0; this is in line with what was observed for the sampled-data H_∞ optimal control problem in <cit.>. Theorem <ref> essentially shows that J = x_0^2_P_0, i.e., the payoff is exactly the same as the one obtained from continuous communication. Theorem <ref> also states that, if any of the communication instances is chosen such that M has a finite escape time, then the payoff becomes infinite. Therefore, intermittent communication can either perform as well as continuous communication or perform arbitrarily badly, but nothing in between. A necessary condition for an intermittent communication strategy to produce a finite payoff is to ensure that M(t) has a well-defined solution on each interval [t_i, t_i+1). The proof follows directly from Theorem <ref> and the preceding discussion. Given the (i+1)-th communication time t_i+1 and the terminal condition M(t_i+1) = 0, there exists a finite duration δ such that M does not have a finite escape time in the interval [t_i+1-δ,  t_i+1).[This is due to the local Lipschitz property of the function f(M)=-A_1 M - MA_1 + PBR_p^-1 B P + MCR_e^-1 C M. ] Therefore, the inter-communication duration (t_i+1 - t_i) is lower bounded by δ. This implies that the total number of communications is finite for a finite interval [t_0, t_f). In summary, there always exists an intermittent communication strategy that produces the same payoff as the continuous communication case. Furthermore, the sensing instances must satisfy the necessary condition given in Corollary <ref>. This necessary condition is related to the finite escape times of a certain Riccati equation. While there exist several intermittent strategies satisfying this necessary condition, next we investigate the optimal intermittent communication strategy that incurs the least number of communications. Before proceeding to the next section, we conclude this section by formally showing that the assumed strategy of the pursuer in (<ref>) is an equilibrium strategy. The equilibrium strategy for the pursuer is: ẋ̂̇ = (A-BR_p^-1B P + CR_e^-1C P)x̂, x̂(t_i) = x(t_i), = - R_p^-1B P x̂(t). Without loss of generality, we assume that the communication instances are chosen such that they satisfy the necessary condition in Corollary <ref>; otherwise, the payoff is unbounded from above regardless of the pursuer's strategy. Since the evader's policy is to pick w=0 when the intermittent communication instances follow Corollary <ref>, we have = - R_e^-1C P x and the payoff (<ref>) becomes J̅ = x_0^2_P_0 + ∫_t_0^t_f + R_p^-1B P x^2_R_pṭ. At this point, one may verify that x̂(t) = x(t) for all t, and hence, = -R_p^-1B P x̂ is the equilibrium strategy. §.§ Optimal Communication Instances The discussion in this section will focus on computing the optimal communication instances. To that end, we shall discuss the finite escape times of M defined in (<ref>). Two major challenges in studying the escape times of (<ref>) are: (i) A_1 is time varying since it depends on P, and (ii) M depends on P, which follows another Riccati equation. To overcome these challenges we consider the matrix Π = M - P. 
Since P does not have a finite escape time (due to Assumption <ref>), M will have a finite escape time if and only if Π does. Next we verify that Π also follows a Riccati equation: Π̇ = Ṁ - Ṗ = - (A_1 M + MA_1 - PBR_p^-1 B P - MCR_e^-1 C M)   + A P + PA + Q + P(CR_e^-1C - BR_p^-1 B)P = -AΠ - Π A + Q + Π CR_e^-1 CΠ. Notice that the dynamics equation of Π does not depend on P anymore, and all the matrices involved the equation of Π̇ are time invariant. This form is important since we may readily use the results from <cit.> that studies the escape time of Riccati equations of this form. Next, we will study the finite escape time of Π in the interval [t_i, t_i+1) with the boundary condition Π(t_i+1) = - P(t_i+1). The following theorem provides the optimal communication instances. The i-th triggering instance is the escape time for Π where Π̇ + AΠ + Π A - Q - Π CR_e^-1 CΠ = 0, t< t_i+1, Π (t_i+1) = - P(t_i+1), where t_N+1 = t_f. Let {t_i}_i=1^N be the set of communication times obtained from Theorem <ref> and let {t_i'}_i=1^N' be an arbitrary sequence of communication instances that satisfy the necessary condition in Corollary <ref>. We prove this theorem by contradiction and, to that end, we assume that N'< N. Notice that t_N+1 = t_N' + 1' = t_f. Since t_N is the escape time for the interval [t_N, t_f), then we must have t_N≤ t_N''. Now, starting from the terminal condition Π(t_N'') = - P(t_N''), one may verify that Π(t_N) ≼ P(t_N) and therefore, Π will have an escape time before t_N-1 since t_N-1 was the escape time with the boundary condition Π(t_N) = P(t_N). Therefore, we conclude t_N-1≤ t_N'-1'. Following an inductive argument, one obtain t_1 ≤ t_1' and consequently, N' ≥ N. Thus, the proof is complete. Notice that the triggering instances are computed backwards and the duration (t_i+1 - t_i) is maximized while finding t_i for a given t_i+1. Since t_0 is fixed and t_1 is the first triggering instance, we are guaranteed that Π does not have a finite escape time in the interval (t_0, t_1]. Therefore, unless t_0 itself is an escape time, there is some slack in the choice of t_1, and t_1 can be increased to t_1' without introducing an escape time within the new interval (t_0, t_1']. Given this new t_1', one may increase t_2 to t_2' while ensuring (t_1', t_2'] does not contain an escape time. Therefore, the optimal communication instances are non-unique unless t_0 is the escape time for Π with the boundary condition Π(t_1) = - P(t_1). Although there is non-uniqueness in the actual communication instances, the total number of required communication is unique. §.§ Evader's Dilemma At time t_i (or, at an arbitrary time t in general), the evader does not know when/whether the pursuer is going to request for communications. Therefore, if the evader picks w=0 for the interval [t_i, t_i+1) and the pursuer does not request for a communication, then the evader has lost the opportunity that would have given the evader a much higher (theoretically infinite) payoff if it did not select w=0. On the other hand, if the evader picked a non-zero w and the pursuer communicated with the sensor, then the evader will have incurred a loss in its payoff. Therefore, the evader has to make a decision first on whether it should use a non-zero w without the pursuer having to commit to the next communication instance. 
This is an interesting dilemma from the evader's side, where it has to pick between a high-risk-high-reward (i.e., the evader picks a nonzero, arbitrarily high w) and a no-risk-no-reward (i.e., the evader picks w=0) strategy. This dilemma does not occur in a slightly different scenario where the remote sensor can continuously sense the evader and the sensor initiates the communication instead of the pursuer requesting it. In this case, the evader knows that if it picks a nonzero w, the sensor will communicate the state at the appropriate t_i's and this will result in a worse payoff for the evader. Therefore, even in the absence of a communication from the sensor, the pursuer is assured that the evader is not able to obtain a payoff higher than x_0^2_P_0 and the pursuer is safe to continue with its strategy = -R_p^-1B P x̂ without having to reset the value of x̂. This implies an implicit communication in the absence of a physical communication, which is in line with what was found for a linear-quadratic optimal control problem in <cit.>. §.§ Example <ref> Revisited In this section, we revisit Example <ref> and compute the communication instances according to Theorem <ref>. By solving the Riccati equation for Π, we obtain Π(t) = 1/(3-4t_i+1+2t)[ -I_2 I_2; I_2 -I_2 ],  ∀ t∈ (t_i, t_i+1]. Therefore, the i-th communication time is found by solving 3-4t_i+1+2t_i = 0. Given t_N+1=t_f = 1, we obtain t_N = (4t_N+1-3)/2 = 1/2. Similarly, given t_N = 1/2, we obtain t_N-1 = (4t_N - 3)/2 = - 1/2 < t_0. Therefore, only one communication is needed and it occurs at time t_1 = 1/2. As discussed after Theorem <ref>, the communication instances are non-unique. In our example, one may choose any t_1 such that t_1< 3/4. This is obtained by plugging t_0 = 0 into the inequality 3-4t_1 + 2t_0 > 0. The conditions t_1< 3/4 and t_1 = 1/2 are obtained by solving an escape time problem and do not by themselves provide much physical insight. To understand what these conditions physically imply in the context of this example, we use a graphical representation of the system in Fig. <ref>, where the blue hollow/filled dots represent the pursuer and the red ones the evader. Recall that the optimal (open-loop) input for the evader was = [2/3, 0] and therefore, the control effort used in the interval [0, t_1] is ∫_0^t_11/2^2 = 2t_1/9. Let us consider the set of all control functions with a maximum control effort of 2t_1/9 in the interval [0,t_1), i.e., consider 𝒰 ={ | ∫_0^t_11/2^2 ≤2t_1/9}. Given a t_1, the reachability set of the evader becomes a circle with radius 2t_1/3 when the evader's input is restricted to the set 𝒰. These reachability sets (boundaries) are shown in the subfigures with dashed red circles. Notice that, for = [ 4/3,   0], t_1 = 3/4 is the time when the pursuer reaches the initial location of the evader, which is at [1,  0]. If the pursuer communicates with the sensor at this very moment (i.e., at t=3/4), then the payoff starting from this time is going to be the same regardless of where the evader is on the boundary of the dashed red circle. On the other hand, if the pursuer communicates before time 3/4, as illustrated in the top-right subfigure of Fig. <ref>, then the evader must be at x_ opt to start the next segment of the game, as also shown in that subfigure. 
If the pursuer communicates after t=3/4, as shown in the bottom-left subfigure, the optimal strategy for the evader is to go behind the pursuer and be at x_ risky, since any other point on the reachability circle will have a smaller distance from the pursuer. The evader's strategy in Example <ref> was constructed based on this observation. On the other hand, at time t=1/2, the pursuer's location intersects with the evader's reachability circle at the point x_ risky. Therefore, if the evader decided to go behind the pursuer, this would be the most vulnerable point in time. Although the evader's input was restricted to the set 𝒰 for the above discussion, a similar observation holds when this restriction is removed. § CONCLUSION AND DISCUSSIONS In this work, we considered a class of linear-quadratic pursuit-evasion games where the pursuer relies on a remote sensor to measure the current state of the game. The pursuer intermittently communicates with the sensor and the total number of communications is minimized. The optimal communication instances are found by solving for the finite escape times of a certain Riccati equation. The optimal communication instances are, in general, non-unique. The evader is faced with a dilemma between taking a high-risk-high-reward and a no-risk-no-reward strategy. In this work, we assumed that the remote sensor can perfectly sense the game state and the communication between the sensor and the pursuer is ideal (i.e., no delay, no packet loss, and no transmission noise). It would be interesting to study the problem with the communication suffering from random delays. In that case, the pursuer may not receive the measurement in time to satisfy the escape time condition. Since there is generally some slack in the choice of the sensing instances, the pursuer may utilize this slack to ensure that the measurement is received in time, although such an approach will only work with certainty if the communication delay is bounded by the available slack. Alternatively, the pursuer needs to increase the number of communications. It is not obvious how to choose the communication instances in the presence of stochastic delays. Likewise, analyzing the effects of packet dropouts on the payoff is also an interesting research direction. We notice that the evader is able to achieve an arbitrarily high payoff when the pursuer lets Π have a finite escape time. However, in order to achieve this arbitrarily high payoff, the evader needs to apply a control input with arbitrarily high magnitude, which is unrealistic from a practical point of view. Therefore, perhaps adding an upper bound on the magnitude of (t) (and (t)) is a more realistic scenario for this problem. In this case, the pursuer can ensure a finite payoff even when Π has finite escape times. The received payoff will degrade if Π is allowed to have finite escape times, and this degradation in the payoff will be related to the inter-communication durations and the upper bound on the magnitude of .
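A minimal sketch of the backward escape-time recursion used in the Example Revisited subsection, specialized to that example's closed-form condition 3 - 4t_{i+1} + 2t_i = 0 with t_0 = 0 and t_f = 1 (a general implementation would instead detect the blow-up of Π numerically by integrating its Riccati equation backwards):

```python
def communication_times(t0=0.0, tf=1.0):
    """Backward escape-time recursion for the worked example:
    t_i solves 3 - 4*t_{i+1} + 2*t_i = 0, i.e. t_i = (4*t_{i+1} - 3) / 2."""
    times = []
    t_next = tf
    while True:
        t_i = (4.0 * t_next - 3.0) / 2.0
        if t_i <= t0:            # Pi has no escape time in (t0, t_next]: done
            break
        times.append(t_i)
        t_next = t_i
    return sorted(times)

print(communication_times())      # -> [0.5]: a single communication at t = 0.5
```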
http://arxiv.org/abs/2307.02390v1
20230705160138
Causal Discovery with Language Models as Imperfect Experts
[ "Stephanie Long", "Alexandre Piché", "Valentina Zantedeschi", "Tibor Schuster", "Alexandre Drouin" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.LG" ]
[ Causal Discovery with Language Models as Imperfect Experts equal* Stephanie Longequal,mcgill Alexandre Pichéequal,snow,udem,mila Valentina Zantedeschiequal,snow Tibor Schustermcgill Alexandre Drouinsnow,mila milaMila - Quebec AI Institute udemUniversité de Montréal snowServiceNow Research mcgillMcGill University Valentina [email protected] Machine Learning, ICML 0.3in ] Understanding the causal relationships that underlie a system is a fundamental prerequisite to accurate decision-making. In this work, we explore how expert knowledge can be used to improve the data-driven identification of causal graphs, beyond Markov equivalence classes. In doing so, we consider a setting where we can query an expert about the orientation of causal relationships between variables, but where the expert may provide erroneous information. We propose strategies for amending such expert knowledge based on consistency properties, e.g., acyclicity and conditional independencies in the equivalence class. We then report a case study, on real data, where a large language model is used as an imperfect expert. § INTRODUCTION Understanding the cause-and-effect relationships that underlie a complex system is critical to accurate decision-making. Unlike any statistical association, causal relationships allow us to anticipate the system's response to interventions. Currently, randomized control trials (RCTs) serve as the gold standard for establishing causation <cit.>. However, RCTs can be costly and oftentimes impractical or unethical. As such, there has been growing interest in causal discovery, which aims to uncover causal relationships from data collected by passively observing a system (see <cit.> for a review). Causal discovery methods have been successfully applied in various fields, including genetics <cit.> and climate science <cit.>. Nevertheless, a fundamental limitation of such methods is their ability to only recover the true graph of causal relationships up to a set of equivalent solutions known as the Markov equivalence class (MEC), leading to uncertainty in downstream applications, such as estimating the effect of interventions <cit.>. One approach to reducing such uncertainty is the incorporation of expert knowledge, e.g., to rule out the existence of certain edges and reduce the set of possible solutions <cit.>. However, such methods typically assume that the knowledge provided by the expert is correct. In this work, we consider a more realistic case, where the expert is potentially incorrect. Our approach leverages such imperfect experts, e.g., large language models, to reduce uncertainty in the output of a causal discovery algorithm by orienting edges, while maintaining fundamental consistency properties, such as the acyclicity of the causal graph and the conditional independencies in the MEC. Contributions: * We formalize the use of imperfect experts in causal discovery as an optimization problem that minimizes the size of the MEC while ensuring that the true graph is still included (<ref>). * We propose a greedy approach that relies on Bayesian inference to optimize this objective by incrementally incorporating expert knowledge (<ref>). * We empirically evaluate the performance of our approach, on real data, with an expert that returns correct orientations with some fixed probability (<ref>). * We then empirically assess if the approach holds when taking a large language model as the expert – with mitigated results (<ref>). 
§ BACKGROUND AND RELATED WORK We now review key background concepts and related work. Causal Bayesian networks:   Let (X_1, …, X_d) be a vector of d random variables with distribution p() and <_, _> be a directed acyclic graph (DAG) with vertices _ = {v_1, …, v_d} and edges _⊂_×_. Each vertex v_i ∈_ corresponds to a random variable X_i and a directed edge (v_i, v_j) ∈_ represents a direct causal relationship from X_i to X_j. We assume that p() is Markovian with respect to , i.e., p(X_1, …, X_d) = ∏_i=1^d p(X_i |pa_i^), where pa_i^ denotes the parents of X_i in . Causal discovery:  This task consists of recovering from data, which are typically sampled from p() <cit.>. Existing methods can be broadly classified as being: i) constraint-based <cit.>, which use conditional independence tests to rule-out edges, or ii) score-based <cit.>, which search for the DAG that optimizes some scoring function. One common limitation of these approaches is their inability to fully identify the true underlying graph beyond its Markov equivalence class (MEC) <cit.>. Equivalence classes:   The MEC, _, is a set of graphs that includes and all other DAGs with equivalent conditional independences. These may have different edge orientations, leading to uncertainty in downstream tasks, such as treatment effect estimation <cit.>. One common approach to reducing the size of _ is to include interventional data  <cit.>. However, similar to RCTs, the collection of such data may not always be feasible or ethical. An alternative approach, which we adopt in this work, is to eliminate graphs that are deemed implausible by an expert. Expert knowledge:   Previous work has considered experts that give: (i) forbidden edges <cit.>, (ii) (partial) orderings of the variables <cit.>, (iii) ancestral constraints <cit.>, and (iv) constraints on interactions between types of variables <cit.> (see <cit.> for a review). Typically, all DAGs that are contradicted by the expert are discarded, resulting in a new equivalence class ⊆_. One pitfall is that, realistically, an expert is unlikely to always be correct, and thus, might be discarded, i.e., ∉. In this work, we attempt to reduce _ as much as possible, while ensuring ∈ with high probability, in the presence of imperfect expert knowledge. We note that our work is akin to <cit.>, but a key difference is that they assume a directionally informed expert, i.e., that cannot misorient edges in . Moreover, their approach is expert-first, i.e., data is used to expand an initial graph given by an expert, while our approach is data-first, i.e., the expert is used to refine the solution of a causal discovery algorithm. Large language models:   In situations where access to human experts is limited, Large Language Models (LLMs), such as GPT-4 <cit.>, offer promising alternatives. Recent studies have demonstrated that certain LLMs possess a rich knowledge base that encompasses valuable information for causal discovery <cit.>, achieving state-of-the-art accuracy on datasets such as the Tübingen pairs <cit.>. In this work, we investigate the use of LLMs as imperfect experts within the context of causal discovery. Unlike prior approaches, which typically assume the correctness of extracted knowledge,[The Bayesian approach of <cit.> is an exception.] we propose strategies to use, potentially incorrect, LLM knowledge to eliminate some graphs in _, while ensuring that ∈ with high probability. § PROBLEM SETTING We now formalize our problem of interest. 
Let represent the true causal DAG, as defined in <ref>, and let _ be its MEC. We assume that _ is known, e.g., that it has been obtained via some causal discovery algorithm. Further, we assume the availability of metadata {μ_1, …, μ_d}, where each μ_i provides some information about X_i, e.g., a name, a brief description, etc. We then assume access to an expert, who consumes such metadata and makes decisions: (Expert) An expert is a function that, when queried with the metadata for a pair of variables (μ_i, μ_j), returns a hypothetical orientation for the X_i - X_j edge: E(μ_i, μ_j ) = {[ → (v_i, v_j) ∈_; ← (v_j, v_i) ∈_; ].. Of note, E(μ_i, μ_j ) can be incorrect (imperfect expert) and thus, our problem of interest consists in elaborating strategies to maximally make use of such imperfect knowledge. Let be the set of indices of all pairs of variables related by an edge whose orientation is ambiguous in _: { (i, j) | i < j and ∃ G, G' ∈_ s.t. . . (v_i, v_j) ∈_G ∧ (v_j, v_i) ∈_G'}. We aim to elaborate a strategy S that uses the expert's knowledge to orient edges in and obtain a new equivalence class , such that uncertainty is reduced to the minimum, i.e., | |≪|_|, but still belongs to with high probability, that is: min | | such that p(∈ ) ≥ 1 - η, where η∈ [0, 1] quantifies tolerance to the risk that the true graph is not in the resulting equivalence class. This problem can be viewed as a trade-off between reducing uncertainty, by shrinking the set of plausible DAGs, and the risk associated with making decisions based on an imperfect expert. § STRATEGIES FOR IMPERFECT EXPERTS Instead of blindly accepting expert orientations, we leverage the consistency information provided by the true MEC to estimate which decisions are most likely incorrect. Indeed, among all possible combinations of edge orientations, only a few are possible, since many of them would create cycles or introduce new v-structures. The different strategies that we now propose for solving Problem (<ref>) leverage such consistency imperatives, as well as Bayesian inference, to increase robustness to errors in expert knowledge. Noise model:   First, let us define the noise model that, we assume, characterizes mistakes made by the expert. Figure <ref> shows the dependency graph for the decision process of a type of imperfect expert that we dub “ε-expert". For any pair p_i = (p_i1, p_i2) ∈, we use the notation O_p_i to denote the unknown true edge orientation and E_p_i E(μ_p_i1, μ_p_i2) denotes the orientation given by the expert. Further, for any subset of indices I ⊆, we use O_I { O_p_i}_i=1^|I|; the same applies to E_I. Notice that (i) true edge orientations are, in general, interdependent because of the aforementioned consistency properties of the MEC, and that (ii) edges already oriented in _ are not represented since they are constants (i.e., the expert is not queried for those). In this model, we assume that, for any p_i ∈, the expert's response depends only on the true value O_p_i, i.e., p(E_p_i| O_) = p(E_p_i| O_p_i) and is incorrect with constant probability ε. We now define the components of our Bayesian approach. Prior:  We consider a simple prior that encodes the knowledge given by the true MEC. It boils down to an uniform prior over the graphs in _, effectively assigning no mass to any edge combination that is not consistent (creates a cycle or a new v-structure). 
Thus, the prior for a partial edge orientation O_I corresponds to its frequency in the graphs of _: p(O_I) 5mu = 5mu∑__ I5mu p(O_I, 5mu O_∖ I5mu = 5mu_ I), where we marginalize over all possible combinations of values, _ I, for the remaining unoriented edges O_∖ I. Posterior:  The posterior probability that orientations for all edges in are correct, given all observed expert decisions E_, is then given by: p(O_| E_) = -100mup(E_| O_) 5mu p(O_)/p(E_), where, for the ε-expert noise model, the likelihood is s.t., p(E_| O_) = ∏_p_i ∈ p(E_p_i| O_p_i). In contrast, due to interdependencies between the true edge orientations, the posterior probability cannot similarly be factorized and, in general, p(O_p_i| E_) ≠ p(O_p_i| E_p_i). Note that the posterior for a subset edges I ⊆, e.g, oriented by an iterative strategy, can be obtained via simple marginalization. Finally, note that the posterior can be used to estimate p(∈ ), since any mistake in orienting p_i ∈ results in excluding from . graphs Greedy approach:   We now propose a greedy strategy for optimizing Problem (<ref>) that iteratively orients edges in . Let ^(t) denote the MEC at the t-th iteration of the algorithm and let ^(t)_p_i denote the MEC resulting from additionally orienting p_i ∈(^(t)) at step t and propagating any consequential orientations using <cit.>'s rules. The algorithm starts with ^(1) = _. We consider two strategies to greedily select the best p_i: * S_size: selects the edge that leads to the smallest equivalence class: min_p_i  |^(t)_p_i| * S_risk: selects the edge that leads to the lowest risk of excluding from the equivalence class: min_p_i[ 1 - p(O_∖(^(t)_p_i)| E_) ] This procedure is repeated while p(∈^(t)_p_i), estimated according to <ref>, is greater or equal to 1 - η. § RESULTS AND DISCUSSION We now evaluate the ability of our approach to leverage imperfect expert knowledge using real-world causal Bayesian networks from the bnlearn repository <cit.>. Networks:  We considered the following networks: (i)  <cit.>, (ii)  <cit.>, (iii)  <cit.>, and (iv)  <cit.>. For each network, we extracted variable descriptions from the related publication and used them as metadata μ_i (see <ref>). Experts:   We considered two kinds of expert: (i) ε-experts, as defined in <ref>, with various levels ε, and (ii) LLM-based experts based on GPT-3.5 <cit.>. Details about prompting can be found at <ref>. For each kind of expert, we considered both strategies: S_size and S_risk. Moreover, for the LLM-based expert, we also considered a naive strategy that consists of simply orienting all edges according to the expert, as in <cit.>. Metrics:   The expert/strategy combinations were evaluated based on: (i) the resulting size of their equivalence class, | |, (ii) the structural Hamming distance (SHD) between the completed partially DAG (CP-DAG; see <cit.>) of and the true graph , (iii) an empirical estimate of p(∈ ), taken over repetitions of the experiment. Protocol: For each Bayesian network, we extracted the MEC _ based on the structure of . This simulates starting from the ideal output of a causal discovery algorithm. We then attempted to reduce the size of _ by querying each expert according to the greedy approach in <ref> for η∈ [0.1, 1], where η = 1 corresponds to disregarding the constraint of Problem <ref>. Each ε-expert experiment was repeated 5 times and LLM-expert experiment was repeated 8 times. 
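A compact sketch of this greedy selection under the ε-expert noise model is given below; it represents each DAG of the MEC by a dictionary of its ambiguous-edge orientations, so acyclicity and v-structure consistency are enforced implicitly by only keeping enumerated members, and Meek-rule propagation is not spelled out explicitly. Function and variable names are illustrative only.

```python
def greedy_mec_reduction(mec_members, expert, eps, eta):
    """Greedy S_size sketch with the risk constraint p(G* in class) >= 1 - eta.

    mec_members : list of dicts, one per DAG in the MEC, mapping each
                  ambiguous pair (i, j) with i < j to '->' or '<-'.
    expert      : dict mapping each ambiguous pair to the expert's answer.
    eps         : assumed probability that the expert mis-orients an edge.
    eta         : tolerated probability of dropping the true graph.
    """
    pairs = list(expert)
    # Posterior over MEC members (uniform prior; expert errors i.i.d. with prob eps).
    weights = []
    for g in mec_members:
        like = 1.0
        for p in pairs:
            like *= (1.0 - eps) if expert[p] == g[p] else eps
        weights.append(like)
    z = sum(weights)
    post = [w / z for w in weights]

    accepted = {}                       # orientations we commit to

    def members_consistent(d):
        return [k for k, g in enumerate(mec_members)
                if all(g[p] == o for p, o in d.items())]

    while True:
        remaining = [p for p in pairs if p not in accepted]
        if not remaining:
            break
        # S_size: pick the edge whose (expert) orientation shrinks the class most.
        best = min(remaining,
                   key=lambda p: len(members_consistent({**accepted, p: expert[p]})))
        candidate = {**accepted, best: expert[best]}
        keep = members_consistent(candidate)
        if not keep or sum(post[k] for k in keep) < 1.0 - eta:
            break                       # risk constraint would be violated: stop
        accepted = candidate
    return accepted, members_consistent(accepted)
```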
Given that the LLM-experts have a deterministic output for a given prompt, we randomized the causation verb in order to introduce stochasticity (see <ref>). <ref> shows the results of our experiments for and with strategy S_risk. Results for other networks and strategy S_size are in <ref>. Results for ε-experts:  On all networks, our approach, combined with both strategies, decreases the MEC's size for all noise levels (ε), while keeping the true graph in with probability at least 1-η, as predicted by our theoretical results. Consequently, the SHD also decreases as the tolerance to risk increases. This highlights the effectiveness of our approach when the expert satisfies the noise model of <ref>. Results for the LLM-expert: Overall, we observe a clear reduction in SHD for compared to the starting point _. This shows that some causally-relevant knowledge can be extracted from LLMs, which is in line with the conclusions of recent work. On all datasets, the LLM-based experts achieve SHDs that are on par or better than those of their naive counterparts <cit.> for η = 1, while additionally enabling the control of the probability of excluding . Further, on every dataset, except for , each LLM-based expert performs comparably to at least one of the ε-experts. For , we observe that is excluded from , even for small tolerance η. This can be explained by ambiguities in the metadata, which are sometimes ambiguous even for human experts (see <ref>). Finally, another key observation is the poor uncertainty calibration of compared to , which is in line with observations made by <cit.>. The model is often over-confident in its answers, which leads it to underestimate the probability of excluding from . Consequently, even for small tolerance η, the resulting equivalence classes contain incorrectly oriented edges. § CONCLUSION This work studied how imperfect expert knowledge can be used to refine the output of causal discovery algorithms. We proposed a greedy algorithm that iteratively rejects graphs from a MEC, while controlling the probability of excluding the true graph. Our empirical study revealed that our approach is effective when combined with experts that satisfy our assumptions. However, its performance was mitigated when a LLM was used as the expert. Nevertheless, our results show the clear potential of LLMs to aid causal discovery and we believe that further research in this direction is warranted. Possible extensions to this work include the exploration of noise models better-suited for LLMs, as well as alternative methods for querying such models (e.g., different prompt styles, better uncertainty calibration, etc.,). Further, our approach could be coupled with Bayesian causal discovery methods, replacing our MEC-based prior by one derived from a learned posterior distribution over graphs. icml2022 § ADDITIONAL EXPERIMENTAL RESULTS § IMPLEMENTATION DETAILS The code for this work is available at https://github.com/StephLong614/Causal-discohttps://github.com/StephLong614/Causal-disco. § DETAILS FOR THE BNLEARN CAUSAL BAYESIAN NETWORKS Acquisition:  All Bayesian network structures were acquired from https://www.bnlearn.com/bnrepository/https://www.bnlearn.com/bnrepository/. Variable metadata:  For each of the included causal Bayesian networks, we extracted a code book, i.e., a list of variable names and an associated description (e.g., 'birth asphyxia': 'lack of oxygen to the blood during birth'), from the associated original paper. 
For instance, for the  network, this information was extracted from <cit.>. All code books are available at: https://github.com/StephLong614/Causal-disco/tree/main/codebooks. Metadata pitfalls:   Certain Bayesian networks contain edge orientations between variable pairs that appear incongruent with intuitive reasoning. For example, in the CHILD Network (Figure <ref>), the edge orientation between disease and age exhibits a counterintuitive direction, disease → age, implying the causal relationship "disease causes age" rather than the more intuitive and expected "age causes disease". § DETAILS FOR LLM-BASED EXPERTS §.§ Querying for edge orientations In order to obtain a probability distribution over the orientations of an edge, we use a prompt similar to <cit.>. We use the following prompt format: where verb_k is randomized at each decision and the variables . For example, if we wanted to elicit a prediction for the direction of an edge between variables with metadata μ_i: "lung cancer", μ_j: "cigarette smoking", and causation verb verb_k: "causes" we would use the following prompt: We then compute the log probability of the responses and , and use the softmax to obtain a probability distribution over the directions of the edge <cit.>. Since we rely on scoring instead of generation, the output of the LLM-expert is deterministic given a fixed prompt. To foster randomness in the LLM-expert outputs, we randomly draw verb_k from the following verbs of causation: provokes, triggers, causes, leads to, induces, results in, brings about, yields, generates, initiates, produces, stimulates, instigates, fosters, engenders, promotes, catalyzes, gives rise to, spurs, and sparks. § CAUSAL BAYESIAN NETWORKS INCLUDED We included the following causal Bayesian networks in this work. All were extracted from the bnlearn Repository: https://www.bnlearn.com/bnrepository/.
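A small sketch of the scoring step is given below; both the prompt template and the logprob_fn callable are hypothetical stand-ins for the actual prompt and LLM API (only the overall scheme — score both orientations with a randomly drawn causation verb and apply a softmax — reflects the description above).

```python
import math
import random

CAUSATION_VERBS = ["provokes", "triggers", "causes", "leads to", "induces",
                   "results in", "brings about", "yields", "generates"]

def orientation_distribution(logprob_fn, desc_i, desc_j, rng=random):
    """Score both orientations of an edge with a user-supplied scoring function.

    logprob_fn(prompt, completion) is assumed to return the log-probability the
    language model assigns to `completion` given `prompt` (hypothetical signature)."""
    verb = rng.choice(CAUSATION_VERBS)
    # Hypothetical prompt template, not the exact one used in the paper.
    prompt = (f"Among these two options, which one is more likely true?\n"
              f"(A) {desc_i} {verb} {desc_j}\n"
              f"(B) {desc_j} {verb} {desc_i}\n"
              f"The answer is:")
    scores = [logprob_fn(prompt, " (A)"), logprob_fn(prompt, " (B)")]
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(exp)
    return {"i->j": exp[0] / z, "j->i": exp[1] / z}
```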
http://arxiv.org/abs/2307.02516v1
20230705142846
Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency
[ "Tassilo Wald", "Constantin Ulrich", "Fabian Isensee", "David Zimmerer", "Gregor Koehler", "Michael Baumgartner", "Klaus H. Maier-Hein" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
[ Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency equal* Tassilo Waldhi,dkfz Constantin Ulrichdkfz Fabian Isenseehi,dkfz David Zimmererhi,dkfz Gregor Koehlerhi,dkfz Michael Baumgartnerhi,dkfz,hi Klaus H. Maier-Heinhi,dkfz,unihd hiHelmholtz Imaging dkfzDepartment of medical image Computing, German cancer research center (DKFZ), Heidelberg, Germany unihdPattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany Tassilo [email protected] Representations, Similarity, Diversity 0.3in ] Independently trained machine learning models tend to learn similar features. Given an ensemble of independently trained models, this results in correlated predictions and common failure modes. Previous attempts focusing on decorrelation of output predictions or logits yielded mixed results, particularly due to their reduction in model accuracy caused by conflicting optimization objectives. In this paper, we propose the novel idea of utilizing methods of the representational similarity field to promote dissimilarity during training instead of measuring similarity of trained models. To this end, we promote intermediate representations to be dissimilar at different depths between architectures, with the goal of learning robust ensembles with disjoint failure modes. We show that highly dissimilar intermediate representations result in less correlated output predictions and slightly lower error consistency, resulting in higher ensemble accuracy. With this, we shine first light on the connection between intermediate representations and their impact on the output predictions. § INTRODUCTION Machine learning methods learn features automatically when trained on a dataset. The high dimensionality of the data should allow for a variety of solutions, yet independent models tend to learn similar features to each other. Many downstream effects of this feature similarity are observed throughout current machine learning literature: [label=*)] * <cit.> showed that independently trained CNNs tend to predict erroneously on the same cases much more often than expected by chance given their accuracy, and more often than e.g. humans. * <cit.> showed that different models are functionally similar, through stitching <cit.> the top of a model to the bottom of another independently trained model with marginal accuracy penalties. * <cit.> showed that independently trained ResNets exhibit a linear mode connectivity with zero loss barrier, given a previous functionally invariant kernel weight permutation. * <cit.> showed that distinct latent spaces of two independently trained models tend to differ just by an quasi-isometric transformation. While the feature similarity is not a problem for a single model, multiple models are often combined into an ensemble to improve performance and to measure predictive uncertainty <cit.>. When these models learn the same features, they may learn spurious correlations that are not actually useful for the task at hand. This causes them to share failure modes making them fail in the same way. Ensemble improvement is highly dependent on models having a large disagreement error ratio <cit.> or low error consistency <cit.>. This can be increased slightly through different augmentation schemes, moderately through different pre-training schemes and strongly through pre-training on a different dataset, with higher error inconsistency in error rates improving ensemble benefits more <cit.>. 
Trying to find a new method to increase such predictive diversity between existing models and a new model may become difficult when learning large groups of models, hence methods were introduced that are conditioned on pre-existing models with the intention of learning to solve the task differently. Early works explored negative correlation <cit.> and evaluated classical methods like boosting or bagging <cit.> for neural networks under the assumption of them being weak classifiers. These approaches suffer great accuracy penalties in modern settings, hence other approaches needed to be developed. In a more modern setting <cit.> trained multiple models simultaneously, enforcing high entropy and orthogonality between the negative class predictions leading to improved adversarial robustness and in-distribution performance of ensembles. <cit.> proposed a two stage approach for the domain of adversarial learning, first training a model, then learning an augmenting adversarial auto-encoder, that tries to remove the most predictive features while staying as close as possible to the actual image. Given this lens a model can be trained with the lens to learn different features. So far diversification approaches only regularize inputs or at the position of the output features. However the constraints on regularization at the input and the output are rather large. Adapting input images too much can degrade performance too much and the features at the very end are constrained by having to encode the target classes, constraining the potential solution space models can converge to. In this paper, we propose to regularize internal representations of a new model to be dissimilar to an existing model to promote discovering novel ways of solving the task, which, to the best of our knowledge, has not been explored so far. Through this we hope to learn about the connection of internal similarity to the predictive behavior between models, specifically whether inducing diversity in intermediate processing stages leads to different predictive behavior and more robust ensembles. Our main contributions are: * We utilize methods from the field of representational similarity in a novel way to train ensembles of very low representational similarity at intermediate layers. * We show that highly dissimilar internal representations can be learned at chosen positions with only minor penalties to the model accuracy. * We show that enforcing dissimilar internal representations can lead to lower error consistency in the predicted outputs, overall improving ensembling performance relative to an ensemble of independently trained models. § REPRESENTATIONAL (DIS)SIMILARITY We are interested in enforcing models to learn different features termed representations, therefore we must define what a representation is. In the field of representational similarity representations z are defined as the input-output responses of all channels of a network's layer to a set of inputs <cit.>. For a given input batch x_i of B samples this results in z∈ℛ^B× C × W × H values for each model. Since e.g. the channel dimensions between neurons are not aligned <cit.> metrics either need to learn a linear transformation to align representations or use sub-space metrics, e.g. canonical correlation analysis <cit.>, to measure similarity. For a nice summary of the differences we refer to <cit.> and more recently <cit.>. 
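In practice, such representations z ∈ ℛ^{B×C×W×H} can be collected with forward hooks; the following PyTorch sketch (module names are placeholders that depend on the concrete ResNet implementation) illustrates this:

```python
import torch

def collect_representations(model, layer_names, inputs):
    """Collect z in R^{B x C x H x W} at the chosen layers via forward hooks.

    `layer_names` are module names as returned by model.named_modules(); the
    exact names depend on the ResNet implementation and are placeholders here."""
    feats, handles = {}, []
    modules = dict(model.named_modules())
    for name in layer_names:
        def hook(_module, _inp, out, name=name):
            feats[name] = out.detach()
        handles.append(modules[name].register_forward_hook(hook))
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    return feats
```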
To formalize: An already trained model T and an untrained model U are composed of k sequential layers f_θ predicting some class probabilities ŷ, see <ref>. ŷ_U,i = (f_θ_U,k∘⋯∘ f_θ_U,0)(x_i) At some intermediate layer of choice l we collect representations of both models z_T,i,l and z_U,i,l, z_U,i,l = (f_θ_U,l∘⋯∘ f_θ_U,0)(x_i) ŷ_U,i = (f_θ_U,k∘⋯∘ f_θ_U,l-1)(z_U,i,l) and measure similarity s between z_T,i,l and z_U,i,l, using a similarity metric 𝒮 providing a bounded value between a and b with the lowest value, a, representing most dissimilar and b most similar. 𝒮(z_T,l, z_U,l) with  𝒮(z_1,z_2) ∈ (a, b)  for ∀ z_1,z_2. [We omit the batch index for better readability where appropriate.] Similarity metrics like channel-wise correlation and regression need aligned channel pairs between the two representations, hence we introduce a projection layer proj_θ_p(z_T) that tries to approximate the representations of the untrained z_U by linearly combining the activations z_T of the same spatial location, through a linear projection resulting in ẑ_U,l = p_proj(z_T,l). Combining these, the final objectives of the new model U can be described as min_θ_U𝒥(y_i, x_i,z_T,i,l) =  CE(ŷ_i, y_i) + λ 𝒮(z_U,i,l, ẑ_U,l), with lambda being a weighting factor for controlling the importance of learning dissimilar representations. Simultaneously the projection layer is optimized to maximize similarity between the representations of T and U. min_θ_proj𝒥(y_i, x_i, z_U,i) = - 𝒮(z_U,i, ẑ_U,l). A visualization of the training scheme is highlighted in <Ref>. To extend this scheme to an arbitrary number of models we concatenate the representations of an arbitrary amount of pre-existing models of a layer Z_T,l∈{z_T_1,l, ⋯, z_T_N,l} before the projection layer. When training multiple models we train sequentially, ending up with one unregularized model, the first one, and multiple regularized models conditioned on all preceding models of the same sequence. As similarity metrics 𝒮 we evaluate the L2 Correlation L2Corr, bounded explained variance of a linear regression ExpVar and Linear Centered Kernel Alignment LinCKA <cit.>. The former two representing aligned metrics with L2Corr being scaling invariant and the Explained variance being scaling sensitive. Full details of the functions are provided in <Ref>. § EXPERIMENTS We train sequences of up to 5 ResNets on CIFAR10, CIFAR100 and compare similarity and predictive behavior of the models within a sequence to each other and between models of different sequences. Explicit hyperparameters of the experiments are provided in <ref>. Final similarity measurement between models is conducted through LinCKA due to it not requiring a linear projection to be calculated. In the main paper we focus only on CIFAR10 and ExpVar with λ = 1 unless stated otherwise, while the additional architectures and metrics can be found in <ref>. §.§ Internal dissimilarity can be controlled precisely Given the scheme proposed in <ref> dissimilarity can be enforced in a variety of ways given the degrees of freedom, like layer position, dissimilarity weight λ, different regularization metrics, regularizing multiple layers simultaneously, effects of larger models or other datasets. Of these experiments we only highlight the Layer position here while the remaining ones can be found in <ref>. Layer position Many different layers can be selected to enforce representational dissimilarity, therefore we evaluate a variety of positions (see <ref>). 
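Before turning to the results, a simplified PyTorch sketch of one training step of this scheme is given below: the 1×1 projection is updated to approximate the new model's features, while the new model minimizes cross-entropy plus λ times a bounded explained-variance similarity. Helper and variable names are illustrative only, and the metric follows the CELU-bounded R² described in the appendix.

```python
import torch
import torch.nn.functional as F

def forward_with_feature(model, layer_name, x):
    """Forward pass that also returns the (grad-tracking) feature at `layer_name`."""
    store = {}
    def hook(_m, _inp, out):
        store["z"] = out
    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    logits = model(x)
    handle.remove()
    return logits, store["z"]

def explained_variance(z_target, z_hat, eps=1e-6):
    """Channel-wise R^2 of z_hat approximating z_target, CELU-bounded and averaged."""
    zt = z_target.permute(1, 0, 2, 3).flatten(1)      # C x (B*H*W)
    zh = z_hat.permute(1, 0, 2, 3).flatten(1)
    ss_res = ((zt - zh) ** 2).sum(dim=1)
    ss_tot = ((zt - zt.mean(dim=1, keepdim=True)) ** 2).sum(dim=1) + eps
    return F.celu(1.0 - ss_res / ss_tot, alpha=1.0).mean()

def dissimilarity_step(new_model, old_model, proj, layer, x, y, lam,
                       opt_model, opt_proj):
    with torch.no_grad():
        _, z_old = forward_with_feature(old_model, layer, x)
    logits, z_new = forward_with_feature(new_model, layer, x)

    # (1) projection head: maximize similarity to the new model's features
    sim_p = explained_variance(z_new.detach(), proj(z_old))
    opt_proj.zero_grad(); (-sim_p).backward(); opt_proj.step()

    # (2) new model: fit the labels while being hard to approximate linearly
    sim_m = explained_variance(z_new, proj(z_old).detach())
    loss = F.cross_entropy(logits, y) + lam * sim_m
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.item()
```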
We find that regularizing earlier and later layers leads to a lower decrease in similarity, while intermediate layers easily become almost fully dissimilar. Interestingly we observe that the effect of the regularization often translates to neighboring layers being also highly dissimilar, mostly affecting the entire residual block, becoming more similar again after a down-sampling layer. §.§ Dissimilar representations are unique Given that independent models converge to very similar solutions, one might assume that models, trained to be dissimilar from them may end up being highly similar to each other. This assumption seems to be amiss, as we observe high representational dissimilarity between the models, see <ref>. The regularized models are even more dissimilar to each other than they are to the base model they were regularized on, highlighting that the different models learn unique dissimilar solutions to the baseline models. §.§ Ensembles of dissimilar models After establishing that models can learn dissimilar representations we extend the setting to larger number of models training an ensemble of N=5 ResNet34s and compare them to each other as visualized in <ref>. We observe high dissimilarity between early models at the chosen layer, while later models have a less pronounced similarity decrease to existing models. We attribute this to the fact that all representations are merged to approximate the new model, resulting in the new models learning dissimilar representations to this group and not every single model. While this effect looks like efficacy decreases with growing numbers, the model still learns lower similarity levels than the independently trained baseline. §.§ Downstream effects of internal dissimilarity Due to the compositional nature of networks (<ref>) we wonder if high internal dissimilarity translates to less correlated predictive behavior between the model pairs. To this end we compare the absolute ensemble performance, the error consistency Cohen's Kappa κ between models (see <ref>) and the accuracy of the latest trained model Acc_U for an ensemble of up to 5 models, see <ref>. We observe that regularizing representational similarity leads to an overall decrease in Cohen's Kappa of 3% up to 7%, while largely maintaining single model performance. Subsequently the ensemble composed of dissimilar models features higher ensemble performance than the baseline ensemble of independently trained models, for every number of models in the ensemble. § DISCUSSION, LIMITATIONS AND CONCLUSION In this paper we enforce models to learn very dissimilar internal representations to preexisting models by applying various metrics from the field of representational similarity in a novel way. We show that learning features that can't be approximated through linear regression at an intermediate layer position can result in lower error consistency between the two models. Furthermore we show that this slight decrease in error-consistency suffices to improve overall ensemble accuracy over an ensemble of independent models. While we show that error consistency between models can be decreased through representational dissimilarity, the decrease is still slight, showing a decrease between 7% to 3% points. Optimally one would like to achieve Cohen's Kappa scores of <0 to maximize ensembling benefits. 
Furthermore, we could explore only a small subset of all experiments we deem interesting and have to leave a lot of interesting questions for future work, e.g.: [label=(*)] * Which features does the dissimilar model learn? * Does the dissimilar model end up in a disjoint loss minimum? Can it still be stitched? * Which position reduces Cohen's Kappa the most while maintaining single model performance? * Are there other metrics that decrease error consistency more strongly with less effect on accuracy? Overall, we provide a proof of concept, that enforcing models to learn dissimilar representations also results in different predictive behavior, providing another avenue to make models learn different, novel features, with the goal of achieving robust ensembles. icml2023 § EXPERIMENT DETAILS In this paper the architectures we use are ResNet18s, ResNet34s and ResNet101s trained on CIFAR10 and CIFAR100. Across all experiment settings we train 5 indepedent runs that differ by their random seed, including the ensemble sequence experiments. §.§ Hyperparameters All hyperparameters were optimized to maximize single model validation accuracy, as one would in a normal training setting. Finding best performing hyperparameters is a crucial step in order to assure that the developed methodology does not only work for less accurate models where more predictions errors are made and the diversification task becomes easier. We conducted this process for all datasets separately, with all architectures sharing the same final hyperparameter settings. Once found on single models the hyperparameters were frozen and not adapted to the dissimilarity regularization scheme, avoiding optimizing hyperparameters when starting the dissimilarity regularization experiments. The following hyperparameters for the different architectures were used: CIFAR10 We train the architectures for 250 epochs with a batch size of 128, trained with nesterov SGD, learning rate of 0.1, momentum 0.9, cosine annealing learning rate schedule and weight decay 5e-4. For augmentations we use RandomCrops of size 32×32 with padding 4, the AutoAugment CIFAR10 policy from <cit.>, followed by Cutout of with size 16, finished by normalizing the images. CIFAR100 We utilize the same hyperparameters from CIFAR10 just with a shorter epoch time of 200. §.§ Architectures We train ResNet18, ResNet34 and ResNet101[The architectures implementation is inspired by https://github.com/kuangliu/pytorch-cifarkuangliu's Github repository], with the extraction positions being before the ReLU activation function, as highlighted in <ref>. All ResNets <cit.> are composed of 4 blocks containing various number of layers per block with a constant amount of channels per block. ResNet18 Our ResNet18 is composed of four residual blocks of (2-2-2-2) layers with (64-128-256-512) channels each, using basic residual blocks without the inverse bottleneck structure. ResNet34 ResNet34 is composed of four residual blocks of (3-4-6-3) layers with (64-128-256-512) channels each, using basic residual blocks without inverse bottleneck structure. ResNet101 ResNet101 is composed of four residual blocks of (3-4-23-3) layers with (256-512-1024-2048) channels respectively, using inverse bottleneck structure of blocks. §.§ Evaluation All results provided in this document are calculated on the test sets of the respective datasets. For CIFAR10 and CIFAR100 we use the official test set of N=10000 samples to evaluate both, representational similarity between models and output metrics between models. 
For an ensemble >2, most of the output metrics like Cohen's Kappa κ, JSD and ERD are not directly defined. Hence, we calculate the pairwise metrics and average them over all pairs, leading to an average value which we report in e.g. <ref>. § METRICS The metrics we employ in this study are threefold: L2-Correlation (L2Corr), explained variance (ExpVar) and the sub-space metric Linear Centered Kernel Alignment (LinCKA) <cit.> as similarity measures 𝒮. §.§ (Dis)similarity metrics Aligned metrics Some metrics compare channels directly and subsequently need exact alignment between the channels of the two networks U and T. This channel alignment is non-existent when trained independently and needs to be learned. Hence, we utilize the linear projection to learn channel-wise alignment, as explained in <ref>. We implement this linear projection through a 1×1 Convolution, which maintains spatial information. Given these aligned, approximated values <ref> of the new representation z_U, we can calculate the L2Corr r_c and ExpVar R^2 for each channel. r_c(z_c, ẑ_c)=∑ _i=1^n(z_i-z̅)(ẑ_i-ẑ̅̂)/√(∑ _i=1^n(z_i-z̅)^2)√(∑ _i=1^n(ẑ_i-ẑ̅̂)^2) R^2_c(z_c, ẑ_c) = 1 - ∑_i=1^n (𝐳_i - ẑ_i)^2/∑_i=1^n(𝐳_i - 𝐳̅)^2 While the correlation is nicely bounded between r_c ∈{-1, 1}, the explained variance R^2 is bounded upwards to 1 but can reach -inf. Therefore we introduce a celu function that wraps the <ref> with α = 1 that bounds the function. CELU(R^2)= max(0,R^2) + min(0,α·(exp(R^2 / α) - 1)) In the conducted experiments this was necessary to assure stability of the convergence process, as single representation channels z_U,c could occasionally feature very low variance close to 0 in a mini-batch, leading to numeric problems when calculating R^2 scores. Given the channel wise scores we average them over all channels resulting in the final score. ExpVar1/C·∑^C_c=0CELU(R^2_c) L2Corr1/C·∑^C_c=0CELU(r_c) Sub-space metrics Layers of CNNs are composed of multiple channels with the directly following convolution combining the values of its preceding layer through a linear weighted sum of k× k× C spanning all channels, making channel order irrelevant and a subject to the stochastic optimization process. This lack of alignment of single neurons was highlighted by <cit.> showing that no perfect one-to-one matching of single channels/neurons exists. This insight led to the field of sub-space metrics, which don't try to measure similarity between single channels/neurons but view the stack of channels as vectors spanning a sub-space <cit.>. These spanned sub-spaces can then be compared directly. In this paper we use linear Centered Kernel Alignment (CKA) <cit.> a metric inspired by the dissimilarity matrices of the neuroscience domain<cit.>, which does not need this channel alignment either. Linear Centered Kernel Alignment itself leverages the Hilbert-Schmidt Independence Criterion (HSIC) to compare the similarity of such similarity matrices. In this work we use the unbiased HSIC estimator (<ref>) introduced by <cit.> and used in mini-batch CKA (<ref>) as introduced by <cit.>. CKA_minibatch(K, L) = 1/k∑_i=1^kHSIC(K_i,L_i)/√(1/k∑_i=1^kHSIC(K_i,K_i))√(1/k∑_i=1^kHSIC(L_i,L_i)) HSIC(K,L) = 1/n(n-3) ·( tr(KL) + 1^T K11^T L1/(n-1)(n-2) - 2/n-21^TKL1) §.§ Output prediction metrics Cohen's Kappa Cohen's Kappa <cit.> measures the error consistency between predictors. 
Specifically, Cohen's Kappa measures the observed error overlap e_obs over a cohort N and relates it to the expected error overlap e_exp given the accuracies of the predictors, assuming independence between the raters. A κ of 0 indicates that the observed error overlap between the two raters is exactly the expected value of error overlap given the two raters' accuracies, indicating that the raters are independent. A value of 1 indicates that when one rater errs the other errs as well, whereas negative values indicate that if one rater errs, the other is more likely to be correct on that case. κ = e_obs - e_exp/1 - e_exp Given an ensemble of models, a low or negative value of κ is highly desirable as it indicates that the models do not fail on the same samples, leading to high uncertainty on disagreed-upon cases or, given enough models, a correct consensus. Following up on this, error-inconsistency was introduced, which measures how often one or the other rater fails but not both. Error-inconsistency is a desirable feature as high values indicate that mistakes are disjoint and, given enough models in an ensemble, can be corrected to improve ensemble performance significantly <cit.>. Jensen-Shannon Divergence The Jensen-Shannon divergence (JSD) measures the similarity between two distributions and is based on the Kullback-Leibler divergence (KL divergence) D. As opposed to Cohen's Kappa, which only cares about the argmax prediction being right or wrong, it works with the probability distributions of the two models instead, measuring changes in predictive behaviour in a less discrete way. JSD(P‖ Q) = 1/2( D(P‖ M) + D(Q‖ M)), with M = 1/2 (P + Q) In our case, the probability distribution is represented by the output softmax probabilities of the models, hence the JSD measures how similar the distributions of the two models' predictions are before assigning a hard label. Error disagreement ratio In addition to the established metrics, we propose a metric we term error disagreement ratio (EDR), which measures the ratio of different errors to identical errors between two predictors over all joint errors N_wrong (<ref>). If two models disagree in their errors as often as they agree on their errors, the EDR is 1. Should the models always agree on their errors, the EDR would be 0, and the EDR is greater than 1 in cases where the models disagree more often than they agree when both err. Given that silent failures are detrimental to the applicability of deep learning methods, a high EDR is desirable, yet the EDR of our baselines is commonly below 1, at around 0.3. EDR(p̂_1, p̂_2) = ∑^N_wrong_i (1 - eq(p̂_1_i, p̂_2_i))/∑^N_wrong_i eq(p̂_1_i, p̂_2_i) eq(p̂_1_i, p̂_2_i) = 1, if p̂_1_i = p̂_2_i 0 , else § DISSIMILARITY ABLATIONS In addition to the experiments in the main body, we provide further information and ablations. The additional ablations include the effect of changing the dissimilarity weight λ and metric, the number of layers used for regularization and the architectures. §.§ Layers Complementary to <ref>, we provide the diagonal LinCKA similarity of the models from <ref>. When choosing a layer for regularization, the LinCKA similarity decreases the most at the selected layer, mostly affecting layers that are part of its residual block. After a downsampling stage, the similarity tends to increase drastically, but still stays below baseline levels (see <ref>).
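As a reference implementation of the pairwise output metrics defined above (error-consistency κ, JSD on softmax outputs, and EDR), here is a small NumPy/SciPy sketch; the array layout and the use of scipy's jensenshannon (squared to obtain the divergence) are our illustrative choices, not the authors' code. For ensembles of more than two models these values would be averaged over all pairs, as described earlier.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def error_consistency_kappa(pred1, pred2, labels):
    """Cohen's kappa on (in)correctness agreement, following the formulation above."""
    c1, c2 = pred1 == labels, pred2 == labels
    e_obs = np.mean(c1 == c2)                        # observed overlap of correct/incorrect
    a1, a2 = c1.mean(), c2.mean()
    e_exp = a1 * a2 + (1 - a1) * (1 - a2)            # expected overlap given the accuracies
    return (e_obs - e_exp) / (1 - e_exp)

def mean_jsd(probs1, probs2):
    """Average Jensen-Shannon divergence over per-sample softmax distributions."""
    return float(np.mean([jensenshannon(p, q) ** 2 for p, q in zip(probs1, probs2)]))

def edr(pred1, pred2, labels):
    """Error disagreement ratio: differing vs. identical predictions among joint errors."""
    both_wrong = (pred1 != labels) & (pred2 != labels)
    differing = np.sum(both_wrong & (pred1 != pred2))
    identical = np.sum(both_wrong & (pred1 == pred2))
    return differing / identical
```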
§.§ Regularizing multiple layers In addition to the previously described setting, one can choose to enforce dissimilarity at multiple layers simultaneously as opposed to only a single selected layer. When choosing multiple layers, the decrease in similarity from baseline is lower than when regularizing only a single layer, as the dissimilarity loss is averaged over all layers. This is highlighted in <ref>, which nicely portrays the decrease in dissimilarity as more layers are regularized at once. §.§ ResNet101 and CIFAR100 We expand the experiments to deeper architectures on CIFAR100, as previous experiments were constrained to CIFAR10 and ResNet34. Layer regularization Similarly to <ref>, we regularize ResNet101 at a very early Layer 1, an early Layer 3, an intermediate Layer 20, and a later Layer 32, visualized in <ref>. Similarly to the smaller ResNet34, we observe that the learned dissimilarity is either localized to the explicit layer of choice or affects the layers of the residual block in which the regularized layer is situated. This is especially prominent for the 23-layer-deep residual block when regularizing Layer 20. §.§ Evaluating different dissimilarity metrics and loss weights In our experiments so far, we constrained ourselves to ExpVar and LinCKA with λ = 1. In order to evaluate whether this regularization weight is too strong or too weak, we ablate it for λ∈{0.25, 1.0, 4.0} in <ref>, <ref>, and <ref>. Across all metrics one can see that increasing λ decreases Cohen's κ and increases JSD while simultaneously decreasing the accuracy of the new model. Setting λ too high can lead to instabilities and significantly reduced ensembling performance, as is the case for L2Corr. Furthermore, we observe that different layers require different λ values, as no single value dominates the others across the entire architecture, complicating the process should one want to optimize λ. §.§ More detailed output metrics Revisiting <ref>, the factor λ determines the importance of learning representations that are dissimilar. Since <Ref> in the main body did not contain all metrics of interest we gathered, we provide a table containing additional metrics such as the Jensen-Shannon divergence (JSD) (see <ref>). § CHOICE OF SIMILARITY METRICS Many elaborate similarity metrics exist; from SVCCA <cit.> to PWCCA <cit.> to CKA <cit.>, a variety of methods can be employed to learn dissimilar representations, as all are differentiable. In this work we decided to keep the metric choice as simple as possible and use the aligned regression and correlation metrics ExpVar and L2Corr, as their channel-wise nature and alignment allow good interpretability and introspection possibilities during early development. Additionally, we included LinCKA as it is the most recent popular similarity metric and has a lower memory footprint than SVCCA, since it works on the similarity matrices instead of the representations directly. Moreover, LinCKA can be conveniently calculated on mini-batches <cit.>.
http://arxiv.org/abs/2307.02798v1
20230706061322
Semi-supervised Domain Adaptive Medical Image Segmentation through Consistency Regularized Disentangled Contrastive Learning
[ "Hritam Basak", "Zhaozheng Yin" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Disentangled Consistency Contrast H Basak et al. Dept. of Computer Science, Stony Brook University, NY, USA Semi-supervised Domain Adaptive Medical Image Segmentation through Consistency Regularized Disentangled Contrastive Learning Hritam Basak, Zhaozheng Yin August 1, 2023 ============================================================================================================================ Although unsupervised domain adaptation (UDA) is a promising direction to alleviate domain shift, they fall short of their supervised counterparts. In this work, we investigate relatively less explored semi-supervised domain adaptation (SSDA) for medical image segmentation, where access to a few labeled target samples can improve the adaptation performance substantially. Specifically, we propose a two-stage training process. First, an encoder is pre-trained in a self-learning paradigm using a novel domain-content disentangled contrastive learning (CL) along with a pixel-level feature consistency constraint. The proposed CL enforces the encoder to learn discriminative content-specific but domain-invariant semantics on a global scale from the source and target images, whereas consistency regularization enforces the mining of local pixel-level information by maintaining spatial sensitivity. This pre-trained encoder, along with a decoder, is further fine-tuned for the downstream task, (i.e. pixel-level segmentation) using a semi-supervised setting. Furthermore, we experimentally validate that our proposed method can easily be extended for UDA settings, adding to the superiority of the proposed strategy. Upon evaluation on two domain adaptive image segmentation tasks, our proposed method outperforms the SoTA methods, both in SSDA and UDA settings. Code is available at https://github.com/hritam-98/GFDA-disentangledGitHub. § INTRODUCTION Despite their remarkable success in numerous tasks, deep learning models trained on a source domain face the challenges to generalize to a new target domain, especially for segmentation which requires dense pixel-level prediction. This is attributed to a large semantic gap between these two domains. Unsupervised Domain Adaptation (UDA) has lately been investigated to bridge this semantic gap between labeled source domain, and unlabeled target domain <cit.>, including adversarial learning for aligning latent representations <cit.>, image translation networks <cit.>, etc. However, these methods produce subpar performance because of the lack of supervision from the target domain and a large semantic gap in style and content information between the source and target domains. Moreover, when an image's content-specific information is entangled with its domain-specific style information, traditional UDA approaches fail to learn the correct representation of the domain-agnostic content while being distracted by the domain-specific styles. So, they cannot be generalized for multi-domain segmentation tasks <cit.>. Compared to UDA, obtaining annotation for a few target samples is worthwhile if it can substantially improve the performance by providing crucial target domain knowledge. Driven by this speculation, and the recent success of semi-supervised learning (SemiSL), we investigate semi-supervised domain adaptation (SSDA) as a potential solution. Recently, Liu <cit.> proposed an asymmetric co-training strategy between a SemiSL and UDA task, that complements each other for cross-domain knowledge distillation. 
Xia <cit.> proposed a co-training strategy through pseudo-label refinement. Gu <cit.> proposed a new SSDA paradigm using cross-domain contrastive learning (CL) and self-ensembling mean-teacher. However, these methods force the model to learn the low-level nuisance variability, which we know is insignificant to the task at hand. Hence, these methods fail to generalize if similar variational semantics are absent in the training set. Fourier Domain Adaptation (FDA) <cit.> was proposed to address these challenges by a simple yet effective spectral transfer method. Following <cit.>, we design a new Gaussian FDA to handle this cross-domain nuisance variability, without explicit feature alignment. Contrastive learning (CL) is another prospective direction where we enforce models to learn discriminative information from (dis)similarity learning in a latent subspace <cit.>. Liu <cit.> proposed a margin-preserving constraint along with a self-paced CL framework, gradually increasing the training data difficulty. Gomariz <cit.> proposed a CL framework with an unconventional channel-wise aggregated projection head for inter-slice representation learning. However, traditional CL utilized for DA on images with entangled style and content leads to mixed representation learning, whereas ideally, it should learn discriminative content features invariant to style representation. Besides, the instance-level feature alignment of CL is subpar for segmentation, where dense pixel-wise predictions are indispensable <cit.>. To alleviate these three underlined shortcomings, we propose a novel contrastive learning with pixel-level consistency constraint via disentangling the style and content information from the joint distribution of source and target domain. Precisely, our contributions are as follows: (1) We propose to disentangle the style and content information in their compact embedding space using a joint-learning framework; (2) We propose encoder pre-training with two CL strategies: Style CL and Content CL that learns the style and content information respectively from the embedding space; (3) The proposed CL is complemented with a pixel-level consistency constraint with dense feature propagation module, where the former provides better categorization competence whereas the later enforces effective spatial sensitivity; (4) We experimentally validate that our SSDA method can be extended in the UDA setting easily, achieving superior performance as compared to the SoTA methods on two widely-used domain adaptive segmentation tasks, both in SSDA and UDA settings. § PROPOSED METHOD Given the source domain image-label pairs {(x_s^i, y_s^i)_i=1^ℕ_s∈𝒮}, a few image-label pairs from target domain {(x_t1^i, y_t1^i)_i=1^ℕ_t1∈𝒯1}, and a large number of unlabeled target images {(x_t2^i)_i=1^ℕ_t2∈𝒯2}, our proposed pre-training stage learns from images in {𝒮∪𝒯; 𝒯=𝒯1 ∪𝒯2} in a self-supervised way, without requiring any labels. The following fine-tuning in SSDA considers image-label pairs in {𝒮∪𝒯1} for supervised learning alongside unlabeled images 𝒯2 in the target domain for unsupervised prediction consistency. Our workflow is shown in <ref>. §.§ Gaussian Fourier Domain Adaptation (GFDA) Manipulating the low-level amplitude spectrum of the frequency domain is the easiest way for style transfer between domains <cit.>, without notable alteration in the visuals of high-level semantics. 
However, as observed in <cit.>, the generated images consist of incoherent dark patches, caused by abrupt changes in amplitude around the rectangular mask. Instead, we propose a Gaussian mask for a smoother transition in frequency. Let, ℱ_A(·) and ℱ_P(·) be the amplitude and phase spectrum in frequency space of an RGB image, and ℱ^-1 indicates inverse Fourier transform. We define a 2D Gaussian mask g_σ of the same size as ℱ_A, with σ being the standard deviation. Given two randomly sampled images x_s ∼𝒮 and x_t∼𝒯, our proposed GFDA can be formulated as: x_s→ t = ℱ^-1[ℱ_P(x_s), ℱ_A(x_t) ⊙ g_σ + ℱ_A(x_s) ⊙ (1-g_σ) ], where ⊙ indicates element-wise multiplication. It generates an image preserving the semantic content from 𝒮 but preserving the style from 𝒯. Reciprocal pair x_t→ s is also formulated using the same drill. The source and target images, and the style-transferred versions {x_s, x_s→ t, x_t, x_t→ s} are then used for contrastive pre-training below. Visualization of GFDA is shown in the supplementary file. §.§ CL on Disentangled Domain and Content We aim to learn discriminative content-specific features that are invariant of the style of the source or target domain, for a better pre-training of the network for the task at hand. Hence, we propose to disentangle the style and content information from the images and learn them jointly in a novel disentangled CL paradigm: Style CL (SCL) and Content CL (CCL). The proposed SCL imposes learning of domain-specific attributes, whereas CCL enforces the model to identify the ROI, irrespective of the spatial semantics and appearance. In joint learning, they complement each other to render the model to learn domain-agnostic and content-specific information, thereby mitigating the domain dilemma. The set of images {x_s, x_s→ t, x_t, x_t→ s}, along with their augmented versions are passed through encoder ℰ, followed by two parallel projection heads, namely style head (𝒢_𝒮) and content head (𝒢_𝒞) to obtain the corresponding embeddings. Two different losses: style contrastive loss ℒ_SCL and content contrastive loss ℒ_CCL, are derived below. Assuming {x_s,x_t→ s} (along with their augmentations) having source-style representation (style A), and {x_t, x_s→ t} (and their augmentations) having target-style representation (style B), in style CL, embeddings from the same domain (style) are grouped together whereas embeddings from different domains are pushed apart in the latent space. Considering the i^th anchor point x^i_t∈𝒯 in a minibatch and its corresponding style embedding s^i_t ←𝒢_𝒮(ℰ(x^i_t)) (with style B), we define the positive set consisting of the same target domain representations as Λ^+={s_t^j+, s_s→ t^j+}←𝒢_𝒮(ℰ({x_t^j, x_s→ t^j})),∀ j∈minibatch, and negative set having unalike source domain representation as Λ^-={s_s^j-, s_t→ s^j-}←𝒢_𝒮(ℰ({x_s^j, x_t→ s^j})),∀ j∈minibatch. Following SimCLR <cit.> our style contrastive loss can be formulated as: ℒ_SCL = ∑_i,j-logexp(sim(s^i, s^j+)/τ)/exp(sim(s^i, s^j+)/τ) + ∑_j∈Λ^-exp(sim(s^i, s^j-)/τ), where {s^i,s^j+}∈style B; s^j-∈style A, sim(·,·) defines cosine similarity, τ is the temperature parameter <cit.>. Similarly, we define ℒ_CCL for content head as: ℒ_CCL =∑_i,j -logexp(sim(c^i, c^j+)/τ)/exp(sim(c^i, c^j+)/τ) + ∑_j∈Λ^-exp(sim(c^i, c^j-)/τ), where {c^i,c^j}←𝒢_C(ℰ({x^i,x^j})). These contrastive losses, along with the consistency constraint below enforce the encoder to extract domain-invariant and content-specific feature embeddings. 
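To illustrate the amplitude-blending equation above, the following NumPy sketch applies GFDA to a single-channel image (an RGB image would be processed per channel); the Gaussian-mask construction and the default σ are our illustrative choices rather than the authors' exact implementation.

```python
import numpy as np

def gaussian_mask(shape, sigma):
    """Centered 2D Gaussian weight for the fft-shifted amplitude spectrum."""
    h, w = shape
    yy, xx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def gfda(x_src, x_tgt, sigma=10.0):
    """Keep the source phase (content) and blend in the target low-frequency amplitude (style)."""
    F_src = np.fft.fftshift(np.fft.fft2(x_src))
    F_tgt = np.fft.fftshift(np.fft.fft2(x_tgt))
    amp_src, phase_src = np.abs(F_src), np.angle(F_src)
    amp_tgt = np.abs(F_tgt)
    g = gaussian_mask(x_src.shape, sigma)
    amp_mix = amp_tgt * g + amp_src * (1.0 - g)        # Gaussian-weighted amplitude swap
    F_mix = amp_mix * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_mix)))
```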
§.§ Consistency Constraint The disentangled CL aims to learn global image-level representation, which is useful for instance discrimination tasks. However, segmentation is attributed to learning dense pixel-level representations. Hence, we propose an additional Dense Feature Propagation Module (DFPM) along with a momentum encoder ℰ' with exponential moving average (EMA) of parameters from ℰ. Given any pixel m of an image x, we transform its feature f^m_ℰ' obtained from ℰ' by propagating other pixel features from the same image: f^m_ℰ' = ∑_∀ n∈ x𝒦(f_ℰ'^m)⊗cos(f_ℰ'^m, f_ℰ'^n) where 𝒦 is a linear transformation layer, ⊗ denotes matmul operation. This spatial smoothing of learned representation is useful for structural sensitivity, which is fundamental for dense segmentation tasks. We enforce consistency between this smoothed feature f_ℰ' from ℰ' and the regular feature f_ℰ from ℰ as: ℒ_Con = ∑_[d(m,n)<Th] -[cos(f^m_ℰ',f^n_ℰ)+cos(f^m_ℰ,f^n_ℰ')] where d(·,·) indicates the spatial distance, Th is a threshold. The overall pre-training objective can be summarized as: ℒ_Pre = λ_1ℒ_SCL+λ_2ℒ_CCL+ℒ_Con §.§ Semi-supervised Fine-tuning The pre-training stage is followed by semi-supervised fine-tuning using a student-teacher framework <cit.>. The pre-trained encoder ℰ, along with a decoder 𝒟 are used as a student branch, whereas an identical encoder-decoder network (but differently initialized) is used as a teacher network. We compute a supervised loss on the labeled set {𝒮∪𝒯1} along with a regularization loss between the prediction of the student and teacher branches on the unlabeled set {𝒯2} as: ℒ_Sup = 1/ℕ_s + ℕ_t1∑ _x^i∈{𝒮∪𝒯1}CE[𝒟_S(ℰ_S(x^i)),y^i ] ℒ_Reg = 1/ℕ_t2∑ _x^i∈{𝒯2}CE[𝒟_S(ℰ_S(x^i)), 𝒟_T(ℰ_T(x^i)) ] where CE indicates cross-entropy loss, ℰ_S, 𝒟_S, ℰ_T, 𝒟_T indicate the student and teacher encoder and decoder networks. The student branch is updated using a consolidated loss ℒ = ℒ_Sup + λ_3ℒ_Reg, whereas the teacher parameters (θ_T) are updated using EMA from the student parameters (θ_S): θ_T (t) = αθ_T(t-1) + (1-α)θ_S(t) where t tracks the step number, and α is the momentum coefficient <cit.>. In summary, the overall SSDA training process contains pre-training (<ref>-<ref>) and fine-tuning (<ref>), whereas, we only use the student branch (ℰ_S,𝒟_S) for inference. § EXPERIMENTS AND RESULTS Datasets: We evaluate our work on two different DA tasks to evaluate its generalizability: (1) Polyp segmentation from colonoscopy images in Kvasir-SEG <cit.> and CVC-EndoScene Still <cit.>, and (2) Brain tumor segmentation in MRI images from BraTS2018 <cit.>. Kvasir and CVC contain 1000 and 912 images respectively and were split into 4:1 training-testing sets following <cit.>. BraTS consists of brain MRIs from 285 patients with T1, T2, T1CE, and FLAIR scans. The data was split into 4:1 train-test ratio, following <cit.>. Source→Target: We perform experiments on CVC→ Kvasir and Kvasir→ CVC for polyp segmentation, and T2→{T1,T1CE,FLAIR} for tumor segmentation. The SSDA accesses 10-50% and 1-5 labels from the target domain for the two tasks, respectively. For UDA, only 𝒮 is used for ℒ_Sup, whereas 𝒯1∪𝒯2 is used for ℒ_Reg. Implementation details: Implementation is done in a PyTorch environment using a Tesla V100 GPU with 32GB RAM. We use U-Net <cit.> backbone for the encoder-decoder structure, and the projection heads 𝒢_𝒮 and 𝒢_𝒞 are shallow FC layers. The model is trained for 300 epochs for pre-training and 500 epochs for fine-tuning using an ADAM optimizer with a batch size of 4 and a learning rate of 1e-4. 
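A minimal PyTorch sketch of the fine-tuning step described above follows: supervised cross-entropy on the labeled set, a consistency term against the EMA teacher on unlabeled target images, and the EMA update of the teacher parameters. The function signatures and the handling of λ_3 are placeholders, not the authors' code (e.g., batch-norm buffers would typically also be synchronized).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """theta_T(t) = alpha * theta_T(t-1) + (1 - alpha) * theta_S(t)."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def ssda_step(student, teacher, optimizer, x_lab, y_lab, x_unlab, lambda3=0.5):
    loss_sup = F.cross_entropy(student(x_lab), y_lab)        # labeled source + few target labels
    with torch.no_grad():
        pseudo = teacher(x_unlab).softmax(dim=1)             # teacher prediction on unlabeled target
    # Soft-target cross entropy (supported in PyTorch >= 1.10) as the consistency regularizer
    loss_reg = F.cross_entropy(student(x_unlab), pseudo)
    loss = loss_sup + lambda3 * loss_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```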
λ1,λ2,λ3, and Th are set to 0.75,0.75,0.5,0.6, respectively by validation, τ, α are set to 0.07,0.999 following <cit.>. Augmentations include random rotation and translation. Metrics: Segmentation performance is evaluated using Dice Similarity Score (DSC) and Hausdorff Distance (HD). §.§ Performance on SSDA Quantitative comparison of our proposed method with different SSDA methods <cit.> for both tasks are shown in <ref> and <ref>. ACT <cit.> simply ignores the domain gap and only learns content semantics, resulting in substandard performance on the BraTS dataset that has a significant domain gap. FSM <cit.>, on the other hand, is adaptable to learning explicit domain information, but lacks strong pixel-level regularization on its prediction, resulting in subpar performance. We address both of these shortcomings in our work, resulting in superior performance on both tasks. Other methods like <cit.>, which are originally designed for natural images, lack critical refining abilities even after fine-tuning for medical image segmentation and hence are far behind our performance in both tasks. The margins are even higher for less labeled data (1L) on the BraTS dataset, which is promising considering the difficulty of the task. Moreover, our method produces performance close to its fully-supervised counterpart (last row in <ref> and <ref>), using only a few target labels. §.§ Performance on UDA Unlike SSDA methods, UDA fully relies on unlabeled data for domain-invariant representation learning. To analyze the effectiveness of DA, we extend our model to the UDA setting (explained in <ref>[Source→Target]) and compare it with SoTA methods <cit.> in <ref> and <ref>. Methods like <cit.> rely on adversarial learning for aligning multi-level feature space, which is not effective for small-sized medical data. Other methods <cit.> rely on an image-translation network but fail in effective style adaptation, resulting in source domain-biased subpar performance. Our method, although relies on FDA <cit.>, outperforms it with a large margin of upto 12.5% DSC for polyp segmentation, owing to its superior learning ability of disentangled style and content semantics. Similar results are observed for the BraTS dataset in <ref>, where our work achieved a margin of upto 2.4% DSC than its closest performer. §.§ Ablation Experiments We perform a detailed ablation experiment, as shown in <ref>. The effectiveness of disentangling and joint-learning of style and content information is evident from the experiment (b)&(c) as compared to (a), where the introduction of SCL and CCL boosts overall performance significantly. Moreover, when combined together (experiment (d)), they provide a massive 9.54% and 8.52% DSC gain over traditional CL (experiment (a)) for CVC→ Kvasir and Kvasir→ CVC, respectively. This also points out a potential shortfall of traditional CL: its inability to adapt to a complex domain in DA. The proposed DFPM (experiment (e)) provides local pixel-level regularization, complementary to the global disentangled CL, resulting in a further boost in performance (∼ 1.5%). We have similar ablation study observations on the BraTS2018 dataset, which is provided in the supplementary file, along with some qualitative examples along with available ground truth. § CONCLUSION We propose a novel style-content disentangled contrastive learning, guided by a pixel-level feature consistency constraint for semi-supervised domain adaptive medical image segmentation. 
To the best of our knowledge, this is the first attempt at SSDA for medical image segmentation using CL, which is further extended to the UDA setting. Our proposed work, upon evaluation on two different domain adaptive segmentation tasks in SSDA and UDA settings, outperforms the existing SoTA methods, justifying its effectiveness and generalizability. Supplementary for Disentangled Consistency Contrast H. Basak et al. Dept. of Computer Science, Stony Brook University, NY, USA Supplementary File: Semi-supervised Domain Adaptive Medical Image Segmentation through Consistency Regularized Disentangled Contrastive Learning Hritam Basak, Zhaozheng Yin August 1, 2023 ================================================================================================================================================
http://arxiv.org/abs/2307.01474v2
20230704045130
Photodisintegration Cross Section of $^4$He in the Giant Dipole Resonance Energy Region
[ "M. Murata", "T. Kawabata", "S. Adachi", "H. Akimune", "S. Amano", "Y. Fujikawa", "T. Furuno", "K. Inaba", "Y. Ishii", "S. Miyamoto", "M. Tsumura", "H. Utsunomiya" ]
nucl-ex
[ "nucl-ex" ]
Research Center for Nuclear Physics (RCNP), Osaka University, Ibaraki, Osaka 567-0047, Japan Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Sendai, Miyagi 980-8578, Japan Faculty of Science and Engineering, Konan University, Kobe, Hyogo 658-8501, Japan Laboratory of Advanced Science and Technology for Industry (LASTI), University of Hyogo, Ako, Hyogo 678-1205, Japan Department of Physics, Kyoto University, Sakyo, Kyoto 606-8502, Japan Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan Department of Physics, Kyoto University, Sakyo, Kyoto 606-8502, Japan Department of Physics, Kyoto University, Sakyo, Kyoto 606-8502, Japan Laboratory of Advanced Science and Technology for Industry (LASTI), University of Hyogo, Ako, Hyogo 678-1205, Japan Institute of Laser Engeneering, Osaka University, Suita, Osaka 565-0871, Japan Department of Physics, Kyoto University, Sakyo, Kyoto 606-8502, Japan Faculty of Science and Engineering, Konan University, Kobe, Hyogo 658-8501, Japan Shanghai Advanced Research Institute, Chinese Academy of Science, Shanghai 201210, China We simultaneously measured the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions in the energy range around the giant dipole resonance. A quasi-mono-energetic photon beam produced via the laser Compton scattering technique was irradiated on the active-target time-projection chamber filled with helium gas, and trajectories of charged decay particles emitted from ^4He were measured. Our data suggest that the ^4He(γ, n)^3He and ^4He(γ, p)^3H cross sections peak around 26 MeV. This result contradicts the previous experimental data reported by Shima et al. but is consistent with other experimental results. Photodisintegration Cross Section of ^4He in the Giant Dipole Resonance Energy Region H. Utsunomiya August 1, 2023 ===================================================================================== § INTRODUCTION The isovector giant dipole resonance (GDR) is one of the most examined nuclear resonances, which is excited through the E1 transition of nuclear ground states. It is a representative example of the collective excitation modes of atomic nuclei in which protons and neutrons coherently oscillate in antiphase, and its energy-integrated cross section exhausts approximately 100% of the Thomas-Reiche-Kuhn sum-rule value <cit.>. The photodisintegration reaction is a suitable probe to investigate the GDR because the photoabsorption dominantly induces E1 transition as the long wave-length approximation commonly holds in various nuclei. The cross section of the ^4He photodisintegration reaction in the GDR energy region has recently attracted research interest because it is an important aspect for understanding nucleosynthesis process in the universe. One example is the ν-process in the He-layer of core-collapse supernovae <cit.>. In this process, rare elements such as ^7Li and ^11B are produced through a series of nuclear reactions initiated by the ^4He(ν,ν ^' n) and ^4He(ν,ν ^' p) reactions <cit.>. The giant resonances, such as GDR and spin-dipole resonances, make a dominant contribution on these reactions because the ^4He(ν,ν^') reaction primarily excites these resonances with L=1  <cit.>. The Gamow-Teller resonance is approximately forbidden in ^4He due to the system's double magicity. 
Because of technical difficulties in measuring neutrino-nucleus reactions, estimation of neutrino-nucleus reaction cross sections for examining the ν-process relies on the nuclear structure theories. Nuclear structural information is therefore a source of uncertainty with regard to the calculations. By using the analogy between electromagnetic responses and weak responses of nuclei <cit.>, the experimental cross sections of the ^4He photodisintegration reaction can provide a criterion to test the validity of the estimated neutrino-nucleus reactions cross sections. Furthermore, a relationship was also discussed between the ^4He photodisintegration and the lithium problem in the Big Bang nucleosynthesis (BBN)  <cit.>. The lithium problem is an unsolved discrepancy between the primordial abundances of lithium isotopes estimated from the astronomical observation and those predicted from the BBN calculation. It is an intriguing problem that the primordial abundance of ^7Li estimated from observing the metal-poor halo starts is deficient with regard to the standard BBN prediction by a factor of three <cit.>. In addition, the possibility of an overabundance of ^6Li at the level of approximately three orders of magnitude was also suggested from the spectroscopic measurements <cit.>. It was proposed that the extended BBN modified with non-thermal photons produced through radiative decays of unstable relic neutral massive particles might solve the lithium problem <cit.>. Although the relic particle has not yet been observed, its possible mass, abundance, and lifetime are constrained by the cross sections of the ^4He photodisintegration reaction. The experimental studies on the ^4He photodisintegration reaction have continuously been reported since 1950s <cit.>. Most of the previous studies measured one of the (γ, n) and (γ, p) reactions and fewer studies were conducted using the simultaneous measurement  <cit.>. Some authors measured the photodisintegration reactions directly <cit.>, while others deduced its cross section from the surrogate reactions, such as the radiative capture reactions, which is the inverse reaction of the photodisintegration reactions <cit.>. Although the direct measurements in early years were conducted with continuous-energy photon beams generated with bremsstrahlung, the quasi-mono-energetic beam generated from the laser Compton scattering (LCS) <cit.> or tagged photon technique <cit.> were employed in the recent measurement. However, these previous results exhibit significant deviation beyond their respective error ranges. Therefore, it is difficult to extract reliable information to constrain theoretical calculations. Here, we focused on the recent experimental studies <cit.> in which quasi-mono-energetic photon beams were employed, and both the (γ, n) and (γ, p) channels were simultaneously measured. Regarding the reaction probe, quasi-mono-energetic photon beams are more suitable than continuous-energy beams since the unfolding procedure is necessary for the continuous-energy beams. This unfolding process could cause significant errors on the cross sections, unless the stability of the accelerator, sufficient counting statistics, and understanding of the beam-energy spectrum were guaranteed. Furthermore, the simultaneous measurement is desirable to obtain reliable cross sections of the (γ, n) and (γ, p) reactions because one can cancel out the experimental errors due to the target thickness and the beam flux. Firstly, Shima et al. 
<cit.> performed the measurement using a time-projection chamber (TPC). By operating the TPC as an active target, they measured the trajectories of charged particles and their energy deposits along the trajectories. Their deliberate experimental design and careful treatments of the data made it possible to clearly distinguish photodisintegration events from background events. They reported that no peak structures were observed in the photon-energy dependence of the cross sections of the (γ, n) and (γ, p) reactions at E_γ < 30 MeV. A follow-up measurement by Shima et al. presented a preliminary result suggesting that the peak of the GDR is located around E_γ = 32 MeV <cit.>. Raut et al. <cit.> and Tornow et al. <cit.> carried out the measurement in the HIγS facility using a high-pressure ^4He-Xe gas scintillator as an active target. They identified photodisintegration events of ^4He by measuring the total energy deposit of decay charged particles in the detector. The thick target and intense beam allowed them to obtain statistically high precision data. The theoretical calculations  <cit.> describe the measured cross sections reasonably well and suggest that the peak of the GDR is located around E_γ = 26 MeV. However, they did not report the cross sections of the (γ, n) reactions in the lower energy region of E_γ < 27 MeV. Thus, we can not conclude the energy dependence of the cross sections of the (γ, n) and (γ, p) channels solely from their result. From a theoretical perspective, an elaborate calculation using a realistic nuclear force with a three-body interaction was reported <cit.>. The cross sections of the ^4He(γ, n)^3He and the ^4He(γ, p)^3H reactions were calculated in the microscopic R-matrix approach, in which final state interactions and two- and three-body decay channels were considered. Both theoretical cross sections of the (γ, n) and (γ, p) reactions rise sharply from the threshold and become maximal at E_γ= 26 MeV. This result is consistent with the cross sections previously calculated using the Lorentz integral transform method <cit.> and discrepant with the results by Shima et al. The data from Shima et al. contradicts most of previous experimental data and theoretical calculations, and thus, the results appear unfavorable. However, their measurements using an active target TPC to track the charged particle trajectories in the final state are unprecedentedly sophisticated. Background events could be effectively rejected with information from the particle trajectories. Therefore, the possibility that their results differing from the other experiments are correct cannot be ruled out owing to their improved methodology. In order to solve this situation, we simultaneously measured the cross sections of the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions using one experimental setup with a new active target TPC for elucidating the energy dependence of the cross sections in the GDR region. In this paper, we report the cross sections of the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions in the energy region between E_γ = 23–30 MeV measured using a quasi-mono-energetic photon beams generated with the LCS technique and the active target TPC. § EXPERIMENT The experiment was performed at the beam line 01 (BL01) <cit.> in the NewSUBARU synchrotron radiation facility. Figure <ref> shows an overview of the experimental setup. A quasi-mono-energetic gamma-ray beam with a maximum energy of E_γ= 23–30 MeV was produced using the LCS technique. 
Laser photons generated using the solid-state laser (Nd:YVO_4 λ = 1,064 nm) were injected into the storage ring at NewSUBARU and impinged on the electron beam circulating in it. The energy of the laser photon was amplified through Compton scattering with a high-energy electron. The resultant energy was determined from its scattering angle and the energy of the electron beam. Photons scattered at 180^∘, which had the highest energy, were selected by limiting the scattering angle with the lead collimators installed on the BL01. A pair of collimators was located at 1,547 and 1,848 cm away from the collision point, and their apertures were 3 mm and 2 mm, respectively. The energy of the gamma-ray beam was tunable by altering the energy of the electron beam, and its absolute value was precisely calibrated <cit.>. For this experiment, an electron beam with an energy of 1 GeV delivered from the injector linear accelerator was accelerated to 1.146–1.311 GeV in the storage ring. A linearly polarized gamma-ray beam was employed in the present measurement. The laser photons were almost 100% linearly polarized, and a half-wave plate was used to tilt the polarization axis of the laser photons at 10^∘ from the horizontal axis. Because the polarization of the LCS gamma rays is maximum at the scattering angle of 180^∘ <cit.>, highly polarized beams were obtained at the GACKO beam hutch. The gamma-ray beam was monitored with the NaI(Tl) scintillation detector installed at the end of the BL01. A crystal of the NaI(Tl) scintillator was cut into a cylindrical shape with a diameter of 203.2 mm (8 inches) and a thickness of 304.8 mm (12 inches). The gamma-ray beam provided through the collimator of the finite-size aperture inevitably had an energy spread as the energy of the scattered photon correlates to the Compton scattering angle. In order to estimate the total photon number irradiated on the target, the energy spectra deposited to the NaI(Tl) scintillator by the LCS photon beams were acquired during the measurement. In addition, the energy spectrum with a low-intensity beam was also measured at each beam energy to deduce the energy profile. Charged particles emitted from the photodisintegration of ^4He were measured using the MAIKo active target <cit.>. As shown in Fig. <ref>, the MAIKo active target was installed in the GACKO beam hutch on the BL01. A schematic view of the MAIKo active target is shown in Fig. <ref>. The MAIKo active target worked as a TPC. A negative high voltage was applied on the cathode plate at the top of the MAIKo to form a vertical electric field. Field wires made of beryllium copper were doubly wound around the pillars with 5-mm intervals to form a uniform electric field. The sensitive volume of the TPC was the cubic region enclosed in the field cage. Its dimension defined with the area of the read-out electrodes, and the field cage was 10 × 10 × 11 cm^3 in width, depth, and height, respectively. The field cage was aligned so that one dimension of it was normal to the beam direction. As shown in Fig. <ref>, hereinafter we define the three-dimensional Cartesian coordinates such that the z-axis is parallel to the beam direction and the y-axis is vertical. The whole structure shown in Fig. <ref> was installed in the vacuum chamber filled with the detection gas. The detection gas comprised helium with 90% and CH_4 with 10% by their partial pressures, in which helium and CH_4 served as the target and quench gas, respectively. 
The gas pressure was optimized to realize the condition that the ranges of the proton and ^3H were long enough to escape from the sensitive volume, whereas that of ^3He was short enough to stop inside the sensitive volume, as summarized in Table <ref>. As an incident gamma ray was absorbed by a ^4He nucleus inside the sensitive volume, the ^4He nucleus decayed into a proton and a ^3H nucleus (a neutron and a ^3He nucleus), which traveled in the opposite directions. Those charged decay particles ionized gas molecules along their trajectories and kicked out electrons. These electrons were drifted toward the negative direction of the y-axis by the uniform electric field inside the drift cage and gas-amplified by the gas electron multiplier (GEM). Finally, the amplified electrons reached the micro-pixel chamber (μ-PIC) <cit.> at the bottom of the MAIKo active target. The μ-PIC consisted of 400-μm pitched 256 × 256 circular pixels aligned in the square region. Each pixel was composed of a cylindrical anode electrode with a diameter of 50 μm at the center and a cathode electrode on the circumference with a diameter of 256 μm. The electrons reached the pixels, caused a Townsend avalanche in the high electric field formed around the anode electrodes, and induced electric signals on the anode and cathode electrodes. The anode (cathode) electrodes on the pixels in line were electrically connected to form 256 anode (cathode) strips for signal readout. The electric signals induced on the μ-PIC were read out and processed with the dedicated electronics with the amplifier-shaper-discriminator (ASD) <cit.>. When a trigger signal was provided to the signal readout electronics, the status of the discriminators at every 10 ns (100 MHz) was recorded as a function of the clock number within a time window of 10.24 μs. Thus, the acquired data were equivalent to two black-and-white images with 256 × 1024 pixels in which the time-over-threshold of the signal was presented with the filled pixels, as shown in Fig. <ref>. The clock number corresponds to the vertical position from where the trigger event occurred. In contrast, the strip number is the horizontal position. Because the directions of the cathode strips were parallel to the z-axis, and those of the anode strips were parallel to the x-axis, the black-and-white image obtained from the anode (cathode) strips presented particle trajectories projected onto the xy (zy) plane. Figures <ref>(a) and (b) show two-dimensional images of a typical ^4He(γ, n)^3He event, whereas Figs. <ref>(c) and (d) are those of a typical ^4He(γ, p)^3H event. ^3He was observed as one thick trajectory, and proton and ^3H left two thin and medium long trajectories. All trajectories of charged particles started from the horizontal center of cathode images, where the beam-injection point was located. By combining a couple of images, we could reconstruct the three-dimensional configuration of the charged particle trajectories. Analog signals on adjacent 32 consecutive strips were summed up, and the waveform of the summed signal was also sampled at a frequency of 25 MHz. Because the time constant of the shaping amplifiers was shorter than the typical time scale of the induced signal, one could obtain energy-deposit information by integrating the waveform. The summed signals were also employed for the trigger decision. The trigger signal was generated when the pulse height of any summed signals exceeded the threshold level. 
The threshold level was set sufficiently low so as to avoid missing the photodisintegration events. The trigger rate was less than 100 Hz, and the data acquisition efficiency was higher than 99%. § ANALYSIS §.§ Event Selection Firstly, candidates of the ^4He photodisintegration events were selected out of all the events acquired with the MAIKo active target. The main sources of background events were electrons emitted from Compton scattering and discharges that occurred on the μ-PIC. Photodisintegration events of ^12C in CH_4 molecules were also a source of the background. Another source would be the photodisintegration events occurring outside the sensitive volume. The shapes of the analog signals resulting from the Compton scattering events and the discharging events were distinctively different from those of the photodisintegration events. Furthermore, the photodisintegration events that arose outside the sensitive volume did not produce any signals significantly higher than the noise level on the μ-PIC electrodes adjacent to the beam axis. Thus, these background events were safely excluded using the pulse-shape information. These event selection criteria were set loose enough so as not to exclude the ^4He photodisintegration events. After this selection procedure, the number of events was reduced by one order of magnitude. Secondly, we performed tracking analysis to extract the ^4He(γ, n)^3He and ^4He(γ, p)^3H events from the candidate events selected by the pulse-shape analysis. The ^4He(γ, n)^3He events exhibited one short thick trajectory due to ^3He, which terminated inside the sensitive volume. On the other hand, the ^4He(γ, p)^3H events left two long thin and medium trajectories due to a proton and a ^3H nucleus which continued to the boundary of the sensitive volume. The procedure of the tracking analysis was as follows. Following the cartesian coordinate defined in Fig. <ref>, we refer to the track image obtained from the anode (cathode) strips as ZY (XY) image hereafter. * Clear false hit pixels due to electric noise in the ZY and XY images. * Determine the reaction point in the XY image. The x position of the reaction point was fixed at the center of the beam axis, and the y position was determined by averaging the y positions of the hits around the beam axis. * Track the trajectories starting from the reaction point in the XY image and derive the number, direction, and length of the trajectories. * By using the y position of the reaction point in the XY image, determine the reaction point in the ZY image, and track the trajectories in the ZY image. * Based on the y positions of the end points in the XY and ZY images, determine the combination of the trajectories in the two images to reconstruct the three-dimensional trajectories. After the tracking analysis, the analyzed events were classified with the number of reconstructed trajectories. When only one trajectory was reconstructed, this event was classified as a candidate of the ^4He(γ, n)^3He event. If two trajectories were reconstructed, this event was classified as a candidate of the ^4He(γ, p)^3H event. For further analysis, we calculated total energy loss in the sensitive volume (E) and the differential energy loss (dE/dx) of the reconstructed trajectory. We utilized them to reject the remaining background due to the ^12C photodisintegration. For the ^4He(γ, n)^3He candidates, we required the five conditions below. * The number of reconstructed trajectories is one. 
* The reaction point is at least 1 cm distant from the edge of the sensitive volume. * The end point of the trajectory is inside the sensitive volume. * The correlation between E and the trajectory length is consistent with that of ^3He. * Kinematically reconstructed beam energy is in the range between E_max-1 and E_max MeV. For the ^4He(γ, p)^3H candidates, we imposed five conditions below. * The number of reconstructed trajectories is two. * The reaction point is at least 1 cm distant from the edge of the sensitive volume. * The two trajectories reach the edge of the sensitive volume. * dE/dx of one trajectory is consistent with that of a proton, and dE/dx of the other trajectory is consistent with that of ^3H. * The two trajectories are oriented in a back-to-back direction in the XY image. Finally, we defined events that satisfy the above conditions the procedure as true photodisintegration events. Because there should be some true events lost in the tracking and event-selection procedures, it was necessary to consider the detection efficiency of the true events with the MAIKo active target to determine the cross sections. §.§ Efficiency Estimation The detection efficiency with the MAIKo active target was estimated using a Monte Carlo simulation. We generated photodisintegration events inside the sensitive volume of the MAIKo active target and simulated various process such as ionization of the detection gas by decay particles, transport process of electrons, gas amplification on the GEM and μ-PIC, and response of the readout circuits. First, the ionization process was simulated using the SRIM code <cit.>. Second, the transport process of generated electrons was simulated with the Garfield++ code <cit.> by using the drift velocity and diffusion coefficient of electrons calculated with the Magboltz code <cit.>. Third, the gas amplification process of the electrons reached the GEM, and μ-PIC was calculated with a simple stochastic model in which the amplification factor was assumed to be a random variable according to the Pólya distribution. Finally, the pseudo data set (hit pattern and signal shape) was virtually produced by a software that emulated the readout circuit <cit.>. The parameters for this simulation were optimized to reproduce the real data taken in the experiment. The detection efficiency was estimated by analyzing the simulated pseudo data. We defined the detection efficiency as the survival ratio of the produced events after the analysis. We evaluated the efficiency at every polar and azimuthal angular bin with steps of Δθ = Δϕ = 20 ^∘ and at energy bins with a step of Δ E_γ = 500 keV over E_γ = 19–30 MeV. §.§ Beam Analysis The energy profile and total photon number of the beams irradiated on the MAIKo active target were estimated from the energy spectrum measured with the beam monitor located at the most downstream of the beam line. A NaI(Tl) scintillation detector was employed as the beam monitor. The energy profile was estimated from a Monte Carlo simulation of the LCS beam generation process. We generated LCS photons at the collision point using realistic values of the parameters for the electron beam and the laser optical system, such as beam emittance and beam size. We also virtually transported the LCS photons toward the NaI(Tl) beam monitor in the experimental hutch. The interactions of the LCS photons with the beam-line materials were simulated by using the Geant4 toolkit <cit.>. 
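Returning to the event-selection criteria listed earlier in this section, the following NumPy sketch shows schematically how the five (γ, n) conditions could be applied as boolean cuts to arrays produced by the tracking stage; all array names, the fiducial margin, and the ^3He consistency band are hypothetical placeholders, not the actual analysis code.

```python
import numpy as np

def select_gamma_n(events, e_max_mev, fid_margin_cm=1.0):
    """Boolean mask implementing the five (gamma, n) selection conditions.

    `events` is assumed to be a dict of equal-length NumPy arrays from the tracking analysis.
    """
    one_track = events["n_tracks"] == 1
    in_fiducial = events["vertex_edge_distance_cm"] >= fid_margin_cm
    stops_inside = events["track_contained"]                       # end point inside the volume
    # Total energy loss vs. track length consistent with 3He (band width is illustrative)
    he3_band = np.abs(events["total_energy_loss_mev"]
                      - events["expected_he3_loss_mev"]) < events["he3_band_width_mev"]
    beam_window = (events["reco_e_gamma_mev"] > e_max_mev - 1.0) & \
                  (events["reco_e_gamma_mev"] <= e_max_mev)
    return one_track & in_fiducial & stops_inside & he3_band & beam_window
```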
The incident energy spectrum of the LCS photons estimated by the simulation is shown in Fig. <ref>(a). Its deduced energy-deposit spectrum folded with the finite energy resolution of the NaI(Tl) detector was compared with the experimental spectrum measured when the beam intensity was as low as several thousands photons per second in Fig. <ref>(b). Figure <ref>(a) shows a typical energy spectrum measured with the beam monitor when the maximum LCS-photon energy was 30.0 MeV, and the beam intensity was approximately 172 k photons per second. Notably, several discrete peak structures were observed. These structures were due to the multi-hit events in which multiple LCS photons were detected simultaneously. The LCS photons intermittently arrived at the beam monitor at 16 kHz, which was synchronized with the repetition frequency of the laser module for the LCS photon production. In a single beam pulse, the number of photons varied between one and a few tens. Because the time difference among the LCS photons in one pulse was shorter than the time resolution of the beam monitor, these photons were detected as a single signal. The discrete peaks in the spectrum correspond to one, two, three, and more photons in the order from left. This distribution can be resolved as a linear combination of detector-response functions for various photon multiplicities. The response function for one photon injection was obtained from the measurement with a low-intensity beam. The multi-photon response templates of up to 60 photons were also generated from the iterative convolutions of the one photon response. Figure <ref>(b) shows the response functions for various photon multiplicities. These functions are normalized so that the integral values are equal to unity. Moreover, the response function of the background events caused by a source other than the LCS was generated from the measurement with the laser turned off. The counts at the x ch in the measured energy spectrum should be described by the template function T(x) composed of a linear combination of the detector-response functions and the background response as follows T(x) = ∑_n=1^60w_n t_n(x) + w_BGt_BG(x), where t_n(x) and t_BG(x) stand for the i-photons and the background response functions, and w_i and w_BG are the weighting factors for the response functions. In addition, the quenching effect of the signals induced on the NaI(Tl) scintillator was also taken into account by distorting T(x) using the empirical saturation function described in Ref. <cit.>. We fitted this function to the measured energy spectra to determine the weighting factors and the parameters for the saturation function reproducing the spectra best. Figure <ref>(c) shows the best-fit template functions compared with the measured spectrum. The distorted response functions multiplied by the weight factors are also displayed with green dashed lines. The total photon number irradiated on the MAIKo active target N_γ was determined as N_γ = ∑_n=1^60 n w_n, by using the weighting factors determined from the fitting. The statistical uncertainty of the total photon number was estimated to be approximately 0.2%. The systematic uncertainty originated from the fitting procedure, the quenching of detector output, the contribution from the background events, and the photon multiplicity higher than 60 was approximately 4%. 
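As an illustration of the pile-up decomposition described above, the following NumPy/SciPy sketch builds the multi-photon response templates by iterative self-convolution of the single-photon response and extracts the weights with a non-negative least-squares fit; this stands in for the likelihood fit, omits the saturation correction, and truncates the convolved templates to the histogram range, all simplifications of the actual procedure.

```python
import numpy as np
from scipy.optimize import nnls

def build_templates(single_response, n_max=60):
    """t_n(x): n-photon responses from iterative self-convolution of the 1-photon response."""
    t1 = single_response / single_response.sum()
    templates = [t1]
    for _ in range(1, n_max):
        t_next = np.convolve(templates[-1], t1)[: len(t1)]   # truncate to the histogram range
        templates.append(t_next / t_next.sum())
    return np.array(templates)                                # shape (n_max, n_bins)

def fit_photon_number(spectrum, single_response, bg_response, n_max=60):
    """Fit T(x) = sum_n w_n t_n(x) + w_BG t_BG(x) and return N_gamma = sum_n n * w_n."""
    templates = build_templates(single_response, n_max)
    bg = bg_response / bg_response.sum()
    A = np.vstack([templates, bg[None, :]]).T                 # columns: t_1 ... t_nmax, t_BG
    weights, _ = nnls(A, spectrum.astype(float))              # non-negative w_n and w_BG
    return float(np.arange(1, n_max + 1) @ weights[:n_max])
```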
§.§ Cross Sections Energy averaged total cross sections of the photodisintegrations were determined from the yield (Y), the detection efficiency (ϵ), and total photon number (N_γ). Here, we define the energy-profile function f_k(E) where k is the index of the beam energy listed in Table <ref>. The energy-profile function f_k(E) was estimated by the Monte Carlo simulation described in Sec. <ref> [see Fig. <ref>(a)] and normalized so that the energy-integral value was equal to the total photon number N_γ, k as N_γ, k = ∫_0^E_max, k dE f_k(E), where E_max,k denotes the maximum energy of the photon beam. In preparation for determining the total cross sections, we first evaluated the differential cross section at each angular bin using the following formula ⟨dσ/dΩ⟩_θ_i, ϕ_j, k = Y_i, j, k/[∫_0^E_max,kdE f_k(E) ϵ_i, j, k(E) ]  τΔΩ_i,j, where i and j are the indices of the angular bins of θ and ϕ whereas k is the index of the beam energy. Here τ is the target thickness given in the areal number density of ^4He nuclei, and ΔΩ_i, j is the solid angle of the angular bin with Δϕ = Δθ = 20^∘ around θ_i and ϕ_j, respectively. The total cross section is obtained by integrating the differential cross sections over the full solid angle of 4π. In the present analysis, however, the cross sections in the angular bins with lower detection efficiencies must be removed from the integration because those angular bins caused larger systematic uncertainties amplified by the efficiencies. Hereafter, we define ΔΩ̃_k as the angular range in which the differential cross sections are reliable and included in the analysis. In order to obtain the total cross section, it is necessary to deduce the correction factor for converting the cross section integrated over the partial solid angle ΔΩ̃_k into that over 4π. For this purpose, we assume the photodisintegration reactions in the present energy region are primarily induced through the E1 transition, and model the angular distribution of the differential cross sections at E_γ = E by the following formula (dσ_M/dΩ)_k = σ_M(E) {3/8πsin ^2 θ[ 1+α_kcos 2(ϕ - ϕ̃)̃] }, where θ and ϕ are the polar and azimuthal angles of the decay particles in the center of mass frame. ϕ̃ and α_k are the polarization direction of the linearly polarized photon beam and the azimuthal asymmetry parameter, respectively. σ_M corresponds to the total cross section obtained by integrating the modeled differential cross section over 4π. In addition, we assume that the energy dependence of the modeled total cross section σ_M(E) is given by σ_M (E) = ∑_l=1^4p_l(E-E_th)^l (E ≥ E_th) 0 (E<E_th), where E_th is the threshold energy of the photodisintegration reactions. Based on the cross section model described above, the expected yield Ỹ at a certain angular bin is estimated as Ỹ_θ_i, ϕ_j, k = τ∫ dΩ∫_0^E_max,k dE [ I_ΔΩ_i, j( dσ_M/dΩ)_k f_kϵ_i, j, k]. Here, for the sake of simplicity, the arguments (E, θ, and ϕ) of the functions in the integrand are omitted. I_A is an indicator function on the angular range which is defined as, I_A(x) = 1 (x ∈ A) 0 (x ∉ A). Therefore, the angular integration in Eq. (<ref>) was done within an angular bin ΔΩ_i, j with Δϕ=Δθ=20^∘ around θ_i and ϕ_j. This angular bin is the same as that employed in Eq. (<ref>). The unknown parameters, α_k and p_l in Eqs. (<ref>) and (<ref>), were optimized to maximize the likelihood of the expected yields Ỹ by comparing them with the measured yields Y at all angular bins and beam energies while ϕ̃ was fixed at ϕ̃ = 10 ^∘. 
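As a numerical illustration of the modeled E1 angular distribution above, the following NumPy sketch integrates it over a restricted angular acceptance, which is the quantity entering the solid-angle correction discussed next; the grid resolution and the example parameter values are arbitrary choices, not those of the actual analysis.

```python
import numpy as np

def e1_angular_model(theta, phi, sigma_tot, alpha, phi_pol=np.deg2rad(10.0)):
    """d(sigma_M)/dOmega for an E1 transition with a linearly polarized beam."""
    return sigma_tot * 3.0 / (8.0 * np.pi) * np.sin(theta) ** 2 \
        * (1.0 + alpha * np.cos(2.0 * (phi - phi_pol)))

def partial_solid_angle_integral(accept, sigma_tot=1.0, alpha=0.3, n_theta=600, n_phi=1200):
    """Integrate the model over the accepted angular range; over the full 4*pi this returns sigma_tot."""
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False) + np.pi / (2 * n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False) + np.pi / n_phi
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    integrand = e1_angular_model(tt, pp, sigma_tot, alpha) * np.sin(tt)   # Jacobian sin(theta)
    integrand = np.where(accept(tt, pp), integrand, 0.0)
    return integrand.sum() * (np.pi / n_theta) * (2.0 * np.pi / n_phi)    # midpoint Riemann sum

# Example: acceptance restricted to 40 deg < theta < 140 deg; the correction factor
# is then sigma_tot divided by this partial integral.
# partial = partial_solid_angle_integral(lambda t, p: (t > np.deg2rad(40)) & (t < np.deg2rad(140)))
```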
The average beam energy ⟨ E ⟩_k was calculated from the weighted mean of the beam energy defined as follows: ⟨ E ⟩_k = ∫_0^E_max, k dE ∫ dΩ{ E [ I_ΔΩ̃_k(dσ_M/dΩ)_k f_k ϵ_i, j, k]}/∫_0^E_max, k dE ∫ dΩ[ I_ΔΩ̃_k(dσ_M/dΩ)_k f_k ϵ_i, j, k]. Finally, the measured cross sections were corrected to obtain the experimental total cross section at the average beam energy ⟨ E ⟩_k by the following formula ⟨σ⟩ _k = C_k ⟨σ⟩_ΔΩ̃_k = C_k ∑^ΔΩ̃_k_θ_i, ϕ_j⟨dσ/dΩ⟩_θ_i, ϕ_j,kΔΩ_i,j where the correction factor C_k is given as C_k = σ_M(⟨ E ⟩_k)/∫ dΩ[ I_ΔΩ̃_k( dσ_M/dΩ)_k (⟨ E ⟩_k) ]. The second factor in Eq. (<ref>), ⟨σ⟩_ΔΩ̃_k is the integration of the differential cross section in Eq. (<ref>) over the angular bins adopted in the analysis. The correction factor C_k corresponds to the ratio of the modeled cross section integrated over 4π to that over ΔΩ̃_k. The uncertainty of the total cross sections due to the assumption on the angular distribution of the differential cross section was evaluated by using a general formula given in Ref. <cit.> instead of Eq.(<ref>), and that due to the definition of ΔΩ̃_k was also estimated by varying ΔΩ̃_k. They were included in the systematic uncertainty of the total cross sections. § RESULTS & DISCUSSION The total cross section of the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions measured in the present work are given in Table <ref> and plotted using the upward triangles in Fig. <ref>. The statistical and systematic uncertainties are shown as the bars and the bands, respectively. The systematic uncertainties considered in the present study are summarized below. The uncertainty, which is primarily due to the correction of the solid angle and the model selection, is estimated to be varying between ten and a few tens of percentages, depending on the measurement conditions. In addition, the uncertainty from the efficiency estimations due to the incompleteness of the Monte Carlo simulations was approximately 10%, which is taken from Ref. <cit.>. As discussed in Sec. <ref>, the uncertainty of the incident photon number was 4%. We monitored and controlled the density of the target gas, and uncertainty originated from that was 0.1% or less. The experimental total cross sections measured with the quasi-mono-energetic photon beams <cit.> were selected from numerous previous studies and compared with the present results in Fig. <ref>. The present results showing the GDR peak structure around E_γ=26 MeV in both the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions do not support those reported by Shima et al. <cit.>, but are consistent with those from the HIγS group <cit.>. The uncertainties in the present results are slightly higher than those in previous studies, but smaller than the difference between the results by Shima et al. and those by the HiγS group. Notably, the present ^4He(γ,n)^3He result also agrees with that by the MAX-lab group <cit.>, which was obtained by using energy-tagged bremsstrahlung photons. The GDR peak energy is in accord with the theoretical calculation <cit.> shown by the solid lines in Fig <ref>. In addition, the absolute values of our data support the theoretical calculations within the uncertainty except for some points of the ^4He(γ, n)^3He reaction whereby their statistical uncertainties are higher than those of others. As described in Sec. <ref>, the cross sections of the ^4He photodisintegration reactions in the GDR energy region are important to elucidate the certain processes in the nucleosynthesis in the universe. 
Because cross sections theoretically calculated from nuclear structure theory are generally employed in astrophysical calculations, one would have to revise the theoretical frameworks and the nucleosynthesis scenario if the theoretical predictions deviated largely from the experimental cross sections. The striking experimental results reported by Shima et al. challenged the conventional view of the ^4He photodisintegration reactions, in which the GDR peak is located around E_γ=26 MeV. However, the present results support the conventional view and do not force one to revise the physical picture of the nucleosynthesis drawn by the nuclear structure theories. § SUMMARY We performed a simultaneous measurement of the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions at the BL01 beamline of the NewSUBARU synchrotron radiation facility by using the TPC-based gaseous active target MAIKo. Quasi-mono-energetic photon beams at E_γ =23–30 MeV, close to the GDR energy in ^4He, irradiated the MAIKo active target. Three-dimensional trajectories of the charged particles emitted from the photonuclear reactions were measured, and the ^4He photodisintegration events were identified. The detection efficiencies of the photodisintegration events in the MAIKo active target were estimated with a Monte Carlo simulation in which the response of the detector was taken into account. The total photon number and the energy distribution of the photon beams were evaluated from the energy spectra measured with the NaI(Tl) scintillator. Finally, we determined the cross sections of the ^4He(γ, n)^3He and ^4He(γ, p)^3H reactions. The cross sections obtained from the present measurement show the GDR peaks around E_γ = 26 MeV. This result is in accord with the experimental results from the HIγS facility <cit.> and the MAX-lab facility <cit.>, but inconsistent with the result from Shima et al. <cit.>. The present results support the theoretical predictions for the ^4He photodisintegration reactions <cit.> and do not force one to revise the theoretical frameworks. This study demonstrated the applicability of the MAIKo active target, which was originally designed for the in-beam spectroscopy of unstable nuclei <cit.>, to the study of photonuclear reactions. The present work expanded our capability of providing reliable data on important reactions in nuclear astrophysics with the MAIKo active target. We express our gratitude to the NewSUBARU synchrotron radiation facility and the Research Center for Nuclear Physics, Osaka University, for providing their facilities and support for the present work. We would like to thank Prof. T. Kajino, Prof. T. Suzuki, and Prof. W. Horiuchi for fruitful discussions regarding the interpretation of the experimental results. M. M. appreciates the support of Grant-in-Aid for JSPS Research Fellow JP16J05592. This work was also supported by JSPS KAKENHI Grant Numbers JP23340068 and JP15H02091.
http://arxiv.org/abs/2307.00967v2
20230703123728
DREAM: III.A helium survey in exoplanets on the edge of the hot Neptune desert with GIANO-B@TNG
[ "G. Guilluy", "V. Bourrier", "Y. Jaziri", "W. Dethier", "D. Mounzer", "P. Giacobbe", "O. Attia", "R. Allart", "A. S. Bonomo", "L. A. Dos Santos", "M. Rainer", "A. Sozzetti" ]
astro-ph.EP
[ "astro-ph.EP" ]
III. A helium survey in exoplanets on the edge of the hot Neptune desert with GIANO-B at TNG DREAM: III Guilluy et al. INAF – Osservatorio Astrofisico di Torino, Via Osservatorio 20, 10025, Pino Torinese, Italy Observatoire Astronomique de l'Université de Genève, Chemin Pegasi 51b, 1290, Versoix, Switzerland Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France Département de Physique, Institut Trottier de Recherche sur les Exoplanètes, Université de Montréal, Montréal, Québec, H3T 1J4, Canada Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA INAF – Osservatorio Astronomico di Brera, Via E. Bianchi, 46, 23807 Merate (LC), Italy The population of close-in exoplanets features a desert of hot Neptunes whose origin remains uncertain. These planets may have lost their atmosphere, eroding into mini-Neptunes and rocky super-Earths below the desert. Direct observations of evaporating atmospheres are essential to derive mass-loss estimates and constrain this scenario. The metastable HeI triplet at 1083.3 nm represents a powerful diagnostic of atmospheric evaporation because it traces the hot gas in extended exoplanet atmospheres while being observed from the ground. In addition, it is located at the bright near-infrared stellar continuum and is very weakly affected by interstellar medium (ISM) absorption. We carried out a homogeneous HeI transmission spectroscopy survey, targeting a selected sample of nine planets along the different edges of the desert, to interpret the absorption line profile with evaporation models and to better understand the role of photoevaporation in the desert formation. We observed one transit per planet using the high-resolution, near-infrared spectrograph GIANO-B mounted on the Telescopio Nazionale Galileo telescope. We focused our analysis on the HeI triplet, based on a comparison of the in-transit and out-of-transit observations, and we computed high-resolution transmission spectra. We then employed the 1D p-winds model to calculate the planetary thermospheric structures and to interpret the observed transmission spectra. We found no signatures of planetary absorption in the HeI triplet in any of the investigated targets. We thus provided 3 σ upper-limit estimations on the thermosphere absorption, temperature and mass loss, and combined them with past measurements to search for correlations with parameters such as the stellar mass and XUV flux, which are thought to be key drivers in the formation of the HeI triplet. These results strengthen the importance of performing homogeneous surveys and analyses in bringing clarity to HeI detections and (thereby) to plausible Neptunian desert origins. Our findings corroborate literature expectations that state the HeI absorption signal is correlated with the stellar mass and the received XUV flux. However, when translated in terms of mass-loss rates, these trends seem to disappear. Thus, further studies are essential to shed light on this aspect and to better understand the photoevaporation process. DREAM G. Guilluy1, V. Bourrier2, Y. Jaziri2 , W. Dethier3, D. Mounzer2, P. Giacobbe1, O. Attia2, R. Allart4Trottier Postdoctoral Fellow, A. S. Bonomo1, L. A. Dos Santos5, M. Rainer6, A. 
Sozzetti1 Received date ; Accepted date § INTRODUCTION The population of close-in exoplanets (P ≲ 30 days) features a dearth of Neptune-size planets on very short orbits (P ≲ 4 days). This so-called "Neptunian desert" <cit.> is not an observational bias, as close-in Neptunes are easy to detect via both transits and radial-velocity measurements. Debate around the key driving mechanisms at the origin of the desert, which are linked to the formation, migration, and atmospheric evolution of close-in planets, is still ongoing. Photoevaporation <cit.> and high-eccentricity orbital migration followed by tidal interaction with the star <cit.> are the most likely explanations to date, but their interplay remains to be explored. Among the questions that need to be addressed are the range of mass and period over which these processes are at play, and whether they also shape the Neptunian "savanna", a milder deficit of Neptune-size planets at longer periods and lower irradiation, as highlighted by <cit.>. Investigating this complex puzzle is the goal of the Desert-Rim Exoplanets Atmosphere and Migration (DREAM) program. In DREAM I <cit.>, we measured the orbital architectures of a large sample of exoplanets spanning the borders of the Neptunian desert and savanna. This work revealed a high fraction of misaligned orbits, strengthening the importance of high-eccentricity orbital migration for close-in planets. The architecture measurements from DREAM I were included in a large statistical study of spin-orbit angles in DREAM II <cit.>. This work confirmed the major role of tides in shaping the overall distribution of close-in planets' orbital architectures, except for a substantial fraction of planets on polar orbits that appear resilient to tidal realignment and further support the importance of disruptive dynamical processes. A subsample of the systems in DREAM I was observed in transit as part of a campaign our team led with the GIARPS observing mode (GIANO-B+HARPS-N) at the Telescopio Nazionale Galileo (TNG) telescope, to measure their Rossiter-McLaughlin effect in the optical HARPS-N data and to analyze the planetary atmospheric spectra in the near-infrared GIANO-B data. The objective of this third paper in the DREAM series is to search these GIANO-B data for absorption by helium escaping the upper atmospheres of these planets, bringing constraints on their mass loss and on the role of atmospheric escape in the formation of the desert. Strong stellar X-ray and extreme-ultraviolet (XUV) radiation can lead to an expansion of the upper atmospheric layers and to a substantial escape of gas into space <cit.>. While hot Jupiters are generally stable against this photoevaporation, hot Neptunes have a lower gravitational potential that makes them more vulnerable <cit.>. The upper atmospheric layers of these planets have traditionally been probed via transit spectroscopy in the ultraviolet (UV), by monitoring the change in absorption of the stellar Lyα line during transit. The HI exospheres of hot Jupiters yield absorption signatures in the stellar Lyα line that are ten times deeper than those of the lower atmosphere <cit.>, and this absorption level is even higher for the exospheres of warm Neptunes <cit.> – and possibly even for mini-Neptunes <cit.>.
However, UV observations can only be performed from space, and the stellar Lyα line is contaminated by geocoronal emission and absorbed by the interstellar medium (ISM). While the geocoronal emission can be reliably subtracted during the data reduction, nothing can be done about the ISM absorption, so that only the wings of the line are usable when probing escaping atmospheres. As a result, the gas dynamics in regions closer to the planet itself, at the wind-launching radius, remains inaccessible to Lyα observations <cit.>, leading to low-precision mass-loss rates, since the gas is only observed once it has already escaped into the exosphere. Additionally, Lyα studies have only been performed on a few systems, because the ISM absorption prevents observing stellar Lyα lines beyond ∼50 pc. Accessing the thermosphere, the upper atmospheric layer below the exosphere, represents a way to overcome these limitations. As it is very weakly affected by interstellar absorption and can be observed from the ground, the HeI triplet at λ∼1083.3 nm (vacuum wavelength) has recently been identified as a robust alternative for tracing atmospheric expansion and evaporation <cit.>. The first detection of a helium thermosphere was obtained with the Wide Field Camera 3 (WFC3) of the Hubble Space Telescope (HST). The HeI feature was not spectrally resolved due to the low resolution of the data; however, further observations at high resolution with CARMENES <cit.> allowed the absorption lines to be spectrally resolved and atmospheric properties to be derived, showing that the helium tracer can probe the planetary thermosphere and, occasionally, the exosphere. To date, HeI has been searched for in the upper atmospheres of about 40 planets (see Table <ref>). These observations have led either to HeI absorption detections or to upper-limit estimations. However, the non-homogeneity in both observing methodology and data reduction technique may mask possible trends in the data, making it difficult to find a clear correlation with the parameters (e.g., stellar mass, XUV irradiation) that are believed to drive the detection (or non-detection) of the HeI triplet <cit.>. Despite the numerous helium studies, the parameters considered important in triggering a HeI detection are still under debate. For instance, planets orbiting K-type stars have been proposed as promising targets for showing evaporating or escaping helium atmospheres. This is because K-type stars emit a large amount of XUV radiation, which ionizes HeI atoms from the ground state (these can then recombine into the metastable state), and little mid-ultraviolet radiation, which would otherwise ionize the metastable HeI atoms <cit.>. However, the discovery of a strong helium signature in a gas-giant planet orbiting an F-type star <cit.> highlighted that planets orbiting stars with different spectral energy distributions (SEDs) can also exhibit large helium outflows, regardless of the stellar spectral type. More studies are thus essential to shed light on which mechanisms and parameters are important for the HeI detection. The sample we analyze in DREAM III is part of a pilot survey of nine planets located at the different edges of the Neptunian desert and savanna. The sample is described in Sect. <ref>, and its near-infrared (nIR) observations with the GIANO-B high-resolution spectrograph are presented in Sect. <ref>. We detail the data reduction procedures in Sect.
<ref>, and the interpretation of the helium absorption observations in Sect. <ref>. We then present our findings in Sect. <ref>, followed by our conclusions in Sect. <ref>. § SAMPLE Our survey consists of nine planets along the transitions defining the Neptunian desert and savanna. Their low densities (ρ_pl<3.5 ) and bright host stars (J<10.7) favor observations of their atmosphere in the nIR. The planet and star parameters adopted in this work are presented in Table <ref>. Below, we describe the main features of interest for these planets, which led to their inclusion in our survey. We would like to stress that no helium studies were reported in the literature for these investigated planets. Hot Jupiter HAT-P-3b. Its strong irradiation is expected to induce a large mass loss. The small radius of HAT-P-3b is indicative of a metal-enriched composition <cit.>, which could be the result of atmospheric escape over the last 2.6 Gyr. Hydrogen would be lost preferentially, making helium a particularly interesting tracer for this planet. HAT-P-3b orbits a K-star, thus according to <cit.>, it is likely to show metastable helium absorption. DREAM I reports a polar orbit for this planet and, if confirmed, dynamical simulations will be needed to understand whether the present-day architecture is a result of a disruptive dynamical history (with partial evaporation of its volatile content) or a primordial misalignment between the protoplanetary disk and the star. Hot Jupiter HAT-P-33b. The extreme irradiation and very low density (0.134_-0.042^+0.053 g cm^-3, ) of this highly inflated planet are expected to induce a large mass loss. <cit.> measured an excess depth during transit in the R-band, which contains the Hα transition, suggesting that the planet may be undergoing hydrodynamical escape. The misalignment of the system, due to the inclination of the host star, suggests that HAT-P-33 b underwent a high-eccentricity migration, and thus it possibly migrated close to the star long after its formation, which would change its atmospheric history compared to an early-on migration and erosion. Ultra-hot-Jupiter HAT-P-49b. It is a gas giant exoplanet discovered orbiting a bright (V = 10.3) slightly evolved F-star <cit.>. Its extreme irradiation, due to its proximity to the host star (a=0.0438±0.0005 au, ), is expected to induce a large mass loss. According to the analysis presented in DREAM I, the planet is probably on a polar orbit, supporting a disruptive dynamical origin or evolution for the system, whose architecture was unaffected by tidal interactions with the shallow convective envelope of the host star (DREAM II). Warm super-Neptune HD89345b. This planet is five times more irradiated than the evaporating super-Neptune WASP-107b, yet it survived atmospheric escape for 9.4 Gyr <cit.>. HD 89345b stands at the transition between stable Jupiter-mass planets and hot Neptunes that entirely lost their atmosphere and this is thus an essential piece in the puzzle that is the origin of the desert. HD 89345b is located on a misaligned orbit (DREAM I) right within the savanna (see Fig. <ref>). The present-day misalignment could trace both a primordial formation of the system, arising from the tilt of the early star or protoplanetary disk, or the planet could have migrated more recently, exiting a Kozai resonance with an outer companion <cit.>. 
This second scenario would imply that HD 89345b arrived near the star at the end of its main-sequence lifetime, changing our view of its irradiative history and our interpretation of its inflation <cit.> and hydrodynamical escape. Warm sub-Neptune K2-105b. It remains unclear why sub-Neptunes appear to be more resilient than warm Neptunes to the processes that created the desert <cit.>. K2-105b stands at the transition between these two populations and is predicted to have an atmosphere accounting for up to 10% of its total mass <cit.>. Detecting the presence of this atmosphere and measuring its mass loss could bring constraints on the interior of the planet; if its evolution was controlled by atmospheric escape, it is estimated to have retained its envelope only if its core mass is greater than 6 <cit.>. DREAM I reported a possibly misaligned orbit which, if confirmed, might support a turbulent dynamical history and the planet's late arrival into its close-in orbit. However, the presence of other targets may indicate a primordial inclination of the star or protoplanetary disk, as K2-105b is far away from its host stellar companion to experience tidal interactions. Warm Neptune Kepler-25c. It is close to a resonant periodic configuration with a companion planet, which is known to be the final state of a system that undergoes migration within the protoplanetary disk <cit.>. Kepler-25c should thus be evaporating since its formation 11 Gyr ago <cit.>, yet its low density (0.588^+0.053_-0.061 g cm^-3, ) indicates the presence of a H/He envelope. Warm Neptune Kepler-63b. It is a gas giant exoplanet with a radius between Neptune and Saturn. The orbital period is around 9.4 days, leading to an equilibrium temperature of about 900 K <cit.>. The planet is in a polar orbit around a young Sun-like star (, DREAM I), thus offering the possibility to assess how evaporation shapes a Neptune’s atmosphere in its early life. Its radius and insolation are similar to those of the other Neptunian targets, but it is much younger (200 Myr vs 10 Gyr) and still possibly undergoing vigorous escape. Hot sub-Neptune Kepler-68b. With a density of 3.32^+0.86_-0.98 <cit.>, it is considered a candidate ocean planet <cit.> possibly topped by a moderate H/He envelope <cit.>. Detecting He would offer insights on the mysterious nature of this sub-Neptune, representative of the transition between rocky planets and gas giants <cit.>. Warm sub-Neptune WASP-47d. The WASP-47 planetary system is composed of at least four planets, a hot Jupiter (WASP-47 b; P = 4.159 days, ) with an inner super-Earth (WASP-47 e; P = 0.7896 days, ), a close-orbiting outer Neptune (WASP-47 d; P = 9.031 days, ), and a long-period giant planet (WASP-47 c; P = 588.4 days, ). WASP-47 d is near a 2:1 resonance with the inner Hot Jupiter WASP-47b. It has a similar radius and insolation of K2-105b but is three times less massive. Their comparison could provide valuable insight into evaporation processes on sub-Neptunes. § OBSERVATIONS We observed the systems in our sample with the nIR echelle spectrograph GIANO-B installed on the 3.6 m Telescopio Nazionale Galileo (TNG) telescope. The observations were performed with the GIARPS configuration and were carried out with the nodding acquisition ABAB <cit.>. 
Therefore, while the target was observed in one nodding position along the slit (A or B), the sky spectra were gathered simultaneously in the other one, thus providing an accurate reference for subtracting the thermal background and the telluric emission lines. GIANO-B covers the Y, J, H, and K spectral bands (0.95-2.45 μm) in 50 orders at a resolving power of R∼50,000. For this analysis, we focus on order #39, where the helium triplet falls. We collected one transit observation for each investigated target. The only exception is HAT-P-3b, for which, due to bad weather conditions, we collected two nights of observations, namely UT 14 April 2019 and UT 30 January 2020; the first visit was excluded from our analysis since the observations had to be stopped just before the transit. A log of the observations is reported in Table <ref>. Figure <ref> shows the variation in the signal-to-noise ratio (S/N) of order #39 and the variation in airmass for each exposure. Due to the lack of a sufficient number of collected images, we had to discard Kepler-63b from our analysis. Moreover, the observations of K2-105b and HD89345b were affected by GIANO-B auto-guide problems and by the presence of clouds, so we decided to discard the AB couples of exposures that exhibited a very low S/N. § DATA ANALYSIS Extended or evaporating atmospheres can be detected through an excess absorption by metastable helium in the planet transmission spectrum. In the following sections, we discuss the steps we performed to reduce the raw GIANO-B data, extract individual transmission spectra, and calculate average in-transit spectra in the planet rest frame. §.§ Initial data reduction The raw spectra were dark-subtracted, flat-corrected, and extracted (without applying the blaze-function correction) using the GOFIO pipeline <cit.>. In addition, GOFIO yields a preliminary wavelength calibration (defined in vacuum) using U-Ne lamp spectra as a template. We used the ms1d spectra, with the echelle orders separated and the Barycentric Earth Radial Velocity (BERV) correction applied; the spectra are thus defined in the terrestrial rest frame. Since the U-Ne lamp spectra are acquired only at the end of the night (to avoid that the persistence of their saturated emission lines on the detector pollutes the science observations), and given the mechanical instability of GIANO-B, the wavelength solution determined by GOFIO is not sufficiently accurate. We corrected for this by aligning all the GIANO-B spectra to the telluric reference frame via spline interpolation, based on the shifts retrieved by cross-correlating each spectrum with a time-averaged spectrum used as a template <cit.>. We thus aligned the spectra to the reference frame of the Earth's atmosphere, which is also assumed to be the frame of the observer (neglecting any ∼10 m s^-1 differences due to winds). We then used the atmospheric transmission spectrum generated via the ESO Sky Model Calculator[<https://www.eso.org/observing/etc/bin/gen/form?INS.MODE=swspectr+INS.NAME=SKYCALC>] to refine the initial GOFIO wavelength calibration.[When the telluric lines are not strong enough, the re-alignment into the telluric rest frame may not work properly, as in the case of Kepler-68b. Thus, we preferred to discard this step in the analysis of this specific target.] §.§ Transmission spectroscopy We performed the transmission spectroscopy by applying the steps described below to each transit and target independently, considering the system parameters listed in Table <ref>.
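Before detailing these steps, we note that the wavelength re-alignment of the initial data reduction (cross-correlation against a time-averaged template followed by spline interpolation) can be sketched as follows. This is a simplified, illustrative implementation and not the pipeline actually used: the trial-shift grid, the use of a cubic spline, and the variable names are our own assumptions, and the input spectra are assumed to be free of NaNs.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def align_to_template(wave, spectra, max_shift=0.05, n_trials=201):
    """Align each exposure to the time-averaged template via cross-correlation.

    wave    : (n_pix,) wavelength grid of the order [nm]
    spectra : (n_exp, n_pix) extracted spectra of one order
    """
    template = np.median(spectra, axis=0)                 # time-averaged reference
    trial_shifts = np.linspace(-max_shift, max_shift, n_trials)
    aligned = np.empty_like(spectra)
    for i, spec in enumerate(spectra):
        # cross-correlation over a grid of trial wavelength shifts
        cc = [np.sum(np.interp(wave, wave + dw, spec) * template)
              for dw in trial_shifts]
        best = trial_shifts[int(np.argmax(cc))]
        # resample the shifted spectrum onto the common grid with a cubic spline
        aligned[i] = CubicSpline(wave + best, spec)(wave)
    return aligned
```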
§.§.§ Telluric correction First, we performed a detailed correction for the telluric contamination. We used the ESO Molecfit software <cit.> to correct for the telluric transmission lines <cit.>. As this is the first time that Molecfit has been applied to GIANO-B data, we report the adopted parameters in Table <ref>. Molecfit is based on the combination of two different sources: an atmospheric standard profile (MIPAS) and a Global Data Assimilation System (GDAS) profile. Molecfit merges these two profiles and gives the result as input to a line-by-line radiative transfer model (LBLRTM). We considered the precipitable water vapor in our transmission model and selected a fixed grid to merge the two atmospheric profiles; namely, the variations in temperature, pressure, humidity, and H_2O abundance from 0 to 120 km are described with a fixed number of layers (50). LBLRTM then returns the telluric spectrum. We considered one observation at a time, and we initially performed the model fitting on selected spectral intervals inside order #39 showing a well-determined continuum level, a good number of telluric lines, and few or no stellar lines. Based on the best-fit parameters derived by Molecfit, we then generated a telluric spectrum for the entire spectral order and corrected the science spectrum. An example of the telluric removal is shown in Fig. <ref>. In the spectral region of interest, there are three OH emission lines that fall near the HeI triplet (at ∼1083.21 nm, ∼1083.24 nm, and ∼1083.43 nm, vacuum wavelengths). As the observations were gathered with the nodding acquisition mode, which allows for the subtraction of the thermal background and of the emission lines (see Sect. <ref>), there is no need to correct for telluric emission lines. However, due to seeing variations during the observing nights, the A-B subtraction can leave some residuals at the wavelengths of the OH lines. We thus masked the corresponding wavelengths. §.§.§ Alignment into the stellar rest frame We then shifted the spectra into the stellar rest frame by accounting for the stellar radial velocity in the telluric reference system, given by: V_⋆/⊕=∑_i K_⋆ i[cos (ν_i + ω_i) + e_i cos(ω_i)]+V_sys + V_bar. Here, the sum describes the stellar reflex motion induced by each planet i in the system, where ν_i is the true anomaly (obtained from the eccentric anomaly via Kepler's equation, so that we directly account for the eccentricity, which is significant for some of our targets, e.g., HAT-P-33b and HD89345b), ω_i is the argument of periastron, e_i is the eccentricity, and K_⋆ i is the stellar radial-velocity semi-amplitude; V_sys is the systemic velocity of the star-planets system with respect to the barycentre of the Solar System; and V_bar is the barycentric Earth radial velocity, which accounts for the velocity of the observer induced by the rotation of the Earth and by the motion of the Earth around the Sun. §.§.§ Transmission spectra calculation For every considered target, we divided each spectrum[We did not consider some spectra that exhibited a much lower S/N than the other exposures, or outliers near the position of the HeI triplet; see Fig. <ref> for details.] by its median value, thus obtaining the normalized spectra, F̃_i.
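As an aside on the alignment step above, computing V_⋆/⊕ requires the true anomaly of each planet, which follows from Kepler's equation. The snippet below is a minimal, self-contained sketch of this calculation (our own illustrative implementation, not code from the GOFIO pipeline or from this work; the dictionary keys and the epoch convention are placeholder assumptions).

```python
import numpy as np

def true_anomaly(t, t_peri, period, ecc, tol=1e-10, max_iter=100):
    """Solve Kepler's equation M = E - e sin(E) with Newton's method and return nu(t)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    M = 2.0 * np.pi * (t - t_peri) / period          # mean anomaly
    E = M.copy()                                     # starting guess
    for _ in range(max_iter):
        dE = (E - ecc * np.sin(E) - M) / (1.0 - ecc * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    # true anomaly from the eccentric anomaly
    return 2.0 * np.arctan2(np.sqrt(1.0 + ecc) * np.sin(E / 2.0),
                            np.sqrt(1.0 - ecc) * np.cos(E / 2.0))

def stellar_rv(t, planets, v_sys, v_bar):
    """Stellar RV in the telluric frame: reflex motion of all planets + V_sys + V_bar.

    planets : list of dicts with keys 'K' (semi-amplitude), 'period', 't_peri',
              'ecc', and 'omega' (radians); units must be mutually consistent.
    """
    rv = np.zeros_like(np.atleast_1d(np.asarray(t, dtype=float)))
    for p in planets:
        nu = true_anomaly(t, p["t_peri"], p["period"], p["ecc"])
        rv += p["K"] * (np.cos(nu + p["omega"]) + p["ecc"] * np.cos(p["omega"]))
    return rv + v_sys + v_bar
```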
We then built a master stellar spectrum, S_out(λ), by averaging the out-of-transit spectra (i.e., those with an orbital phase smaller than t_1 or greater than t_4), and derived individual transmission spectra, T_λ,i, by dividing each spectrum by S_out(λ), that is, T_λ,i = F̃_i(λ)/S_out(λ) - 1. Finally, we shifted the transmission spectra into the planet rest frame via linear interpolation, using the planet radial velocity V_pl_j/⋆ = -∑_i K_⋆ i[cos (ν_i + ω_i)+ e_i cos(ω_i)] - K_pl_j(cos(ν_j+ω_j)+e_j cos(ω_j)), where K_pl_j is the computed planet radial-velocity semi-amplitude (see Table <ref>) of the considered target j. The 2D maps of the transmission spectra in the planet rest frame are shown in the left panels of Fig. <ref>. The usual method to search for faint planetary atmospheric signatures is to average the in-transit transmission spectra in the planet rest frame and thus boost the S/N. However, the naive calculation of the transmission spectra performed above neglects the change in the broadband flux level of the in-transit spectra, due to the occultation of regions of varying flux intensity by the opaque planetary disk. For example, limb darkening, if unaccounted for, biases the retrieved atmospheric absorption signal toward smaller values at phases close to the stellar limb, as compared to the stellar disk center. We thus followed the approach presented in <cit.>, re-adapted from <cit.>, to compute the in-transit transmission spectra: (R_pl(λ,t)/R_⋆)^2 = [LD_mean/LD(t)] [F_out(λ)-(1-δ(t))F_i(λ,t)]/F_local(λ,t), where F_i(λ,t) is each observed spectrum at phase t, 1-δ(t) is the broadband ("white-light") transit light curve, δ(t) is the transit depth, and LD(t) and LD_mean represent the stellar limb darkening at the position of the transiting planet and the disk-averaged limb darkening, respectively[The limb-darkening correction is applied to the spectra aligned in the planet rest frame.]. We used the Python batman code <cit.> and the system parameters from Table <ref> to calculate the white-light transit light curve and the limb-darkening coefficients (see Fig. <ref>). F_local(λ,t) is the normalized local stellar spectrum (see Sect. <ref>) occulted by the planet at phase t. We applied Eq. <ref> only to the fully in-transit orbital phases (i.e., those between the t_2 and t_3 contact points), while the ingress and egress were not considered here; indeed, the limb darkening and the occulted stellar surface are not well known at the limb. If we neglect the RM effect and the center-to-limb variations (CLV), the occulted local stellar spectrum F_local(λ) is equal to the disk-integrated stellar spectrum F_out(λ) (see Sect. <ref>), so that Eq. <ref> becomes: (R_pl(λ,t)/R_⋆)^2 = [LD_mean/LD(t)] [1-(1-δ(t))F_i(λ,t)/F_out(λ)]. All the fully in-transit transmission spectra were finally averaged (T_mean) to create one transmission spectrum for each observed transit (right panels of Fig. <ref>). §.§.§ Fringing correction Our GIANO-B spectra presented a sinusoidal fringing pattern caused by the sapphire substrate (∼0.38 mm thick) placed above the sensitive part of the detector, which behaves as a Fabry-Pérot etalon and generates interference fringes. Such fringing patterns must be corrected for when studying the HeI triplet. We followed and re-adapted the second and third approaches (Method#1b and Method#2) presented in <cit.>. We chose to correct this effect at the level of the final transmission spectra, in order to have better control over the fringing in the final transmission spectrum itself and to avoid the risk of "overfitting" the data.
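As a brief aside before the fringing correction itself is described, the quantities 1-δ(t), LD(t), and LD_mean entering the equations above can be obtained with the batman package along the following lines. This is a hedged sketch, not the exact procedure of this work: the numerical system parameters are placeholders, and the evaluation of LD(t) from the sky-projected planet position and of the disk-averaged limb darkening assumes a simple quadratic law on a circular orbit.

```python
import numpy as np
import batman

# Placeholder system parameters (quadratic limb darkening in the J band)
params = batman.TransitParams()
params.t0, params.per = 0.0, 3.4745          # mid-transit time and period [days]
params.rp, params.a = 0.101, 5.69            # Rp/R*, a/R*
params.inc, params.ecc, params.w = 88.2, 0.0, 90.0
params.limb_dark, params.u = "quadratic", [0.097, 0.301]

t = np.linspace(-0.1, 0.1, 500)              # time from mid-transit [days]
white_light = batman.TransitModel(params, t).light_curve(params)   # 1 - delta(t)

# Limb darkening at the planet position: mu from the sky-projected separation
b = params.a * np.cos(np.radians(params.inc))                    # impact parameter
x = params.a * np.sin(2.0 * np.pi * (t - params.t0) / params.per)
r = np.sqrt(x**2 + b**2)                                         # separation in R*
mu = np.sqrt(np.clip(1.0 - r**2, 0.0, 1.0))
u1, u2 = params.u
ld_t = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2              # LD(t)
ld_mean = 1.0 - u1 / 3.0 - u2 / 6.0                              # disk-averaged LD
```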
First, for each planet, we binned T_mean (bin size of 0.2 nm), we thus computed the Lomb-Scargle periodogram to find the characteristic frequency of the periodic fringing signal present in the data f_best. We then selected the most prominent frequency of the periodogram, and we fitted the fringing pattern using a sine function yfit=C+Asin(2 πλ f + ϕ), where A is the amplitude, ϕ the phase, f is the fringing frequency (where we assumed f_best as starting point for the fit), and C is the overall offset. We finally corrected our final transmission spectra by yfit. § INTERPRETATION OF THE TRANSMISSION SPECTRA Somewhat surprisingly no HeI absorption signature was detected in our sample, as can be seen in Fig. <ref>. This either means that the targeted planets have no extended atmosphere, which would be surprising given the strong irradiation of their H/He atmosphere; or that their thermosphere's metastable helium population is not dense enough to be detectable within the precision of the GIANO-B observations. Under this assumption, we can still put upper limits on the escape rate by fitting the transmission spectra with models of the planets' thermospheric structure. §.§ Stellar modelling Given the scarcity of stellar high-energy measurements, we calculated the X-EUV spectral energy distribution of the eight target stars in a consistent manner using Table 5 in . This formula depends on the total X-EUV flux emitted by the star, which is calculated based on the stellar age and Equations 3 and 4 from . §.§ Thermosphere modelling We used an approximate 1D model, the p-winds <cit.> code, largely based on the formulations of <cit.> and <cit.>, to calculate the thermospheric structure and resulting signature of the metastable helium triplet. The atmospheric density and velocity profiles were calculated according to the Parker wind approximation, assuming an isothermal planetary outflow <cit.>. We assumed for all targets an atmospheric composition of 90 % H and 10 % He (a good approximation of the Jupiter H/He ratio) and an input stellar X-EUV spectrum (calculated as explained in Section <ref>). The code calculates only the density profiles of hydrogen in its neutral and ionized states, as well as that of helium in its neutral, excited, and singly ionized states. The signature of interest is the metastable transition at 1083.3 nm of the helium excited level. A theoretical ideal spectrum is calculated at mid-transit without taking into account geometrical effects and inhomogeneities of the stellar surface. This absorption signature is compared to the observed mean transmission spectrum to estimate upper atmosphere characteristics such as temperature and the mass-loss rates. Since there is no clear evidence of a helium signature, we quickly explored the input parameter space of the p-winds models by varying only the isothermal temperature profile, T, and the total atmospheric escape rate, Ṁ, while all other input parameters are fixed. For instance, H/He ratio was not a fitting parameter. We expect that this can slightly change the derived values, but the conclusion would still remain the same. In our models, the radius at the top of the simulated atmosphere was set to the Roche lobe <cit.>. We note that the value chosen for this upper radius has not been discussed in previous studies using the p-winds code or similar codes <cit.>, even though it directly controls the amount of helium that contributes to the theoretical absorption signature. 
The preferred approach in the literature seems to be increasing the radius until the neutral triplet helium density no longer contributes significantly to the absorption signal. Yet this is hardly compatible with the change in nature of the atmosphere beyond the Roche lobe, from a collisional thermosphere shaped by planetary gravity, which may (at first order) still be described by a 1D vertical structure, to an asymmetrical exosphere shaped by the stellar gravity, radiation, and wind. Our choice to set the upper model radius at the Roche lobe is based on the reasonable assumption that once helium atoms escape into the exosphere, they cannot be excited into their metastable state by collisions anymore and are quickly photo-ionized so that these layers contribute little to the observed signature (as supported by the lack of clear detection of extended exospheric tails in the literature). In our simulations, high escape rates lead to an increase in the total density of metastable helium in the thermosphere but the densest layers are shifted to higher altitudes above the Roche lobe, where they no longer contribute to the theoretical signature. This boundary effect is visible in Fig. <ref>, with the reappearance of fit regions compatible with the non-detection of an absorption signature in our data. Simulations at high escape rates are therefore model-biased and should be considered cautiously. Furthermore, we note that the code p-winds was unable to calculate the atmospheric structure in certain regions of the parameter space (shown in white in Fig. <ref>). It is still unclear whether this is a numerical issue or a truly non-physical regime for the thermosphere. §.§ Parameter space exploration We determined whether the models were compatible with the measured transmission spectra using χ^2 comparison. Since no absorption signature was detected for any of the planets we took the null hypothesis (a flat transmission spectrum) as the best-fit model and use Δχ^2 = χ^2_model - χ^2_flat as a criterion to determine 3-σ upper limits on the atmospheric mass-loss rate. We constrained the parameter space to realistic models in mass loss, using the maximum efficiency for a photoionization-driven isothermal Parker wind <cit.>, and in temperature, using the model of <cit.> as a function of the gravitational potential of the planet. Below log(-Φ_G) = log GM_pl/R_pl = 13.0 erg·g^-1, their model predicts temperatures lower than 10 000 K, while above this limit, it predicts temperatures lower than 20 000 K. χ^2 maps as a function of mass loss and temperature are shown in Fig. <ref> for all planets in our sample. Table <ref> gathers all the derived 3-σ upper limits. § ACCURATE STELLAR LINE PROFILES Planet-occulted line distortions (POLD, ) can bias or even hide planetary absorption signatures in transmission spectra <cit.>. They appear in particular when one uses the disk-integrated stellar spectrum (F_out) to normalize the spectrum that is absorbed by the planet and its atmosphere. Indeed, the line profiles of F_out are shaped by a combination of the local effects of stellar rotation and CLV from all over the stellar disk; thus, they are not necessarily representative of the line profiles occulted by the planet. To mitigate the POLDs one needs to define more accurate estimates of the local stellar spectrum occulted by the planetary disk at each exposure. However, this quantity is complex to estimate from observations, as stars cannot be resolved spatially. 
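Stepping back briefly to the parameter-space exploration described above, the Δχ² criterion used to set the 3σ upper limits can be sketched as follows. The model grid is assumed to have been precomputed (e.g., with p-winds) over (Ṁ, T); the function names, array shapes, and the single-parameter threshold Δχ² = 9 are illustrative assumptions rather than the exact setup of this work.

```python
import numpy as np

def delta_chi2_map(ts, ts_err, model_grid):
    """Delta chi^2 of each (Mdot, T) model relative to the flat (null) spectrum.

    ts, ts_err : (n_pix,) observed mean transmission spectrum and uncertainties
    model_grid : (n_mdot, n_T, n_pix) precomputed He I absorption models
    """
    chi2_flat = np.sum((ts / ts_err) ** 2)                      # null hypothesis
    chi2_model = np.sum(((ts - model_grid) / ts_err) ** 2, axis=-1)
    return chi2_model - chi2_flat

def mdot_3sigma_upper_limit(dchi2, mdot_grid, threshold=9.0):
    """Largest mass-loss rate whose best model over T stays below the threshold."""
    allowed = np.min(dchi2, axis=1) < threshold                 # profile over T
    return mdot_grid[allowed].max() if allowed.any() else np.nan
```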
To estimate the planet-occulted stellar spectrum, we fit a model to the measured disk-integrated spectrum, using a combination of analytical and simulated theoretical local spectra. The stellar disk is discretized by a 2D uniform square grid, each cell being associated with a specific local intensity spectrum. The simulated component of these intensity spectra is defined using the Turbospectrum code for spectral synthesis[<https://github.com/bertrandplez/Turbospectrum2019>] <cit.>. This code uses MARCS photospheric models <cit.>[<https://marcs.astro.uu.se>] and spectral line lists from the VALD3 database[<http://vald.astro.uu.se>] <cit.> to generate synthetic spectra under the assumption of local thermodynamic equilibrium[We used the module available for download at <https://marcs.astro.uu.se/software.php> to derive a MARCS model for the exact values of temperature, metallicity, and log g of our target stars.]. For each star, we used Turbospectrum to generate high-resolution intensity spectra at a series of positions along the stellar radius, so as to sample the broadband limb darkening and the CLV. These synthetic spectra, however, do not contain the He I triplet lines at 10830 Å, as their formation in stellar atmospheres necessitates non-local thermodynamic equilibrium conditions that are usually met in the chromospheric layers, whereas MARCS models focus on the photospheric layers. We thus calculate the He I triplet absorption lines analytically, assuming Gaussian cross-sections and a common temperature and density for the metastable helium gas. Figure <ref> shows a series of intensity spectra across the stellar disk of HAT-P-33. The series of synthetic+analytical intensity spectra is then interpolated over the whole stellar grid and Doppler-shifted according to the local radial velocity set by the projected stellar rotational velocity <cit.>. Subsequently, the intensity spectra are scaled into local flux spectra using the surface of the stellar grid cells, and summed over the whole grid to derive a simulated disk-integrated spectrum of the target star. Finally, the disk-integrated spectra are convolved with the GIANO-B instrumental response and resampled to match its spectral resolution. The observed and simulated disk-integrated spectra are compared using an MCMC fit with the temperature and density of the metastable helium atoms as free parameters. Figure <ref> shows the results of our fits for HAT-P-33 and HAT-P-49, which have the highest vsini_⋆ values in our sample, highlighting the local spectrum at the disk center that is later used in Eq. <ref>. We only applied this approach to the four targets with the highest vsini_⋆, as POLDs are expected to be negligible for the other targets. For these slow rotators, the rotational broadening of the disk-integrated line profiles is small, and they remain good proxies for the local planet-occulted lines, especially for the shallow He I triplet lines. We note that the CLV is not accounted for in our analytical estimates of the He I lines. We underline that, even for the four fast-rotating targets for which we used a more accurate proxy for the planet-occulted stellar lines, the amplitude of the POLD was found to be comparable to the dispersion of the data. This is partly due to the shallowness of the He I triplet lines, and to the fact that POLDs are partially smoothed out when averaging the transmission spectra in the planet rest frame over the transit window. Indeed, POLDs shift with the stellar surface RVs, while planetary signatures shift with the planet orbital RVs (see Fig. <ref>).
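To make the grid-summation step above concrete, the sketch below builds a disk-integrated spectrum from Doppler-shifted local spectra weighted by a quadratic limb-darkening law. It is only a schematic stand-in for our actual model: the Gaussian local line profile replaces the Turbospectrum+analytical He I intensity spectra, the limb-darkening weighting replaces the proper intensity-to-flux scaling, and all names and numbers are illustrative.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def gaussian_local_profile(wave, mu, depth=0.05, center=1083.3, sigma=0.02):
    """Toy local intensity spectrum: continuum with a Gaussian He I-like line."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def disk_integrated_spectrum(wave, local_profile, vsini, u1, u2, n_cells=101):
    """Sum limb-darkened, Doppler-shifted local spectra over a square stellar grid."""
    xy = np.linspace(-1.0, 1.0, n_cells)
    x, y = np.meshgrid(xy, xy)                     # sky-projected coordinates [R*]
    on_disk = x**2 + y**2 <= 1.0
    total, weight = np.zeros_like(wave), 0.0
    for xi, yi in zip(x[on_disk], y[on_disk]):
        mu = np.sqrt(1.0 - xi**2 - yi**2)          # cosine of the limb angle
        ld = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2
        v_loc = vsini * xi                         # solid-body rotation: RV tracks x
        # evaluate the local profile at Doppler-corrected wavelengths
        total += ld * local_profile(wave / (1.0 + v_loc / C_KMS), mu)
        weight += ld
    return total / weight

# Example: disk-integrated He I region of a fast rotator (vsini ~ 15 km/s)
wave = np.linspace(1082.8, 1083.8, 2000)
spec = disk_integrated_spectrum(wave, gaussian_local_profile, 15.0, 0.1, 0.3)
```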
For these reasons, our final interpretation is based on the transmission spectra calculated with the disk-integrated spectrum. § RESULTS The presence of an extended, and possibly escaping, helium atmosphere would appear as an absorption feature in the transmission spectrum in the planet rest frame at the position of the stellar helium triplet. Unfortunately, as shown in Fig. <ref>, we did not detect significant helium absorption features for any of our targets. We thus evaluated 3σ upper limits from the data themselves, as in <cit.>. Following an approach similar to that of <cit.>, we computed Allan plots (see Fig. <ref>) to estimate the noise present in the data. We assumed the white noise σ_1 to be the standard deviation (hereafter root mean square, rms) of the transmission spectrum, excluding the helium triplet region (1083.0-1083.6 nm). Then, we binned the transmission spectrum into bins of N elements each and calculated the rms of the binned transmission spectrum. We repeated the process for a wide range of bin sizes N (from 1 to 42). In the absence of correlated noise, the rms is expected to decrease as σ_1/√(N). We then fitted the rms in log-log space to derive the trend of the noise; the fitted rms at a bin width of 0.075 nm is taken as the 1σ uncertainty. We set three times this value as the 3σ upper limit on the signature contrast, c. An alternative approach, which provides a more rigorous estimation of the noise present in the data, relies on Gaussian processes (see Appendix <ref>). However, for the rest of the analysis, to maintain consistency with the results published in <cit.>, we used the upper limits estimated from the Allan plots. We then derived an upper limit on the equivalent opaque radius δ_R_P, namely, the height of an opaque atmospheric layer that would produce the observed absorption signal, as δ_R_P = √(R_P^2 + c R_⋆^2) - R_P, where R_P and R_⋆ are the planetary and stellar radii, respectively. We finally computed the quantity δ_R_P/H_eq <cit.>, which expresses the number of scale heights (H_eq) probed by the atmosphere in the considered spectral range, with H_eq = k_B T_eq/(μ g), where k_B is the Boltzmann constant, T_eq the planetary equilibrium temperature (listed in Table <ref>), g the planetary gravity computed from the planetary mass and radius (reported in Table <ref>), and μ the mean molecular weight (for which we assumed a hydrogen-dominated atmosphere and hence a value of 1.3 times the mass of a hydrogen atom). Table <ref> reports the derived δ_R_P/H_eq values for each investigated planet. We explored how the derived constraints vary as a function of the stellar mass and of the XUV flux between 5 and 504 Å, which are the energies mainly responsible for the population of the metastable HeI level <cit.>[For consistency, we also report in Fig. <ref> in the Appendix the same plots as a function of the mid-UV insolation level, which is responsible for ionizing helium out of the metastable state.]. We focused on these two parameters because <cit.> showed that they do yield visible trends with δ_R_P/H_eq. The trends related to the excess absorption and to the atmospheric extension are shown in the top and middle panels of Fig. <ref>. All our targets lie outside the region of parameter space, with stellar masses between ∼0.6 and ∼0.85 M_⊙, that <cit.> identified as favoring HeI detection. This range of stellar masses corresponds to K-type stars, in agreement with the predictions of <cit.>; thus, our non-detections are not entirely unexpected.
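As a practical aside, the Allan-plot noise estimate and its conversion into δ_R_P/H_eq can be sketched as follows. The bin range and the 0.075 nm reference width follow the description above, but the function names, unit choices, and the simple non-overlapping binning are our own illustrative assumptions.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
M_H = 1.6726e-27     # hydrogen atom mass [kg]

def contrast_upper_limit(ts_masked, dwave_pix, ref_width=0.075, max_bin=42):
    """3-sigma upper limit on the contrast c from an Allan-plot style fit."""
    sizes = np.arange(1, max_bin + 1)
    rms = np.array([np.std([np.mean(ts_masked[i:i + n])
                            for i in range(0, len(ts_masked) - n + 1, n)])
                    for n in sizes])
    slope, intercept = np.polyfit(np.log10(sizes), np.log10(rms), 1)
    n_ref = ref_width / dwave_pix                    # pixels spanning 0.075 nm
    sigma = 10.0 ** (intercept + slope * np.log10(n_ref))
    return 3.0 * sigma

def n_scale_heights(c, r_p, r_star, m_p, t_eq, mu=1.3):
    """delta_Rp / H_eq for a contrast limit c (SI units for radii and mass)."""
    delta_rp = np.sqrt(r_p**2 + c * r_star**2) - r_p
    h_eq = K_B * t_eq / (mu * M_H * G * m_p / r_p**2)
    return delta_rp / h_eq
```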
Half of our non-detections, however, do not agree with the XUV flux range found to favor the presence of HeI in <cit.>, 1400-17800 erg s^-1 cm^-2. The XUV flux values nevertheless depend on the model and are associated with stellar ages, which are typically not well constrained, so these values must be treated with caution. Finally, as already highlighted in <cit.>, and perhaps surprisingly, there are no clear correlations between Ṁ and the stellar mass or the XUV flux (bottom panels of Fig. <ref>). We chose to focus on the targets analyzed in this paper and in <cit.>, rather than on other results from the literature, because we derived the absorption, atmospheric extension, and mass-loss values homogeneously, with the same methodology. § SUMMARY AND CONCLUSIONS In this paper, we describe our HeI survey of nine planets at the edge of the Neptunian desert, with the goal of understanding the role of photoevaporation in sculpting this feature. We analyzed observations gathered with the high-resolution GIANO-B spectrograph mounted on the TNG, and we used the transmission spectroscopy technique to search for a possible extended or evaporating helium atmosphere around the investigated planets. We found no sign of planetary absorption at the position of the stellar HeI triplet in any of the investigated targets, and we thus provided 3σ upper limits on the HeI absorption. We underline that the GIANO-B transmission spectra are affected by various systematics that are not fully understood and are difficult to properly remove. These systematics may be caused by low data quality (e.g., low S/N) or by instrumental effects (e.g., auto-guide problems). We interpreted the derived transmission spectra with the p-winds code <cit.>, and we placed our findings in the wider context of the measurements presented in Allart et al. 2023 (submitted). We searched for correlations with the stellar mass and the XUV flux <cit.>, which are thought to be key drivers in the formation of the HeI triplet. The constraints from our sample support the trend of δ_R_P/H_eq with the stellar mass proposed by <cit.>, which remains a good indicator for the presence of metastable helium in exoplanet atmospheres. In addition, they are not incompatible with the trend with the XUV flux highlighted in <cit.>, although they are not constraining enough to refine it. We stress the importance of carrying out helium surveys with the same instrument and of analyzing them with the same data reduction technique, as heterogeneity can obscure any trends in the data <cit.>. Several instruments are now available to perform this kind of homogeneous survey, such as NIRSPEC, SPIROU, CARMENES, and GIANO-B. We thank the referee for the comments and suggestions. We thank V. Andretta for his help. G.G. acknowledges financial contributions from PRIN INAF 2019 and from the agreement ASI-INAF number 2018-16-HH.0 (THE StellaR PAth project). R. A. is a Trottier Postdoctoral Fellow and acknowledges support from the Trottier Family Foundation. This work was supported in part by a grant from the Fonds de Recherche du Québec - Nature et Technologies (FRQNT). This work was funded by the Institut Trottier de Recherche sur les Exoplanètes (iREx). This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40182901 and 51NF40205606.
This project has received funding from the European Research Council (ERC) under the Eu- ropean Union’s Horizon 2020 research and innovation programme (project Spice Dune, grant agreement No 947634). This material reflects only the authors views and the Commission is not liable for any use that may be made of the information contained therein. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. This work has made use of the Turbospectrum code for spectral synthesis § ADDITIONAL FIGURES AND TABLES lcc Adopted parameters Parameter Value Reference continued Parameter Value Reference HAT-P-3 ∙ Stellar parameters Spectral type K1 <cit.> Stellar mass, M_⋆ (M_) 0.925 ± 0.046 <cit.> Stellar radius, R_⋆ (M_) 0.850 ± 0.021 <cit.> Stellar age, τ (Gyr) 2.9^+2.7_-4.9 <cit.> Effective temperature, Teff (K) 5190 ± 80 <cit.> Metallicity (dex) 0.24 ± 0.08 (Fe/H) <cit.> log g (log_10()) 4.545 ± 0.023 <cit.> Systemic velocity, v_sys () -23.379680 DREAM I Limb-darkening coefficients μ_1 0.216 EXOFAST[1] μ_2 0.286 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 0.46^+0.22_-0.25 DREAM 1 Magnitude (J-band) 9.936±0.022 <cit.> ∙ Planetary parameters Orbital period, P (days) 2.89973797 ± 0.00000038 <cit.> Transit epoch, T_0 (BJD_TDB) 2454218.75960 ± 0.00016 DREAM I Eccentricity, e 0.0 (fixed) <cit.> Argument of periastron, ω_⋆ 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ () 90.63 ± 0.58 <cit.> Scaled separation, a/R_⋆ 9.8105 ± 0.2667 <cit.> Orbital inclination, i 86.31 ± 0.19 deg <cit.> Planet-to-star radius ratio, R_P/R_⋆ 0.11091 ± 0.00048 <cit.> Planetary mass, M_pl () 0.595 ±0.019 <cit.> Planetary density, ρ_pl () 0.9750±0.1000 <cit.> Projected spin-orbit angle, λ (deg) -25.3^+29.4_ - 22.8 DREAM I Planet radial-velocity semi-amplitude, K_pl() 145.2±2.4 This paper[2] Equilibrium temperature, T_eq(K) 1170±17 <cit.> HAT-P-33 ∙ Stellar parameters Spectral type F4 <cit.> Stellar mass, M_⋆ (M_) 1.42 ^+0.16_-0.15 <cit.> Stellar radius, R_⋆ (M_) 1.91^+0.26_-0.20 <cit.> Stellar age, τ (Gyr) 2.30 ± 0.30 <cit.> Effective temperature, teff (K) 6460 ^+300_ -290 <cit.> Metallicity (dex) 0.01 ± 0.31 [Fe/H] <cit.> Surface gravity, log g_⋆ (cgs) 4.030 _-0.090^+0.079 <cit.> Systemic velocity, v_sys () 23.080601 DREAM I Limb-darkening coefficients μ_1 0.097 EXOFAST[1] μ_2 0.301 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 15.57 ± 0.31 DREAM I Magnitude (J-band) 10.263±0.021 <cit.> ∙ Planetary parameters Orbital period, P (days) 3.47447773 ± 0.00000066 DREAM I Transit epoch, T_0 (BJD_TDB) 2456684.86508 ± 0.00027 DREAM I Eccentricity, e 0.180 _-0.096 ^+0.110 DREAM I Argument of periastron, ω_⋆ (deg) 88^+33_ -34 <cit.> Stellar reflex velocity, K_⋆ 74.4±8.5 DREAM I Scaled separation, a/R_⋆ 5.69^+0.58_-0.59 <cit.> Orbital inclination, i (deg) 88.2 ^+1.2_ -1.3 <cit.> Planet-to-star radius ratio, R_P/R_⋆ 0.10097 ^+0.00056 _-0.00052 <cit.> Planetary mass, M_pl () 0.72 ^+0.13_ -0.12 <cit.> Planetary density, ρ_pl () 0.134_-0.042^+0.053 <cit.> Projected spin-orbit angle, λ (deg) 5.9±4.1 deg DREAM I Planet radial velocity semi-amplitude, K_pl() 160.6^+6.9_-6.3 This paper[2] Equilibrium temperature, T_eq(K) 1782±28 <cit.> HAT-P-49 ∙ Stellar parameters Spectral type F3 DREAM I Stellar mass, M_⋆ (M_) 1.543±0.051 <cit.> Stellar radius, R_⋆ (R_) 1.833+0.138_-0.076 <cit.> Stellar age τ (Gyr) 1.50 ± 0.20 <cit.> Effective temperature, Teff (K) 6820±52 <cit.> Metallicity (dex) 0.074±0.080 [Fe/H] <cit.> Surface gravity, log g_⋆ (cgs) 4.10±0.04 
<cit.> Systemic velocity, v_sys () 14.208478 DREAM I Limb-darkening coeffcients μ_1 0.078 EXOFAST[1] μ_2 0.303 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 10.68^+0.46_ -0.47 DREAM I Magnitude (J-band) 9.550±0.020 <cit.> ∙ Planetary parameters Orbital period, P (days) 2.6915539±0.0000012 DREAM I Transit epoch, T_0 (BJD_TDB) (BJD_TDB) 2456975.61736 ±0.00050 DREAM I Eccentricity, e 0.0 (fixed) <cit.> Argument of periastron, ω_⋆ (deg) 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ () 177.6 ± 16.0 DREAM I Scaled separation, a/R_⋆ 5.13 ^+0.19_ -0.30 <cit.> Orbital inclination, i (deg) 86.2±1.7 <cit.> Planet-to-star radius ratio, R_P/R_⋆ 0.0792 ± 0.0019 <cit.> Planetary M_pl () 1.730 ± 0.205 <cit.> Planetary density, ρ_pl () 0.75±0.17 <cit.> Projected spin-orbit angle, λ (deg) -97.7±1.8 DREAM I Planet radial velocity semi-amplitude, K_pl() 176.5±2.0 This paper[2] Equilibrium temperature, T_eq(K) 2131^+69_ -42 <cit.> HD89345 ∙ Stellar parameters Spectral type G5 <cit.> Stellar mass, M_⋆ (M_) 1.120^+0.040_ -0.010 <cit.> Stellar radius, R_⋆ (R_) 1.657^+0.020_ -0.004 <cit.> Stellar age, τ (Gyr) 9.40 ^+0.40_-1.30 <cit.> Effective temperature, Teff (K) 5499 ± 73 <cit.> Metallicity (dex) 0.45± 0.04 [Fe/H] <cit.> Surface gravity log g_⋆ (log_10()) 4.044^+0.006_ -0.004 <cit.> Systemic velocity, v_sys () 2.223394 DREAM I Limb-darkening coeffcients μ_1 0.182 EXOFAST[1][2] μ_2 0.300 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 0.58±0.28 DREAM I Magnitude (J-band) 8.091 ±0.020 <cit.> ∙ Planetary parameters Orbital period, P (Days) 11.8144024 ±0.0000066 DREAM I Transit epoch T_0 (BJD_TDB) 2458740.81147±0.00044 DREAM I Eccentricity, e 0.208 ± 0.039 DREAM I Argument of periastron ω (deg) 21.7 ± 19.1 DREAM I Stellar reflex velocity, K_⋆ () 9.1 ± 0.5 DREAM I Scaled separation, a/R_⋆ 13.625 ± 0.027 <cit.> Orbital inclination, i (deg) 87.68 ± 0.10 DREAM I Planet-to-star radius ratio, R_P/R_⋆ 0.03696 ± 0.00041 DREAM I Planetary mass, M_pl () 0.112±0.010 <cit.> Planetary density, ρ_pl () 0.609±0.067 <cit.> Projected spin-orbit angle, λ (deg) 74.2^+33.6_ -32.5 DREAM 1 Planet radial velocity semi-amplitude, K_pl() 99.2^+1.4_ -0.9 This paper[2] Equilibrium temperature, T_eq(K) 1053±14 <cit.> K2-105 ∙ Stellar parameters Spectral type G5 This paper[5] Stellar mass, M_⋆ (M_) 1.05 ± 0.02 <cit.> Stellar radius, R_⋆ (R_) 0.97 ± 0.01 <cit.> Stellar age, τ (Gyr) >0.6 <cit.> Effective temperature, Teff (K) 5636^+49_ -52 <cit.> Metallicity (dex) 0.23^+0.04_ -0.03 [Fe/H] <cit.> Surface gravity log g_⋆ (log_10()) 4.49 ± 0.01 <cit.> Systemic velocity, v_sys () -32.390637 DREAM I Limb-darkening coefficients μ_1 0.169 EXOFAST[1] μ_2 0.299 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 2.13^+0.96_-0.92 DREAM I Magnitude (J-band) 10.541±0.02 <cit.> ∙ Planetary parameters Orbital period, P (days) 8.2669897±0.0000057 DREAM I Transit epoch, T_0 (BJD_TDB) 2458363.2387^+0.00069_ -0.000633 DREAM I Eccentricity, e 0 (fixed) DREAM I Argument of periastron, ω (deg) 90 (fixed) DREAM I Stellar reflex velocity, K_⋆ () 9.4 ± 5.8 <cit.> Scaled separation, a/R_⋆ 17.39 ± 0.19 DREAM I Orbital inclination, i (deg) 88.62 ± 0.10 DREAM I Planet-to-star radius ratio, R_P/R_⋆ 0.03332 ± 0.00067 DREAM I Planetary mass, M_pl () 0.094 ± 0.060 <cit.> Planetary density, ρ_pl () 2.3^+1.7_-1.6 This paper Projected spin-orbit angle, (deg) λ -81^+50_-47 DREAM I Planet radial-velocity semi-amplitude, K_pl() 107.0±0.7 This paper[2] Equilibrium temperature, T_eq(K) 814±12 <cit.> Kepler-25 ∙ Stellar Parameters Spectral type F8 DREAM I Stellar mass, M_⋆ (M_) 
1.26 ± 0.03 <cit.> Stellar radius, R_⋆ (R_) 1.34±0.01 <cit.> Stellar age, τ (Gyr) 2.75 ± 0.30 <cit.> Effective temperature, Teff (K) 6354±27 <cit.> Metallicity [Fe/H] (dex) 0.11±0.03 <cit.> Surface gravity log g (log_10()) 4.285±0.003 <cit.> Systemic velocity, v_sys () -8.633258 DREAM I Limb-darkening coeffcients μ_1 0.106 EXOFAST[1] μ_2 0.304 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 8.89^+0.59_ -0.63 DREAM I ∙ Planetary parameters Magnitude (J-band) 9.764 ±0.020 <cit.> ⋆ Planet, b Orbital period, P (days) 6.2385347882 <cit.> Transit epoch, T_0 (BJD_TDB) 2458648.00807^+0.00057_ - 0.00051 DREAM I Eccentricity, e 0.0029 ^0.0023_ - 0.0017 <cit.> √(e) cosω 0.042 ^0.017_ - 0.036 <cit.> √(e) sinω 0.007 ^0.038_ - 0.035 <cit.> Stellar reflex velocity K_⋆ () 2.6±0.7 This paper[3] Orbital inclination i (deg) 87.173^0.084_ - 0.083 <cit.> Planet-to-star radius ratio R_P/R_⋆ 0.019160 ^+5.1e-5_ -4.8e-5 <cit.> Planetary M_pl 0.0275 ^0.0079_ - 0.0073 <cit.> ⋆ Planet c Orbital period P (days) 12.720370495 ± 0.000001703 <cit.> Transit epoch T_0 (BJD_TDB) 2458649.55482 _-0.00051^ +0.00057 BJD_TDB DREAM I Eccentricity e 0.0061 ^+0.0049_ - 0.0041 <cit.> √(e) cosω -0.024 ^0.067_ - 0.053 <cit.> √(e) sinω 0.004 ^0.065_ - 0.062 <cit.> Stellar reflex velocity, K_⋆ () 3.6^+0.3_-0.4 This paper[3] Scaled separation, a/R_⋆ 18.336 ± 0.27 <cit.> Orbital inclination, i (deg) 87.236 _-0.039^ +0.042 <cit.> Planet-to-star radius ratio, R_P/R_⋆ 0.03637 ± 0.00012 <cit.> Planetary mass, M_pl () 0.0479 ^+0.0041_ - 0.0051 <cit.> Planetary density 0.588^+0.053_-0.061 <cit.> Projected spin-orbit angle, λ -0.9^+7.7_ - 6.4 DREAM I Planet radial-velocity semi-amplitude, K_pl() 98.4±0.8 This EXOFAST[2] Equilibrium temperature, T_eq(K) 992±8 This work[4] ⋆ Planet, d Orbital period, P (days) 122.40^+0.80_ - 0.71 d <cit.> Transit epoch, T_0 (BJD_TDB) 2455715.0^+6.8_ - 7.2 DREAM I Eccentricity, e 0.13^+0.13_ - 0.09 <cit.> √(e) cosω 0.07 ^0.027_ - 0.029 <cit.> √(e) sinω 0.16 ^0.23_ - 0.28 <cit.> Stellar reflex velocity, K_⋆ () 8.0±0.2 This paper[3] Minimum mass, M_pl sini () 0.226^0.032_ - 0.031 <cit.> Kepler-68 ∙ Stellar parameters Spectral type G1 <cit.> Stellar mass, M_⋆ (M_) 1.079±0.051 <cit.> Stellar radius, R_⋆ (R_) 1.243 ± 0.019 <cit.> Stellar age, τ (Gyr) 6.3 ± 1.7 <cit.> Effective temperature, Teff (K) 5793±74 <cit.> Metallicity (dex) 0.12±0.074 [Fe/H] <cit.> Surface gravity, log g (log_10()) 4.281±0.008 <cit.> Systemic velocity, v_sys () -20.762823 DREAM I Limb-darkening coefficients μ_1 0.148 EXOFAST[1] μ_2 0.301 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 0.5±0.5 DREAM I Magnitude (J-band) 8.975 ±0.046 <cit.> ∙ Planetary parameters ⋆ Planet b Orbital period, P (days) 5.3987525913 ± 0.0000005231 <cit.> Transit epoch, T_0 (BJD_TDB) 2455006.85878000 ± 0.00007639 <cit.> Eccentricity, e 0.0 (fixed) <cit.> Argument of periastron, ω (deg) 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ 2.70^+0.48_ - 0.49 m s^-1 <cit.> Scaled separation, a/R_⋆ 10.68 ± 0.14 <cit.> Orbital inclination, i (deg) 87.60 ± 0.90 <cit.> Planet-to-star radius, ratio R_P/R_⋆ 0.01700 ± 0.00046 <cit.> Planetary mass, M_pl () 0.026 ^+0.007_ -0.008 <cit.> Planetary density, ρ_pl () 3.32 ^+0.86_-0.98 <cit.> Projected spin-orbit angle, λ (deg) non-detection DREAM I Planet radial-velocity semi-amplitude, K_pl() 124.4±2.0 This paper[2] Equilibrium temperature, T_eq(K) 1280±90 <cit.> ⋆ Planet c Orbital period, P (days) 9.60502738150± 0.0000132365 d <cit.> Transit epoch, T_0 (BJD_TDB) 2454969.38207000 ± 0.00110495 <cit.> Eccentricity, e 0.0 (fixed) <cit.> 
Argument of periastron, ω (deg) 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ () 0.59 ^+0.50_ -0.52 <cit.> ⋆ Planet d Orbital period, P (days) 634.6 ^+4.1_ -3.7 <cit.> Transit epoch, T_0 (BJD_TDB) 2455878 ± 11 <cit.> Eccentricity, e 0.112 _ -0.034^+0.035 <cit.> Argument of periastron, ω (deg) -64.74_-20.63^+25.78 <cit.> Stellar reflex velocity, K_⋆ () 17.75^+0.50_-0.49 <cit.> WASP-47 ∙ Stellar parameters Spectral type G9 <cit.> Stellar mass, M_⋆ (M_) 1.040 ±0.031 <cit.> Stellar radius, R_⋆ (R_) 1.137 ± 0.013 <cit.> Stellar age, τ (Gyr) 6.5 ^+2.6_-1.2 <cit.> Effective temperature, Teff (K) 5552±75 <cit.> Metallicity (dex) 0.38±0.05 [Fe/H] <cit.> Surface gravity, log g (log_10()) 4.3437±0.0063 <cit.> Systemic velocity, v_sys () -25.847809453 <cit.> Limb-darkening coefficients μ_1 0.179 EXOFAST[1] μ_2 0.299 EXOFAST[1] Stellar projected velocity, vsini_⋆ () 1.80^+0.24_-0.16 DREAM I Magnitude (J-band) 10.613±0.022 <cit.> ∙ Planetary parameters ⋆ Planet b Orbital period, P (days) 4.1591492 ±0.000006 <cit.> Transit epoch, T_0 (BJD_TDB) 2457007.932103 ± 0.000019 <cit.> Eccentricity, e 0 (fixed) <cit.> Argument of periastron, ω (deg) 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ () 140.84 ±0.40 <cit.> ⋆ Planet c Orbital period, P (days) 588.8 ± 2.0 <cit.> Transit epoch, T_0 (BJD_TDB) 2457763.1 ± 4.3 <cit.> Eccentricity, e 0.296 ± 0.016 <cit.> Argument of periastron, ω (deg) 112. ± 4.3 <cit.> Stellar reflex velocity, K_⋆ () 31.04 ± 0.40 <cit.> ⋆ Planet d Orbital period, P (days) 9.03052118 ±0.00000753 DREAM I Transit epoch, T_0 (BJD_TDB) 2459426.5437 ±0.0028 DREAM I Eccentricity, e 0.010^+0.011_-0.007 <cit.> Argument of periastron, ω (deg) 16.5 ^+84.2_-98.6 <cit.> Stellar reflex velocity, K_⋆ () 4.26 ±0.37 <cit.> Scaled separation, a/R_⋆ 16.34 ^+0.08_-0.11 <cit.> Orbital inclination, i (deg) 89.55 ^+0.30_-0.27 <cit.> Planet-to-star radius ratio, R_P/R_⋆ 0.02876 ± 0.00017 <cit.> Planet mass, M_pl () 14.2±1.3 <cit.> Planetary density, ρ_pl () 1.72±0.17 <cit.> Projected spin-orbit angle, λ (deg) 0±24 DREAM I Planet radial-velocity semi-amplitude, K_pl() 103.6±1.0 This paper[2] Equilibrium temperature, T_eq(K) 919±13 This paper[4] ⋆ Planet e Orbital period, P (days) 0.7895933 ± 0.0000044 <cit.> Transit epoch, T_0 (BJD_TDB) 2457011.34862 ± 0.00030 <cit.> Eccentricity, e 0 (fixed) <cit.> Argument of periastron, ω (deg) 90 (fixed) <cit.> Stellar reflex velocity, K_⋆ () 4.55 ± 0.37 <cit.> * For a homogeneous analysis we used quadratic limb-darkening coefficients derived using the EXOFAST calculator <https://astroutils.astronomy.osu.edu/exofast/limbdark.shtml> <cit.> in the J-band. * K_pl=2π a/Psini/√(1-e^2)=(2π G/P)^1/3(M_⋆+M_pl)^1/3sini/√(1-e^2). * K_⋆=(2π G/P)^1/3M_pl*sini/M_⋆^2/3√(1-e^2). * T_eq= T_⋆( R_⋆/2 a)^1/2 (1-A)^1/4, where R_⋆ is the stellar radius, a is the semi-major axis, A is the geometric albedo, we assumed an albedo of 0.2 <cit.>. * Derived from <http://www.pas.rochester.edu/ emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt> § GAUSSIAN PROCESSES TO DERIVE UPPER LIMITS ON THE HELIUM ABSORPTION To have a better description of the correlated noise present in the data, we performed a Differential Evolution Markov chain Monte Carlo (DEMCMC) fit of a Gaussian profile fixing the position at 1083.326 nm and the FWHM at 0.07 nm and varying the peak value, an offset for the continuum, an uncorrelated jitter, and a correlated noise modeled with a Gaussian process (GP) and a squared exponential kernel. 
From the posterior distribution, we were therefore able to derive the 3σ upper limits (the value below which 95% of the posterior distribution of the peak lies) at the position of the helium triplet, marginalized over an uncorrelated jitter and the presence of correlated noise. The values are reported in Table <ref>.
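To make the fitted model concrete, the sketch below shows one way the likelihood described above could be written: a continuum offset plus a Gaussian profile with fixed position and width, and a squared-exponential Gaussian-process covariance with an extra white-noise (jitter) term. The array names, the sign convention of the Gaussian and the NumPy-only implementation are our own assumptions for illustration; they are not taken from the analysis code used in this work.

```python
import numpy as np

CENTER_NM = 1083.326                                   # fixed line position (nm)
FWHM_NM = 0.07                                         # fixed FWHM (nm)
SIGMA_NM = FWHM_NM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def model_spectrum(wave, peak, offset):
    """Continuum offset plus a Gaussian profile of fixed position and width.
    The absorption (negative) sign convention is an assumption for this sketch."""
    return offset - peak * np.exp(-0.5 * ((wave - CENTER_NM) / SIGMA_NM) ** 2)

def log_likelihood(theta, wave, flux, err):
    """GP log-likelihood with a squared-exponential kernel plus jitter (sketch)."""
    peak, offset, log_jitter, log_amp, log_len = theta
    resid = flux - model_spectrum(wave, peak, offset)
    amp2, len2 = np.exp(2.0 * log_amp), np.exp(2.0 * log_len)
    dw = wave[:, None] - wave[None, :]
    cov = amp2 * np.exp(-0.5 * dw ** 2 / len2)          # correlated noise
    cov += np.diag(err ** 2 + np.exp(2.0 * log_jitter)) # data errors + jitter
    _, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, resid)
    return -0.5 * (resid @ alpha + logdet + resid.size * np.log(2.0 * np.pi))
```

A DEMCMC sampler exploring `theta` with this likelihood would then yield the marginal posterior of `peak`, from which the quoted upper limits can be read off.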
http://arxiv.org/abs/2307.02351v1
20230705151049
Online Hybrid CTC/Attention End-to-End Automatic Speech Recognition Architecture
[ "Haoran Miao", "Gaofeng Cheng", "Pengyuan Zhang", "Yonghong Yan" ]
eess.AS
[ "eess.AS" ]
Online Hybrid CTC/attention End-to-End Automatic Speech Recognition Architecture Haoran Miao, Student Member, IEEE, Gaofeng Cheng, Member, IEEE, Pengyuan Zhang, Member, IEEE, Yonghong Yan, Member, IEEE Manuscript received xxxxxxxx; revised xxxxxxxx. (Corresponding author: Gaofeng Cheng.) H. Miao (e-mail: [email protected]), G. Cheng (e-mail: [email protected]), P. Zhang (e-mail: [email protected]) and Y. Yan (e-mail: [email protected]) are with Institute of Acoustics, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Beijing, China. August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Recently, there has been increasing progress in end-to-end automatic speech recognition (ASR) architecture, which transcribes speech to text without any pre-trained alignments. One popular end-to-end approach is the hybrid Connectionist Temporal Classification (CTC) and attention (CTC/attention) based ASR architecture, which utilizes the advantages of both CTC and attention. The hybrid CTC/attention ASR systems exhibit performance comparable to that of the conventional deep neural network (DNN) / hidden Markov model (HMM) ASR systems. However, how to deploy hybrid CTC/attention systems for online speech recognition is still a non-trivial problem. This paper describes our proposed online hybrid CTC/attention end-to-end ASR architecture, which replaces all the offline components of conventional CTC/attention ASR architecture with their corresponding streaming components. Firstly, we propose stable monotonic chunk-wise attention (sMoChA) to stream the conventional global attention, and further propose monotonic truncated attention (MTA) to simplify sMoChA and solve the training-and-decoding mismatch problem of sMoChA. Secondly, we propose truncated CTC (T-CTC) prefix score to stream CTC prefix score calculation. Thirdly, we design dynamic waiting joint decoding (DWJD) algorithm to dynamically collect the predictions of CTC and attention in an online manner. Finally, we use latency-controlled bidirectional long short-term memory (LC-BLSTM) to stream the widely-used offline bidirectional encoder network. Experiments with LibriSpeech English and HKUST Mandarin tasks demonstrate that, compared with the offline CTC/attention model, our proposed online CTC/attention model improves the real time factor in human-computer interaction services and maintains its performance with moderate degradation. To the best of our knowledge, this is the first work to provide the full-scale online solution for CTC/attention end-to-end ASR architecture. End-to-end speech recognition, online speech recognition, hybrid CTC/attention speech recognition. § INTRODUCTION In the past decade, we have witnessed impressive progress in automatic speech recognition (ASR) field. The hybrid deep neural network (DNN)/hidden Markov model (HMM) <cit.> is the first successful deep-learning ASR architecture. 
A typical hybrid DNN/HMM system is composed of several separate modules, including acoustic, pronunciation and language models (AM, PM, LM), all of which are designed manually and optimized separately. Additionally, the hybrid DNN/HMM system needs a complex decoder that has to be built by integrating the AM, PM and LM. To conclude, the hybrid DNN/HMM ASR models require linguistic information and complex decoders, and thus it is difficult to develop hybrid DNN/HMM ASR systems for new languages. In recent years, end-to-end speech recognition <cit.> has gained popularity in the ASR community. End-to-end ASR models simplify the conventional DNN/HMM ASR system into a single deep neural network architecture. Besides, the end-to-end models require no lexicons and predict graphemes or words directly, which makes the decoding procedure much simpler than that of the hybrid DNN/HMM models. To date, the end-to-end ASR architectures have gained significant improvement in speech recognition accuracy <cit.>. There are two major types of end-to-end ASR architectures. One is the Connectionist Temporal Classification (CTC) based ASR architecture <cit.> and the other is the attention-based ASR architecture <cit.>. To combine these two architectures, the hybrid CTC/attention architecture <cit.> was proposed to leverage the CTC objective as an auxiliary task in an attention-based encoder-decoder network. During training, a CTC objective is attached to an attention-based encoder-decoder model as a regularization. During decoding, a joint CTC/attention decoding approach was proposed to combine the decoder scores and CTC scores in the beam search algorithm. Specifically, the CTC branch uses the CTC prefix score <cit.> to efficiently exclude hypotheses with poor alignments. By taking advantage of both CTC and attention mechanisms, the hybrid CTC/attention architecture has been proved to be superior to simple attention-based and CTC-based models <cit.>. Although the hybrid CTC/attention end-to-end ASR architecture is reaching reasonable performance <cit.>, how to deploy it in online scenarios remains an unsolved problem. After inspecting the CTC/attention ASR architecture, we identify four challenges in deploying online hybrid CTC/attention end-to-end ASR systems:
* Attention mechanism: The conventional CTC/attention ASR architecture employs a global attention mechanism such as additive attention <cit.> or location-aware attention <cit.>, which performs attention over the entire input representations.
* CTC prefix score: The CTC prefix score is defined as the cumulative probability of all label sequences sharing the same prefix. We need the complete utterance to compute the CTC prefix scores.
* Unsynchronized predictions between CTC and attention: Attention-based ASR models perform label-synchronous decoding, while CTC-based ASR models are frame-synchronized. Even though CTC/attention joint training can guide them to have more synchronized predictions, the predictions of CTC and attention are not strictly synchronized. This phenomenon makes online joint CTC/attention decoding difficult.
* Bidirectional encoder: Bidirectional long short-term memory (BLSTM) <cit.> networks, which can exploit long-term dependencies of the input speech, are widely used in CTC/attention end-to-end ASR architectures. However, the bidirectional encoder hampers the online deployment of hybrid CTC/attention models.
To surmount the obstacles of deploying online attention-based end-to-end models, there have been some prior published efforts to stream attention mechanisms, including the neural transducer (NT) <cit.>, hard monotonic attention (HMA) <cit.>, monotonic chunk-wise attention (MoChA) <cit.>, triggered attention (TA) <cit.>, etc. NT is a limited-sequence streaming attention-based model that consumes a fixed number of input frames and produces a variable number of labels before the next chunk of input frames arrives. However, NT requires coarse alignments during training (e.g., what words are verbalized by what chunk of input frames). HMA and MoChA are alignment-free streaming attention models, which enable the attention to automatically select a fixed number of input representations in a monotonic left-to-right mode. However, <cit.> found that MoChA had a convergence problem because the attention weights of MoChA decreased immediately, and <cit.> found that MoChA had problems scaling up to long utterances. TA was proposed to leverage CTC-based networks to dynamically partition utterances, and to perform attention on the incremental input representations as the decoder generates labels. Owing to the joint CTC/attention decoding strategy, realizing an online CTC/attention end-to-end ASR architecture requires sophisticated techniques. This paper summarizes our work on streaming the CTC/attention ASR architecture in four aspects. First, we propose a stable MoChA (sMoChA) to address the non-convergence problem of MoChA. Furthermore, we design a monotonic truncated attention (MTA) to deal with the training-and-decoding mismatch problem of sMoChA. Compared with sMoChA, MTA is simpler and achieves higher recognition accuracy. Second, we leverage the CTC output to segment input representations and compute a truncated CTC (T-CTC) prefix score on top of the segmented input representations rather than the complete utterances. Our experiments demonstrate that the T-CTC prefix score can approximate its corresponding CTC prefix score and accelerate the decoding speed. Third, we propose a dynamic waiting joint decoding (DWJD) algorithm to solve the unsynchronized prediction problem of joint CTC/attention decoding. Finally, we use a stack of VGGNet-style <cit.> convolutional neural networks (CNNs) as the encoder front-end and apply latency-controlled BLSTM (LC-BLSTM) <cit.> as the encoder back-end. Figure <ref> depicts our proposed online CTC/attention end-to-end ASR architecture. Our experiments showed that almost no speech recognition accuracy degradation was caused by using MTA, T-CTC and DWJD, which demonstrated the superiority and robustness of our proposed methods. The major recognition accuracy degradation came from the use of LC-BLSTM based encoder networks. Compared with our preliminary version <cit.>, the new content in this paper includes proposing MTA to improve the online performance, providing more details of the decoding algorithms and verifying the validity of the system on Chinese and English with more experiments. The rest of this paper is organized as follows. In section 2, we describe the hybrid CTC/attention ASR architecture. In section 3, we present some prior streaming attention methods. In section 4, we describe the proposed online CTC/attention end-to-end ASR architecture. The experimental setups and results are described in section 5. Finally, we draw conclusions in section 6.
§ HYBRID CTC/ATTENTION ARCHITECTURE In this section, we introduce the training and decoding details of the hybrid CTC/attention end-to-end ASR architecture <cit.> and reveal why it is difficult to deploy it in online scenarios. §.§ Training of Hybrid CTC/attention ASR Architecture The hybrid CTC/attention end-to-end ASR architecture is based on the attention-based encoder-decoder model <cit.>. Suppose that the T input frames X=(x_1, ···, x_T) correspond to the L output labels Y=(y_1, ···, y_L); the encoder converts X into input representation vectors H=(𝐡_1, ···, 𝐡_T). To create a context for each representation vector that depends on both its past as well as its future, we usually choose a stack of BLSTM layers as the offline encoder network. Besides, there are also low-latency alternatives, such as the LSTM and LC-BLSTM. The attention-based decoder is an autoregressive network and computes the conditional probabilities for each label y_i as follows: 𝐚_i = GlobalAttention(𝐚_i-1, 𝐪_i-1, H), 𝐫_i = ∑_j=1^T a_i,j𝐡_j, p(y_i|y_1, ···, y_i-1, X)=Decoder(𝐫_i, 𝐪_i-1, y_i-1), where i and j are indices of output labels and input representation vectors, respectively, 𝐚_i∈ℝ^T is a vector of attention weights, 𝐪_i-1 is the (i-1)-th state of the decoder, and 𝐫_i is a label-wise representation vector. In the state-of-the-art end-to-end ASR models <cit.>, the global attention is implemented as the location-aware attention (LoAA). For j=1,···,T, LoAA computes the attention weights as follows: 𝐟_i = 𝐐∗𝐚_i-1, e_i,j = 𝐯^⊤tanh(𝐖_1𝐪_i-1+𝐖_2𝐡_j+𝐖_3𝐟_i+𝐛), a_i,j = Softmax({e_i,j}_j=1^T), where matrices 𝐖_1, 𝐖_2, 𝐖_3, 𝐐 and vectors 𝐛, 𝐯 are learnable parameters. The term ∗ denotes a one-dimensional convolution operation, with the convolutional parameter, 𝐐, along the input frame axis j. The computational complexity of LoAA is 𝒪(TL). Different from the basic encoder-decoder model, the hybrid CTC/attention end-to-end ASR architecture leverages the CTC <cit.> objective as an auxiliary task during training. Specifically, the CTC branch shares the encoder network and has an additional classification layer to predict labels or the “blank” label ⟨ b⟩. Therefore, the objective function ℒ_mtl is made up of a linear interpolation of the CTC and attention objectives, which is expressed as ℒ_mtl=λℒ_ctc + (1-λ)ℒ_att, where the tunable coefficient λ satisfies 0≤λ≤1. §.§ Decoding of Hybrid CTC/attention ASR Architecture The previous work <cit.> shows that the attention-based encoder-decoder model tends to perform poor alignment without additional constraints, prompting the decoder to generate unnecessarily long hypotheses. In the hybrid CTC/attention architecture, the joint CTC/attention decoding method helps to refine the search space <cit.> by considering the CTC prefix scores <cit.> of hypotheses. However, the joint decoding method is inapplicable in online scenarios, not only because of the global attention but also because of the CTC prefix scores. Suppose that the decoder generates a partial hypothesis l=(y_1, ···, y_n), where n is the length of this hypothesis; the score of l assigned by the decoder branch is S_att=∑_i=1^nlog p_att(y_i|y_1,···,y_i-1,H). On the one hand, because the global attention aligns each output label to the entire sequence of input representation vectors, the attention-based decoder depends on the full utterance; on the other hand, because the attention provides no information about the time boundary of l, it is difficult for the CTC branch to compute the exact probability of l.
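Before turning to how joint decoding copes with the missing time boundary, the training-time components introduced in this section — one step of location-aware attention and the multi-task objective ℒ_mtl — can be summarized in a short PyTorch-style sketch. The tensor shapes, parameter names and the interpolation weight used below are our own illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def location_aware_attention_step(q_prev, H, a_prev, W1, W2, W3, Q, b, v):
    """One decoding step of location-aware attention (illustrative sketch).

    q_prev: (D,) previous decoder state; H: (T, E) encoder outputs;
    a_prev: (T,) previous attention weights; Q: (C, 1, K) conv kernel with K odd.
    Returns the new attention weights a_i (T,) and the context vector r_i (E,).
    """
    # f_i = Q * a_{i-1}: 1-D convolution of the previous weights along the frame axis.
    f = F.conv1d(a_prev.view(1, 1, -1), Q, padding=Q.shape[-1] // 2)  # (1, C, T)
    f = f.squeeze(0).transpose(0, 1)                                  # (T, C)
    # e_{i,j} = v^T tanh(W1 q_{i-1} + W2 h_j + W3 f_{i,j} + b)
    e = torch.tanh(q_prev @ W1.T + H @ W2.T + f @ W3.T + b) @ v       # (T,)
    a = torch.softmax(e, dim=0)                                       # attention weights a_i
    r = a @ H                                                         # context vector r_i
    return a, r

def multitask_loss(loss_ctc, loss_att, lam=0.3):
    """L_mtl = lam * L_ctc + (1 - lam) * L_att; lam = 0.3 is a typical choice,
    not necessarily the value used in this paper."""
    return lam * loss_ctc + (1.0 - lam) * loss_att
```

With these training-time pieces in place, we return to the decoding problem raised above.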
Therefore, in the joint decoding method, the CTC branch leverages the CTC prefix score to consider all possible time boundaries, which requires the full utterance. The score of l assigned by the CTC branch is S_ctc=log∑_j=1^Tp_ctc(l|H_1:j). To further improve the decoding accuracy, we employ an external RNN language model <cit.>, which is separately trained with the training transcriptions or an extra corpus. In the joint decoding framework, a beam search is applied to prune l in accordance with the scoring function S=μ S_ctc + (1-μ)S_att + β S_lm, where S_lm is the score from the language model, and μ and β represent the CTC and language model weights, respectively, adjusting the proportions of the different scores in the beam search. For the decoding end detection criterion, if there is little chance of finding a longer complete hypothesis with a higher score, the beam search is terminated <cit.>. § PRIOR STREAMING ATTENTION WORKS Attention-based end-to-end models typically leverage a global attention network to perform alignments. To apply online CTC/attention decoding, we first need to stream the attention mechanisms. In this section, we introduce some recent prior efforts on streaming the attention mechanisms. §.§ Hard Monotonic Attention (HMA) According to the observations in <cit.>, in many ASR tasks the attention tends to align each output label to the input representations nearly locally and monotonically <cit.>. Based on this property, HMA aligns each output label to a single input representation in a monotonic left-to-right way, as shown in figure <ref>(a). §.§.§ Decoding of HMA In the decoding stage, HMA always begins a search from a previously attended input representation and chooses the next input representation via a stochastic process. Let i and j denote the indices of the output labels and input representation vectors, respectively. Assuming that HMA has attended to the t_i-1-th input representation vector 𝐡_t_i-1 at the (i-1)-th output time-step, the stochastic process selects the next input representation vector by a sequential method, e.g., for j=t_i-1, t_i-1+1, ..., e_i,j = Energy(𝐪_i-1,𝐡_j), p_i,j = Sigmoid(e_i,j), z_i,j ∼ Bernoulli(p_i,j), where 𝐪_i-1 is the (i-1)-th state of the decoder, p_i,j is a selection probability and z_i,j is a discrete variable. Once we sample z_i,j=1 for some j, HMA stops the searching process and sets the label-wise representation vector 𝐫_i=𝐡_j. To make the decoding process deterministic and efficient, <cit.> replaced Bernoulli sampling with an indicator function z_i,j=𝕀(p_i,j>0.5). The decoding computational complexity of HMA is 𝒪(T), where T is the length of the sequential input representation, because HMA scans each input representation vector only once. §.§.§ Training of HMA As the discrete variable z_i,j conflicts with the backpropagation algorithm in the training stage, we have to compute the expectations 𝔼[z_i,j] and 𝔼[𝐫_i] based on all the input representation vectors during training: 𝔼[z_i,j] = p_i,j∑_k=1^j(𝔼[z_i-1,k]∏_l=k^j-1(1-p_i,l)), 𝔼[𝐫_i] = ∑_j=1^T𝔼[z_i,j]·𝐡_j. To analyze the convergence property of 𝔼[z_i,j], equation (<ref>) can be rewritten recursively: 𝔼[z_i,j]=p_i,j/p_i,j-1·(1-p_i,j-1)𝔼[z_i,j-1]+p_i,j𝔼[z_i-1,j]. According to the term p_i,j/p_i,j-1·(1-p_i,j-1)𝔼[z_i,j-1], the expectations of z exponentially decay by (1-p_i,j-1) along the index j <cit.>, which means that it is difficult for the attention network to attend to input representation vectors at a long distance.
Therefore, <cit.> used a modified Energy function as follows: e_i,j=g𝐯^⊤/||𝐯||tanh(𝐖_1𝐪_i-1+𝐖_2𝐡_j+𝐛)+r. The attention network learns the appropriate scale and offset of e_i,j via the trainable scalars g and r. We have to initialize r by setting it to a negative value to guarantee that the mean of p_i,j approaches 0 <cit.>. However, according to the term p_i,j𝔼[z_i-1,j], the expectations of z exponentially decay as p_i,j along the index i, which indicates that the alignments disappear when predicting output labels at a long distance. Therefore, 𝔼[z] tends to vanish along either j or i. In addition to the attention weights vanishing problem, HMA also suffers from the mismatch problem between the training and decoding stages. In the decoding stage, we replace the Bernoulli sampling with an indicator function z_i,j=𝕀(p_i,j>0.5), assuming that p_i,j is discrete, i.e., p_i,j approaches 0 or 1. However, there is no strict guarantee that p_i,j is close to 0 or 1. §.§ Monotonic Chunk-wise Attention (MoChA) MoChA extends HMA by aligning each output label to consecutive input representations, as shown in figure <ref> (b). The choice of chunk size is task-specific. §.§.§ Decoding of MoChA In the decoding stage, MoChA searches for end-points in the same way as HMA does. Additionally, MoChA performs soft alignments on consecutive input representation vectors within a fixed-length chunk. Assuming that the end-point at the (i-1)-th output time-step is the t_i-1-th input representation vector 𝐡_t_i-1, the next end-point t_i is determined by (<ref>)-(<ref>), and the label-wise representation vector 𝐫_i is computed as follows, for k=t_i-w+1,···,t_i: u_i,k = 𝐯^⊤tanh(𝐖_1𝐪_i-1+𝐖_2𝐡_k+𝐛), α_i,k = exp(u_i,k)/∑_l=t_i-w+1^t_iexp(u_i,l), 𝐫_i = ∑_k=t_i-w+1^t_iα_i,k𝐡_k, where the matrices 𝐖_1, 𝐖_2, the bias vector 𝐛 and the vector 𝐯 are learnable parameters. In (<ref>), u_i,k is the pre-softmax activation. In (<ref>), w denotes the chunk width and α_i,k denotes the attention weight within the chunk. The decoding computational complexity of MoChA is 𝒪(T+wL). Therefore, MoChA has a lower decoding computational cost than the global attention models when T and L are large and w is small. §.§.§ Training of MoChA Similar to the training of HMA, we have to compute the expectation values of α_i,j and 𝐫_i when we train MoChA: 𝔼[α_i,j] = ∑^j+w-1_k=j𝔼[z_i,k]· exp(u_i,j)/∑^k_l=k-w+1 exp(u_i,l), 𝔼[𝐫_i] = ∑_j=1^T𝔼[α_i,j]·𝐡_j, where 𝔼[z_i,k] is the expectation value computed by (<ref>). In (<ref>), 𝔼[𝐫_i] is the weighted sum of all the input representation vectors. The vanishing problem of 𝔼[z] is still a tricky issue for MoChA and HMA. Similar to the findings in <cit.>, we failed to train MoChA and HMA on long utterances, as shown in figure <ref>(a)-(d), but succeeded with short utterances, as shown in figure <ref>(g)-(i). It should be noted that MoChA also suffers from the mismatch problem between the training and decoding stages. The work in <cit.> found that this mismatch problem would harm the performance of MoChA. § PROPOSED ONLINE CTC/ATTENTION BASED END-TO-END ASR ARCHITECTURE In this section, we propose several innovative algorithms to develop an online joint CTC/attention based end-to-end ASR architecture, including the sMoChA, MTA, T-CTC prefix score and DWJD algorithms. §.§ Stable MoChA (sMoChA) We design sMoChA to address the vanishing attention weights problem of HMA and MoChA in the first part. Then, we alleviate the training-and-decoding mismatch problem of HMA and MoChA in the second part.
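For reference, the HMA end-point search and the MoChA chunk-wise softmax that sMoChA modifies can be condensed into the following sketch. The `energy` and `chunk_energy` callables stand in for the learned scoring networks, and the fallback behaviour when no end-point is selected is our own simplifying assumption rather than something specified in the paper.

```python
import numpy as np

def mocha_decode_step(H, q_prev, t_prev, w, energy, chunk_energy):
    """One MoChA decoding step (illustrative sketch).

    H: (T, E) encoder outputs; q_prev: previous decoder state;
    t_prev: end-point chosen at the previous output step; w: chunk width.
    """
    T = H.shape[0]
    t_i = None
    # Hard monotonic end-point search (HMA rule): scan forward from t_prev and
    # stop at the first frame whose selection probability exceeds 0.5.
    for j in range(t_prev, T):
        p = 1.0 / (1.0 + np.exp(-energy(q_prev, H[j])))
        if p > 0.5:
            t_i = j
            break
    if t_i is None:          # no frame selected: one possible fallback is the last frame
        t_i = T - 1
    # Soft attention over the w frames ending at the end-point.
    lo = max(0, t_i - w + 1)
    u = np.array([chunk_energy(q_prev, H[k]) for k in range(lo, t_i + 1)])
    a = np.exp(u - u.max()); a /= a.sum()
    r_i = a @ H[lo:t_i + 1]  # label-wise representation vector
    return r_i, t_i
```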
§.§.§ Addressing Vanishing Attention Weights Problem Training sMoChA is different from training MoChA because we allow sMoChA to inspect the sequential input representation from the first element, rather than from the previous end-point as MoChA does. Let i and j denote the indices of output labels and input representation vectors, respectively. The expectation of the discrete variable z_i,j changes to: 𝔼[z_i,j] = p_i,j∏_k=1^j-1(1-p_i,k) = p_i,j/p_i,j-1·(1-p_i,j-1)𝔼[z_i,j-1]. Now, we discuss the convergence property of sMoChA. In (<ref>), 𝔼[z_i,j] no longer decays to zero with the increase of i, because 𝔼[z_i,j] is independent of 𝔼[z_i-1,j]. However, 𝔼[z_i,j] still exponentially decays to zero with the increase of j. To prevent 𝔼[z_i,j] from vanishing, we enforce the mean of (1-p_i,j) to be close to 1 by initializing r in (<ref>) to a negative value, e.g., r=-4. As illustrated in figure <ref> (e), sMoChA has successfully addressed the vanishing attention weights problem of the original MoChA. §.§.§ Alleviating Mismatch Problem in Training / Decoding To make sMoChA applicable to online tasks, we force the end-point to move forward and perform alignments within a single chunk in the decoding stage, just like MoChA. However, there exists a mismatch problem between the training and decoding stages, because the label-wise representation vector 𝐫_i in the decoding stage is computed within a single chunk, while the 𝐫_i in the training stage is computed over all the input representation vectors. This mismatch problem will cause speech recognition accuracy degradation. To alleviate this mismatch problem, we propose to use the higher order decoding chunks mode for sMoChA in the decoding stage, rather than one single decoding chunk. When switching to the higher order decoding chunks mode, the formulation of 𝐫_i changes to: 𝔼'[z_i,k] = 𝔼[z_i,k]/∑_l=t_i-n+1^t_i𝔼[z_i,l], α_i,j = ∑^j+w-1_k=j𝔼'[z_i,k] exp(u_i,j)/∑^k_l=k-w+1 exp(u_i,l), 𝐫_i = ∑_j=t_i-w+n^t_iα_i,j𝐡_j, where t_i denotes the selected end-point, n denotes the chunk order, i.e., the number of consecutive decoding chunks, and w denotes the chunk width. 𝔼'[z_i,k] is the normalization of 𝔼[z_i,k] across the consecutive chunks. u_i,j is the pre-softmax activation computed by (<ref>), α_i,j is the attention weight and 𝐡_j is the j-th input representation vector. Because of the higher order decoding chunks mode, sMoChA actually attends to w+n-1 consecutive input representation vectors at each time-step. The gap between the training and decoding stages is reduced with the increase of n. The decoding computational complexity of sMoChA is as follows: (1) 𝒪(T+wL) for n=1, which is the same as MoChA; (2) 𝒪(TL+wL+nL) for n>1, because {p_i,j}_j≤ k are required to compute 𝔼[z_i,k]. For n→∞, as shown in figure <ref>(c), sMoChA covers all the historical input representation vectors. As n increases, the decoding computational complexity increases. §.§ Monotonic Truncated Attention (MTA) During decoding, sMoChA suffers from the training-and-inference mismatch problem, and the receptive field of sMoChA is restricted to the predefined chunk width. Even though we can adopt the higher order decoding chunks mode for sMoChA, this bears a higher computational cost. To better exploit the historical information and simplify sMoChA, we propose MTA to perform attention over the truncated historical input representations, as shown in figure <ref> (d).
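The product-form weights α_i,j = p_i,j ∏_k<j (1 − p_i,k), which sMoChA uses as training-time expectations and which MTA (described next) reuses directly as its attention weights, can be computed with a cumulative product, as in the following sketch. The array names are ours, not the authors'.

```python
import numpy as np

def smocha_expected_selection(p):
    """Expected selection weights E[z_{i,j}] = p_{i,j} * prod_{k<j} (1 - p_{i,k}).

    p: (L, T) selection probabilities for L output steps over T frames.
    Returns an (L, T) array of expected selection weights (a sketch).
    """
    # Cumulative product of (1 - p) along the frame axis, shifted by one so that
    # frame j only sees frames k < j.
    not_selected = np.cumprod(1.0 - p, axis=1)
    shifted = np.concatenate([np.ones((p.shape[0], 1)), not_selected[:, :-1]], axis=1)
    return p * shifted

def smocha_context(expected_z, H):
    """Training-time label-wise representations: E[r_i] = sum_j E[z_{i,j}] h_j."""
    return expected_z @ H   # (L, T) @ (T, E) -> (L, E)
```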
§.§.§ Decoding of MTA In the decoding stage, MTA starts the search from the previous end-point and selects a new end-point to truncate the sequential input representation. Then, MTA performs alignments on the truncated sequential input representation. Since MTA only needs truncated historical information, it can be applied to online tasks. Assuming that the maximum number of input representation vectors is T, and the previous end-point at the (i-1)-th output time-step is t_i-1, we compute the attention weights for j=1,···, T as follows: e_i,j = Energy(𝐪_i-1,𝐡_j), p_i,j = Sigmoid(e_i,j), z_i,j = 𝕀(p_i,j>0.5 ∧ j≥ t_i-1), α_i,j = p_i,j∏_k=1^j-1(1-p_i,k), where 𝐪_i-1 is the (i-1)-th state of the decoder and 𝐡_j is the j-th input representation vector. The Energy function assigns a larger value when 𝐪_i-1 and 𝐡_j are more closely related. p_i,j is the truncation probability, z_i,j is the discrete truncate-or-not decision and α_i,j is the attention weight. Once p_i,j>0.5 and j≥t_i-1, MTA stops computing attention weights and sets the current end-point t_i to j. MTA ensures that the end-point moves forward consistently by enforcing t_i≥t_i-1. Then, the label-wise representation vector 𝐫_i is computed as follows: 𝐫_i=∑_j=1^t_iα_i,j𝐡_j. Equation (<ref>) shows that MTA includes all the historical input representation vectors, which means that MTA has a broader receptive field and better modeling capability than HMA, MoChA and sMoChA with a limited chunk order. MTA also simplifies the computation of the attention weights by using the truncation probabilities directly, while MoChA and sMoChA use the ChunkEnergy function to recompute attention weights. The decoding computational complexity of MTA is 𝒪(TL), where T and L denote the lengths of the input representation vectors and output labels, respectively. §.§.§ Training of MTA In the training stage, MTA discards the indicator function in (<ref>) and computes the label-wise representation vector 𝐫_i as follows: 𝐫_i=∑_j=1^Tα_i,j𝐡_j. Similarly, we choose (<ref>) as the Energy function and initialize the bias r to a negative value, e.g., r=-4, just like sMoChA, to prevent the attention weights from vanishing. Furthermore, MTA alleviates the training-and-decoding mismatch problem. According to (<ref>) and (<ref>), MTA performs alignments on top of all the input representation vectors in the training stage, but the attention weight α_i,j after the end-point is close to 0 due to the sharp selection probability p_i,j at the end-point. Therefore, there is little influence on the recognition accuracy when the future information is unavailable in the decoding stage. §.§ Truncated CTC (T-CTC) Prefix Score The joint CTC/attention decoding is of crucial importance for the decoder to generate high-quality hypotheses. However, as we have stressed in section 2-B, the CTC prefix scores are computed on the full utterance. In this section, we propose a truncated CTC (T-CTC) prefix score to stream the calculation of the CTC prefix score. §.§.§ Properties of CTC Prefix Scores First, we discuss the reason why it is redundant to compute the CTC prefix score based on the full utterance. Given that the encoder has generated a sequential input representation H=(𝐡_1, ···, 𝐡_T_ max) of length T_ max and the decoder has generated a partial hypothesis l=(y_1, ···,y_n) of length n, the probability of l over H_1:j in (<ref>) is computed by: p_ctc(l|H_1:j)=Φ· p(y_n|𝐡_j), with Φ=γ^b_j-1(l_1:n-1) if y_n-1=y_n, and Φ=γ^b_j-1(l_1:n-1)+γ^n_j-1(l_1:n-1) otherwise
, where p(y_n|𝐡_j) is the j-th output of the CTC branch, which also represents the posterior probability. γ^n_j(l) and γ^b_j(l) denote the forward probabilities of l over H_1:j, where the superscripts n and b represent the different cases in which all CTC alignments end with a non-blank or a blank label. Considering the peaky posterior properties of CTC-based models <cit.>, p(y_n|𝐡_j) is approximately equal to 0 everywhere except for the time-steps when the CTC-based model predicts y_n. Therefore, p_ctc(l|H_1:j)≫0 only when the CTC-based model predicts l over H_1:j and predicts y_n at 𝐡_j for the first time; otherwise p_ctc(l|H_1:j)≈0. We can always find an end-point t_n such that p_ctc(l|H_1:j)≈0 for j>t_n. Figure <ref> illustrates this property of CTC prefix scores, which is in line with our analysis. Based on this property, we only need t_n input representation vectors to estimate the CTC prefix score of l. The proposed T-CTC prefix score is thus computed as follows: S_tctc=log∑_j=1^t_np_ctc(l|H_1:j). §.§.§ T-CTC Prefix Score Algorithm Algorithm <ref> describes how the CTC branch determines the end-point t_n and computes the T-CTC prefix score of l over H_1:t_n. The initialization and recursion steps for the forward probabilities, γ^n_j(l) and γ^b_j(l), are described in lines 5-7 and lines 9-28, respectively. When j≤ t_n-1, the T-CTC prefix probability Ψ can be updated directly (line 22) because γ^n_j-1(l_1:n-1) and γ^b_j-1(l_1:n-1) have already been calculated in the previous step. When j>t_n-1, the missing forward probabilities have to be computed before updating Ψ (lines 10-18). When the change of Ψ is less than the threshold θ, e.g., θ=10^-8, the recursion step is terminated (line 24). The algorithm forces the end-point to move forward as the hypothesis expands, which is consistent with the monotonic alignments in CTC-based models. Finally, the algorithm returns a T-CTC prefix score when the last label of l is ⟨ eos⟩ (lines 1-3). It should be noted that the end-point will not exceed the total number of input representation vectors T_ max, which is determined by voice activity detection (VAD) as the input frames arrive. §.§.§ T-CTC vs. CTC The T-CTC prefix score not only approximates the CTC prefix score, but also has a lower computational complexity than the CTC prefix score. For a partial hypothesis with the end-point t_n, the T-CTC prefix score algorithm reduces the computational cost to t_n/T_ max. During the beam search, the majority of partial hypotheses are pruned, and the T-CTC prefix score algorithm can accelerate the joint CTC/attention decoding process. §.§ Dynamic Waiting Joint Decoding (DWJD) We have described the proposed streaming attention methods and the T-CTC prefix score algorithm in the previous sections. However, it remains a problem how the encoder, the attention-based decoder and the CTC branch collaborate to generate and prune the hypotheses in the online decoding stage. Because the attention-based decoder and the CTC branch do not generate synchronized predictions, as shown in figure <ref>, we have to compute the scores of the same hypothesis in the attention-based decoder and the CTC branch at different time-steps. To address the CTC/attention unsynchronized prediction problem in the online setting, we propose a dynamic waiting joint decoding (DWJD) algorithm, which dynamically collects the prediction scores of the attention-based decoder and the CTC branch in an online manner. See Algorithm <ref> for the details of the DWJD algorithm.
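Before walking through DWJD, the truncated prefix-score idea of the previous subsection can be made concrete with the following sketch, which extends a prefix g by a label c using the standard CTC prefix recursion and stops accumulating once the increment of Ψ falls below the threshold θ. It is a simplification of Algorithm <ref> — probabilities are kept in the linear domain for readability, the end-point handling is reduced to a single loop, and all names are ours — so it should be read as an illustration rather than the paper's exact procedure.

```python
import numpy as np

def tctc_prefix_score(post, g_n, g_b, c, last_label, blank=0, theta=1e-8, t_start=1):
    """Truncated CTC prefix score for extending a prefix g by label c (sketch).

    post: (T, V) frame-wise CTC posteriors; g_n, g_b: (T,) forward probabilities
    of the prefix g for paths ending in a non-blank / blank symbol;
    last_label: last symbol of g (None for the empty prefix).
    Returns (log score, new_n, new_b, end_point).
    """
    T = post.shape[0]
    new_n = np.zeros(T)
    new_b = np.zeros(T)
    new_n[0] = post[0, c] if last_label is None else 0.0   # initialization
    psi = new_n[0]
    end_point = 0
    for j in range(1, T):
        # Probability mass of g that can be extended by c at frame j.
        phi = g_b[j - 1] + (0.0 if last_label == c else g_n[j - 1])
        new_n[j] = (new_n[j - 1] + phi) * post[j, c]
        new_b[j] = (new_b[j - 1] + new_n[j - 1]) * post[j, blank]
        delta = phi * post[j, c]
        psi += delta
        end_point = j
        # Truncation: the peaky CTC posteriors make later contributions negligible.
        # The full algorithm also forces the end-point past the previous one (t_start).
        if j >= t_start and delta < theta:
            break
    return np.log(psi + 1e-300), new_n, new_b, end_point
```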
§.§.§ DWJD Algorithm In the joint CTC/attention decoding method, the attention-based decoder generates new hypotheses and the CTC branch assists in pruning the hypotheses. In the DWJD algorithm, the streaming attention model first searches the end-point t_att and computes the label-wise representation vector 𝐫_i based on t_att input representation vectors (lines 4-20). After that, the decoder predicts a new label, along with the corresponding decoder score S_att (lines 21-22). Then, the CTC branch searches the end-point t_ctc and computes the T-CTC prefix score S_tctc of the new hypothesis conditioned on t_ctc input representation vectors (lines 23-34). Finally, we prune this hypothesis by the combination of S_att and S_tctc. In our experiments, we also utilize a separately trained LSTM language model, together with S_ctc and S_att, to prune l in the beam search. In the offline joint CTC/attention decoding method, the forward propagation of the encoder and the beam search of the decoder are performed separately. However, in our online joint CTC/attention decoding method, the encoder and decoder operate concurrently and have to be carefully scheduled. The key to the DWJD algorithm is to organize the input representation vectors, i.e., the streaming encoder outputs, in the beam search. When the attention-based decoder and the CTC branch predict the end-points at different time-steps, or different hypotheses correspond to different end-points, the hypothesis with the later end-point should be suspended until the encoder provides enough outputs. For example, if the attention-based decoder needs more encoder outputs than the CTC branch, the attention-based decoder has to wait for the encoder to produce enough outputs first. When the decoding process switches to the CTC branch, we just need to retrieve the encoder outputs already obtained. §.§.§ End Detection Criteria In our online joint CTC/attention decoding method, it is important to terminate the beam search appropriately. We find that the T-CTC prefix score cannot exclude premature-ending hypotheses during the beam search as the CTC prefix score does. If the decoder predicts ⟨ eos⟩ earlier than expected, T-CTC will assign a higher score than the CTC prefix score, because such hypotheses are reasonable based on the truncated information but may be wrong conditioned on the complete utterance. Besides, for hypotheses sharing the same prefix, the decoder and the external language model assign higher scores to shorter hypotheses because of the property of probabilities (0≤p≤1). Therefore, the scores of short hypotheses tend to be higher than those of long hypotheses in our online joint CTC/attention decoding method. The end detection criterion adopted in <cit.>, which terminates the beam search when the scores get lower, leads to an early-stop problem in our online system. To tackle this problem, we design a different end detection criterion to terminate the decoder appropriately. First, we believe that the monotonic attention-based decoder should continue predicting new labels until the end-point t_ctc reaches the last encoder output, which is determined by VAD in the front-end. Therefore, our online joint CTC/attention decoding method can get rid of the requirements for the length penalty factor, coverage term <cit.> and other predefined terms that are often adopted in label-synchronous decoding.
Second, we use the following end detection criterion: ∑_m=1^M 𝕀( max_l∈Ω:|l|=n S(l) − max_l'∈Ω:|l'|=n−m S(l') < D_end ) = M, given that the length of the current complete hypotheses is n. Equation (<ref>) becomes true if the scores of the current complete hypotheses are much smaller than those of shorter complete hypotheses, which implies that there is little chance to get longer hypotheses with higher scores. In our experiments, we set M to 3 and D_end to -10. For the final recognized hypotheses, we suggest replacing the T-CTC prefix scores of the collected ending hypotheses with the CTC prefix score to eliminate those premature-ending hypotheses. § EXPERIMENTS AND RESULTS §.§ Corpus We evaluated our methods on two different ASR tasks. The first ASR task uses the standard English speech corpus LibriSpeech <cit.> (about 960h). The second ASR task uses the HKUST Mandarin conversational telephone speech corpus (HKUST) <cit.> (about 200h). Experiments with these two corpora were designed to show the effectiveness of the proposed online methods. To improve the ASR accuracy on the HKUST task, we applied speed perturbation <cit.> with factors of 0.9, 1.0 and 1.1 to the training set of HKUST. The development sets of both LibriSpeech and HKUST were used to tune the ASR model training and decoding hyperparameters. §.§ ASR Model Descriptions We used the ESPnet <cit.> toolkit to build both the offline CTC/attention end-to-end ASR baselines and the proposed online models. §.§.§ Inputs For all ASR models, we used 83-dimensional features, which included 80-dimensional filter banks, pitch, delta-pitch and Normalized Cross-Correlation Functions (NCCFs). The features were computed with a 25 ms window and shifted every 10 ms. §.§.§ Outputs For the LibriSpeech task, we tried English characters and pronunciation-assisted sub-word modeling (PASM) <cit.> units as the ASR model outputs. PASM is a sub-word extraction method that leverages the pronunciation information of a word. The character set consisted of 26 characters, and the PASM set contained 200 sub-word units. In addition, both the character based and PASM based CTC/attention ASR models used four other output units, which included the apostrophe, space, blank and sos/eos tokens. For the HKUST task, we adopted a 3655-sized output set as the output units of the HKUST ASR model. The output units include 3623 Chinese characters, 26 English characters, as well as 6 non-language symbols denoting laughter, noise, vocalized noise, blank, unknown-character and sos/eos. §.§.§ CTC/attention Encoder and Decoder We used VGGNet-style CNN-blocks as the CTC/attention encoder front-end. Each CNN-block contained two CNN layers and one max-pooling layer. The CNN layers used the ReLU activation function. The first CNN-block had 64 filter kernels for each CNN layer, and the second CNN-block had 128 filter kernels for each CNN layer. All the kernel sizes are 3. The max-pooling layer had a 2×2 pooling window with a 2×2 stride. Following the CNN-blocks, a multi-layer BLSTM was used for offline systems and a multi-layer Uni-LSTM or LC-BLSTM was used for online systems. For the LibriSpeech task, we used a 5-layer BLSTM with 1024 cells for VGG-BLSTM-Large (VBL) and VGG-LC-BLSTM-Large (LC-VBL), a 3-layer BLSTM with 640 cells for VGG-BLSTM-Small (VBS) and VGG-LC-BLSTM-Small (LC-VBS), a 10-layer Uni-LSTM with 1024 cells for VGG-Uni-LSTM-Large (VUL) and a 6-layer Uni-LSTM with 640 cells for VGG-Uni-LSTM-Small (VUS).
For the HKUST task, we used a 3-layer BLSTM with 1024 cells for VBL and LC-VBL, a 3-layer BLSTM with 640 cells for VBS and LC-VBS, a 6-layer Uni-LSTM with 1024 cells for VUL and a 6-layer Uni-LSTM with 640 cells for VUS. After that, we used a 1-layer and a 2-layer fully-connected neural network for large and small models, respectively. The output frame-rate of all models was 25 Hz. The CTC/attention decoder was a 2-layer Uni-LSTM with 1024 cells and 640 cells for large and small models, respectively. The parameter sizes of (LC-)VBL, (LC-)VBS, VUL and VUS for the LibriSpeech task are 151M, 46M, 111M and 35M, respectively. The parameter sizes of (LC-)VBL, (LC-)VBS, VUL and VUS for the HKUST task are 112M, 53M, 89M and 42M, respectively. The training and decoding parameters are listed in table <ref>. §.§.§ External Language Model We followed the language model configurations in ESPnet <cit.>. We applied a 1-layer Uni-LSTM with 1024 cells for the LibriSpeech task and a 2-layer Uni-LSTM with 640 cells for the HKUST task. We trained the RNN language models separately from the CTC/attention ASR models. For the LibriSpeech task, the text corpus contained the LibriSpeech training transcripts and 14500 public domain books. For the HKUST task, the text corpus only consisted of the training transcripts. It should be noted that, for the LibriSpeech task, we implemented the multi-level language model decoding method <cit.> for both the character based and PASM based joint CTC/attention decoding. §.§ Results of Streaming Attention First, we compare various streaming attention methods by observing their attention weights during training. From figure <ref>, we can see that HMA and MoChA failed to learn monotonic alignments on LibriSpeech. We tried different initialization biases and trained for more than 5 epochs, but we did not observe convergence of HMA or MoChA. As mentioned in section 3-A, MoChA could have handled input representation vectors at a longer distance if the initialization of the bias r in (<ref>) were smaller. From figure <ref>(a)-(d), we can see that MoChA tended to attend over a broader encoder span, but with the sacrifice that the attention weights vanished faster as the output index increased, which is in line with our analysis in section 3-A and other related work <cit.>. Because 80% of the utterances in LibriSpeech have more than 100 outputs when using PASM units, we had to constrain the output sequence length by segmenting the utterances in accordance with a DNN/HMM system. In our experiments, the output sequence lengths of all utterances were within 25 when training and testing HMA and MoChA on LibriSpeech. From figure <ref>(g)-(i), we can see that MoChA did learn monotonic alignments on the segmented utterances. Different from LibriSpeech, 85% of the utterances in HKUST have fewer than 20 characters and the longest output sequence length is 64. Therefore, we observed the convergence of HMA and MoChA on HKUST. However, our proposed streaming attention methods, i.e., sMoChA and MTA, overcome the difficulty of modeling long utterances. Figure <ref>(e)-(f) show that sMoChA and MTA were stable and learned nearly monotonic alignments with the suggested initial bias r=-4 in <cit.>. Next, we compare the performance of the mentioned attention methods on LibriSpeech and HKUST. §.§.§ LibriSpeech Table <ref> lists the experimental results on the LibriSpeech task. Word error rate (WER) was used to measure the ASR accuracy. First, we trained the offline baselines by using both the English character output units and the PASM output units.
In the first line, we can see that the PASM based models outperformed the character based models, and the VBL models outperformed the VBS models. The results proved that pronunciation information contributed to the English ASR tasks, and that larger models could learn better. Second, we evaluated HMA, MoChA and the proposed sMoChA in lines 2-6. The results show that sMoChA outperformed HMA and MoChA. Furthermore, we explored how the chunk width and chunk order would affect the performance of sMoChA. When the chunk order was set to 1, the character-level models performed better with a smaller chunk width (chunk width = 1), while the PASM-based model performed better with a larger chunk width (chunk width = 3). This implied that a broader context was helpful for PASM units, which were always longer than a single English character. As mentioned in section 4-A, a higher decoding chunk order can reduce the mismatch phenomenon between sMoChA training and decoding, and thus stabilize and improve the decoding accuracy of sMoChA. We can see that, by increasing the chunk order from 1 to ∞, the sMoChA-based model achieved consistent ASR accuracy improvement. The accuracy improvements of the VBS models were larger than those of the VBL models. This might be because the VBS models were much smaller and more vulnerable to the mismatch between the training and decoding stages. Finally, we evaluated the proposed MTA method. MTA does not have the concept of chunk width or chunk order; its training and decoding are designed to be matched. Compared with the chunk order = ∞ decoding mode of sMoChA, MTA has a lower computational cost. From table <ref>, we can see that, no matter what model sizes or what output units we tried, MTA performed almost the best among the streaming attention methods, except for the VBS model with PASM units on the test-clean test set of the LibriSpeech task. Compared with the LoAA based offline baselines, the MTA based models achieved comparable performance. §.§.§ HKUST Table <ref> lists the experimental results of the mentioned attention methods on the HKUST task. We used Chinese characters as the output units and the character error rate (CER) to measure the ASR accuracy. We conducted more experiments thanks to the relatively small size of the HKUST task. The conclusions drawn on the HKUST task were similar to those on the LibriSpeech task: 1. sMoChA outperformed HMA and MoChA; 2. higher chunk order decoding methods always performed better; 3. the performance of MTA was the best among the mentioned streaming attention methods, and was the same as that of LoAA. Taking the results of both the LibriSpeech and HKUST tasks into consideration, we can conclude that our proposed streaming MTA method will not cause ASR accuracy degradation and is robust across different languages. §.§ Results of T-CTC Prefix Score In this section, we examined how well the T-CTC prefix score approximated the offline CTC prefix score. Table <ref> lists the results of the T-CTC prefix score on both the LibriSpeech and HKUST tasks. We used PASM units on the LibriSpeech task and Chinese characters on the HKUST task. The first line shows the results for the offline baselines. The second line shows the results of the MTA based models with the CTC prefix score. For the models in the third line, we used MTA to stream the attention, T-CTC to stream the CTC prefix score calculation and the DWJD algorithm to stream the joint CTC/attention decoding. From table <ref>, we can see that, with all the above streaming methods applied, the proposed method only caused very slight degradation in ASR accuracy.
§.§ Results of Low-latency Encoder In this section, we applied the low-latency encoders, MTA, T-CTC prefix scores and the DWJD algorithm to our online CTC/attention end-to-end models. We chose the Uni-LSTM and LC-BLSTM as low-latency encoders for comparison. The Uni-LSTM only needs historical information, while the LC-BLSTM exploits finite future information. The LC-BLSTM operates on segmented frame chunks. Each chunk contains N_c current frames and N_r future frames, and the LC-BLSTM hops N_c frames at each time-step. The latency of the LC-BLSTM is limited to N_r <cit.>. It should be noted that we segmented the input frames according to N_c and N_r, and the padding of the VGG-style CNN-blocks would not bring about extra latency. The results of the different encoders are listed in table <ref>. First, for the comparison in lines 2-3, we found that the proposed online CTC/attention based models achieved consistent accuracy improvements over the simpler Uni-LSTM end-to-end models trained with the CTC objective, which was similar to the findings in other end-to-end systems <cit.>. Second, for the comparisons in lines 3-6, we can see that the LC-BLSTM based encoder is superior to the Uni-LSTM based encoder for end-to-end ASR systems, which was also reported in <cit.>. Third, in the LC-BLSTM encoder, we got significant improvement by increasing N_r but minor improvement by increasing N_c. This is because the LC-BLSTM, which carries the historical information across chunks, can make use of the historical information no matter how large N_c is, but it only obtains future context from the N_r future frames. Finally, the proposed online hybrid CTC/attention model with 320 ms latency obtained 4.2% and 13.3% WERs on the test-clean and test-other sets, respectively. On the HKUST task, the online CTC/attention model obtained 29.4% and 27.8% CERs on the development and evaluation sets, respectively. Compared with the offline baselines, our online end-to-end models exhibited acceptable performance degradation. More importantly, our online models only need limited future context, which significantly reduces the latency from the utterance level to the frame level and thus improves the user experience in human-computer interaction. §.§ Results of Decoding Speed Decoding speed is one concern when deploying online ASR systems. In this section, we evaluated the proposed online hybrid CTC/attention system, where the decoding was performed with beam sizes 1, 3, 5, 10 and 20. We used the VBL and LC-VBL encoders, and chose 64 and 32 for N_c and N_r of the LC-VBL encoder, respectively. The decoding time and CERs/WERs were measured on a computer with an Intel(R) Xeon(R) Gold 6226 CPU at 2.70 GHz. To further improve the decoding speed, we quantized the neural network activations and weights to 16-bit integers and the biases to 32-bit integers <cit.>. Figures <ref> and <ref> display the relationships between the WERs/CERs and the real time factor (RTF) on LibriSpeech and HKUST, respectively. Comparing the black, blue and green dashed lines in figures <ref> and <ref>, MTA and the T-CTC prefix scores indeed accelerated the decoding, especially when the beam size was larger than 5. This is because MTA and the T-CTC prefix scores have a lower computational cost. Furthermore, in online human-computer interaction scenarios, we have to take the speech time into consideration when evaluating the real decoding time. To simulate speaking, we sent 10 audio frames to the ASR systems every 100 ms.
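The chunk segmentation and the simulated streaming input described above can be sketched as follows. The recognizer interface (`accept_frames`, `finalize`) is hypothetical and only illustrates how frames would be pushed 10 at a time every 100 ms, while `lc_blstm_chunks` shows the N_c/N_r segmentation with a hop of N_c; the default values mirror the 64/32 setting used for the LC-VBL encoder.

```python
import time

def lc_blstm_chunks(num_frames, n_c=64, n_r=32):
    """Yield (start, end) index ranges of LC-BLSTM chunks: N_c current frames
    plus N_r look-ahead frames, hopping by N_c (a sketch)."""
    start = 0
    while start < num_frames:
        yield start, min(start + n_c + n_r, num_frames)
        start += n_c

def simulate_streaming(frames, recognizer, frames_per_push=10, interval_s=0.1):
    """Feed `frames` to a hypothetical online recognizer 10 frames every 100 ms,
    mimicking the real-time evaluation described above."""
    for i in range(0, len(frames), frames_per_push):
        recognizer.accept_frames(frames[i:i + frames_per_push])  # hypothetical API
        time.sleep(interval_s)                                   # pace the input like real speech
    return recognizer.finalize()                                 # hypothetical API
```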
Under this scenario, the offline ASR system will be suspended until all the audio frames have been received, while the online one can process the speech as it receives audio frames. From figures <ref> and <ref> we can see that the online systems (red solid lines) were consistently faster than the offline baselines (black solid lines). Even though the model sizes of the online and offline systems were almost the same, the online systems were around 1.5x faster. The decoding acceleration is due to the fact that our online systems can process speech as the user begins speaking, which not only reduces the decoding latency but also improves the CPU utilization. § CONCLUSION This paper proposes an online hybrid CTC/attention end-to-end ASR architecture, which can decode speech in a low-latency, real-time manner. This architecture can be trained from scratch in an end-to-end way. Compared with the simpler Uni-LSTM CTC online end-to-end ASR systems, the proposed online CTC/attention based systems achieved consistent accuracy improvements. Compared with the offline CTC/attention baselines, the proposed online systems achieved comparable performance. Furthermore, the proposed online systems have advantages over the offline baselines in both decoding latency and decoding speed. In terms of decoding latency, the proposed online systems require limited future audio frames, while the offline baselines need the full audio. In terms of decoding speed, the proposed online systems can recognize from streaming audio as the user begins speaking, which improves the CPU utilization and thus accelerates the decoding by around 1.5 times compared with their offline counterparts. In future work, we will explore more parallelizable neural network architectures, because it is difficult to parallelize the training and decoding processes of recurrent neural network based architectures. Besides, our experiments revealed that the proposed MTA, T-CTC prefix score and DWJD algorithms almost did not degrade the recognition accuracy. However, the application of the low-latency encoders, which were the LSTM and LC-BLSTM in this paper, dragged down the recognition accuracy. Another future work is to adopt teacher-student learning approaches to reduce the latency and maintain the recognition accuracy at the same time. § ACKNOWLEDGMENT This work is supported by the National Natural Science Foundation of China (Nos. 11590774, 11590772, 11590770). Haoran Miao received the B.S. degree in physics from Nanjing University, Nanjing, China, in 2017. He is currently working toward the Ph.D. degree at the Institute of Acoustics, Chinese Academy of Sciences, Beijing, China. His research interests include automatic speech recognition and deep learning. Gaofeng Cheng received the B.S. degree in applied physics from the Beijing University of Posts and Telecommunications, Beijing, China, in 2014, and the Ph.D. degree in information and signal processing from the University of Chinese Academy of Sciences, Beijing, China, in 2019. He is currently an Assistant Professor with the Speech Acoustics and Content Understanding Laboratory, Chinese Academy of Sciences. His research interests include speech recognition and deep learning. Pengyuan Zhang received the Ph.D. degree in information and signal processing from the Institute of Acoustics, Chinese Academy of Sciences, Beijing, China, in 2007.
From 2013 to 2014, he was a Research Scholar with the University of Sheffield. He is currently a Professor with the Speech Acoustics and Content Understanding Laboratory, Chinese Academy of Sciences. His research interests include spontaneous speech recognition, speech synthesis, and acoustic signal detection. [ < g r a p h i c s > ]Yonghong Yan received the B.E. degree in electronic engineering from Tsinghua University, Beijing, China, in 1990, and the Ph.D. degree in computer science and engineering from the Oregon Graduate Institute of Science and Technology, Hillsboro, OR, USA, in 1995. He is currently a Professor with the Speech Acoustics and Content Understanding Laboratory, Chinese Academy of Sciences, Beijing, China. His research interests include speech processing and recognition, language/speaker recognition, and human computer interface.
http://arxiv.org/abs/2307.00485v1
20230702061407
TopicFM+: Boosting Accuracy and Efficiency of Topic-Assisted Feature Matching
[ "Khang Truong Giang", "Soohwan Song", "Sungho Jo" ]
cs.CV
[ "cs.CV" ]
TopicFM+: Boosting Accuracy and Efficiency of Topic-Assisted Feature Matching Khang Truong Giang, Soohwan Song*, Sungho Jo* * Corresponding author. Khang Truong Giang and Sungho Jo are with School of Computing, KAIST, Daejeon 34141, Republic of Korea. Soohwan Song is with Intelligent Robotics Research Division, ETRI, Daejeon 34129, Republic of Korea. (Email addresses: {khangtg,shjo}@kaist.ac.kr, [email protected]). This study tackles the challenge of image matching in difficult scenarios, such as scenes with significant variations or limited texture, with a strong emphasis on computational efficiency. Previous studies have attempted to address this challenge by encoding global scene contexts using Transformers. However, these approaches suffer from high computational costs and may not capture sufficient high-level contextual information, such as structural shapes or semantic instances. Consequently, the encoded features may lack discriminative power in challenging scenes. To overcome these limitations, we propose a novel image-matching method that leverages a topic-modeling strategy to capture high-level contexts in images. Our method represents each image as a multinomial distribution over topics, where each topic represents a latent semantic instance. By incorporating these topics, we can effectively capture comprehensive context information and obtain discriminative and high-quality features. Additionally, our method effectively matches features within corresponding semantic regions by estimating the covisible topics. To enhance the efficiency of feature matching, we have designed a network with a pooling-and-merging attention module. This module reduces computation by employing attention only on fixed-sized topics and small-sized features. Through extensive experiments, we have demonstrated the superiority of our method in challenging scenarios. Specifically, our method significantly reduces computational costs while maintaining higher image-matching accuracy compared to state-of-the-art methods. Image Matching, Camera Pose Estimation, Transformer, Topic Modeling. § INTRODUCTION Image matching involves identifying corresponding pixels between two images at a detailed level. The outcomes of the matching process are crucial for various computer vision tasks, including structure from motion (SfM) <cit.>, simultaneous localization and mapping (SLAM) <cit.>, visual tracking <cit.>, and visual localization <cit.>. Traditional image-matching approaches <cit.> generally consist of several steps: keypoint detection, keypoint description, keypoint matching, and outlier rejection. Typically, sparse keypoints are extracted using handcrafted local features, and matching is done through nearest neighbor search. More recently, convolutional neural networks (CNNs) have been utilized to extract local features <cit.>, surpassing the performance of conventional handcrafted features.
However, these CNN-based methods often face challenges in scenarios with illumination variations, repetitive structures, or low-texture conditions, leading to a significant decline in their performance. To address this problem, alternative methods that do not rely on detectors, detector-free methods, have been suggested <cit.>. These methods enable direct feature matching on dense feature maps without the need for a keypoint detection stage. In particular, some studies <cit.> have utilized Transformers <cit.> to learn robust feature representations. Transformers effectively capture comprehensive contextual information in images by iteratively employing self-attention and cross-attention layers. Consequently, they have achieved state-of-the-art results by generating numerous precise matches, even in cases involving repetitive patterns and textureless images. However, despite the advantages of detector-free methods, they still encounter several challenges that hinder the feature-matching performance: (i) These methods struggle to sufficiently incorporate the global context of a scene, which is crucial for accurate feature matching. Some approaches <cit.> have attempted to implicitly capture global contextual information using Transformers. However, higher-level contexts, such as semantic instances or structural shapes, should be explicitly exploited to learn robust representations. (ii) These methods exhaustively search for features across the entire image area. This can lead to interference from keypoints in non-overlapping regions, resulting in incorrect matches and increased computation time. As a result, their matching performance significantly suffers when there are limited areas of overlap between images. (iii) Existing detector-free methods <cit.> also demand substantial computational resources and extensive memory for dense matching. Consequently, they are limited to low-resolution images or pose challenges in real-time applications. Therefore, minimizing the runtime and GPU memory usage becomes crucial to deploy detector-free methods on devices with memory and computational limitations. (iv) Several studies <cit.> have adopted a coarse-to-fine strategy to handle high-resolution images. This approach involves initially finding matches at a coarse level and refining them at a finer level. However, keypoints identified at the coarse level may lack sufficient information, and they can end up being located in occluded or non-overlapping regions at the fine level, making it difficult to determine accurate matching locations. To address these issues, we propose a robust and efficient method for detector-free feature matching called TopicFM. This method incorporates high-level contextual information from images using a topic modeling strategy <cit.> inspired by data mining techniques. In TopicFM, an image is represented as a multinomial distribution over topics, with each topic representing a latent semantic instance like an object or structural shape. By leveraging this representation, TopicFM performs probabilistic feature matching based on the distribution of the latent topics. Fig. <ref> illustrates the representative topics inferred from the image-matching results. The image regions with the same semantic instance or shape are assigned to the same topic. Through the comprehensive context information captured by the topics, TopicFM can learn discriminative and high-quality features. 
Furthermore, TopicFM effectively matches features within overlapping regions of image pairs by estimating the topics that are covisible between them. This approach resembles the cognitive process of humans, who quickly recognize covisible regions based on semantic information and then search for matching points within those regions. By applying this top-down approach, our method successfully identifies robust and accurate matching points, even in challenging cases involving significant variations in scale and viewpoint. Moreover, to further improve computational performance, we have developed an efficient end-to-end network architecture based on a coarse-to-fine framework. In the coarse-level matching stage, we introduce a pooling-and-merging attention network in TopicFM to streamline computation. This network comprises two crucial steps: context pooling and context merging. The pooling step captures high-level contexts using latent topics, while the merging step utilizes these topics to enhance feature robustness. Both steps have low computational costs as they involve attention layers with small-sized inputs. Unlike other transformer-based methods that require computationally expensive multiplications between large feature matrices, our attention layers only operate on smaller feature matrices or fixed-sized topic matrices. Fig. <ref> illustrates the impressive achievements of our method, which achieves an approximately 50% reduction in both runtime and computational cost while surpassing the state-of-the-art Transformer-based methods in image-matching accuracy. In the fine-level matching stage, we improve upon coarse matches by determining an adaptive feature location rather than relying on a fixed center location <cit.>. To achieve this, we propose a dynamic feature refinement module that employs a self-feature-detector trained using a symmetric epipolar distance loss <cit.>. This loss function does not require explicit ground-truth feature correspondences, allowing the learned network to automatically prioritize easily matchable points, such as peak points or overlapping points, as the keypoints of interest. By utilizing these self-detected keypoints instead of a center-fixing keypoint approach, we observe an overall enhancement in matching accuracy. To ensure efficiency in the fine-level matching process, we design the self-detector network using MLP-Mixer layers, which are more computationally efficient than transformer layers <cit.>. As a result, our coarse-to-fine method produces high-quality matching results while requiring less computation compared to state-of-the-art transformer-based methods <cit.>. The contributions of our study can be summarized as follows: * We introduce a novel feature-matching approach that integrates local context and high-level semantic information into latent features using a topic modeling strategy. By inferring covisible topics, this method achieves accurate dense matches in challenging scenes. * We formulate the topic inference process as a learnable transformer module, enabling interpretability of the matching results, even from a human perspective. * To enhance fine-level matching performance, we propose dynamic feature refinement, which identifies highly matchable features within image patches. This refinement process is trained without explicit reliance on ground-truth matches, allowing the automatic selection of reliable keypoints. * We have designed an efficient end-to-end network that reduces dependence on the Transformer layer. 
The network leverages attention mechanisms operating on fixed-sized latent topics and small-sized image features in the coarse stage, and utilizes an efficient MLP-Mixer network in the fine stage. This model considerably accelerates the processing of image frames. * We extensively evaluate the robustness and efficiency of the proposed method through empirical experiments. Additionally, we demonstrate the interpretability of our topic models. The source code for our method is available at <https://github.com/TruongKhang/TopicFM> A preliminary version of this paper has been previously presented in <cit.>. In this study, we introduce an upgraded and more efficient approach for the coarse-level matching step by utilizing the pooling-and-merging attention module. While the pooling step performs topic inference as described in <cit.>, the merging step enhances the features by directly merging them with the inferred topic embeddings. This merging step eliminates the need for in-topic feature augmentation, as presented in <cit.>. The previous augmentation step involved employing attention layers for all features within a topic, which incurs a higher computation cost compared to the merging step proposed in this paper. Additionally, we introduce a novel dynamic feature refinement network in the fine-level matching step, which is trained in a self-supervised manner relying solely on the ground-truth fundamental matrices. This network enhances the robustness of the matching process at a finer level. Furthermore, we provide comprehensive validation of our methods through additional experiments, evaluations, and detailed analyses. These efforts aim to demonstrate both the effectiveness and efficiency of the proposed approach. § RELATED WORKS §.§ Image Matching Image matching has long been a prominent challenge in the field of computer vision. Typically, two main approaches are employed: detector-based and detector-free feature matching. The detector-based approach involves using traditional algorithms <cit.> or deep CNNs <cit.> to detect a set of keypoints of interest in each input image. These keypoints are then used to extract high-dimensional feature vectors by encoding their contextual information. The context can be represented by image patches <cit.> or implicitly captured through various types of CNNs <cit.>. Once the features of interest are obtained for each image, the matching process is usually performed for a pair of images by computing pairwise similarity between the two sets of features. Subsequently, mutual nearest neighbor search <cit.> or ratio test <cit.> is applied to establish correspondences. However, these methods primarily consider local context information, neglecting the global or geometric connections between features. Consequently, the extracted features are less effective when dealing with challenging cases. To address this limitation, some studies <cit.> have incorporated global context information to improve the distinctiveness of detected features. ContextDesc <cit.> and ALSFeat <cit.> introduced a geometric context encoder using a large patch sampler and a deformable CNN, respectively. SuperGlue <cit.> and SGMNet <cit.> utilized self- and cross-attentions of a graph neural network to integrate global visual and geometric information into local feature representations. 
While these detector-based methods <cit.> are efficient due to sparse matching, their performance heavily relies on the separated keypoint detection step, which still struggles in challenging conditions such as small overlapped regions or large untextured scenes. To address this issue, researchers have introduced detector-free methods <cit.>. These methods have designed end-to-end network architectures that perform image matching in a single forward pass, eliminating the need for separate steps. Instead of extracting sparse feature points, these networks directly process dense feature maps. To efficiently handle the dense features of high-resolution images, many studies have employed a coarse-to-fine strategy. Patch2Pix <cit.> detects patch-level coarse matches in low-resolution images and progressively refines them at higher resolutions. Similarly, several coarse-to-fine methods <cit.> utilize transformers to learn robust and distinctive features. LoFTR <cit.> employs a linear transformer <cit.> to enhance the representation ability of visual descriptors by incorporating global context information. Matchformer <cit.> is an extract-and-match scheme that intuitively combines self- and cross-attention in each stage of the hierarchical encoder. This match-aware encoder reduces the burden on the decoder, resulting in a highly efficient model. QuadTree <cit.> constructed token pyramids similar to quadtree data structures and calculates attention in a step-by-step manner, starting from coarse regions and gradually refining to finer regions. This approach effectively reduces the computational complexity by progressively eliminating irrelevant subdivided rectangular regions. ASpanFormer <cit.> utilized a hierarchical attention framework for feature matching, incorporating attention operations at multiple scales to facilitate both global context understanding and precise matching. The span of attention is dynamically determined, allowing the model to capture both long-range dependencies and fine-grained details in local regions. However, these existing methods <cit.> face inefficiencies when propagating global context information across the entire image region. We argue that the unobserved regions between image pairs are unnecessary and may introduce noise when utilizing transformers for feature learning. Additionally, as previously mentioned, transformers demand substantial computational resources and memory, making them unsuitable for real-time processing or lightweight models. To address these issues, we propose a topic modeling approach that leverages relevant contextual cues for feature learning. Our approach involves classifying features into latent topics and utilizing the distinctive context information encoded within each topic to improve the quality of features. This strategy effectively reduces computation by applying attention layers exclusively on inputs with limited sizes. We demonstrate that our design achieves state-of-the-art performance while being remarkably more efficient than existing methods. §.§ Interpretable Image Matching The interpretability has emerged as a prominent area of research in the field of computer vision models. Its primary objective is to provide explanations for the decisions and predictions made in image recognition <cit.> and deep metric learning <cit.>. In the context of image matching, detector-based methods have been employed to estimate interpretable feature keypoints such as corners, blobs, or ridges. 
However, these detected features fail to capture the spatial or semantic structures present in the images. On the other hand, existing end-to-end methods rely on CNNs <cit.> or transformers to extract dense feature maps using either local or global context <cit.>. Nonetheless, these approaches do not explicitly convey the specific details of the observed context, limiting the interpretability of their outcomes. In contrast, the human cognitive system rapidly identifies covisible regions by leveraging high-level contextual information, such as objects or structures, and subsequently determines the matching points within these regions. Drawing inspiration from this cognitive process, several studies <cit.> have proposed image-matching methods that explicitly extract covisible regions. These methods predict covisible regions of two images, such as overlapping bounding boxes <cit.> or overlapping masks <cit.>, and conduct feature matching exclusively within these regions. However, these approaches entail a significant computational overhead in finding covisible regions and lack sufficient interpretability for users. Moreover, inaccurate identification of covisible regions can substantially impact the quality of matching as these methods directly perform feature matching within the covisible region. On the other hand, our method efficiently categorizes local structures within images into distinct topics and implicitly harnesses the information within these topics to enhance the features. Instead of employing binary labels, we model an image as a multinomial distribution of topics and subsequently perform probabilistic feature matching. Additionally, our method ensures interpretable matching by selecting crucial topics within the covisible regions of the two images. To the best of our knowledge, our method is the first to introduce a high level of interpretability to the task of image matching. §.§ Transformers with Attention The Transformer architecture gained popularity in various vision tasks during the early 2020s and demonstrated state-of-the-art performance in image classification tasks <cit.>. However, when it comes to feature matching for camera pose estimation, there is a significant difference compared to classification. In feature matching, the spatial location of features plays a crucial role in capturing precise context information and learning distinctive features across images. Although the Transformer layer uses positional encoding to incorporate the spatial information of features, it cannot effectively leverage this information to observe adequate context for feature learning. In fact, most existing Transformer-based feature-matching methods <cit.> still rely on a deep CNN backbone to extract the initial features. This raises the question of whether extensive self/cross attention between the features is necessary for the feature-matching task. Our proposed method addresses this question by applying attention only to the small-length features and the fixed-sized latent topics, demonstrating that such an approach is sufficient for achieving effective feature matching. § PROPOSED METHOD The goal of image matching is to establish correspondences between features extracted from two images, represented as feature maps {F^A,F^B} derived from images {I^A,I^B}. Specifically, it aims to determine a set of feature correspondences {(f_i^A,f_j^B)}, where f_i^A ∈ F^A and f_j^B ∈ F^B. 
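To make this input-output contract concrete, the following is a minimal Python/PyTorch sketch of a correspondence extractor; the function name and the simple mutual-nearest-neighbor rule are illustrative placeholders for the matcher developed in the remainder of this section, not our actual implementation.

import torch

def mutual_nn_matches(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    # feat_a: (N_A, C) features of image A; feat_b: (N_B, C) features of image B.
    sim = feat_a @ feat_b.t()                     # (N_A, N_B) pairwise similarity scores
    nn_ab = sim.argmax(dim=1)                     # best candidate j in B for every i in A
    nn_ba = sim.argmax(dim=0)                     # best candidate i in A for every j in B
    i = torch.arange(feat_a.shape[0], device=feat_a.device)
    mutual = nn_ba[nn_ab] == i                    # keep pairs that agree in both directions
    return torch.stack([i[mutual], nn_ab[mutual]], dim=1)   # (M, 2) index pairs

The rest of this section replaces this naive rule with the topic-assisted coarse-to-fine matcher.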
To achieve accurate pixel-level matching while ensuring computational efficiency, we have adopted a coarse-to-fine approach. This approach involves initially estimating coarse matches at a lower image resolution and then refining them at a higher resolution. The overall architecture, depicted in Fig. <ref>, comprises three main components: feature extraction, coarse-level matching (Section <ref>), and fine-level matching (Section <ref>). In the feature extraction stage, we employ a feature pyramid network (FPN) <cit.> to generate multi-scale feature maps. The multi-scale feature extraction network is constructed based on an efficient standard CNN block, which comprises a Conv2D layer, BatchNorm layer, and GELU activation layer. This choice of design improves computational efficiency significantly compared to using a ResNet block <cit.>. For the coarse and fine stages, we choose feature maps with resolutions of H/8×W/8 and H/2×W/2, respectively, denoted as {F_c^A,F_c^B} and {F_f^A, F_f^B}, where H and W represent the height and width of the original image. In the coarse-level matching stage, we introduce topic-assisted feature matching to estimate the matching distribution P(M) between the coarse feature maps F_c^A and F_c^B. This feature matching module is designed as a pooling-and-merging attention network, which shares similarities with the squeeze-and-excitation concept <cit.>. This network consists of two essential steps: context pooling and context merging. In the context pooling step, we use cross-attention to infer high-level contexts, topics, for each image. Then, in the context merging step, we leverage these inferred topics to enhance the robustness of features. The fixed size of the latent topics allows us to achieve both accurate estimation of the matching distribution P(M) and computational efficiency. With the computed distribution, we determine a set of coarse matches {(x,y)} (x ∈ F_c^A,y ∈ F_c^B) by extracting feature pairs with high matching confidence and satisfying a mutual nearest neighbor condition. In the fine-level matching stage, we upscale the coarse matching coordinates to the high-resolution feature maps {F_f^A,F_f^B}. Then, we crop image patches {(P_x^A,P_y^B)} centered around these upscaled coordinates. To detect the final matches {(x, y)}, where x∈ P_x^A and y∈ P_y^B, we propose a dynamic refinement network that operates on these cropped patches. Unlike existing methods <cit.> that use a fixed center location for each patch, our approach identifies more reliable keypoint locations by estimating a score map for each patch. Importantly, this network is trained using a self-supervised learning framework, eliminating the need for ground-truth matching pairs. This enables the detection of reliable keypoints and enhances the adaptability and robustness of the feature refinement process. §.§ Topic-Assisted Feature Matching Given a pair of feature maps {F_c^A,F_c^B} extracted from an image pair {I^A,I^B} at the coarse level, our objective is to estimate the matching distribution M={m_ij} <cit.>: P(M | F_c^A,F_c^B ) = ∏_m_ij∈ M P ( m_ij |F_c^A,F_c^B ) where the binary random variable m_ij indicates that the i^th feature F_c,i^A is matched to the j^th feature F_c,j^B. Previous methods <cit.> have directly employed attention-based networks on all feature tokens to estimate the distribution of M, followed by the use of Dual-Softmax <cit.> or optimal transport layers <cit.>. 
However, this approach leads to high computational overhead, especially when dealing with a large number of features. Additionally, applying attention to all features can introduce noise from non-overlapping regions, resulting in degraded performance. To tackle these challenges, we incorporate latent topics to encode sufficient and semantic context information into the features. These latent topics capture high-level contexts, such as objects or structural shapes, in the images. By understanding the underlying topics within each image, we can improve the matching probability while mitigating the computational complexity associated with existing methods. To implement this idea, we take inspiration from topic modeling techniques used in data mining <cit.>, where each document in a corpus consists of K latent topics, and each topic is characterized by a collection of tokens that share the same semantic information. In the context of feature matching, we can consider each image as a document and each feature point as a word. Consequently, the image can be represented as a multinomial distribution of K topics, where each topic encompasses a set of feature "tokens" that capture high-level contextual information. Our goal is to estimate a distribution over topics (referred to as topic distribution) for each feature point. We denote the topic indicator of feature f_i as z_i, where z_i belongs to the set {1,2,…,K}, and the topic distribution for f_i as θ_i ∈ℝ^K, where θ_i,k represents the probability p(z_i=k). To incorporate latent topics into Eq. <ref>, we introduce the random variable z_ij, which represents the assigned topic for a pair of feature points {F_c,i^A, F_c,j^B}. Here, z_ij can take values from the set 𝒵={1,2,…,K,NaN}. If z_ij=k, it indicates that F_c,i^A and F_c,j^B are co-assigned to topic k. Conversely, if z_ij is NaN, the feature pair does not belong to the same topic and is considered highly unmatchable. Eq. <ref> can be rewritten as follows: log P(M | F_c^A, F_c^B ) = ∑_m_ij∈ Mlog P (m_ij| F_c^A, F_c^B ) = ∑_m_ij∈ Mlog∑_z_ij∈𝒵 P (m_ij, z_ij| F_c^A, F_c^B ) To compute Eq. <ref>, we employed an evidence lower bound (ELBO) function: ℒ_ELBO = ∑_m_ij∈ M∑_z_ij∈𝒵 P(z_ij| F_c ) log P(m_ij| z_ij, F_c ) = ∑_m_ij E_p(z_ij)log P(m_ij| z_ij, F_c) where P(m_ij |z_ij,F_c^A,F_c^B) represents the conditional matching probability of the match m_ij given the co-assigned topic z_ij, which contains semantic contextual information. Eq. <ref> can be computed using Monte-Carlo (MC) sampling as follows: ℒ_ELBO = ∑_m_ij∈ M1/S∑_s=1^S log P(m_ij| z_ij^(s), F_c^A, F_c^B ) z_ij^(s)∼ P(z_ij| F_c^A, F_c^B ) Based on Eqs. <ref> and <ref>, our objective is to infer the topic distribution P(z_ij |F_c^A,F_c^B) and then estimate the topic-assisted matching distribution P(m_ij |z_ij^(s),F_c^A,F_c^B). The topic inference step involves grouping features that share similar global contexts into topics and representing these contexts using topic embedding vectors. This process is referred to as context pooling. Subsequently, we utilize the inferred topics to enhance the features, where features with similar topic distributions are represented more closely. We perform topic-assisted feature augmentation to incorporate topic information into the features. This step can be understood as context merging. After obtaining the updated features, we calculate the matching probabilities to extract coarse correspondences. In the following subsections, we will provide a comprehensive and detailed explanation of this process.. 
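Before detailing each step, the sketch below shows how context pooling, context merging, and match estimation fit together. The module names, the number of attention heads, and the choice to pool topics from the concatenated feature maps of the pair are illustrative assumptions; the precise formulations follow in the subsections below.

import torch
import torch.nn as nn

class CoarseTopicMatcher(nn.Module):
    # Schematic sketch only; hyperparameters are placeholders.
    def __init__(self, dim: int = 256, num_topics: int = 100, heads: int = 8):
        super().__init__()
        self.topics = nn.Parameter(torch.randn(num_topics, dim))          # global topic embeddings T
        self.pool = nn.MultiheadAttention(dim, heads, batch_first=True)   # context pooling
        self.merge = nn.MultiheadAttention(dim, heads, batch_first=True)  # context merging

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a: (1, N_A, D) and feat_b: (1, N_B, D) coarse-level features.
        feats = torch.cat([feat_a, feat_b], dim=1)
        # Context pooling: update the topic embeddings by attending over the features.
        topics, _ = self.pool(self.topics.unsqueeze(0), feats, feats)
        # Context merging: enrich each feature with the pooled topic context.
        fa, _ = self.merge(feat_a, topics, topics)
        fb, _ = self.merge(feat_b, topics, topics)
        return fa, fb, topics   # matching probabilities are computed from (fa, fb) afterwards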
§.§.§ Topic Inference – Context Pooling The distribution of z_ij is inferred by decomposing it into two separate distributions, z_i and z_j. This decomposition is represented as follows: P(z_ij = k | F_c) = P(z_i = k | F_c^A ) P(z_j = k | F_c^B) = θ_i,k^A θ_j,k^B where θ_i^A and θ_j^B are the topic distributions of feature F_c,i^A and F_c,j^B, respectively. Eq. <ref> demonstrates that the distribution of the topic co-assignment can be estimated by individually inferring the topic distribution for each feature point. To perform individual topic inference, we utilized cross-attention. We introduced a learnable global representation of topic k, denoted as T_k ∈ℝ^D. Through cross-attention layers, we estimated the local representation T conditioned on the coarse feature map F_c as follows: T_k = (T_k, F_c) where (T_k,F_c) represents an attention layer with queries T_k, keys F_c, and values F_c. Eq. <ref> serves two purposes: i) categorizing different spatial structures or objects within the feature map F_c into distinct topics, and ii) representing these structures with updated topic embeddings T. Geometrically, T_k can be considered as the centroid of the shapes and semantic structures contained in topic k. Next, we applied the Softmax layer to estimate the topic distribution θ_i for each feature f_i ∈ F_c, as follows: θ_i, k = exp(⟨T̂_k, F_i ⟩)/∑_h=1^K exp(⟨T̂_h, F_i ⟩) where ⟨.,.⟩ represents the dot-product operator. After performing individual topic inference in Eq. <ref>, we computed the probability of co-assigning the feature pair {F_c,i^A, F_c,j^B} to topic k following Eq. <ref>. However, obtaining the explicit topic label k for each feature pair during training presents a challenge. Therefore, we defined the probability of co-assigning the feature pair to an implicit topic as follows: P(z_ij∈{1…K}| F_c^A,F_c^B) = ∑_k=1^K θ_i,k^A θ_j,k^B = ⟨θ_i^A, θ_j^B⟩ Additionally, the probability of the feature pair not belonging to the same topic was calculated as follows: P(z_ij = NaN | .) = 1 - P(z_ij∈{1...K}| .) = 1 - ⟨θ_i^A, θ_j^B⟩ During training, given the ground-truth feature pairs, Eqs. <ref> and <ref> are considered topic-matching formulas used to train the latent topics. §.§.§ Topic-Assisted Feature Augmentation – Context Merging After the topic inference step, topic labels were sampled for each feature f_i based on the estimated topic distribution θ_i. The features were then clustered into their respective topics. We denote the sets of features belonging to topic k = z_ij^(s) as F_c^A,k⊂ F_c^A and F_c^B,k⊂ F_c^B. These features capture the high-level contexts associated with topic k. Then, self/cross-attention layers were applied to encode those contexts into the features: F_c^A, k = ( F_c^A, k, F_c^A, k), F_c^B, k = ( F_c^B, k, F_c^B, k) F_c^A, k = ( F_c^A, k, F_c^B, k), F_c^B, k = ( F_c^B, k, F_c^A, k) We selectively focused on co-visible topics between the image pair for feature augmentation to improve computational efficiency. Co-visible topics were determined by comparing the topic distributions of the two images. The topic distribution of an image is estimated by integrating the distributions of all features: θ_k^A ∝∑_i=1^|F_c^A|θ_i,k^A, θ_k^B ∝∑_j=1^|F_c^B|θ_j,k^B where ∝ denotes the normalization operator. The co-visible probability was then computed by multiplying the two image-level topic distributions, denoted as θ^covis=θ^A θ^B. Finally, the topics with the highest probabilities were selected for feature augmentation. Efficiency Improvement Strategy. 
Although the feature-augmentation algorithm above is computationally efficient compared to Transformer-based methods <cit.>, it encounters a bottleneck when most of the features are assigned to a single topic. This limitation arises from large textureless scenes or noise during topic inference. To address this issue and further enhance efficiency, we propose an upgraded version of feature augmentation. In this upgraded approach, we eliminated the topic sampling and the in-topic self-/cross-attention (Eq. <ref>). Instead, after topic inference, the features are merged directly with the context-aware topic embeddings T using cross-attention layers, as follows: F_c = (F_c, T) where F_c are the enhanced features after this context merging step. It is observed that the attention layer in Eq. <ref> estimated the amount of context information received for each feature point F_c,i as follows: E_i[T] = ∑_k=1^K θ_i,kT_k where θ_i is the topic distribution of feature F_c,i estimated in Eq. <ref>. Eq. <ref> indicates that if two features have a similar topic distribution θ (i.e., belong to the same topic), they receive the same amount of context information and are highly matchable. This demonstrates the effective assistance of inferred topics in learning distinctive features across images. Notably, Eq. <ref> updates the features from fixed-sized topic matrix T, in which the number of topics K is much smaller than the number of features assigned to topic k (F_c^A,k, F_c^B,k).Therefore, using Eq. <ref> for feature augmentation is more efficient than using Eq. <ref>. However, we find that Eq. <ref> slightly decreases the robustness of features compared to Eq. <ref>, leading to a minor reduction in performance, as described in the ablation study (Section <ref>). §.§.§ Coarse Match Estimation After the context merging step, let F_c^A and F_c^B be the augmented versions of F_c^A and F_c^B, respectively. The dual-softmax operation <cit.> was applied to compute the matching probabilities as follows: P(m_ij| z_ij^(s) = k, F_c) = (⟨F_c,i^A, F_c,j^B ⟩) Using Eq. <ref>, we obtain a matching confidence matrix P_c ∈ [0, 1]^N^A × N^B, where N^A and N^B represent the lengths of F_c^A and F_c^B, respectively. Finally, a confidence threshold τ and a mutual nearest neighbor search <cit.> were applied to extract coarse matches. §.§.§ Training Latent Topics We devised an effective procedure for learning latent topics. To mitigate topic overfitting, we employed a dropout layer on the initial topic embeddings T before the context pooling step. This regularization technique prevents the dominance of a few topics during training and prediction. Then, we utilized the topic-matching formulas mentioned in Eqs. <ref> and <ref>. For a given ground truth matching pair (i,j) at the coarse stage, we enforced the feature vectors (f_i^A,f_j^B) to belong to the same topic. To achieve this, the negative log-likelihood of Eq. <ref> was calculated as follows: ℒ_ij^m = -log( (θ_i^A)^T θ_j^B ) We also introduced a non-matching loss to prevent assigning all features to a single topic. For each ground truth match (i,j), we sampled N unmatched pairs {(i,n)}_n=1^N and defined the loss using Eq. <ref>: ℒ_in^u = -log( 1 - (θ_i^A)^T θ_n^B ) The final loss, known as the topic matching loss, combines the two losses above as follows: ℒ_c^topic = 1/|M_c^gt|∑_(i,j) ∈ M_c^gtℒ_ij^m + 1/|N * M_c^gt|∑_(i, n)ℒ_in^u where M_c^gt represents the number of ground truth matching pairs. 
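As a concrete reference for the coarse match extraction described above, the sketch below applies the dual-softmax, the confidence threshold τ, and the mutual-nearest-neighbor check to a pair of augmented feature maps; the temperature and threshold values are placeholders rather than the exact settings of our implementation.

import torch

def extract_coarse_matches(fa: torch.Tensor, fb: torch.Tensor,
                           tau: float = 0.2, temperature: float = 0.1):
    # fa: (N_A, C), fb: (N_B, C) augmented coarse-level features.
    sim = (fa @ fb.t()) / temperature
    prob = sim.softmax(dim=1) * sim.softmax(dim=0)   # dual-softmax confidence matrix P_c
    conf, nn_ab = prob.max(dim=1)                    # best candidate in B for every feature in A
    nn_ba = prob.argmax(dim=0)                       # best candidate in A for every feature in B
    i = torch.arange(fa.shape[0], device=fa.device)
    keep = (conf > tau) & (nn_ba[nn_ab] == i)        # confidence threshold + mutual nearest neighbor
    return torch.stack([i[keep], nn_ab[keep]], dim=1), conf[keep]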
§.§ Dynamic Feature Refinement Let (P_x^A,P_y^B) be a pair of feature patches cropped at a coarse match (x,y), where x and y are the center coordinates of P_x^A and P_y^B, respectively. We propose a dynamic refinement network to estimate a refined match (x, y) at the finer stage. Existing methods <cit.> usually fix the first keypoint (i.e., x = x) and then employ an attention network to determine the location of the second keypoint y with the highest similarity score. However, this approach can degrade the matching accuracy when the center location x belongs to a flat region or a non-overlapping region of two patches. To address this issue, the refinement network estimates a score map for patch P_x^A, allowing it to detect a more reliable location as the keypoint of interest. First, we employed two MLP-Mixer blocks to transform the features of the two patches: {P_x^A, P_y^B} = ({ P_x^A, P_y^B }) The MLP-Mixer block consists of a channel-mixing block and a token-mixing block, each employing a multi-layer perceptron (MLP) and a LayerNorm layer. The channel-mixing block operates on each feature token independently, extracting new features by mixing channel information. On the other hand, the token-mixing block operates on each channel, facilitating communication between different spatial locations. Compared to a standard Transformer block, MLP-Mixer is more efficient <cit.>. The detailed design of the MLP-Mixer block is illustrated in Fig. <ref>. Next, we designed a detector network 𝒟, also utilizing the MLP-Mixer blocks, to estimate a score map S_x^A for feature patch P_x^A as: S_x^A = 𝒟(P_x^A), (S_x^A ∈ℝ^N_p, P_x^A ∈ℝ^N_p × C) where N_p is the number of features and C is the number of channels. The score map S_x^A was then converted to a probability heatmap H_x^A using the softmax function with a small temperature t. Finally, we extracted the keypoint of interest x∈ℝ^2 and the corresponding feature descriptor f∈ℝ^C using the expectation operation: x = ∑_n=1^N_p H_x,n G_n, f_x^A = ∑_n=1^N_p H_x,n^A P_x,n^A Here, G ∈ℝ^N_p × 2 is a grid map indicating the spatial coordinates of features. The entire network can be trained in a self-supervised manner, meaning it does not require ground-truth matching pairs. Moreover, it is observed that our network adaptively extracts keypoints by leveraging the information from both patches. As a result, the detected keypoint x is highly matchable, thereby increasing the chance of finding the accurate matching keypoint y. The effectiveness of our approach is demonstrated in Fig. <ref>, where the peak locations within the patches are detected as the refined results. Once we detected the interested keypoint for the first patch P_x^A, we estimated the matching keypoint in the second patch P_y^B. Given the detected feature f_x^A, we computed a heatmap H_y^B to measure the similarity between f_x^A and all feature points in P_y^B. H_y^B = ( ⟨f_x^A, P_y^B ⟩) Finally, the matching keypoint y and the feature descriptor f_y^B were determined using the expectation operator as in Eq. <ref>. §.§ Loss Function The total loss is defined by the weighted sum over two loss components, coarse-level loss ℒ_c and fine-level loss ℒ_f, as follows ℒ = λ_c ℒ_c + λ_f ℒ_f where λ_c and λ_f are constant weights for each loss term, and in this study, we set the weights as λ_c = λ_f = 0.25. 
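Before detailing the individual loss terms, we make the refinement step of the previous subsection concrete. The following sketch implements the soft-argmax extraction: the detector score map of the first patch is turned into a probability heatmap, the expected coordinate and descriptor are taken, and the matching location in the second patch is read off a similarity heatmap. The detector network itself is abstracted away, and the temperature value is a placeholder.

import torch

def refine_match(patch_a: torch.Tensor, patch_b: torch.Tensor,
                 score_a: torch.Tensor, grid: torch.Tensor, t: float = 0.1):
    # patch_a, patch_b: (N_p, C) fine-level features of the two cropped patches;
    # score_a: (N_p,) score map from the self-detector; grid: (N_p, 2) pixel coordinates.
    heat_a = (score_a / t).softmax(dim=0)             # probability heatmap H^A
    kp_a = heat_a @ grid                              # expected keypoint location in patch A
    desc_a = heat_a @ patch_a                         # expected descriptor for that keypoint
    heat_b = ((patch_b @ desc_a) / t).softmax(dim=0)  # similarity heatmap H^B over patch B
    kp_b = heat_b @ grid                              # expected matching location in patch B
    return kp_a, kp_b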
§.§.§ Coarse-Level Loss The coarse-level loss (ℒ_c = ℒ_c^feat + ℒ_c^topic) is comprised of the feature-matching loss ℒ_c^feat and the topic-matching loss ℒ_c^topic, which are used to train both the network parameters and latent topics. The topic-matching loss was described in Section <ref>. The feature-matching loss was computed from the matching probability matrix P_c in Eq. <ref> by using cross-entropy function: ℒ_c^feat = - ∑_(i,j) ∈ M_c^gtlog(P_c(i, j)) §.§.§ Fine-Level Loss We used the symmetric epipolar distance function <cit.> for the fine-level loss. This loss does not require explicit ground-truth matching pairs; therefore, it makes beneficial for training our self-detector network. Given a ground-truth fundamental matrix F ∈ℝ^3 × 3 and the matching coordinate pair (x, y) extracted from our method, the fine-level loss was defined as follows: ℒ_f = 1/M∑_(x, y)x^T F y^2 ( 1/F^T x_0:2^2 + 1/F y_0:2^2) where M represents the number of matching outputs. x, y in Eq. <ref> above were already converted to homogeneous coordinates, and v_0:2 indicates that we only considered the first two elements of vector v. § EXPERIMENTS §.§ Experimental Setups Training Details. The proposed network was implemented in PyTorch, and the PyTorch-Lighting framework was employed for distributed training. The network was trained on the MegaDepth dataset <cit.>, which comprises 196 scenes captured from diverse locations worldwide. The dataset underwent preprocessing to filter out low-quality scenes and divide each scene into sub-scenes based on covisible scores <cit.>. For training, we utilized four GPUs, each with 11GB of memory. A subset of 368 sub-scenes was selected for training. In each training epoch, 50 image pairs were randomly sampled from each sub-scene, resulting in a total of 18,400 samples per epoch. To maintain consistent input sizes during training, the images were resized such that the highest dimension of each image was set to 800 pixels. The training process commenced with an initial learning rate of 0.001 and lasted for 40 epochs, completed in less than two days. In contrast to recent transformer-based models such as LoFTR <cit.>, which require approximately 24GB of memory per GPU and a day of training using 16 GPUs, our training approach is more efficient and well-suited for limited computational resources. During training, the number of topics (K) was fixed at 100, and the confidence threshold (τ) for extracting coarse matches was set to 0.2. Evaluation. We introduced two upgraded model variants: TopicFM-fast and TopicFM+. These models are extensions of our initial model, TopicFM <cit.>. Both variants incorporate dynamic feature refinement in the fine-level matching stage (Section <ref>). The difference between the two variants lies in the implementation of the context merging step (Section <ref>) during the coarse-level matching stage. Specifically, TopicFM-fast utilizes an efficient context-merging strategy (Eq. <ref>), while TopicFM+ estimates co-visible topics and employs in-topic self/cross-attention (Eq. <ref>). We conducted a comprehensive evaluation of our models across multiple tasks and benchmarks, including: i) homography estimation, ii) relative pose estimation, iii) visual localization, and iv) image matching challenge. Additionally, we compared the efficiency of our models with state-of-the-art methods. It is important to note that our evaluation used models trained exclusively on the MegaDepth dataset, without any fine-tuning on other datasets. 
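Before turning to the individual benchmarks, we give a minimal PyTorch-style sketch of the symmetric epipolar distance used as the fine-level loss above; the epsilon term is our own addition for numerical stability, and the coordinate convention follows the equation given earlier.

import torch

def symmetric_epipolar_loss(x: torch.Tensor, y: torch.Tensor, F: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    # x, y: (M, 2) matched pixel coordinates in images A and B; F: (3, 3) ground-truth
    # fundamental matrix satisfying x^T F y ~ 0 for correct matches.
    ones = torch.ones(x.shape[0], 1, dtype=x.dtype, device=x.device)
    xh = torch.cat([x, ones], dim=1)               # homogeneous points in image A
    yh = torch.cat([y, ones], dim=1)               # homogeneous points in image B
    Fy = yh @ F.t()                                # rows hold F y for each match
    Ftx = xh @ F                                   # rows hold F^T x for each match
    algebraic = (xh * Fy).sum(dim=1) ** 2          # (x^T F y)^2
    d = algebraic * (1.0 / (Ftx[:, :2].pow(2).sum(dim=1) + eps)
                     + 1.0 / (Fy[:, :2].pow(2).sum(dim=1) + eps))
    return d.mean()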
In the following sections, we will provide a detailed description of our evaluation process. §.§ Homography Estimation We utilized the HPatches dataset <cit.> for evaluating this task. The HPatches dataset consists of 108 sequences, each containing one reference image and five target images. These image pairs were captured under varying illuminations and viewpoints, and ground-truth homography matrices between the reference and target images were provided. In our evaluation, we predicted feature correspondences for each image pair and estimated the homography matrix using OpenCV libraries. Previous image-matching methods have also conducted experiments on the HPatches dataset, although with different setups <cit.>. We performed two versions of the evaluation, one based on LoFTR <cit.> and the other on Patch2Pix <cit.>. In the first version, during the feature-matching step, we resized the image’s lowest dimension to 480. We selected an average of 1000 matches for each image pair to estimate the homography. In the second version, we resized the image’s highest dimension to 1024 and used all output correspondences for homography estimation. To assess the accuracy of the estimated homographies, we first warped the four corners of the first image to the second image based on the estimated and ground-truth homographies. We then computed the corner error (ϵ) between the two warped versions <cit.>. In the first evaluation, we measured the corner error using the AUC metric with thresholds of 3, 5, and 10 pixels <cit.>. In the second evaluation, we calculated the percentage of correct estimates when the error (ϵ) was smaller than 1, 3, and 5 pixels. The results are presented in Table <ref>, which illustrates the significant accuracy improvements of the proposed models, TopicFM-fast and TopicFM+. As shown in Table <ref> (left), both models outperformed the transformer-based method LoFTR <cit.> and the recent dense matching method PDC-Net+ <cit.> by a substantial margin. In Table <ref> (right), our models achieved the best overall performance, surpassing both detector-based and detector-free methods. It is worth noting that, under the viewpoint-changing condition, only the SP+ClusterGNN baseline exhibited competitive performance. This method combines the robust keypoint detector SuperPoint (SP) <cit.> with the attention-based feature-matching model ClusterGNN <cit.>. It is important to mention that both SP and ClusterGNN were trained independently on various datasets, whereas our models were trained end-to-end on MegaDepth, demonstrating the robustness of our proposed approach. §.§ Relative Pose Estimation Feature correspondences are commonly used for estimating the relative pose, which includes the rotation matrix R and translation vector T, between two images. To evaluate the accuracy of relative pose estimation, we conducted evaluations on both outdoor (MegaDepth <cit.>) and indoor (ScanNet <cit.>) datasets. Each test set consisted of 1500 pairs of images. For the ScanNet dataset, we used an image resolution of 640×480, while for MegaDepth, we resized the highest dimension of the image to 1200. The relative pose for each image pair was estimated using OpenCV libraries. Following the approach in <cit.>, we measured the Area Under the Curve (AUC) of pose estimation error at thresholds of {5^o,10^o,20^o}. Tables <ref> and <ref> present the results for the MegaDepth and ScanNet datasets, respectively. 
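For reference, the relative pose for an image pair can be recovered from the predicted correspondences with standard OpenCV routines, assuming the camera intrinsics are known. The sketch below is illustrative only: the RANSAC threshold is a placeholder, degenerate essential-matrix solutions are not handled, and the AUC computation over the error thresholds is omitted.

import cv2
import numpy as np

def estimate_relative_pose(kpts_a: np.ndarray, kpts_b: np.ndarray,
                           K_a: np.ndarray, K_b: np.ndarray, ransac_thr: float = 0.5):
    # kpts_a, kpts_b: (M, 2) matched pixel coordinates; K_a, K_b: (3, 3) intrinsics.
    pts_a = cv2.undistortPoints(np.ascontiguousarray(kpts_a, dtype=np.float64).reshape(-1, 1, 2),
                                K_a, None).reshape(-1, 2)
    pts_b = cv2.undistortPoints(np.ascontiguousarray(kpts_b, dtype=np.float64).reshape(-1, 1, 2),
                                K_b, None).reshape(-1, 2)
    # Express the RANSAC threshold in normalized image coordinates.
    thr = ransac_thr / np.mean([K_a[0, 0], K_a[1, 1], K_b[0, 0], K_b[1, 1]])
    E, mask = cv2.findEssentialMat(pts_a, pts_b, np.eye(3),
                                   method=cv2.RANSAC, prob=0.99999, threshold=thr)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, np.eye(3), mask=mask)
    # Rotation/translation angular errors against the ground truth are then
    # accumulated over all pairs to compute the AUC at the chosen thresholds.
    return R, t, mask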
To ensure a fair comparison on ScanNet, we used models trained solely on MegaDepth for all detector-free methods. This helps to verify the models' generalization capacity. The proposed method, TopicFM+, outperformed the robust detector-based method SP <cit.> + SuperGlue <cit.>, as well as recent Transformer-based detector-free methods, including LoFTR <cit.>, MatchFormer <cit.>, QuadTree <cit.>, and ASTR <cit.>, as shown in Tables <ref> and <ref>. Although our efficient version, TopicFM-fast, had lower performance compared to the robust version TopicFM+, it remained competitive with the Transformer-based baselines. Importantly, TopicFM+ demonstrated significant performance improvement on the ScanNet dataset, highlighting its strong generalization capabilities. On the MegaDepth dataset, our method achieved approximately an improvement of +3 AUC@5^o compared to the second-best method, AspanFormer. §.§ Visual Localization The visual localization task aims to estimate the absolute camera pose for an input query image based on a sparse 3D map of the scene. This task involves two main steps: finding a set of 2D-3D correspondences between the query image and the 3D map, and then using a perspective-n-points (PnP) algorithm to estimate the camera pose. The feature-matching model can be applied to the first step. In line with previous work <cit.>, we utilized the visual localization pipeline HLoc <cit.> for evaluation. We integrated the matching model into this pipeline to assess its performance. HLoc also supports the construction of a 3D structure of the scene from a set of database images prior to performing query image localization. We evaluated both outdoor scenes (Aachen Day-Night v1.1 <cit.>) and indoor scenes (InLoc <cit.>). The estimated camera poses were submitted to the https://www.visuallocalization.net/benchmark/benchmark website, where the evaluation metrics were calculated. Aachen Day-Night v1.1. This outdoor dataset comprises 6697 database images and 1015 query images, with 824 images captured during the day and 191 images taken at night. To generate image pairs, HLoc selected the top-20 nearest neighbors for each database image and the top-50 nearest neighbors for each query image. We resized the images to have the highest dimension of 1200 and predicted matches for each image pair. The matches were then aggregated across all image pairs to extract a unique set of matching keypoints for each image. This aggregation process involved keypoint quantization <cit.>. Subsequently, the image keypoints and matches were imported into the COLMAP database to perform SfM and absolute pose estimation. The entire pipeline typically required approximately one day to complete when executed on an NVIDIA Tesla V100 GPU with 32GB of memory. The evaluation results are presented in Table <ref>, along with the results reported by other methods on the benchmark website. Our methods demonstrated the best performance across most evaluation metrics, outperforming the detector-based baseline SP <cit.> + SuperGlue <cit.> and recent Transformer-based methods such as ASpanFormer <cit.> and ASTR <cit.>. We observed that the detector-based method, SP+SuperGlue, achieved strong performance because SP and SuperGlue were trained on diverse datasets with various shapes and scenes, such as MS-COCO 2014 <cit.> (SP), synthetic shapes <cit.> (SP), and MegaDepth (SuperGlue). While ASpanFormer achieved good results, it required higher computational costs compared to LoFTR. 
In contrast, our method TopicFM-fast achieved both high performance and efficiency. InLoc. This indoor dataset consists of 9972 database images and 329 query images. Since this dataset provides 3D pointcloud database, HLoc only generated query image pairs by selecting the top-40 nearest retrieval neighbors for each query image. During the feature-matching step, we set the highest dimension of the images to 1024. The entire InLoc experiment took approximately two hours on an NVIDIA Tesla V100 GPU with 32GB memory. Table <ref> presents the evaluated results on InLoc. Remarkably, our method, TopicFM-fast, outperformed the other methods by a large margin. It also significantly surpassed our initial version of TopicFM <cit.>. This demonstrated the robustness of the proposed approach in this paper. §.§ Image Matching Challenge Image Matching Challenge 2022 (https://www.kaggle.com/competitions/image-matching-challenge-2022IMC22) was hosted on Kaggle. We submitted the source code to the benchmark website and received a mean Average Accuracy (mAA) score for camera pose estimation on a hidden dataset. For the experimental setups, we resized the images to have the highest dimension of 1472. After the image-matching step, we used OpenCV libraries and utilized a RANSAC threshold of 0.15 pixels to estimate a fundamental matrix for each image pair. These estimated fundamental matrices underwent post-processing and evaluation to obtain the final output scores. Table <ref> shows the results of the proposed method compared to other image-matching methods. TopicFM-fast achieved better accuracy than the baselines. Here, we did not report the highest-ranking submissions because those submissions employed ensemble techniques that combine the predictions of multiple image-matching models to produce the final prediction. §.§ Efficiency Evaluation We compared our method with recent Transformer-based methods, including LoFTR <cit.>, QuadTree <cit.>, and AspanFormer <cit.>. These methods utilized a ResNet-based network for multi-scale feature extraction and incorporated attention layers for coarse-level and fine-level matching. To evaluate efficiency, we measured GFLOPs (1 GFLOPs = 10^6 FLOPs) and runtime (in ms) of our method and the comparison methods. We calculated the average values of these metrics using 1500 image pairs from the ScanNet dataset. The measurements were performed on a workstation with an Intel Xeon CPU (48 cores), 252 GB of RAM, and a Tesla V100 GPU with 32 GB of memory. Fig. <ref> shows the experimental results at various image resolutions. As shown in Fig. <ref>, TopicFM-fast outperformed TopicFM+ in terms of computational cost and runtime due to the efficient context merging module described in Section <ref>. Both models exhibited significantly reduced computational costs and runtimes as the image resolution increased. Compared to the state-of-the-art method AspanFormer, TopicFM-fast achieved approximately a 50% reduction in runtime and GFLOPs, while demonstrating superior image-matching performance across most benchmarks. §.§ Topic Visualization The latent topics offer interpretability to the image-matching results, providing insights into the contextual information used for feature learning and indicating the presence of overlapping regions between images. To visualize the topics within each image, we selected the topic label with the highest probability based on the topic distribution θ of each feature. Fig. 
<ref> illustrates the results, where each topic in the image is represented by a specific color. As shown in Fig. <ref>, distinct spatial structures or objects consistently correspond to the same topic in each image. For example, in the first two image pairs of MegaDepth and Aachen, the topic "human" is highlighted in green, "tree" in orange, and "ground" in purple. Different spatial structures of a building, such as roofs, windows, and pillars, are assigned to different topics. This consistent pattern across images from both the MegaDepth and Aachen Day-Night datasets demonstrates the effectiveness of our topic inference module. Furthermore, as depicted in the last two image pairs of the MegaDepth and Aachen datasets, our method can identify overlapping structures by considering the co-visible topics (indicated with color) while disregarding non-overlapping regions (marked without color). Remarkably, despite being trained on the outdoor MegaDepth dataset, our topic-assisted feature-matching approach shows good generalization on the indoor ScanNet dataset, as evident in the last row of Fig. <ref>. §.§ Ablation Study In this section, we conducted experiments on the MegaDepth dataset to analyze the effectiveness and computational efficiency of the key components in our method. The proposed method consists of three modules: (i) the feature initialization FPN, (ii) the coarse-level matching module using TopicFM (Section <ref>), and (iii) the fine-level matching module using dynamic feature refinement (Section <ref>). For the coarse-level matching step, we investigated two versions of context merging: in-topic feature augmentation (Eq. <ref>) and efficient context merging (Eq. <ref>). Efficiency Details. We measured the computational cost for each module at an image resolution of 864×864. As shown in Table <ref>, the feature extraction step incurred the highest computational cost. This CNN-based network, commonly employed in other methods, is crucial for initializing robust features for subsequent steps. Furthermore, the proposed efficient context merging technique (Eq. <ref>) demonstrates higher efficiency compared to the in-topic feature augmentation approach (Eq. <ref>). Effectiveness of Proposed Modules. To investigate the impact of each module, we conducted experiments comparing the performance gains achieved by activating each module sequentially. We tested the following models: * Model-A represents our original work, TopicFM <cit.>. It used the feature extraction network FPN and employed in-topic feature augmentation (Eq. <ref>) for the coarse-level matching stage. For the fine-level matching stage, Model-A adopted the fixed center-keypoint approach, similar to LoFTR <cit.>. * Model-B improved the efficiency of Model-A by incorporating the efficient context merging technique (Eq. <ref>) in the coarse-level matching step. * Model-C built upon Model-B by further applying dynamic feature refinement in the fine-level matching step. Model-C represents the proposed TopicFM-fast in this paper. * Model-D represents TopicFM+ that employed in-topic feature augmentation and dynamic refinement. Based on Table <ref>, we can make two key observations. Firstly, there is a tradeoff between matching accuracy and computational cost when transitioning from in-topic feature augmentation (Eq. <ref>) to efficient context merging (Eq. <ref>) in the coarse-level matching step. 
The efficient context merging module enhances computational efficiency (from 73 GFLOPs to 49 GFLOPs) at the expense of a slight decrease in accuracy (Model-A to Model-B, Model-D to Model-C). Secondly, the dynamic feature refinement module significantly improves accuracy (Model-A to Model-D, Model-B to Model-C). In summary, we suggested several options to improve image-matching accuracy or efficiency. Based on the evaluations conducted in previous sections, we conclude that TopicFM-fast (Model-C) balances accuracy and computational cost, making it a valuable advancement. Covisible Topic Selection. The proposed method utilizes the estimation of covisible probabilities of topics (as described in Section <ref>) to select important topics for robust feature learning. To assess the impact of the number of covisible topics, denoted as K_co, on image-matching performance, we conducted evaluations on MegaDepth using AUC metrics. Table <ref> presents the results of relative pose estimation and corresponding runtimes for each K_co∈{2,4,6,8,10,12}. It is observed that the matching accuracy gradually improved as K_co increased. However, there was a slight decrease in AUCs when K_co exceeded 8, and the reduction stopped after reaching 10. This indicates that TopicFM effectively covered all covisible regions in the image pair. In conclusion, it is recommended to set the number of covisible topics between 6 and 10 during testing to achieve high image-matching accuracy. § CONCLUSION We have presented a novel method for solving the image-matching problem. Our approach leverages latent topics within each image to effectively support the feature-matching process. Our experiments demonstrate that the topic-assisted feature-matching approach achieves high accuracy while maintaining low computational costs. Additionally, we introduced a dynamic feature refinement technique to enhance pixel-level accuracy. As a result, the proposed architecture is robust and significantly more efficient than state-of-the-art methods. We are confident that our method will make a substantial contribution to real-time applications. § ACKNOWLEDGMENTS This work was supported by the Industrial Strategic Technology Development Program (No. 20007058, Development of safe and comfortable human augmentation hybrid robot suit) funded by the Ministry of Trade, Industry, & Energy (MOTIE, Korea). IEEEtran
http://arxiv.org/abs/2307.03374v1
20230707035426
STG-MTL: Scalable Task Grouping for Multi-Task Learning Using Data Map
[ "Ammar Sherif", "Abubakar Abid", "Mustafa Elattar", "Mohamed ElHelw" ]
cs.LG
[ "cs.LG" ]
[ STG-MTL: Scalable Task Grouping for Multi-Task Learning Using Data Maps Ammar Sherifnu Abubakar Abidhuggingface Mustafa Elattarnu Mohamed ElHelwnu nuNile University, Giza, Egypt huggingfaceHugging Face, New York, United States Ammar [email protected] Machine Learning, ICML 0.3in ] Multi-Task Learning (MTL) is a powerful technique that has gained popularity due to its performance improvement over traditional Single-Task Learning (STL). However, MTL is often challenging because there is an exponential number of possible task groupings, which can make it difficult to choose the best one, and some groupings might produce performance degradation due to negative interference between tasks. Furthermore, existing solutions are severely suffering from scalability issues, limiting any practical application. In our paper, we propose a new data-driven method that addresses these challenges and provides a scalable and modular solution for classification task grouping based on hand-crafted features, specifically Data Maps, which capture the training behavior for each classification task during the MTL training. We experiment with the method demonstrating its effectiveness, even on an unprecedented number of tasks (up to 100). § INTRODUCTION Multi-Task Learning (MTL) has emerged as a powerful technique in deep learning <cit.> that allows for joint training of multiple related tasks, leading to improved model performance compared to traditional Single-Task Learning (STL). By leveraging shared representations and knowledge across tasks, MTL enhances generalization and mitigates overfitting. Furthermore, MTL promotes faster learning of related tasks and alleviates the computational requirements of deep learning, making it particularly valuable in scenarios with limited task-specific data. That is why MTL has gained significant attention in various domains, including computer vision <cit.>, natural language processing <cit.>, speech recognition <cit.>, and healthcare <cit.>, and has shown promising results in improving accuracy, robustness, and efficiency. However, effectively harnessing the potential of MTL poses several challenges, including the identification of optimal task groupings <cit.> and the management of negative interference between tasks <cit.>. The task grouping problem in MTL is particularly challenging due to the exponential number of possible task combinations <cit.>. What makes it worse for the exhaustive search is that each trial involves a complete training and evaluation procedure, leading to computational and optimization burden. Moreover, inappropriate task groupings may result in performance degradation due to negative transfer between tasks <cit.>. Existing solutions have struggled to address these challenges, often suffering from scalability and modularity issues, making their practical application in real-world scenarios nearly infeasible. In this paper, we propose a novel data-driven method for task grouping in MTL for classification tasks, which overcomes the scalability and modularity limitations. Our method utilizes the concept of Data Maps <cit.>, hand-crafted features that capture the training behavior of each classification task during MTL training. By analyzing these data maps, we can identify task groupings, both hard and soft ones, that promote positive transfer and mitigate negative interference as much as possible. 
We demonstrate the effectiveness of our method through extensive experimentation, including experiments on an unprecedented number of tasks, scaling up to 100 tasks to emphasize the practicality of our approach. The contributions of this paper can be summarized as follows: * We propose a novel data-driven method for task grouping in MTL, addressing the challenges of scalability and modularity. * We propose a mechanism that utilizes our soft-grouping results, enabling model specialization via loss weighting. * We conduct extensive experiments, demonstrating the effectiveness of our method, even on a large number of tasks (scaling up to 100 classification tasks). § RELATED WORK MTL has been extensively studied to leverage the benefits of information sharing among related tasks, which can serve as an inductive bias to improve modeling performance <cit.>. Another perspective on MTL is that it enables more efficient utilization of the model capacity by focusing on learning relevant features and reducing the impact of irrelevant signals, which contributes to overfitting, leading to better generalization. However, when tasks lack shared information, they compete for the limited model capacity, resulting in performance degradation <cit.>. To address this challenge, task grouping has emerged as a promising solution to identify subsets of tasks that can be trained together, avoiding negative interference and promoting improved performance. Traditionally, the decision of task grouping has been approached through costly cross-validation techniques or human expert knowledge <cit.>. However, these methods have limitations when applied to different problem domains and may not scale well. Some attempts have been made to approach the problem differently enabling the models to automate the search over which parameters to share among particular tasks <cit.>. Methods such as Neural Architecture Search <cit.>, Soft-Parameter Sharing <cit.>, and asymmetric information transfer <cit.> have been developed. However, these models often exhibit poor generalization and struggle to perform well on diverse tasks and domains. Besides, they often require a large model capacity and do not thus scale well with a large number of tasks. Therefore, gradient-based approaches <cit.> have also been explored to determine task grouping in advance. The Task Affinity Grouping (TAG) approach <cit.>, which leverages gradients to determine task similarity, is an example of such an approach. Nevertheless, it has complex training paradigm and requires Θ(N^2) more forward and backward passes to compute the inter task affinities, putting an issue with scalability even if we enhance the solution's modularity. Another method, called Higher-Order Approximation (HOA) <cit.>, reduces the exponential number of MTL training, from the exhaustive search, by considering only the quadratic pairs of task combinations. However, even with such relaxation, the scalability of HOA remains limited, particularly when dealing with a large number of tasks. The task grouping problem has been addressed in recent studies through a Meta-Learning approach <cit.>, aiming to create a meta-learner that can estimate task grouping gains. Nevertheless, the computational demands of this approach pose practical challenges for real-world applications; it requires training MTL networks for every chosen task combination in the training set for multiple iterations. 
It furthermore outputs all the possible gains of every task combination, whose numbers grow exponentially, and runs a search algorithm over these exponentially growing gains to find the optimal grouping. As a result, the scalability of this solution is severely limited, making it less feasible for a larger number of tasks. § TASK CLUSTERING USING DATA MAPS Now, we elaborate in the components in our method in the next sections. We start with stating the notations we will use along with our MTL architecture we are using in our experiments in Section <ref>. Then, we move on to illustrate the data maps, which is crucial component of our method in Section <ref>. In Section <ref>, we talk regarding the approaches we use to cluster the tasks. We also introduce our evaluation mechanism of our task grouping in Section <ref>. Finally, we conclude this part with a simple theoretical comparison of our method and the literature from the perspective of scalability and modularity in Section <ref>. Figure <ref> provides an overview of our method. §.§ Preliminaries Notations In our paper, we use the following notations consistently. The set of all tasks is denoted as T={T_1,…,T_n}, where n represents the number of tasks and |T|=n. The total number of training data points is denoted as N. We calculate the data maps at specific epochs, and the set of epochs is represented as E={E_1, …, E_k}, where E_i corresponds to the i^th epoch. The task clusters are denoted by C={C_1, …, C_m}, and each cluster C_i has an associated centroid c_i. The participation of each task i in cluster j is represented by w_i,j, with the constraint that ∑_j=0^|C|w_i,j = 1, indicating the percentage of membership; W_i is the weight vector of the tasks in cluster j. The values of w_i,j range from 0 to 1, where 1 signifies full membership and 0 indicates no membership. MTL Architecture The MTL procedure for a given task combination, consisting of τ tasks denoted as {T_a_1, …, T_a_τ}, is defined as training with a joint objective for these tasks (Equation <ref>) where L is the accumulated loss value of the cluster, L_k is the loss of the k^th task, and w_k∈ [0,1] is an optional task weight of the k^th task. L = ∑_k=1^τ w_k· L_k Following the previous approaches <cit.>, we utilize a commonly employed hard-sharing multi-head architecture (Figure <ref>) for all our MTL experiments, where a single feature extractor is used to obtain shared representations, and separate task-specific heads are employed to output the result. Additionally, for all the experiments, we maintain the same data splits, via prior seeding, and keep the optimization algorithm and other hyperparameters fixed; this is to make sure any variability in the performance is only attributed to the task grouping and the corresponding weights if any. §.§ Data Maps as Task Features Data Maps <cit.>, originally developed as a model-based tool for characterizing and diagnosing NLP datasets, serve as a valuable component in our approach. They leverage the model behavior concerning individual data point instances of the training set for each task. In our work, we employ Data Maps as task features due to their simplicity, scalability, and ability to extract them on the fly without prior knowledge of the model architecture, thus enhancing the modularity of our approach. 
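To make the hard parameter sharing architecture and the weighted objective L = ∑_k w_k · L_k described above concrete, the following PyTorch-style sketch shows one possible realization. It is illustrative only: the encoder is a small stand-in for the ResNet18 backbone used in our experiments, and the names HardSharingMTL and weighted_mtl_loss are ours rather than taken from a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSharingMTL(nn.Module):
    """Shared feature extractor with one binary logit per task
    (hard parameter sharing); a small stand-in for a ResNet18 backbone."""
    def __init__(self, num_tasks: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Task heads: one output neuron per binary classification task.
        self.heads = nn.Linear(feat_dim, num_tasks)

    def forward(self, x):
        return self.heads(self.encoder(x))          # (batch, num_tasks) logits

def weighted_mtl_loss(logits, targets, task_weights):
    """L = sum_k w_k * L_k with per-task binary cross-entropy.
    targets: float tensor (batch, num_tasks); task_weights: (num_tasks,)."""
    per_task = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none").mean(dim=0)   # L_k for each task
    return (task_weights * per_task).sum()
```

With this training loop in place, the data maps described next can be collected on the fly from the per-task true-class probabilities of the training points.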
The concept behind Data Maps revolves briefly around extracting two essential values for each data point: the model confidence (μ) of the true class, which is the average probability of the true class over some epochs, and the variability (σ) of this confidence, which is the standard deviation of the true class probabilities over the same epochs. For a particular task, the data map shape is (N,2) where N is the training size. Figure <ref> shows an example of the resulting Data Map for an example task extracted from CIFAR10 dataset <cit.>. Because their information is very task-dependent, we thought they can serve as task descriptors. To further enhance the expressiveness of the extracted features, we also extract data maps at various epochs, allowing us to gain insights into their evolution over time; the resulting shape in such case is (|T|, |E|, N, 2). Therefore, by analyzing their characteristics, over the different epochs during training, we can capture crucial information about the relatedness of each task. In the extraction of data maps, we employ two approaches. The first approach involves building a single MTL model that incorporates all tasks and extracting the data maps directly from this unified model. Alternatively, we utilize the second approach, where individual models are constructed for each task, resulting in multiple STLs, and merging the data maps obtained from each model. Our results are primarily based on the first approach, as it offers the advantage of a single training procedure, simplifying computational complexity, and streamlining experimentation, while having the same qualitative results as the STL. §.§ Task Clustering With the extracted data maps in hand, our next step is to group the tasks into clusters based on their similarity. We propose three distinct approaches for task clustering: soft clustering, hard clustering, and point-based soft clustering. In both hard and soft clustering, we represent each task as a vector by concatenating the corresponding data maps. In the case of hard clustering, we employ the k-means algorithm <cit.> to cluster these task vectors, aiming to identify distinct clusters of tasks. To introduce a more nuanced representation of task similarities, we incorporate a modified version of the fuzzification step (Equation <ref>), from <cit.>, into our approach, which enables soft clustering, where x_i represents the i^th task vector of the corresponding data maps and F>1 represents the fuzzification index. This fuzzification process assigns soft memberships to tasks, allowing for more flexible and comprehensive clustering results. We predominantly rely on the soft clustering approach due to its effectiveness and reliability. w_i,j = 1/∑_k=1^|T|(‖ x_i - c_j ‖/‖ x_i - c_k ‖)^2/F-1 = ‖ x_i - c_j ‖^-2/F-1/∑_k=1^|T|(‖ x_i - c_k ‖)^-2/F-1 In early experiments, though, we also explored a point-based clustering approach to determine the participation membership of tasks. This approach involves clustering each instance point per task within each data map. Each data point then serves as a vote for its corresponding task, and the participation membership of each task is calculated based on the percentage of data points within the cluster (Equation <ref>) where d_k,e^i represents the k^th data point of the data map taken at epoch e for task i. w_i,j = ∑_k=1^N∑_e=1^|E| [d_k,e^i∈ C_j] /|E|· N However, we do not heavily rely on this point-based approach in our method. 
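As a concrete illustration of the two steps just described, turning per-task data maps into task vectors and deriving soft memberships via the fuzzification step, the following sketch uses NumPy and scikit-learn. It reflects our own naming and array-layout assumptions (cumulative confidence and variability computed up to each snapshot epoch), not a released implementation; memberships are normalized over clusters so that each row sums to 1.

```python
import numpy as np
from sklearn.cluster import KMeans

def data_maps(true_probs, snapshot_epochs):
    """true_probs: array (num_tasks, num_epochs, N) with the probability the
    model assigns to the true class of each training point at each epoch.
    For every snapshot epoch e we take the cumulative mean (confidence) and
    std (variability) over epochs 1..e, giving shape (num_tasks, |E|, N, 2)."""
    maps = [np.stack([true_probs[:, :e].mean(axis=1),
                      true_probs[:, :e].std(axis=1)], axis=-1)
            for e in snapshot_epochs]
    return np.stack(maps, axis=1)

def soft_task_memberships(task_maps, num_clusters, fuzz=2.0):
    """Concatenate each task's data maps into one vector, run k-means, and
    turn centroid distances into fuzzy memberships
    w_{i,j} proportional to ||x_i - c_j||^(-2/(F-1)), row-normalized."""
    X = task_maps.reshape(task_maps.shape[0], -1)
    km = KMeans(n_clusters=num_clusters, n_init=10).fit(X)
    dist = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
    inv = np.maximum(dist, 1e-12) ** (-2.0 / (fuzz - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

The point-based alternative (Eq. <ref>) is omitted from this sketch, since it is not the variant we rely on.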
This is because it treats the data points within a data map in isolation, failing to capture the abstract behavior specific to each task and overlooking the evolution of data maps over different epochs. §.§ Model Specialization through Loss Weighting In order to assess the effectiveness of our task grouping results, we use loss weighting as a method of model specialization. We construct MTL models that are tailored to specialize in specific sets of tasks based on the membership weights obtained from soft clustering results. For each cluster, we build an individual MTL model that focuses on the tasks assigned to that cluster according to their corresponding weights (Equation <ref>). To evaluate the performance of our solution, we apply the weighted average of the models' outputs according to the membership weights as in Equation <ref>, where O is our final output, O_k is the output according to the k^th cluster, and W_k is the weight vector of the k^th cluster. Figure <ref> provides an overview of these operations while inferring the output. By comparing the resulting values with both STL and traditional MTL schemes, we can gain insights into the benefits and improvements brought by our task grouping approach. O = ∑_k=1^m W_k· O_k §.§ Theoretical Scalability Comparison In the theoretical scalability comparison, we evaluate our method against existing literature, focusing on the number of models required for clustering. Table <ref> presents the comparison, where lower numbers indicate better scalability. Our approach stands out with excellent scalability, as it only necessitates training a single MTL model to extract data maps and perform clustering, or 𝒪(N) if we consider extracting data maps from STL models. This offers the most promising scalability potential for a larger number of tasks. That is why we can scale our experiments to a very large number of tasks as in Section <ref>. Notice TAG requires one single MTL training, yet this is a customized training procedure where each epoch is effectively processed Θ(N^2), N 2 in particular, times to compute the inter-task affinities pairs, which like the other methods limits its scalability. Therefore, one MTL training of TAG utilizes the same compute of Θ(N^2) MTL models trained within the other methods normally. Furthermore, our method's data map computation is performed on the fly, making it both model and task agnostic. This feature enhances the modularity of our approach, enabling effortless adaptation to different model architectures and tasks without manual intervention. § EXPERIMENTS In this section, we present a comprehensive overview of our experiments, focusing on assessing the effectiveness of our method and presenting the corresponding results. Section <ref> outlines the specifics of the datasets utilized in our experimentation, as well as the tasks employed. In Section <ref>, we delve into the details of the model architecture and the hyperparameters used during experimentation. The outcomes of the soft clustering of tasks are presented in Section <ref>, where we highlight the effectiveness of our approach in grouping related tasks. Finally, in Section <ref>, we evaluate the quality of the obtained clustering results comparing them to STL and MTL results. §.§ Datasets and Tasks Our task generation is based on the CIFAR10 and CIFAR100 datasets <cit.>. We define three groups of tasks for our experiments. 
In Group 1 (G1), we include binary classification tasks that determine whether an image belongs to a specific label in CIFAR10 or not. G1 consists of 10 tasks: {airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck}. Group 2 (G2) expands on G1 by introducing additional tasks on CIFAR10. These tasks include {Living being, Odd-numbered, Downside, Not living being, random}. The “Living being" task aims to detect whether an image contains a living being, which includes images with the labels {bird, cat, deer, dog, frog, horse}. Similarly, the “Not living being" task focuses on identifying non-living beings; these are { airplane, automobile, ship, truck} classes in CIFAR10. Notably, “Living being" and “Not living being" are intentionally designed to be similar tasks for testing purposes. The “Odd-numbered" task identifies whether the label of a CIFAR10 image is odd or not, encompassing {automobile, cat, dog, horse, truck} classes. Additionally, we flip half of CIFAR10 images and create a task to train the model to recognize vertically flipped images, the “Downside" task. Lastly, the “random" task assigns random binary labels to the entire dataset with a predefined seed for consistency and reproducibility. It is worth mentioning that while the original tasks in G1 are imbalanced, the extra tasks in G2 are all balanced ones. Group 3 (G3), similar to G1, consists of 100 binary classification tasks using the CIFAR100 labels. We also utilize the 20 super labels of CIFAR100 as our ground truth for task clustering evaluation. It is worth mentioning that CIFAR100 super labels are not intended for task grouping, so they are not grouped based on visual similarities like our method's objective. Instead, they are mostly clustered semantically, even though there are some exceptions like mushrooms and the classes of vehicles 1 and 2. Still, we think they serve as an informative indicator of the effectiveness of our approach, especially in the visually coherent superclasses. §.§ Model Architectures and Hyper-Parameters For all our experiments, we adopt the RESNET18 architecture <cit.> as our base model. Our method is model-agnostic, so we have also experimented with models with much less complexity, yet we use RESNET18 considering it has moderate model capacity. Furthermore, we utilize it without any pre-training, ensuring that the model starts from scratch for each task grouping scenario. The last fully connected layer of RESNET18 serves as the task heads, with the number of output neurons corresponding to the number of tasks. Each neuron in the task heads represents a specific classification task. Throughout our experiments, the rest of the network, excluding the task heads, is shared among all tasks. Also, we train the model for 50 epochs in all our experiments: to extract data maps and to evaluate the models. Additionally, in our clustering process, we primarily set the fuzzification index (F) to 2, unless explicitly mentioned otherwise. The fuzzification index controls the level of fuzziness in the soft clustering algorithm, so increasing it produce softer decisions. In terms of the loss function, we utilize Binary Cross Entropy as the binary classification loss for our tasks. However, to address the issue of task imbalance, we incorporate a penalty on positive instances for each task. By applying this penalty, we ensure that the model pays more attention to the minority label during training, thereby mitigating the impact of the imbalance and promoting better overall performance. 
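For concreteness, the construction of the binary labels for G1 and the additional G2 tasks described above can be sketched as follows. This is our own illustrative code (the standard CIFAR-10 class ordering is assumed, and the 'downside' mask only marks which images would be vertically flipped before training), not a released implementation.

```python
import numpy as np

CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]
LIVING = {"bird", "cat", "deer", "dog", "frog", "horse"}
ODD = {"automobile", "cat", "dog", "horse", "truck"}   # odd CIFAR-10 class indices

def build_task_labels(class_idx, seed=0):
    """class_idx: (N,) integer CIFAR-10 labels. Returns a dict of binary label
    vectors: the ten one-vs-rest tasks of G1 plus the extra G2 tasks. The
    'downside' mask marks which images would be vertically flipped before
    training; 'random' uses a fixed seed for reproducibility."""
    rng = np.random.default_rng(seed)
    names = np.array(CIFAR10_CLASSES)[class_idx]
    tasks = {c: (names == c).astype(np.int64) for c in CIFAR10_CLASSES}   # G1
    tasks["living_being"] = np.isin(names, sorted(LIVING)).astype(np.int64)
    tasks["not_living_being"] = 1 - tasks["living_being"]
    tasks["odd_numbered"] = np.isin(names, sorted(ODD)).astype(np.int64)
    tasks["downside"] = (rng.random(len(class_idx)) < 0.5).astype(np.int64)
    tasks["random"] = rng.integers(0, 2, size=len(class_idx))
    return tasks
```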
Finally, it worth mentioning that we do not perform any kind of tuning to any model. We use the same basic settings in all of our experiments. §.§ Task Clustering Results Results of our task clustering experiments are presented for all groups. We initially experimented on G2, generating their data maps as described in Section <ref> and Clustering them as in Section <ref>, as depicted in Figure <ref>. Notably, our method successfully clustered the “random" task separately, indicating its dissimilarity to the other tasks. Furthermore, throughout all our experiments, the tasks “Living being" and “Not living being" consistently exhibited the same membership distribution, which is reasonable considering their equivalence. Moreover, when focusing solely on the first 10 tasks from G2 without any additional tasks, our clustering algorithms demonstrated some semantic clustering capabilities, as shown in Figure <ref>. The algorithm successfully grouped images of living beings, including {bird, cat, deer, dog, frog, horse}, while another group consisted of images of non-living beings such as {airplane, automobile, ship, truck}. Nevertheless, this might be due to the impact of the “Living being" and “Not living being" tasks; we therefore conducted a similar experiment on G1, generating their data maps and clustering the tasks, without any extra tasks. As illustrated in Figure <ref>, even without additional tasks, our method performed the same reasonable clustering for G1, grouping living beings together and non-living things together <ref>. Additionally, Figure <ref> demonstrates the clustering using three clusters, revealing that the living being cluster was divided into two groups: cluster 1 and cluster 2. Cluster 1 predominantly contained quadruped animals {cat, dog, horse}, while cluster 2 included {bird, frog, deer} that represented the other living creatures except for the deer. These results showcase the effectiveness of our clustering algorithm in capturing semantic similarities among tasks based on the visual data leading to meaningful task groupings. In addition to our experiments on G1 and G2, we conducted a comprehensive evaluation of an unprecedented number of tasks, specifically 100 tasks from CIFAR100, in G3. As part of this evaluation, we compared our task clustering results against the predefined superclasses provided by CIFAR100. It is important to note that the superclasses in CIFAR100 primarily rely on semantic relationships as illustrated in Section <ref>. That is why we focus on coherent superclasses like people and flowers, as examples. In Figure <ref>, we showcase an example of the clustering results for a group of super tasks. It is noteworthy that our method successfully clusters certain groups of tasks in alignment with the predefined CIFAR100 superclasses, as illustrated in Figure <ref>. However, it is important to acknowledge that there are cases where the clustering may not be perfect, as depicted in Figure <ref>; we think this is primarily because our method focus one visual similarities, which is exploited during training rather than semantics. Nevertheless, even in such instances, our clustering algorithm manages to allocate significant weights of all tasks into distinctive clusters, such as clusters 0 and 8 in Figure <ref>. 
Notably, in cluster 8, the participation percentages of the tasks {orchid, poppy, rose, tulip} are the 2^nd highest across all clusters, indicating a close relationship with the misclassified task sunflower, yet our method suggests that the other four tasks are more visually related. We further discuss all the clustering result details of the 100 tasks in Appendix <ref>. §.§ Evaluation Analysis To further validate the effectiveness of our method, we conducted a comprehensive evaluation as described in Section <ref> on all task groups. Figure <ref> presents the average F1 score for both the training and test sets of all three sets of tasks. Our method is denoted according to the number of clusters used. The MTL curve represents the results obtained from training an MTL model on all tasks without any grouping, while the STL curve represents the results obtained by training separate STL models for each task and merging their outputs. We compare the performance of our method against the MTL and STL approaches in both G1 & G2, and against the MTL approach only in G3, because the STL performance there is poorer than the MTL, as it overfits. Overall, our method consistently outperforms both the MTL and STL approaches, indicating that the task grouping provides valuable information for improving task performance. Notably, although our method tends to overfit and achieves excellent training performance, it also achieves the best performance on the test set. This suggests that if the models were further fine-tuned, even greater gains could be achieved, yet we refrain from tuning any of the models in this study to guarantee fairness in comparison. § CONCLUSION AND FUTURE WORK In conclusion, we have presented STG-MTL, a novel scalable approach for task grouping in multi-task learning (MTL) settings. Our method utilizes data maps <cit.> to identify task similarities and group them accordingly. We showed its superior scalability theoretically in comparison to TAG <cit.>, HOA <cit.>, and MTG-Net <cit.>. We have also demonstrated the effectiveness of our method through our experiments on the CIFAR10 and CIFAR100 datasets <cit.>, where we pushed the boundaries by experimenting with 100 tasks, which has never been done before in the literature, demonstrating its scalability. We have also compared our clustering results against the predefined superclasses in CIFAR100, further validating the effectiveness of our approach. Furthermore, our method outperformed traditional MTL and single-task learning (STL) approaches, showcasing the quality of the task grouping and its ability to improve multi-task learning performance. For future work, we plan to expand the scope of our experiments by including a wider range of datasets and task types, enabling a more comprehensive evaluation of our approach's effectiveness and applicability. Furthermore, as our Data Maps are currently limited to classification tasks, we aim to explore their generalization to other task types, such as regression. Additionally, we hope our research can open a new research direction in the MTL community that explores the development of new features, beyond data maps, that capture the training dynamics efficiently. By advancing this research direction, we can unlock new possibilities for enhancing performance and driving further advancements in the field of MTL. Acknowledgments We gratefully thank the Fatima Fellowship[<https://www.fatimafellowship.com/>] for supporting this research, especially during its early stages.
§ TASK CLUSTERING RESULTS OF 100 TASKS In this appendix, we present detailed insights into the task clustering results of 100 tasks, building upon the experimental setup outlined in Section <ref>. Figure <ref> showcases the clustering results using 20 clusters, while Figure <ref> illustrates the results with 10 clusters. Each image in the figures represents the clustering outcomes for one superclass from CIFAR100. Notably, the clustering with 20 clusters demonstrates successful grouping in many categories, such as {People, Trees, Food Containers, Flowers, Household Electrical Devices}. Figures <ref> and <ref> highlight the close association between Vehicles 1 and 2, as they are almost merged into the same cluster (Cluster 15). When reducing the number of clusters to 10, we observe enhanced coherence in the assigned tasks. For instance, the tasks related to "Fruit and Vegetables" are nearly clustered together after reducing the number of clusters. Moreover, our method effectively captures the logical associations between categories. Figures <ref> and <ref> showcase the plausible merge between "Household Electrical Devices" and "Household Furniture." Additionally, we observe similar distribution patterns among related categories, such as {Large Carnivores, Large Omnivores and Herbivores, Medium-Sized Mammals} in Figures <ref>, <ref>, and <ref>, respectively. Overall, these results reveal the qualitative success of our approach in clustering a very large number of tasks, highlighting the effectiveness of our method even in such challenging scenarios.
http://arxiv.org/abs/2307.01512v1
20230704064907
A Fine Grained Stochastic Geometry Based Analysis on LEO Satellite Communication Systems
[ "Yanshi Sun", "Zhiguo Ding" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
A Fine Grained Stochastic Geometry Based Analysis on LEO Satellite Communication Systems Yanshi Sun, Member, IEEE, Zhiguo Ding, Fellow, IEEE The work of Y. Sun is supported by Hefei University of Technology's construction funds for the introduction of talents with funding number 13020-03712022011. Y. Sun is with the School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230009, China (email: [email protected]). Z. Ding is with the Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE, and the Department of Electrical and Electronic Engineering, University of Manchester, Manchester, UK (email: [email protected]). Recently, stochastic geometry has been applied to provide tractable performance analysis for low earth orbit (LEO) satellite networks. However, existing works mainly focus on analyzing the “coverage probability”, which provides limited information. To provide more insights, this paper carries out a more fine grained analysis of LEO satellite networks modeled by a homogeneous Poisson point process (HPPP). Specifically, the distribution and moments of the conditional coverage probability given the point process are studied. The developed analytical results provide characterizations of LEO satellite networks that are not available in the existing literature, such as “user fairness” and “what fraction of users can achieve a given transmission reliability”. Simulation results are provided to verify the developed analysis. Numerical results show that, in a dense satellite network, it is beneficial to deploy satellites at low altitude, for the sake of both coverage probability and user fairness. Index terms: low earth orbit (LEO) satellite, stochastic geometry, user fairness, meta distribution § INTRODUCTION Recently, deploying low earth orbit (LEO) satellite constellations to provide ubiquitous global connectivity is becoming an important enabling technique for 6G wireless communications <cit.>. Because LEO satellite constellations require a dense deployment of satellites, performance evaluation using computer simulations can be time-consuming but yield limited insight. To address this challenge, emerging research is exploring the use of tools from stochastic geometry <cit.> to evaluate the performance of LEO satellite networks theoretically and provide a better understanding of their properties <cit.>. It is worth noting that existing research in this area typically focuses on metrics that reveal the average performance of the entire network, such as “the coverage probability”. However, many important properties of satellite communication networks remain unclear. For example, we may ask: i) how does a satellite network perform in terms of user fairness? ii) what fraction of users in the network can achieve a certain link reliability?
To answer the above questions, this letter aims to provide a more fine grained stochastic geometric performance analysis for downlink LEO satellite networks. Specifically, LEO satellites, which provide service to ground users, are modeled as a homogeneous Poisson point process (HPPP) denoted by Φ. The conditional coverage probability given Φ with respect to a signal-to-interference-ratio (SIR) threshold θ, denoted by P_s(θ), is first evaluated. Note that P_s(θ) itself is a random variable driven by Φ. Different from existing work which only studies the mean value of P_s(θ), this paper investigates the moments and distribution of P_s(θ). The developed analytical results can provide characterization on “user fairness” and “what fraction of users can achieve a given transmission reliability ”. Numerical results show that, for a dense satellite, it is better to deploy satellites at low altitude, by jointly considering users' average performance and user fairness. While for a sparser network, a high altitude is more favorable for deploying satellites. § SYSTEM MODEL Consider a downlink satellite communication scenario as shown in Fig. <ref>. The earth is modeled as a sphere with radius R_E. The LEO satellites are flying around the earth with circular orbit at the same altitude R_min above the mean sea level. For a given short time slot, the locations of the satellite can be assumed to be fixed, and the focus of this paper is to evaluate the performance achieved by the ground users served by satellites in such a short time slot. For a given short time slot, it is reasonable to assume that the locations of the LEO satellites form a homogeneous Poisson Point Process (HPPP) on the surface of the sphere (denoted by 𝕊^2_S) centered at the geocentric with radius R_S=R_E+R_min, denoted by Φ={x_i}, where x_i is the coordinate of the i-th satellite. The intensity of Φ is denoted by λ, which indicates the density of the satellites. Ground users can also be modeled as a HPPP with intensity λ_u on the surface of the earth. Due to the stationarity of HPPP, it is sufficient to consider a typical ground user U_0 located at (0,0,R_E). It is assumed that each ground user is served by its nearest satellite[The nearest satellite has to be within the horizon of the user, otherwise there is no satellite serving the user.], and all other satellites which are within the horizon of the user play the role of interference sources. For the ease of exposition, define a spherical cap 𝒜 which is the portion of sphere surface 𝕊^2_S cut off by a tangent plane to the earth surface at (0, 0,R_E). According to the previous discussion, only the satellites located on 𝒜 affect the performance of U_0. In the rest of this paper, the satellites are ordered according to their distances to U_0, i.e., x_1 is nearest satellite's coordinate. Given that there is at least one satellite on 𝒜, the SIR at U_0 is given by: SIR =|h_1|^2/∑_x_i∈Φ\{x_1}∩𝒜|h_i|^2Δ=|h_1|^2/I, where h_i=g_ir_i^-α/2 is the channel between U_0 and the i-th satellite <cit.>, g_i is the small scale fading, which is modeled as Nakagami fading with parameter M. Consequently, |g_i|^2 is a normalized Gamma random variable with parameter M, whose cumulative density function (CDF) is given by: F_|g_i|^2(x)=γ(M,Mx)/Γ(M) =1-∑_i=0^M(Mx)^i/i!e^-Mx, where Γ(M)=(M-1)! and γ(s,x) is the lower incomplete gamma function, given by γ(s,x)=∫_0^xt^s-1e^-t dt. r_i is the distance between U_0 and the i-th satellite. 
Note that for any satellite within area 𝒜, r_i is upper bounded by R_max=√(R_S^2-R_E^2). And α is the large scale path loss exponent. It is assumed that each ground user knows perfect channel state information (CSI) to its serving satellite. And I=∑_x_i∈Φ\{x_1}∩𝒜|h_i|^2 is the sum of the interferences. In the following, we would like to characterize the distribution of SIR. Note that there are two kinds of randomness which impact SIR, one is the small scale fading, and the other is Φ. Given Φ, the conditional coverage probability that the SIR is beyond a threshold θ is given by: P_s(θ) Δ=Pr(SIR>θ, Φ(𝒜)>0|Φ) =Pr(SIR>θ|Φ,Φ(𝒜)>0 )1(Φ(𝒜)>0|Φ), where 1(·) is the indicator function. Note that P_s(θ) is a random variable whose distribution depends on the distribution of Φ. In the existing literature, only the mean value of P_s(θ), i.e., the coverage probability of the typical link, is studied. The “coverage probability” can only reflect the average performance of all users, which is just a rough characterization of the system performance. This paper will give a more fine grained characterization of P_s(θ), to provide more statistical insights, in the next section. § PERFORMANCE ANALYSIS In this section, the moments of P_s(θ) are first evaluated, then the meta distribution, i.e., the complementary cumulative distribution function (CCDF) of P_s(θ), is approximated via beta approximation by using the first and second order moments of P_s(θ). Given Φ and Φ(𝒜)>0, the conditional coverage probability that the SIR is larger than the threshold θ can be approximated as: (SIR>θ|Φ,Φ(𝒜)>0 ) ≈∑_m=1^MC_M^m(-1)^m+1∏_x_i∈Φ\{x_1}∩𝒜1/(1+mηθ r_1^α/Mr_i^α)^M. The conditional coverage probability can be calculated as follows: (SIR>θ|Φ,Φ(𝒜)>0 ) =(|g_1|^2>θ r_1^αI|Φ,Φ(𝒜)>0) =1-(|g_1|^2<θ r_1^αI|Φ,Φ(𝒜)>0) (a)≈ 1-𝔼_g_i{(1-e^-ηθ r_1^α I)^M |Φ,Φ(𝒜)>0} (b)=𝔼_g_i{∑_m=1^MC_M^m(-1)^m+1e^-mηθ r_1^αI |Φ,Φ(𝒜)>0} =𝔼_g_i{∑_m=1^MC_M^m(-1)^m+1∏_x_i∈Φ\{x_1}∩𝒜. ..e^--mηθ r_1^α|g_i|^2/r_i^α|Φ,Φ(𝒜)>0} (c)=∑_m=1^MC_M^m(-1)^m+1∏_x_i∈Φ\{x_1}∩𝒜1/(1+mηθ r_1^α/Mr_i^α)^M, where step (a) follows from the fact that the CDF of the normalized gamma random variable |g_1|^2 can be tightly lower bounded by <cit.>: F_|g_1|^2(x)≥(1-e^-η x)^M, where η=M(M!)^-1/M, and the lower bound is used to approximate the probability; step (b) follows from applying binomial expansion; and step (c) follows from taking average with respect to the small scale fadings of the interfering links, which are independently and identically distributed (i.i.d) normalized gamma random variables with parameter M. Note that when M=1, the small scale fading degrades to Rayleigh fading, and equality holds for (<ref>), resulting in no approximation error in (<ref>). The b-th moment of P_s(θ) is defined as: M_b(θ)=𝔼_Φ{P_s^b(θ)} In the next, we would like to provide expressions for M_b(θ). To this end, it is necessary to obtain the probability density function (PDF) of r_1 given that Φ(𝒜)>0, denoted by f_r_1|Φ(𝒜)>0(r). According to <cit.>, the conditional PDF can be expressed as: f_r_1|Φ(𝒜)>0(r)=υ(λ,R_S)re^-λπR_S/R_Er^2, R_min≤ r ≤ R_max, where υ(λ,R_S)=2πλR_S/R_Ee^λπR_S/R_E(R_S^2-R_E^2)/ e^2πλ R_S(R_S-R_E)-1. 
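Before stating the moments, it may help to see how the quantities defined so far can be estimated numerically. The sketch below is our own illustrative code, not the authors' implementation: it draws realizations of the HPPP on the visible spherical cap, computes the user-to-satellite distances from the identity r^2 = R_S^2 + R_E^2 - 2R_E z for a user at (0,0,R_E), and evaluates the conditional coverage probability in the Rayleigh case M = 1, where the product form above becomes exact. The default altitude and density are illustrative placeholders.

```python
import numpy as np

def simulate_conditional_coverage(lam=1e-12, r_e=6371e3, r_min=500e3,
                                  alpha=3.5, theta_db=0.0, n_iter=10000, seed=1):
    """Monte Carlo sketch for M = 1 (Rayleigh), where
    P_s(theta) = prod_i 1/(1 + theta (r_1/r_i)^alpha) over the interferers is
    exact. Satellites form an HPPP of intensity lam on the sphere of radius
    R_S = R_E + R_min; only those above the user's horizon (z >= R_E) count."""
    rng = np.random.default_rng(seed)
    r_s = r_e + r_min
    theta = 10.0 ** (theta_db / 10.0)
    cap_area = 2.0 * np.pi * r_s * (r_s - r_e)        # area of the visible cap
    ps = np.zeros(n_iter)
    for it in range(n_iter):
        n = rng.poisson(lam * cap_area)
        if n == 0:
            continue                                   # no satellite on the cap
        z = rng.uniform(r_e, r_s, size=n)              # uniform on cap <=> uniform z
        r = np.sort(np.sqrt(r_s**2 + r_e**2 - 2.0 * r_e * z))
        ps[it] = np.prod(1.0 / (1.0 + theta * (r[0] / r[1:]) ** alpha))
    m1, m2 = ps.mean(), (ps ** 2).mean()               # estimates of M_1, M_2
    return ps, m1, m2
```

Such empirical estimates can be used to sanity-check the closed-form moments derived next.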
For a positive integer b, the b-th moment of P_s(θ) can be expressed as follows: M_b(θ)≈ G(θ)(1-exp(-2πλ R_minR_S )), where G(θ)≈∑_b_1,⋯,b_Mb b_1,⋯,b_M∏_m=1^M(C_M^m(-1)^m+1)^b_m (R_max-R_min)π/2N∑_k=1^K√(1-ψ_k^2)exp(-Q(d_k,θ))f_r_1|Φ(𝒜)>0(d_k), K is the Gaussian-Chebyshev approximation <cit.> parameter, ψ_k=cos(2k-1)π/2K, d_k=R_max-R_min/2ψ_k+R_max+R_min/2, and Q(r_1,θ)≈ π^2λ R_S(R_max-r_1)/NR_E∑_n=1^N√(1-ϕ_n^2) c_n ×(1-∏_m=1^M(1+mηθ r_1^α/Mc_n^α)^-Mb_m) where N is the Gaussian-Chebyshev approximation parameter, ϕ_n=cos(2n-1)π/2N, c_n=R_max-r_1/2ϕ_n+R_max+r_1/2. According to the definition of M_b(θ), it can be expressed as follows: M_b(θ)=𝔼_Φ|Φ(𝒜)>0{P_s^b(θ)}(Φ(𝒜)>0). Note that (Φ(𝒜)>0) is the probability that there is at least one satellite on the spherical cap 𝒜, which can be easily obtained according to the properties of HPPP, as follows <cit.>: (Φ(𝒜)>0)=1-exp(-2πλ R_minR_S ). The remaining task is to evaluate 𝔼_Φ|Φ(𝒜)>0{P_s^b(θ)}Δ=G(θ). By applying Lemma 1, G(θ) can be expressed as: G(θ)= 𝔼_Φ|Φ(𝒜)>0{(∑_m=1^MC_M^m(-1)^m+1.. ..∏_x_i∈Φ\{x_1}∩𝒜1/(1+mηθ r_1^α/Mr_i^α)^M)^b}, By applying polynomial expansion, G(θ) can be further expressed as: G(θ)=𝔼_Φ|Φ(𝒜)>0{∑_b_1,⋯,b_Mb b_1,⋯,b_M∏_m=1^M . . (C_M^m(-1)^m+1)^b_m∏_x_i∈Φ\{x_1}∩𝒜∏_m=1^M1/(1+mηθ r_1^α/M r_i^α)^Mb_m}. Then, by applying the probability generating functional (PGFL) of HPPP <cit.>, it is obtained that G(θ)=𝔼_r_1|Φ(𝒜)>0{∑_b_1,⋯,b_Mb b_1,⋯,b_M. ∏_m=1^M(C_M^m(-1)^m+1)^b_mexp(-2πλR_S/R_E .∫_r_1^R_max(1-∏_m=1^M1/(1+mηθ r_1^α/M r^α)^Mb_m)r dr)} ≈𝔼_r_1|Φ(𝒜)>0{∑_b_1,⋯,b_Mb b_1,⋯,b_M. .∏_m=1^M(C_M^m(-1)^m+1)^b_mexp(-Q(r_1,θ))}, where the last step is obtained by applying Gaussian-Chebyshev approximation to the integration of the exponent. Finally, by taking expectation with respect to r_1 given Φ(𝒜)>0, and applying the Gaussian-Chebyshev approximation, the proof is complete. Note that when b=1, M_1(θ) is the mean value of P_s(θ), which is the so called “coverage probability”. The coverage probability can only reflect the average performance of all users in the considered scenario. The variance of P_s(θ) can be obtained as var(P_s(θ))=M_2(θ)-M_1^2(θ). Note that, according to the ergodicity of Φ, the larger the variance is, the larger the difference of the users' obtained QoS becomes. Hence, the variance of P_s(θ) can be used to indicate user fairness of the considered scenario <cit.>. The meta distribution of SIR is defined as: F̅_P_s(θ,x)Δ=(P_s(θ)>x), x∈[0,1], which is actually the CCDF of P_s(θ). An exact integral expression for F̅_P_s(θ,x) can be formulated by applying Gil-Pelaez inversion theorem <cit.>. However, it is very challenging to evaluate numerically and provide insights. Thus, in practice, F̅_P_s(θ,x) is usually approximated as a beta distribution, by matching the mean and variance of the beta distribution with M_1(θ) and M_2(θ) <cit.>. Note that, the PDF of a beta distributed random variable Z, can be uniquely characterized by two parameters κ and β, as follows: f_Z(z)=z^κ-1(1-z)^β-1/B(κ,β), where B(κ,β) is the beta function, and 𝔼{Z}=κ/κ+β, 𝔼{Z^2}=κ(κ+1)/κ+β(κ+β+1). Let 𝔼{Z}=M_1(θ) and 𝔼{Z^2}=M_2(θ), it can be obtained that: κ=M_1M_2-M_1^2/M_1^2-M_2, β=(1-M_1)(M_2-M_1)/M_1^2-M_2, and the CCDF of P_s(θ) can be approximated as: F̅_P_s(θ,x)≈1-I_x(κ,β), where I_x(κ,β) is the regularized incomplete beta function, given by I_x(κ,β)=∫_0^xt^κ-1(1-t)^β-1 dt/B(κ,β). § SIMULATION RESULTS In this section, numerical results are presented to demonstrate performance of the considered LEO satellite communication system. 
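As a small companion to the numerical results that follow, the beta approximation of the meta distribution can be evaluated directly from the first two moments. The sketch below is ours (it uses SciPy's beta survival function as the regularized incomplete beta), with M_1 and M_2 supplied either by the analytical expressions above or by simulation.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def meta_distribution_beta(m1, m2, x):
    """Beta approximation of the meta distribution: match Beta(kappa, beta) to
    the first two moments M_1, M_2 of P_s(theta) and return the approximate
    CCDF 1 - I_x(kappa, beta)."""
    var = m2 - m1 ** 2                     # variance of P_s(theta), must be > 0
    kappa = m1 * (m1 - m2) / var
    b = (1.0 - m1) * (m1 - m2) / var
    return beta_dist.sf(np.asarray(x), kappa, b)   # survival function = 1 - CDF
```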
The parameters are set as: R_E=6371 km, α=3.5. The simulations are obtained by taking average over 10000 realizations of HPPP on the satellite sphere surface. Figs. <ref> and <ref> show the mean and variance of P_s(θ). As shown in Fig. <ref>, when λ=10^-12, M_1(θ) decreases with the LEO altitude, while the variance first increases and then decreases. Thus, it can be easily concluded from Fig. <ref> that: for λ=10^-12, it is beneficial to deploy satellites at low attitudes, in terms of both coverage probability and user fairness. Differently, for a lower satellite density λ=10^-13, opposite trends are observed: the mean value first increases and then slightly decreases with the LEO altitude, while the variance decreases with the LEO altitude, and hence it is better to deploy satellites at middle and high altitudes. Figs. <ref> and <ref> demonstrate the meta distribution of SIR. Exact values of M_1 and M_2 are obtained via Monte Carlo simulations. From the figures, it is shown that, analytical results perfectly match simulation results when M=1. However, when M=3, analytical results cannot perfectly match simulation results, because M_1(θ) and M_2(θ) cannot be accurately evaluated as shown in Fig. <ref>. In addition, it can be observed from Figs. <ref> and <ref>, when λ=10^-12, it is better to deploy the satellite at LEO altitude R_min=200 km compared to higher altitudes 400 km and 800 km, because for a given 0<x<1, the CCDF of P_s(θ) at R_min=200 km is larger than those at R_min=400 km and R_min=800 km. Note that the observation here is consistent with that shown in Fig. <ref>, as stated in the previous paragraph. § CONCLUSION This paper has provided a fine grained stochastic geometry based performance analysis on downlink LEO satellite communication networks. HPPP has been applied to model the locations of satellites. The moments and distribution of the conditional coverage probability given the point process have been investigated. It has been shown that, for a dense satellite constellation, it is better to deploy satellites at low altitude, by jointly considering users' average performance and user fairness, while for a sparse constellation, high altitude is favorable for satellite deployment. IEEEtran
http://arxiv.org/abs/2307.00338v1
20230701133303
Light projectile elastic scattering by nuclei described by the Gogny interaction
[ "J. López Moraña", "X. Viñas" ]
nucl-th
[ "nucl-th" ]
http://arxiv.org/abs/2307.01650v1
20230704111154
Improved approximation algorithms for some capacitated $k$ edge connectivity problems
[ "Zeev Nutov" ]
cs.DS
[ "cs.DS" ]
Improved approximation algorithms for some capacitated k edge connectivity problems Zeev Nutov We consider the following two variants of the Capacitated k-Edge Connected Subgraph (Cap-k-ECS) problem. * Cap-k-ECS Augmentation: Given a graph G=(V,E) with edge costs and E_0 ⊆ E, find a min-cost edge set J ⊆ E ∖ E_0 that covers all cuts with at most k-1 edges of the graph G_0=(V,E_0). We obtain approximation ratio k-λ_0+1+ε, improving the ratio 2min{k-λ_0,8} of <cit.> for k-λ_0 ≤ 14, where λ_0 is the edge connectivity of G_0. * (k,q)-Flexible Graph Connectivity ((k,q)-FGC): Given a graph G=(V,E) with edge costs, a set U ⊆ E of “unsafe” edges, and integers k,q, find a min-cost subgraph H of G such that every cut of H has at least k safe edges or at least k+q edges. We show that (k,1)-FGC admits approximation ratio 3.5+ε if k is odd (improving the ratio 4 of <cit.>), and that (k,2)-FGC admits approximation ratio 6 if k is even and 7+ε if k is odd (improving the ratio 20 of <cit.>). § INTRODUCTION Let G=(V,E) be a graph. For an edge subset or a subgraph J of G and S ⊆ V let δ_J(S) denote the set of edges in J with one end in S and the other in V ∖ S, and let d_J(S)=|δ_J(S)| be their number; we say that J covers S if d_J(S) ≥ 1. We say that J covers a set family F if J covers every set S ∈ F. For a proper node subset S of a graph H, the cut defined by S is δ_H(S) (it is known that if H is connected then distinct sets define distinct cuts); an ℓ-cut means that d_H(S)=ℓ. We consider several variants of the following problem. Capacitated k-Edge Connected Subgraph (Cap-k-ECS) Input: A graph G=(V,E) with edge costs {c_e:e ∈ E} and edge capacities {u_e:e ∈ E}, and an integer k. (All input numbers are assumed to be non-negative integers.) Output: A min-cost edge set J ⊆ E such that u(δ_J(S)) ≥ k for every ∅ ≠ S ⊊ V. In the augmentation version of Cap-k-ECS we are also given a subgraph G_0=(V,E_0) of G of cost zero that has capacitated edge connectivity λ_0=min{u(δ_E_0(S)): ∅ ≠ S ⊊ V} close to k. The goal is to compute a min-cost edge set J ⊆ E ∖ E_0 such that G_0 ∪ J has capacitated connectivity k. For k-λ_0=1 this problem is equivalent to the ordinary k-Edge Connectivity Augmentation problem, where all capacities are 1. We consider a specific version of Cap-k-ECS Augmentation when every edge in E ∖ E_0 has capacity ≥ k-λ_0, and w.l.o.g. assume that edges in E_0 have capacity 1. Consequently, this problem can also be formulated as follows. Cap-k-ECS Augmentation Input: A graph G=(V,E), E_0 ⊆ E with edge costs {c_e:e ∈ E ∖ E_0}, and an integer k. Output: A min-cost edge set J ⊆ E ∖ E_0 that covers the family {∅ ≠ S ⊊ V: d_E_0(S) < k}.
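To illustrate the covering formulation just stated, the following brute-force feasibility check (our own illustrative code, exponential in the number of vertices and meant only for toy instances) verifies that a candidate J ⊆ E ∖ E_0 covers every cut of G_0 with fewer than k edges.

```python
from itertools import combinations

def covers_all_deficient_cuts(n, e0, j, k):
    """Brute-force feasibility check for Cap-k-ECS Augmentation on tiny
    instances: every nonempty proper subset S of V = {0,...,n-1} with
    d_{E_0}(S) < k must be crossed by at least one edge of J.
    e0, j: lists of edges given as vertex pairs."""
    def crossing(edges, s):
        return sum(1 for (u, v) in edges if (u in s) != (v in s))
    for size in range(1, n):                       # all proper nonempty subsets
        for s in map(set, combinations(range(n), size)):
            if crossing(e0, s) < k and crossing(j, s) == 0:
                return False                       # a deficient cut is uncovered
    return True
```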
Using the 2-approximation algorithm of <cit.> for increasing the edge connectivity by 1, one can easily obtain ratio 2(k-_0) for this problem. Using a more recent (1.5+)-approximation algorithm of <cit.> gives ratio (1.5+)(k-_0). Bansal, Cheriyan, Grout, and Ibrahimpur <cit.> showed that admits a constant approximation ratio 16. We improve over these ratios for the cases when _0 is close to k. admits the following approximation ratios: * k-_0 if _0,k are both even. * k-_0+1/2+ if _0,k have distinct parity. * k-_0+1+ if _0,k are both odd. For example, when k-_0 =8 and _0,k-1 are both even, our ratio is 8 while that of <cit.> is 16. On the other hand, when k-_0 ≥16 the ratio 16 of <cit.> is better than our ratio. Recently, Adjiashvili, Hommelsheim and Mühlenthaler <cit.> defined a new interesting version of , called k-Flexible Graph Connectivity (k-). Suppose that there is a subset U E of “unsafe” edges, and we want to find the cheapest spanning subgraph H that will be k-connected even if some unsafe edge is removed. This means that for any proper subset S of V we should have d_H U(S) ≥ k or d_H(S) ≥ k+1. It is not hard to see that k- is equivalent to Cap-(k(k+1))-ECS with u(e)=k if e ∈ U and u(e)=k+1 otherwise. Furthermore, for U= we get the ordinary k-Edge-Connected Subgraph problem. Boyd, Cheriyan, Haddadan, Ibrahimpur <cit.> suggested a generalization of when up to q unsafe edges may fail. Let us say that a subgraph H=(V,J) of G is (k,q)-flex-connected if any cut _H(S) of H has at least k safe edges or at least k+q (safe and unsafe) edges, namely, if d_H U(S) ≥ k or d_H(S) ≥ k+q for all ≠ S ⊊ V. Observing that d_H U(S) =d_H(S)-d_H ∩ U(S), we get that H is (k,q)-flex-connected if and only if d_H(S) ≥ k+min{d_H ∩ U(S),q} ∀≠ S ⊊ V Summarizing, we get the following problem. 0.96 (k,q)-Flexible Graph Connectivity ((k,q)-) Input: A graph G=(V,E) with edge costs {c_e:e ∈ E}, U E, and integers k,q ≥ 0. Output: A min-cost subgraph H of G such that (<ref>) holds. As was mentioned, (k,1)- reduces to Cap-(k(k+1))-ECS with u(e)=k if e ∈ U and u(e)=k+1 otherwise. It is also known (see <cit.>) that (1,q)- reduces to Cap-(q+1)-ECS with u(e)=1 if e ∈ U and u(e)=q+1 otherwise. However, no reduction to Cap-ℓ-ECS is known for other values of k,q. The best ratio for (k,1)- was 4 for arbitrary costs and 16/11 for unit costs <cit.>. The best ratio for (k,2)- was 2k+4 for k ≤ 7 <cit.> and 20 for k ≥ 8 <cit.>. We improve this as follows, and also give a simple approximation algorithm for (k,q)- with unit costs. * (k,1)- admits approximation ratio 3.5+ if k is odd. * (k,2)- admits approximation ratio 7+ if k is odd and 6 if k is even. * For unit costs, (k,q)- admits approximation ratio +2qk, where is the best known ratio for the Min-Size k-Edge-Connected Subgraph problem. We summarize the best known approximation ratios for (q,k)- in Table <ref>. In the next section <ref> we give some uncrossing properties of minimum and minimum+1 cuts of a graph needed for the proofs of Theorems <ref> and <ref>, that are proved in Sections <ref> and <ref>, respectively. § UNCROSSING PROPERTIES OF MINIMUM AND MINIMUM+1 CUTS In this section we give some “uncrossing” properties of near minimum cuts needed for the proofs of Theorems <ref> and <ref>. Let H=(V,J) be a -connected graph and let A,B V such that all four sets C_1=A ∩ B, C_2=A B, C_3=V (A ∪ B), C_4=B A are non-empty. 
Shrinking each of these sets into a single node results in a graph on 4 nodes, and we will further replace parallel edges by a single capacitated edge; see Fig. <ref>(a), where near each edge is written its (currently unknown) capacity. We will call this graph the square of A,B, the sets that correspond to shrunken nodes are corner sets, edges C_1C_2,C_2C_3,C_3C_4,C_4C_1 are side edges (that have capacity z,y,w,x, respectively), and C_2C_4,C_1C_3 are diagonal edges (that have capacity a,b, respectively). We will also use abbreviated notation d_i=d_H(C_i), d_H(A)=d_A, d_H(B)=d_B. The following equalities are known, and can be easily verified by counting the contribution of each edge to both sides of each equality: d_1+d_3 = d_A+d_B-2a , d_2+d_4 = d_A+d_B-2b . Since the function d(·) is symmetric, namely, d(S)=d(V S) for all S V, we will assume w.l.o.g that: * d_1 ≤ d_i for i=2,3,4. * d_2 ≤ d_4. * If d_1=d_2 then a ≥ b. Let =d_A+d_1-d_2. Then is even and x = /2-b y = d_2-d_1-a+/2 z = d_1-/2 w = d_B-d_1-a-b+/2 Note that (see Fig. <ref>(a)): x+z=d_1-b y+z=d_2-a x+y=d_A-a-b z+w=d_B-a-b Solving this equation system for x,y,z,w gives the lemma. When both A,B are -cuts, then it is well known that must be even, the square has no diagonal edges, and each side edge has capacity /2; c.f. <cit.>. The following is also known. Suppose that A is a -cut and B is a (+1)-cut. Then the square has no diagonal edges, and the following holds. * If is odd then the square has one side edge of capacity (-1)/2 and other three side edges have capacity (+1)/2; see Fig. <ref>(b). * If is even then the square has one side edge of capacity /2+1 and other three side edges have capacity /2; see Fig. <ref>(c). If is odd, d_1,d_2 have distinct parity, since =+d_1-d_2 must be even. By (<ref>,<ref>) (d_1,d_2)=(,+1) and (a,b)=(0,0). Then /2=(-1)/2 and from Lemma <ref> we get (x,y,z,w)=(-12,+12,+12,+12), which is the case in Fig. <ref>(b). If is even, d_1,d_2 have the same parity. By (<ref>,<ref>) (d_1,d_2)=(,), (a,b)=(0,0), and /2=/2. Then (x,y,z,w)=(2,2,2,2+1) by Lemma <ref>, which is the case in Fig. <ref>(c). The next lemma deals with two (+1)-cuts. Suppose that A,B are (+1)-cuts. If is even then one of the following holds. * The square has no diagonal edges and has two adjacent side edges of capacity /2 while the other two edges have capacity /2+1; see Fig. <ref>(a). * The square has one diagonal edge and all side edges have capacity /2; see Fig. <ref>(b). If is odd then one of the following holds. * The square has no diagonals, two opposite side edges have capacity (+1)/2, one side edge has capacity (+3)/2 and its opposite side edge has capacity (-1)/2; see Fig. <ref>(c). * The square has one diagonal edge, two side edge incident to one end of the the diagonal edge have capacity (-1)/2, while the other have capacity (+1)/2; see Fig. <ref>(d). * The square has both diagonal edges of capacity 1 each, and all side edges have capacity (-1)/2; see Fig. <ref>(e). * The square has no diagonal edges and all side edges have capacity (+1)/2; see Fig. <ref>(f). We apply Lemma <ref>. Suppose that is even. Since =+1+d_1-d_2 must be even, d_1,d_2 have distinct parity. By (<ref>,<ref>) (d_1,d_2)=(,+1) and b=0. Then /2=/2 and we have two cases. * (a,b)=(0,0). Then (x,y,z,w)=(2,2+1,2,2+1), which is the case in Fig. <ref>(a). * (a,b)=(1,0). Then (x,y,z,w)=(2,2,2,2), which is the case in Fig. <ref>(b). If is odd, d_1,d_2 have the same parity. By (<ref>,<ref>) (d_1,d_2)=(,) or (d_1,d_2)=(+1,+1); in both cases /2=(+1)/2. 
For (d_1,d_2)=(,) and /2=(+1)/2 we have the following cases. * (a,b)=(0,0). Then (x,y,z,w)=(+12,+12,-12,+32), which is the case in Fig. <ref>(c). * (a,b)=(1,0). Then (x,y,z,w)=(+12,-12,-12,+12), which is the case in Fig. <ref>(d). * (a,b)=(1,1). Then (x,y,z,w)=(-12,-12,-12,-12), which is the case in Fig. <ref>(e). For (d_1,d_2)=(+1,+1) we have one case * (a,b)=(0,0). Then (x,y,z,w)=(+12, +12,+12,+12), which is the case in Fig. <ref>(f). This concludes the proof of the lemma. From Lemma <ref> we get the following. Let H be a -edge connected graph and let ={S V: d_H(S) ∈{,+1}}. If is even then is uncrossable. Let A,B ∈. We need to show that A ∩ B,A ∪ B ∈ or A B,B A ∈. If one of A B,B A is empty then {A ∩ B,A ∪ B}={A,B}. If one of A ∩ B, V (A ∪ B) is empty then {A B,B A}={A,B}. In both cases, the lemma holds. If all corner four sets are non-empty then one of the cases (a,b) in Lemma <ref> holds. In case (a) A B,B A ∈ (see Fig. <ref>(a)), and in case (b) all corner sets are in (see Fig. <ref>(b)). Let H be a graph and a family of node subsets (or cuts) of H. It is easy to see that the relation {(u,v) ∈ V × V: u,v} is an equivalence; the quotient graph of is obtained by shrinking each equivalence class of this relation into a single node, and replacing every set of parallel edges by a single capacitated edge. is symmetric if S ∈ implies V S ∈. We say that is a crossing family if A ∩ B,A ∪ B ∈ for any A,B that cross, and if in addition (A B) ∪ (B A) ∉ then is a proper symmetric crossing family. By <cit.>, the problem of covering a symmetric crossing family is equivalent to covering the minimum cuts of a 2-edge connected graph, while <cit.> shows that the later problem admits a (1.5+)-approximation algorithm. We need the following result from <cit.> (specifically, see Lemmas 3.11–3.15 in <cit.>.) Let H be a -edge-connected graph with odd, and let be the family of (+1)-cuts of H. Then there exists a subfamily ' of the -cuts of H such that ∪' can be decomposed in polynomial time into parts whose union contains , such that every cut in belongs to at most 2 parts, and such that the cuts in each part correspond to the (+1)-cuts of its quotient graph, which is either (see Fig. <ref>): * A cycle of edges of capacity (+1)/2 each. * A cycle with one edge of capacity (-1)/2 and other edges of capacity (+3)/2 each. * A cube graph, which can occur only if =3. § ALGORITHM FOR (THEOREM <REF>) It is known that the problem of finding a minimum cost cover of cuts of size <k of a -connected graph admits the following approximation ratios for k ≤+2: * 1.5+ if k=+1 <cit.>. * 2 if k=+2 and is even by Corollary <ref>, since the problem of covering an uncrossable family admits ratio 2 <cit.>. We will show that this implies the approximation ratios for claimed in Theorem <ref>: * k-_0 if _0,k are both even. * k-_0+1/2+ if _0,k have distinct parity. * k-_0+1+ if _0,k are both odd. Algorithm <ref> computes a solution as required when _0,k are both even. The correctness of the algorithm is straightforward. The approximation ratio k-_0 follows from the observation that we pay 2 · opt at each iteration, so opt per increasing the connectivity by 1. If _0 is odd and k is even, then in the first iteration we apply the (1.5+)-approximation algorithm of <cit.> for covering -cuts only and have an extra 1/2+ term. If _0 is even and k is odd, then in the last iteration we apply a 2-approximation algorithm for covering (k-1)-cuts only, and also have an extra 1/2+ term <cit.>. If both of these occur then the extra term is 1+2≈ 1+. 
This concludes the proof of Theorem <ref>. § ALGORITHM FOR (K,Q)-FGC (THEOREM <REF>) Let ⟨ G=(V,E), U, k ⟩ be an instance of (k,q)-FGC. Let H be a subgraph of G. We will use the notation d(S)=d_H(S) and d_U(S)=d_{H ∩ U}(S). Recall that a subgraph H of G is (k,q)-flex-connected if (<ref>) holds, namely if: d(S) ≥ k+min{d_U(S),q} ∀ ∅≠ S ⊊ V. Suppose that H is (k,q-1)-flex-connected. Then d(S) ≥ k+min{d_U(S),q-1} ∀ ∅≠ S ⊊ V. One can see that if for ∅≠ S ⊊ V the latter inequality holds but not the former then d(S)=k+q-1 and d_U(S) ≥ q. Consequently, to make H (k,q)-flex-connected we need to cover the set family ℱ_q(H)={∅≠ S ⊊ V:d(S)=k+q-1, d_U(S) ≥ q}. Thus the following algorithm computes a feasible solution for (k,q)-FGC, see also <cit.>. We can use this observation to prove parts (i) and (iii) of Theorem <ref>: that (k,1)-FGC admits ratio 3.5+ε if k is odd, and that (k,q)-FGC admits ratio α+2q/k for unit costs, where α is the best known ratio for the Min-Size k-Edge-Connected Subgraph problem. If k is odd then it is known that the family {∅≠ S ⊊ V: d_H(S)=k} is laminar, and thus any of its subfamilies, and in particular ℱ_1(H), is also laminar. Part (i) now follows from the fact that the problem of covering a laminar family admits ratio 1.5+ε <cit.>. For part (iii) we need the known fact that any inclusion minimal cover J of a set family ℱ is a forest. To see this, suppose to the contrary that J contains a cycle C. Let e=uv be an edge of C. Since P = C∖{e} is a uv-path, then for any A ∈ℱ covered by e, there is e' ∈ P that covers A. This implies that J∖{e} also covers ℱ, contradicting the minimality of J. On the other hand, opt≥ kn/2, hence |J_i|/opt≤ 2(n-1)/kn <2/k. Thus the overall approximation ratio is α+2q/k, as claimed. It remains to show that the algorithm can be implemented in polynomial time. By <cit.> (see also <cit.>), if H is (k,i-1)-flex-connected then |ℱ_i(H)|=O(n^4), and the members of ℱ_i(H) can be listed in polynomial time. Since a (k,i-1)-flex-connected H is (k,i)-flex-connected if and only if ℱ_i(H)=∅, we can compute at iteration i an inclusion minimal cover of ℱ_i(H) in polynomial time. The proof of part (ii) of Theorem <ref> relies on several lemmas. Let H be (k,q-1)-flex-connected, q ≥ 2, and let A,B ∈ℱ_q(H) cross. Then: * If d_U(C_1) ≥ 1 then d(C_1)+d(C_2) ≥ 2k+q. * If d_U(C_1)=0 then C_2,C_4 ∈ℱ_q(H). Suppose that d_U(C_1) ≥ 1. Note that d_U(C_1)+d_U(C_2) ≥ d_U(A) ≥ q. Since H is (k,q-1)-flex-connected, d(C_1)+d(C_2)-2k ≥min{d_U(C_1),q-1}+min{d_U(C_2),q-1}. If d_U(C_2) ≥ 1 and q ≥ 2 then the r.h.s. is at least min{min{d_U(C_1),d_U(C_2)}+q-1,d_U(C_1)+d_U(C_2),2q-2}≥min{q,q,2q-2}=q. If d_U(C_2) =0 then d_U(C_1) ≥ q, hence d(C_1) ≥ k+q-1. Since d(C_2) ≥ d(C_1), we get d(C_1) +d(C_2) ≥ 2(k+q-1) ≥ 2k+q. Suppose that d_U(C_1)=0. Then d_U(C_2)=d_U(A)≥ q and d_U(C_4)=d_U(B) ≥ q. Since H is (k,q-1)-flex-connected, d(C_2) ≥ k+q-1 and d(C_4) ≥ k+q-1. Therefore 2(k+q-1) ≤ d(C_2)+d(C_4) ≤ d(A)+d(B) ≤ 2(k+q-1). Hence equality holds everywhere, so d(C_2)=d(C_4) = k+q-1, and C_2,C_4 ∈ℱ_q(H). Let k be even. If H is (k,1)-flex-connected then ℱ_2(H) is uncrossable. Let A,B ∈ℱ_2(H) cross. By Lemma <ref>(a,b), d(C_1)+d(C_2)=2k+1 < 2k+2. Thus we must be in case (ii) of Lemma <ref>, implying C_2,C_4 ∈ℱ_2(H). We need some definitions to handle the case q=2 and k odd. Two sets A,B ⊂ V cross if A ∩ B, V∖(A ∪ B) are non-empty. Let ℱ be a set family. We say that ℱ is a crossing family if A ∩ B,A ∪ B ∈ℱ for any A,B that cross, and if in addition (A∖B) ∪ (B∖A) ∉ℱ then ℱ is a proper crossing family. ℱ is symmetric if S ∈ℱ implies V∖S ∈ℱ.
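To make the corner-set bookkeeping used in the two lemmas above concrete, the following plain-Python sketch (the graph encoding, the function names, and the toy data are our own illustrative assumptions, not part of the paper) computes d(C_i) and d_U(C_i) for the four corner sets of two crossing sets A,B and reports which corners lie in ℱ_q(H) = {S : d(S)=k+q-1, d_U(S) ≥ q}.

def cut_degree(edges, S):
    # number of edges with exactly one endpoint in S
    return sum(1 for (u, v) in edges if (u in S) != (v in S))

def corner_analysis(V, edges, unsafe, A, B, k, q):
    # corner sets of two crossing cuts A, B and their (d, d_U, "in F_q") data
    A, B, V = set(A), set(B), set(V)
    corners = {"A&B": A & B, "A-B": A - B, "B-A": B - A, "V-(A|B)": V - (A | B)}
    unsafe_edges = [e for e in edges if e in unsafe]
    out = {}
    for name, C in corners.items():
        d, d_U = cut_degree(edges, C), cut_degree(unsafe_edges, C)
        out[name] = (d, d_U, d == k + q - 1 and d_U >= q)
    return out

# toy multigraph given as an edge list (hypothetical example data)
V = range(8)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 7), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)]
unsafe = {(0, 4), (3, 7)}
print(corner_analysis(V, edges, unsafe, A={0, 1, 4, 5}, B={0, 2, 4, 6}, k=3, q=1))

Such a check is only a reading aid for the case analysis; the algorithm itself relies on the lemmas, not on enumerating corners.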
By <cit.>, the problem of covering a symmetric proper crossing family is equivalent to covering the minimum cuts of a 2-edge connected graph, while <cit.> shows that the latter problem admits a (1.5+ε)-approximation algorithm. As was mentioned, ℱ_1(H) is laminar if k is odd, and it is also not hard to see that ℱ_1(H) is uncrossable if k is even, see <cit.>. The following lemma gives uncrossing properties of ℱ_2(H), assuming that ℱ_1(H)=∅. Let k be odd. If H is (k,1)-flex-connected then ℱ_2(H) can be decomposed in polynomial time into an uncrossable family ℱ' and a symmetric proper crossing family ℱ'' such that ℱ' ∪ℱ''=ℱ_2(H). Consider a decomposition of (k+1)-cuts as in Lemma <ref>. Note that if two (k+1)-cuts belong to the same part then so do their corner cuts. Hence to prove the lemma it is sufficient to provide a proof for each part of the decomposition as in Lemma <ref>; namely, we may assume that there is just one part. For an edge e of the quotient graph let u_e be the number of unsafe edges in the edge subset of H represented by e. Let us say that e is red if u_e ≥ 2, blue if u_e=1, and black otherwise (if u_e=0). Note that since ℱ_1(H)=∅, there cannot be a k-cut in the quotient graph that contains a blue or a red edge. Consequently, only in case (a) may the quotient graph have non-black edges, as in cases (b,c) every edge of the quotient graph belongs to some k-cut. On the other hand, every cut in ℱ_2(H) must contain a blue or a red edge, thus the only relevant case is (a). Let ℱ' be the family of cuts in ℱ_2 that contain a red edge and let ℱ''=ℱ_2∖ℱ'. Note that every cut in ℱ'' consists of 2 blue edges. We claim that ℱ' is uncrossable and that ℱ'' is a proper crossing family (clearly, ℱ'' is symmetric). Let A,B ∈ℱ_2(H) cross. If A,B ∈ℱ' then their square has two adjacent red edges, and this implies that A ∪ B,A ∩ B ∈ℱ' or A∖B,B∖A ∈ℱ'. Consequently, ℱ' is uncrossable. If A,B ∈ℱ'' then their square has 4 blue edges, and then all corner cuts are in ℱ''. Thus ℱ'' is a crossing family. Furthermore, the capacity of the cut defined by the set (A∖B) ∪ (B∖A) is 4(k+1)/2=2(k+1)>k+1, and thus ℱ'' is a proper crossing family. We now finish the proof of part (ii) of Theorem <ref>. We will apply Algorithm <ref>. At step 1, c(J) ≤ 2 opt, c.f. <cit.>. In the loop (steps 2,3) we combine Lemmas <ref> and <ref> with the best known approximation ratios for solving appropriate set family edge cover problems. If k is even then each of ℱ_1,ℱ_2 is uncrossable by Lemma <ref> and thus c(J_1),c(J_2) ≤ 2 opt, by <cit.>; consequently, c(J ∪ J_1 ∪ J_2) ≤ 6 opt. If k is odd then ℱ_1 is laminar and thus c(J_1) ≤ (1.5+ε) opt by <cit.>. After ℱ_1 is covered, ℱ_2 can be decomposed into an uncrossable family ℱ' and a symmetric proper crossing family ℱ'', by Lemma <ref>. We can compute a 2-approximate cover J' of ℱ' using the algorithm of <cit.> and a (1.5+ε)-approximate cover J'' of ℱ'' using the algorithm of <cit.>. Consequently, the approximation ratio of the algorithm is bounded by [c(J)+c(J_1)+c(J')+c(J'')]/opt ≤ 2+(1.5+ε)+2+(1.5+ε) =7+2ε≈ 7+ε, concluding the proof of part (ii) of Theorem <ref>. AHM D. Adjiashvili, F. Hommelsheim, and M. Mühlenthaler. Flexible graph connectivity. Mathematical Programming, pages 1–33, 2021. BCGI I. Bansal, J. Cheriyan, L. Grout, and S. Ibrahimpur. Improved approximation algorithms by generalizing the primal-dual method beyond uncrossable functions. CoRR, abs/2209.11209, 2022. URL: <https://arxiv.org/abs/2209.11209>. BCHI S. C. Boyd, J. Cheriyan, A. Haddadan, and S. Ibrahimpur. Approximation algorithms for flexible graph connectivity.
In FSTTCS, pages 9:1–9:14, 2021. CJ C. Chekuri and R. Jain. Augmentation based approximation algorithms for flexible network design. CoRR, abs/2209.12273, 2022. URL: <https://doi.org/10.48550/arXiv.2209.12273>. DKL E. A. Dinic, A. V. Karzanov, and M. V. Lomonosov. On the structure of the system of minimum edge cuts in a graph. In A. A. Fridman, editor, Studies in Discrete Optimization, pages 290–306. Nauka, Moscow, 1976. In Russian. DN Y. Dinitz and Z. Nutov. A 2-level cactus model for the system of minimum and minimum+1 edge-cuts in a graph and its incremental maintenance. In STOC, pages 509–518, 1995. GG H. N. Gabow and S. Gallagher. Iterated rounding algorithms for the smallest k-edge connected spanning subgraph. SIAM J. Computing, 41(1):61–103, 2012. GGTW H. N. Gabow, M. X. Goemans, É. Tardos, and D. P. Williamson. Approximating the smallest k-edge connected spanning subgraph by LP-rounding. Networks, 53(4):345–357, 2009. GGJ M. Garg, F. Grandoni, and A. Jabal Ameli. Improved approximation for two-edge-connectivity. CoRR, abs/2209.10265v2, 2022. URL: <https://arxiv.org/abs/2209.10265v2>. GGPS M. X. Goemans, A. V. Goldberg, S. A. Plotkin, D. B. Shmoys, É. Tardos, and D. P. Williamson. Improved approximation algorithms for network design problems. In SODA, pages 223–232, 1994. K S. Khuller. Approximation algorithms for finding highly connected subgraphs. In D. Hochbaum, editor, Approximation Algorithms for NP-hard problems, chapter 6, pages 236–265. PWS, 1995. N-th Z. Nutov. Structures of Cuts and Cycles in Graphs; Algorithms and Applications. PhD thesis, Technion, Israel Institute of Technology, 1997. TZ2 V. Traub and R. Zenklusen. A (1.5+ϵ)-approximation algorithm for weighted connectivity augmentation. CoRR, abs/2209.07860, 2022. URL: <https://arxiv.org/abs/2209.07860>. TZ V. Traub and R. Zenklusen. Local search for weighted tree augmentation and Steiner tree. In SODA, pages 3253–3272, 2022.
http://arxiv.org/abs/2307.01911v1
20230704204053
Dynamical system analysis in multiscalar-torsion cosmology
[ "Genly Leon", "Andronikos Paliathanasis" ]
gr-qc
[ "gr-qc", "hep-th", "math-ph", "math.MP" ]
http://arxiv.org/abs/2307.04683v1
20230706134136
CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering
[ "David Pride", "Matteo Cancellieri", "Petr Knoth" ]
cs.CL
[ "cs.CL", "cs.AI" ]
CORE-GPT Pride et al. The Knowledge Media Institute, The Open University, Milton Keynes, UK. {david.pride, matteo.cancellieri, petr.knoth}@open.ac.uk CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering. David Pride, Matteo Cancellieri, Petr Knoth ================================================================================================================= In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE[https://core.ac.uk]. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations. CORE-GPT's performance was evaluated on a dataset of 100 questions covering the top 20 scientific domains in CORE, resulting in 100 answers and links to 500 relevant articles. The quality of the provided answers and the relevance of the links were assessed by two annotators. Our results demonstrate that CORE-GPT can produce comprehensive and trustworthy answers across the majority of scientific domains, complete with links to genuine, relevant scientific articles. § INTRODUCTION LLMs demonstrate a remarkable ability to process and interpret natural language, understanding various nuances and intricacies of human language. They excel at text generation, crafting coherent and contextually relevant responses or content, ranging from casual conversations to technical articles. However, these are predictive models and cannot be relied upon to provide reliable sources or citations for any generated text. In order to better understand the problem, we used the GPT3.5 and GPT4 models to answer 50 questions from across ten different domains, and to provide the five top sources / citations for each of the answers. Each row in Figure <ref> shows the results for a single answer. A green dot represents a genuine, factual citation with a paper that exists or a link that goes directly to the paper itself. A red dot represents a completely fictional paper that simply does not exist. The yellow dots were used where there was what we termed conflation, meaning the provided citation or source was not real, but used either a mix of real titles or real author names, or linked to a completely different paper entirely. This shows that 22% of references for GPT3.5 and less than 20% for GPT4 were factual. Whilst it can be argued that GPT3.5 and GPT4 were not designed to reference evidence <cit.>, it can be widely observed that people have attempted to use them for these purposes and that it would be valuable if they could be used in this way. In this paper, we address this issue by introducing CORE-GPT. Our main contributions are: * We provide empirical evidence demonstrating that GPT3.5 and GPT4 cannot be relied upon to provide genuine sources or references for generated text. * We provide a solution that combines the power of GPT models and a global open research corpus to deliver a credible and trustworthy question-answering solution, accompanied by references to research literature. * Our question-answering solution is capable of providing answers including references to recently published research without the need for retraining the GPT models.
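As a rough illustration of the kind of check behind Figure <ref> (the prompt wording, the helper names, and the CORE endpoint details below are our own assumptions, not the exact pipeline used for the figure), one can ask a GPT model for an answer with five supporting references and then try to resolve each returned title against a scholarly index such as CORE:

import requests
from openai import OpenAI

CORE_SEARCH = "https://api.core.ac.uk/v3/search/works"  # assumed CORE v3 endpoint; verify against the CORE API docs
CORE_KEY = "YOUR-CORE-API-KEY"                           # placeholder

def answer_with_references(question, model="gpt-4"):
    # ask the model for an answer plus five supporting references, one per line as "title | url"
    client = OpenAI()
    prompt = (question + "\n\nList the five most relevant scientific papers supporting "
              "your answer, one per line, formatted as 'title | url'.")
    resp = client.chat.completions.create(model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def title_found(title):
    # crude existence check: does a work with a near-identical title appear in CORE?
    r = requests.get(CORE_SEARCH, params={"q": title, "limit": 3},
                     headers={"Authorization": "Bearer " + CORE_KEY}, timeout=30)
    hits = r.json().get("results", [])
    return any(title.lower() in (h.get("title") or "").lower() for h in hits)

A reference that fails such a lookup still requires manual inspection before being labelled fictional, since a missing title can also reflect index coverage gaps rather than fabrication.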
§ RELATED WORK The term Large Language Model has been in existence for many decades; however, the LLMs we focus on here are extensions of the transformer model architecture introduced in 2017 by Vaswani et al. in their seminal paper 'Attention Is All You Need', which led to the development of the BERT transformer models and their siblings SciBERT <cit.> and RoBERTa <cit.>, and to GPT-2 <cit.>, GPT-3 <cit.> and most recently GPT4 <cit.>. The advancements and overall recent developments in LLMs have been exhaustively reviewed by several scholars, including Fan et al. <cit.> and Zhao et al. <cit.>, whose comprehensive surveys offer in-depth analyses of this rapidly evolving discipline. This paper will therefore not reiterate these developments. LLMs have demonstrated remarkable capabilities in many areas. There are however significant challenges associated with the use of LLMs. There have been concerns about the risk of plagiarism and the potential impact on education and assessment <cit.>. There are also specific concerns about the implications for the medical <cit.> and legal <cit.> domains. Beyond these domain-specific concerns, the robustness of LLMs has also been questioned. Issues such as hallucinations, or the generation of statements that appear credible but are in fact entirely fabricated, have been widely reported. In a study of particular interest to scientists and researchers, Gao et al. <cit.> showed that models based on the Generative Pre-training Transformer (GPT) architecture could generate abstracts for scientific articles that were often indistinguishable from those authored by humans. However, Alkaissi and McFarlane <cit.> conducted a study to evaluate ChatGPT's ability to answer complex biomedical and healthcare-related questions. Their results demonstrated mixed quality in the generated responses, with answers consisting of a blend of factual information and fabricated content. Crucially, when ChatGPT was asked to provide citations and PubMed IDs to support its answers, all the provided references were fictional, and the given PubMed IDs were simply sequences of random numbers with no relation to existing papers. This research, corroborated by additional studies <cit.>, underscores a profound problem with LLMs generating authentic-sounding but entirely fictional content. These challenges and the results shown in Table <ref> highlight a significant hurdle that needs to be overcome in order to be able to leverage the abilities of LLMs for question answering whilst limiting the potential for false or misleading answers. The focus of our work in this paper is on addressing this credibility gap, by proposing a novel approach that combines Open Access scientific literature with LLMs to enhance the reliability and trustworthiness of these systems. § OUR SOLUTION - CORE-GPT §.§ CORE-GPT Workflow CORE-GPT has been developed specifically to address the problems discussed in the previous sections. We use a three-stage approach to returning answers to user questions with links to relevant full-text papers in CORE. In Stage 1, the original question is passed to the GPT4 API with several instructions: * Identify the key terms within the question * Enrich with close synonyms * Formulate this into a search query. A sample question and the formatted search response can be seen below: Original user question: What strategies can be implemented to improve literacy rates in rural primary schools in developing countries?
Formatted query: strategies improve literacy rates rural primary schools developing countries OR low-income OR underdeveloped OR third-world In Stage 2, the formatted search query is then passed to the CORE API, which returns the five most relevant papers where the full-text content is available. Stage 3 is the key to the novel solution provided by CORE-GPT. We pass the titles and abstracts returned in Stage 2 back to the GPT4 API with further instructions: Generate a comprehensive answer to the following question (but no more than 160 words) solely based on the content provided. Format the links to the papers as follows: url: $url, abstract: $abstract, $question In Stage 2, the formatted search query is then passed to the CORE API which returns the five most relevant papers where the full-text content is available. The answer and provided links are then shown to the user. The full workflow can be seen in Figure <ref> §.§ The CORE-GPT User Interface Initially, CORE-GPT will be made available on the CORE website as a new web-based question / answer platform (Figure <ref>). Further development will allow for the service to be made available via the CORE API. This is discussed in the Future Work section (Section <ref>). A sample result is shown in Figure <ref>. §.§ Benefits of CORE-GPT The key benefit of CORE-GPT is in ensuring that the content of the generated answers is drawn from published scientific literature, which is then subsequently referenced. This greatly reduces the potential for hallucinations. There are further benefits derived from the constraints placed on the model. In our evaluation, there were instances where, despite the massive-scale corpus that CORE-GPT draws its answers from, there was not enough relevant content to formulate a comprehensive answer. Below is an example question from the questions dataset used for the evaluation where this was the case: "What are the potential long-term health impacts of regular use of over-the-counter pain medications on the liver and kidney function in young adults?" In cases like these, the GPT4 model is capable of recognising the lack of relevant responses. If a complete answer cannot be given, the user will be informed with the following type of message: "Regular use of over-the-counter (OTC) pain medications can potentially impact liver and kidney function in young adults. However, the provided results do not offer specific information on the long-term health impacts of such medications on these organs. To obtain a comprehensive answer, further research on this topic would be necessary." In our evaluation we found that whilst this type of answer was understandably low scoring in terms of comprehensiveness and utility, it scored highly for trustworthiness. The key factor here is that the model is forced to be honest when it does not know something. This greatly reduces the potential for hallucinations and increases the overall viability and usability of CORE-GPT in academic question / answering. Another key benefit is intrinsically linked to the way CORE operates as an Open Access infrastructure. Anyone who has used the latest GPT models will almost certainly be familiar with the response 'I'm sorry for the inconvenience. As an AI model with a knowledge cutoff in September 2021, I don't have real-time information'. CORE, however, is constantly aggregating content from the global network of Open Access repositories, and as soon as a document is indexed in CORE, it is available to CORE-GPT to be used in answers and cited.
The search shown in Figure <ref> was undertaken during the second week of May 2023. The results contain papers published as recently as April 2023. As CORE-GPT is designed to work in this way, this removes the knowledge cut-off date experienced when using just the GPT models themselves. § EVALUATION METHODOLOGY §.§ Data Sources CORE-GPT is designed to provide citations to the research papers used to formulate the answers. All cited research papers are drawn from the CORE corpus of Open Access research literature. CORE is currently one of the largest aggregators of OA scholarly knowledge, aggregating content from the global network of almost 11,000 institutional, pre-print and publisher repositories and, as of May 2023, hosts over 32 million full-text research papers <cit.>. §.§ Question generation Our first task was to generate a dataset of questions that could be used to test the performance of CORE-GPT and also to compare this performance against large language models such as GPT3.5 and GPT4. Additionally, we wanted to ascertain whether the models themselves and also CORE-GPT were more successful in some domains and less successful in others. We therefore generated a dataset of questions based on the split of domains in the CORE dataset. The domains with the largest amount of full-text content in CORE were selected. We added education as the final domain to give 20 domains. To aid in the rapid development of the questions dataset, we elected to use a large language model. GPT-4 was chosen for its recency and known abilities for this task. Using the list of domains previously discussed, the OpenAI GPT-4 API was used to generate the questions using the following prompt: messages = [ {"role": "system", "content": "write a graduate level research question in the following domain, only reply with the body of the question itself:"}, {"role": "user", "content": domain}, ] Five questions were generated from each domain for a total of 100 questions. Overall, the question generation methodology was effective and allowed for rapid generation of the questions dataset. There are, however, some potential limitations that this method may introduce, which are discussed in the Discussion section (Section <ref>). The datasets of all questions and answers with accompanying citations can be found in the Github repository for this study[https://github.com/oacore/core-gpt-evaluation]. §.§ Evaluation Metrics Effectively evaluating CORE-GPT requires a two-step approach as both the given answer and the provided citations must be validated. We elected to use three metrics for each of the answers as follows: * Comprehensiveness: How comprehensively is the question answered? * Trust: How trustworthy is the answer? * Utility: How useful is the answer? For the citations, we use relevance as the metric, that is, how relevant the given reference is to the original question. To enable evaluation of the results, a browser-based evaluation platform was developed which sequentially displayed each of the 100 questions and answers and the titles, abstracts and links to the five papers for each answer. For each question, the three answer metrics shown above and the relevance score for each of the citations could be assigned a value from zero to ten. Two annotators were retained and were given written instructions and training using the evaluation platform with sample data. Inter-annotator agreement for each metric was measured using Cohen's Kappa with quadratic weights.
This measure was chosen for the task as it accounts for both small and large differences of opinion more accurately than unweighted Kappa. The results for the inter-annotator agreement can be seen in Table <ref>. § RESULTS §.§ Quality of answers Using the evaluation platform, the annotators were asked to rank each answer according to the three metrics introduced previously: comprehensiveness, trust and utility. Each of these metrics could be scored from 0 (not at all) to 10 (completely) for each answer. Figure <ref> shows the mean comprehensiveness, trust and utility scores for the answers from each of the 20 domains. CORE-GPT performs exceptionally well across most domains, but is less successful in a few areas. In 75% of the domains, the mean comprehensiveness, trust and utility scores were 8 points or greater, and 9 points or greater in over half of the domains, indicating that CORE-GPT provides highly relevant, factual and, most importantly, referenced answers. A full breakdown of all scores is shown in Tables <ref> and <ref>. It is worth noting that in the domains where the answers were deemed by the annotators to be less comprehensive and less useful, the trust scores remained fairly high (>6 across all domains), indicating that overall the given answers were considered trustworthy. We investigated whether there was a relationship between the domain scores for comprehensiveness, trustworthiness and utility and the number of research papers in CORE for each respective domain (Figure <ref>). However, we found only a weak correlation (Pearson's r = 0.23, n = 20), indicating that having less research content in some domains does not fully explain the lower performance in these areas. CORE is a comprehensive source of multidisciplinary research content <cit.> and it might be that the domains in which there is genuinely less content are not necessarily insufficiently represented in CORE. We further examined whether the length of the abstracts given to the model to generate the answers had an impact on the quality scores for the answers. There is a wide variance in mean abstract length across the domains, from economics (171 words) to engineering (329 words); we were therefore interested to see if this influenced the scores for comprehensiveness and utility. However, we observed no correlation between these scores and the mean abstract lengths in each domain (Pearson's r = -0.02, n = 20). §.§ Citation Relevance In contrast to the results for GPT3.5 and GPT4 shown in Figure <ref>, all citations provided by CORE-GPT are, by design, links to genuine research papers. Therefore the evaluation was based on testing not the existence of these papers, but their relevance to the user's original question. The annotators were asked to rank each citation from 0 (not relevant at all) to 10 (completely relevant). Figure <ref> shows the mean relevance score for each of the five citations across all domains. Based on the previously discussed Figure <ref> we observed that CORE-GPT provides comprehensive, trustworthy and useful answers for the majority of the domains. However, in some domains, such as Geology, History and Art, comprehensiveness and utility were lower. We were therefore interested to find out to what extent the ability of CORE-GPT to provide good-quality answers is linked to the quality of the retrieved references.
We found that there is a very strong correlation between the relevance of the retrieved references and comprehensiveness, trust and utility across domains respectively (Pearson r = 0.77 (comp.); r = 0.83 (trust); r = 0.80 (utility), n=20). This suggests that the ability to retrieve relevant references with respect to a user's question has a major impact on the quality of CORE-GPT's answers. The annotators were asked to score the relevance of each of the five retrieved references separately, enabling us to test the performance of our reference retrieval functionality. A well optimised ranking function should retrieve the most relevant references first. As a result, we expected to observe that the top retrieved references would be assigned higher relevance scores than the lower-ranked references by the annotators on average. The results reported in Table <ref> indeed confirm this trend. § DISCUSSION Whilst the overall performance of CORE-GPT is very good, there are still some limitations to consider. CORE-GPT draws its answers and references from the body of Open Access literature. Whilst OA now covers a growing proportion of published scientific articles, there is still a significant quantity that is locked behind publishers' paywalls which CORE-GPT cannot currently access. However this problem, and the issues with current publishing paradigms in general, extend far beyond the scope of this study. It should be noted that whilst CORE-GPT was tested across a wide range of domains, only five questions per domain were used for the evaluation. This was to limit the burden on the annotators, who validated 100 answers and checked all 500 links to references. Further evaluation could therefore be undertaken with a larger cohort of annotators. In the questions dataset, a small number of questions are somewhat basic and not really at the level that would be expected of a research question. Further, it can be seen that there is overlap in the phrasing of some questions, leading to similar questions in some domains. Whilst this reduced the variety of questions by a small margin, we remain confident in the overall results presented here. Across all domains there is very strong correlation between the comprehensiveness, trust and utility scores for the answers and the relevance of the citations (Pearson r = 0.77 (comp.); r = 0.83 (trust); r = 0.80 (utility), n=20). This indicates that it is access to high quality, relevant literature that is central to delivering high quality answers. § CONCLUSION In this paper we introduce CORE-GPT, a framework that combines LLMs and massive-scale Open Access scientific corpora to deliver a trustworthy, evidence-based question-answering platform. CORE-GPT is an overtly simple, yet elegant solution to the problems that arise when LLMs are asked to provide factual, evidence-based answers. Our evaluation results demonstrate that the answers provided by CORE-GPT are, on the whole, comprehensive, useful and, most importantly, trustworthy. Further, all references generated by the platform are, by design, genuine research papers held within CORE. § FUTURE WORK The results from the evaluation show that CORE-GPT performs well across the majority of scientific domains. This provides a strong foundation to now develop a range of applications using the central CORE-GPT architecture. The initial version of CORE-GPT uses the titles and abstracts of the five most relevant papers as the source for the given answers.
Due to the limitations in the number of tokens that can be passed to the GPT4 model, it is not currently possible to pass the entire full-text content of all papers. This is something that will undoubtedly change in the future and may lead to even stronger results. Our initial plan includes making the current version of CORE-GPT available as an addition to the CORE API V3.0. Further, CORE provides a range of management tools for repositories and we see strong potential in developing both an embedded repository version of the service and also a recommender system for repositories based on the CORE-GPT architecture. § DATA AND CODE AVAILABILITY All data and software code used for the evaluation of CORE-GPT are available to promote transparency and reproducibility of the findings. The dataset of questions and answers and the source code used for the analysis and visualisations in this study are accessible on the CORE-GPT GitHub repository[https://github.com/oacore/core-gpt-evaluation]. Any questions or requests for further information can be addressed to the corresponding author.
http://arxiv.org/abs/2307.02160v1
20230705100132
Invariance principle for Lifts of Geodesic Random Walks
[ "Jonathan Junné", "Frank Redig", "Rik Versendaal" ]
math.PR
[ "math.PR", "math.DG", "60J65, 60K35, 58J65" ]
TU Delft, Delft Institute of Applied Mathematics, Mekelweg 4, 2628 CD Delft, Netherlands. Emails: We consider a certain class of Riemannian submersions π : N → M and study lifted geodesic random walks from the base manifold M to the total manifold N. Under appropriate conditions on the distribution of the speed of the geodesic random walks, we prove an invariance principle; i.e., convergence to horizontal Brownian motion for the lifted walks. This gives us a natural probabilistic proof of the geometric identity relating the horizontal Laplacian Δ_ℋ on N and the Laplace-Beltrami operator Δ_M on M. In particular, when N is the orthonormal frame bundle O(M), this identity is central in the Malliavin-Eells-Elworthy construction of Riemannian Brownian motion. Invariance principle for Lifts of Geodesic Random Walks Rik Versendaal June 2023 ======================================================= § INTRODUCTION In this paper we consider geodesic random walks on a Riemannian manifold (M,g) and consider their horizontal lift into a manifold (N,g̃) such that there is a Riemannian submersion π : N → M. A motivating example of this setting is the orthonormal frame bundle π_O(M) : O(M) → M of a Riemannian manifold. This example is the basis of the Malliavin-Eells-Elworthy construction of Brownian motion. The important point in this setting is that the horizontal Brownian motion has as a generator the horizontal Laplacian, which is a sum of squares of globally defined vector fields; i.e., it is in Hörmander form Δ_ℋ = ∑_i=1^d H_i^2, where d is the dimension of the manifold. Because of this, the Markov process generated by Δ_ℋ can be constructed as the solution of a Stratonovich SDE <cit.> driven by an ℝ^d-valued Brownian motion. Then, the Brownian motion on the manifold is the projection of this horizontal Brownian motion. This is based on the fact that Δ_ℋ (f∘π) = (Δ_M f) ∘π for all smooth f : M → ℝ. The proof of identity (<ref>) in <cit.> is based on an explicit, somewhat involved computation. Horizontal Brownian motion is extensively studied in Baudoin's monograph <cit.>. Brownian motion on M can be obtained as a scaling limit of geodesic random walks as initially considered by Jørgensen <cit.>. It is therefore natural to lift these walks horizontally in order to obtain horizontal Brownian motion in the scaling limit. As a consequence of such a weak convergence result, the horizontal Brownian motion on the total manifold N and the Brownian motion on the base manifold M are then π-related automatically. It is precisely the aim of our paper to prove this result for a class of geodesic random walks, in the setting of Riemannian submersions. We start in section 3 by proving this invariance principle, and its corollary (<ref>) for the orthonormal frame bundle; i.e., the context of <cit.>. Then in section 4 we consider general submersions, where we prove the same result and provide several examples. § RANDOM WALKS AND HORIZONTAL RANDOM WALKS In this section we introduce the stochastic processes we study, namely, horizontal random walks. To do so, we first introduce the analogue of random walks in M, so-called geodesic random walks, following <cit.>. Afterwards, we explain how these geodesic random walks can be lifted to the total space N along a Riemannian submersion π : N → M. §.§ Geodesic random walk We consider a d-dimensional geodesically complete Riemannian manifold M with metric g, and denote by T_pM the tangent space of M at p ∈ M. In order to describe increments of our random walks, we have to consider a collection of probability measures μ_p on T_pM.
We then say that μ_p is measurable (or continuous, smooth) as a function of p if for every smooth coordinate system x about p, the associated family of measures on ℝ^d (where we use {∂/∂x^i}_1≤i≤d as a basis for T_pM to identify T_pM with ℝ^d) is measurable (or continuous, smooth). By smoothness of the transition maps, if this holds for one smooth coordinate system, then it holds for all smooth coordinate systems. Such a collection of probability measures μ_p on T_pM, p ∈ M, depending in a measurable way on p ∈ M, is called a distribution of increments. The nomenclature increment is inspired from <cit.>, where μ_p describes the direction in which the random walk follows a geodesic when starting from p. More precisely, we define the following Markov processes based on {μ_p}_p ∈ M: * The discrete-time unit speed random walk {S_k}_k∈ℕ based on {μ_p}_p ∈ M is defined via its transition operator Pf(p) := 𝔼[f(S_k+1)|S_k=p] = ∫_T_pM f(exp_p(v)) μ_p(dv); * The discrete-time random walk with speed α based on {μ_p}_p ∈ M is denoted by {S^(α)_k}_k∈ℕ and is defined via its transition operator P^(α)f(p) := 𝔼[f(S^(α)_k+1)|S^(α)_k=p] = ∫_T_pM f(exp_p(α v)) μ_p(dv); * Finally, the continuous-time process {Z^(α)}_t≥ 0 is defined via its generator L^(α) f(p) := α^-2(P^(α)f(p) - f(p)). The process {S_k}_k∈ℕ evolves as follows: whenever S_k=p, S_k+1 is obtained by randomly choosing X_k+1 on T_pM according to the measure μ_p and following the geodesic starting at p in the direction X_k+1 for time 1, and analogously for the walk with speed scaled by α. In what follows, we want to prove weak convergence to (horizontal) Brownian motion for the continuous walk {Z^(α)_t}_t≥0 and its horizontal lift (defined below) as α tends to zero. This will then immediately imply the same weak convergence results for the discrete walk {S^(α)_⌊α^-2t⌋}_t≥0 as α tends to zero. In order to proceed, we need some conditions on the distribution of increments. Because we aim at proving convergence to Brownian motion, there is a centering and variance condition. Finally, in order to prove uniform convergence of generators, it is convenient to have an additional third moment condition. More precisely, we make the following assumptions: [Centering and covariance] For every p ∈ M, the measure μ_p has zero expectation and its covariance equals the inverse metric; i.e., ∫_T_pM v μ_p(dv) = 0, ∫_T_pM v ⊗ v μ_p(dv) = g^-1(p), or equivalently, in any smooth coordinate system about p, ∫_T_pM v^i μ_p(dv) = 0, ∫_T_pM v^iv^j μ_p(dv) = g^ij(p), i,j = 1, …, d. [Third moment condition] The third moment of the collection of measures {μ_p}_p∈ M is finite, uniformly on compacts; i.e., for all compact K⊂⊂ M, sup_p∈ K∫_T_pM ‖v‖^3 μ_p(dv) < +∞. If p↦μ_p is invariant under parallel transport, which is the condition considered in <cit.> in order to mimic identically distributed increments, and Assumptions <ref> and <ref> are satisfied for a single point p∈ M, then they are satisfied for all points p∈ M. §.§ Horizontal lift of geodesic random walks Now that we have defined geodesic random walks on the base manifold (M, g), we can construct a new process on the total manifold (N, g̃) carrying a metric g̃ that will be specified later on. In order to define this process, we recall some terminology from Riemannian geometry. A Riemannian submersion π : (N,g̃) → (M, g) is a smooth surjective map whose differential is an isomorphism dπ_u : (ker dπ_u)^⊥→ T_π(u)M which is also an isometry. Here ⊥ denotes the orthogonal complement with respect to the metric g̃ in N.
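As a concrete sanity check of these definitions, the following sketch (the choice of manifold, the step distribution, and all names below are our own illustrative assumptions satisfying Assumptions <ref> and <ref>, not constructions from the paper) simulates the discrete skeleton S^(α)_⌊α^-2 t⌋ of the rescaled walk on the unit sphere S^2 ⊂ ℝ^3, where exp_p(v) = cos(‖v‖)p + sin(‖v‖)v/‖v‖:

import numpy as np

rng = np.random.default_rng(0)

def exp_sphere(p, v):
    # exponential map of the unit sphere at p applied to a tangent vector v
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def increment(p):
    # v in T_p S^2 with zero mean and covariance the identity on T_p S^2
    # (the inverse metric in an orthonormal basis): v = sqrt(2)(cos(phi) e1 + sin(phi) e2)
    a = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = a - np.dot(a, p) * p
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(p, e1)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return np.sqrt(2.0) * (np.cos(phi) * e1 + np.sin(phi) * e2)

def walk(alpha, t_max, p0=np.array([0.0, 0.0, 1.0])):
    # roughly t_max / alpha^2 geodesic steps of speed alpha
    p = p0.copy()
    for _ in range(int(t_max / alpha**2)):
        p = exp_sphere(p, alpha * increment(p))
    return p

print(walk(alpha=0.05, t_max=1.0))

Averaging many such runs for small α gives an empirical picture of the Brownian limit described by the invariance principle below.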
In the setting of Riemannian submersions, the tangent space T_uN of the total manifold N at a point u∈N splits into the vertical and horizontal subspaces as follows: 𝒱_u N := ker dπ_u, ℋ_u N := (𝒱_u N)^⊥, T_u N = ℋ_u N ⊕𝒱_u N. Their disjoint unions form two subbundles of TN, denoted respectively by 𝒱N = ⊔_u∈N 𝒱_u N and ℋN = ⊔_u∈N ℋ_u N. This splits the metric g̃ on TN into its two factors g_𝒱N and g_ℋN. A horizontal vector field X̃ ∈Γ(ℋN) is π-related to a vector field X ∈Γ(TM) if for any u ∈N it holds that dπ_u(X̃_u) = X_π(u). We stress that relating manifolds via a Riemannian submersion makes sure that horizontal π-related tangent vectors as in (<ref>) have the same norm, because dπ|_ℋN is an isometry. In several situations, the total manifold N comes with a natural projection map π : N → M defining the vertical subspaces 𝒱_u N = ker dπ_u but no specification of a metric g̃. One can then use any connection form ω to define the horizontal subspaces ℋ_u N = ker ω_u. Now, with the help of this choice of horizontal bundle, obtained either by the Riemannian submersion or by the specification of a connection form, one can lift any smooth curve on the base manifold to the total manifold with respect to the horizontal bundle. We can now define the horizontal lift of a curve γ: I→M. We denote γ'(t)=d/dt(γ(t)). The horizontal lift γ̃ with respect to ℋN starting at u_0 ∈ N_γ(0) = π^-1({γ(0)}) of a smooth curve γ : I → M is the unique curve satisfying π∘γ̃ = γ, γ̃'(t) ∈ℋ_γ̃(t)N. Similarly, the horizontal lift ṽ with respect to ℋN starting at u of a tangent vector v ∈T_π(u)M is given by differentiating (<ref>); ṽ = (dπ_u)^-1v ∈ℋ_uN. If ṽ is the horizontal lift of v ∈T_p M, then for every γ in M such that γ(0)=p and γ'(0)=v, and for every u∈N such that π(u)=p, the horizontal lift of v, denoted by ṽ= ṽ[v,u], equals γ̃'(0), where γ̃ is the horizontal lift of γ starting at γ̃(0)=u. We recall that the horizontal lift of a geodesic under a Riemannian submersion is again a geodesic (see for instance <cit.>). It is important to notice that geodesics in N with initial horizontal tangent vector are horizontal curves; i.e., the tangent vectors remain horizontal. Moreover, by the geodesic property, the tangent vector at any point of the curve is the parallel transport of the initial tangent vector. Given a distribution of increments {μ_p}_p∈ M we define its horizontal lift {μ̃_u}_u∈ N as follows: The distribution μ̃_u is obtained by first drawing v according to μ_π(u) and then lifting v to ṽ[v,u]. It then follows that the (discrete or continuous-time) random walks based on {μ_p }_p∈ M are horizontally lifted to the (discrete or continuous-time) random walks based on {μ̃_u }_u∈ N, and conversely, the projections of random walks based on {μ̃_u }_u∈ N are distributed as the random walks based on {μ_p}_p∈ M. As a consequence, the horizontal lift of the rescaled continuous-time random walk {Z^(α)_t}_t≥0 defined via its generator (<ref>) is the process {Z̃^(α)_t}_t≥0 on the total manifold N with generator defined on smooth compactly supported functions f : N→ℝ by L̃_α f(u) := α^-2∫_T_π(u)M [f(exp_u (αṽ[v,u])) - f(u)] μ_π(u)(dv), so that {π(Z̃^(α)_t)}_t≥0 = {Z^(α)_t}_t≥0 in distribution.
First of all, the (orthonormal) frame bundle plays a central role in defining stochastic processes in manifolds by constructing them from their Euclidean counterparts. This motivated our study of horizontal random walks. Second, considering the orthonormal frame bundle allows for a more streamlined proof of the invariance principle, therefore making it more instructive to consider first. Before we can state the main theorem, we need additional definitions. We start by defining the orthonormal frame bundle. An orthonormal frame u at p is an ordered choice of orthonormal basis {ue_i}_1≤ i≤ d of T_pM, where {e_i}_1≤ i≤ d is the canonical basis of ℝ^d. The set of all orthonormal frames at p is denoted O_p and their disjoint union O(M) := ⊔_p ∈ M O_p is referred to as the orthonormal frame bundle. The orthonormal frame bundle O(M) is a manifold of dimension d(d+1)/2 that comes with a natural submersion π_O(M) : O(M) →M sending any orthonormal frame u ∈O_p to the basepoint p. If (U, {x^i}_1≤i≤d) is a local chart in M about p, we can express the orthonormal basis of T_pM as ue_i = (ue_i)^j ∂/∂x^j, and this gives a local chart (π^-1(U), ({x^k}_1≤k≤d, {(ue_i)^j}_1≤i<j≤d)) in O(M) about u. It remains to define a splitting of TO(M), for instance, by specifying a notion of horizontality. A smooth curve u : I → O(M) is horizontal if for any e∈ℝ^d the tangent vector field u(t)e ∈ T_π(u(t))M is itself parallel with respect to the Levi-Civita connection ∇ on M along the curve π∘ u : I → M. This notion of horizontality induces the splitting TO(M) = ℋO(M) ⊕𝒱O(M) and allows us to lift smooth curves horizontally. Given a smooth γ: I →M and its horizontal lift γ̃ starting at u, we recover the parallel transport of tangent vectors along γ given by τ_γ; t_1t_2 : T_γ(t_1)M → T_γ(t_2)M : v ↦γ̃(t_2)(γ̃(t_1))^-1v. We can look at the horizontal lifts of the orthonormal basis vectors ue_i of T_pM induced by the orthonormal frame u at p, one for each e_i. Let u be an orthonormal frame at p. The canonical horizontal vector fields H_i(u) := ũe_i, i = 1,…,d, are the horizontal lifts with respect to ℋO(M) of the tangent vectors ue_i ∈ T_pM starting at u. To find a coordinate expression for these vector fields, consider a horizontal lift γ̃: I →O(M) that starts at u with γ'(0) = ue_i, where γ := π∘γ̃. By definition of the horizontal lift with respect to ℋO(M), H_i(u) = ũe_i = γ̃'(0) = γ̇^j(0)∂/∂ x^j + ((γ̃e_l)^m)'(0)∂/∂(ue_l)^m, and since the tangent vectors γ̃(t)e_l are parallel with respect to ∇ on M along the curve π∘γ̃: I →M whose initial tangent vector is ue_i, the parallel transport equation yields ((γ̃e_l)^m)'(0) = -(ue_i)^j(ue_l)^kΓ^m_jk, 1 ≤ l < m ≤ d, where {Γ^k_ij}_1≤i,j,k≤d denote the Christoffel symbols of ∇. The horizontal and vertical subbundles of TO(M) are thus respectively spanned by (see <cit.>) H_i(u) = (ue_i)^j (∂/∂ x^j - (ue_l)^kΓ_jk^m∂/∂ (ue_l)^m), i = 1, …, d, and V^k_j(u) := (ue_j)^l∂/∂ (ue_k)^l, 1 ≤ j < k ≤ d. A natural choice of metric compatible with this splitting is a Sasaki-Mok type metric introduced in <cit.> and <cit.> (see also <cit.>). The canonical 1-form θ and the connection form ω on O(M) associated to ∇ on M are the dual forms to the vector fields (<ref>) and (<ref>) given by θ^k(u) := (ue_l)^k dx^l, ω^j_i(u) := (ue_k)^j(Γ^k_lm (ue_i)^l dx^m + d(ue_i)^k). The Sasaki-Mok metric g̃ is defined pointwise by ⟨η, ξ⟩_g̃ := ⟨ dπ_O(M),u (η) , dπ_O(M),u(ξ) ⟩_g + ⟨ω_u(η), ω_u(ξ)⟩, where ⟨·, ·⟩ denotes an O(d)-invariant inner product on 𝔬(d).
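To see the horizontal lift to O(M) in coordinates, recall that the frame components solve the parallel transport equation along the projected curve. The following sketch (the latitude curve, the coordinates, and the crude Euler integration are our own illustrative choices, not taken from the paper) transports an orthonormal frame of S^2 along a circle of latitude, using Γ^θ_φφ = -sinθcosθ and Γ^φ_θφ = cotθ, and recovers the classical holonomy angle 2πcosθ_0:

import numpy as np

theta0 = np.pi / 3                 # colatitude of the curve phi -> (theta0, phi)
steps = 200000
dt = 2 * np.pi / steps

def transport_step(v, dt):
    # one Euler step of parallel transport along the latitude circle:
    # v_theta' = sin(theta0)cos(theta0) v_phi,  v_phi' = -cot(theta0) v_theta
    v_th, v_ph = v
    return np.array([v_th + dt * np.sin(theta0) * np.cos(theta0) * v_ph,
                     v_ph - dt * (np.cos(theta0) / np.sin(theta0)) * v_th])

# orthonormal frame at the starting point, in (theta, phi) coordinates:
# u e_1 = d/dtheta,  u e_2 = (1/sin(theta0)) d/dphi
frame = [np.array([1.0, 0.0]), np.array([0.0, 1.0 / np.sin(theta0)])]
for _ in range(steps):
    frame = [transport_step(v, dt) for v in frame]

# clockwise rotation of the transported u e_1 in orthonormal components,
# compared with the predicted holonomy 2*pi*cos(theta0) (mod 2*pi)
a, b = frame[0][0], np.sin(theta0) * frame[0][1]
print((-np.arctan2(b, a)) % (2 * np.pi), (2 * np.pi * np.cos(theta0)) % (2 * np.pi))

The lifted curve in O(M) is precisely the pair consisting of the point on the latitude circle and the transported frame; that the frame fails to close up after a full loop reflects the curvature of the connection.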
The global canonical horizontal vector fields H_i ∈Γ(ℋO(M)) allow us to define a horizontal Laplacian for the orthonormal frame bundle as a sum of squares: The horizontal Laplacian of O(M) is given by Δ_ℋ = ∑_i=1^d H_i^2. This operator is in Hörmander's form. In general, Nash's embedding theorem allows one to write the Laplace-Beltrami operator of M as a sum of squares of orthogonal projections (see for instance <cit.>) at the cost of extra terms coming from the dimension of the isometric embedding. The horizontal Laplacian and the Laplace-Beltrami operator satisfy the following relation, and this is a starting point in stochastic calculus on manifolds using (anti-)development: The following identity holds: Δ_ℋ(f ∘π_O(M)) = Δ_M f ∘π_O(M), f ∈ C^∞(M). The proofs of Proposition <ref> in <cit.> and <cit.> are more geometric in essence. Here, we deduce this relation as a corollary to the invariance principle of the horizontally lifted geodesic random walks on the orthonormal frame bundle. Let (M,g) be a geodesically complete Riemannian manifold and let {μ_p}_p∈ M be a distribution of increments on M satisfying Assumption <ref> and Assumption <ref>. Let {Z̃^(α)_t, t≥ 0} be the process with generator (<ref>). Then as α→ 0, this process converges to horizontal Brownian motion; i.e., the process with generator 1/2Δ_ℋ. Let f : O(M) → ℝ be a smooth compactly supported function. Given a frame u at p, we perform a Taylor expansion of f ∘γ̃_α, where γ̃_α is the horizontal lift starting at u of the curve γ_α(t) = exp_p(α tv). There is some 0 < s < 1 such that f ∘γ̃_α(1) = f(u) + .d/dt|_t=0(f ∘γ̃_α(t)) + 1/2.d^2/dt^2|_t=0(f ∘γ̃_α(t)) + 1/6.d^3/dt^3|_t=s(f ∘γ̃_α(t)). To compute the time derivatives, first note that the horizontal lift allows us to express γ_α'(t) = τ_γ_α; 0t(α v) = γ̃_α(t)(γ̃_α(0))^-1 (α v) = αγ̃_α(t) u^-1v = α(u^-1 v)^i γ̃_α(t) e_i. Thus d/dt(f ∘γ̃_α(t)) = df_γ̃_α(t)∘γ̃_α'(t) = α(u^-1 v)^i H_i f(γ̃_α(t)), and likewise, d^2/dt^2 f(γ̃_α(t)) = α^2(u^-1 v)^j (u^-1 v)^i H_j H_i f(γ̃_α(t)), d^3/dt^3 f(γ̃_α(t)) = α^3(u^-1 v)^k(u^-1 v)^j (u^-1 v)^i H_k H_j H_i f(γ̃_α(t)). Since u is an orthonormal frame, {ue_i}_1≤ i≤ d is an orthonormal basis of T_pM, and we get v^i = ⟨ v, ue_i ⟩_g = ⟨ u^-1v, e_i⟩_ℝ^d = (u^-1 v)^i. Recall the rescaled generator (<ref>) given by α^-2∫_T_pM [f ∘γ̃_α(1) - f(u)] μ_p(dv) = α^-2∫_T_pM [.d/dt|_t=0(f ∘γ̃_α(t)) + 1/2.d^2/dt^2|_t=0(f ∘γ̃_α(t)) + 1/6.d^3/dt^3|_t=s(f ∘γ̃_α(t))] μ_p(dv). The first term vanishes by the centering of the collection of measures, and the second one gives precisely the horizontal Laplacian. Indeed, under Assumption <ref>, by the linearity of the integral and the horizontal lift, we get that α^-2∫_T_pM.d/dt|_t=0(f ∘γ̃_α(t)) μ_p(dv) = α^-1 df_u ∘(dπ_u)^-1( ∫_T_pM v μ_p(dv)) = 0. Moreover, by (<ref>), we have α^-2∫_T_pM.d^2/dt^2|_t=0(f ∘γ̃_α(t)) μ_p(dv) = ∫_T_pM(u^-1 v)^j (u^-1 v)^i μ_p(dv) H_j H_i f(u) = ∫_T_pM v^jv^i μ_p(dv) H_j H_i f(u) = ∑_i=1^d H_i^2 f(u), which is the horizontal Laplacian. For the third order term, we conclude using Assumption <ref> as follows: π_O(M)(supp f) is compact by continuity of the projection, and we estimate α^-2∫_T_pM.d^3/dt^3|_t=s(f ∘γ̃_α(t)) μ_p(dv) = α∫_T_pM v^kv^jv^i H_k H_j H_i f(γ̃_α(s)) μ_p(dv) ≤α∫_T_pM ‖v‖^3 μ_p(dv) ∑_i,j,k=1^d ‖H_k H_j H_i f‖_∞ ≤αsup_q∈π_O(M)(supp f)∫_T_qM ‖v‖^3 μ_q(dv) ∑_i,j,k=1^d ‖H_k H_j H_i f‖_∞, which goes to 0 independently of the frame u as α→ 0.
Let (M, g) and (N, g̃) be Riemannian manifolds with Riemannian submersion π : N → M, and let d be the dimension of M. Each vector field X ∈Γ(TN) can be decomposed uniquely into its horizontal part X_ℋ ∈Γ(ℋN) and vertical part X_𝒱 ∈Γ(𝒱N), respectively. Under this setting, we consider the Laplace-Beltrami operator Δ_N on N, or even its horizontal and vertical parts, as follows: The horizontal Laplacian Δ_ℋ is the generator of the pre-Dirichlet form ℰ_ℋ(f, h) = -∫_N ⟨ (∇ f)_ℋ, (∇ h)_ℋ⟩_g̃ dVol_g̃, f, h ∈ C^∞_c(N). In local orthonormal frames {E_i}_1≤ i≤ d of ℋN and {F_j}_1≤ j ≤ l of 𝒱N, this operator can be rewritten as Δ_ℋ = ∑_i=1^d (E_i^2 - (∇_E_i E_i)_ℋ) - ∑_j=1^l (∇_F_j F_j)_ℋ. Analogously, the vertical Laplacian Δ_𝒱 is the vertical part of the Laplace-Beltrami operator on N, and Δ_N = Δ_ℋ+Δ_𝒱. Of course, since the F_j's are vertical, they cannot possibly be obtained as horizontal lifts. The last term in the horizontal Laplacian (<ref>) should thus vanish in order to obtain convergence of the generator of horizontally lifted geodesic random walks towards this operator. The following type of Riemannian submersion ensures that the last term indeed vanishes; (∇_F_j F_j)_ℋ = 0 (see <cit.> and <cit.>). The fibers N_p = π^-1({p}) of a Riemannian submersion π : N → M are said to be totally geodesic if any geodesic in a fiber, seen as a submanifold of N with the induced metric, is also a geodesic in N. Assuming that the submersion has totally geodesic fibers, the horizontal Laplacian (<ref>) takes the form Δ_ℋ = ∑_i=1^d (E_i^2 - (∇_E_i E_i)_ℋ). This operator is generally not in Hörmander's form, unlike the special case (<ref>) of the orthonormal frame bundle, which is a parallelizable manifold. Hörmander's theorem allows us to check the subellipticity of Δ_ℋ on the horizontal distribution ℋ. A distribution Λ of the tangent bundle TN is said to be bracket-generating if TN is generated by a finite number of Lie brackets of vector fields in Γ(Λ). Whenever the horizontal subbundle ℋN of TN is bracket-generating, the subellipticity of Δ_ℋ is guaranteed by Hörmander's theorem. Moreover, <cit.> guarantees in that case its self-adjointness on C_c^∞(N), and its associated pre-Dirichlet form has a unique closed extension. On the other hand, as 𝒱N is never bracket-generating, we will not consider vertically lifted geodesic random walks. We are now ready to state the invariance principle for the horizontal lift of the rescaled continuous-time random walk for these types of Riemannian submersions. As a corollary, we obtain the associated relation between the Laplace-Beltrami operator and the horizontal Laplacian. Let (M,g) and (N,g̃) be geodesically complete Riemannian manifolds equipped with a Riemannian submersion π : N → M with totally geodesic fibers such that the horizontal subbundle ℋN of TN is bracket-generating. Let {μ_p}_p∈ M be a distribution of increments on M satisfying Assumption <ref> and Assumption <ref>. Let {Z̃^(α)_t, t≥ 0} be the process with generator (<ref>). Then as α→ 0, this process converges to horizontal Brownian motion; i.e., the process with generator 1/2Δ_ℋ. Let π : N → M be a Riemannian submersion with totally geodesic fibers, and assume the horizontal distribution ℋ to be bracket-generating. Then the following identity holds: Δ_ℋ(f ∘π) = Δ_M f ∘π, f ∈ C^∞(M). Before proving Theorem <ref>, let us go through some simple examples from <cit.> where the restrictions on the Riemannian submersion, namely, that the fibers are totally geodesic and that the horizontal distribution is bracket-generating, are verified.
* The manifold (M, g) itself, with id_M : M → M as submersion. The horizontal distribution is the whole tangent space. Theorem <ref> then gives a short proof of the invariance principle for geodesic random walks on Riemannian manifolds. * The tangent bundle π_TM : TM → M equipped with the Sasaki metric <cit.> defined in terms of coordinates ({x_i}_1≤ i≤ d, {y_j}_1≤ j≤ d) about (p, v) in TM by ds^2 = g_ijdx^i dx^j + g_ijDy^i Dy^j, where D denotes the covariant differential with respect to ∇ on M; Dy^k = dy^k + Γ^k_ijy^i dx^j. * The orthonormal frame bundle π_O(M) : O(M) → M from Section 3. * A general class of spaces on which such an invariance principle holds are the principal bundles π_P : P → M with fiber Lie group G. Given a G-compatible connection form ω and a G-invariant metric b on G, there is a unique metric g̃ = π^*g + bω on P that makes π_P into a Riemannian submersion with totally geodesic fibers such that the horizontal distribution ℋ of ω is the orthogonal complement of the vertical distribution <cit.>. Whenever the horizontal distribution ℋ is bracket-generating, the subellipticity of Δ_ℋ is guaranteed and there is a unique closed extension of its associated pre-Dirichlet form. The previous examples fall under this category. The proof of Theorem <ref> is of course similar to the special case of Theorem <ref> for the orthonormal frame bundle O(M) presented in Section 3. Nevertheless, we no longer have the global canonical horizontal vector fields (<ref>). Let f : N → ℝ be a smooth compactly supported function. We perform a Taylor expansion of f ∘γ̃_α around u ∈ N_p, where γ̃_α is the horizontal lift starting at u of the curve γ_α(t) = exp_p(α tv). There is some 0 < s < 1 such that f ∘γ̃_α(1) = f(u) + .d/dt|_t=0(f ∘γ̃_α(t)) + 1/2.d^2/dt^2|_t=0(f ∘γ̃_α(t)) + 1/6.d^3/dt^3|_t=s(f ∘γ̃_α(t)). The first time derivative is given by d/dt f(γ̃_α(t)) = df_γ̃_α(t)∘(dπ_γ̃_α(t))^-1(γ_α'(t)) = ⟨∇ f, γ̃_α'(t) ⟩_g̃ = ⟨(∇ f)_ℋ,γ̃_α'(t) ⟩_g̃, where we used the fact that γ̃_α is horizontal for the last equality. To obtain the second time derivative, we use the Levi-Civita connection ∇ on N and use the fact that γ̃_α is a geodesic, being the horizontal lift of a geodesic under a Riemannian submersion; d^2/dt^2 f(γ̃_α(t)) = ⟨∇_γ̃_α'(t) (∇ f)_ℋ,γ̃_α'(t)⟩_g̃ + ⟨ (∇ f)_ℋ,∇_γ̃_α'(t)γ̃_α'(t)⟩_g̃ = ⟨∇_γ̃_α'(t) (∇ f)_ℋ,γ̃_α'(t)⟩_g̃. In particular, at time t = 0, consider the orthonormal basis {Ẽ_i}_1≤ i≤ d of ℋ_uN defined as the horizontal lifts of an orthonormal basis {E_i}_1≤ i≤ d of T_pM. Write v = v^iE_i ∈ T_pM. By linearity of the horizontal lift, we get γ̃_α'(0) = α v^iẼ_i, and thus .d^2/dt^2|_t=0(f ∘γ̃_α(t)) = α^2 v^iv^j ⟨∇_Ẽ_i (∇ f)_ℋ, Ẽ_j⟩_g̃ = α^2 v^iv^j (Ẽ_iẼ_j - (∇_Ẽ_i Ẽ_j)_ℋ) f. By Assumption <ref> on the first and second moments, we deduce that α^-2∫_T_pM.d/dt|_t=0(f ∘γ̃_α(t)) μ_p(dv) = α^-1 df_u ∘(dπ_u)^-1(∫_T_pM v μ_p(dv)) = 0, and α^-2∫_T_pM.d^2/dt^2|_t=0(f ∘γ̃_α(t)) μ_p(dv) = ∫_T_pM v^iv^j μ_p(dv) (Ẽ_iẼ_j - (∇_Ẽ_iẼ_j)_ℋ) f =∑_i=1^d (Ẽ_i^2 - (∇_Ẽ_iẼ_i)_ℋ) f. The last term is the horizontal Laplacian (<ref>) for a submersion with totally geodesic fibers. For the third time derivative, first define the horizontal Hessian Hess_ℋ f(Y, Z) = ⟨∇_Y (∇ f)_ℋ, Z⟩_g̃, Y, Z ∈Γ(TN), which is a symmetric covariant tensor of order 2. Its covariant derivative is thus the tensor given by ∇Hess_ℋ f(X, Y, Z) = X(Hess_ℋ f(Y, Z)) - Hess_ℋ f(∇_X Y, Z) - Hess_ℋ f(Y, ∇_X Z), X, Y, Z ∈Γ(TN).
Note that, again since γ̃_α is a geodesic, d^3/dt^3 f(γ̃_α(t)) = γ̃_α'(t)(Hess_ℋ f(γ̃_α'(t), γ̃_α'(t))) = ∇Hess_ℋ f(γ̃_α'(t), γ̃_α'(t), γ̃_α'(t)) + 2 Hess_ℋ f(∇_γ̃_α'(t)γ̃_α'(t), γ̃_α'(t)) = ∇Hess_ℋ f(γ̃_α'(t), γ̃_α'(t), γ̃_α'(t)). Locally, ∇Hess_ℋ f_u : T^3_uN → ℝ is a bounded operator, being multilinear on a finite-dimensional vector space, with operator norm given by C(u) = max_{(η, ξ, ζ) ∈ T^3_uN : ‖η‖_g̃‖ξ‖_g̃‖ζ‖_g̃ = 1} |∇Hess_ℋ f_u(η, ξ, ζ)|. This constant C(u) can be uniformly bounded since |∇Hess_ℋ f| : T^3N → ℝ is a continuous map on the compact set {(q,(η, ξ, ζ)) : q ∈supp f, (η, ξ, ζ) ∈ T^3_qN, ‖η‖_g̃‖ξ‖_g̃‖ζ‖_g̃ = 1}, and hence attains a maximum C > 0. Since ‖γ̃_α'(t)‖_g̃ = ‖γ̃_α'(0)‖_g̃ = α‖v‖_g, we are able to conclude by Assumption <ref> on the third moment; α^-2∫_T_pM.d^3/dt^3|_t=s(f ∘γ̃_α(t)) μ_p(dv) = α^-2∫_T_pM∇Hess_ℋ f(γ̃_α'(s), γ̃_α'(s), γ̃_α'(s)) μ_p(dv) ≤α C sup_q ∈π(supp f)∫_T_qM ‖v‖_g^3 μ_q(dv), which goes to 0 independently of u as α→ 0. Consider an α-rescaled continuous-time random walk on M that satisfies Assumption <ref> and Assumption <ref>. By Theorem <ref> with id_M : M → M as submersion, this process converges to Brownian motion; i.e., the process with generator 1/2Δ_M. On the other hand, by Theorem <ref>, the horizontal lift of the α-rescaled random walk converges to horizontal Brownian motion; i.e., the process with generator 1/2Δ_ℋ on N. Since both processes are Markov, and since the projection π : N → M is continuous, the corresponding generators must be π-related. This proves identity (<ref>) for the case of geodesically complete Riemannian manifolds and smooth compactly supported functions. The general case follows by restricting the diameter of the support of the collection of measures {μ_p} to, say, the unit ball, so that geodesics do not have arbitrarily large velocities. To extend beyond smooth compactly supported functions, a partition of unity argument concludes. Of course, Corollary <ref> is a special case of Corollary <ref>. Here, we proposed an alternative to the classical approach, which we briefly outline for the sake of completeness. Essentially, the proof reduces to showing that the Levi-Civita connection ∇ on N is π-related to the Levi-Civita connection ∇ on M (see <cit.>). This follows from the fact that both the inner products for the specific metrics and the Lie brackets preserve π-relations; ⟨X̃, W̃⟩_g̃ = ⟨X, W⟩_g ∘π, dπ([Ỹ, Z̃]) = [Y, Z] ∘π for vector fields W̃, X̃, Ỹ, Z̃ ∈Γ(ℋN) π-related to W, X, Y, Z∈Γ(TM), and hence ⟨X̃, [Ỹ, Z̃] ⟩_g̃ = ⟨X, [Y, Z] ⟩_g ∘π. It remains to express the Levi-Civita connection ∇ on N via Koszul's formula for any triple X, Y, Z ∈Γ(TN); 2⟨∇_X Y,Z⟩_g̃ = X⟨ Y,Z⟩_g̃ + Y⟨ X, Z⟩_g̃ - Z⟨ X,Y⟩_g̃ -⟨ X, [Y,Z]⟩_g̃ -⟨ Y, [X, Z]⟩_g̃ + ⟨ Z, [X, Y]⟩_g̃ . Acknowledgment: This publication is part of the project Interacting particle systems and Riemannian geometry (with project number OCENW.M20.251) of the research program Open Competitie ENW which is (partly) financed by the Dutch Research Council (NWO).
http://arxiv.org/abs/2307.00291v1
20230701101635
Estimating IF shifts based on SU(1,1) interferometer
[ "Chen Yuetao", "Chen Gaiqing", "Luo MengMeng", "Chang Shoukang", "Gao Shaoyan" ]
quant-ph
[ "quant-ph" ]
Corresponding author. [email protected] Corresponding author. [email protected] ^ 1MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi'an Jiaotong University, 710049, People's Republic of China ^ 2Department of Physics, Xi'an Jiaotong University City College, Xi'an 710018, China IF (Imbert–Fedorov) shifts which refers to a transverse micro-displacement occurs at the interface between two media. The estimation of such micro-displacement enables a deeper understanding of light-matter interactions. In this paper, we propose a theoretical scheme to investigate the IF shifts and incident angle sensitivity by introducing SPR sensor into the SU(1,1) interferometer. By injecting two coherent states in the SU(1,1) interferometer, we obtain the sensitivity of the IF shifts and incident angle based on the homodyne detection. Our results demonstrate that it is possible to get the maximal IF shift and the optimal IF shifts sensitivity simultaneously. Meanwhile, the orbit angular momentum carried by Laguerre-Gauss (LG) beam is unfavorable for improving the IF shift sensitivity. Furthermore, we have investigated the sensitivity of the incident angle in our scheme and found that it is capable of surpassing the sensitivity limit of (6× 10^-6) ^∘ . This allows us to achieve a more precise IF shifts sensitivity than the traditional weak measurement method used for IF shift detection, which typically has a rotation precision limit of 0.04^∘ [Journal of Optics, 19(10), 105611]. More importantly, both the sensitivity of IF shifts and incident angle can breakthrough the (shot noise limit) SNL, even approaching the Cramér-Rao bound (QCRB) at the incident angle θ =43.6208 ^∘ and θ =43.6407 ^∘. We also discover that increasing the coherent amplitude is beneficial for improving the sensitivity of both the IF shifts and incident angle. Our findings shall offer a novel scheme for measuring micro-displacement in SPR sensor. These results can be helpful in the development of more precise quantum-based sensors for studying light-matter interactions. PACS: 03.67.-a, 05.30.-d, 42.50,Dv, 03.65.Wj Estimating IF shifts based on SU(1,1) interferometer Shaoyan Gao^1 ===================================================== § INTRODUCTION Quantum metrology has attracted wide attention since it can achieve ultra-high precision estimation of various physical quantities by using some quantum effects, such as squeezing, nonclassicality and entanglement, and therefore has a lot of applications in many other fields, including gravitational wave detection <cit.>, quantum imaging <cit.> and quantum thermometry <cit.>. As one of the important platforms to realize quantum metrology, optical interferometer has attracted extensive attention in recent years. Accordingly, various optical interferometers have been proposed to realize the precise measurement of many physical quantities, such as phase shift, angular displacement and field-quadrature displacement. At the beginning of quantum metrology, one mainly injects some non-classical quantum states into the traditional Mach–Zehnder interferometer (MZI) to enhance the phase or angular displacement sensitivity. In particular, Caves proposed a feasible scheme, that is, using squeezed vacuum state instead of coherent state to significantly improve the phase sensitivity of MZI. 
Since then, non-classical quantum states, involving twin Fock states, NOON state and two-mode squeezed vacuum state, are used for achieving higher phase shift or angular displacement precision, even beating the Heisenberg limit (HL). Although using non-classical states can realize the improvement of phase or angular displacement sensitivity of the optical interferometer, these quantum states are not only difficult to generate by using existing experimental techniques, but also very vulnerable to the impact of noise environment. Hence one began to pay attention to whether the devices of optical interferometer can enhance the measurement accuracy. For instance, Yurke et al. suggested for the first time to replace two linear optical beam splitters of the conventional MZI with nonlinear optical parametric amplifiers, thus putting forward the SU(1,1) interferometer and further showed that the phase sensitivity of the SU(1,1) interferometer can break through the shot noise limit (SNL). In recent decades, many theoretical and experimental schemes based on the SU(1,1) interferometer have been proposed extensively. One of these schemes was given by Li et al., who theoretically demonstrated that the SU(1,1) interferometer based on the homodyne detection can realize the HL sensitivity by using a coherent state and a squeezed vacuum state as input states. Subsequently, Liu et al. showed that the SNL of the angular displacement can be surpassed by using a coherent state as the input state of the SU(1,1) interferometer. In addition to the phase estimation, angular displacement estimation and field-quadrature displacement estimation, the quantum sensor based on the Kretschmann structure has attracted wide attention in experiment and theory. Especially, the two types of nonlinear interferometric surface-plasmon-resonance sensor proposed by Wang et al., in which Kretschmann structure is placed inside or outside SU(1,1) interferometer <cit.>. We also noticed that the Imbert–Fedorov (IF) shift estimation and incident angle estimation has not been studied yet based on the SU(1,1) interferometer. It is well known that when a bounded light beam is illuminated to an interface of two different medium, the reflected light beam will experience transverse shifts which are called IF shifts <cit.>. IF shifts depend sensitively on the refractive index of the medium. In other words, the information of refractive index of the medium can be discerned by observing the variation in IF shifts. Thus, IF shifts attract renewed attention in various field such as optical sensors <cit.>, precision metrology and nanophotonic devices <cit.>. Generally, large IF shifts can be obtained in the surface plasmon resonance (SPR) sensors and measured in weak measurement scheme. However, there are always existing error in SPR sensors which will lead to low precision in the measurement of IF shifts. Besides, in recent decades, it is found that no matter how to improve the traditional measurement scheme by using traditional means, the measurement accuracy can not be substantially improved owing to the existence of classical noise. The precision limit constrained by classical noise is known as the shot-noise limit (also known as the standard quantum limit) <cit.>. As described above, the SU(1,1) interferometer which is an important tool for the precision measurement could also be good candidates for the IF shifts measurement. 
Thus, in this paper, we theoretically investigate the IF shift sensitivity for a Laguerre Gaussian reflected beam in a surface plasmon resonance sensor via the SU(1,1) interferometer. By taking the homodyne detection at one of output ports of the SU(1,1) interferometer, we obtain the IF shift and incident angle sensitivity. Then, the SNL and the quantum Cramér-Rao bound (QCRB) are also analytically derived. The numerical results show that the sensitivity of IF shifts and incident angle can surpass the SNL, even approaching the QCRB around the SPR angle. Moreover, the maximal IF shifts and the minimum IF shifts sensitivity can be obtained simultaneously. The incident angle sensitivitiy can break through (6× 10^-6) ^∘ which shows more accurate compared with the rotation precision of 0.04^∘ in the weak measurement of IF shifts <cit.>. The paper is organized as follows. In Sec. II, we present a description of the Kretschmann structure and IF shifts. In Sec. III, we describe the interaction process between incident coherent beam and the SPR sensors in nonlinear transmissivity estimation model. In Sec. IV, the sensitivity of IF shifts and incident angle is investigated via homodyne detection. In Sec. V, we obtain the QCRB and SNL of IF shifts and incident angle and compare them with the sensitivity of IF shifts and incident angle. Finally, the main conclusions are presented in the last section. § KRETSCHMANN STRUCTURE AND IMBERT–FEDOROV SHIFTS For the sake of facilitating the discussion of the Imbert–Fedorov shift estimation based on the SU(1,1) interferometer, we briefly review the known results of the Kretschmann structure and the Imbert–Fedorov (IF) shifts in this section. As shown in Fig. 1, the Laguerre–Gaussian beam is illuminated to the Kretschmann structure which consists of three layers, the top layer is a prism, the middle is a thin gold film coated on prism which leads to surface plasmon resonance and the bottom is semi-infinite vacuum. The global coordinate system is marked as (x_g;y_g;z_g), while the local coordinate systems are represented by (x_i;y_i;z_i) and (x_r;y_r;z_r) which are attached to the incident and reflected beams, respectively. Assume that a TM-polarized Laguerre-Gauss beam is incident onto the Au film from the glass prism, after reflection there will be IF shifts in the y_g direction at the glass-Au interface. The angular spectrum of the incident TM-polarized Laguerre-Gauss beam can be represented by ∼E_i=[w_0(-ik_ix+sgn[l])k_iy/√(2) ]^| l|e^-(k_ix^2+k_iy^2)w_0^2/4, where l and k_iλ (λ =x,y) is the units of orbit angular momentum and the lateral wave vectors of incident beam, respectively, the beam waist w_0 is 1000μ m, the symbolic function sgn[l] can be defined by sgn[l]={[ 1,l>0; 0,l=0; -1,l<0 ] . . The Imbert–Fedorov shifts are related to the complex reflection coefficient r_pgv of the Kretschmann configuration given by r_pgv=r_pg+r_gve^2ik_gzd/1+r_pgr_gve^2ik_gzd, where d denotes the thickness of gold film and the reflection coefficient between the nth and mth layers r_nm can be expressed as r_nm=k_nzε _m-k_mzε _n/ k_nzε _m+k_mzε _n, where n,m=(p(prism), g(gold), and v(vacuum)), and k_nz represents the normal component of the wave vector in the nth layer and can be given by k_nz = 2π/λ√(ε _n-ε _p(sinθ )^2), k_mz = 2π/λ√(ε _m-ε _p(sinθ )^2), where θ is the incident angle, ε _p and ε _g are the permittivities of prism and gold film, respectively, λ is the wavelength of incident light of TM-polarized Laguerre-Gauss beam. 
For simplicity, we choose the gold film with the thickness d=47nm, the prism and vacuum are semi-infinite. In order to visually see what the value of the incident angle takes, a large IF shifts can be generated. At fixed values of ε _p=2.22, ε _g=-20.327+1.862i, d=47nm and λ =780nm, we plot the the reflectivity | r_pgv| of Kretschmann structure as a function of the incident angle as shown in Fig. 1(b). It is clear that the reflectivity | r_pgv| can reach the minimum value at the incident angle θ =43.63^∘, and this incident angle is also called the SPR angle. The reason for this phenomenon is that the excitation of surface plasmon modes and reflectivity is dramatically attenuated at this incident angle. The sharp variation in reflectivity near the SPR angle will give rise to huge IF shifts and the shifts can be simplified as follow in the condition of large w_0 <cit.>. Y=-l∂| r_pgv| /∂θ k_0| r_pgv|, where k_0 is wave vector of incident light in vacuum.
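As an illustration of the quantities just defined (not taken from the original paper), the following Python sketch evaluates the three-layer reflection coefficient r_pgv and the resulting IF shift Y for the parameter values quoted above. The vacuum permittivity ε_v = 1 and the principal branch of the complex square root are assumptions made here; the angular grid and the numerical derivative are only for illustration.

```python
import numpy as np

# Parameters quoted in the text (eps_v = 1 for the vacuum layer is assumed here).
lam   = 780e-9            # wavelength [m]
d     = 47e-9             # gold film thickness [m]
eps_p = 2.22              # prism
eps_g = -20.327 + 1.862j  # gold
eps_v = 1.0               # vacuum
l     = 1                 # units of orbital angular momentum of the LG beam
k0    = 2 * np.pi / lam

def kz(eps, theta):
    """Normal wave-vector component in a layer with permittivity eps."""
    return k0 * np.sqrt(eps - eps_p * np.sin(theta) ** 2 + 0j)

def r(eps_n, eps_m, theta):
    """TM (p-polarised) Fresnel reflection coefficient between layers n and m."""
    kn, km = kz(eps_n, theta), kz(eps_m, theta)
    return (kn * eps_m - km * eps_n) / (kn * eps_m + km * eps_n)

def r_pgv(theta):
    """Three-layer (prism / gold / vacuum) reflection coefficient."""
    phase = np.exp(2j * kz(eps_g, theta) * d)
    rpg, rgv = r(eps_p, eps_g, theta), r(eps_g, eps_v, theta)
    return (rpg + rgv * phase) / (1 + rpg * rgv * phase)

theta = np.deg2rad(np.linspace(42.0, 46.0, 4001))
R = np.abs(r_pgv(theta))

# SPR angle = angle of minimal reflectivity (the text reports ~43.63 deg).
i_spr = np.argmin(R)
print("SPR angle ~", np.rad2deg(theta[i_spr]), "deg,  |r_pgv| ~", R[i_spr])

# IF shift Y = -l * (d|r|/dtheta) / (k0 |r|), via numerical differentiation.
dR = np.gradient(R, theta)
Y = -l * dR / (k0 * R)
print("largest |Y| in this window ~", np.max(np.abs(Y)) * 1e6, "micrometres")
```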
http://arxiv.org/abs/2307.01287v1
20230703183015
Symmetries and reflections from composition operators in the disk
[ "Esteban Andruchow", "Gustavo Corach", "Lázaro Recht" ]
math.CV
[ "math.CV", "math.FA", "47A05, 47B15, 47B33" ]
Three is the magic number - distance measurement of NGC 3147 using SN 2021hpr and its siblings B. Barna 1 A. P. Nagy1 Zs. Bora2,3 D. R. Czavalinga4,5 R. Könyves-Tóth1,6,7 T. Szalai1,4 P. Székely1 Sz. Zsíros1 D. Bánhidi5 I. B. Bíró 4, 5 I. Csányi5 L. Kriskovics2, 6 A. Pál2, 6 Zs. M. Szabó3, 6, 9, 10 R. Szakáts2, 6 K. Vida2, 6 Zs. Bodola1 J. Vinkó2, 6, 1, 8 Received December 12 2022 ===================================================================================================================================================================================================================================================================================== We study the composition operators C_a acting on the Hardy space H^2 of the unit disk, given by C_af=f∘φ_a, where φ_a(z)=a-z/1-a̅z, for |a|<1. These operators are reflections: C_a^2=1. We study their eigenspaces N(C_a± 1), their relative position (i.e., the intersections between these spaces and their orthogonal complementes for a b in the unit disk) and the symmetries induced by C_a and these eigenspaces. 2020 MSC: 47A05, 47B15, 47B33 . Keywords: Symmetries, Reflections, Projections, Subspaces in generic position . § INTRODUCTION Let 𝔻={z∈ℂ:|z|≤ 1} be the unit disk and 𝕋={z∈ℂ: |z|=1} the unit circle. Consider the analytic automorphisms φ_a which map 𝔻 onto 𝔻 of the form φ_a(z)=a-z/1-a̅z, for a∈. Save for a constant of module one, all automorphisms of the disk are of this form. Note the fact that φ_a(φ_a(z))=z. This implies that the composition operators induced by these automorphisms are reflections (i.e., operators C such that C^2=1). Namely, let H^2=H^2() be the Hardy space of the disk, i.e. H^2={f:→ℂ: f(z)=∑_n=0^∞ a_n z^n with ∑_n=0^∞ |a_n|^2<∞}. Then, an analytic map φ:→ induces the (bounded linear, see <cit.>) operator C_φ:H^2→ H^2, C_φ f=f∘φ. In particular, for a∈, the operator C_a:=C_φ_a satisfies (C_a)^2=1, the identity operator. The eigenspaces of C_a are N(C_a-1)={f∈ H^2: f∘φ_a=f} and N(C_a+1)={g∈ H^2: g∘φ_a=-g}, which verify that N(C_a-1)+̇N(C_a+1)=H^2. Here +̇ means direct (non necessarily orthogonal) sum, we reserve the symbol ⊕ for orthogonal sums. Reflections which additionally are selfadoint are called symmetries: S is a symmetry if S=S^*=S^-1. Associated to a reflection C, there are three natural symmetries: (C), (C) and ρ(C). The first two correspond to the decompositions H^2=N(C-1)⊕ N(C-1)^⊥ and H^2=N(C+1)⊕ N(C+1)^⊥ respectively. The third is of differential geometric nature, and is described below. The aim of this paper is the study of the operators C_a for a∈, the description of their eigenspaces, their relative position, and the induced symmetries. In this task, it will be important the role of the unique fixed point ω_a of φ_a inside the disk. Namely, ω_a:=1/a̅{1-√(1-|a|^2)} if a 0, and ω_0=0. The contents of the paper are the following. In Section 2 we recall basic facts on the manifolds of reflections and symmetries, in particular the condition for existence of geodesics between points in the latter space. In Section 3 we state basic formulas concerning the operators C_a. In Section 4 we characterize the symmetries ρ_a, obtained as the unitary part of the polar decomposition of C_a. For this task, we use Rosenblum's computation for the spectral measure of a selfadjoint Toeplitz operator <cit.>. Using a result by E. Berkson <cit.>, we show that locally, the map a↦ρ_a (a∈) is injective (it remains unanswered wether it is globally injective in the disk ). 
We also obtain formulas for the range an nullspace symmetries of C_a, and a power series expansion for ρ_a. The rest of the paper is devoted to the study of the eigenspaces of C_a, and their relative position. If a=0, then the fixed point of φ_0 is ω_0=0 and C_0 is the reflection (and symmetry) f(z)↦ f(-z). Thus the eigenspaces of C_0 are the subspaces and ø of even and odd functions of H^2. It is elemenatry to see that for arbitrary a∈, the eigenspaces of C_a are N(C_a-1)=C_ω_a() and N(C_a+1)=C_ω_a(ø). We then analyze the position of these eigenspaces for a b. For instance (Theorem <ref>), N(C_a-1)∩ N(C_b-1)=ℂ1 and N(C_a+1)∩ N(C_b+1)={0}. The computations of the intersections involving the orthogonal of these spaces is more cumbersome, and we are only able to do it in the special case when b=0 (Theorem <ref>). These facts, which are stated in Section 5, are used in Section 6 to show which of these eigenspaces are conjugate with the exponential of the Grassmann manifold of H^2. This work was supported by the grant PICT2019 0460, from ANPCyT, Argentina. § PRELIMINARIES, ON REFLECTIONS AND SYMMETRIES Denote the set of reflections by ={T∈(̱H^2): T^2=1}. The set has rich geometric structure (see for instance <cit.>): is it an homogeneous C^∞ submanifold of (̱H^2), carrying the action of the group Gl(H^2) of invertible operators in H^2: G· T=GTG^-1, T∈, G∈ Gl(H^2). The set of selfadjoint reflections, or symmetries, is ={ V∈: V^*=V}. Note that a symmetry V is a selfadjoint unitary operator. Reflections and symmetries can be viewed alternatively as oblique and orthogonal projections, respectively. A reflection T gives rise to an idempotent (or oblique projection) with range equal to the eigenspace {f∈ H^2: Tf=f}: Q_+=1/2(1+T) (and another with range equal to the other eigenspace {g∈ H^2: Tg=-g} of T: Q_-=1/2(1-T)). If S is a symmetry, the corresponding idempotents P_+ and P_- are orthogonal projections. The set , in turn, can be regarded as the Grassmann manifold of H^2: to each reflection V corresponds a unique projection P_+=1/2(1+V) and a unique subspace such that R(P_+)=. The geometry of the Grassmann manifold in this operator theoretic context was developed in <cit.>, <cit.>: is presented as a homogeneous space of the unitary group (as in the classical finite dimensional setting), with a linear reductive connection and a Finsler metric. In <cit.> the necessary and sufficient condition for the existence of a geodesic of this connection between two subspaces and was stated: namely, that (∩^⊥)= (^⊥∩). Moreover, the geodesic is of the form δ(t)=e^itX, for X^*=X co-diagonal with respect to both and : X()⊂^⊥ and X()⊂^⊥. The geodesic can be chosen of minimal length for the Finsler metric (see <cit.>, <cit.>, <cit.>). This latter condition amounts to finding X such that X≤π/2. The condition for the existence of a unique minimal geodesic (up to reparametrization) was given: ∩^⊥={0}=^⊥∩. In this case, the exponent X=X_, is unique with the above mentioned conditions (X_, selfadjoint, codiagonal with respect to and , with norm less or equal then π/2, satisfying e^iX_,=.). In this paper we shall examine existence and uniqueness of geodesics of the Grassmann manifold of H^2, for the eigenspaces of C_a. One of the remarkable features of the space is the several natural projection maps that it has onto . The natural projection maps → are the range, nullspace and unitary part in the polar decomposition: * The range map , which maps T∈ to the symmetry (T)=2P_R(Q_+)-1, i.e. 
the symmetry which is the identity on R(Q_+)={f∈ H^2: Tf=f}. We recall the formula for the orthogonal projection P_R(Q) onto the range R(Q) of an idempotent Q (see for instance <cit.>): P_R(Q)=Q(Q+Q^*-1)^-1. Then P_R(Q_+)=1/2(1+T){1/2(1+T)+1/2(1+T^*)-1}^-1=(1+T){T+T^*}^-1, and therefore (T)=2(1+T){T+T^*}^-1-1=(2+T-T^*){T+T^*}^-1. * The null-space map , which maps T∈ to the symmetry which is the identity on R(Q_-)={g∈ H^2: Tg=-g}, which by similar computations is given by (T)=2(T-1){T+T^*}^-1-1=(T-T^*-2){T+T^*}^-1. . * The unitary part ρ in the polar decomposition, which maps T to ρ(T)=T(T^*T)^-1/2, the unitary part in the polar decomposition T=ρ(T)(T^*T)^1/2. We refer the reader to <cit.> for the properties of this element ρ(T). Among them, the most remarkable, that ρ(T) is a symmetry. We shall recall the other properties of ρ(T) in due course. Note, for instance, that (T^*T)^-1=TT^*, so that (T^*T)^-1/2=(TT^*)^1/2. Notice the following formulas: Let T∈ then (T)=2(1+T)(T^*T+1)^-1 and (T)=2(1-T)(T^*T+1)^-1. Let T=ρ(T)|T| be the polar decomposition. It is a straightforward computation (or see <cit.>) that |T|ρ(T)=ρ(T)|T^*|. Also it is eassy to see that since T^2=1, |T^*|=|T|^-1. Then T+T^*=ρ(T)|T|+|T|ρ(T)=ρ(T)(|T|+|T^*|)=ρ(T)(|T|+|T|^-1). Using again that |T|ρ(T)=ρ(T)|T|^-1 (and therefore also that ρ(T)|T|=|T|^-1ρ(T)), we have that ρ(T) commutes with |T|+|T|^-1. Then (T+T^*)^-1=ρ(T)(|T|+|T|^-1)^-1=(|T|+|T|^-1)^-1ρ(T). By an elementary functional calculus argument, we have that (|T|+|T|^-1)^-1=|T|(|T|^2+1)^-1. Then (T+T^*)^-1=ρ(T)|T|(|T|^2+1)^-1=T(|T|^2+1)^-1. Thus, (T)=2(T+1)T(|T|^2+1)^-1=2(1+T)(|T|^2+1)^-1, and similarly (T)=2(T-1)T(|T|^2+1)^-1=2(1-T)(|T|^2+1)^-1. We shall return to these formulas for the case T=C_a later, after we further characterize |C_a|. § THE OPERATORS C_A It is not a trivial task to compute the adjoint of a composition operator, however, for the special case of automorphisms of the disk, it was shown by Cowen <cit.> (see also <cit.>) that C_a^*=(C_φ_a)^*=M_1/1-a̅z C_a (M_1-a̅z)^*, where, for a bounded analytic function g in , M_g denotes the multiplication operator. Equivalently, C_a^*=M_1/1-a̅z C_a-a M_1/1-a̅z C_a (M_z)^*, where (M_z)^* (or co-shift) is the adjoint of the shift operator S=M_z. In order to characterize the polar decomposition of C_a, it will be useful to compute C_aC_a^*. Note that, for f∈ H^2, after straightforward computations, C_aC_a^*f(z)=1-a̅z/1-|a|^2{f(z)-af(z)-f(0)/z}. Also note how C_a relates to the shift operator S=M_z:H^2→ H^2, Sf(z)=zf(z), with adjoint S^*f(z)=f(z)-f(0)/z: C_aC_a^*=1/1-|a|^2(1-a̅S)(1-aS^*). For a∈, denote by k_a the Szego kernel: for f∈ H^2, ⟨ f , k_a⟩=f(a), i.e., k_a(z)=1/1-a̅z. Note the fact that C_aC_a^*(k_a)=1. Indeed, this follows after a straightforward computation. Therefore, we have also that C_a^*C_a(1)=k_a. For a∈, denote by ρ_a=ρ(C_a). Note that if a=0, φ_0(z)=-z and C_0f(z)=f(-z) is a symmetry, thus C_0^*=C_0, |C_0|=1 and ρ_0=C_0. Returning to the characterization of the modulus of C_a, we have that With the current notations, |C_a^*|=1/√(1-|a|^2)|1-aS^*|. and ρ_a=1/√(1-|a|^2)C_a|1-aS^*|. There is another symmetry related to C_a. In the book <cit.> (Exercise 2.1.9:), it is stated that for a∈, if we put ψ_a(z)=√(1-|a|^2)/1-a̅z=√(1-|a|^2) k_a=k_a/k_a_2, then the operator W_a∈(̱H^2), W_a=M_ψ_a C_a. i.e., W_af(z)=ψ_a(z) f(φ_a(z)) is a unitary operator. In fact, it is straightforward to verify that W_a^2=1, i.e., W_a is a symmetry. 
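The identities collected so far lend themselves to a quick numerical sanity check. The following Python sketch (an added illustration, not part of the paper) verifies pointwise, at random points of the disk, that φ_a is an involution, that W_a is an involution, and that the displayed formula for C_aC_a^* sends the Szegő kernel k_a to the constant function 1; the value of a and the test function are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.3 + 0.4j                                   # any point of the open unit disk
phi_a = lambda z: (a - z) / (1 - np.conj(a) * z)
psi_a = lambda z: np.sqrt(1 - abs(a) ** 2) / (1 - np.conj(a) * z)

# random sample points in the disk
z = 0.9 * np.sqrt(rng.uniform(0, 1, 50)) * np.exp(2j * np.pi * rng.uniform(0, 1, 50))

# (1) phi_a is an involution, hence C_a^2 = 1 on H^2.
assert np.allclose(phi_a(phi_a(z)), z)

# (2) W_a f = psi_a * (f o phi_a) is an involution:
#     psi_a(z) * psi_a(phi_a(z)) = 1, so W_a^2 f = f for every f.
f = lambda z: np.exp(z) + z ** 3                 # an arbitrary test function
W = lambda g: (lambda z: psi_a(z) * g(phi_a(z)))
assert np.allclose(W(W(f))(z), f(z))

# (3) The formula C_a C_a^* f(z) = ((1 - conj(a) z)/(1 - |a|^2)) (f(z) - a (f(z)-f(0))/z)
#     sends the Szego kernel k_a to the constant function 1.
k_a = lambda z: 1 / (1 - np.conj(a) * z)
CaCastar = lambda g: (lambda z: (1 - np.conj(a) * z) / (1 - abs(a) ** 2)
                      * (g(z) - a * (g(z) - g(0)) / z))
assert np.allclose(CaCastar(k_a)(z), 1.0)

print("all pointwise identities verified")
```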
Note the relationship between ρ_a and W_a: C_a=1/√(1-|a|^2) M_1-a̅z W_a=1/√(1-|a|^2)(1-a̅S)W_a. It follows that the symmetry W_a intertwines C_aC_a^* and C_a^*C_a: C_a^*C_a=1/1-|a|^2 W_a(1-a S^*)(1-a̅S)W_a=W_a(C_a^*C_a)W_a, thus |C_a|=1/√(1-|a|^2)W_a |1-a̅S|W_a=W_a|C_a^*|W_a, and |C_a|^-1=√(1-|a|^2)W_a |1-a̅S|^-1W_a. Note that C_a=ρ_a|C_a| implies that C_aC_a^*=ρ_a|C_a|^2ρ_a=ρ_a C_a^*C_aρ_a. Then W_aρ_a C_a^*C_a (W_aρ_a)^*=W_aρ_aC_a^*C_aρ_aW_a=W_aC_aC_a^*W_a=C_a^*C_a, i.e., W_aρ_a commutes with C_a^*C_a (and therefore also with its inverse C_aC_a^*). § THE SYMMETRY Ρ_A If ψ∈ L^∞(), as is usual notation, let T_ψ∈(̱H^2) be the Toeplitz operator with symbol ψ: T_ψ f=P_H^2(ψ f). The following remark is certainly well known: For a∈, W_a S W_a=T_φ_a=M_φ_a. Straightforward computation: W_aSW_af(z)=√(1-|a|^2) W_a(z/1-a̅zf(a-z/1-a̅z))=a-z/1-a̅zf(z). Therefore: |Ca|=√(1-|a|^2)(T_|1-a̅z|^-2)^1/2=|T_ψ_a|. As remarked above, |C_a|^2=C_a^*C_a=1/1-|a|^2 W_a(1-a S^*)(1-a̅S)W_a=1/1-|a|^2 W_a(1-a S^*)W_aW_a(1-a̅S)W_a which by Lemma <ref> equals 1/1-|a|^2(1-a (W_aSW_a)^*)(1-a̅W_aSW_a)=1/1-|a|^2(1-aT_φ_a^*)(1-a̅T_φ_a)=1/1-|a|^2T_1-a̅φ_a^*T_1-a̅φ_a. Now we use the fact that T_ψ^*=T_ψ̅ and that if ψ,θ̅∈ H^∞, then T_θ T_ψ=T_θψ (see chapter 7 of Douglas' book <cit.>, specifically Prop. 7.5 for the second assertion). Then C_a^*C_a=1/1-|a|^2T_(1-aφ̅_a)(1-a̅φ_a). Since 1-a̅φ_a(z)=1-|a|^2/1-a̅z, it follows that this expression above equals (1-|a|^2)T_1/|1-a̅z|^2, and the proof follows. As a consequence, we may use the remarkable description of the spectral decomposition of selfadjoint Toeplitz operators obtained by M. Rosenblum un <cit.>. Let us quote in the next remark this description: Suppose that ω:→ℝ is a measurable function that satifies the following conditions: * ω is bounded from below: ω(θ)>-∞. * For each λ∈ℝ, the set Γ_λ:={e^i θ∈: ω(θ)≥λ} is a.e. an arc. Then Rosenblum's Theorem 3 in <cit.> states that: if T_ω is the Toeplitz operator with symbol ω, Λ⊂ℝ is a Borel subset and E(Λ) is the spectral measure (of T_ω) associated to Λ, u,v∈, one has that ⟨ E(Λ)k_u,k_v⟩=∫_ΛΦ(u̅;λ) Φ(v̅;λ) dm(λ), where Φ(u;λ)=Ψ(u;λ) (1-ue^i α(λ))^-1/2(1-ue^i β(λ))^-1/2, Ψ(u;λ)=exp(-∫_-π^πlog |ω(θ)-λ| P(u,θ) dθ), P(u,θ)=1/4π1+ue^iθ/1-ue^iθ, α(λ)≤β(λ)∈[-π,π] are such that Γ_λ={e^iθ: α(λ)≤θ≤β(λ)}, and dm(λ)=1/πsin(1/2(β(λ)-α(λ))) dλ. In particular, note that the spectral measure of T_ω is absolutely continuous with respect to the Lebesgue measure. In our case, we must analize ω(θ)=1/|1-a̅e^iθ|^2=|k_a(e^iθ)|^2. We consider the case a 0 (for a=0 recall that ρ_0=C_0). The function ω is continuous, so condition 1. is fulfilled. With respect to condition 2., note that, for λ≤ 0, Γ_λ is empty, and for λ>0 Γ_λ={e^iθ: |a/|a|^2-e^iθ|≤1/|a|√(λ)}. Consider the following figure < g r a p h i c s > Figure 1 Then clearly the spectral measure is zero if * λ > 1/(1-|a|)^2 (here α(λ)=β(λ) and Γ_λ has measure zero), or if * λ < 1/(1+|a|)^2 (here α(λ)=-π, β(λ)=π and Γ_λ=). For λ∈[ 1/(1+|a|)^2, 1/(1-|a|)^2] we have the following figure: < g r a p h i c s > Figure 2 Therefore, after elementary computations, on has that β(λ)=arcsin(γ), α(λ)=-arcsin(γ) and sin(1/2(β(λ)-α(λ)))=sin(γ)=√(1-1/4(1+1/|a|(1-1/λ))^2). Thus, we may characterize the function ρ_a1 (the symmetry ρ_a at the element 1∈ H^2). To this effect, recall that the set {k_u:u∈} is total in H^2. With the current notations, for v∈, we have ⟨ρ_a1,k_v⟩=√(1-|a|^1)/π∫_1/(1+|a|)^2^1/(1-|a|)^2λ^1/2Φ(0;λ) Φ(v̅;λ)√(1-1/4(1+1/|a|(1-1/λ))^2) dλ. 
Recall that ρ_a=C_a(C_a^*C_a)^-1/2=(C_aC_a^*)^-1/2C_a=(C_a^*C_a)^1/2C_a, so that (since 1=k_0) ρ_a1=|C_a|^1/2C_a1=|C_a|^1/21=|C_a|^1/2k_0, and then ⟨ρ_a1,k_v⟩=√(1-|a|^2)⟨ T_|1-a̅e^iθ|^-2^1/2k_0,k_v⟩, and the formula folllows applying Rosenblum's result and the above elementary computations. In particular, we have that ρ_a1(0)=⟨ρ_a1,1⟩=√(1-|a|^2)/π∫_1/(1+|a|)^2^1/(1-|a|)^2λ^1/2|Φ(0,λ)|^2 √(1-1/4(1+1/|a|(1-1/λ))^2) dλ, with |Φ(0,λ)|^2=exp(-1/2π∫_-π^πlog | |1-a̅e^iθ|^-2 -λ| dθ). Clearly, if A⊂ is a finite set, then {k_a: a∈∖ A} is also total in H^2. Therefore we may characterize ρ_a as follows: With the current notations, for a, u, v∈, with u a, we have ⟨ρ_ak_u,k_v⟩=u̅(|a|^2-1)^3/2/π(u̅-a̅)∫_1/(1+|a|)^2^1/(1-|a|)^2λ^1/2Φ(φ_a(u);λ) Φ(v̅;λ)√(1-1/4(1+1/|a|(1-1/λ))^2) dλ + +a̅/a̅-u̅√(1-|a|^2)/π∫_1/(1+|a|)^2^1/(1-|a|)^2λ^1/2Φ(0;λ) Φ(v̅;λ)√(1-1/4(1+1/|a|(1-1/λ))^2) dλ. These inner products characterize ρ_a, because {k_u: u∈, u a} is a total set in H^2. The last assertion is clear. Recall that ρ_a=C_a(C_a^*C_a)^-1/2=(C_aC_a^*)^-1/2C_a=(C_a^*C_a)^1/2C_a. Note that C_ak_u(z)=1-a̅z/1-u̅a-z(a̅-u̅)=1/1-u̅a1-a̅z/1-φ_a(u)z, which after routine computations (using that a u, and 1=k_0) yields C_ak_u=u̅(1-|a|^2)/u̅-a̅ k_φ_a(u)+a̅/a̅-u̅ k_0. Therefore, ρ_a k_u=(C_a^*C_a)^1/2C_ak_u=(C_a^*C_a)^1/2(u̅(1-|a|^2)/u̅-a̅ k_φ_a(u)+a̅/a̅-u̅ k_0), and thus ⟨ρ_ak_u,k_v⟩=√(1-|a|^2)⟨ T_|1-a̅e^iθ|^-2^1/2(u̅(1-|a|^2)/u̅-a̅ k_φ_a(u)+a̅/a̅-u̅ k_0),k_v⟩ =√(1-|a|^2){u̅(1-|a|^2)/u̅-a̅⟨ T_|1-a̅e^iθ|^-2^1/2k_φ_a(u),k_v⟩+a̅/a̅-u̅⟨ T_|1-a̅e^iθ|^-2^1/2k_0,k_v⟩}. The formula folllows applying Rosenblum's result and the above elementary computations. §.§ A result by E. Berkson We are indebted to Daniel Suárez for pointing us the result below. In <cit.>, E. Berkson proved the following Theorem: <cit.> Let φ:→ be a bounded analytic map, φ̃ its boundary function, and A=φ̃^-1(). Suppose that |A|>0 (= normalized Lebesgue measure in ). Let ψ:→ be another analytic map, and C_φ and C_ψ denote the composition operators on H^p(), 1≤ p<∞. If C_ψ-C_φ<(|A|/2)^1/p, then ψ=φ. As a consequence, for a b∈ we have that (p=2): C_a-C_b≥1/√(2) On the other hand, it is a consequence of Theorem <ref> that C_a^*C_a-C_b^*C_b=T_1-|a|^2/1-a̅z-T_1-|b|^2/1-b̅z=T_δ_a,b, where δ_a,b(z)=1-|a|^2/1-a̅z-1-|b|^2/1-b̅z. Thus C^*_aC_a-C_b^*C_b=δ_a,b_∞=sup{|δ_a,b(z)|: z∈}. Note that, after an elementary computation, δ_a,b also equals δ_a,b(z)=a̅φ_a(z)-b̅φ_b(z). So that we have ||a|-|b||≤ ||δ_a,b_∞≤ |a|+|b|. Moreover, |δ_a,b(z)|=|1-|a|^2/1-a̅z-1-|b|^2/1-b̅z|≤|1-|a|^2/1-a̅z-1-|b|^2/1-a̅z|+|1-|b|^2/1-a̅z-1-|b|^2/1-b̅z|= =1/|1-a̅z|||a|^2-|b|^2|+(1-|b|^2)|z||a-b|/|1-a̅z||1-b̅z|≤1/1-|a|(|a|+|b|)|a-b|+(1+|b|)|a-b|/1-|a|≤ ≤4/1-|a| |a-b|. In particular, contrary to what happens to C_b and C_a, if b→ a, then both C_b^*C_b→ C_a^*C_a and |C_b|→ |C_a|. Therefore, we have the following: Fix a∈ and r<1/√(2), consider the open neighbourhood _̱r(a) of a in given by _̱r(a):={b∈: |C_b|-|C_a|<r}. Then, if b∈_̱r(a), b a, we have that ρ_b-ρ_a≥(1/√(2)-r) 1+|a|/√(1-|a|^2). By Berkson's Theorem, if a b 1/√(2)≤C_a-C_b=ρ_a|C_a|-ρ_b|C_b|≤ρ_a|C_a|-ρ_b|C_a|+ρ_b|C_a|-ρ_b|C_b| ≤|C_a|ρ_a-ρ_b+|C_a|-|C_b|, because ρ_b is a unitary operator. If b∈_̱r(a), 1/√(2)≤C_aρ_a-ρ_b+r. The proof follows recalling that |C_a|=C_a=√(1-|a|^2)/1+|a|. §.§ Formulas for (C_a) and (C_a). 
Using Theorem <ref> we can refine the formulas for (T) and (T) obtained in Proposition <ref>, the range and nullspace symmetries induced by a reflection T, to the case when T=C_a: We have (C_a)=2(1+C_a)T_≫_a^-1 and (C_a)=2(1-C_a)T_≫_a^-1, where T_≫_a is the Toeplitz operator with symbol ≫_a(z)=1+1-|a|^2/|1-a̅z|^2. Note that for T=C_a we have (C_a)=2(1+C_a)(|C_a|^2+1)^-1, and from Theorem <ref> we know that |C_a|^2=(1-|a|^2)T_1/|1-a̅z|^2. Then |C_a|^2+1=(1-|a|^2)T_1/|1-a̅z|^2+1=T_1+1-|a|^2/|1-a̅z|^2=T_≫_a. The computation of (C_a) is similar. §.§ A power series expansion for ρ_a Let us further consider |1-a̅S|^-1. Note that |1-a̅S|^2=(1-aS^*)(1-a̅S)=1+|a|^2-2 Re(a̅S), where ReT=1/2 (T+T^*), for T∈(̱H^2), as is usual notation. Then |1-a̅S|^2=(1+|a|^2)( 1-2/1+|a|^2 Re(a̅S))=(1+|a|^2)( 1-2|a|/1+|a|^2 T), where a=|a|ω and T= Re( ω̅S) is a contraction. Using the power series expansion (1-kt)^-1/2=1+∑_n=1^∞ (2n-1)(2n-3)… 1 (k/2)^n t^n, we get With the current notations, i.e. T= Re(ω̅S), a=|a|ω, we have that * |1-a̅S|^-1=1/√(1+|a|^2)(1+ ∑_n=1 ^∞ (2n-1)(2n-3)… 1 (|a|/1+|a|^2)^n T^n), where T=1/2(ω̅S +ω S^*) and a=|a|ω. * |C_a|^-1=√(1-|a|^2)W_a{1+ ∑_n=1 ^∞ (2n-1)(2n-3)… 1 (|a|/1+|a|^2)^n T^n}W_a =√(1-|a|^2)(1+ ∑_n=1 ^∞ (2n-1)(2n-3)… 1 (|a|/1+|a|^2)^n (W_aTW_a)^n). * ρ_a=(1-a̅S){1+ ∑_n=1 ^∞ (2n-1)(2n-3)… 1 (|a|/1+|a|^2)^n T^n}W_a=(μ(1-a̅S))W_a, where μ(A)= unitary part in the polar decomposition of A: A=μ(A)|A|. Straightforward computations. Next we see that the map ∋ a ↦ |C_a| is one to one: Let a,b∈. Then |C_a|=|C_b| if and only if |C_a^*|=|C_b^*| if and only if a=b Recall that (C_a^*C_a)^-1=C_aC_a^*, and thus |C_a|^-1=|C_a^*|. By uniqueness of the positive square root of operators, clearly |C_a^*|=|C_b^*| if and only if C_aC_a^*=C_bC_b^*. Next note that at the constant function 1∈ H^2, we have (since S^*1=0) C_aC_a^*(1)=1/1-|a|^2(1-a̅S)(1-aS^*)(1)=1/1-|a|^2(1-a̅S)(1)=1-a̅z/1-|a|^2. Evaluating at z=0, we get that C_aC_a^*=C_bC_b^* implies that |a|=|b|, and thus 1-a̅z=1-b̅z for all z∈, i.e., a=b. Proposition <ref> states that given a∈, there is an open neighbourhood of a such that for b in this neighbourhood, ρ_a=ρ_b implies a=b. We do not now though if globally the map ∋ a↦ρ_a∈(̱H^2) is injective. § THE EIGENSPACES OF C_A Denote by and ø the closed subspaces of even and odd functions in H^2. Note that they are, respectively, =N(C_0-1) and ø=N(C_0+1). For general a∈, the eigenspaces of C_a are N(C_a-1)={f∈ H^2: f∘φ_a=f} and N(C_a+1)={g∈ H^2: g∘φ_a=g}. For a∈, denote by ω_a the fixed point of φ_a inside . Explicitely: ω_a=1/a̅{1-√(1-|a|^2)}. Elementary computations shows that φ_ω_a∘φ_a=-φ_ω_a which at z=0 gives φ_ω_a(a)=-ω_a. For a∈, the eigenspaces of C_a are N(C_a-1)={f=∑_n=0^∞α_n(φ_ω_a)^2n: (α_n)∈ℓ^2}=C_ω_a(), and N(C_a+1)={g=∑_n=0^∞α_n(φ_ω_a)^2n+1: (α_n)∈ℓ^2}=C_ω_a(ø). It follows from (<ref>) that the even powers of φ_ω_a belong to N(C_a-1): (φ_ω_a)^2n∘φ_a=(φ_ω_a)^2n, and the odd powers belong to N(C_a+1): (φ_ω_a)^2n+1∘φ_a=-(φ_ω_a)^2n+1. Therefore, any sequence of coefficients (α_n)∈ℓ^2 gives an element f=∑_n=0^∞α_n(φ_ω_a)^2n∈ N(C_a-1), and an element g=∑_n=0^∞α_n(φ_ω_a)^2n+1∈ N(C_a+1). Conversely, suppose that f∈ N(C_a-1). Using (<ref>) f∘φ_ω_a=f∘φ_a∘φ_ω_a, and since φ_a∘φ_ω_a=aω̅_a-1/1-a̅ω_aφ_φ_ω_a(a)=-φ_-ω_a, we get f∘φ_ω_a(z)=f∘φ_ω_a(-z), i.e., f∘φ_ω_a∈. The fact for odd functions is similar. Note that if we denote h(z)=∑_n=0^∞α_n z^2n, which is an arbitrary even function in H^2, we have that f=h∘φ_ω_a=C_ω_ah. 
And similarly if k(z)=∑_n=0^∞α_n z^2n+1 is an arbitrary odd function in H^2, g=C_ω_ak. Then C_ω_a|_:→ N(C_a-1) and C_ω_a|_ø:ø→ N(C_a+1). The restrictions C_ω_a|_ and C_ω_a|_ø are bounded linear isomorphisms. Their inverses are, respectively, C_ω_a|_N(C_a-1) and C_ω_a|_N(C_a+1). Note that H^2=C_ω_a(⊕ø)=C_ω_a()+̇C_ω_a(ø)⊂ N(C_a-1)+̇N(C_a+1), were +̇ denotes direct (non necessarily orthogonal) sum. It follows that C_ω_a()=N(C_a-1) and C_ω_a(ø)=N(C_a+1). This completes the proof, since C_a is its own inverse. Clearly, if p,g ∈ H^2 are, respectively, inner and outer functions, then C_ap=p∘φ_a and C_ag=g∘φ_a are also, respectively, inner and outer. Therefore, if f∈ N(C_a-1), and f=pg is the inner/outer factorization of f, then f=C_ap· C_ag is another inner/outer factorization. By uniqueness, it must be C_ap=ω p for some ω∈𝕋. But then p is an eigenfunction of C_a, and so it must be ω=± 1. Therefore, if f ∈ N(C_a-1), then either a) p,g∈ N(C_a-1) or b) p,g∈ N(C_a+1). The latter case cannot happen: the outer function g verifies that C_ω_ag is odd, and therefore it vanishes at z=0, 0=C_ω_ag(0)=g(ω_a). A similar consideration can be done for N(C_a+1). If f=pg is the inner/outer factorization of f∈ N(C_a+1), then again C_ap=± p. If C_ap=p, then -f=-pg=f∘φ_a=(p∘φ_a)(g∘φ_a) implies p∘φ_a=± p. If p∘φ_a=p, then g∘φ_a=-g, and therefore the outer function g vanishes, a contradiction. Thus p∈ N(C_a+1) and g∈ N(C_a-1). Let us examine the position of the subspaces N(C_a± 1) and their orthogonal complements. Let a b in . Then * N(C_a-1)∩ N(C_b-1)=ℂ1, where 1∈ H^2 is the constant function. * N(C_a+1)∩ N(C_b+1)={0}. Let us first prove 1. As seen above, the reflection C_ω_a carries N(C_a-1) onto the space of even functions. Another way of putting this, is that C_ω_aC_aC_ω_a=C_0. Note that since C_b^-1=C_b, this product is in fact a conjugation. A straigtforward computation shows that in general, for b,d∈ φ_d∘φ_b∘φ_d=φ_d∙ b , where d∙ b:=2d-b-b̅d^2/1+|d|^2-b̅d-bd̅. Note that C_ω_a(N(C_b-1))=N(C_ω_aC_bC_ω_a-1)=N(C_ω∙ b-1). Then N(C_a-1)∩ N(C_b-1)=ℂ1 if and only if ℂ1=C_ω_a(N(C_a-1)∩ N(C_b-1))=∩ N(C_ω_a∙ b-1), i.e., we have reduced to the case when one of the to points is the origin. Let us prove that N(C_a-1)∩=ℂ1. Let f∈ N(C_a-1)∩, for a∈, a 0. Then f=f∘φ_0=f∘φ_a. In particular, this implies that f=f∘φ_a∘φ_0∘…∘φ_a=f∘(φ_a∘φ_0)^(n)∘φ_a, for all n≥ 1 (here (φ_a∘φ_0)^(n) denotes the composition of φ_a∘φ_0 with itself n times). We shall need the following computation: (φ_a∘φ_0)^(n)φ_a=φ_a_n, where a_n=a/|a| 1-(1-|a|/1+|a|)^n+1/1+(1-|a|/1+|a|)^n+1. Our claim is equivalent to a_n=a/|a| (1+|a|)^n+1-(1-|a|)^n+1/(1+|a|)^n+1+(1-|a|)^n+1. The proof is by induction in n. It is an elementary computation. For n=1, we have that φ_a∘φ_0∘φ_a(z)=φ_a(-a-z/1-a̅z)=a+a-z/1-a̅z/1+a̅a-z/1-a̅z=2a-(1+|a|^2)z/1+|a|^2-2az=2a/1+|a|^2-z/1-2a̅/1+|a|^2z=φ_2a/1+|a|^1(z). On the other hand, a_1=a/|a| (1+|a|)^2-(1+|a|)^2/(1+|a|)^2+(1-|a|)^2=2a/1+|a|^2. Suppose the formula valid for n. Then (φ∘φ_0)^n+1∘φ_a(z)=(φ∘φ_0)^n∘φ_a∘(φ_a∘φ_0)(z)=φ_a_n(-a-z/1-a̅z) =a_n+a-z/1-a̅z/1+a̅_na-z/1-a̅z= a/|a| f_n+a-z/1-a̅z/1-a/|a| f_n a-z/1-a̅z=a( f_n/|a|+1)-(|a| f_n+1)z/|a| f_n+1-a̅( f_n/|a|+1)z=β_n-z/1-β̅_nz=φ_β_n(z), where β_n=a ( f_n/|a|+1)/|a| f_n+1 and f_n=(1+|a|)^n+1-(1-|a|)^n+1/(1+|a|)^n+1+(1-|a|)^n+1. Thus, we have to show that β_n=a_n. 
Note that β_n=a/|a| f_n+|a|/|a| f_n+1 and that f_n+|a|/|a| f_n+1=(1+|a|)^n+1-(1-|a|)^n+1+|a|(1-|a|)^n+1+|a|(1+|a|)^n+1/-|a|(1-|a|)^n+1+(1+|a|)^n+1+(1-|a|)^n+1+|a|(1+|a|)^n+1 =(1+|a|)^n+2-(1-|a|)^n+2/(1+|a|)^n+2+(1-|a|)^n+2, which completes the proof of the lemma. Returning to the proof of the theorem, it is clear that constant functions belong to ∩ N(C_a-1). Suppose that there is a non constant f∈∩ N(C_a-1). Then f_0=f-f(0)∈∩ N(C_a-1) as well. As remarked above, f_0=f_0∘φ_a_n for all n≥ 0 (for n=0, a_0=a). It follows that 0 and a_n, n≥ 0 are zeros of f_0. Since f_0 is also even, also -a_n, ≥ 0 occur as zeros of f_0. Consider f_0=BSg the factorization of f_0 with B a Blaschke product, S singular inner and g outer. Then the pairs of factors φ_a_n·φ_-a_n appear in the expression of B. Since f_0=f_0∘φ_a, and S∘φ_a and g∘φ_a are non vanishing in , it follows that (φ_a_n∘φ_a)· (φ_-a_n∘φ_a) must also appear in the expression of B. Note that φ_a_n∘φ_a=(φ_a∘φ_0)^(n)∘φ_a∘φ_a=((φ_a∘φ_0)^(n)=(φ_a∘φ_0)^(n-1)∘φ_a∘φ_0=φ_a_n-1∘φ_0. Also φ_-a_n(z)=-a_n+z/1+a̅_nz=-φ_a_n(-z)=φ_0∘φ_a_n∘φ_0=φ_0∘(φ_a∘φ_0)^(n)∘φ_a∘φ_0=φ_0∘(φ_a∘φ_0)^(n+1). Then φ_-a_n∘φ_a=φ_0∘(φ_a∘φ_0)^(n+1)∘φ_a=φ_0∘φ_a_n+1. Note the effect of C_a on the following pairs of factors of B: z· z=z^2=φ_0^2C_a⟶(φ_0∘φ_a)^2=φ_a^2, φ_a·φ_-aC_a⟶ (φ_a∘φ_a)·(φ_-a∘φ_a)=z·(-φ_a_1)=-zφ_a_1, and φ_a_1·φ_a_-1C_a⟶(φ_a_1∘φ_a)·(φ_a_-1∘φ_a)=(φ_a∘φ_0)·φ_a_2=-φ_-a·φ_a_2. Other pairs of factors in the expression of B, after applying C_a, do not involve φ_a or φ_0, due to the spreading of the indices. Summarizing, after applying C_a, we get the products (φ_a)^2 , -zφ_a_1 and -φ_-a·φ_a_2, i.e., we do not recover the original factors z^2 and φ_a·φ_-a. It follows that f is contant. To prove 2., a similar trick as above allows us to reduce to the case of a 0 and b=0, i.e., we must prove that there are no nontrivial odd functions in N(C_a+1). Let f∈ H^2 be odd such that f∘φ_a=-f. Then f^2=f· f is even and (f(φ_a(z)))^2=(-f(z))^2=(f(z))^2, i.e., f^2∈ N(C_a-1). Therefore, by the previous case, f^2 is constant, and therefore f≡ 0. The maps → given by a↦(C_a) and a↦(C_a) re one to one. Let us further proceed with the study of the position of the subspaces N(C_a± 1) and N(C_b± 1) for a b, considering now their orthogonal complements. We shall restrict to the case b=0. The conditions look similar, but as far as we could figure it out, some of the proofs may be quite different. Let a∈, a 0. * N(C_0-1)^⊥∩ N(C_a-1)={0}=N(C_0-1)∩ N(C_a-1)^⊥, * N(C_0+1)^⊥∩ N(C_a+1)={0}=N(C_0+1)∩ N(C_a+1)^⊥, * N(C_0-1)^⊥∩ N(C_a-1)^⊥={0}=N(C_0+1)^⊥∩ N(C_a+1)^⊥. Assertion 1.: for the left hand equality, let f∈ N(C_0-1)^⊥=ø such that f∘φ_a=f. Then, by the above results, f^2∈∩ N(C_a-1), and therefore f^2 is constant. Then f, being constant and odd, is zero. The right hand equality: suppose f∈ N(C_0-1)∩ N(C_a-1)^⊥ is 0, i.e., f is even and ⟨ f, C_ω_a(z^2k)⟩=0 for k≥ 0 (in particular, when n=0 we get f(0)=0). Thus C^*_ω_a(f) is odd. Recall that C^*_ω_a(f)=1/1-ω̅_az f(ω_a-z/1-ω̅_az)-ω_a/1-ω̅_azf(ω_a-z/1-ω̅_az)-f(0)/ω_a-z/1-ω̅_az =1/1-ω̅_az(f(ω_a-z/1-ω̅_az)-ω_af(ω_a-z/1-ω̅_az)/ω_a-z/1-ω̅_az). Since f is even with f(0)=0, put f(z)=∑_n=1^∞α_n z^2n. Then, after routine computations we get C^*_ω_a(f)=z |ω_a|^2-1/(1-ω̅_az)^2∑_n=1^∞α_n (ω_a-z/1-ω̅_az)^2n-1. The fact that C^*_ω_a(f) is odd, implies that A(z)=1/(1-ω̅_az)^2∑_n=1^∞α_n (ω_a-z/1-ω̅_az)^2n-1 is even. Note that therefore C_ω_a(A)=(1-ω̅_az)^2/(1-|ω_a|^2)^2∑_n=1^∞α_n z^2n-1∈ N(C_a-1). Let us abreviate α(z)=∑_n=1^∞α_n z^2n-1, which is an odd function. 
Thus (1-ω̅_az)^2 α∈ N(C_a-1). Note that (1-ω̅_az)^2 is outer. Therefore, if α=p g is the inner/outer factorization of α, then (1-ω̅_az)^2 α= p ( (1-ω̅_az)^2g) is also an inner/outer factorization. Then, by Remark <ref>, we have p ∈ N(C_a-1). By a similar argument, since α is odd it follows that p is either odd or even. Note that p even would imply g odd, and thus vanishing at z=0, which cannot happen. Thus p∈ N(C_a-1)∩ø={0}, which is the first assertion of this theorem. Clearly this implies that f=0. Assertion 2.: the proof of the second assertion is similar. Let us sketch it underlining the differences. The left hand equality: suppose that f is odd and f⊥ N(C_a+1). Then f=C_ω_aι=ι(φ_ω_a) for some odd function ι. Then f^2=ι^2(φ_ω_a)∈ N(C_a-1). Then, by the first part of Theorem <ref>, we have that f^2 is constant, then f is constant, and the fact that f=ι(φ_ω_a) with ι odd implies that f=0. The right hand equality of the second assertion, if f∈ N(C_0+1)∩ N(C_a+1)^⊥, then f is odd, f(z)=∑_k≥ 0β_k z^2k+1 and C_ω_a^*f is even. Similarly as above, C_ω_a^*f(z)=z|ω_a|^2-1/(1-ω̅_az)^2∑_k≥ 0β_k (ω_a-z/1-ω̅_az)^2k, and thus B(z)=1/(1-ω̅_az)^2∑_k≥ 0β_k (ω_a-z/1-ω̅_az)^2k is odd. Therefore, if β(z):=∑_k≥ 0β_k z^2k, we have C_ω_a(B)=(1-ω̅_az)^2/(1-ω̅_az)^2β∈ N(C_a+1), i.e., (1-ω̅_az)^2β∈ N(C_a+1). If β=qh is the inner/outer factorization, then q and h are even, and (1-ω̅_az)^2β=q((1-ω̅_az)^2h) is the inner/outer factorization of an element in N(C_a+1). Then, again by Remark <ref>, q∈ N(C_a+1). Then q^2 is even and lies in N(C_a-1), and therefore is constant, by the first part of Theorem <ref>. Thus q is constant in N(C_a+1), which implies that q=0, and then f=0. Assertion 3.: For the left hand equality: f∈ N(C_0-1)^⊥∩ N(C_a-1)^⊥ is odd, f(z)=∑_n≥ 0β_n z^2n+1, and similarly as above, C^*_ω_af(z)=z(ω̅_a-1)/(1-ω̅_az)^2∑_n≥ 0β_n (ω_a-z/1-ω̅_az)^2n is odd, so that D(z)=1/(1-ω̅_az)^2∑_n≥ 0β_n(ω_a-z/1-ω̅_az)^2n is even, and C_ω_aD=(1-ω̅_az)^2/(1-|ω_|^2)^2∑_n≥ 0β_n z^2n∈ N(C_a-1). Denote δ(z)=∑_n≥ 0β_nz^2n, so that (1-ω̅_a z)^2 δ∈ N(C_a-1). Note that f(z)=zδ(z) Then we have (1-ω̅_az)^2δ=(1+(ω̅_az)^2)δ-2ω̅_a f is an orthogonal sum: the left hand term is even and the right hand term is odd. One the other hand, rewriting this equality, we have (1+(ω̅_az)^2)δ=(1-ω̅_az)^2δ+2ω̅_a f is also and orthogonal sum: the left hand term belongs to N(C_a-1) and the right hand term is orthogonal to N(C_a-1). Then we have (1-ω̅_az)^2δ^2=(1+ω̅_az)^2)δ^2+2ω̅_a f^2 and (1+(ω̅_az)^2)δ^2=(1-ω̅_az)^2δ^2+2ω̅_a f^2. These imply that f=0. The right hand equality: let f ∈ N(C_a-1)^⊥ be even, and suppose first that f(0)=0. Then f(z)=∑_n≥ 1α_n z^2n. We proceed similarly as in the third assertion, we sketch the proof. We know that C_ω_a^*(f)(z)=z(ω̅_a^2-1)/(1-ω̅_az)^2∑_n≥ 1α_n(ω_a-z/1-ω̅_az)^2n-1 is even, so that E(z)=1/(1-ω̅_az)^2∑_n≥ 1α_n(ω_a-z/1-ω̅_az)^2n-1 is odd. Then h(z):=C_ω_a(E)(z)=(1-ω̅_az)^2/1-|ω_a|^2∑_n≥ 1α_nz^2n-1∈ N(C_a+1). Note that ∑_n≥ 1α_nz^2n-1=f(z)/z. Then we have on one hand that (1-|ω_a|^2)h(z)=(1+(ω̅_az)^2)∑_n≥ 1α_nz^2n-1+2ω̅_a f(z) is an orthogonal sum, the left hand summand is odd and the right hand summand is even. Thus (1-|ω_a|^2)h^2=(1+(ω̅_az)^2)∑_n≥ 1α_nz^2n-1^2+2ω̅_a f^2. On the other hand. the above also means that (1+(ω̅_az)^2)∑_n≥ 1α_nz^2n-1=(1-|ω_a|^2)h(z)+2ω̅_a f(z) is also an orthogonal sum, the left hanf summand belongs to N(C_a+1) and the right hand summand belongs to N(C_a+1)^⊥. Then (1+(ω̅_az)^2)∑_n≥ 1α_nz^2n-1^2=(1-|ω_a|^2)h^2+2ω̅_a f^2. These two norm identities imply that f=0. 
Suppose now that f(0) 0, by considering a multiple of f, we may assume f(0)=1, i.e., f(z)=1+∑_n≥ 1α_nz^2b. Then g(z):=C^*_ω_af(z)=1/1-ω̅_az+(ω̅_a-1)z/1-ω̅_az∑_n≥ 1α_n(ω_a-z/1-ω̅_az)^2n-1, which is also even. Then g'(z) is odd and g'(0)=0. Note that g'(z)=ω̅_a/(1-ω̅_az)^2 +ω̅_a-1/(1-ω̅_az)^2∑_n≥ 1α_n (ω_a-z/1-ω̅_az)^2n-1+ +(ω̅_a-1)(|ω_a|^2-1)z/(1-ω̅_az)^3∑_n≥ 1α_n (ω_a-z/1-ω̅_az)^2n-2, so that 0=g'(0)=ω̅_a+(ω̅_a-1)∑_n≥ 1α_n ω_a^2n-1. Note that f(ω_a)=1+∑_n≥ 1α_nω_a^2n=1+ω_a∑_n≥ 1α_nω_a^2n-1, i.e., 0=ω̅_a+(ω̅_a-1)(f(ω_a)-1/ω_a), or f(ω_a)=|ω_a|^2/1-ω̅_a+1. Since f is even, f(ω_a)=f(-ω_a), i.e., 1/1-ω̅_a=1/1+ω̅_a, or ω_a=0 (which cannot happen because a 0). It follows that f≡ 0. A natural question is wether these properties above hold for arbitrary a b∈. A straightforward computation shows that if a∈, the unique b∈ such that the fixed point ω_b of φ_b (in ) is a is given by b=2a/1+|a|^2. Let us denote this element by Ω_a. One may iterate this computation: denote by Ω^2_a:=Ω_Ω_a, and in general Ω_a^n+1:=Ω_Ω_a^n. Then it is easy to see that Ω_a^n=a_2^n-1, where a_k∈ are the numbers obtained in Lemma <ref>. Note that all these iterations Ω_a^n are multiples of a, with increasing moduli, and Ω_a^n→a/|a| as n→∞. Moreover, it is easy to see that the sequence a_n is an interpolating sequence: it consists of multiples of 1-r^n+1/1+r^n+1 by the number a/|a| of modulus one, with r<1. Therefore Ω_a^n is an interpolating sequence. § GEODESICS BETWEEN EIGENSPACES OF C_A Recall from the introduction the condition for the existence of a geodesic of the Grassmann manifold of H^2 that joins two given subspaces and , namely, that (∩^⊥)=(∩^⊥). This condition clearly holds for =N(C_0-1) and ø=N(C_0+1)=^⊥: both intersections are, respectively, ∩ø^⊥= and ø∩^⊥=ø, and have the same (infinite) dimension. Our first observation is that this no longer holds for N(C_a-1) and N(C_a+1) when a 0: If 0 a∈, then there does not exist a geodesic of the Grassmann manifold of H^2 joining N(C_a-1) and N(C_a+1). The proof follows by direct computation. First, we claim that N(C_a+1)∩ N(C_a-1)^⊥={0}. Note that f∈ N(C_a-1)^⊥ if and only if ⟨ f, g⟩=0 for all g∈ N(C_a-1)=C_ω_a(), i.e., 0=⟨ C_ω_a^*f,g⟩, for all g∈. This is equivalent to C_ω_a^*f∈ø, or also that f∈ C_ω_a^*(ø). Using the operator C_ω_a, our claim (<ref>) is equivalent to {0}=C_ω_a(N(C_a+1))∩ C_ω_aC_ω_a^*(ø)=ø∩ C_ω_aC_ω_a^*(ø), where the last equality follows from the fact C_ω_a(N(C_a+1))=ø observed before. Let f∈ø. Then (since f(0)=0) g(z)=C_ω_aC_ω_a^*f(z)=1-ω̅_az/1-|ω_a|^2(f(z)-ω_af(z)/z) =1/1-|ω_a|^2(f(z)(1+|ω_a|^2)-(ω_af(z)/z+ω̅_azf(z))). Then, since g and the first summand are odd, and the second summand is even, the second summand is zero, which implies that f≡ 0. On the other hand, a similar computation shows that (N(C_a-1)∩ N(C_a+1)^⊥)=1, which would conclude the proof. Indeed, by a similar argument as above, it suffices to show that (∩ C_ω_aC_ω_a^*())=1. Let g,f be even functions such that g(z)=C_ω_aC_ω_a^*f(z)=1-ω̅_az/1-|ω_a|^2(f(z)-ω_af(z)-f(0)/z) =1/1-|ω_a|^2((f(z)+|ω_a|^2(f(z)-f(0)))-(ω̅_af(z)z+ω_af(z)-f(0)/z)). It follows that ω̅_af(z)z+ω_af(z)-f(0)/z≡ 0, i.e., f(z)=c/ω_a+ω̅_az^2. This implies that ∩ C_ω_aC_ω_a^*()=⟨1/ω_a+ω̅_az^2⟩. Note though that the orthogonal projections onto N(C_a-1) and N(C_a+1) are unitarilly equivalent: both subspaces are infinite dimensional and infinite co-dimensional. 
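The one-dimensional intersection found in the proof above can be observed numerically. The following Python sketch (an added illustration, with an arbitrarily chosen real value of a) applies the explicit formula for C_bC_b^*, with b = ω_a, to the distinguished even function 1/(ω_a + ω̄_a z²) and to a generic even function, and checks evenness of the result pointwise.

```python
import numpy as np

a = 0.5                                     # a real point of the disk, a != 0
w = (1 - np.sqrt(1 - a ** 2)) / a           # omega_a, the fixed point of phi_a in the disk

def CC_star(g):
    """f -> C_b C_b^* f with b = omega_a (real here), via the explicit formula
    C_b C_b^* f(z) = ((1 - b z)/(1 - b^2)) (f(z) - b (f(z) - f(0)) / z)."""
    return lambda z: (1 - w * z) / (1 - w ** 2) * (g(z) - w * (g(z) - g(0)) / z)

# Random test points in the disk, bounded away from 0.
rng = np.random.default_rng(3)
z = 0.8 * np.sqrt(rng.uniform(0.01, 1, 40)) * np.exp(2j * np.pi * rng.uniform(0, 1, 40))

def is_even(h):
    return np.allclose(h(z), h(-z))

f_special = lambda z: 1.0 / (w + w * z ** 2)   # the function 1/(omega_a + conj(omega_a) z^2)
f_generic = lambda z: z ** 2                   # a generic even function

print(is_even(CC_star(f_special)))   # True:  this even function stays even under C_b C_b^*
print(is_even(CC_star(f_generic)))   # False: a generic even function does not
```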
Also on the negative side, the subspaces ø and N(C_a-1), for a 0, cannot be joined by a geodesic: There exist no geodesics of the Grassmann manifold of H^2 joining N(C_0+1) and N(C_a+1), for a 0. Note that, by Theorem <ref>, part 1, for b=0: N(C_0+1)^⊥∩ N(C_a-1)=N(C_0-1)∩ N(C_a-1)=ℂ 1; whereas by Theorem <ref>, Assertion 3, left hand identity, we have that N(C_0+1)∩ N(C_a-1)^⊥=N(C_0-1)^⊥∩ N(C_a-1)^⊥={0}. On the affirmative side, a direct consequence of the results in the previous section is the existence of unique normalized geodesics of the Grassmann manifold joining =N(C_0-1) with N(C_a-1), ø=N(C_0+1) with N(C_a+1), and with N(C_a+1): Let a∈𝔻, a 0. * There exists a unique (geodesic) curve δ^-_0,a(t)=e^tZ^-_0,a of the Grassmann manifold of H^2, with (Z^-_0,a)^*=-Z^-_0,a, Z^-_0,a⊂ø and Z^-_0,a≤π/2, such that e^Z^-_0,a=N(C_a-1). * There exists a unique (geodesic) curve δ^+_0,a(t)=e^tZ^+_0,a of the Grassmann manifold of H^2, with (Z^+_0,a)^*=-Z^+_0,a, Z^+_0,a⊂ø and Z^+_0,a≤π/2, such that e^Z^+_0,aø=N(C_a+1). * There exists a unique (geodesic) curve δ^+,-_0,a(t)=e^tZ^+,-_0,a of the Grassmann manifold of H^2, with (Z^+,-_0,a)^*=-Z^+,-_0,a, Z^+,-_0,aø⊂ and Z^+,-_0,a≤π/2, such that e^Z^+,-_0,aø=N(C_a-1). * Follows from assertion 1 in Theorem <ref>. * Follows from assertion 2 in Theorem <ref>. * N(C_0-1)∩ N(C_a+1)^⊥={0}, is the right hand side of assertion 2 in Theorem <ref>. N(C_0-1)^⊥∩ N(C_a+1)=N(C_0+1)∩ N(C_a+1)={0}, is part 2. of Theorem <ref> for b=0. XX ando Ando, T., Unbounded or bounded idempotent operators in Hilbert space. Linear Algebra Appl. 438 (2013), no. 10, 3769–3775. p-q Andruchow, E.; Operators which are the difference of two projections, J. Math. Anal. Appl. 420 (2014), no. 2, 1634-1653. berkson Berkson, E., Composition operators isolated in the uniform operator topology. Proc. Amer. Math. Soc. 81 (1981), no. 2, 230–232. cpr Corach, G.; Porta, H.; Recht, L., The geometry of spaces of projections in C^*-algebras, Adv. Math. 101 (1993), no. 1, 59–77. cowen Cowen, C. C. Linear fractional composition operators on H^2. Integral Equations Operator Theory 11 (1988), no. 2, 151–160. cowenmccluer Cowen, C. C.; MacCluer, B. D., Composition operators on spaces of analytic functions. Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1995. dixmier Dixmier, J., Position relative de deux variétés linéaires fermées dans un espace de Hilbert, Revue Sci. 86 (1948), 387-399. douglas Douglas, R. G., Banach algebra techniques in operator theory. Second edition. Graduate Texts in Mathematics, 179. Springer-Verlag, New York, 1998. halmos Halmos, P. R., Two subspaces, Trans. Amer. Math. Soc. 144 (1969), 381–389. pr Porta, H.; Recht, L., Minimality of geodesics in Grassmann manifolds, Proc. Amer. Math. Soc. 100 (1987), 464–466. rosenblum M. Rosenblum, Self-adjoint Toeplitz operators and associated orthonormal functions. Proc. Amer. Math. Soc. 13 (1962), 590–595.
http://arxiv.org/abs/2307.00617v1
20230702170128
The Forward-Forward Algorithm as a feature extractor for skin lesion classification: A preliminary study
[ "Abel Reyes-Angulo", "Sidike Paheding" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
The Forward-Forward Algorithm as a feature extractor for skin lesion classification: A preliminary study. Abel Reyes-Angulo and Sidike Paheding, Department of Applied Computing, Michigan Technological University, Houghton, USA ([email protected], [email protected]). Keywords: machine learning, medical imaging, learning representation. Skin cancer, a deadly form of cancer, exhibits a 23% survival rate in the USA when diagnosed late. Early detection can significantly increase the survival rate and facilitate timely treatment. Accurate biomedical image classification is vital in medical analysis, aiding clinicians in disease diagnosis and treatment. Deep learning (DL) techniques, such as convolutional neural networks and transformers, have revolutionized the automation of clinical decision-making. However, computational cost and hardware constraints limit the implementation of state-of-the-art DL architectures. In this work, we explore a new type of neural network that does not need backpropagation (BP), namely the Forward-Forward Algorithm (FFA), for skin lesion classification. While the FFA is claimed to be suited to very low-power analog hardware, BP still tends to be superior in terms of classification accuracy. In addition, our experimental results suggest that the combination of the FFA and BP can be a better alternative for achieving more accurate predictions. § INTRODUCTION Skin cancer is one of the most common types of cancer <cit.>, and early detection and diagnosis can greatly improve the prognosis of patients <cit.>. Dermatologists diagnose skin cancer by visual inspection, which can be challenging because of the large number of lesion types and the visual similarity between them. Therefore, computer-aided diagnosis (CAD) systems are required to assist dermatologists in the early detection and diagnosis of skin cancer, aiming to ensure timely treatment <cit.>. In the past few years, deep learning (DL) has revolutionized the automation of clinical decisions through computational assistance. Techniques such as backpropagation <cit.>, convolutional neural networks (CNNs) <cit.>, and the recently proposed transformers <cit.> have boosted confidence in the use of AI in real clinical settings. However, the computational cost and hardware constraints related to the implementation of state-of-the-art DL architectures limit the feasibility of actual deployment <cit.>. The Forward-Forward Algorithm (FFA) <cit.> has been presented as an alternative for optimizing the training process of neural networks while taking the "mortal computation" cost into account. Although the results reported for the FFA have not outperformed traditional mechanisms such as backpropagation, the trade-off could be lower hardware requirements. In this work, we explore the performance of the FFA for skin lesion classification and compare it to a backpropagation-trained model. Finally, we propose to combine the two techniques during training, where the FFA is used as a feature extractor and backpropagation refines the final prediction. § RELATED WORKS Skin lesion classification has been a significant research area in recent years, and numerous deep-learning techniques have been designed for this task <cit.>. Most existing methods use traditional CNNs, which learn the feature representation through backpropagation <cit.>. However, these methods can be limited by the quality of the feature representation.
In order to overcome this limitation, some researchers have suggested methods that use adversarial training to improve the feature representation. For example, GAN-based methods have been implemented for skin lesion classification, which use a generator network to produce realistic skin lesion images and a discriminator network to classify the generated images as real or fake <cit.>. In contrast to these methods, we herein propose to combine the FFA with traditional backpropagation to enhance the feature representation and improve classification accuracy in skin lesion classification. Our approach does not require adversarial training or pre-training on a large dataset, making it computationally efficient and suitable for large-scale datasets. It is meant to provide a baseline for the performance of a DL architecture trained with the FFA strategy in contrast with backpropagation, in addition to exploring the combination of the two techniques. § METHODOLOGY Our proposed framework consists of two stages: feature representation with the FFA and final prediction with backpropagation. §.§ Backpropagation Introduced by Rumelhart et al. <cit.>, the backpropagation algorithm, utilized for training artificial neural networks, entails calculating the gradient of the error function with respect to the network weights. This gradient information is then used to update the weights by gradient descent, aiming to minimize the global loss function of the task. The authors demonstrated that this algorithm can facilitate the learning of distributed representations of input data, enabling effective generalization to new data. The backpropagation algorithm consists of two main stages: the forward pass and the backward pass. During the forward pass, input data is propagated through the network, generating the corresponding predictions. Subsequently, during the backward pass, the gradients of the loss function are calculated with respect to the network's parameters by propagating the error backward through the neural network. These gradients are then used to update the parameters with an optimization algorithm, such as stochastic gradient descent (SGD) or Adam <cit.>. §.§ The Forward-Forward Algorithm Introduced by Geoffrey Hinton <cit.>, the forward-forward algorithm (FFA) is an innovative learning procedure for neural networks. It draws inspiration from Boltzmann machines <cit.> and Noise Contrastive Estimation <cit.>. Its primary objective is to replace the traditional forward and backward passes of backpropagation with two forward passes. The first forward pass uses positive (real) data, and the second forward pass uses negative data, which the network generates itself. Each layer possesses its own objective function, which essentially aims to maximize the goodness for positive data while minimizing the goodness for negative data. Figure <ref> illustrates the main difference between the traditional backpropagation algorithm and the newer FFA. Mathematically, the probability that an input is positive is computed from the goodness as p(positive) = σ(∑_j y_j^2 - θ), where y_j represents the activity of the j-th hidden unit, θ is a given threshold, and σ is the logistic function. The FFA has been shown to work well on a few small problems <cit.>, but it has not yet been tested on large-scale problems. In addition, the FFA has been claimed to be superior in hardware efficiency, with lower power consumption than backpropagation and its gradient computation <cit.>.
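To make the layer-wise objective concrete, the following is a minimal TensorFlow/Keras sketch of a single FF-trained dense layer. It is an illustrative reading of the procedure described above, not the authors' released code: the threshold value, the length-normalization of the layer input (suggested in the FFA paper for stacking layers), and the use of a per-layer Adam optimizer are assumptions chosen for the sketch.

```python
import tensorflow as tf

class FFDenseLayer(tf.keras.layers.Layer):
    """One Forward-Forward layer trained with its own local 'goodness' objective."""
    def __init__(self, units, threshold=2.0, lr=1e-3):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation="relu")
        self.threshold = threshold
        self.opt = tf.keras.optimizers.Adam(lr)

    def call(self, x):
        # Normalise the input length so only its direction is passed upward.
        x = x / (tf.norm(x, axis=-1, keepdims=True) + 1e-8)
        return self.dense(x)

    def train_step(self, x_pos, x_neg):
        with tf.GradientTape() as tape:
            g_pos = tf.reduce_sum(tf.square(self(x_pos)), axis=-1)   # goodness, positive data
            g_neg = tf.reduce_sum(tf.square(self(x_neg)), axis=-1)   # goodness, negative data
            # softplus(-(g_pos - theta)) = -log sigma(g_pos - theta): push positive goodness up,
            # softplus(g_neg - theta)    = -log(1 - sigma(g_neg - theta)): push negative goodness down.
            loss = tf.reduce_mean(
                tf.math.softplus(-(g_pos - self.threshold)) +
                tf.math.softplus(g_neg - self.threshold))
        grads = tape.gradient(loss, self.dense.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.dense.trainable_variables))
        return loss

# usage sketch:
# layer = FFDenseLayer(500)
# loss = layer.train_step(x_pos_batch, x_neg_batch)   # one local update, no backprop through other layers
```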
§.§ Feature representation in FFA In the FFA, pairs of data points are created by corrupting real data points. The network is then trained to distinguish between the real data points and the corrupted ones, which forces it to learn representations that are robust to such corruption. The proposed method uses the FFA as the initial stage of learning, and the final classification stage is then trained with regular backpropagation. § IMPLEMENTATION DETAILS The proposed methodology was tested with two skin lesion classification benchmark datasets, ISIC 2016 and HAM 10000; details about the datasets are provided in Section <ref>. ISIC 2016 provides training and testing sets with the corresponding classification ground truth (i.e., benign or malignant skin lesion). The HAM 10000 dataset, however, provides only a training set with the corresponding skin lesion type as ground truth. In the latter case, we randomly split the dataset into training and testing sets with a ratio of 8:2, respectively. To alleviate the computational constraints, all images from both datasets were downsampled to 64×64 pixels, maintaining the original RGB channels. The inputs were flattened and normalized prior to training. The architecture for all experiments was composed of 3 fully connected hidden layers (784, 500, 500 units) with ReLU activation and batch normalization. A final softmax activation provides the probability distribution over the classes of the classification task on each dataset. In the case of the FFA, the positive and negative data are generated from the inputs: the one-hot representation of the label is overlaid on the first n pixels of the input image, where n is the number of available classes. Positive data embed the original ground-truth label in the corresponding input image, while negative data overlay a wrong label on the same input image. Figure <ref> illustrates the overlay technique utilized in this work to generate the positive and the negative data (a code sketch of this step is given at the end of this section). The architecture is implemented in a TensorFlow/Keras 2.9 environment. Each experiment is run on a single NVIDIA GeForce RTX 3070 graphics card with 8 GB of dedicated GPU memory. All experiments are run for a total of 250 epochs with a checkpoint callback and a batch size of 64 samples per iteration. The mean squared error (MSE) loss was used along with the Adam optimizer with an initial learning rate of 1 × 10^-3.
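The label-overlay step described above can be sketched as follows. This is only an illustration under our reading of the scheme: the normalization, the way a wrong label is drawn, and the function name are assumptions for the sketch.

```python
import numpy as np

def make_ff_pairs(images, labels, num_classes, seed=0):
    """Embed a one-hot label in the first `num_classes` entries of each flattened,
    normalised image. Positive samples carry the true label, negative samples a
    deliberately wrong one (illustrative reading of the overlay scheme)."""
    rng = np.random.default_rng(seed)
    x = images.reshape(len(images), -1).astype("float32") / 255.0
    labels = np.asarray(labels)
    pos, neg = x.copy(), x.copy()
    wrong = (labels + rng.integers(1, num_classes, size=len(labels))) % num_classes
    pos[:, :num_classes] = np.eye(num_classes, dtype="float32")[labels]   # correct label
    neg[:, :num_classes] = np.eye(num_classes, dtype="float32")[wrong]    # wrong label
    return pos, neg

# e.g. for HAM 10000 (7 classes) with 64x64 RGB inputs:
# x_pos, x_neg = make_ff_pairs(train_images, train_labels, num_classes=7)
```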
The dataset encompasses three main types of skin lesions: melanoma (malignant), seborrheic keratosis, and nevus (benign). These lesion types cover a range of conditions commonly encountered in dermatology practice. The images in the dataset were obtained from patients across multiple countries, resulting in a diverse and challenging dataset. §.§.§ HAM 10000 The Human Against Machine with 10000 training images (HAM10000) dataset <cit.>, widely used in dermatology and computer vision, comprises 10,015 dermoscopic images of skin lesions obtained from diverse clinical sources. It was specifically created for the ISIC 2018 challenge initiated by the International Skin Imaging Collaboration, with the aim of advancing automated analysis of skin lesions. Within the HAM10000 dataset, images represent seven distinct types of skin lesions: melanoma (MEL), melanocytic nevus (NV), basal cell carcinoma (BCC), actinic keratosis (AKIEC), benign keratosis (BKL), dermatofibroma (DF), and vascular lesions (VASC). Figure <ref> shows the numerical by-class distribution of the samples in the HAM 10000 dataset. Figure <ref> shows sample images from the two mentioned datasets. §.§ Performance metrics §.§.§ The error rate (%): The error rate is a commonly used performance metric that measures how often the model's predictions are incorrect. It represents the proportion of incorrectly classified instances out of the total number of instances. The error rate, denoted as ϵ, is a straightforward and intuitive metric that provides a simple measure of the model's performance in image classification tasks and can be mathematically expressed as ϵ = (FP + FN)/(TP + TN + FP + FN) × 100% §.§.§ The ROC-AUC score: The ROC-AUC measures the classifier's ability to distinguish between positive and negative instances across various classification thresholds. The ROC curve (receiver operating characteristic curve) plots the true positive rate (sensitivity) against the false positive rate (1 - specificity), and the ROC-AUC represents the area under this ROC curve. A higher ROC-AUC indicates better classification performance. The ROC-AUC can be mathematically expressed as ROC-AUC score = ∫_0^1 TPR d(FPR) where TPR(f) = TP(f)/P and FPR(f) = FP(f)/N are known as the true positive rate and the false positive rate at decision threshold f, respectively. §.§.§ Experimental results and discussion Experimental results are reported in Table <ref> in terms of the error rate (%), and in Table <ref> in terms of the ROC-AUC score. In both cases, the model shows superior performance when it is trained with the backpropagation method in contrast to when it is trained with the FFA. However, the reported results suggest that better performance can be achieved when both techniques are combined, and the results are consistent over the two datasets. It is important to remark on the following points: (1) The architecture configuration was not intended to achieve state-of-the-art performance but provides a comparative baseline when a DL architecture is trained using either FFA, backpropagation, or both. (2) The use of only fully connected layers limits the size of the architecture due to computational constraints. Therefore, this limitation is expected to be addressed through the implementation of convolutional layers. § CONCLUSION In this work, we investigated the performance of the FFA for the skin lesion classification task. We compared the prediction accuracy of the FFA, backpropagation, and the combination of both.
Experimental results showed that the backpropagation algorithm yields better classification accuracy than the FFA. However, our results also offer promising insights into the use of the FFA as a feature extractor within the neural network. A contrastive learning stage based on the FFA, complemented by a classification stage trained with traditional backpropagation, can enhance skin lesion classification performance compared with a similar architecture setup trained using either the FFA or the backpropagation method alone. One limitation of our study is that we did not evaluate the energy efficiency of the FFA in terms of hardware implementation, which will be the subject of our future work.
http://arxiv.org/abs/2307.02409v1
20230705163225
Utility-Aware Load Shedding for Real-time Video Analytics at the Edge
[ "Enrique Saurez", "Harshit Gupta", "Henriette Roger", "Sukanya Bhowmik", "Umakishore Ramachandran", "Kurt Rothermel" ]
cs.DC
[ "cs.DC" ]
Enrique Saurez^*,1, Harshit Gupta^*,1, Henriette Röger^2, Sukanya Bhowmik^2, Umakishore Ramachandran^1, Kurt Rothermel^2; [1] Georgia Institute of Technology, [2] University of Stuttgart Utility-Aware Load Shedding for Real-time Video Analytics at the Edge ===================================================================== *These authors contributed equally to this work. This work was supported by the German Research Foundation (DFG) under the research grant "PRECEPT II" (RO 1086/19-2 and BH 154/1-2). Real-time video analytics typically require video frames to be processed by a query to identify objects or activities of interest while adhering to an end-to-end frame processing latency constraint. Such applications impose a continuous and heavy load on backend compute and network infrastructure because of the need to stream and process all video frames. Video data has inherent redundancy and does not always contain an object of interest for a given query. We leverage this property of video streams to propose a lightweight load shedder that can be deployed on edge servers or on inexpensive edge devices co-located with cameras and can drop uninteresting video frames. The proposed load shedder uses pixel-level color-based features to calculate a utility score for each ingress video frame, which represents the frame's utility toward the query at hand. The load shedder uses a minimum utility threshold to select interesting frames to send for query processing. Dropping unnecessary frames enables the video analytics query in the backend to meet the end-to-end latency constraint with fewer compute and network resources. To guarantee a bounded end-to-end latency at runtime, we introduce a control loop that monitors the backend load for the given query and dynamically adjusts the utility threshold. Performance evaluations show that the proposed load shedder selects a large portion of frames containing each object of interest while meeting the end-to-end frame processing latency constraint. Furthermore, the load shedder does not impose a significant latency overhead when running on edge devices with modest compute resources. video analytics, edge computing, load shedding § INTRODUCTION Real-time video analytics has been gaining rapid popularity due to its utility in applications such as surveillance <cit.>, driving assistance and safety <cit.>, and factory automation <cit.>. Such applications are typically structured as a pipeline of operators, with the first operator consuming a stream of video frames from cameras and feeding its output to the next operator, and so on. Each operator executes a piece of the overall application logic, for instance, object detection, classification, activity recognition, etc., and extracts relevant insights from camera frames. We specifically focus on video analytics pipelines with stringent end-to-end latency constraints, such that the extracted insight from video processing can be used to trigger a real-time response, e.g., an alert to a car driver. The increasing availability of high-quality and high-frame-rate cameras puts significant pressure on backend compute and networking resources. Although the use of edge resources for running geo-distributed video analytics has been proposed to minimize backhaul bandwidth usage <cit.>, the resource capacity of edge sites is typically limited due to space and power constraints <cit.>.
Oftentimes complex operators like object detection pose heavy compute requirements, such as access to a GPU, which imposes limitations on the number of cameras that can be served at a given edge site. Video streams possess two key characteristics that enable serving more number of cameras with limited resources. Firstly, the appearance of the object-of-interest for a given analytics query is not frequent<cit.>, implying that a large fraction of camera frames do not contain useful information. Secondly, when an object-of-interest exists in a video stream, due to the high frame rate of cameras it usually is present in multiple frames. Dropping a small portion of the frames that an object-of-interest appear in does not affect the overall fidelity of the results. These characteristics motivate the use of load shedding techniques to shed irrelevant frames, to reduce the workload on the application pipeline. Previous work in load shedding has focused on using linear selectivities <cit.> and work with structured queries and data <cit.>. Such techniques haven't been explored for content-based shedding of unstructured data such as video. Previous work in early-discard of video frames either require expensive hardware for feature extraction <cit.> or do not tune the filtering parameters according to the processing load on the backend video analytics pipeline <cit.>. In this work, we present a lightweight load shedding technique that uses a per-query content-based utility function to determine if a frame should be shed. The utility of a frame is calculated as a function of its color distribution. Each query undergoes a learning phase during which the utility function is built. The receives all ingress frames and it drops those whose utility is below the baseline utility threshold. Due to inherent variations in video streams' contents, the utility threshold needs to be dynamically tuned so that the load on backend analytics pipeline is within manageable levels, and the end-to-end processing latency constraint of the query is continuously met. The includes a feedback control-loop that dynamically updates the utility threshold based on the current load on the later stages of the video processing pipeline. This feedback from the later stages ensures that the overall query processing pipeline functions correctly despite differences in the content of the video stream compared to the training set. We incorporate the proposed load shedding technique on a video analytics platform and perform extensive evaluations with real-world analytics queries and video datasets. Our contributions can be summarized as follows: * A workflow for building the per-frame utility function given a query and a labeled training data set. The utility function should be lightweight, i.e., it should be able to process a high rate of ingress frames without imposing significant latency overhead. * A control loop that dynamically tunes the utility threshold based on the current load on the query operators. The objective of the control loop is to keep the end-to-end latency under a query-specific bound. * Performance evaluation of the proposed load shedding approach to demonstrate its efficacy. The rest of the paper is structured as follows. <ref> discusses the context of this work and <ref> presents the related work in the space of load shedding for video analytics queries. <ref> presents the system architecture and the load shedding approach in detail, including the training phase and the dynamic tuning of utility thresholds. 
<ref> presents the design and results of experimental evaluation of the proposed approach. <ref> presents a discussion of relevant questions stemming from this research. <ref> ends the paper with conclusion and a discussion of future work. § BACKGROUND AND PROBLEM STATEMENT We describe the context in which our proposed approach is to be employed, followed by a formal definition of the problem statement. §.§ Context We target real-time video analytics queries that perform detection of target objects of a specific color. Such queries are common in the domains of surveillance (e.g, tracking red cars in response to an AMBER alert <cit.>), traffic control (e.g., detecting if an emergency vehicle is stuck in traffic <cit.>), search and rescue (e.g., locating humans in open water using drones <cit.>), etc. Such queries typically process multiple frames containing a given target object (e.g., a suspicious red car in first example) to extract insights about the object (such as its direction of motion, or which street it went to from an intersection). The query could be designed to process frames containing target objects of a single color (e.g., red suspicious vehicle), frames containing at least one object from the target colors (e.g., white ambulance or red fire truck) or frames containing objects of all target colors. <ref> shows the query model at a high level, which is composed of three components: a filter, a sequence of one or more video processing operators (e.g., Deep Neural Network based object detection), and a sink. The filter discards frames with no useful information - for instance, filtering out frames with contiguous groups of pixels (blobs) of a certain color larger than a certain size. The video processing operators form the core of the query logic. Finally, the sink analyzes the output labels of the video processing operators and sends information to the relevant parties. Depending on the dynamic content of video streams, the fraction of frames processed by the video processing operators (e.g., heavyweight DNNs) and the sink varies with time. This variation in processing load causes significant variation in end-to-end latency of the system as well. To ensure real-time response, the queries have a constraint on end-to-end latency of processing a frame containing a target object. The end-to-end processing latency of a frame is the total time taken between the generation of the frame by the camera to the time when it is executed completely by the application query (including communication delay). We consider a connected camera deployment which could (but not necessarily) be assisted by an edge compute node available in proximity. We assume cameras to contain limited compute capability for running background subtraction and feature extraction (to be discussed in <ref>). Cameras send the foreground of frames along with the associated features downstream. The possible scenarios of the deployment of downstream components, i.e., the and Application Query, are shown in <ref>. In each such scenario, either the compute resource on the edge or the network bandwidth between edge and cloud or between camera and cloud is the bottleneck resource, whose over-utilization results in excessive queuing of video frames, eventually resulting in violation of the end-to-end processing latency constraint. Hence, the 's effective operation is crucial to make sure that the bottleneck resource is not overloaded. 
The objective of the load shedder is to maximize the fraction of frames containing each target object that are sent downstream to the application query after shedding. The load shedder optimizes this objective under the constraint that the end-to-end processing latency stays below the query-specific bound. §.§ Problem Statement We now formally define the problem to be solved by the load shedder by mathematically expressing the objective function and the latency constraint. We define a video stream as a continuous sequence of frames. Therefore, the source video stream generated from a camera is denoted by V=[f_1, f_2, ⋯, f_m]. We define an application query as Q = [ q_1, ⋯, q_n ], where q_i denotes a video processing operator (including filtering), which reads input from the output of q_i-1 and feeds into the input of q_i+1. The first operator in a query always reads the combined video stream coming from the multiple cameras it is serving, as shown in <ref>. We denote the output stream of operator q_i with input stream v as q_i ( v ). We define the set of target objects detected by a query Q in video V as T_Q ( V ) as in <ref>. We represent frame f containing target object o by the relation o ∈ f. T_Q ( V ) = {target objects in V detected by Q } Now, introducing the load shedding component into the video query Q, we construct a query Q' of the form [q_0, q_1, ⋯, q_n], where q_0 is the load shedding component, also denoted by LS. The Quality of Result (QoR) metric we use in this work is designed to measure the number of frames for each target object that are sent downstream by the LS in query Q', i.e., belonging to the output stream LS ( V ). We define the per-target-object QoR for target object o in <ref>. The QoR metric has a value between 0 and 1. QoR_Q ( o, LS, V ) = |{ f ∈ LS( V ) : o ∈ f }| / |{ f ∈ V : o ∈ f }| The overall QoR metric for query Q with the LS against source video V, encompassing all target objects of interest, is defined in <ref>. This metric calculates the average of the per-object QoR metric over all target objects. Thus, it quantitatively measures the aggregate performance of the load shedder for a given source video. QoR_Q ( LS, V ) = ( ∑_o ∈ T_Q ( V ) QoR_Q ( o, LS, V ) ) / |T_Q ( V )| For a given video stream V = [ f_1, f_2, ⋯ , f_m ], we define the end-to-end delay experienced by frame f of video V when processed by query Q' as E2E_V, Q'( f ), which is expressed as shown in <ref>, where q_k is the last operator in Q' that processes frame f, and queue( q_i, f ) and exec( q_i, f ) denote the queuing and execution time of frame f at operator q_i. E2E_V, Q'( f ) = ∑_i=0^k ( queue ( q_i, f ) + exec ( q_i, f ) ) The objective is to maximize the Quality of Result (QoR) while dropping frames such that the given end-to-end latency bound LB is met. More formally, the objective is defined as follows. Maximize QoR_Q (LS, V ) s.t. E2E_V, Q'( f ) ≤ LB ∀  f ∈ V
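To make the QoR definitions concrete, the following sketch computes the per-object and aggregate QoR from per-frame object annotations. The data structures (sets of frame indices per target object, a set of frame indices kept by the load shedder) are our own illustrative assumptions and not part of the system itself.

```python
from typing import Dict, Set

def per_object_qor(frames_with_obj: Set[int], kept_frames: Set[int]) -> float:
    """QoR_Q(o, LS, V): fraction of frames containing object o that survive shedding."""
    if not frames_with_obj:
        return 1.0  # object never appears; treat as trivially satisfied
    return len(frames_with_obj & kept_frames) / len(frames_with_obj)

def aggregate_qor(objects_to_frames: Dict[str, Set[int]], kept_frames: Set[int]) -> float:
    """QoR_Q(LS, V): average per-object QoR over all target objects in the video."""
    if not objects_to_frames:
        return 1.0
    scores = [per_object_qor(f, kept_frames) for f in objects_to_frames.values()]
    return sum(scores) / len(scores)

# Example: object "car_7" appears in frames 10-14 and the load shedder kept frames 11 and 13,
# so per_object_qor({10, 11, 12, 13, 14}, {11, 13}) == 0.4.
```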
§ RELATED WORK Resource Management in Online Video Analytics: Live video analytics is an extremely compute and network resource intensive application. Prior art has adopted two main approaches for managing the high resource requirements of this application - tuning the configuration of the input stream and video processing operators (e.g., frame rate, resolution, etc.) and early discard of uninteresting frames. Online adaptation of camera streams and video processing operators has been explored in several prior works <cit.>. These works target scenarios where end-to-end latency of the order of sew seconds can be tolerated. Their primary objective is to optimize cost of resources needed for supporting all video streams while also maintaining high accuracy. Although these works differ from our proposed approach in terms of objectives, their technique can be adopted to work complementary to our proposed approach. Early-Discard Filtering of Video: Multiple approaches have been taken for performing early discard of unnecessary video frames so that compute and network costs of streaming and processing them can be avoided. Glimpse <cit.> proposed a hardware-software add-on that could use low-powered coarse-grained vision modalities to filter out frames irrelevant to the query at hand. The system uses specialized hardware for motion detection, temperature measurement, etc., which are not available on the cameras in our scenarios. Zhang, et al., <cit.> use a multi-stage pipeline of operators to determine whether a frame is relevant to a query, wherein the operators are implemented using GPUs and impose significant latency overhead to the video processing pipeline's performance. FilterForward <cit.> consists of a base DNN whose intermediate layers' outputs are used by a per-application binary classifier that decides whether to filter out a frame or not. However, in that system, the execution latency of the base model itself is very high (>200ms) and the cost of performing early discard amortizes only when running at least 4 concurrent application queries per base model. The EarlyDiscard strategy proposed in Wang, et al., <cit.> uses an inexpensive DNN to perform filtering of frames. However, there is no way to tune the filter so as to ensure end-to-end latency guarantee of video processing. Reducto <cit.> makes use of difference in low-level visual features (such as pixels, edges, etc.) across consecutive frames to determine if processing the new frame would result in a difference in the query's result. However, it operates at 1-second granularity, meaning that it cannot guarantee end-to-end latency less than that. Additionally, it does not support tuning of the filtering based on load experienced by backend application query. In summary, previous work on early discard of frames either use expensive filters that add significant latency overhead to the video processing pipeline, or do not support the fine-tuning of the filter to ensure end-to-end latency. § UTILITY-AWARE LOAD-SHEDDING In this section, we present a high-level architecture of the proposed load shedding system, highlighting the interactions between different components. Then we describe the design and functionality of each individual component in isolation. 
§.§ System Architecture We extend real-time video analytics systems which are equipped with edge-computing capability (referring to both edge sites and compute capability co-located with cameras) with our proposed load shedding system that sheds a portion of the input frames to maintain the given latency bound (LB) during overload. Figure <ref> shows the main components involved in performing the aforementioned tasks and their interactions—the , the application query, and the Control Loop. Each operator (including ) reads from an ingress queue which is (populated by the upstream operator or source cameras) and pushes the output data to the egress queue. The load shedding system must perform two primary tasks—(1) decide when to shed and how much to shed, and (2) decide which frames to shed. We describe the latter task first, for which the computes a utility/ importance value for each video frame that denotes the probability of the frame being useful for the application query. The utility is calculated using the color content of the frame and the application query at hand. The intuition behind the design of the is to discard video frames that have a low probability of being useful for the given application query (see <ref>). The can be dynamically configured with a utility threshold, such that it drops frames with utility less than the threshold. The Control Loop component dynamically computes the utility threshold and updates the . The utility threshold calculation is done so as to sustain the current load on the application query, while meeting the end-to-end latency bound (LB) (see <ref>). More specifically, the Control Loop component monitors the queue lengths of each operator in the query, and estimates the current observed end-to-end latency. Based on the observed end-to-end latency and the query's latency requirement, the Control Loop computes the fraction of frames that should be dropped by the , which is then transformed into a utility threshold (see <ref>). Transforming the desired frame drop rate to a utility threshold is done on the basis of the utility distribution of frames observed in the past. The subsequent sections describe the per-frame utility calculation function (exploration of color features for its design), the calculation of utility threshold for a given target drop rate, and the control loop in more detail. §.§ Building the utility function The makes the decision of whether to discard a frame or not based on each frame's utility value - which is calculated by the utility function. It takes as input the frame's color-based features and computes a scalar utility value. §.§.§ Hue-Saturation-Value Color Model We use the Hue-Saturation-Value (HSV) model for representing the color of pixels, which represents how colors appear under light (<ref>). The HSV model is widely used in computer vision applications over the Red-Green-Blue (RGB) model because HSV separates the color information from intensity information (which helps deal with situations like lighting changes or shadows). We use the HSV triplet of each pixel to calculate the distribution of colors in each frame, which forms the input feature set for the utility function (described in detail in subsequent sections). The developer of the Application Query specifies the color of the target objects in terms of a hue range C as input to the . For instance, the color red is represented using the hue ranges [0,10) ∪ [170,180). 
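As a concrete illustration of how a target color can be specified as hue ranges on top of the HSV representation, the sketch below builds a per-pixel mask for red using OpenCV. It is a minimal sketch under our own assumptions (OpenCV's 0-179 hue scale for 8-bit images, BGR input frames) and not the system's actual feature extractor.

```python
import cv2
import numpy as np

# Hue ranges on OpenCV's 0-179 hue scale; red wraps around the hue circle.
RED_HUE_RANGES = [(0, 10), (170, 180)]

def red_pixel_mask(frame_bgr: np.ndarray, hue_ranges=RED_HUE_RANGES) -> np.ndarray:
    """Return a boolean mask of pixels whose hue falls in the target color's ranges."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    mask = np.zeros(hue.shape, dtype=bool)
    for lo, hi in hue_ranges:
        mask |= (hue >= lo) & (hue < hi)
    return mask

def color_pixel_fraction(frame_bgr: np.ndarray) -> float:
    """Fraction of pixels matching the target hue; the kind of color statistic
    from which the utility features in the following subsections are built."""
    return float(red_pixel_mask(frame_bgr).mean())
```

The saturation and value channels of the same HSV image are what the binned features described next are computed from.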
§.§.§ Training data To train the utility function, we use a labeled stream of videos from multiple cameras as training data set. Each element of the training data set is of the form (f, l), where f represents the frame and l represents the label. Each frame f is described by a list of pixels, with each pixel being represented by a triplet ( h, s, v ). The h, s, v elements of a pixel's notation denote the pixel's Hue, Saturation and Value fields respectively. The label l tells whether the frame contains an object of interest, i.e., whether the frame was a match for the given query. Henceforth, we use the term positive to denote frames which contain one or more target objects, and negative to denote frames which do not contain them. The goal of the utility function is to be able to separate the computed utilities of positive and negative frames, such that using a utility threshold would result in effective load shedding. In the remaining part of this section, we first outline our observations from the training data on how to separate positive and negative frames using the HSV model. Based on these observations, we build our utility function. §.§.§ Hue as feature for separating positive and negative frames We first explore the use of the Hue field of pixels to calculate whether the given frame contains a target object of the given color. We do so by computing a metric Hue Fraction for the color C, denoted by HF_C. HF_C ( f ) = | pixel p ∈ f: hue ( p ) ∈ C ||pixel p ∈ f| A higher Hue Fraction would imply higher likelihood for the frame to contain a target object of a given color. Hence, a threshold-based approach on HF_C is a candidate for the . However, our analysis of hue fractions of frames in our dataset showed that the distribution of HF_C for negative frames overlaps significantly with positive frames across all videos. For instance, <ref> shows the hue fractions of the color Red of all frames across all videos in the dataset; it can be seen that the hue fraction distribution of positive frames overlaps with negative frames. This overlap would prevent a threshold-based approach on the hue fraction from effectively differentiating between positive and negative frames. This effect is captured in <ref>, where the per-object QoR metric drops steeply with hue fraction thresholds without achieving a significant frame drop rate. We posit that this overlap in distribution is because both positive and negative frames contain red-colored pixels; however, the saturation and value fields of those pixels in the positive and negative frames would have distinctly discernible distributions for the two. §.§.§ Using Saturation and Value fields for differentiating frames In order to separate positive frames from negative ones, we analyze the distribution of Saturation and Value fields for Red pixels across videos. We discretize the range of pixel saturation and value into bins of size s and v respectively. We define two functions sat_bin and val_bin that map a pixel's saturation sat( p ) and value val( p ) respectively to their corresponding bins. sat_bin ( p ) = i i · s ≤ sat ( p ) < ( i+1 ) · s val_bin ( p ) = j j · v ≤ val ( p ) < ( j+1 ) · v We leverage the division of the saturation and value range into bins to transform each frame f into a two-dimensional matrix for a given color C, such that each element of the 2D matrix denotes the fraction of pixels belonging to a particular saturation and value bin. 
.9!PF_C ( f ) = [ PF_C^( 0,0 )( f ) PF_C^( 0,1 )( f ) ⋯ PF_C^( 0,n )( f ); PF_C^( 1,0 )( f ) PF_C^( 1,1 )( f ) ⋯ PF_C^( 1,n )( f ); ⋮ ⋮ ⋱ ⋮; PF_C^( m,0 )( f ) PF_C^( m,1 )( f ) ⋯ PF_C^( m,n )( f ) ] Where PF_C^( i,j ) is the fraction of pixels whose hue falls in the range C and their saturation and value fall into the i^th and j^th bin respectively. The number of bins for saturation and value are B_S and B_V respectively. .9!PF_C^( i,j )( f ) = | pixel p ∈ f : hue ( p ) ∈ C in_bin(p,i,j) || pixel p ∈ f : hue ( p ) ∈ C| in_bin(p,i,j) = ( bin_sat( p ) = i bin_val( p ) = j ) Using the per-frame distribution of pixels in various sat­u­ration-value bins, we quantify how useful each such bin is in classifying a frame as positive or negative. We compute a metric for each saturation-value bin using the pixel distribution of positive and negative frames that denotes the correlation of the bin with the particular label (positive or negative) for a given frame. M_C, +ve^( i,j ) = AVG PF_C^( i,j )( f ) ∀( f, 1 ) ∈𝒟 M_C, -ve^( i,j ) = AVG PF_C^( i,j )( f ) ∀( f, 0 ) ∈𝒟 The distribution of above utility for the color Red, i.e., M_C, true and M_C, false is shown in <ref>. The bins with higher saturation value are much stronger indicators of whether a frame is positive. §.§.§ Computing per-frame utility We use the utility of each saturation-value bin (from <ref> and <ref>) to compute the utility value for a frame. The utility is a weighted sum of the M_C,+ve^( i,j ) value for each saturation-value bin, weighted by the pixel fraction of the frame in that bin. U_C ( f ) = M_C,+ve^( i,j )· PF_C^( i,j )( f ) <ref> schematically describes the approach of building the utility function using training data and using the function to calculate utility value for frames at runtime. §.§.§ Computing per-frame utility for composite color queries Computing the utility of a frame for composite queries requires using the utility function computed for the component colors. U_C_1 C_2( f ) = max ( U_C_1( f ), U_C_2( f ) ) U_C_1 represents the utility function for color C_1 normalized over the training data, such that the maximum utility is 1.0. Normalization of per-color utility functions allows their effective composition. Similarly for computing the utility of a query looking for both colors C_1 AND C_2 in a frame, we use the minimum of U_C_1 and U_C_2 as the composite utility. §.§.§ Equivalence classes and utility We posit that frames that have similar distribution of per-color Pixel Fraction are more likely to produce a similar query result, and hence posses similar utility for being processed by the backend query. Therefore we group such "similar" frames into equivalence classes based on which bins they fall in for each color. The set of equivalence classes is defined as the Cartesian product of the set of bins for each color (shown in Equation <ref> where h_i represents each color). ℰ = B_h_1× B_h_2×⋯× B_h_11 We define 𝒞 as the function that maps a frame to its corresponding equivalence class in ℰ. 𝒞( f ) = ( b_1, b_2, ⋯ , b_11) such that b_i ∈ B_h_i and PF_h_i∈ b_i §.§ Computing utility threshold As mentioned earlier, we use the notion of equivalence classes to simplify utility calculation for each frame. We compute the utility value for each equivalence class and assign the same value to all frames that fall in that class. Specifically, the utility value of a class e is defined as the fraction of frames with a positive label out of all frames that belong to class e. 
𝒰( e ) = |{ f : 𝒞( f ) = e & l=1 ∀( f, l ) ∈𝒟}||{ f : 𝒞( f ) = e ∀ f ∈𝒟}| §.§ Computing Utility Threshold The utility threshold to be used at any given point of time by the is computed using the target frame drop rate that is currently desired. This mapping is learned using the distribution of utility values for a set of frames ℋ in recent history. Using the utility function U_C, we build a cumulative distribution function (CDF) of utility values over frames in the history ℋ, as shown in equation <ref>. CDF ( u ) = |{ f : U_C ( f ) ≤ u ∀( f, l ) ∈ℋ}||ℋ| Note that using a CDF-based approach allows incorporating utility values of more recent frames into ℋ to update the CDF with changing video content. Initially, the training data set 𝒟 itself can be used as the set ℋ of historical frames. To determine the utility threshold for a given target drop rate r, we use the inverse of the CDF function iteratively to compute the minimum utility value u_th such that CDF ( u_th) ≥ r. The intuition behind this approach is that for the utility threshold u_th, the will drop a fraction r from ℋ. Since the utility distribution of recent historical frames is expected to be similar to new frames in the near term, the is expected to drop r fraction of new incoming frames. However, the observed frame drop rate of new incoming frames might not equal the target drop rate r because it's transformation to utility threshold depends entirely on the distribution of frame utilities in ℋ. Both the target drop rate and observed drop rate have a range from 0 to 1. §.§ Design of Control Loop The utility and threshold calculation described in <ref> are used by a to control the end-to-end (E2E) latency of the execution of a video processing query. The requires a control loop (as shown in <ref>) to define the target drop rate to keep the E2E latency within the specified bound. §.§.§ Control Loop mechanisms The developer of the video query defines the required E2E latency, which guides the execution of the . Besides this requirement, the uses as inputs both the frames per second (FPS), the (current) processing latency of backend query execution, and the (current) network latency between camera and , and between and the backend running the query. The uses two main mechanisms to control the end-to-end latency: admission control and dynamic queue sizing. These mechanisms differ primarily in how quickly they adapt the target drop rate based on changes in backend execution load. Admission Control. The admission control decides which frames are considered for further processing. To do so, it monitors the execution and queuing delays of a frame for all operators of the Application Query and thereby computes the processing latency of each frame by the query. It uses the average perceived query processing latency (proc_Q) to calculate the currently supported throughput (ST) by the backend query as: ST = 1proc_Q The ST is then compared against the FPS (frame per second) coming into the to calculate the required target drop rate as follows: Target Drop Rate = max(0, 1-STFPS ), which prescribes the fraction of frames that need to be dropped to match the ingress frame rate going into the backend query with the available processing throughput, such that the system is stable. As described in <ref>, we transform the target drop rate to a utility threshold by using the utility distribution of historical frames, to filter new ingress frames based on their utility value. Dynamic Queue Sizing. 
Internally, the (as shown in <ref>) also manages a queue that it uses to ensure that the the end-to-end latency of any frame accepted by admission control is met. The expected E2E latency for the N^th frame in the queue is shown in <ref>. net_cam,LS and net_LS,Q represent the average of continuously monitored network latencies between cameras and and between and query backend respectively. proc_CAM denotes the average latency incurred in processing frames on the camera (including background subtraction, feature extraction, etc.) (analyzed in <ref>). Expected E2E Latency = (N + 1) · proc_Q + net_cam,LS + net_LS,Q + proc_CAM If one of the component latencies increases, later frames in the 's internal queue could violate the E2E latency requirement. Dynamic queue sizing helps reduce the likelihood of a frame with a high utility violating the query's latency requirement. Dynamic queue sizing updates the size of the 's queue and drops the frames with the lowest utility when the size is reduced. This design allows the frames with highest utilities to be processed instead of following other policies (e.g., LIFO - that blindly drop older frames). Dynamic queue sizing reacts faster than updates to the utility threshold (that guides admission control) and reduces the likelihood of an E2E latency violation. The queue is always at least of size one to avoid starving the downstream operators. Dynamic queue sizing can also be seen as a second layer of admission control. Even if a new frame has a utility higher than the current threshold, it will be dropped if the queue is full and it has the lowest utility of all frames currently in the queue. Similarly, if an incoming new frame has a greater utility than the lowest utility frame that is already in the queue, then the latter will be dropped and the new frame added to the queue. This queue shedding keeps the latency requirement valid even for new incoming frames. Both mechanisms allow the to fine-tune the admission of new frames to extract the most utility out of the video stream while still maintaining the required E2E latency. § EVALUATIONS In this section, we present the results of experimental evaluations carried out to test the efficacy of the proposed utility-based load shedding approach. Our experiments are tailored to validate the following hypotheses. * The proposed utility function is able to compute utility value for frames from an unseen video (not in training set) and effectively differentiate between positive and negative frames. * The proposed control loop adapts to changing workload pattern and is able to meet application performance requirements without sacrificing QoR metric. * The utility value calculation is light-weight and imposes low overhead on edge devices. §.§ Data Set We generated a benchmark of synthetic videos with VisualRoad <cit.>, which is a benchmark to evaluate video database management systems. VisualRoad uses the autonomous driving CARLA simulator <cit.> to generate videos from CCTV cameras located in a realistic city-like environment, including pedestrians, bicycles, different types of vehicles, and all the surroundings (roads and buildings). Additionally, it allows perturbing the locations of cameras (by specifying a seed parameter) and weather conditions, thereby generating a number of different scenarios. We evaluate our proposed color-based and associated control loop with 25 videos from 7 seeds value (3 or 4 videos from each seed value) using sunny weather. 
Each video represents 15 minutes of a camera video stream facing a road or highway in a city, with a frame rate of 10 fps. Different cameras have different distributions of cars, varying from cars always being present to rarely appearing. In our results, we report metrics for videos that contained a sufficient number of target objects for the given query. §.§ Implementation In order to evaluate the aforementioned hypotheses, we implement the video processing system along with the load shedder. The system being evaluated has three main components: the Video Streamer, the load shedder, and the Backend Query Executor, as shown in <ref>. All the components exchange messages using the communication library ZeroMQ <cit.>, with the messages serialized using Cap'n Proto <cit.>. The load shedder is implemented in Python 3 and all the other components in C++. The Video Streamer reads the video files generated with VisualRoad, performs background subtraction, extracts the color features for each frame (as described in <ref>), and streams them to the load shedder. The Video Streamer component is capable of emulating multiple cameras sending their frames' features to the load shedder by interleaving their frames. Next, the load shedder implements the utility calculation and load shedding described in <ref>. The utility function we use in our evaluations uses 8 bins for both saturation and value, meaning that the bin sizes s and v are equal to 32. Through preliminary experiments (not shown in this paper) we found that these bin sizes offer the best separation of positive from negative frames. Finally, the Backend Query Executor runs the video analytics query to be executed on the video stream. The load shedder and Backend Query Executor run on an NC6 Virtual Machine on Microsoft Azure with 6 Intel Xeon E5-2690 v3 vCPUs, 56 GiB of RAM, and an NVIDIA Tesla K80. There are two additional components that implement the control loop: the Metrics Collector and a Transmission Control Mechanism. The Metrics Collector measures the end-to-end latency in the Backend Query Executor, aggregates it, and forwards it to the load shedder. The Metrics Collector also monitors the total incoming frames per second. The load shedder uses this information to define the target drop rate. The Transmission Control Mechanism implements a backpressure algorithm using tokens between the load shedder and the Backend Query Executor. The latter has a queue that allows the load shedder to send frames to be processed when a token is freed (i.e., when the processing of a frame is completed). The tokens allow the load shedder to control which frames will be processed by the Backend Query Executor when they are running on different machines and to balance the trade-off between waiting for a better frame to arrive at the load shedder and sending the current best frame. In addition, the tokens give feedback to the load shedder on when to send the current best frame to process. If no tokens are remaining, the load shedder should keep analyzing the incoming frames and drop frames with lower utility if required. On the other hand, if the Backend Query Executor's queue is empty, the load shedder should immediately send something to be processed. §.§ Video queries Our evaluations consider object detection and tracking queries that need to look at multiple frames of target objects.
The model query we use consists of (1) a filter component that groups together spatially adjacent pixels into blobs and drops frames that do not have at least one blob of a certain minimum size, (2) a second filter that ignores frames that do not have a blob(s) of the target object's color, (3) a DNN that performs object detection, (4) a filter that looks for the detected objects' color and label before sending the information to the sink. The object detection DNN we use is efficientdet-d4 <cit.>, which forms the most computationally intensive components of these components and incurs a bulk of processing latency. As described earlier in <ref>, the end-to-end latency of a frame depends on which operators of the query it traverses, which depends on the frame's content. We evaluate simple queries for target objects of a single color (e.g., red) and composite queries for objects of multiple colors (e.g., red or yellow). §.§ Performance on Unseen Videos We evaluate the performance of the on unseen videos using a cross-validation study. We use the video dataset described above and iteratively split it into training and testing set, changing the split in each iteration. In each iteration, we build train the 's utility function using the training set and compute the utility and correctness metrics for the test videos. §.§.§ Single-color query: Red We begin by evaluating a query looking for target objects of a single color, which in this are red cars. <ref> shows the utility of positive and negative frames for the videos in our dataset. The results show that the utility function computes significantly higher utility value for positive frames than negative ones. This result is significant because it shows the performance of the utility function on unseen videos. We demonstrate how such a separation in utility values between positive and negative frames is useful in being able to detecting target objects and maintaining a high QoR value while also shedding a significant fraction of (useless) frames. In <ref>, we show the variation of frame drop rate and QoR metric with respect to utility threshold. As expected, an increasing utility threshold results in an increase in the frame drop rate, which also includes a small portion of useful frames containing target objects, and hence results in a drop in the QoR metric. Comparison against Content-agnostic load shedding. We now compare the performance of the proposed utility-based load shedding approach against a content-agnostic approach that sheds a fixed rate of incoming frames using a uniform probability. Firstly, <ref> shows the variation of frame drop rate and the associated fraction of objects detected against the target drop rate of the . As we have seen in <ref>, there is a significant portion of negative frames that have a low utility. Hence, a low target drop rate results in the selection of a utility threshold that drops many more frames than the target drop rate. However, since those frames are low-utility ones, the QoR remains at 1.0 until the target drop rate becomes so high that higher-utility frames (containing target objects) need to be dropped. That is when the QoR metric dips. Next, <ref> shows the variation of observed frame drop rate and QoR against target drop rate for the Content-agnostic shedding approach. Since the shedding is based on a uniform random probability distribution, we repeat each setting 20 times. 
Even though the observed frame drop rate is roughly equal to the target drop rate, the QoR falls sharply because content-agnostic shedding often sheds frames containing target objects. Finally, <ref> compares the QoR that the proposed load shedding approach can achieve for a given observed frame drop rate with the content-agnostic shedding approach. Although the QoR of the content-agnostic approach continuously falls with increasing drop rate, the QoR for utility-based approach has a visible drop only when the observed frame drop rate gets close to 1.0. The result shows that the utility-based approach is highly selective in picking frames to send to Backend Query Executor, and one can achieve a much higher QoR with it for a given observed frame drop rate compared to a content-agnostic shedding approach. §.§.§ Composite-color query: Red OR Yellow We perform a similar analysis for a composite queries - those that are tasked with (1) detecting target objects that are either Red or Yellow in color (we refer to this as the OR query), and (2) detecting all frames containing both Red and Yellow target objects (we refer to this as the AND query). As before, we iteratively select a set of videos as the training set and the complementary set as the test set. The utility value of frames for the OR query is shown in <ref>. Similar to the results for the single-color query, the utility value of positive frames is significantly higher than that of negative frames. Note that for the composite OR query, a positive frame is one that contains either a Red or Yellow target car. <ref> shows the frame drop rate and QoR metric against the utility threshold. The QoR remains stagnant at 1.0 (selecting all frames containing target objects) with a high frame drop rate, until the utility threshold becomes high enough to start dropping positive frames. <ref> shows the utility value of frames for the AND query, and the differentiation between positive and negative frames is visible here as well. Note that for the composite AND query, a positive frame is one that contains both a Red or Yellow car. §.§ Application Evaluations The microbenchmarks evaluated the soundness of our utility calculation and compared it against a content-agnostic approach. In this subsection, we detail an E2E evaluation of running both the utility calculation and the control loop using a real-time video stream and show how it can timely control the latency of frame processing and avoid overloading the backend. §.§.§ Synthetic scenario First, we evaluate a synthetic worst-case scenario in which a sudden burst of high-processing activity occurs in the ingress video. The video comprises three segments: one low-utility frames with no target object, high-utility frames containing target object(s), and high-utility frames with no target object. To create such a tailored video, we obtain segments from the videos generated with Visual Road that are known a-priori to have those properties, and stitch them together to form a 15 minutes long video with each of the above three segments being 5 minutes long. The expectation in this experiment is that during the video's first low-utility no-object segment, the will allow frames to be processed by the filter-stage in the backend query, even when the frame utility is low. This is because the the filter operator would drop these frames as they don't contain large blobs of the specific color that the query is looking for (because there are no target objects). 
Hence, the processing latency proc_Q is low, leading to the throughput that can be handled by the backend higher than the incoming frame rate (<ref>), and thereby a low target drop rate (<ref>). Next, in the second segment of the video, the will start shedding frames because all frames would be processed by the expensive DNN as they contain target objects. Thus, the increases the utility threshold so that the backend query executor can keep the end-to-end latency latency bounded for the frames that are processed. Finally, in the third segment with no target objects, the would stop shedding again and have an execution profile similar to the first segment. <ref> shows the time-varying behavior of the query execution. The x-axis shows time, the y-axis in upper graph of <ref> shows the max end-to-end latency in milliseconds for each 5 minutes time window, along with with the end-to-end latency requirement. The lower graph in <ref> shows the number of frames processed at each stage group every 5 seconds. The upper graph in <ref> shows how the latency is always less than the required end-to-end latency. Similarly, the lower graph in <ref> portrays how the decides the frame drop rate at each segment, with no shedding required in the first and third segment and plenty of shedding in the high-processing segment. Both these results match our expectations, and go on to show that the proposed can almost instantly react to changes and have a low number of violations even with extreme changes, that too for real-time video processing, where the input has usual a frame rate of 24 fps or less. There was only 1 latency violation during the peak in the second segment, while the was recalculating the queue size and the utility threshold. §.§.§ Realistic smart-city scenario The previous sub-section analyzed the worst-case scenario for the . This sub-section analyzes the running directly on a subset of the videos generated using Visual Road. The Video Streamer generates a stream of frame features interleaved from multiple videos, thereby emulating the receiving frames from multiple cameras. We show the end-to-end latency and the distribution of frames processed at each stage, along with the variation the QoR metric with an increasing number of concurrent video streams. Similarly to <ref>, <ref> shows that the can keep the processing latency bounded by shedding some of the frames. There are more spikes than in the synthetic scenario because the DNN is invoked unpredictably by frames from different videos at different time periods. The is effective in minimizing the latency violations caused due to these sudden surges in load. <ref> shows the latency and elements being processed for five concurrent videos. Additionally, <ref> shows that the proposed utility-based load shedding approach can exploit the statistical multiplexing between multiple cameras' video streams and achieve a high QoR. On the other hand, a content-agnostic approach (that sheds frames with uniform probability) has poor QoR. We compute the target drop ratio for the latter approach using <ref> and assuming that proc_Q is 500 ms, which is a rather lenient assumption for the baseline. §.§ Runtime Overhead Analysis We evaluate the additional latency incurred by a video analytics system that includes the proposed load shedding approach as compared to executing the backend query without load shedding. 
We assume that cameras have colocated compute capability on them, which they use to do three tasks: (1) converting the color space of the source video stream from RGB to HSV, (2) subtracting the background from the frame to extract the foreground, and (3) extracting color features from the foreground which serves as the input to the . We evaluate the time taken to perform these tasks on a Nvidia Jetson TX1 with a quad-core ARM Cortex-A57 processor and 4 GB of RAM, which we use to represent the typical co-located compute power with cameras. We use a video stream with continuously high activity to stress test the above operations and obtain worst-case delay numbers. <ref> shows the median latency incurred by each of the component tasks. The overall additional latency remains below 35 ms which can support the video streams of multiple cameras operating at 10 frames per second (and even higher rates like the common 24 fps). With composite queries needing features calculated for multiple colors, the feature extraction portion of the overhead would be multiplied by the number of colors considered, while the other components' latencies would not change (as they are computed once per frame). § DISCUSSION Automatic selection of Hue ranges for a query. The proposed load shedding approach requires minimal intervention of the application query developer, except having to provide the Hue range for the target objects. This task can also be automated by the analysis of bounding boxes of target objects available in the training data. Techniques such as dominant color detection <cit.> can be used to automatically extract the Hue ranges in target objects and fed into the utility calculation function. Feature calculation vs utility calculation on camera. We could further improve the efficiency of the network by pushing the utility calculation itself to the camera. This decision will incur a tradeoff between the overhead of maintaining a distributed utility model versus a higher communication cost of sending all frames than those with a high utility value. When the cameras can calculate the utility, the load shedder can let them know what utility threshold to use to reduce unnecessary frames sent. However, when the model is updated due to either new data available or drift from the previous model, it needs to be updated at each camera, incurring additional bandwidth requirements. Therefore, this decision should be taken considering the connectivity scenario of the camera network. § CONCLUSION In this paper we have shown how a low-cost can be built for real-time video processing on edge devices. The goal of the proposed is to shed frames such that the end-to-end processing latency of each video frame is under the latency bound for the query, while maximizing the quality of result (QoR). The uses color features of ingress video frames to compute a utility value, that denotes how likely a given frame is to contain a target object of the query at hand. We show how the proposed utility function is able to differentiate between video frames that do contain target objects from frames that don't. We propose a utility threshold based approach for the to enforce a target rate of dropping frames. The also consists of a control-loop component that continuously monitors the execution of frames and detects overload in the backend query. In the case of overload, it adjusts the target frame drop rate, which in turn sets a new utility threshold for frames to be compared against. 
Through our evaluations we have shown that the proposed load shedder is able to differentiate frames containing target objects from those that do not. It is able to attain a high frame drop rate, wherein it drops as many "useless" frames as possible, thereby maintaining a high QoR metric. We show this result for queries looking for target objects of a single color as well as composite queries looking for target objects of two possible colors. We show that the proposed load shedder is able to adjust the utility threshold dynamically according to the observed load on the backend query to meet the end-to-end latency constraint. Finally, we show that the overhead of executing the operations of the load shedding approach is not significant on edge devices.
http://arxiv.org/abs/2307.00993v1
20230703131808
Universal Scaling Law of Quasiparticle Nernst Effect in Cuprates: A Unified Schematic Analysis for Transverse Transport
[ "Yi-feng Yang" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.supr-con" ]
[][email protected] Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China We discover a universal scaling law for the quasiparticle Nernst coefficient in underdoped cuprates, whose magnitude decreases exponentially with increasing temperature as confirmed in YBa_2Cu_3O_y. We attribute it to the basic mathematical structure of the conductivity formula with a narrow effective bandwidth of nonzero Berry curvatures associated with the pseudogap. A unified scheme is then developed to analyze transverse transport in hole-doped cuprates and clarify the puzzling disparity in determining the pseudogap temperature from different measurements. Our proposal opens the avenue for exploring potential scaling laws in the intermediate temperature region and may have broad applications in strongly correlated or narrow band systems. Universal Scaling Law of Quasiparticle Nernst Effect in Cuprates: A Unified Schematic Analysis for Transverse Transport Yi-feng Yang August 1, 2023 ======================================================================================================================== The Nernst effect measures the transverse electric field generated by a longitudinal thermal gradient under a perpendicular magnetic field <cit.>. It is typically small in simple metals due to the so-called Sondheimer cancellation <cit.>. Investigations of the Nernst effect in correlated materials was greatly stimulated by the study of cuprate superconductors <cit.>, where large Nernst signals have been reported in the pseudogap phase and attributed to vortices or votexlike excitations. While most studies have focused on the superconducting contribution <cit.>, a relatively small quasiparticle term has later been identified and used to reveal nematicity of the pseudogap <cit.>. In spite of its importance, a basic understanding is still lacking for the quasiparticle Nernst effect. In particular, there is still no mathematical expression to describe its temperature evolution, let alone the underlying cause. We report here the discovery of a universal scaling law for the quasiparticle contribution to the Nernst effect in hole-doped cuprates, and propose a simple scaling analysis based on the generic mathematical structure of the conductivity formula. The scaling is attributed to some potential topological properties of charge carriers in a finite energy window associated with the pseudogap. Comparison with experiments confirms our analysis and suggests a unified scheme to account for the transverse transport properties such as the Hall coefficient and the Nernst coefficient. We clarify the puzzling disparity in determining the pseudogap temperature from different experimental probes. Our work highlights the peculiar scaling properties of intermediate temperature physics and may have far-reaching implications in understanding transverse transport in correlated systems. We start with the Nernst measurement in YBa_2Cu_3O_y (YBCO) by Taillefer's group <cit.>. The data are reproduced in Fig. <ref> and show an abrupt upturn at low temperatures. The large positive contribution is attributed to superconducting fluctuations, while the smaller negative contribution at higher temperatures comes from the quasiparticles. We have therefore two components, ν/T=ν^sc/T+ν^qp/T. 
The presented data have been scaled with respect to some onset temperature T_ν assigned to the pseudogap <cit.>. Comparison of the data in Figs. <ref>(a) and <ref>(b) reveals a large in-plane anisotropy of the Nernst coefficient, suggesting possible nematicity in the pseudogap phase. We will not discuss this issue here, but focus on their overall temperature dependency. Our discovery is that the quasiparticle Nernst coefficient exhibits an exponential scaling at intermediate temperatures: ν^qp/T=A + B e^-T/T_0, where T_0 is a characteristic temperature and A, B are free parameters. Despite of the large anisotropy, all data can be well fitted using the exponential function (solid line) that covers exactly the region where quasiparticle contribution dominates, and yields similar temperature scale T_0≈ T_ν/8, with T_ν being about 220 K for p=0.12 (y=6.67) assigned as the onset temperature of the pseudogap. Such a scaling can also be easily verified in other cuprate compounds. To understand its origin, we study the general formula for transverse transport <cit.>: σ^α_xy/H=4π^2/3∫ dω-∂ f(ω)/∂ω(ω/T)^αℬ(ω,T), where f(ω) is the Fermi-Dirac distribution function and ℬ(ω,T) is determined by specific transport mechanism. α=0, 1, 2 stands for the Hall conductivity (σ_xy), the off-diagonal Peltier coefficient (α_xy), and the thermal Hall conductivity (κ_xy). The linear-in-field approximation is valid for the quasiparticle contribution since ν/T is almost unchanged for field up to 15 T in the scaling region <cit.>. The term -∂ f(ω)/∂ω accounts for the effect of thermal broadening and will be shown to play an essential role in causing the exponential scaling law. The above formula covers a wide variety of possibilities where ℬ(ω,T) may have different origins and take different forms. For intrinsic contribution, ℬ(ω,T) is the Berry curvature density given by electronic band structures or the Fermi surface topology <cit.>. For extrinsic skew scattering, it is related to the magnetization and the scattering rate <cit.>. Because of these complications, the behavior of transverse transport is largely unknown except in very special cases. For example, at zero temperature, it has been argued that the quasiparticle Nernst coefficient is related to the carrier mobility (μ) and the Fermi energy (ϵ_F) via ν/T=π^2μ/3ϵ_F. This simple relation seems to hold over six orders of magnitude for a large spectrum of quantum materials <cit.>, but at finite temperatures, no such universal relation is known, and numerous puzzling experimental data have been accumulated that are too anomalous to interpret. The mystery lies in the fact that most studies assume a temperature much smaller than the carrier bandwidth, so that the transport properties rely heavily on microscopic details such as the Berry curvatures, the Fermi surface topology, or the scattering anisotropy. Consequently, their behaviors are highly non-universal depending on the variation of ℬ(ω,T) with energy. It is therefore quite unexpected that universality may actually emerge when the temperature reaches the bandwidth of the effective carriers that dominate the transverse transport so that microscopic details are thermally averaged out. Similar scaling law has been observed in the thermal Hall conductivity <cit.> and the anomalous Hall coefficient <cit.> attributed to nonzero Berry curvatures in a narrow energy window associated with the pseudogap, which is much smaller than the bandwidth of noninteracting electrons and therefore highly nontrivial. 
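As a concrete illustration of how the exponential form ν^qp/T = A + B e^{-T/T_0} can be used in practice, the sketch below performs the three-parameter least-squares fit over an intermediate-temperature window with scipy. The data here are synthetic placeholders generated for the example, not the YBCO data of the figure; as stated above, applying such fits to the measured curves yields T_0 ≈ T_ν/8 for both in-plane directions.

```python
import numpy as np
from scipy.optimize import curve_fit

def nu_over_T(T, A, B, T0):
    # Proposed quasiparticle scaling: nu^qp/T = A + B * exp(-T/T0)
    return A + B * np.exp(-T / T0)

# Placeholder data standing in for nu/T in the quasiparticle-dominated window
# (the actual YBCO data are not reproduced here).
T = np.linspace(40.0, 180.0, 30)                               # temperature in K
rng = np.random.default_rng(0)
data = nu_over_T(T, 0.0, -8.0, 27.0) + 0.05 * rng.standard_normal(T.size)

(A, B, T0), _ = curve_fit(nu_over_T, T, data, p0=(0.0, -5.0, 30.0))
print(f"fitted T0 = {T0:.1f} K   ->   T_nu / T0 = {220.0 / T0:.1f}")
```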
We develop below a unified scheme to analyze these transverse transport properties including the Nernst coefficient. To see how the scaling occurs, we assume a regular ℬ(ω,T) being nonzero only for ω∈[-D_h,D_h], where D_h is an effective bandwidth of transverse charge carriers induced by the pseudogap. Applying the Taylor expansion and defining L_n(t)=t^n/n!∫^1/t_-1/t dxx^n/cosh^2(x/2), where x=ω/D_h and t=T/D_h, we rewrite the conductivity formula as σ^α_xy/H=π^2/3t^α∑^∞_n=0 b_n(t) L_n+α(t) where b_n(t)=ℬ^(n)(0,T) is the n-th derivative of ℬ(ω,T) assumed to be regular for all n. The integral L_n are evaluated numerically and plotted in Fig. <ref>(a) for n=0, 2, 4. The odd terms are zero. L_0 saturates at low temperature and decreases continuously with increasing t, while all others approach zero as t→ 0 or ∞, and exhibit a maximum in between. These are expected since L_n(t)∝ t^n for t→ 0 and ∝ t^-1 for t→∞. In between, they look quite different and their magnitude decreases rapidly with increasing n. But surprisingly, as shown in Fig. <ref>(b), they can all be scaled to a single curve, L_n∼ e^-t/t_0, over the wide intermediate temperature region. The extracted t_0=T_0/D_h (inset) is around unity and varies slightly with n. The very different magnitude of L_n implies that the major contribution to σ^α_xy comes from a single term in the expansion for regular choices of b_n. We have thus σ^0_xy∝ b_0L_0, σ^1_xy∝ b_1L_2/t, σ^2_xy∝ b_0L_2/t^2. To see how they behave, we first set b_n to be independent of temperature. The results are plotted in Fig. <ref>(c). For t→0, σ^0_xy and σ^2_xy saturate, while σ^1_xy∝ t. For t→∞, σ^α_xy∝ t^-1-α for all α. Strikingly, as plotted in Fig. <ref>(d), all curves again collapse onto the exponential function at intermediate temperatures, reflecting a special property of the basic formula. The scaling is not affected after divided by t^α except that the functions decay more rapidly at high temperatures to yield a smaller t_0. We see in the inset of Fig. <ref>(d) that t_0 is about 0.8 for L_0 and 0.12 for L_4/t^4, reduced by a factor of 7. Actually, expanding t^-α around t_0, we obtain e^-t/t_0/t^α→ e^-t/t_0-α t/t_0, which immediately yields t_0→ t_0/(1+α) and explains the reduction (dashed line). We now turn to the Nernst coefficient <cit.>: ν=1/H(σ^1_xy/σ^0_xx-σ^1_xx/σ^0_xxσ^0_xy/σ^0_xx), which involves not only transverse quantities, but also the longitudinal resistivity ρ=1/σ^0_xx and the Seebeck coefficient S=σ^1_xx/σ^0_xx. Both ρ and S depend on microscopic details of quasiparticle scattering and exhibit non-universal behavior in the intermediate temperature region. To simplify the discussion, we ignore the Seebeck term, which is relatively small in many cases, and consider some limits by taking ρ∝ T^β, where β=0 accounts for disorder, 1 for strange metal, and 2 for the Fermi liquid. We have then ν/T∝ b_1L_2/t^2-β. For constant b_1, all three situations (β=0, 1, 2) have already been shown in Fig. <ref>. For β=0, ν/T∝ L_2/t^2 approaches a constant at zero temperature, while the other two curves, L_2/t and L_2, are both suppressed as t→0 and have a peak at around t_0/2. Regardless of these details, their collapse onto the same exponential function proves the robustness of the observed scaling in Fig. <ref>. Including temperature dependence in b_n won't destroy the scaling, but may have important consequences on the understanding of experimental measurements. 
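The behavior of L_n(t) described above is straightforward to reproduce numerically. The sketch below evaluates the integrals with scipy, extracts t_0 from a log-linear fit over an intermediate-temperature window (the window limits are an assumption of this sketch, so the printed numbers differ somewhat from the values quoted above), and checks that dividing by t^α reduces the extracted t_0 roughly as t_0/(1+α).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def L_n(n, t):
    # L_n(t) = (t^n / n!) * integral_{-1/t}^{1/t} dx x^n / cosh^2(x/2)
    val, _ = quad(lambda x: x**n / np.cosh(x / 2.0)**2, -1.0 / t, 1.0 / t)
    return t**n / factorial(n) * val

def fitted_t0(t, y):
    # Linear fit of log y = log C - t/t0 over the window (valid since y > 0 here)
    slope, _ = np.polyfit(t, np.log(y), 1)
    return -1.0 / slope

t = np.linspace(0.8, 3.0, 60)        # assumed intermediate-temperature window
L0 = np.array([L_n(0, x) for x in t])
L2 = np.array([L_n(2, x) for x in t])
L4 = np.array([L_n(4, x) for x in t])

print("t0 of L0     :", round(fitted_t0(t, L0), 2))
print("t0 of L2     :", round(fitted_t0(t, L2), 2))
print("t0 of L4     :", round(fitted_t0(t, L4), 2))
# Dividing by t^alpha shrinks t0 roughly as t0/(1+alpha), cf. the expansion above:
print("t0 of L4/t^4 :", round(fitted_t0(t, L4 / t**4), 2))
```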
This is best seen from the anomalous Hall coefficient, R_H= ρ^2σ^0_xy∝ b_0t^2βL_0, which diverges at large t if β>0.5 and b_0 is independent of temperature. This contradicts with the experiments, where R_H also obeys the exponential scaling and approaches a constant at high temperatures <cit.>. Thus, b_0 must diminish as t→∞, which is conceivable since the quasiparticles are strongly correlated in underdoped cuprates and have a finite lifetime τ that decreases rapidly with temperature. In the semiclassical approximation <cit.>, the Boltzmann transport equation requires ρ∝τ^-1∝ t^β and ℬ∝τ^2. Assuming b_n∝τ^2∝ t^-2β leads to R_H∝ L_0, σ^0_xy∝L_0/t^2β, ν/T∝L_2/t^2+β. Note that for t→ 0, disorder often dominates to give β=0 and recover the well-known ν∝ T. Below we focus on the the exponential scaling at intermediate temperatures. Because t_0 is roughly unity for L_n, the above formulas predict that the exacted T_0 from R_H gives roughly the effective quasiparticle bandwidth with nonzero Berry curvatures. For β=2 as in YBCO <cit.>, σ^0_xy and ν/T should have similar but much smaller T_0 since they are suppressed by the same prefactor t^-4. To see if these might be the case, we compare in Fig. <ref>(a) the experimental data of ν/T, σ_xy, and R_H for YBCO at a similar doping <cit.>. At first glance, they behave quite differently, with the Hall coefficient persisting to much higher temperature than others. However, as shown in Fig. <ref>(b), they all fall nicely onto the exponential scaling function. The extracted T_0 is given in the inset. We see similar values of roughly 30 K for ν/T and σ_xy, but about 200 K for R_H. The latter is close to the onset temperature of the pseudogap from the resistivity <cit.>. It is quite amazing that the ratios between these numbers agree even quantitatively well with our simple scaling analysis. Additionally, we may apply the same analysis to the Hall angle and expect it to satisfy Θ_H=σ^0_xx/σ^0_xy∝t^β/L_0, which predicts a crossover from Θ_H∝ t^β at low temperatures to tanΘ_H∝ e^-t/t_0 at higher temperatures. Systematic analyses of experimental data have confirmed this crossover in YBCO <cit.>. Disparity in the pseudogap temperature from different measurements has been a long-standing puzzle widely observed in cuprate experiments, causing a so-called large pseudogap from the susceptibility or the Hall coefficient and a small one from other transport measurements <cit.>. Our analysis suggests that their difference is nothing but a simple consequence of the temperature dependent prefactors. The excellent agreement of our theory with experiment confirms the consistency of our unified interpretation of transverse transport properties in hole-doped cuprates. It is then important to distinguish the true pseudogap transition temperature and the ill-defined “onset" temperature of these observables. From our point of view, the latter is not a good indicator of the onset of the pseudogap phase and could be sometimes even misleading. Our idea of exploring universal scaling properties from the formula's structure may also have important implications in other strongly correlated or narrow band systems. In heavy fermion materials, a universal scaling indeed has been found for the Nernst coefficient, which follows closely the temperature evolution of emergent heavy quasiparticles <cit.>. Because the temperature is lower than the coherence temperature characterizing the heavy electron bandwidth, the above analysis might not apply. 
But it is wondering if this different scaling may actually fall into another interesting scenario where ℬ(ω,T) exhibits ω/T scaling, namely, ℬ(ω,T)=b(T)ℬ(ω/T), possibly due to quantum criticality. We would then have σ^α_xy/H≈π^2/3b(T)∫ dxx^α/cosh^2(x/2)ℬ(x), such that σ^α_xy∝ b(T). More investigations are needed to elaborate on this possibility. In any case, it will be interesting to explore more potential scaling laws in the intermediate temperature region based solely on the mathematical structure of theoretical formulas instead of resorting to microscopic details, which might provide important information on some unexpected but quite generic properties of the underlying quasiparticles. This work was supported by the National Key R&D Program of China (Grant No. 2022YFA1402203), the National Natural Science Foundation of China (Grants No. 12174429, No. 11974397), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33010100). 99 Nernst1886 A. v. Ettingshausen and W. Nernst, Ann. Phys. Chem. 265, 343 (1886). Behnia2009 K. Behnia, J. Phys. Condens. Matter 21, 113101 (2009). Sondheimer1948 E. H. Sondheimer, Proc. Roy. Soc. A 193, 484 (1948). Xu2000 Z. A. Xu, N. P. Ong, Y. Wang, T. Kakeshita, and S. Uchida, Nature 406, 486 (2000). Wang2003 Y. Wang, S. Ono, Y. Onose, G. Gu, Y. Ando, Y. Tokura, S. Uchida, and N. P. Ong, Science 299, 86 (2003). Behnia2016RPP K. Behnia and H. Aubin, Rep. Prog. Phys. 79, 046502 (2016). Daou2010Nature R. Daou, J. Chang, D. LeBoeuf, O. Cyr-Choiniére, F. Laliberté, N. Doiron-Leyraud, B. J. Ramshaw, R. Liang, D. A. Bonn, W. N. Hardy, and L. Taillefer, Nature 463, 519 (2010). Yang2020PRL Y.-F. Yang, G.-M. Zhang, and F. C. Zhang, Phys. Rev. Lett. 124, 186602 (2020). Yang2023arXiv Y.-F. Yang, arXiv:2304.08428 (2023). Haldane2004PRL F. D. M. Haldane, Phys. Rev. Lett. 93, 206602 (2004). Xiao2010RMP D. Xiao, M.-C. Chang, and Q. Niu, Rev.. Mod. Phys. 82, 1959 (2010). Fert1987 A. Fert and P. M. Levy, Phys. Rev. B 36, 1907 (1987). Nagaosa2010RMP N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. 82, 1539 (2010). Hwang1994PRL H. Y. Hwang, B. Batlogg, H. Takagi, H. L. Kao, J. Kwo, R. J. Cava, J. J. Krajewski, and W. F. Peck, Jr., Phys. Rev. Lett. 72, 2636 (1994). Ong1991PRB N. P. Ong, Phys. Rev. B 43, 193 (1991). Narikiyo2020JPSJ O. Narikiyo, J. Phys. Soc. Jpn. 89, 124701 (2020). Segawa2004PRB K. Segawa and Y. Ando, Phys. Rev. B 69, 104521 (2004). Wuyts1996PRB B. Wuyts, V. V. Moshchalkov, and Y. Bruynseraede, Phys. Rev. B 53, 9418 (1996). Timusk1999RPP T. Timusk and B. Statt, Rep. Prog. Phys. 62, 61 (1999). Luo2008PRB H. G. Luo, Y. H. Su, and T. Xiang, Phys. Rev. B 77, 014529 (2008). Yang2016RPP Y.-F. Yang, Rep. Prog. Phys. 79, 074501 (2016). Yang2020PRR Y.-F. Yang, Phys. Rev. Research 2, 033105 (2020).
http://arxiv.org/abs/2307.02760v1
20230706035354
Geometric Mean Type of Proportional Reduction in Variation Measure for Two-Way Contingency Tables
[ "Wataru Urasaki", "Yuki Wada", "Tomoyuki Nakagawa", "Kouji Tahata", "Sadao Tomizawa" ]
stat.ME
[ "stat.ME" ]
In a two-way contingency table analysis with explanatory and response variables, the analyst is interested in the independence of the two variables. However, if the test of independence does not show independence or clearly shows a relationship, the analyst is interested in the degree of their association. Various measures have been proposed to calculate the degree of their association, one of which is the proportional reduction in variation (PRV) measure which describes the PRV from the marginal distribution to the conditional distribution of the response. The conventional PRV measures can assess the association of the entire contingency table, but they can not accurately assess the association for each explanatory variable. In this paper, we propose a geometric mean type of PRV (geoPRV) measure that aims to sensitively capture the association of each explanatory variable to the response variable by using a geometric mean, and it enables analysis without underestimation when there is partial bias in cells of the contingency table. Furthermore, the geoPRV measure is constructed by using any functions that satisfy specific conditions, which has application advantages and makes it possible to express conventional PRV measures as geometric mean types in special cases. Keywords: Contingency table, Diversity index, Geometric mean, Independence, Measure of association, Proportional Reduction in Variation Mathematics Subject Classification: 62H17, 62H20 § INTRODUCTION Categorical variables are formed from categories and are employed in various fields such as medicine, psychology, education, and social science. Considering two types of categorical variables, one consisting of R categories and the other consisting of C categories. These two variables have R × C combinations, which can be represented in a table with R rows and C columns. This is called a two-way contingency table, where each (i, j) cell (i=1,2,…, R; j=1,2,…, C) displays only the observed frequencies. Typically, the two-way contingency table is used to evaluate whether the two variables are related, i.e., statistically independent. If the independence of the two variables is rejected for example by Pearson's chi-square test, or they are clearly considered to be related, we are interested in the strength of their association. As a method to investigate the associative structure of the contingency table, association models have been proposed by <cit.>, <cit.>, and <cit.>. This method can determine whether there is a relationship between row and column variables by the goodness-of-fit test with models.
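In code, the baseline independence test just mentioned amounts to a single call; the 3 × 4 table of counts below is hypothetical and only serves to fix notation (rows: explanatory variable X, columns: response variable Y).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical table of observed counts n_ij (rows: X, columns: Y)
counts = np.array([[30, 10,  5,  5],
                   [10, 25, 10,  5],
                   [ 5, 10, 20, 15]])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p_value:.4g}")
```

A significant result of this kind only indicates that some association exists; it does not quantify how strong that association is, which is the point taken up next.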
However, this method only focuses on whether or not there is a relationship, and we can not quantitatively determine what the degree of association is. Instead of the goodness-of-fit test with the models, a variety of measures have been proposed as indicators that can show the degree of association within the interval from 0 to 1 by <cit.>, <cit.>, <cit.>, <cit.>,<cit.>, and <cit.>. These measures calculate the degree of deviation from independence for each (i, j) cell in the contingency table and derive the degree of association from the sum of all cells. Because of the method, these measures can be applied to most contingency tables without distinguishing whether row and column variables are explanatory or response variables. However, in actual contingency table analysis, there are cases where the row and column variables are defined as explanatory or response variables. In such cases, it is not appropriate to analyze each variable by ignoring its characteristics. Alternative measures have been proposed by <cit.>, and <cit.>, which is explained by the proportional reduction in variation (PRV) from the marginal distribution to the conditional distributions of the response. The measures constructed by the method is called PRV measure. The PRV measure is an important tool in summarizing the strength of association of the entire contingency table because the way it is constructed makes it easy to interpret the values. In addition, we sometimes want to focus on the association of some categories of explanatory variables, but conventional PRV measures underestimate the strength and thus may not be able to accurately reflect the partial association numerically. In the study of models and scales for evaluating the symmetry of the contingency table, <cit.>, <cit.>, and <cit.> proposed to evaluate the partial symmetry by using the geometric mean. On the other hand, little research has been done in the case of the partial association. In this paper, we propose a geometric mean type of PRV (geoPRV) measure via a geometric mean and functions satisfying certain conditions. Therefore, the geoPRV measure has application advantages and makes it possible to express previously proposed PRV measures as geometric mean types in special cases. By using the geometric mean to sensitively capture the association of each explanatory variable, analysis can be performed without underestimating the degree of association when cells in the contingency table are partially biased. In addition, the geoPRV measure enables us to know local association structures. Furthermore, the geoPRV measure can be analyzed regardless of whether the categorical variable is nominal or ordinal because its value does not change even when rows and columns are swapped. The rest of this paper is organized as follows. Section 2 introduces previous research on an extension of generalized PRV (eGPRV) measure and proposes the geoPRV measure. Section 3 presents the approximate confidence intervals of the proposed measures. Section 4 confirms the values and confidence intervals of the proposed measure using several artificial and actual data sets, and compares them with the eGPRV measure. Section 5 presents our conclusions. § PRV MEASURE In this Section, we introduce measures using function f(x) that satisfy the following conditions: (i) The function f(x) is convex function; (ii) 0 · f(0/0)=0; (iii) lim_x→ +0f(x)=0; (iv) f(1)=0. Examples of the function are introduced, and models and measures using it have been proposed by <cit.>, <cit.> and <cit.>. 
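Two classical choices satisfying conditions (i)-(iv) are f(x) = x log x, which gives the Shannon-entropy variation, and f(x) = x^2 - x, which gives the Gini concentration; the one-parameter family f(x) = (x^{λ+1} - x)/λ employed in the numerical experiments interpolates between them. A quick numerical check of conditions (i), (iii), and (iv) for these choices (an illustrative sketch):

```python
import numpy as np

def f_shannon(x):   return x * np.log(x)              # -> Shannon-entropy variation
def f_gini(x):      return x * x - x                  # -> Gini-concentration variation
def f_lam(x, lam):  return (x**(lam + 1) - x) / lam   # lam > -1, lam != 0

x = np.linspace(1e-6, 1.0, 2001)
for name, f in [("x log x", f_shannon), ("x^2 - x", f_gini),
                ("lambda=0.5", lambda x: f_lam(x, 0.5))]:
    y = f(x)
    convex = np.all(np.diff(y, 2) >= -1e-12)   # discrete second differences >= 0
    print(f"{name:10s}  f(1) = {f(np.array([1.0]))[0]:+.2e}  "
          f"f(0+) ~ {y[0]:+.2e}  convex: {convex}")
```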
These proposals are intended to generalize existing models and measures and have application advantages that make it easy to construct new ones and allow adjustments with tuning parameters to fit the analysis. Section 2.1 provides some conventional PRV measures by <cit.>. In Section 2.2, we propose a geometric mean type of PRV measure and its characteristics. §.§ Conventional PRV Measure Consider R× C contingency table with nominal categories of the explanatory variable X and the response variable Y. Let p_ij denote the probability that an observation will fall in the ith row and jth column of the table (i=1,…, R;j=1,…, C). In addition, p_i· and p_· j are denoted as p_i·=∑_l=1^C p_il, p_· j=∑_k=1^Rp_kj. The conventional PRV measure has the form Φ = V(Y)-E[V(Y | X)]/V(Y) = V(Y)-∑_i=1^Rp_i·V(Y | X=i)/V(Y), where V(Y) is a measure of variation for the marginal distribution of Y, and E[V(Y | X)] is the expectation for the conditional variation of Y given the distribution of X (see, ). Φ is using the weighted arithmetic mean of V(Y | X=i), i.e, ∑_i=1^Rp_i·V(Y | X=i). By changing the variation measure, various PRV measures can be expressed, such as uncertainty coefficient U for the variation measure V(Y)=-∑_j=1^C p_· jlog p_· j called Shannon entropy and concentration coefficient τ for the variation measure V(Y)=1-∑_j=1^C p_· j^2 called Gini concentration (see, ). <cit.> proposed a generalized PRV measure T^(λ) that includes U and τ by using V(Y) = ( 1-∑_j=1^C p_· j^λ+1)/λ as the variation measure which is <cit.> diversity index of degree λ for the marginal distribution p_· j. Furthermore, <cit.> proposed an extension of generalized PRV (eGPRV) measure that includes U, τ, and T^(λ): Φ_f = -∑_j=1^Cf(p_· j) - ∑_i=1^R p_i·[ - ∑ _j=1 ^C f (p_ij/p_i·) ]/- ∑_j=1^C f(p_· j). The variation measure used in the eGPRV measure Φ_f are V(Y) = - ∑_j=1^Cf(p_· j). §.§ Geometric Mean Type of PRV Measure We propose a new PRV measure by using the weighted geometric mean of V(Y | X=i) that aims to sensitively capture the association of each explanatory variable to the response variable. Assume that p_· j>0 and V(Y | X=i) is a real number greater than or equal to 0 (i=1,…, R; j=1,…, C). We propose a geometric mean type of PRV (geoPRV) measure for R× C contingency tables defined as Φ_G = V(Y)-∏_i=1^R [ V(Y | X=i) ]^p_i·/V(Y), where V(Y) is a measure of variation for the marginal distribution of Y. The geoPRV measure can use the same variation as the conventional PRV measure, for example, Φ_Gf = -∑_j=1^C f(p_· j) - ∏_i=1^R [ -∑_j=1^C f ( p_ij/p_i·) ]^p_i·/-∑_j=1^C f(p_· j), where the variation measure V(Y) = -∑_j=1^C f(p_· j). In addition, the following theorem for Φ_Gf holds. The measure Φ_Gf satisfies the following conditions: * Φ_f ≤Φ_Gf. * Φ_Gf must lie between 0 and 1. * Φ_Gf=0 is equivalent to independence of X and Y. * Φ_Gf=1 is equivalent to ∏_i=1^R [ V(Y | X=i) ]^p_i·=0, i.e., for at least one s, there exists t such that p_st≠0 and p_sj=0 for every j with j≠ t. The value of Φ_Gf is invariant to permutations of row and column categories. For proof of Theorem <ref> and Theorem <ref>, see Appendix <ref> and Appendix <ref>, respectively. The geoPRV measure differs from the conventional PRV measure in that Φ_Gf=1 when there exists i such that p_ij=p_i·≠ 0. Another important feature of the geoPRV measures is that it takes higher or equal values than the conventional PRV measures, allowing for a stronger representation of row and column relationships. 
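A direct implementation makes the difference between the arithmetic and the geometric averaging of V(Y | X=i) visible. In the hypothetical table below the first row is concentrated entirely in the third column, so Φ_Gf = 1 by the last condition of the theorem above, while Φ_f remains well below 1; the table and the λ values are chosen purely for illustration.

```python
import numpy as np

def f_lambda(x, lam):
    # f(x) = (x^(lam+1) - x)/lam, with the lam -> 0 limit x*log(x)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    if lam == 0:
        out[pos] = x[pos] * np.log(x[pos])
    else:
        out[pos] = (x[pos]**(lam + 1) - x[pos]) / lam
    return out

def variation(q, lam):
    # V(Y) = -sum_j f(q_j) for a probability vector q
    return -np.sum(f_lambda(q, lam))

def prv_measures(P, lam):
    """Return (Phi_f, Phi_Gf) for a joint probability table P (rows: X, cols: Y).
    All rows are assumed to carry positive probability mass."""
    P = np.asarray(P, dtype=float)
    P = P / P.sum()
    p_row, p_col = P.sum(axis=1), P.sum(axis=0)
    V_marg = variation(p_col, lam)
    V_cond = np.array([variation(P[i] / p_row[i], lam) for i in range(P.shape[0])])
    arith = np.sum(p_row * V_cond)          # weighted arithmetic mean
    geo = np.prod(V_cond ** p_row)          # weighted geometric mean
    return (V_marg - arith) / V_marg, (V_marg - geo) / V_marg

# Hypothetical joint table: row 1 is completely associated with column 3
P = np.array([[0.00, 0.00, 0.20, 0.00],
              [0.10, 0.08, 0.06, 0.06],
              [0.07, 0.07, 0.08, 0.08],
              [0.05, 0.05, 0.05, 0.05]])
for lam in (0.0, 0.5, 1.0):
    phi_f, phi_Gf = prv_measures(P, lam)
    print(f"lambda = {lam}:  Phi_f = {phi_f:.3f}   Phi_Gf = {phi_Gf:.3f}")
```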
A property of the geoPRV measure is that the larger the value of Φ_G, the stronger the association between the response variable Y and the explanatory variable X. In other words, the larger the value of Φ_G, the more accurately you can predict the Y category if you know the X category than if you do not. In contrast, if the value of Φ_G is 0, the Y category is not affected by the X category at all. § APPROXIMATE CONFIDENCE INTERVAL FOR THE MEASURE Since the measure Φ_G is unknown, we derived a confidence interval of Φ_G. Let n_ij denote the frequency for a cell (i,j), and n=∑_i=1^R∑_j=1^C n_ij (i=1,2,…,R; j=1,2,…,C). Assume that the observed frequencies { n_ij} have a multinomial distribution, we consider an approximate standard error and large-sample confidence interval for Φ_G using the delta method (, and Appendix C in ). Let Φ_Gf denote a plug-in estimator of Φ_Gf. √(n)( Φ_Gf-Φ_Gf ) converges in distribution to a normal distribution with mean zero and variance σ^2 [ Φ_Gf ], where σ^2[Φ_Gf] = (δ^(f))^2 [ ∑_i=1^R∑_j=1^Cp_ij(Δ_ij^(f))^2 - (∑_i=1^R∑_j=1^C p_ijΔ_ij^(f))^2 ], with δ^(f) = ∏_s=1^R [ -∑_t=1^C f ( p_st/p_s·) ]^p_s·/( ∑_t=1^C f(p_· t) )^2, Δ_ij^(f) = f'(p_· j) -ε_ij^(f)∑_t=1^C f(p_· t), ε_ij^(f) = log[ -∑_t=1^C f ( p_it/p_i·) ] + ∑_t=1^C { -p_it/p_i· f' ( p_it/p_i·) } + f' ( p_ij/p_i·)/∑_t=1^C f' ( p_it/p_i·), and f'(x) is the derivative of function f(x) by x. The proof of Theorem <ref> is given in Appendix <ref>. Let σ^2 [ Φ_Gf] denote a plug-in estimator of σ^2 [ Φ_Gf]. From Theorem <ref>, since σ[ Φ_Gf] is a consistent estimator of σ[ Φ_Gf], σ[ Φ_Gf] / √(n) is an estimated standard error for Φ_Gf, and Φ_Gf± z_α/2σ[ Φ_Gf] / √(n) is an approximate 100(1-α)% confidence limit for Φ_Gf, where z_α is the upper two-sided normal distribution percentile at level α. § NUMERICAL EXPERIMENTS In this section, we confirmed the performance of geoPRV measure Φ_Gf, and the difference between Φ_Gf and the conventional PRV measure Φ_f proposed by <cit.>. We use Φ_f and Φ_Gf, which have the variation measure V(Y)=-∑_j=1^Cf(p_· j). In addition to applying f(x)=( x^λ+1 - x )/λ for λ>-1 and g(x)=(x - 1)^2/(ω x + 1 - ω) - (x - 1)/(1 - ω) for 0 ≤ω < 1 (see, ), the former is expressed as Φ_f^(λ) and Φ_Gf^(λ), while the latter is expressed as Φ_g^(ω) and Φ_Gg^(ω). For the tuning parameters, set λ=0, 0.5, 1.0 and ω=0, 0.5, 0.9. §.§ Artificial data 1 Consider the artificial data in Table <ref>. These are data to clearly show the difference in characteristics between conventional PRV measures and the geoPRV measure. Table <ref>c shows the case where the explanatory variable in the first row has a complete association structure with the response variable in the third column. On the other hand, Table <ref>a and Table <ref>b show the case where the explanatory variable in the first row has a weak or slightly strong association structure to the response variable, respectively. Consider the simulation data in Table <ref>a and Table <ref>b. These are data to clearly show the difference in characteristics between conventional PRV measures and the geoPRV measure. Table <ref>a shows the case where the explanatory variable in the first column has a complete association structure with the response variable in the third column. On the other hand, Table <ref>b shows the case where the explanatory variable in the first line has a related structure to the response variable. The values of Φ_f^(λ) and Φ_Gf^(λ) are provided in Table <ref>a and Table <ref>b, respectively. 
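Before discussing those values, a brief practical aside on uncertainty: the asymptotic interval of Section 3 requires the closed-form variance given above, and a nonparametric bootstrap offers a quick numerical cross-check. The sketch below, with a hypothetical table and the Gini-type choice f(x) = x^2 - x, is such an alternative check, not the delta-method interval itself.

```python
import numpy as np

def f1(x):
    return x * x - x                 # Gini-type f (lambda = 1)

def phi_Gf(P):
    P = np.asarray(P, dtype=float)
    P = P / P.sum()
    p_row, p_col = P.sum(axis=1), P.sum(axis=0)
    V_marg = -np.sum(f1(p_col))
    V_cond = np.array([-np.sum(f1(P[i] / p_row[i])) for i in range(P.shape[0])])
    return (V_marg - np.prod(V_cond ** p_row)) / V_marg

counts = np.array([[30, 10,  5,  5],
                   [10, 25, 10,  5],
                   [ 5, 10, 20, 15]])        # hypothetical observed frequencies
n = counts.sum()
p_hat = (counts / n).ravel()

rng = np.random.default_rng(1)
boot = []
for _ in range(2000):
    resampled = rng.multinomial(n, p_hat).reshape(counts.shape)
    if (resampled.sum(axis=1) > 0).all():    # skip degenerate resamples with empty rows
        boot.append(phi_Gf(resampled))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Phi_Gf = {phi_Gf(counts):.3f},  95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```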
For instance, Table <ref>a shows that when Table <ref>c is parsed the measure Φ_f^(λ)=0.2628, 0.1990, 0.1784 for each λ and does not capture the complete association structure of the first row. In contrast, Φ_Gf^(λ) = 1 in all λ, allowing us to identify the local complete association structure. Similarly, consider the results of the Φ_Gf^(λ) and Φ_f^(λ) in any λ from Table <ref>a to Table <ref>c. As can be seen from these results, the simulation also shows that Φ_Gf^(λ) changes significantly by capturing partially related structures compared to Φ_f^(λ). Similarly, Φ_Gf^(λ) show larger values than Φ_f^(λ) in any λ by capturing the relationship between explanatory variables of the first row and the response variables in Table <ref>a and Table <ref>b. §.§ Artificial data 2 Consider the artificial data in Table <ref>. These data are intended to examine the value of the geoPRV measure Φ_Gf as the association of the entire contingency table changes. Therefore, we obtained data suitable for the survey by converting the bivariate normal distribution with means μ_1 = μ_2 = 0 and variances σ^2_1 = σ^2_2 = 1, in which the correlation coefficient was changed from 0 to 1 by 0.2, into the 4 × 4 contingency tables with equal-interval frequency. From Theorem <ref> and the properties of the PRV measures, when the absolute values of the correlation coefficients are the same, i.e., when the rows of the contingency table are simply swapped, the values are equal, so the results for the negative correlation coefficient case are omitted. The 4 × 4 probability tables, formed by using three cutpoints for each variable at z_0.25, z_0.50, z_0.75 from a bivariate normal distribution with the conditions μ_1 = μ_2 = 0, σ^2_1 = σ^2_2 = 1, and ρ increasing by 0.2 from 0 to 1. 1 (1) (2) (3) (4) Total (1) (2) (3) (4) Total 2cρ = 1.0 2cρ = 0.4 (1) 0.2500 0.0000 0.0000 0.0000 0.2500 (1) 0.1072 0.0692 0.0477 0.0258 0.2500 (2) 0.0000 0.2500 0.0000 0.0000 0.2500 (2) 0.0692 0.0698 0.0632 0.0477 0.2500 (3) 0.0000 0.0000 0.2500 0.0000 0.2500 (3) 0.0477 0.0632 0.0698 0.0692 0.2500 (4) 0.0000 0.0000 0.0000 0.2500 0.2500 (4) 0.0258 0.0477 0.0692 0.1072 0.2500 Total 0.2500 0.2500 0.2500 0.2500 1.0000 Total 0.2500 0.2500 0.2500 0.2500 1.0000 2cρ = 0.8 2cρ = 0.2 (1) 0.1691 0.0629 0.0164 0.0016 0.2500 (1) 0.0837 0.0668 0.0563 0.0432 0.2500 (2) 0.0629 0.1027 0.0680 0.0164 0.2500 (2) 0.0668 0.0648 0.0621 0.0563 0.2500 (3) 0.0164 0.0680 0.1027 0.0629 0.2500 (3) 0.0563 0.0621 0.0648 0.0668 0.2500 (4) 0.0016 0.0164 0.0629 0.1691 0.2500 (4) 0.0432 0.0563 0.0668 0.0837 0.2500 Total 0.2500 0.2500 0.2500 0.2500 1.0000 Total 0.2500 0.2500 0.2500 0.2500 1.0000 2cρ = 0.6 2cρ = 0 (1) 0.1345 0.0691 0.0353 0.0111 0.2500 (1) 0.0625 0.0625 0.0625 0.0625 0.2500 (2) 0.0691 0.0797 0.0659 0.0353 0.2500 (2) 0.0625 0.0625 0.0625 0.0625 0.2500 (3) 0.0353 0.0659 0.0797 0.0691 0.2500 (3) 0.0625 0.0625 0.0625 0.0625 0.2500 (4) 0.0111 0.0353 0.0691 0.1345 0.2500 (4) 0.0625 0.0625 0.0625 0.0625 0.2500 Total 0.2500 0.2500 0.2500 0.2500 1.0000 Total 0.2500 0.2500 0.2500 0.2500 1.0000 Table <ref> shows the value of Φ_f^(λ) and Φ_Gf^(λ) for each value of ρ, respectively. We observe that the values of Φ_f^(λ) and Φ_Gf^(λ) increase as the absolute value of the ρ increases. Besides, ρ = 0 if and only if the measures show that it is independent of the table, and ρ = 1.0 if and only if the measures confirm that there is a structure of all (or partially) complete association. 
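For reproducibility, the probability tables of this subsection can be regenerated directly from the bivariate normal distribution: each cell is a rectangle probability between consecutive quartile cutpoints. A sketch for |ρ| < 1 (the degenerate ρ = 1 table is diagonal by construction):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def quartile_table(rho, bound=10.0):
    """4x4 cell probabilities of a standard bivariate normal with correlation rho,
    discretized at the cutpoints z_0.25, z_0.50, z_0.75. The outer limits are
    truncated at +/- bound, where the remaining probability mass is negligible."""
    cuts = np.concatenate(([-bound], norm.ppf([0.25, 0.50, 0.75]), [bound]))
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    F = lambda u, v: mvn.cdf([u, v])
    P = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            a1, b1, a2, b2 = cuts[i], cuts[i + 1], cuts[j], cuts[j + 1]
            P[i, j] = F(b1, b2) - F(a1, b2) - F(b1, a2) + F(a1, a2)
    return P

P = quartile_table(0.4)
print(np.round(P, 4))                        # matches the rho = 0.4 block up to rounding
print("row sums:", np.round(P.sum(axis=1), 4))
```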
Also, if there is a relationship only to the entire contingency table, the values of Φ_Gf^(λ) are found to be larger than the values of Φ_f^(λ) by Theorem <ref>, but the differences are small. §.§ Actual data 1 Consider the case where the PRV measure is adapted to the data in Table <ref>, a survey of cannabis use among students conducted at the University of Ioannina (Greece) in 1995 and published in <cit.>. The students’ frequency of alcohol consumption is measured on a four-level scale ranging from at most once per month up to more frequently than twice per week while their trial of cannabis through a three-level variable (never tried–tried once or twice–more often). We can see the partial bias of the frequency for the first and second rows in the data. The estimates of Φ_f^(λ) and Φ_Gf^(λ) are provided in Table <ref>a and Table <ref>b, respectively. For instance, when λ=1, the measure Φ_f^(1)=0.1034 for Table <ref>a, and Φ_Gf^(1)=0.2992 for Table <ref>b. Φ_f^(1) shows that the average condition variation of trying cannabis is 10.34% smaller than the marginal variation, and similarly Φ_Gf^(1) shows that the average condition variation of trying cannabis is 29.92% smaller. Based on the results of these values, the following can be interpreted from Table <ref>: (1) There is a strong association overall between alcohol consumption and cannabis use experience associated. (2) There are fairly strong associations between some alcohol consumption and cannabis use experience. These interpretations seem to be intuitive when looking at Table <ref>. However, by analyzing using the measures, we have been able to present an objective interpretation numerically and to show how strongly associated structures are in the contingency table. §.§ Actual data 2 By analyzing multiple contingency tables using the measures, it is possible to numerically determine how much difference there are between the associations of the contingency tables. Therefore, consider the data in Table <ref> are taken from <cit.>. These data describe the cross-classifications of the father's and son's occupational status categories in Japan which were examined in 1975 and 1985. In addition, we can consider the father's states as an explanatory variable and the son's states as an response variable, since the father's occupational status categories seem to have an influence on the son's. The analysis of Table <ref> aims to show what differences there are in the associations of occupational status categories for fathers and sons in 1975 and 1985. Table <ref> and Table <ref> give the estimates of Φ_g^(ω) and Φ_Gg^(ω), respectively. Comparing the estimates for each ω in Table <ref> and Table <ref>, we can see that the values for both measures are almost the same. In addition, comparing Table <ref>a and Table <ref>b, the estimate is slightly larger in Table <ref>b, so it can be assumed that Table <ref>b is more related, but there is little difference because all the confidence intervals are covered. When we also compare <ref>a and Table <ref>b, we can see that <ref>b is larger because the estimate is slightly larger in <ref>b. However, we can see that the confidence interval does not cover at ω = 0.9. From the results of these values, the following can be interpreted for Table <ref>a and Table <ref>b: (1) The occupational status categories of fathers and sons in 1975 and 1985 both have weak associations overall, further indicating that individual explanatory variables do not have remarkably associations. 
(2) Although the association of Table <ref>b is slightly larger than Table <ref>a, the results of the confidence intervals indicate that there is no statistical difference. (3) The partial association in Table <ref>b is slightly larger than Table <ref>a, and the results of confidence intervals indicate that there may be a statistical difference. When there are statistical differences from the results of some confidence intervals, as in (3), it is affected by differences in the characteristics of variation associated with changing the tuning parameters. In this case, it is difficult to give an interpretation by referring to variation because there was no difference in the variation in the special cases (e.g., ω= 0). However, when there are differences in variation in special cases, further interpretation can be given by focusing on the characteristics. § CONCLUSION In this paper, we proposed a geometric mean type of PRV (geoPRV) measure that uses variation composed of geometric mean and arbitrary functions that satisfy certain conditions. We showed that the proposed measure has the following three properties that are suitable for examining the degree of association, which satisfies the conventional measures: (i) The measure increases monotonically as the degree of association increases; (ii) The value is 0 when there is a structure of null association, and (iii) The value is 1 when there is a complete structure of association. Furthermore, by using geometric means, the geoPRV measure can capture the association to the response variables for individual explanatory variables that could not be investigated by the existing PRV measures. Analyses using the existing PRV measures and the geoPRV measure simultaneously will be able to examine the association of the entire contingency table and the partial association. Also, the geoPRV measure can be analyzed using variations with various characteristics by providing functions and tuning parameters that satisfy the conditions, such as the measure Φ_f. Therefore, analysis using the geoPRV measure together can lead to a deeper understanding of the data and provide further interpretation. While various measures of contingency tables have been proposed, there have been several studies in recent years that have conducted analyses using the Goodman-Kraskal's PRV measure (e.g. ). We believe that the new PRV measure in this paper, when examined and compared together with the existing Goodman-Kraskal's PRV measure, may provide a new perspective that pays attention to the association of individual explanatory variables, including the association of the entire contingency table. § DECLARATIONS Funding This work was supported by JSPS Grant-in-Aid for Scientific Research (C) Number JP20K03756. Conflict of interest Not applicable. Ethics approval Not applicable. Consent to participate Not applicable. Consent for publication All authors have read and agreed to the published version of the manuscript. Availability of data and materials Not applicable. Code availability Not applicable. Authors' contributions These authors contributed equally to this work. § PROOF OF THEOREM <REF> * Let ϕ denote a numerator of a fraction Φ_Gf-Φ_f = ∑_i=1^R p_i· V(Y | X=i)-∏_i=1^R[ V(Y | X=i) ]^p_i·/-∑_j=1^Cf(p_· j). If there exists i such that V(Y | X=i)=0, ϕ≥0 is easily verified, i.e., Φ_f≤Φ_Gf is established. Moreover, consider cases other than this one. Assume that f(x)=-log x which is convex function since f”(x)=1/x^2>0 where f”(x) is the second derivative of function f(x) by x. 
From Jensen's inequality, ∑_i=1^R p_i·[-log V(Y | X=i)]≥-log[ ∑_i=1^R p_i·V(Y | X=i) ] ⟺ ∑_i=1^R log[ V(Y | X=i) ]^p_i·≤log[ ∑_i=1^R p_i·V(Y | X=i) ] ⟺ log∏_i=1^R [ V(Y | X=i) ]^p_i·≤log[ ∑_i=1^R p_i·V(Y | X=i) ] ⟺ ∏_i=1^R [ V(Y | X=i) ]^p_i·≤[ ∑_i=1^R p_i·V(Y | X=i) ] where p_i·≥0, ∑_i=1^R p_i·=1. Therefore, ϕ≥0, i.e., Φ_f≤Φ_Gf holds. * The inequality 0≤Φ_f≤1 is already proven by Momozaki et al., and Φ_f≤Φ_Gf holds as proved above. Hence, Φ_Gf≥0 holds since 0≤Φ_f≤Φ_Gf. In addition, since ∏_i=1^R [ V(Y | X=i) ]^p_i·≥0, we obtain Φ_G≤1. Thus, 0≤Φ_Gf≤1 holds. * Since 0≤Φ_f≤Φ_Gf, if Φ_Gf=0 then Φ_f=0. Hence, since p_ij=p_i·p_· j holds for Φ_f=0 (Momozaki et al.), p_ij=p_i·p_· j holds for Φ_Gf=0. Thus, Φ_Gf=0⟹ p_ij=p_i·p_· j holds. Moreover, Φ_Gf=0⟸ p_ij=p_i·p_· j can be easily checked. * If Φ_Gf=1 then ∏_i=1^R [ V(Y | X=i) ]^p_i·=0, i.e., for some s, V(Y | X=s)=-∑_j=1^Cf( p_ij/p_i·)=0 (s=1,2,…,R). Thus, there exists i such that p_ij≠0 and p_ik=0 (k≠ j). § PROOF OF THEOREM <REF> Since the first terms in the denominator and numerator of Φ_Gf do not depend on the row category, we focus on the second term in the numerator. This term is ∏_i=1^R [ - ∑ _j=1 ^C f ( p_ij/p_i·) ]^p_i· = ∏_i=1^R [ -f (p_i1/p_i·) - ⋯ - f (p_iC/p_i·) ]^p_i·, and the values are invariant to the reordering of the sums. Namely, the value of Φ_Gf is invariant with respect to the permutation of row categories. Similarly, the value of Φ_Gf is also invariant with respect to the permutation of column categories. § PROOF OF THEOREM <REF> Let n=(n_11,n_12,…,n_1C,n_21,…,n_RC)^⊤, p=(p_11,p_12,…,p_1C,p_21,…,p_RC)^⊤, p=n/n, and a^⊤ is a transpose of a. Then √(n)( p - p) converges in distribution to a normal distribution with mean zero and the covariance matrix (p) - pp^⊤, where (p) is a diagonal matrix with the elements of p on the main diagonal (). The Taylor expansion of the function Φ_Gf around p is given by Φ_Gf = Φ_Gf + ( Φ_Gf/p^⊤) (p-p) + o_p(n^-1/2). Therefore, since √(n)(Φ_Gf-Φ_Gf) = √(n)( Φ_Gf/p^⊤) (p-p) + o_p(1), √(n)(Φ_Gf-Φ_Gf) d→ N(0,σ^2[Φ_Gf]), where σ^2[Φ_Gf] = (δ^(f))^2 [ ∑_i=1^R∑_j=1^Cp_ij(Δ_ij^(f))^2 - (∑_i=1^R∑_j=1^C p_ijΔ_ij^(f))^2 ], with δ^(f) = ∏_s=1^R [ -∑_t=1^C f ( p_st/p_s·) ]^p_s·/( ∑_t=1^C f(p_· t) )^2 Δ_ij^(f) = f'(p_· j) -ε_ij^(f)∑_t=1^C f(p_· t), ε_ij^(f) = log[ -∑_t=1^C f ( p_it/p_i·) ] + ∑_t=1^C { -p_it/p_i· f' ( p_it/p_i·) } + f' ( p_ij/p_i·)/∑_t=1^C f' ( p_it/p_i·), and f'(x) is the derivative of function f(x) by x. apalike
http://arxiv.org/abs/2307.00437v1
20230701225031
Passive and active field theories for disease spreading
[ "Michael te Vrugt", "Julian Jeggle", "Raphael Wittkowski" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech", "physics.soc-ph", "q-bio.PE" ]
Institut für Theoretische Physik, Center for Soft Nanoscience, Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany Institut für Theoretische Physik, Center for Soft Nanoscience, Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany [Corresponding author: ][email protected] Institut für Theoretische Physik, Center for Soft Nanoscience, Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany The worldwide COVID-19 pandemic has led to a significant growth of interest in the development of mathematical models that allow to describe effects such as social distancing measures, the development of vaccines, and mutations. Several of these models are based on concepts from soft matter theory. Considerably less well investigated is the reverse direction, i.e., how results from epidemiological research can be of interest for the physics of colloids and polymers. In this work, we consider the SIR-DDFT model, a combination of the susceptible-infected-recovered (SIR) model from epidemiology with dynamical density functional theory (DDFT) from nonequilibrium soft matter physics, which allows for an explicit modeling of social distancing. We extend the SIR-DDFT model both from an epidemiological perspective by incorporating vaccines, asymptomaticity, reinfections, and mutations, and from a soft matter perspective by incorporating noise and self-propulsion and by deriving a phase field crystal (PFC) model that allows for a simplified description. On this basis, we investigate via computer simulations how epidemiological models are affected by the presence of non-reciprocal interactions. This is done in a numerical study of a zombie outbreak. Passive and active field theories for disease spreading Raphael Wittkowski August 1, 2023 ======================================================= § INTRODUCTION The worldwide outbreak of the coronavirus disease 2019 (COVID-19), caused by the coronavirus SARS-CoV-2 <cit.>, has inspired an enormous amount of research work on the spread of infectious diseases in the past years <cit.>. A large portion of this research has focused on modeling the effects of various forms of interventions – both nonpharmaceutical interventions such as contact restrictions <cit.> and pharmaceutical interventions such as vaccination <cit.> – on the spread of the pandemic in order to develop optimal containment strategies. While the spread of COVID-19 is now mostly under control due to the successful development of vaccines <cit.>, research on modeling infectious diseases continues to be important for at least two reasons. First, the outbreak of further diseases (or mutations of older ones) is mostly a matter of time <cit.>. Second, – this aspect will be a focus of this article – work on epidemic spreading has been fruitful also for other fields of research, such as soft matter physics <cit.>. Epidemiological models range from compartmental models, such as the famous susceptible-infected-recovered (SIR) model <cit.>, which are very simple, to highly complex individual-based models <cit.>, which allow to model an epidemic outbreak in a lot of detail. The SIR-DDFT model <cit.>, which is a combination of the SIR model with dynamical density functional theory (DDFT) <cit.>, allows to combine the advantages of compartmental and individual-based models by allowing to model effects of social distancing explicitly within a simple (compared to individual-based models) coarse-grained field theory. 
Various extensions of the SIR-DDFT model have been developed or at least suggested, such as incorporating governmental interventions <cit.>, hydrodynamic interactions <cit.>, or determining model coefficients from Wi-Fi data <cit.>. A numerical implementation is provided in Ref. <cit.>, a brief review in Ref. <cit.>. The SIR-DDFT model is a paradigmatic example of a reaction-diffusion DDFT (RDDFT) <cit.>, which allows to simultaneously describe diffusion, chemical reactions, and particle interactions. In the past years, RDDFT has been widely used also outside of epidemiology <cit.>, in particular in active matter physics <cit.>. Moreover, it has become an important tool in chemical engineering, where it has been applied to electrodes <cit.>, metal corrosion <cit.>, oxidation <cit.>, and reactions on catalytic substrates <cit.>. Related ideas are used in biophysical applications of DDFT <cit.>. This suggests that a further development of the SIR-DDFT model should also be directed towards active matter in order to give it further significance also beyond the study of disease spreading, and that a deeper understanding of the SIR-DDFT model will have implications that go way beyond epidemiology. In our first article on this topic <cit.>, we have introduced the SIR-DDFT model and demonstrated its ability to model effects of social distancing. Our second article <cit.> has focused on epidemiological applications by considering effects of governmental interventions on the occurrence of multiple epidemic waves. The present third article complements the first two by considering not only questions of epidemiological interest, but also the importance of the SIR-DDFT model (and related theories) for physical research, in particular regarding active matter. We focus, in particular, on non-reciprocal interactions, which have attracted enormous attention among soft matter physicists in the past years <cit.>. So far, they have not been incorporated into DDFT. Here, we develop a variant of the SIR-DDFT model (SZ-DDFT model) that describes a zombie outbreak, a scenario which is governed by non-reciprocal interactions. This article is structured as follows: In <ref>, we introduce the SIR-DDFT model. Section <ref> presents extensions of the SIR-DDFT model that are motivated by epidemiological considerations. Extensions based on soft matter physics are presented in <ref>. In <ref>, we show the results of numerical simulations. We conclude in <ref>. § THE SIR-DDFT MODEL The starting point of our considerations is the very well known SIR model developed in Ref. <cit.> (based on earlier work <cit.>) and reviewed in Refs. <cit.>, which describes the time evolution of the total number of susceptible (S̅), infected (I̅), and recovered (R̅) persons. Susceptibles get infected at a rate c_effI̅ with the effective contact rate c_eff, and infected people recover at a rate w and die at a rate m (the SIR model with this extension is also referred to as SIRD model <cit.>). These considerations lead to the dynamic equations Ṡ̅̇ = - c_effS̅I̅, İ̅̇ = c_effS̅I̅ - wI̅ - m I̅, Ṙ̅̇ = wI̅. The SIR model given by these equations describes only the total number of persons in the respective compartments, but not their spatial distribution. This can be achieved by modeling not the total numbers, but the spatial densities S, I, and R of susceptible, infected, and recovered persons. 
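As a point of reference before space is resolved, the compartmental equations above can be integrated directly; the parameter values in the sketch below are illustrative and not taken from the literature.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sird_rhs(t, y, c_eff, w, m):
    # SIR model with an additional death rate m, as in the equations above
    S, I, R = y
    return [-c_eff * S * I,
            c_eff * S * I - (w + m) * I,
            w * I]

c_eff, w, m = 0.5, 0.1, 0.005        # illustrative rates
y0 = [0.99, 0.01, 0.0]               # normalized initial population fractions
sol = solve_ivp(sird_rhs, (0.0, 120.0), y0, args=(c_eff, w, m),
                dense_output=True, rtol=1e-8)

for ti in np.linspace(0.0, 120.0, 7):
    S, I, R = sol.sol(ti)
    print(f"t = {ti:5.1f}:  S = {S:.3f}  I = {I:.3f}  R = {R:.3f}")
```

This well-mixed description carries no spatial information; the spatially resolved densities S, I, and R introduced above are the quantities evolved in the field-theoretic formulation that follows.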
These are often assumed to obey the reaction-diffusion equations <cit.> ∂_tS = D_S ^2 S - cSI, ∂_tI = D_I ^2 I +cSI - wI -mI, ∂_tR = D_R ^2 R + wI, where D_S, D_I, and D_R are the diffusion constants for susceptible, infected, and recovered persons and c is the infection rate. This model is very useful for, e.g., describing the spread of animal diseases <cit.>. If, however, we wish to describe the spread of diseases such as COVID-19 in a human society, the assumption implicitly underlying such reaction-diffusion equations, namely that all particles (persons) diffuse freely without affecting each other, is unrealistic due to the importance of social distancing, which can be thought of as a repulsive interaction. To tackle this problem, one can replace the diffusion terms in <ref> by DDFT terms, making use of the fact that DDFT is a generalization of the ordinary diffusion equation to interacting systems. This then gives an RDDFT model capable of describing epidemic spreading <cit.>. DDFT, developed in Refs. <cit.> and reviewed in Ref. <cit.>, is a dynamical theory for the one-body density ρ of a fluid. Its central governing equation is given by ∂_t ρ(r⃗,t) = Γ·(ρ(r⃗,t)Fρ(r⃗,t)) with time t, position r⃗, mobility Γ, and free energy functional F. This functional is given by F= F_id + F_exc+ F_ext with the ideal gas free energy F_id = k_B T ^drρ(r⃗)(ln(Λ^3ρ(r⃗))-1) with the number of spatial dimensions d, the Boltzmann constant k_B, the temperature T, and the irrelevant thermal de Broglie wavelength Λ, the external contribution F_ext([ρ]) = ^drρ(r⃗)U_1(r⃗) with the external potential U_1, and the unknown excess free energy F_exc that incorporates particle interactions. In the present context, repulsive interactions represent the effects of social distancing and isolation. One has to distinguish here between general social distancing, which corresponds to all persons keeping a certain distance from each other, and self-isolation, which corresponds to the fact that persons who know that they are infected will put much more effort into staying away from other persons, generally by staying at home. On the level of the free energy, this is represented by including a contribution for social distancing F_sd and a contribution for self-isolation F_si. The excess free energy then reads F_exc = F_sd+F_si with F_sd = -^dr^dr' C_sde^-σ_sd(r⃗-r⃗')^2 (1/2S(r⃗,t)S(r⃗',t)+S(r⃗,t)R(r⃗',t)+1/2R(r⃗,t)R(r⃗',t)), F_si = - ^dr^dr' C_sie^-σ_si(r⃗-r⃗')^2I (1/2I(r⃗',t) + S(r⃗',t)+R(r⃗',t)), where C_sd and C_si set the interaction strength and σ_sd and σ_si the interaction range. Thereby, we have assumed the interactions to take the form of a Gaussian soft-core repulsion, for which the excess free energy is well approximated by a mean-field approximation <cit.>. Starting from <ref> and replacing the diffusion terms by the right-hand side of <ref> gives the SIR-DDFT model ∂_tS = Γ_S·( SFS) - cSI, ∂_tI = Γ_I·( I FI) +cSI - wI -mI, ∂_tR =Γ_R·( R FR) + wI. § EXTENSIONS MOTIVATED BY EPIDEMIOLOGY Since its initial development <cit.>, a huge number of extensions of the SIR model have been developed. Most of them can be incorporated pretty directly into the SIR-DDFT model. §.§ Governmental interventions To make the list of epidemiological extensions complete, we start this section by showing an extension that was already derived in Ref. <cit.>. 
The aim here is to incorporate the fact that nonpharmaceutical interventions such as contact restrictions are not present in the same intensity at all times, but are imposed and lifted depending on the current infection numbers. In the SIR-DDFT model, this means that the interactions are time-dependent. For this purpose, we add (based on the rectangular hysteresis model by <cit.>) dynamic equations for the coefficients C_sd and C_si that are given by Ċ_i(t) = α (C_i,0 - C_i (t)) if (I̅(τ) < I̅_start∀τ∈ [0,t]) or (∃ t_1 ∈ [0,t] such that I̅(t_1) ≤I̅_stop and I̅(τ) < I̅_start∀τ∈ (t_1,t]), α (C_i,1 - C_i (t)) if ∃ t_1 ∈ [0,t] such that I̅(t_1) ≥I̅_start and I̅(τ) > I̅_stop∀τ∈ (t_1,t] with i=sd, si. Here, C_i,1 and C_i,0 are the values that the parameter C_i approaches in the presence and absence of a shutdown, respectively, and α measures the rate at which these values are approached. The physical idea is that the start and end of a shutdown is triggered if the number of infected persons I̅ passes threshold values I̅_start and I̅_stop, respectively. §.§ Vaccination The rapid development of safe and effective vaccines <cit.> has had a tremendous impact on containing the spread of the COVID-19 pandemic. During the early stages of the vaccination campaign, the supply of vaccines has been very limited. In such contexts, it is important to develop strategies for distributing them in order to use the available vaccines efficiently <cit.>. In the SIR-DDFT model, vaccination can be incorporated in two basic ways. The first and simpler one is to assume that vaccination has the same effect as recovery, such that vaccination simply transfers persons from the susceptible to the recovered compartment <cit.>. The second one is to introduce a fourth compartment that contains vaccinated persons <cit.>. In a theory such as the SIR-DDFT model that is based on partial rather than ordinary differential equations, it is desirable to keep the number of compartments small in order to reduce the numerical cost. Therefore, we use the first variant here. Hence, we assume that vaccination essentially transfers a person directly from the S- to the R-compartment since vaccinated persons (just like recovered ones) are immune. Let v be the vaccination rate, which can depend on space and time (e.g., because vaccines are first introduced in a certain region or because vaccine skepticism is more prevalent in some regions than in others). In this case, the SIR-DDFT model with vaccination reads ∂_tS = Γ_S·( SFS) - cSI - vS, ∂_tI = Γ_I·( I FI) +cSI - wI -mI, ∂_tR =Γ_R·( R FR) + wI +vS. If (as it is the case for COVID-19) the vaccine does not lead to full immunity, this model can be combined with an extension allowing for reinfections (see <ref>). §.§ Exposed and asymptomatic persons The standard SIR-DDFT model assumes that susceptibles immediately become infected upon contact with an infected person, and that all infected persons then exhibit the same repulsive interaction. This assumption is unphysical for two important reasons: * The disease has a certain incubation period, i.e., one does not immediately become infectious after having caught the disease. * While some people become severely ill, others have mild or no symptoms (but can still infect others). The existence of asymptomatic individuals that can infect others and the fact that people can be infectious already before developing symptoms has been of enormous importance for the spread of COVID-19. 
Consequently, introducing exposed and asymptomatic compartments is a relatively natural extension of the SIR model <cit.>. In the context of interaction modeling in the SIR-DDFT model, this distinction is important because only symptomatic individuals will self-isolate. Asymptomatic infected persons, on the other hand, will simply interact with others in exactly the same way as susceptible ones. Physically speaking, the interaction potential cannot distinguish between susceptible and asymptomatic infected persons. Following <cit.>, we therefore add two fields to the SIR-DDFT model. First, E is the density of exposed persons. Second, A is the density of asymptomatic infected persons. Upon contact with an infected person, a susceptible person first becomes exposed at a rate c. It then becomes asymptomatic (following the observation that, for COVID-19, persons are infectious before they are symptomatic) at a rate k_1. Then, it either stays asymptomatic until it recovers or it becomes symptomatic. This is incorporated by transitions from the asymptomatic to the infectious compartment, taking place at a rate k_2, and transitions from the asymptomatic to the recovered compartment, taking place at a rate w_1. Together, these assumptions give the dynamic equations ∂_tS = Γ_S·( SFS) - c S(A+I), ∂_t E = Γ_I·( I FI) +cS(A+I) - k_1 E, ∂_t A = Γ_I·( I FI) + k_1 E - (w_1+k_2) A, ∂_tI = Γ_I·( I FI)+ k_2 A - (w+m) I, ∂_tR =Γ_R·( R FR) + w_1 A + w I. §.§ Reinfections The SIR-DDFT model was developed in 2020, when little was known about the likelihood of a re-infection after a person has recovered from COVID-19. It assumes, as done in the simple SIR model, that a person that has recovered from the disease is completely immune. Today, it is known that reinfections with COVID-19 are possible <cit.>. In SIR-type models, reinfections can be incorporated in different ways depending on the properties of the disease. For example, the immunity acquired after recovery may be only partial, or it may decrease over time <cit.>. What an extension of the SIR-DDFT model incorporating reinfections has to look like thus depends on the disease in question. We consider here the case of COVID-19, for which the current knowledge about reinfection probability is reviewed in Refs. <cit.>. The probability of reinfection is low for the wild type and for Alpha, Beta, and Delta variants. For the Omicron variant, it is higher, but a previous infection still provides a reasonable protection against severe disease <cit.>. Here, we simply assume that there is a constant reinfection rate c_R by which recovered persons can get infected when encountering infected individuals. This gives ∂_tS = Γ_S·( SFS) - cSI, ∂_tI = Γ_I·( I FI) +cSI + c_R R I - wI -mI, ∂_tR =Γ_R·( R FR) + wI -c_R R I. We do not incorporate here the fact that reinfections appear to be milder <cit.>, which would imply that m is different for the first and the second infection. The reason is that doing so would make the model significantly more complicated while providing little additional epidemiologically relevant information. §.§ Mutations A particular central factor for the occurrence of re-infections are mutations, in particular so-called immune escape variants such as the Omicron variant <cit.>. The emergence of new variants has had a major impact on the course of the COVID-19 pandemic, which has motivated also the development of models specifically aiming to incorporate mutations <cit.>. 
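Before discussing mutations in more detail, we note that the reaction terms of the model with exposed and asymptomatic compartments can be tested in isolation by dropping the spatial (DDFT) terms, i.e., by considering a spatially homogeneous population. A minimal sketch in Python, with purely illustrative rate constants, reads as follows; only the symbols c, k_1, k_2, w_1, w, and m correspond to the model above, their numerical values here are placeholders.

import numpy as np
from scipy.integrate import solve_ivp

c, k1, k2, w1, w, m = 1.0, 0.3, 0.2, 0.1, 0.1, 0.01   # illustrative rates

def seair_rhs(t, y):
    S, E, A, I, R = y
    infection = c * S * (A + I)
    return [-infection,
            infection - k1 * E,
            k1 * E - (w1 + k2) * A,
            k2 * A - (w + m) * I,
            w1 * A + w * I]

y0  = [0.999, 0.0, 0.0, 0.001, 0.0]     # small initial infected seed
sol = solve_ivp(seair_rhs, (0.0, 300.0), y0, rtol=1e-8, atol=1e-10)
print("final recovered fraction:", sol.y[4, -1])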
In the SIR-DDFT model, we can incorporate a mutation by adding a field I_M that describes the density of persons infected with a mutation, as well as a field R_M for persons that have recovered from an infection with the mutation. (A similar approach was used in Ref. <cit.> for a normal SIR model.) We moreover make the following assumptions: * Susceptibles are infected with a rate c when encountering someone infected with the wild type and with a rate c_M when encountering someone infected with a mutation. * Recovered persons are immune against the variant they have recovered from. Someone who has been infected with the mutation is additionally immune against the wild type, but not vice versa. Therefore, recovered persons can be infected by the mutation at a rate c_MR. One cannot be infected with both variants at the same time. * Persons infected with the wild type recover at a rate w and die at a rate m. Persons infected with the mutation recover at a rate w_M and die at a rate m_M. With these assumptions, the SIR-DDFT model with mutations reads ∂_tS = Γ_S·( SFS) - cSI - c_M S I_M, ∂_tI = Γ_I·( I FI) +cSI - wI -mI, ∂_tI_M = Γ_I_M·( I_Mδ F/δ I_M) +c_M S I_M + c_MR R I_M - w_M I_M -m_M I_M, ∂_tR =Γ_R·( R FR) + wI - c_MR R I_M, ∂_tR_M =Γ_R_M·( R_Mδ F/δ R_M) + wI +w_M I_M. Here, Γ_I_M and Γ_R_M are the mobilities of persons that are infected with the mutation or have recovered from it, respectively. § EXTENSIONS MOTIVATED BY SOFT MATTER PHYSICS §.§ Noise The SIR-DDFT model is a deterministic theory that originates from combining the deterministic SIR model with deterministic DDFT. Interestingly, both theories also exist in stochastic variants (see Refs. <cit.> for reviews of stochastic SIR models and Ref. <cit.> for a review of stochastic DDFT), such that it is natural to ask how stochasticity can be incorporated into the SIR-DDFT model. In fact, the incorporation of noise has already been suggested in Ref. <cit.>, although it has not been done explicitly. The early days of DDFT saw a coexistence between deterministic <cit.> and stochastic <cit.> variants, and it was a matter of debate which form is the correct one. Nowadays, following work by <cit.>, it is generally understood that deterministic DDFT describes the ensemble-averaged density (which in this context is the average of the density over all realizations of the noise in the Langevin equations that describe the motion of individual particles), whereas stochastic DDFT describes either the microscopic density operator or a spatially coarse-grained density. From this point of view, it makes sense that the SIR-DDFT model, which was introduced as a theory for the ensemble-averaged density <cit.>, contains no noise terms as these are averaged over. Nevertheless, in a real experiment (or pandemic), one observes not an ensemble average of the density of all possible time evolutions, but rather a spatial average of the density <cit.>. Thus, from a practical point of view, it is desirable to take noise terms into account. This can be done in a straightforward way by simply replacing the deterministic DDFT terms in <ref> by the ones known from stochastic DDFT <cit.>. The result is ∂_tS = Γ_S·( SFS) + ·(√(2Γ_S k_B T_S S)η⃗_S) - cSI, ∂_tI = Γ_I·( I FI) + ·(√(2Γ_I k_B T_I I)η⃗_I) +cSI - wI -mI, ∂_tR =Γ_R·( R FR) + ·(√(2Γ_R k_B T_R R)η⃗_R)+ wI with the temperatures T_i, where the noise η⃗_i has the properties ⟨η⃗_i| ⟩=0⃗, ⟨η⃗_i⊗η⃗_j(r⃗',t')| ⟩=δ_ijδ(r⃗-r⃗')δ(t-t'). 
Here, ⟨·|$⟩ denotes an ensemble average,⊗is a dyadic product,is the unit matrix, andδis the Dirac delta distribution. The stochastic SIR-DDFT model given by <ref> should be understood as a description of the actual (possibly spatially averaged) densities, whereas the deterministic model usually considered gives the ensemble-averaged densities. Note that integrating <ref> over space still gives the deterministic SIR model. Mathematically, this is due to the fact that the noise terms are still written as the divergence of a conserved current (the densities are conserved in the absence of infection dynamics). Physically, this is due to the fact that we are considering here the stochasticity of the spatial motion, which is invisible on the level of the spatially averaged SIR model. Consequently, <ref> can be thought of as a combination of stochastic DDFT with the deterministic SIR model. Nevertheless, since the spatial distribution of the persons determines the effective infection ratec_eff<cit.>, fluctuations of the density fields will in practice also induce fluctuations on the SIR level. Combinations of the stochastic SIR model with deterministic or stochastic DDFT represent further possible extensions. §.§ Active matter Active matter is characterized by a continuous inflow of energy at a local level, typically with the consequence that the particles exhibit directed motion <cit.>. While active particles can also be realized artificially, for example by ultrasound-generated propulsion <cit.>, the most generic example for active particles are biological organisms such as swimming bacteria or flying birds. Biological organisms, of course, are also where infectious diseases are spreading, and the fact that systems of biological organisms constitute active matter might have important influences on the dynamics of a pandemic. Consequently, it is not surprising that a lot of research has been devoted to studying the connection between disease spreading and active matter (see, for example, Refs. <cit.>). The SIR-DDFT model – just like the reaction-diffusion SIR model it is derived from – assumes the motion of humans to be essentially described by the motion of passive Brownian particles. This can be true at most approximately since human motion arises not from being kicked around by a thermal fluid, but from the conversion of internal energy into directed motion. In other words, humans are active particles. The dynamics of active particles can differ in many interesting ways from that of passive ones, and this difference may be of importance for the spread of diseases. Likewise, the dynamics of chemical reactions in a system with steric interactions (described by RDDFT) might be affected by the fact that some reactants are transported by active processes. Consequently, an extension of the SIR-DDFT model to the active case is desirable. Theoretical models of (overdamped) active particles typically take them to be described not just by their positionr⃗, but also by their orientation vectorthat in two spatial dimensions can be parametrized by an angleϕ. The orientation vector describes the direction of self-propulsion. For an active DDFT, this implies that the density depends not only onr⃗, but also onϕ. The governing equation gets additional terms describing rotational diffusion (change of the particle orientation) and self-propulsion, respectively. An active DDFT was first derived by <cit.>. Later work extended this theory to particles with arbitrary shape <cit.> or microswimmers <cit.>. 
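Before writing down the active equations, we briefly return to the conserved noise term ∇·(√(2Γ k_B T ρ) η⃗) introduced above. One possible finite-volume discretisation in one spatial dimension is sketched below (our own illustration; more sophisticated schemes for stochastic DDFT exist). Spatiotemporal white noise is represented by independent Gaussians of variance 1/(Δx Δt) on the cell faces, so that the update conserves the total density.

import numpy as np

rng = np.random.default_rng(0)

def conserved_noise(rho, Gamma, kBT, dx, dt):
    """One realisation of div( sqrt(2 Gamma kB T rho) eta ) on a periodic 1D grid."""
    rho_face = 0.5 * (rho + np.roll(rho, -1))              # density at faces i+1/2
    xi = rng.standard_normal(rho.size) / np.sqrt(dx * dt)  # discretised white noise
    J = np.sqrt(np.clip(2.0 * Gamma * kBT * rho_face, 0.0, None)) * xi
    return (J - np.roll(J, 1)) / dx                        # discrete divergence of the noise flux

# In an Euler-Maruyama step, dt * conserved_noise(...) would be added to the field update.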
Assuming isotropic translational diffusion, the active DDFT equation reads <cit.> ∂_t ρ = Γ∇·(ρ∇δ F/δρ) + D_R β∂_ϕ(ρ∂_ϕδ F/δρ) - v_0∇·(ρû), where D_R is the rotational diffusion coefficient, β the rescaled inverse temperature, v_0 the self-propulsion velocity, and û(ϕ) = (cos(ϕ),sin(ϕ))^T the particle orientation. If we assume the particles to be active – in our context, if we assume that humans are not passively diffusing, but walking around with a velocity v – then we have to replace the diffusion term in the reaction-diffusion SIR model not with the passive DDFT (<ref>), but with the active DDFT (<ref>). In the active case, we also have to be a lot more careful with the reaction terms. Let us consider, as an example, the term cSI in the governing equation for I. In a passive system, we have ∂_tI(r⃗,t) = c S(r⃗,t)I(r⃗,t) + … (we write the arguments of the fields for the moment for illustration). Physically, this means that the rate of new infections at position r⃗ is proportional to the number of susceptible and infected persons at this position, since susceptibles become infected when meeting an infected person. A naive generalization to the active case would be ∂_tI(r⃗,ϕ,t) = c S(r⃗,ϕ,t)I(r⃗,ϕ,t) + …, which assumes that the structure of the reaction-diffusion model is unaffected by the presence of additional orientational degrees of freedom. This, however, is unrealistic. For the question whether a susceptible person can be infected if they are at the same position as an infected person, the orientation of the infected person should not matter (at least to a first approximation). To get an additional infected person with orientation ϕ at position r⃗, we require a susceptible person with orientation ϕ at position r⃗ and an infected person with any orientation at position r⃗. This gives ∂_tI(r⃗,ϕ,t) = c S(r⃗,ϕ,t)∫_0^2π dϕ' I(r⃗,ϕ',t) + …. A more sophisticated model could take into account that an infection is more likely if the persons are looking at each other. More generally, the infection probability c might depend on ϕ-ϕ'. This leads to ∂_tI(r⃗,ϕ,t) = ∫_0^2π dϕ' c(ϕ-ϕ')S(r⃗,ϕ,t)I(r⃗,ϕ',t) + …, which can (now dropping arguments again) be written as ∂_t I = (c ⋆_ϕ I) S with the orientational convolution ⋆_ϕ. Note that these considerations are relevant not only in an epidemiological context, but also if we wish to model actual chemical reactions in which active particles (or, more generally, particles with orientational degrees of freedom) are involved. When deriving a reaction-diffusion model for such systems, it has to be taken into account whether and how the particles' orientation affects the reactions they undergo. This further emphasizes the relevance of the present study for soft matter physics. We thus arrive at the active SIR-DDFT model, given by ∂_tS = Γ_S∇·(S∇δ F/δ S) + D_R,Sβ_S∂_ϕ(S∂_ϕδ F/δ S) - v_S∇·(Sû) - (c ⋆_ϕ I) S, ∂_tI = Γ_I∇·(I∇δ F/δ I) + D_R,Iβ_I∂_ϕ(I∂_ϕδ F/δ I) - v_I∇·(Iû) + (c ⋆_ϕ I) S - wI - mI, ∂_tR = Γ_R∇·(R∇δ F/δ R) + D_R,Rβ_R∂_ϕ(R∂_ϕδ F/δ R) - v_R∇·(Rû) + wI. Note the plus sign in front of the infection term in the equation for I: infections remove susceptibles and generate infected persons. We have furthermore allowed the rotational diffusion coefficients D_R,i, the rescaled inverse temperatures β_i, and the self-propulsion velocities v_i to be different for the different fields. This can, for example, be a consequence of ill persons walking slower than noninfected ones. §.§ Phase field crystal model The governing equations of DDFT can be quite difficult to solve in practice, in particular due to the convolution in the interaction term. Therefore, it is desirable to have available simpler models that still capture the same essential physics. This requirement is satisfied by phase field crystal (PFC) models.
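Before turning to the PFC approximation, we note that the orientational convolution c ⋆_ϕ I appearing in the active model above is cheap to evaluate numerically, since it is a circular convolution over the periodic angle ϕ. A possible FFT-based sketch (our own illustration, with a hypothetical orientation-dependent infection kernel) is:

import numpy as np

n_phi = 64
phi   = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
dphi  = 2.0 * np.pi / n_phi

# Hypothetical kernel: infections assumed more likely for head-on relative orientations
c0, c1   = 1.0, 0.5
c_kernel = c0 + c1 * 0.5 * (1.0 - np.cos(phi))

def orientational_convolution(kernel, I):
    """(c *_phi I) for a field I of shape (..., n_phi), periodic in the last axis."""
    return np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(I, axis=-1), axis=-1).real * dphi

# Example: I(r, phi) sampled on a small 2D grid with an angular dimension
I  = np.random.default_rng(1).random((32, 32, n_phi))
cI = orientational_convolution(c_kernel, I)          # same shape as I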
After their phenomenological introduction <cit.>, it has been found that they can be derived as an approximation to DDFT <cit.>. This derivation is discussed in detail in Refs. <cit.>. PFC models also allow to model mixtures <cit.> and active matter <cit.>. They are reviewed in Ref. <cit.>. In most cases, the order parameter of PFC models is given by the dimensionless deviation of the density from its mean value. In the present case, however, this would not be a convenient choice since this would make the reaction terms unnecessarily complicated. Therefore, we simply useS,I, andRas order parameter fields also for the PFC model. Starting from the SIR-DDFT model, we make three standard approximations: * We replace the expression ·ϕ (with ϕ=S,I,R) in front of δ F/δϕ by ρ̂^2 with a reference density ρ̂. This approximation is straightforward in standard DDFT where one can simply choose the average density, but is a little more tricky in RDDFT since the individual densities are not conserved and the density of, e.g., susceptibles can deviate quite a lot from any reference value one might choose and moreover depends on time. For the reference density, we therefore here use the mean population density ρ̂=N_0/A with the initial total number of persons N_0 and the domain area A. Thereby, we ensure that ρ̂ is constant. * We make a Taylor expansion for the ideal gas free energy (<ref>) around ρ̂ up to fourth order in S, I, and R. * The nonlocal excess free energy given by <ref> is made local by a gradient expansion up to fourth order. As a result, we obtain the SIR-PFC model, which is given by ∂_tS = Γ_Sρ̂^2FS - cSI, ∂_tI = Γ_Iρ̂^2FI +cSI - wI -mI, ∂_tR =Γ_Rρ̂^2FR + wI. The Taylor-expanded ideal gas free energy reads (ignoring irrelevant zeroth- and first-order terms) F_id=∑_ϕ=S,I,Rβ_ϕ^-1ρ̂^dr3ϕ^2/2ρ̂^2-ϕ^3/2ρ̂^3+ϕ^4/12ρ̂^4. Equation (<ref>) has a different form than the ideal gas free energy in a standard PFC model <cit.>. This is simply a consequence of the fact that we work with the density rather than the density deviation. Equations (<ref>) and (<ref>) simplify to F_sd = -^dr1/2(C_sd^(0)S^2 + C_sd^(2)S^2S + C_sd^(4)S^4S +2C_sd^(0)SR +2 C_sd^(2)S^2R + 2C_sd^(4)S^4R +C_sd^(0)R^2 + C_sd^(2)R^2R+C_sd^(4)R^4 R), F_si = - ^dr1/2(C_si^(0)I^2 + C_si^(2)I^2I + C_si^(4)I^4I +2C_si^(0)IS +2 C_si^(2)I^2S + 2C_si^(4)I^4S +2C_si^(0)IR +2 C_si^(2)I^2R + 2C_si^(4)I^4R) with the parameters C_i^(0) = π/σ_iC_i, C_i^(2) = π/4σ_i^2C_i, C_i^(4) = π/32σ_i^3C_i andi = sd, si. Note that <ref> hold in any spatial dimension, whereas <ref> hold only ind=2dimensions. Moreover, the free energy of the SIR-PFC model does not have the familiar Swift-Hohenberg-type <cit.> form. This is a direct consequence of the fact that we do not shift or rescale the density fields. Doing so would make the nonconserved part of the dynamics, which is not present in standard PFC models, significantly more complicated. This also indicates that Swift-Hohenberg free energies are generally less appropriate if a PFC model is used to study systems with chemical reactions, an observation that is of interest also for applications in chemical engineering or biochemistry. Chemical reaction networks are important, for instance, also in the development of intelligent materials <cit.> or in biological systems <cit.>, and knowing how to extend PFC models to interacting systems of reacting species is therefore useful also for these fields of research. 
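The mapping from the Gaussian interaction parameters to the gradient-expansion coefficients quoted above is simple enough to state as a short helper function; as noted in the text, these formulas apply in d = 2, and the numerical values in the example below are placeholders.

import numpy as np

def pfc_coefficients(C_i, sigma_i):
    """Coefficients C^(0), C^(2), C^(4) of the d=2 gradient expansion of C_i * exp(-sigma_i r^2)."""
    return (np.pi / sigma_i * C_i,
            np.pi / (4.0 * sigma_i**2) * C_i,
            np.pi / (32.0 * sigma_i**3) * C_i)

print(pfc_coefficients(C_i=-10.0, sigma_i=100.0))   # illustrative interaction parameters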
§.§ Zombie outbreak We finally consider an epidemic scenario that is somewhat different from that of virus spreading, namely a zombie outbreak. Zombies, originating from Haitian folk belief, have developed into a very common motive in popular culture such as novels, movies, or video games. Moreover, zombie outbreaks have been the subject of various mathematical modeling studies since they provide an interesting case study for epidemiological models <cit.>. See Ref. <cit.> for an overview over zombie-related research. While zombies have not been a major public health concern in the past years, they are extremely interesting in the context of this work from a physical point of view. While it can be assumed that, in a COVID-19 outbreak, infected persons try to keep a distance from noninfected ones just as they are keeping one from them (reciprocal interactions), zombies exhibit a different behavior: they actively attack non-infected humans and try to bite them, whereas humans will run away from zombies in order to avoid being killed. Thus, the interaction between humans and zombies is non-reciprocal. Non-reciprocal couplings have attracted a lot of interest in recent years <cit.>, including in the context of field theories <cit.> and predator-prey models <cit.>. Thus, studying a zombie outbreak in an SIR-DDFT model represents an interesting contribution to modern active matter physics. As a starting point, we use the susceptible-zombie-removed (SZR) model <cit.>, which is given by Ṡ̅̇ = - b_effS̅Z̅, Ż̅̇ = (b_eff-κ_eff)S̅Z̅, Ṙ̅̇ = κ_effS̅Z̅. Here,b_effstands for the effective bite parameter (rate at which zombies bite humans),κ_effis the effective kill parameter (rate at which humans kill zombies),Z̅is the total number of zombies, andR̅is the number of removed individuals, i.e., the number of killed zombies. Note that, although theR̅compartment plays a similar mathematical role as in the SIR model, the physical interpretation here is different since the only way to be removed from the zombie population is through death <cit.>. Similar as in the SIR-DDFT model, we now consider the spatial densitiesSandZ. We can ignore the removed compartment here since the corresponding field would simply describe the spatial distribution of zombie corpses. (While accumulations of zombie corpses are certainly unpleasant, we can safely assume them to be irrelevant for the overall dynamics.) By basing our considerations on the SZR model, we make two central approximations. First, we neglect the effects of suicides that susceptibles commit in order to avoid becoming a zombie. This is, as discussed in Ref. <cit.>, a good approximation for the movie Shaun of the Dead<cit.>, where no suicides happen, but more problematic for other zombie movies. Second, we ignore the effects of exposure (see <ref>) and simply assume that a bitten person immediately becomes a zombie. This can be justified by the short half life of exposed persons (about 30 minutes for Shaun of the Dead<cit.>). We make this second approximation because it is not fully clear how to accommodate their behavior within our model – depending on their character traits and on whether exposed persons are aware of their fate, they may be running away from susceptibles or from zombies, will continue to kill zombies or not, or will even be killed by susceptibles. 
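Before constructing the spatial model, we note that the well-mixed SZR dynamics <ref> can be integrated directly. The following sketch (our own illustration) uses the movie-based estimates b_eff = 0.59/h and κ_eff = 0.49/h quoted in the next section; the initial population is illustrative.

import numpy as np
from scipy.integrate import solve_ivp

b_eff, kappa_eff = 0.59, 0.49        # per hour; estimates discussed in the next section

def szr_rhs(t, y):
    S, Z, R = y
    return [-b_eff * S * Z,
            (b_eff - kappa_eff) * S * Z,
            kappa_eff * S * Z]

y0  = [29.0, 1.0, 0.0]               # illustrative initial condition: one zombie among 30 persons
sol = solve_ivp(szr_rhs, (0.0, 48.0), y0, max_step=0.1)
print("susceptibles after 48 h:", sol.y[0, -1])

Since b_eff exceeds κ_eff in this estimate, the zombies win in the well-mixed limit, which is consistent with the role of the bite and kill parameters discussed below.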
Denoting the local bite and kill parameters by b and κ, respectively, we propose the SZ-DDFT model, which is given by ∂_tS = D_S∇^2 S - Γ_S∇·(S∇(C_sz K_sz ⋆ Z)) - bSZ, ∂_tZ = D_Z∇^2 Z + Γ_Z∇·(Z∇(C_zs K_zs ⋆ S)) + (b-κ)SZ. Here, D_S = β_S Γ_S is the diffusion constant for the susceptibles, D_Z = β_Z Γ_Z is the diffusion constant for the zombies (depending on their inverse rescaled temperature β_Z and their mobility Γ_Z), C_sz is the strength of the repulsive force that the zombies exert on the susceptibles, C_zs is the strength of the attractive force the humans exert on the zombies, ⋆ denotes a spatial convolution, and K_i(r⃗) = e^-σ_ir⃗^2 with the interaction ranges σ_i and i = sz, zs are the kernels. The constants C_sz and C_zs have to be negative for physical reasons, but can be different (depending on how hungry the zombies and how scared the susceptibles are). Essentially, the SZ-DDFT model is obtained from the SIR-DDFT model by dropping the field R, changing the reaction terms, and flipping the sign of the interaction term in the dynamic equation for Z while keeping the sign in the dynamic equation for I. Thereby, the model becomes non-reciprocal. Consequently, the zombie model constitutes another active field theory for epidemic spreading. § SIMULATION OF A ZOMBIE APOCALYPSE To investigate the effect of non-reciprocal interactions in reaction-diffusion systems, we perform simulations of a zombie outbreak based on the SZ-DDFT model given by <ref>. Details on the numerical method are provided in Appendix <ref>. We employ dimensionless units (except for time, which is measured in hours). <cit.> have estimated the parameters of a zombie outbreak based on the popular zombie movie Shaun of the Dead <cit.>. They found b_eff = 0.59/h and κ_eff = 0.49/h. In our case, the total population size is smaller than assumed in Ref. <cit.> by a factor of about 10, which means that b_eff and κ_eff should also be smaller by roughly this factor.[This can be seen from <ref>. If S̅ and Z̅ are divided by a factor N, then <ref> changes to Ṡ̅ = - b_eff S̅Z̅/N. To recover the original form, we have to absorb the factor 1/N into b_eff.] We therefore use b_eff = 0.055/h and κ_eff = 0.045/h. Assuming a homogeneous population and a domain size A = 100, we can (in analogy to the argument employed for inferring the parameter c in Ref. <cit.>) then get the parameters b and κ as b = b_eff A = 5.5/h and κ = κ_eff A = 4.5/h (assuming dimensionless units for length). The mean initial population density is ρ̂ = 0.25, giving N_0 = ρ̂A = 25. Our goal is to investigate the optimal strategy of fighting a zombie outbreak. For this purpose, we perform a parameter scan in κ and C_sz to generate a phase diagram, where we measure (a) the number of susceptibles at the end of the outbreak (number of survivors), (b) the number of zombies at the end of the outbreak, and (c) the time it takes for the battle of the living and dead to end. Thereby, we are able to compare two main strategies – fighting the zombies, which corresponds to a large kill parameter κ, and running away from them, which corresponds to a large repulsive interaction strength C_sz. We fix b = 5.5/h (see above), D_S = 0.01/h, Γ_S = Γ_Z = 1/h, σ_sd = σ_si = 100, and D_Z = 0.005/h (because zombies are slower than susceptibles). For the interaction strengths, we should note that the zombie disease is significantly more infectious than usual respiratory diseases (as seen from the large value of b <cit.>), such that the typical interaction strengths should be a factor 10 larger than in the case of the SIR-DDFT simulations in Ref.
<cit.> as zombies are very scary (and hungry). We therefore fixC_zs = -100and varyC_szbetween 0 and-300. Similarly, we varyκbetween 0/h and 15/h to ensure that humans kill zombies at a rate whose order of magnitude is comparable to the rate at which zombies kill humans. The resulting phase diagram is shown in <ref>. We plot here, as a function ofC_szandκ, (a) the final number of susceptiblesS̅_∞in relation to the initial population sizeN_0, (b) the final number of zombiesZ̅_∞in relation toN_0, and (c) the timet_endthat it takes for the zombie apocalypse to end. Notably, the parameterC_szhas no influence on the final number of susceptibles and zombies. All that matters is the kill rateκ. Consequently, for ending a zombie apocalypse, we have to kill the zombies and not run away from them. The determining factor is whetherκis smaller or lager than 5.5/h, which is the value ofb. Forκ< ball susceptibles are eliminated, whereas forκ> b, there is a large number of survivors. This number does not change whenκis increased. Slightly more interesting is the dependence ofZ̅_∞onκ. While all zombies are eliminated forκ> b, the number of zombies that are still around after all susceptibles are eliminated depends on the kill rate. There is, for example, a large number of zombies at the end forκ=0, whereas forκslightly smaller thanb, not only the number of remaining susceptibles, but also the number of zombies is very small. Fort_end, the overall picture is similar. The outbreak always ends very quickly forκ> b. In contrast, ifκis increased from zero to a value belowb, the time it takes till the battle is decided increases significantly (it even diverges forκ= b). Notably, there is also a small effect of increasing|C_sz|at constantκhere, namely that it takes longer till all susceptibles are eliminated. Consequently, while running away does not end a zombie apocalypse, it does increase the time till the zombies bite everyone. In <ref> a, the time evolution of the spatial distributionZ(x,y,t)(with spatial coordinatesxandy) of the zombies is shown for some selected parameter values. For the final timet=20h, we also show the distribution of the susceptiblesS. In the noninteracting case (C_zs = 0andC_sz =0, <ref> a i), the zombies radially spread outwards, as in the SIR model with diffusion <cit.>. The susceptible distribution at the final time looks like a ring. ForC_zs = 0,C_sz =-300(susceptibles are repelled by zombies, zombies are not affected by susceptibles, <ref> a ii), the zombies move outwards radially up to a timet=10/h. Later, they are reflected at the boundaries of the system and move inwards, leading to a structure with four-fold symmetry. The distribution of susceptibles is still ring-like, but with a cross-shaped region that contains many susceptibles. If, on the other hand, the zombies are attracted by the susceptibles and the susceptibles simply move around randomly (C_zs = -100,C_sz =0, <ref> a iii), the structures are generally similar to the ones observed in the noninteracting case, although there are fewer susceptibles att=20/h. The most interesting case is of course the one where both interactions are turned on (C_zs = -100,C_sz=-300, <ref> a iv). Here, the zombies distribute in space quicker, i.e., the fieldZspreads outwards faster than in the cases with no or fewer interactions. 
At the later stages (t=10h), the model starts to show some interesting pattern formation that is different from what is known from the SIR-DDFT model with reciprocal interactions <cit.>, where one finds concentric rings and later a separation into points that can be interpreted as infected persons self-isolating at their houses. In the present simulation based on the SZ-DDFT model, in contrast, one observes a square of zombies with bars at the edges on top of a spherical distribution att=10h, which then evolves into a smaller square with bars at the sides att=20h. The susceptibles, at this time, accumulate into quarter circles located at the edges. It should be noted that the form of the observed structures, in particular their four-fold symmetry, is a consequence of the boundary conditions and the quadratic form of the simulation box. However, it is still interesting to show and discuss these structures here since (a) they differ quite significantly from what is observed in the reciprocal case, where the boundary conditions are the same but do not have a strong effect on the observed patterns, and (b) the boundary effects do have a physical relevance in the present context. If the susceptibles and zombies are in a quadratic domain, the susceptibles will accumulate at the edges in the final stages because this is the part of the domain that has not yet been conquered by the zombies. Thus, the simulation results show that for a zombie apocalypse, the spatial domain on which it takes place is considerably more important than in a normal pandemic. Pattern formation effects observed here are dominated by boundary effects, not by particle interactions as in the SIR-DDFT simulations performed in Ref. <cit.>. Figure <ref> b shows the time evolution of the total number of susceptiblesS̅and zombiesZ̅in relation to the initial population sizeN_0for the cases depicted in <ref> a. In the noninteracting case withC_zs = 0andC_sz = 0, the number of susceptibles declines relatively quickly. ForC_zs = 0andC_sz =-300(susceptibles run away from the zombies and zombies move around randomly), the number of susceptibles declines significantly slower, and there is a considerably larger number of survivors att= 20/h. On the other hand, forC_zs = -100andC_sz = 0(zombies actively attack susceptibles and susceptibles do not actively run away), the zombies very quickly manage to kill essentially all the susceptibles. Finally, forC_zs = -100andC_sz =-300(susceptibles run away from zombies and zombies attack susceptibles), the overall number of survivors is very similar to the noninteracting case. Consequently, the two types of interactions compensate for each other (on the level of the entire population, the spatiotemporal dynamics is different), since att= 20/h one has a similar number of remaining susceptibles in the noninteracting and in the fully interacting case. Nevertheless, this final state is approached in a different way, with the initial decay of the number of susceptibles being slower than in the noninteracting case. § CONCLUSIONS Being based on the SIR model, the SIR-DDFT model inherits the enormous flexibility of compartmental theories for epidemic spreading. In this work, we have demonstrated this flexibility by extending it towards vaccination, exposure and asymptomaticity, and mutations. 
We have also derived several extensions that are based on ideas from soft matter physics by incorporating noise and self-propulsion and by deriving an SIR-PFC model and a model with non-reciprocal interactions (describing a zombie outbreak). Of course, these extensions can also be combined, for example to generate a model that involves vaccination, mutations, and activity. Moreover, we have performed numerical simulations to study a zombie apocalypse, a scenario in which non-reciprocal interactions are relevant. In future work, our results can be used for modeling disease outbreaks in a more realistic way, and in particular to study chemical reactions using DDFT and PFC models in contexts where particle interactions (including non-reciprocal ones) and particle self-propulsion are relevant. M.t.V. thanks the Studienstiftung des deutschen Volkes for financial support. R.W. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 433682494 – SFB 1459. § NUMERICAL METHODS The model equations (<ref>) and (<ref>) are solved by a finite difference scheme on a512×512periodic grid in the case of <ref> and a256×256periodic grid in the case of <ref>. The initial populations are given by Gaussian distributions centered atx=L/2,y=L/2with the domain lengthL. The Gaussian has a variance ofL^2/75and is normalized such that the mean initial population density for the respective grid is equal toρ̂=N_0/A=0.25and such that the ratio between the initial susceptible and zombie populations is given byS̅_0=999Z̅_0, i.e., on average one in every thousand persons is initially a zombie. For <ref>, the time evolutions of the fieldsSandZwere simulated for a total simulation time oft=20h. For <ref>, simulations were run until either the zombie or the susceptible population density fell below5·10^-4ρ̂. From this, the final valuesS̅_∞=lim_t→∞S̅andZ̅_∞=lim_t→∞Z̅were estimated. In addition, we determined the timet_endat which either the total number of zombies or susceptibles fell below5%of the initial numbersS̅_0orZ̅_0, respectively.
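For concreteness, a reduced Python sketch of the initialisation and bookkeeping described above is given here. It is our own illustration rather than the production code: the grid is smaller than in the actual simulations, the cell-centred grid convention is an assumption, and the stopping criterion is our reading of the density-based condition stated above.

import numpy as np

n, L = 128, 10.0                       # reduced grid; the paper uses 512x512 or 256x256
dx   = L / n
x    = (np.arange(n) + 0.5) * dx       # cell-centred coordinates (assumption)
X, Y = np.meshgrid(x, x, indexing="ij")
A        = L * L
rho_hat  = 0.25                        # mean initial population density
N0       = rho_hat * A

# Gaussian initial profile centred at (L/2, L/2) with variance L^2/75,
# normalised so that S + Z integrates to N0 and S0 = 999 Z0.
var   = L**2 / 75.0
gauss = np.exp(-((X - L/2)**2 + (Y - L/2)**2) / (2.0 * var))
gauss *= N0 / (gauss.sum() * dx * dx)
S = 999.0 / 1000.0 * gauss
Z = 1.0 / 1000.0 * gauss

def total(field):
    return field.sum() * dx * dx

S0_bar, Z0_bar = total(S), total(Z)

def run_finished(S, Z):
    """Stop once one mean population density falls below 5e-4 * rho_hat."""
    return min(total(S), total(Z)) / A < 5e-4 * rho_hat

def reached_t_end(S, Z):
    """t_end criterion: either population has dropped below 5% of its initial value."""
    return total(S) < 0.05 * S0_bar or total(Z) < 0.05 * Z0_bar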
http://arxiv.org/abs/2307.02304v1
20230705140147
Available energy of trapped electrons in Miller tokamak equilibria
[ "R. J. J. Mackenbach", "J. H. E. Proll", "G. Snoep", "P. Helander" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Available energy (Æ), which quantifies the maximum amount of thermal energy that may be liberated and converted into instabilities and turbulence, has been shown to be a useful metric for predicting saturated energy fluxes in trapped-electron-mode-driven turbulence. Here, we calculate and investigate the Æ in the analytical tokamak equilibria introduced by <cit.>. The Æ of trapped electrons reproduces various trends also observed in experiments; negative shear, increasing Shafranov shift, and negative triangularity can all be stabilising as indicated by a reduction in Æ, though it is strongly dependent on the chosen equilibrium. We find that negative triangularity is especially beneficial in vertically elongated configurations with positive shear, or low gradients. We furthermore extract a gradient-threshold-like quantity from Æ and find that it behaves similarly to gyrokinetic gradient-thresholds: it tends to increase linearly with magnetic shear, and negative triangularity leads to an especially high threshold. We next optimise device geometry for minimal Æ and find that the optimum is strongly dependent on equilibrium parameters, e.g. magnetic shear or pressure gradient. If one furthermore investigates the competing effects of increasing the density gradient, pressure gradient, and decreasing the shear, one finds regimes which have steep gradients yet low Æ, and that such a regime is inaccessible in negative-triangularity tokamaks. We finally compare Æ with saturated heat-flux estimates from the TGLF model and find fairly good correspondence. § INTRODUCTION Transport in tokamaks and stellarators is largely dominated by turbulent energy losses, which severely degrade the energy confinement in these devices. A detailed understanding of how various parameters characterising the plasma and the magnetic-field geometry, such as magnetic shear and the pressure gradient, affect the turbulent transport properties would be helpful for comprehending and mitigating it. The standard method to assess the turbulence properties of any given tokamak is to perform nonlinear gyrokinetic simulations. However, such simulations are computationally expensive because of the very disparate time- and length scales characterising the turbulence and the transport. It would thus be beneficial to find a reduced model capable of predicting the level of turbulent transport by simpler means. In a recent publication, it was shown that the available energy (Æ) of trapped electrons can serve as such a reduced model <cit.>, at least for turbulence driven by the plasma density gradient. Any plasma possesses a maximum amount of thermal energy that can be converted into instabilities and turbulence <cit.>. This "available" energy can be calculated by performing a Gardner restacking of the plasma distribution function f, in which phase-space volume elements are rearranged in a manner that respects Liouville's theorem <cit.>. The restacking of f which minimizes the thermal energy results in a "ground state" distribution function f_g, and the Æ is defined as the difference in thermal energy between f and f_g.
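As a simple illustration of the unconstrained Gardner restacking just described (not of the constrained calculation performed in this paper), consider a distribution discretised into equal phase-space volume elements with energies ε_i and occupation numbers f_i. The ground state is then obtained by moving the largest occupation numbers into the lowest-energy cells, and the Æ is the corresponding drop in energy:

import numpy as np

rng = np.random.default_rng(0)
eps = np.sort(rng.uniform(0.0, 1.0, 1000))   # energies of equal-volume phase-space cells
f   = rng.uniform(0.0, 1.0, 1000)            # occupation numbers (arbitrary example data)

E_initial = np.sum(f * eps)
f_ground  = np.sort(f)[::-1]                 # Gardner restacking: largest f into lowest eps
E_ground  = np.sum(f_ground * eps)
print("available energy of this toy distribution:", E_initial - E_ground)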
If one imposes the additional constraint that adiabatic invariants be conserved in the restacking process, the Æ becomes relevant to magnetically confined plasmas <cit.>. In fusion plasmas, the magnetic moment μ is usually conserved for all species, and the parallel adiabatic invariant 𝒥 = ∫ m v_ dℓ is conserved for magnetically trapped electrons. A significant portion of electrons are trapped, and can contribute to the turbulence through trapped electron modes (TEMs). The Æ of trapped electrons correlates with the turbulent energy flux for such TEM-driven turbulence over several orders of magnitude in saturated energy fluxes <cit.>. This correlation is expressible as a simple power law, where the saturated energy flux, Q_sat, was found to be related to the available energy, which we denote by A in formulas, via approximately Q_sat∝ A^3/2. This relation was found to hold for both a tokamak and stellarators, and for various values of the density gradient. Aside from this relationship, other links have been found by <cit.> where quasi-linear plateauing is shown to be related to a concept closely connected to Æ, highlighting other links to transport physics. In any case, in order to gain a deeper understanding it is of interest to derive an explicit expression of the Æ in tokamak geometry, in order to investigate the dependence of Æ on various geometrical and plasma parameters. This is our aim in the present paper, where we compute the Æ for the family of tokamak equilibria constructed by <cit.>. The starting point is the following explicit expression for the Æ on a flux surface of any omnigenous equilbrium <cit.>, including that of a tokamak, A= 1/2 √(π) L Δψ_t Δα/B_0∬∑_wells e^-z z^5/2ω̂_α^2ℛ[ 1/zω̂_*^T/ω̂_α-1] ĝ^1/2dλdz, Here, L is the total length of a field-line completing one poloidal turn, B_0 is some reference magnetic field strength, z=H/T_0 is the particle energy normalized by the temperature, λ = μ B_0/H is the pitch angle, and Δψ_t and Δα denote the size of the flux-tube in the radial and binormal directions respectively (we have parameterized the radial coordinate by means of the toroidal flux ψ_t and the binormal by means of the Clebsch angle α). The hatted quantities in the integrand denote normalized frequencies, with ω̂_α being the normalized bounce-averaged drift precession frequency, ω̂_*^T the normalized electron diamagnetic drift frequency, and Ĝ^1/2 the normalized bounce-time. They are explicitly defined as ω̂_α ≡ - Δψ_t/H∂_ψ_t𝒥/∂_H𝒥 ω̂_*^T ≡Δψ_t dln n/dψ_t( 1 + η[ z - 3/2] ) ĝ^1/2 ≡∂_H𝒥/L√(2H/m) where we have denoted the ratio between the gradients by η = (dlnT / dψ_t ) / (dlnn / dψ_t ). Finally, ℛ[x]=(x+|x|)/2 is the ramp function. Using the above expressions, we shall find the Æ of any Miller tokamak. § THEORY §.§ The available energy in any omnigenous systems We first note that the integral over z can be rewritten into a convenient form. We define two functions which are independent of z, namely c_0 = - Δψ_t/ω̂_α(λ)dln(n)/dψ_t( 1 - 3/2η), c_1 = 1 + Δψ_t/ω̂_α(λ)dln(n)/dψ_tη. With these functions, the integral over the normalized energy z reduces to the following form, I_z(c_0,c_1) = 8/3 √()∫_0^∞exp(-z) z^3/2 r [ c_0 - c_1 z ] d z. This integral can be solved analytically and its functional form depends on the signs of c_0 and c_1, resulting in four different conditions. The easiest case to evaluate is the case where c_0<0 and c_1>0. In this case the argument of the ramp function is always negative, and hence the integral reduces to zero. 
The second case is when the argument of the ramp function is always positive, which happens whenever c_0≥0 and c_1 ≤ 0. The integral then reduces to the following form I_z = 2c_0 - 5c_1 . There are two cases left to consider. Firstly we inspect the case where the argument of the ramp function is positive for low z but becomes negative for high z, i.e. c_0 ≥ 0 and c_1 > 0. The unique point where the argument of the ramp function vanishes is z_* = c_0/c_1. Thus the integral becomes I_z = 8/3 √()∫_0^z_*exp(-z) z^3/2( c_0 - c_1 z ) d z. This integral can be expressed in terms of the error function, erf(x) = 2/√()∫_0^x exp(-t^2) dt, I_z = (2c_0 - 5c_1) erf(√(c_0/c_1)) + 2/3√() (4c_0 + 15c_1) √(c_0/c_1)exp( - c_0/c_1). The final case is that where the argument of the ramp function is negative for low z but becomes positive for high z, i.e. c_0 < 0 and c_1 ≤ 0. The integral then becomes, I_z =(2c_0 - 5c_1) [ 1 - erf(√(c_0/c_1)) ] -2/3√() (4c_0 + 15c_1) √(c_0/c_1)exp( -c_0/c_1). Note that I_z ≥ 0, ∀(c_0,c_1) ∈ℝ^2, which can also be seen in Fig. <ref>. The Æ can now be found by executing the integral over the remaining coordinate A = 3/16Δψ_t Δα L/B_0 n_0 T_0 ∫_{λ}dλ∑_wells(λ) I_z(c_0,c_1) ω̂_α^2 ĝ^1/2. Note that this expression is completely general; no approximations have been made in executing these integrals, aside from the preceding assumption of omnigeneity. It is also interesting to note that from this expression, one can see that there are no tokamak configurations with vanishing Æ, at least to leading order near the axis. This conclusion can most readily be drawn by investigating the expression for ω_α from <cit.>. Here, one can find that there is always a zero-crossing for ω_α, implying that c_0 and c_1 must change sign. As such, the available energy must be non-zero (as either I(c_0,c_1) or I(-c_0,-c_1) must be non-zero). Formally, this corresponds to the fact that such a zero-crossing implies that the device does not have the so-called maximum-𝒥 property, which is required for the linear stability of trapped electron modes. To make further progress in solving Eq. (<ref>), one requires the function ω̂_α(λ), which in turn necessitates a specification of the equilibrium. In this paper, we will make use of local construction of the equilibrium, employing a formalism developed by <cit.>. §.§ Construction of local equilibria Equilibria are constructed by finding a radially local solution to the Grad-Shafranov equation, and this solution allows us to find ω̂_α. We highlight the essential components of this derivation, which essentially follows steps taken by <cit.>. The Luc-Mercier formalism requires the shape of the flux surface, the poloidal field B_p on that flux surface, and the gradients of the pressure p(ψ) and the toroidal field function f(ψ) = R B_ϕ on the flux surface, where R is the major radial coordinate, B_ϕ is the toroidal component of the magnetic field, and ψ is the poloidal flux. We parameterize the flux-surface as R_s = R_s(l) and Z_s = Z_s(l), where l measures the poloidal arclength along the flux surface. It is furthermore useful to define a tangential angle u, which measures the angle between the unit vector in the major radial direction e_R and the vector tangential to the flux surface e_l in a clockwise manner, thus dR_s(l)/d l = cos u, dZ_s(l)/d l = - sin u. 
With this definition, the angle u can be calculated by d u / d l = - 1 / R_c, where R_c(l) is the radius of curvature of the poloidal cross-section, and the negative sign arises because the poloidal arclength is measured clockwise. We go on to introduce a radial-like expansion variable ρ which is zero on the given flux surface, in terms of which the cylindrical coordinates become R(ρ,l) = R_s(l) + ρsin u, Z(ρ,l) = Z_s(l) + ρcos u. The metric tensor in these coordinates has non-zero components only on the diagonal (which is to be expected as we ensured orthogonality in the construction), g_ij = diag[ (1 - ρ/R_c)^2, 1, R^2 ], where we use the convention x^1 = l, x^2 = ρ, x^3 = ϕ. The local solution is now constructed by expanding in ρ, ψ ≈ ψ_0 + ρψ_1 + ρ^2 /2ψ_2, p'(ψ) ≈ p'(ψ_0) f'(ψ) ≈ f'(ψ_0) and substitute into the Grad-Shafranov equation, which in leading order reduces to ψ_2 = ( sin(u) + R_s/R_c) ψ_1/R_s - μ_0 R_s^2 p'(ψ_0) - f(ψ_0)f'(ψ_0). This allows one to find radial variation of the poloidal magnetic field by using <cit.> B_p = | ∇ψ |/R, resulting in B_p(l,ρ) = ψ_1/R_s( 1 + ρ[ 1/R_c - μ_0 R_s^2 p'(ψ_0)/ψ_1 - f(ψ_0) f'(ψ_0)/ψ_1] ). From this equation we can immediately see that ψ_1/R_s = B_p,s, with B_p,s being the poloidal field on the flux-surface as indicated by the subscript. As such, the poloidal field strength can be written as | B_p(l,ρ)| = B_p,s( 1 + ρ[ 1/R_c - μ_0 R_s p'/B_p,s - ff'/R_s B_p,s] ) ≡ B_p,s( 1 + ρ_ρ b_p ) The toroidal field is found from its definition B_ϕ = f(ψ)/R, resulting in |B_ϕ(l,ρ) | = B_ϕ,s( 1 + ρ[ f'(ψ_0)/f(ψ_0) R_s B_p,s - sin u/R_s] ) ≡ B_ϕ,s( 1 + ρ_ρ b_ϕ), where B_ϕ,s = f(ψ_0)/R_s. The total magnetic field strength is also readily derived B = √(B_ϕ,s^2+B_p,s^2)(1 + ρB_ϕ,s^2 _ρ b_ϕ + B_p,s^2 _ρ b_p /B_ϕ,s^2+B_p,s^2) ≡ B_s ( 1 + ρ_ρ b ). The radial variation of the poloidal line element is readily found from the metric tensor, dl = (1 - ρ/R_c) [ dl ]_ρ=0 In these equations f'(ψ_0) is treated as a free parameter, but it is difficult to ascertain if the chosen value of this parameter is realistic. It is more convenient, however, to specify the magnetic shear, which is related to f'(ψ_0). This can be made explicit by investigating the safety factor q = f(ψ)/2 ∫dl /R_s^2 B_p,s. Taking the derivative of the safety factor with respect to ψ, one finds an equation describing this relationship, _ψ q = f'/f q + f 1/2π∫dl/R_s^3 B_p,s^2( - 2/R_c - 2 sin u/R_s + μ_0 R_s p'/B_p,s + f f'/R_s B_p,s). We also wish to relate the arclength along a magnetic field line to the poloidal arclength. These quantities are related as dℓ =| B/B_p| dl. Finally, the poloidal coordinate can be expressed in terms of the poloidal angle θ instead of the poloidal arclength by l_θ≡dl/dθ = √( (_θ R_s)^2 + (_θ Z_s)^2 ), and the total arclength thus becomes L = ∮ l_θ| B/B_p| dθ. §.§ Non-dimensionlisation and available energy We proceed to make the various functions dimensionless as in <cit.>, and in doing so we will introduce various dimensionless constants which will be useful for the remainder of the analysis. We assume that we have been given the dependencies of the various functions in terms of the minor radial coordinate r, which in turn relates to the major radial coordinate R_0 via the inverse aspect ratio of the flux surface in question ϵ=r/R_0. With these coordinates, we define various dimensionless functions, R̂_s = R/R_0, Ẑ_s = Z/R_0, R̂_c = R_c/r, l̂_θ = l_θ/r, B̂_ϕ = B_ϕ/B_0, B̂ = B/B_0. 
One also needs to relate ψ to r, which can be done by investigating the poloidal field as in Eq. (<ref>) B_p = _r ψ/R_0| ∇ r |/R̂_s, where we have made use of the relation f(ψ_0) = B_0 R_0. We go on to identify two factors in the above expression, namely _r ψ /R_0 ≡ B_p,0 and B̂_p,s≡| ∇ r | / R̂_s. Inserting these into the equation for the safety factor (<ref>), one finds B_p,0 = γϵ/q B_0, γ≡1/2 ∮l̂_θ/R̂_s^2 B̂_p,sdθ. We proceed to define a dimensionless pressure gradient, analogous to the α parameter used in s-α geometry, α = - 2 μ_0 ϵ^2 R_0^2 p'/B_p,0 = -ϵ r d p/d r/ B_p,0^2/2 μ_0. Note that this dimensionless pressure gradient is unrelated to the Clebsch angle. The pressure gradient can in turn be used to define a dimensionless toroidal current density σ = ( μ_0 p' + ff'/R_0^2) ϵ R_0^2/B_p,0 = q/γ f' R_0 - α/2 ϵ. We go on to define the shear s in the following manner s = ϵ R_0^2 B_p,0_ψln q = r/q q/ r which can be substituted into Eq. (<ref>) to relate the shear to f'R_0 as s = γϵ^2/q f'R_0 - 2/γ C_1 - 2 ϵ/γ C_2 - α/2γϵ C_3 + q/γ^2 C_4 f'R_0, where we have defined the geometric constants C_1 to C_4 as C_1 = 1/2 ∮l̂_θ/R̂_c R̂_s^3 B̂_p,s^2dθ, C_2 = 1/2 ∮l̂_θsin u/R̂_s^4 B̂_p,s^2dθ, C_3 = 1/2 ∮l̂_θ/R̂_s^2 B̂_p,s^3dθ, C_4 = 1/2 ∮l̂_θ/R̂_s^4 B̂_p,s^3dθ. The radial derivatives of the magnetic field become r _ρ b_p = ( 1/R̂_c - α/2 ϵB̂_p,s[ 1/R̂_s - R̂_s ] - σ/R̂_s B̂_p,s), r _ρ b_ϕ = ϵ( γ^2ϵ/q^2[ σ + α/2 ϵ] R̂_s B̂_p,s - sin u/R̂_s). Finally, we express the total magnetic field length as L = qξ/γ R_0 , ξ≡∮l̂_θB̂_s/B̂_p,sdθ. We now turn our attention to the precession frequency, which we calculate from (<ref>). To simplify the calculation slightly, we note that the operator Δψ_t ∂_ψ_t≈Δψ∂_ψ to leading order, as we can approximate Δψ_t ≈Δψ∂_ψψ_t. Using this identity, we find the same expression as in <cit.>, ω̂_α(λ) = Δψ/R_0^2 B_p,0⟨1/ϵ( 2 [1 - λB̂] [r_ρ b - r_ρ b_p - 1/R̂_c] - λB̂ r _ρ b ) / B̂_p,sR̂_s ⟩_λ, where we define the bounce averaging operator in angular brackets as ⟨ h ⟩_λ = ∫dθ  h ·l̂_θB̂_s/B̂_p,s/ √(1 - λB̂)/∫dθ l̂_θB̂_s/B̂_p,s/ √(1 - λB̂). We rewrite the precession frequency as ω̂_α≡Δψ/R_0^2 B_p,0ω̂_λ. Let us next investigate the Jacobian ĝ^1/2. We rescale it with a factor ϵ^1/2, defining ĝ_ϵ^1/2≡ĝ^1/2√(ϵ). The Æ now becomes A = 3/16Δψ_t Δα L/B_0 n_0 T_0 √(ϵ)( Δψ/R_0^2 B_p,0)^2 1/ϵ∫_{λ}dλ ∑_wells(λ) I_z(c_0,c_1) ω̂_λ^2 ĝ_ϵ^1/2, where the prefactor 1/ϵ to the integral deliberately not cancelled against the √(ϵ) to highlight that the integration range to lowest order {λ}∼ϵ, thus the integral is ∼𝒪(ϵ^0). With the above expression we go on to define a dimensionless Æ. We take steps in line with <cit.>, and calculate the fraction of the total thermal energy that is available. The thermal energy of a plasma in a flux tube is to leading order in Δψ_t, E_t = ∫3/2n T/Bdψ_t dαdℓ = 3/2 n_0 T_0 Δψ_t Δα L /B_01/ξ∮l̂_θB̂_p,s^-1dθ. We then define the available energy as a fraction of the thermal energy as A = 3A/2E_t. Simplifying the expression using Δψ = Δ r ∂_r ψ, one finds that A = 3/16( Δ r/R_0)^2 ξ√(ϵ)/∮l̂_θB̂_p,s^-1dθ·1/ϵ∫_{λ}dλ ∑_wells(λ) I_z(c_0,c_1) ω̂_λ^2 ĝ_ϵ^1/2. Δ r measures the length-scale over which energy is available, i.e. a typical length-scale over which gradients can be flattened. We take this to be proportional to the correlation length, typically found to be the gyroradius. Hence we set Δ r = C_r ρ_g, where ρ_g is the gyroradius, and C_r some function of order 𝒪(ρ_g^0). This function is not known a priori, and may vary. 
For example, if there are large radial streamers present in the system C_r may be significantly increased. However, for simplicity, we shall simply take C_r=1 below. The dimensionless Æ now becomes A = 3/16( ρ_g/R_0)^2 ξ C_r^2 √(ϵ)/∮l̂_θB̂_p,s^-1dθ·1/ϵ∫_{λ}dλ ∑_wells(λ) I_z(c_0,c_1) ω̂_λ^2 ĝ_ϵ^1/2. This expression has various scalings which are of interest. Firstly, we see that reducing the aspect ratio at fixed ρ_g/R_0 is beneficial since it leads to fewer trapped particles. Note that, in the limit of large aspect ratio, the trapping fraction scales as √(ϵ), which is the same dependency found here. A reduction in the expansion parameter ρ_g/R_0 (at fixed ϵ) is also found to help in decreasing Æ. As a final step, we introduce the dimensionless density gradient R_0^2 B_p,0_ψln n = R_0/n∂ n/∂ r≡ - ω̂_n, with which c_0 and c_1 reduce to an especially simple form c_0 = ω̂_n/ω̂_λ( 1 - 3/2η), c_1 = 1 - ω̂_n/ω̂_λη. §.§ Miller geometry Finally, we choose our equilibrium to be of the type discussed by <cit.>. The key step is to parameterise the flux surface as a standard D-shaped tokamak in terms of the poloidal angle θ, R_s(θ) = R_0 + R_0 ϵcos(θ + arcsin [δ] sinθ), Z_s(θ) = R_0 κϵsinθ. Here, R_0(r) is the centre of the flux surface, κ(r) is the elongation, and δ(r) is the triangularity. An important feature of this parameterisation is that it is up-down symmetric, which can be seen by invariance under (Z_s,θ)↦-(Z_s,θ). The poloidal magnetic field can then be calculated via (<ref>), and the equilibrium is fully specified by the following set of 9 parameters; [ϵ,κ,δ, s_κ, s_δ, _r R_0, q, s, α], where s_κ =r _r lnκ and s_δ = r _r arcsin(δ). Henceforth we shall refer to this set of numbers which determines the local geometry as a “Miller vector”, M = [ϵ,κ,δ, s_κ, s_δ, _r R_0, q, s, α]. We finally plot a selection of cross-sections in Fig. <ref>, to serve as a reference for the various shapes mentioned in subsequent sections. §.§ An analytical limit: large aspect ratio s-α tokamak We proceed to investigate a limiting case of Miller geometries; namely that of a large aspect ratio tokamak with circular flux surfaces and a steep local pressure gradient. To this end, we set ϵ≪ 1, κ=1, and δ= s_δ = s_κ = _r R_0 = 0, equivalent to the s-α tokamak investigated by <cit.>. In this limit we find that γ = C_1 = C_3 =C_4 = 1 and C_2 = 0, so that the equation for the shear simplifies to s = σ - 2. Furthermore, the prefactor ξ/∮l̂_θB̂_p,s^-1dθ=1, and the same holds for R̂_s = B̂_p,s = 1 to lowest order in ϵ. The toroidal field becomes to first order in ϵ, B_ϕ = B_0 ( 1 + ρ/rϵ[ α/2 q^2 - cosθ] ), where the minus sign of the cosθ term arises because θ and u have different directions. The poloidal field becomes B_p = ϵ B_0/q( 1 + ρ/r[ αcosθ - s - 1 ] ). We use these expression to find the normalized drift ω_λ, to leading order in ϵ. To do so, we introduce a trapping parameter k ∈ [0,1], which relates to the pitch-angle like parameter λ as λ = 1 + ϵ (1 - 2 k^2) so that terms of the form λB̂ become to first order in ϵ λB̂ = 1 + ϵ (1 - 2k^2 - cosθ). With this, the drift precession frequency finally becomes ω̂_λ = 2 ⟨ (2 k^2 + cosθ -1) (s - αcosθ) - ( α/4 q^2 - cosθ/2) ⟩_λ. The integrals that are to be evaluated are of two forms, I_1 = ∫ (a + b cosθ)√(2k^2 + cosθ - 1) dθ and I_2 = ∫a + b cosθ/√(2k^2 + cosθ - 1)dθ. One can relate these various integrals to elliptical integrals of the first and second kind, which we define to be K(k) ≡ ∫_0^/2dζ/√(1 - k^2 sin^2 ζ), E(k) ≡ ∫_0^/2√(1 - k^2 sin^2 ζ) dζ. 
The first integral then becomes I_1/2 √(2) =2 ( a + b/3[ 2k^2 -1 ] ) E(k) +2 ( a + b/3) ( k^2 -1 ) K(k), and the second integral becomes I_2/2√(2) = 2b E(k) + (a-b)K(k). Evaluating the precession frequency is now trivial, and we find that the result is the same as that of <cit.>, ω̂_λ/2 = E(k)/K(k) - 1/2 - α/4 q^2 + 2s [ E(k)/K(k) + k^2 -1 ] - 2α/3[ E(k)/K(k) (2k^2 -1) + 1 - k^2 ]. Furthermore, the Jacobian becomes ĝ_ϵ^1/2(k) = 2√(2)/π K(k). With all these expressions the Æ becomes a straightforward integral of known functions over k A = 3/2√(2)( ρ_g/R_0)^2 C_r^2 √(ϵ)∫_0^1 d k   k I_z(c_0,c_1) ω̂_λ(k)^2 K(k), which can efficiently be computed numerically. § NUMERICAL RESULTS Two codes have been constructed: one that computes the integral of (<ref>) using standard integration routines, and a first-order numerical routine that computes both the precession frequencies and the Æ as given in (<ref>), both of which are computationally cheap (fractions of a CPU second per evaluation). We first investigate the results obtained for the s-α circular tokamak, after which we investigate how Æ varies in Miller geometries as a function of various parameters. The code used for generating these results is freely available on GitHub[Install the code via <https://github.com/RalfMackenbach/AE-Miller>]. The bounce-integrals required in Eq. (<ref>) are evaluated using numerical methods detailed in <cit.>. Finally we take the prefactor ρ_g/R_0 to be unity in all plots presented below, so when converting to a real device one should multiply the Æ by a factor (ρ_g/R_0)^2. §.§ s-α geometry A plot of the Æ calculated from Eq. (<ref>) is given in Fig. <ref> as a function of the magnetic shear and pressure gradient. We note that the ranges for s and α are not meant to represent realistically attainable values here, instead we are more interested in the general structure of the Æ over the domain. There are multiple interesting features visible in the figure. Even in this simplest model, the Æ exhibits rich structure over the s-α plane. The Æ is large when s and α are comparable, s∼α, and it is otherwise much smaller, particularly when the absolute value of one of these quantities is large. This is in line with previous findings <cit.>. It is furthermore interesting to note that the precise reduction in Æ depends on the drive: for a pure electron-temperature gradient, significant positive shear is most helpful in reducing Æ, whereas the Æ driven by a pure density gradient benefits more from negative shear. Since Eq. (<ref>) can be integrated numerically to high precision, it serves as a useful benchmark for the more general Æ of (<ref>). Accordingly, we have compared the Æ in the large-aspect-ratio limit with circular flux surfaces using a code which solves Eq. (<ref>). This comparison in shown in Appendix <ref>, and we find that the codes agree. §.§ Miller geometry We now leave the realm of the s-α limit and venture into shaped, finite-aspect-ratio equilibria. Our first step is to investigate the dependence on magnetic shear and pressure gradient for a range of different Miller vectors, and the results are shown in Fig. <ref>. Here we see similar trends as in section <ref>: negative shear and large α tend to be especially stabilising for a pure density gradient. It is however also clear that the magnitude and precise contours depend strongly on the chosen Miller vector, as defined in Eq. (<ref>). 
For example, it can be seen that lowering the safety factor is very stabilising, as the Æ is reduced over a large region of the s-α plane as one compares subfigure (a) to (b). In subfigure (c) the elongation has been reduced to produce a “comet”-type configuration (κ < 1, i.e. a horizontally elongated tokamak, see Fig. <ref>), which can decrease the magnitude of the Æ, though the stabilising effects of s and α become less pronounced. Finally, in subfigure (d) the sign of the triangularity has been reversed to become negative. Though the shape of the contours is largely unchanged, the peak in Æ is shifted to higher α and lower s, indicating that negative triangularity can be particularly beneficial in high-shear discharges with a modest value for α. In a more general sense, when changing any of the parameters significantly one should expect that the precise shape and magnitude of the contours changes non-trivially. With this important caveat in mind, let us investigate the influence of geometry on the Æ. To do so, we display the dependence on κ and δ for various Miller vectors in Fig. <ref>. Several general interesting trends can be noted. Firstly, we see that it is not true in general that either positive or negative triangularity is always stabilising; it depends on the other Miller parameters. Secondly, we see that tokamaks with κ <1 and δ < 0, often referred to as (negative) comet-cross-section tokamaks <cit.>, are possible contenders for Æ-minimizing geometries. This is perhaps unsurprising, as such tokamaks are close to having the maximum-𝒥 property as shown by <cit.>. Since Æ measures deviations from the maximum-𝒥 property, it is thus expected that these configurations perform well in terms of Æ. Investigating the plots in detail, we first note that increasing the inverse aspect ratio, as is done when going from (a) to (b), has a stabilising effect. Naively, one would expect that doubling the inverse aspect ratio would increase the Æ by roughly a factor √(2)≈ 1.4, due to the factor √(ϵ) in Eq. (<ref>). However, going from plot (a) to (b) we see an increase of the maximum Æ by a factor ∼ 1.1, which is significantly less than √(2). This is likely due to the fact that, in a small-aspect-ratio device, magnetic field lines spend most of their time (or more precisely, arc-length) on the inboard side of the tokamak <cit.>. There, ω_λ tends to be opposite to the drift wave and therefore these orbits do not contribute to the Æ for a pure density gradient. Going from plot (a) to (c) the safety factor is halved, resulting in the contours changing shape significantly. It is furthermore interesting to note that the largest Æ increases by some 25% when the safety factor drops, which indicates that there is significant interplay between shaping and the safety factor. Finally, plot (d) has significant positive shear as compared to (a), which drastically changes the picture. As in Fig. <ref>, we again find that significant positive shear leads to negative triangularity being preferable to positive triangularity. It is also interesting to note that with significant positive shear, the negative-comet tokamak is no longer an Æ-minimizing geometry. We find that the results change somewhat if one instead imposes a pure electron temperature gradient (not shown here), though the basic trends remain intact, apart from the large aspect-ratio stabilisation. 
All in all, we conclude from these results that the Æ is very sensitive to equilibrium parameters, including quantities not investigated here such as s_κ, η, and ∂_r R_0. This sensitivity is perhaps reassuring: gyrokinetic turbulence has long been known to be strongly dependent on equilibrium parameters and even slight nudges can drastically change the picture (a sentiment perhaps best captured by the old Dutch expression wie het kleine niet eert, is het grote niet weerd). We seem to reproduce similar sensitivity in this simplified Æ-model for trapped electrons. This sensitivity becomes especially clear when investigating the dependence of Æ on triangularity, which we shall do in the next section. §.§ When is negative triangularity beneficial? As hinted at in the previous section, it is not possible to make a general statement about the effect of negative triangularity on Æ; its possible benefit depends strongly on other parameters describing the equilibrium. We can however find trends, and in order to do so we define the following fraction Δ = A(δ = -0.1)/A(δ = +0.1), where δ = ± 0.1 is chosen to represent a typical experimentally realizable range of parameters. This fraction can be interpreted as the factor by which the Æ changes upon switching from positive to negative triangularity, where Δ <1 implies a reduction in Æ. We present an investigation of Δ and its dependencies in Fig. <ref>. We see two clear trends which seem to be robust for tokamaks with κ > 1. Firstly, as noted in the previous sections, in plots (a) and (b) we see that negative triangularity tends to be especially stabilising for configurations with significant positive shear. Similar conclusions were made by <cit.>, who found that the turbulent heat-flux in gyrokinetic simulations follows the same trend for TEM-driven turbulence: only for sufficiently high positive shear is a decrease in heat flux found at negative triangularity. Increasing α tends to push the Δ = 1 line (in the plot this is the logΔ = 0 line) to even higher values of shear, implying that a significant pressure gradient may make negative triangularity less desirable. Secondly, in plots (c) and (d) we note that negative triangularity can be beneficial in situations where the gradient is small, such as in the core. The dependence on η is non-trivial; at small density gradients a nonzero value of η can make negative triangularity beneficial. As in the previous sections, the results here depend on the Miller vector and are not meant to serve a quantitative measure for core and edge transport. We have, however, found that the presented trends tend to be robust as long as κ > 1 and thus do have qualitative value. We finally note that a more comprehensive model of the effect of negative triangularity should likely take collisions, impurities, and global effects into account <cit.>. From these results we infer that negative triangularity is expected to be especially beneficial in the core of the plasma, where gradients are necessarily small. It is not clear if the benefit extends to the edge: only with significant positive shear does negative triangularity become beneficial here as well. One should also keep in mind that Δ measures the effect of going to negative triangularity whilst keeping all other parameters fixed. A more complete investigation would, for example, compare experimental equilibria with positive and negative triangularity, or use a global MHD-equilibrium code to find consistent profiles. 
We do not attempt such an investigation here, but we note that our mathematical framework would readily allow for such a comparison. We finally remark that the above results may seem counter-intuitive, as negative triangularity is often thought to automatically imply TEM stabilisation since the bounce points of most trapped particles reside on the inboard side of the torus, where the magnetic curvature should be favourable. Consequently, it is often argued that the bounce-averaged drift is such that TEMs are stabilised. Upon calculation of (<ref>), we find no such stabilisation however, as explained further in Appendix <ref>. §.§ Gradient-threshold like behaviour Our next step is to investigate the dependence of Æ on the gradient strength ω̂_n. From Eq. (<ref>), one can show that there are two distinct scalings <cit.>. In a strongly driven regime one finds that the Æ scales linearly with the gradient strength ω̂_n. For a weakly driven regime one can expand around small ω̂_n, and one finds that the Æ scales with the gradient strength as A ∝ω̂_n^3, A∝ω̂_n if |ω̂_n | ≫ 1, ω̂_n^3 if |ω̂_n | ≪ 1. These scalings are reminiscent of gradient-threshold (or critical-gradient) type behaviour <cit.>. Gradient thresholds are signified by a sudden decrease in heat-flux when decreasing the gradient below some threshold value. The aforementioned scaling behaviour of the Æ is displayed in Fig. <ref> which similarly shows a rapid decrease below some threshold value. In plot (b) we estimate a critical-threshold like quantity from Æ, by fitting a straight line to the strongly driven regime, i.e. we find the best-fit parameters a_0 and a_1 in the formula A = a_0 + a_1 ω̂_n, with ω̂_n ≫ 1. The gradient threshold, denoted by ω̂_c, is then defined as the interception with the abscissa, hence ω̂_c ≡ - a_0/a_1. One could, of course, use different definitions for ω̂_c, e.g. one could define the intersection point between the two straight lines on the log-log plot of Fig. <ref> as ω̂_c. We have however found that the definition of Eq. (<ref>) has several benefits: it is computationally cheaper, less prone to numerical noise, and seems to behave more smoothly. Other attempted definitions do show the same trends. We illustrate how ω̂_c varies as a function of various equilibrium parameters in Fig. <ref>. Note that subplot (a) in Fig. <ref> has the same Miller vector as Fig. <ref> (a), and subplot (b) in Fig. <ref> has the same Miller vector as Fig. <ref> (a). One interesting trend is that increasing the shear tends to increase ω̂_c linearly, and ω̂_c tends to plateau for low shear to some value. This is similar to findings of <cit.>, though their investigation focusses on electron-temperature-gradient turbulence. It is also interesting to note that, although the Æ is high in the negative-triangularity configuration, it does benefit from a high critical gradient, which is in line with findings of <cit.>. This effect becomes even more pronounced as one increases the shear, which furthermore reduces the Æ in the negative triangularity configuration. This implies that negative triangularity may be beneficial in a different sense: since the critical gradient as estimated from Æ is higher in negative triangularity discharges the profiles are able to sustain much higher gradients and thus higher core density/temperature. §.§ Tokamak optimisation In this section we aim to find Æ-optimised tokamaks for a certain set of equilibrium parameters, at fixed gradients (ω̂_n=1 and η=0). 
To this end, we choose to optimise over κ and δ whilst keeping all other parameters fixed. In order to find somewhat realistic solutions, we restrict ourselves to a bounded optimisation space, namely κ∈ (1/2,2), δ∈ (-1/2,1/2). The SHGO algorithm from <cit.> is ideally suited for finding the global minimum in this low-dimensional bounded parameter-space, and is furthermore available in SciPy. Finally, we shall vary magnetic shear and α, and investigate their effect on the found global minimum. The results are displayed in Fig. <ref>, where the minimal Æ, κ, and δ values are displayed as a function of s and α. We furthermore succinctly display the type of geometry in plot (d) by distinguishing between four different geometries: we refer to κ < 1 and δ < 0 as negative comet (NC), κ >1 and δ < 0 as negative triangularity (NT), κ > 1 and δ >0 as positive triangularity (PT), and finally κ < 1 and δ > 0 as positive comet (PC). Similar cross-sections may be seen in Fig. <ref>. It can be seen that both the optimal triangularity and elongation tend to be in the corners of the optimisation domain, and hence one should expect that these results are strongly dependent on this domain. It is interesting to note that the NT solution tends to be optimal whenever there is significant shear and the pressure gradient is not too large, which is in line with the findings of section <ref>. From this plot an important conclusion can be drawn: there is no such thing as a single “optimal” solution. The global minimum depends sensitively on other equilibrium parameters such as shear and pressure gradient, which are in turn determined by the profiles of the safety factor, density, and temperature. Hence, if one were interested in finding an Æ-optimised tokamak one should take care in choosing the right profiles. One could also choose to let the profiles be part of the optimisation, by describing them with some number of free parameters and constraints (e.g. one could use a fixed number of Fourier modes on top of a profile and optimise for the mode amplitudes). In reality, the profiles are themselves set by equilibrium conditions, making a self-consistent optimisation highly non-trivial. A more consistent investigation could perhaps solve this by coupling the current Æ-model to a transport solver, which would calculate self-consistent profiles. §.§ Existence of solutions with high gradients yet low Æ In this section, we investigate how this Æ model may relate to the suppression of TEMs when the density gradient is increased. To do so, we note several interesting properties that arise as one increases this gradient. Firstly, the normalized pressure gradient α scales linearly with the density gradient (assuming a constant ratio of poloidal magnetic field pressure to thermal pressure). The shear depends on the pressure gradient, as such a gradient drives the bootstrap current, which in turn changes the rotational transform profile. The bootstrap current density has an off-axis maximum in realistic scenarios, and such an off-axis maximum can locally lower the shear. This is most readily seen by inspecting the expression for the shear in a large-aspect-ratio, circular tokamak, which depends on the current density profile j(r) as s(r) = 2 ( 1 - j(r)/⟨ j⟩(r)), ⟨ j⟩(r) = 2/r^2∫_0^r x j(x)  d x, where ⟨ j⟩(r) measures the average current density inside the radius r. From this expression, it is clear that for current-density profiles which peak at r=0, the shear is always positive. 
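A short numerical illustration of this point, and of the one made next, is sketched below; the current profiles are illustrative choices of our own and are not taken from the paper.

```python
# Sketch: the shear s(r) = 2(1 - j/<j>) for a centrally peaked current profile,
# with and without a model off-axis "bootstrap" bump.
import numpy as np
from scipy.integrate import cumulative_trapezoid

r = np.linspace(0.0, 1.0, 401)

def shear(j):
    integral = cumulative_trapezoid(r * j, r, initial=0.0)   # int_0^r x j(x) dx
    with np.errstate(divide="ignore", invalid="ignore"):
        j_avg = 2.0 * integral / r**2                        # <j>(r)
    j_avg[0] = j[0]                                          # limit <j>(0) = j(0)
    return 2.0 * (1.0 - j / j_avg)

j_peaked = (1.0 - r**2) ** 2
j_bump = j_peaked + 0.3 * np.exp(-((r - 0.6) / 0.1) ** 2)    # off-axis bootstrap-like bump

i6 = np.argmin(np.abs(r - 0.6))
print((shear(j_peaked)[r >= 0.05] > 0).all())     # True: peaked profile -> positive shear
print(shear(j_bump)[i6] < shear(j_peaked)[i6])    # True: off-axis bump lowers s locally
```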
An off-axis maximum, supplied by the bootstrap current, can cause a locally lower shear. Hence, as one raises ω̂_n one simultaneously increases α and decreases s. To estimate the magnitude of the effect of the bootstrap current on the shear, we note that the bootstrap current is proportional to the density and temperature gradients, and thus to the pressure gradient j_b ≈ j_b,0α(r). This is an approximation since the different transport coefficients relating the bootstrap current to the various gradients are not identical <cit.>, but we ignore this minor complication. We furthermore write the total current density as j = j_b + j_e, where j_e is the equilibrium current, and assume j_b ≪ j_e = j_e,0(r). To first order in the smallness of the bootstrap current, (<ref>) then gives s ≈ 2 - r^2 (r)/∫_0^ρ x(x) dx( 1 + j_b,0/j_e,0[α(r)/(r) - ∫_0^r x α(x) d x/∫_0^r x (x) d x] ) . Finally, following <cit.> we estimate the ratio j_b,0/j_e,0 as j_b,0/j_e,0≈ 0.3 ⟨β_p ⟩√(ϵ), where β_p is the local ratio of the thermal pressure over the poloidal magnetic field pressure, and the angular brackets denote a volume average. We shall take j_b,0/j_e,0 to be on the order of 10 %, implying that the shear may change as d s /dα∼ s/10. Finally, one can relate the pressure gradient to ω̂_n as α = ϵβ_p ( 1 + η + η_i )ω̂_n , where η_i=∂_r ln T_i / ∂_r ln n, with T_i(r) being the ion temperature. We assume that the factor ϵβ_p (1 + η + η_i) ∼ 0.1, so that dω̂_n / dα∼ 10. We illustrate the competing effects of the density gradient, pressure gradient, and shear in Fig. <ref>. In subfigures (a) and (b) we see various iso-contours of the Æ in (ω̂_n,s,α)-space, where (a) has positive triangularity and (b) has negative triangularity. It is especially interesting to note that in subfigure (a) there are paths in parameter space in which ω̂_n increases but the Æ decreases. These paths generally require that, as the density gradient increases, the pressure gradient should also increase and the shear should decrease. As we have argued, these trends are indeed found in tokamak discharges. One such path is indicated in subplot (a) as a blue line. Importantly, the blue line has s = 4 (1 - α/8), ω̂_n = 10 ·α which is the right order of magnitude for both ds/dα and dω̂_n/dα. Subfigure (b) exhibits drastically different features. Planes of constant Æ tend to lie parallel to planes of constant ω̂_n, indicating that not much stabilisation is possible by changing the shear or the pressure gradient: the Æ rises when ω̂_n is increased. In subfigure (b), we again plot a line along the direction of increasing α and decreasing magnetic shear in red. Finally, note that for s_δ we have used the estimate from <cit.>, s_δ≈δ/√(1 - δ^2). In subfigure (c) we display the Æ along the blue and red lines given in subfigures (a) and (b) as a function of the density gradient. Note that the positive-triangularity case exhibits a distinct maximum, with low Æ both to the left and right of the peak. One could interpret the existence of the latter as two distinct low-transport regimes; one with low gradients, and one with high gradients (which also has decreased magnetic shear and increased α). It is furthermore interesting to note that the negative-triangularity tokamak rises to far higher values in terms of Æ and does not seem to drop back down to low levels along the chosen domain. Hence one could perhaps conclude that reaching a low-transport state with high gradients is not feasible in a negative-triangularity discharge. 
This is in line with findings of <cit.> and <cit.>, where the H-mode was found to be inaccessible in negative-triangularity tokamaks on basis of the ballooning instability, though the physical reason is of course different. This rise in Æ in negative triangularity is perhaps unsurprising given that we have found that negative triangularity is stabilising in cases with significant positive shear, a weak pressure gradient, and a slight density gradient, exemplified in Figs. <ref> and <ref>. Since, along the chosen path shear decreases and α increases with increasing density gradient, which is opposite to what is stabilising for negative-triangularity tokamaks, we see a sharp increase in Æ. It may be feasible, however, to have a significant reduction in transport by tailoring the q-profile in such a way that negative triangularity becomes favorable, which likely implies significant positive shear. With such a reduction in Æ, one could perhaps enjoy much improved transport whilst staying in an L-mode like regime. The parameters described in <cit.> do seem to meet such requirements, especially near the edge where the reduction in transport seems greatest as compared to the positive triangularity case. A more comprehensive investigation, which shall be undertaken in a future publication, would self-consistently calculate the bootstrap current which would give precise paths in (ω̂_n,α,s)-space. However, given the nature of the iso-contours in this three-dimensional space, we expect the observed trends to be robust, as long as the path has the correct general dependencies (i.e. decreasing shear and increasing α with increasing density gradient). §.§ Comparison with tglf Finally, after investigating the dependence of the Æ on various parameters we compare it with turbulent energy-flux calculations in tokamak geometry. At the moment, nonlinear gyrokinetic simulations are computationally too expensive for detailed two-dimensional scans like those shown in Fig. <ref> and <ref>, therefore we instead employ the quasi-linear tglf (trapped gyro-Landau fluid) code <cit.>. A few key differences between the two models are highlighted before any comparison is made. tglf computes the linear eigenmodes of a variety of instabilities, ion and electron temperature gradient (ITG, ETG) modes, electromagnetic kinetic ballooning (KB) modes, as well as trapped-ion and trapped-electron modes (TIM, TEM), and then applies a quasilinear saturation rule to accurately fit the fluxes from nonlinear gyrokinetic simulations. For quasi-neutrality purposes tglf requires the inclusion of at least one ion species. These are fundamental differences to the formulation of the Æ described in this work, which only accounts for the Æ of trapped electrons. Therefore, when setting up tglf, care was taken to ensure the modelled turbulent energy-fluxes were as much as possible due to instabilities dominated by trapped electrons, using settings analogous to those used in recent gyrokinetic simulations in a similar regime <cit.>. Given the lack of collisions in this regime, the expected dominant instabilities should be of the collisionless trapped-electron mode (CTEM) variety. Nevertheless, some other instabilities can also arise due to interactions with the ion population, such as the ubiquitous mode <cit.>. Thus, to ensure that the dominant instabilities in the tglf simulations were as relevant as possible for our comparison, only contributions from modes propagating in the electron diamagnetic direction were included, as e.g. 
the ubiquitious mode is characterized by a change in mode propagation direction. Furthermore, we find that, for the scenarios considered in this work, adding an equally large electron temperature gradient to the density gradient, i.e. taking η=1, significantly decreased the amount of non-TEM modes dominant in tglf simulations, and as such we set η to unity for the comparison. The ETG mode can play a role (at large wavenumbers) under such conditions, which the current Æ also does not model, but this effect is more benign. The recent SAT2 <cit.> quasilinear saturation rule for tglf was used, as it includes the proper impact of plasma shaping on the quasilinear saturation <cit.>. Although tglf also uses a Miller parameterization of the local equilibrium, we note that it does not use the same normalization as <cit.> followed in this work, and care has been taken in converting between the two. There is an additional, more fundamental, difficulty one needs to account for in such a comparison. When supposing that the length-scale over which energy is available is proportional to the gyroradius in Eq. (<ref>), we have not specified which gyroradius is meant. The gyroradius scales inversely proportional to some reference magnetic field B_ref, i.e. ρ_g∝1/B_ref. Hence the choice of the reference magnetic field can impact the total Æ, and it is unclear a priori what an appropriate choice is. Let us denote a generically chosen reference field by B_ref=C_B B_0, where C_B is a dimensionless number which relates to the choice of B_ref. Formally, one could absorb any choice of C_B into C_r of Eq. (<ref>). tglf offers two options, C_B=γ (as in Eq. (<ref>)) and C_B=1.[These are called and units in the tglf code, respectively.] Encouragingly though, for realistic equilibria C_B ≈constant for many choices of the reference field (e.g. γ≈ 1 for equilibria which are not strongly shaped). However, when venturing into the strongly shaped realm (e.g. |δ|>0.3 and κ>3/2) the choice of B_ref can have a significant impact in both tglf and Æ. Running tglf with C_B = γ was found to give good agreement in the normalization, and as such all tglf simulations presented hereafter were run with this choice. For the comparison we use the gyro-Bohm normalized heat-fluxes computed by tglf, Q_TGLF = Q_e/Q_GB, where Q_e is the electron heat-flux from tglf, and Q_GB is the gyro-Bohm heat-flux. This is compared with the estimate of the gyro-Bohm normalized heat-flux from Æ <cit.>, which is Q_Æ≡A^ 3/2. With such a power law, a linear correlation between Q_Æ and Q_e from nonlinear gyrokinetic simulations was found for pure density-gradient driven TEMs, which is different from the current comparison in which both the electron temperature and density gradient drive the TEM (η=1). The data-points in the comparison are chosen in order to verify that tglf reproduces trends discussed in previous sections, where we also found regions where the correspondence is worse. A comparison in the (κ,δ) and (s,α) planes is displayed in Fig. <ref>. One can see that there is good correspondence in trends: decreasing the magnetic shear and/or increasing the pressure gradient helps in reducing the heat flux, and negative triangularity leads to an increase in transport for high gradients and low shear. There are however also clear differences visible between the two models for the heat flux, which are especially evident in the (s,α)-plot. A clear discrepancy can be seen at high values of shear, where the tglf heat flux drops sharply. 
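To make the nature of this comparison concrete, the correspondence between the two estimates can be quantified by a log–log fit of the heat flux against A^3/2; the sketch below uses synthetic placeholder arrays (our own, not simulation output) purely to illustrate the procedure.

```python
# Sketch: quantifying the correspondence Q ~ A^(3/2) between the two estimates.
import numpy as np


def log_log_fit(A, Q):
    """Pearson correlation and best-fit slope of log Q against log A^(3/2)."""
    x, y = np.log(A**1.5), np.log(Q)
    slope = np.polyfit(x, y, 1)[0]
    corr = np.corrcoef(x, y)[0, 1]
    return corr, slope


# Placeholder data mimicking a linear Q ~ A^(3/2) relation with scatter.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 2.0, size=200)
Q = A**1.5 * np.exp(0.3 * rng.standard_normal(200))
print(log_log_fit(A, Q))  # correlation and slope both close to one
```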
To further investigate this relation between the two estimates, all simulation data shown in Fig. <ref> has been combined in a scatter plot shown in Fig. <ref>. Here we see that there is a linear correlation for most of the data, though there exists data-points that deviate more significantly from the linear relationship. There are various reasons why such a discrepancy may occur: * There may be other instabilities present which are not captured by the Æ of trapped electrons, such as the ubiquitous mode or the universal instability <cit.>. More generally, if there are instabilities present which do not derive their energy from trapped electrons, the current Æ-model is no longer expected to be an accurate measure. * The Æ length-scale C_r may vary more significantly for certain choices of equilibrium parameters, and choosing C_r=1 is then a poor approximation. * Perhaps the choice of B_ref is not ideal: there may be more suitable choices of a reference field resulting in better correspondence (though this could formally be absorbed into C_r). * The Æ can be interpreted as an upper bound on the free energy of a plasma. A significant portion of the free energy may reside in the zonal flow, which acts stabilising, and the Æ does not account for this. If the fraction of the Æ which resides in the zonal flow changes significantly, one could reasonably expect that Æ ceases to be an accurate measure. * The tglf's quasilinear saturated fluxes in both the (κ,δ) and (s,α) planes show occasional extreme outliers for small changes in input. Although tglf has been extensively verified against a wide variety of nonlinear gyrokinetic simulations, the regime explored in this work is not the typical input space and might require separate verification. A future investigation could, perhaps, find a fitting function for C_r such that the error in the estimated transport is minimal. However, seeing that general trends are well captured by Æ, it may already serve as a useful estimate for transport at low computational cost (Æ calculations are roughly a factor 50 faster than the presented tglf calculations). § CONCLUSIONS We have shown that it is possible to simplify the analytical expression for the Æ of trapped electrons in the case of an omnigenous system, which speeds up calculations. If one furthermore employs an analytical local solution to the Grad-Shafranov equation, explicit expression of various quantities needed in the calculation of the Æ (e.g., bounce-averaged drifts, bounce times) can be found as in <cit.>. Making use of an equilibrium parameterisation proposed by <cit.>, we go on to investigate how Æ depends on these equilibrium parameters. Using this set-up, we observe several interesting features of the Æ: * Increasing the magnetic shear or the Shafranov shift tends to be stabilising as indicated by a reduction in the Æ, and these trends hold for many different choices of geometry. * Negative triangularity can be stabilising, particularly in configurations with significant positive shear or small gradients, but not always. * The Æ has different scalings with respect to the gradient strength in weakly and strongly driven regimes. We employ this difference in scaling to estimate a gradient-threshold like quantity, and we find that it has similar behaviours as found in critical-gradient literature; an increase in shear tends to increase this gradient-threshold and negative triangularity benefits from an especially high gradient-threshold. 
* Using Æ for shape-optimisation we show that the optimal solution is strongly dependent on density gradients, temperature gradients, and magnetic shear, implying that the optimisation is sensitive to the density, pressure, and q-profiles. * One investigation is presented on how the Æ varies as one consistently increases the density and pressure gradient, whilst decreasing the shear. We find that in such scenarios one can find solutions with large gradients yet low Æ. Such solutions tend to exist for positive triangularity tokamaks but not for negative triangularity tokamaks. * A comparison is made between Æ and tglf. We observe fairly good correlation between the heat-flux and A^3/2, indicating that Æ can be a useful measure for tokamak transport. The results suggest that various observed trends regarding turbulent transport in tokamaks may partly be understood in terms of Æ, which has a simple physical interpretation and is cheap to compute. The analytical framework can readily be extended to account for an equilibrium model which allows for other shaping and plasma parameters such as plasma rotation <cit.>, squareness <cit.>, and up-down asymmetry <cit.>, though no such investigation is presented here. § ACKNOWLEDGMENTS We wish to thank J.M. Duff, R. Wolf, A. Goodman, P. Mulholland, P. Costello, M.J. Pueschel, F. Jenko, and E. Rodriguez for insightful discussions. This work was partly supported by a grant from the Simons Foundation (560651, PH), and this publication is part of the project “Shaping turbulence—building a framework for turbulence optimisation of fusion reactors,” with Project No. of the research program “NWO Open Competition Domain Science” which is financed by the Dutch Research Council (NWO). This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Program (Grant Agreement No. 101052200—EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. § COMPETING INTERESTS The authors declare no competing interests. § COMPARISON BETWEEN CIRCULAR TOKAMAK AND MILLER CODE Here we show that the two codes which calculate the Æ in both the circular s-α tokamak, for which the equation is given in (<ref>), and a Miller tokamak, as given in Eq. (<ref>), indeed yield the same results in the limit of a large aspect ratio circular tokamak. For a proper comparison, we set the Miller parameters such that one approaches the s-α limit. As such, we choose ϵ=10^-6, q=2, and all other Miller components of the Miller vector as given in (<ref>) are set to zero. There is one numerical parameter of interest in the Miller code, the number of θ points which are used to evaluate the bounce integrals of Eq. (<ref>) using a generalized trapezoidal method <cit.>. In the comparison presented here we use 10^3 equidistant nodes for θ. The integral over the pitch angle is done using quadrature methods. The comparison is shown in Fig. <ref>. In this figure, three different contour plots are shown; (a) is the available energy as calculated from Eq. (<ref>). Plot (b) shows the result as calculated from Eq. (<ref>). Finally, plot (c) shows the relative error between the two codes (more precisely, it is the difference between plot (a) and (b), divided by plot (a)). 
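The relative-error map of plot (c) can be assembled along the lines of the following sketch, in which ae_s_alpha may be the closed-form routine sketched earlier and ae_miller stands for a general Miller-geometry evaluator of Eq. (<ref>); both names are hypothetical and do not refer to the published code.

```python
# Sketch: relative error between the two evaluators on an (s, alpha) grid, as in plot (c).
import numpy as np


def relative_error_map(ae_s_alpha, ae_miller, s_vals, alpha_vals, **params):
    err = np.empty((len(s_vals), len(alpha_vals)))
    for i, s in enumerate(s_vals):
        for j, alpha in enumerate(alpha_vals):
            ref = ae_s_alpha(s=s, alpha=alpha, **params)
            err[i, j] = (ref - ae_miller(s=s, alpha=alpha, **params)) / ref
    return err
```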
It can be seen that the error is typically quite small, with a maximal value of 1% and a mean value of 0.004%. If one chooses different parameters (safety factor, density gradient, or η), the error remains similarly small. All plots presented in the current publication are generated using the same or even more refined numerical parameters as used here, so that we have a fairly high degree of confidence that the presented trends are indeed physical and not numerical. Further convergence checks (increasing the resolution of θ and adjusting tolerances of the quadrature methods) do not alter the plots presented in this publication in a visually discernible manner. § NEGATIVE TRIANGULARITY AND TRAPPED PARTICLE PRECESSION In this section, we investigate the difference in trapped particle orbits in positive and negative triangularity tokamaks. To this end, we investigate the dependence of Eq. (<ref>) on δ, and we set the other components of the Miller vector equal to [ϵ,κ,δ, s_κ, s_δ, ∂_r R_0, q, s, α] = [1/3,2,δ,0,0,0,2,0,0]. The result for a positive triangularity tokamak (δ = 0.5) is plotted in Fig. <ref>, where we have plotted ω_λ as a function of its bounce points θ, which satisfy 1 - λB̂(θ) = 0. We have furthermore displayed the Æ per λ, called A_λ, which is the integrand of Eq. (<ref>). This is done by coloring a line of constant λ (which corresponds to constant B) according to its A_λ. Finally, we also display ω_λ as a function of the trapping parameter k^2, which maps λ onto [0,1] according to k^2 = (B̂_max- λB̂_maxB̂_min)/(B̂_max- B̂_min), where the subscripts max and min refer to the maximal and minimal values of the functions respectively. With this convention, k^2 = 0 corresponds to the most deeply trapped particles and k^2=1 to the most shallowly trapped particles. We have furthermore included a red dashed line, which delineates where ω_λ changes sign, and which determines stability in a purely density-gradient-driven TEM. In the figure, ω_λ>0 corresponds to instability (and associated Æ). It can be seen that this positive triangularity tokamak is unstable up to roughly k^2=1/2, and that the magnetic well is relatively narrow. The same information is displayed for a tokamak which has δ=-0.5 in Fig. <ref>. It can be seen that the precession frequencies are unstable for a broader range of values of k^2. The Æ is furthermore weighted by the bounce-time of a particle, which can become very large at the bottom of a magnetic well in a negative triangularity tokamak. As such, the negative triangularity case (with the Miller vector as chosen here) has higher Æ than the positive triangularity case. From numerical experiments, we find that an important term in determining the sign of ω_λ in these cases is R̂_c in Eq. (<ref>). As such, we postulate that, although the bounce points are in regions of “good curvature”, the curvature drive for the trapped particle precession is significantly different in positive and negative triangularity tokamaks. The particles which experience curvature drive in negative triangularity tokamaks are importantly the deeply trapped particles, which tend to be most unstable against the TEM with a density gradient. This is in contrast to positive triangularity tokamaks, where the most shallowly trapped particles experience significant curvature drive. These shallowly trapped particles, however, are stabilised by the fact that they experience an averaged drift, and as such the curvature drive here is less deleterious. A graphical illustration of this is given in Fig. <ref>. 
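The mapping between λ, k^2 and the bounce points used above is easily reproduced numerically; the sketch below does so for an illustrative single-well model field of our own choosing (the figures themselves use the Miller field of Eq. (<ref>)).

```python
# Sketch: lambda -> k^2 map and bounce points 1 - lambda*B(theta) = 0 for a model single-well field.
import numpy as np
from scipy.optimize import brentq


def B_hat(theta, eps=1.0 / 3.0):
    return 1.0 / (1.0 + eps * np.cos(theta))  # illustrative well, minimum at theta = 0


theta = np.linspace(-np.pi, np.pi, 2001)
B_min, B_max = B_hat(theta).min(), B_hat(theta).max()


def k_squared(lam):
    return (B_max - lam * B_max * B_min) / (B_max - B_min)


def bounce_points(lam):
    f = lambda th: 1.0 - lam * B_hat(th)
    return brentq(f, -np.pi, 0.0), brentq(f, 0.0, np.pi)


print(k_squared(1.0 / B_min), k_squared(1.0 / B_max))  # 0 (deeply trapped), 1 (shallowly trapped)
print(bounce_points(0.5 * (1.0 / B_min + 1.0 / B_max)))  # symmetric bounce points about the well bottom
```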
As in previous comparisons, this result can change non-trivially when changing the Miller vector. Note that such an explanation also highlights why these trends tend to invert for tokamaks which have κ<1. When the elongation is less than unity, the sharpest angle (corresponding to large R_c^-1) is on the outboard side in a positive triangularity tokamak, and on the inboard side for a negative triangularity tokamak. As such, when κ<1 and δ>0, the most deeply trapped particles experience much curvature drive, destabilising the TEM. This is in contrast to κ<1 and δ<0, for which the shallowly trapped particles experience much of the curvature drive and thus benefit from the averaging effect of the drift.
http://arxiv.org/abs/2307.01439v1
20230704021149
Local and global regularity for the Stokes and Navier-Stokes equations with the localized boundary data in the half-space
[ "Kyungkeun Kang", "Chanhong Min" ]
math.AP
[ "math.AP" ]
Dedicated to Professor Vladimír Šverák on the occasion of his 65th birthday. We study the Stokes system with the localized boundary data in the half-space. We are concerned with the local regularity of its solution near the boundary away from the support of the given boundary data, which are product forms of each spatial variable and the temporal variable. We first show that if the boundary data are smooth in time, the corresponding solutions are also smooth in space and time near the boundary, even if the boundary data are only spatially integrable. Secondly, if the normal component of the boundary data is absent, we are able to construct a solution such that the second normal derivatives of its tangential components become singular near the boundary. A perturbation argument enables us to construct solutions of the Navier-Stokes equations with similar singular behaviors near the boundary in the half-space as in the case of the Stokes system. Lastly, we provide specific types of localized boundary data to obtain pointwise bounds of the solutions up to second derivatives. It turns out that such solutions are globally strong; the second normal derivatives are, however, unbounded near the boundary. These results can be compared to the previous works in which only the normal component is present. In fact, temporally non-smooth tangential boundary data can also cause spatially singular behaviors near the boundary, although such behaviors are milder than those caused by normal boundary data of the same type. § INTRODUCTION We consider the non-stationary Stokes system in the half-space: ∂_t u-Δ u+∇ p = 0, div u = 0 in ℝ_+^n × (0,∞), with zero initial data and non-zero boundary data: u|_t=0=0, u|_x_n=0=g. Here the boundary data g : ℝ^n-1×ℝ_+→ℝ^n are of separable form in the spatial and temporal variables, namely g(y',s)=(g_1(y',s), g_2(y',s), ⋯, g_n(y',s))=(a_1g_1^𝒮(y')g_1^𝒯(s), a_2g_2^𝒮(y')g_2^𝒯(s), ⋯, a_ng_n^𝒮(y')g_n^𝒯(s)), where a_i∈ℝ, g_i^𝒮≥0 and g_i^𝒯≥ 0 for i=1, 2, ⋯, n. Our main concern is the local regularity of the solution near the boundary. Assuming that the boundary conditions are localized, we come up with the following simple situation away from the support of the boundary data: ∂_t u-Δ u+∇ p=0, div u=0 in B_1^+ × (0, ∞), together with the no-slip boundary condition given only on the flat part of the boundary, namely u=0 on Λ'× (0, ∞). Here we denote B^+_1:={x∈ℝ^n: |x|<1, x_n>0} (via translation, the center of the half ball can be assumed to be the origin) and Λ':=B_1∩{x_n=0}. We emphasize that no condition is imposed on the rounded boundary, denoted by Λ^+:=∂ B_1∩{x_n≥ 0}. One can imagine a similar situation for comparison with the heat equation, i.e. 
u_t-Δ u=0 for which the classical boundary regularity theory implies that D^l_xD^m_t u_L^∞(Q^+_r)≤ C_l, m, r, nu_L^2(Q^+_1), l,  m≥ 0, where Q_r^+=B^+_r×(t-r^2, t)⊂ B_1^+ × (0, ∞) and r<1. Due to the non-local effect of the Stokes system (<ref>)-(<ref>), such estimate (<ref>) is, however, not clear and, as a matter of fact, it turned out that the estimate (<ref>), in general, isn't true for the Stokes system (see <cit.>, <cit.>, <cit.> and <cit.>). Indeed, in <cit.>, the first author constructed in three dimensions the examples of solutions such that their gradients are singular near the boundary, although the solutions themselves are bounded. More specifically, there exist solutions of the Stokes system (<ref>)-(<ref>) such that they are bounded and their derivatives are square integrable, but their normal derivatives are unbounded, i.e., u_L^∞_x,t(Q_1^+)+∇ u_L^2_x,t(Q_1^+)<∞, ∂_x_3 u_L^∞_x,t(Q_1/2^+)=∞. Such results recently became refined in the sense that the boundary data causing a wider range of singularity are analyzed and the corresponding blow-up rates are calculated near the boundary in general dimensions, and furthermore, a construction of a solution to the Navier-Stokes equations is also made to show similar singular behaviors (see <cit.> for the details). We remark that such constructed solution is an analog near the boundary to the well-known Serrin's example, namely u(x,t)=∇ h(x,t), where h is harmonic in spatial variables, in the interior. We remark, however, that no non-trivial solution of the form u(x,t)=∇ h(x,t) near the boundary exists because of the homogeneous boundary condition (<ref>). It is worth referring to the a priori estimate for (<ref>)-(<ref>) proved in <cit.>, which is given as follows: For given p, q∈ (1,2] and any r with p≤ r<∞, it holds that ∂_t u_L^q_t L^r_x(Q_1/2^+)+∇^2 u_L^q_t L^r_x(Q_1/2^+)+∇π_L^q_t L^r_x(Q_1/2^+)≤ C_p,q,r u_L^q_t W^1,p_x(Q_1^+)+π_L^q_t L^p_x(Q_1^+). Therefore, it follows from the parabolic embedding that u_𝒞^α/2_t 𝒞^α_x(Q_1/2^+) < ∞, 0< α <2(1-1/q). It was shown very recently in <cit.> that the estimates (<ref>) and (<ref>) are optimal. In fact, it turned out that the integrability in time is crucial to control ∇ u (see <cit.>). Indeed, it was also proved that if q>2, then ∇ u is bounded. More precisely, for given p ∈ (1, ∞) and q>2, the estimate (<ref>) is valid and furthermore, it follows that ∇ u∈^α/2_t ^α_x(Q_1/4^+), 0< α <1-2/q. As in the case of the Stokes system (<ref>)-(<ref>), we also consider the following Navier-Stokes equations: ∂_t u-Δ u+(u·∇)u+∇ p=0, div u=0 B_1^+ × (0, ∞), with the no-slip boundary condition (<ref>). Perturbation argument of the Stokes system enables a construction of solutions of (<ref>) and (<ref>) with the singular gradients near the boundary as in the case of the Stokes system mentioned above (see <cit.> and <cit.>). As a problem related to (<ref>) and (<ref>), we also consider the Caccioppoli type inequality near the boundary for both the Stokes system and Navier-Stokes equations. Here we mean the Caccioppoli type inequality for the Stokes system (<ref>)-(<ref>) by ∇ u^2_L^2(Q_1/2^+)≤ Cu^2_L^2(Q_1^+) and similarly for the Navier-Stokes equations (<ref>) and (<ref>) by ∇ u^2_L^2(Q_1/2^+)≤ Cu^2_L^2(Q_1^+)+u^3_L^3(Q_1^+), where C in (<ref>) and (<ref>) are independent of respective u. The main concern for the above inequalities (<ref>) and (<ref>) is whether or not the pressure appears in the right hand sides. 
One can ask the same question for the interior case, and it is known that the Caccioppoli type inequalities are valid in the interior for both the Stokes system and the Navier-Stokes equations (see <cit.> and <cit.>). The answer regarding the validity of the Caccioppoli type inequalities is, however, negative near the boundary. Indeed, it was proved in <cit.> that the Caccioppoli inequality (<ref>) for the Stokes system and the Caccioppoli type inequality (<ref>) for the Navier-Stokes equations, in general, fail near the boundary. On the other hand, instead of non-zero boundary data, we remark that a non-zero external force may also cause similar singular behaviors of the gradients of the solutions near the boundary for the Stokes system and the Navier-Stokes equations (see <cit.>). The constructed solution is an analog in the energy class to that of a shear flow type developed in <cit.>. We are not going to pursue this direction, since we deal with only the non-zero boundary data in this paper. The main tool for our construction of such singular solutions is the explicit representation formula for the solution of the Stokes system in the half-space (see <cit.> and <cit.>). More precisely, using the Golovkin tensor K_ij(x,t)=-2δ_ij∂_nΓ(x,t)-4∂_j∫_0^x_n∫_Σ∂_nΓ(z,t) ∂_iE(x-z) dz' dz_n-2δ_nj∂_iE(x)δ(t), we recall that the solution u of (<ref>)-(<ref>) is given by u_i(x,t)=∑_j=1^n∫_-∞^∞∫_Σ K_ij(x-ξ',t-s)g_j(ξ',s) dξ' ds. The singular solutions that have been formulated so far are based on the boundary data (<ref>) with non-zero normal component but zero tangential components. One can expect singular behaviors of the solutions with only the tangential components of the localized boundary data, but this is not obvious. The motivation of our study in this paper is to analyze behaviors of the solutions near the boundary when the boundary data are composed of the tangential components with zero normal component. In addition, we would like to make distinct comparisons between singular behaviors caused by the tangential components and those caused by the normal component of the boundary data. We also remark that all the examples of the boundary data constructed in the previous works were sufficiently regular in the spatial variables and some lack of regularity was assigned to the temporal variable; in this paper, we consider the opposite case as well. We now state our main theorems. Our first result states that if the temporal boundary data are smooth, then the solution of (<ref>)-(<ref>), together with the pressure, is locally smooth away from the support of the boundary data. Therefore, we can say that a local smoothing effect is available if the boundary data are smooth in the temporal variable. For convenience, we first define the set A:={y'∈ℝ^n-1| 3<|y'|<4√(n), -4<y_i<-3}. We are now ready to state the first main result. 
Theorem <ref> shows that the regularity of the temporal boundary data is crucial for the regularity of the solutions away from the support of boundary data. Our second theorem discusses the local regularity of the solutions where now the spatial boundary data are smooth with the same assumptions on the support as described in the previous theorem. Let j=1,  2, ⋯,  n-1, 1<p<∞ and A be the set defined in (<ref>). Suppose that the boundary data g with a_i∈ℝ, (g_j^𝒮)⊂ A and (g_j^𝒯)⊂(3/4,  1) in (<ref>) satisfy g_j^𝒮∈ W^2,1(A), g_j^𝒯∈ L^∞(ℝ) ∖Ḃ_pp^1/2-1/2p(ℝ). Then the solution of the Stokes system (<ref>) - (<ref>) defined by (<ref>) with the boundary data g satisfy u_L^∞(Q_2^+)+∇ u_L^∞(Q_2^+) + ∑_j=1^n-1∑_i=1^n∂_i ∂_j u_L^∞(Q_2^+)+∂_n^2 u_n_L^∞(Q_2^+)<∞, ∂_n^2 u_i_L^p(Q_1/2^+)=∞, i=1, 2, ⋯, n-1. Similar construction can be made for the solution of the Navier-Stokes equations (<ref>) with sufficiently small a_i and the no-slip boundary condition (<ref>) satisfying (<ref>) with Q_1^+ in place of Q_2^+ and (<ref>). The result of Theorem <ref> is similar to that of <cit.>, where the tangential components of the boundary data are absent. One comparison is that the normal derivatives are singular in <cit.> but it is not the case in Theorem <ref>, where the second normal derivatives are singular. This shows that the tangential components of the boundary data give rise to a milder singularity compared to that of the normal component of the boundary data of the same type. Our third theorem discusses the global pointwise estimates of the solutions and their L^p estimates. Also the pointwise estimate of the corresponding pressure is addressed. Let n≥ 2 and 1≤ k ≤ n-1. Suppose that u is the solution of the Stokes equation (<ref>)-(<ref>) defined by (<ref>) with the boundary data g given in (<ref>) with a_j=δ_kj and g_k^𝒮∈ C_c^∞(B_1^'), g_k^𝒯∈ C_c(ℝ) ∩ C^1(0,1), where (g_k^𝒯)⊂[1/4, 1],  g_k^𝒯(s)=(1-s)^a  (s∈[1/2,  1],  0<a≤1/2). Then u satisfies the following bounds: |u_i(x,t)| ≤C/< x>^ng_k^𝒮_W^1,∞, 1≤ i ≤ n, |∂_j u_i(x,t)| ≤ C(δ_j<n1/< x>^n+1+[δ_i<n1/<x>^n+δ_in1/<x>^n+1+(1/<x>^n+2+1_|x|<2)]δ_jn)g_k^𝒮_W^2,∞, 1≤ i,  j≤ n, |∂_l∂_j u_i(x,t)| ≤ C[ δ_l<nδ_j<n1/<x>^n+2+(δ_lnδ_j<n+δ_l<nδ_jn)(1/<x>^n+1+1_|x|<2). +δ_jnδ_ln(1_|x|>2(1/<x>^n+2+/<x>^n(x_n+1)^2a)+1_|x|<2log(1+t/x_n)/(x_n^2+|t-1|)^1-a) .+δ_n2σ1_|x|<2log(2+1/x_2^2+|t-1|)]g_k^𝒮_W^3,∞, 1≤ i,  j, l≤ n, for x∈ℝ_+^n and t∈ [0,2], where σ=δ_i<nδ_jnδ_ln, and :=(x_n^2+|t-1|)^a-1/2+δ_a=1/2log(2+1/x_n^2+|t-1|). and C is independent x, t, and g_k. We have that * u∈ L_x,t^p,q(ℝ_+^n×(0,2)) for p∈(1,∞] and q ∈ [1,∞], * ∇ u∈ L_x,t^p,p(ℝ_+^n×(0,2)) for p∈(1,3/1-2a) if a<1/2 and p∈(1,∞) if a=1/2, * ∇^2 u∈ L_x,t^p,p(ℝ_+^n×(0,2)) for p ∈[1, 3/2(1-a)) if a<1/2 and p∈ [1,3) if a=1/2. In particular, u belongs to the energy class L_t^∞ L_x^2 ∩ L_t^2 Ḣ_x^1(ℝ_+^n× (0,2)). Moreover, for n≥ 2, we have the following pointwise estimate for the pressure p: for any multiindex α∈ℤ_0^n and ϵ>0, |∇^α p(x,t)| ≤ C1_1/4≤ t≤ 11/<x>^n+|α|+C1_t≤1/21/<x>^n-1+|α|+C1_|x|<2(1_1/2≤ t<11/(1-t)^1+ϵ/2-a+1_t> 1^*/(t-1)^ϵ/2) +C1_|x|>2(1_1/2≤ t<1+1_t>1)^*/|x|^n-1+|α|, where ^*:=|t-1|^a-1/21_a<1/2+log(2+1/|t-1|)1_a=1/2 and C=C(α, ϵ, g_k) is a positive constant. * The pointwise estimates in Theorem <ref> can be compared to <cit.>, where the pointwise estimates caused by the normal component of the boundary data of the same type were obtained. 
* The above pointwise estimate for the pressure p shows that * p∈ L_x,t^q,r(ℝ_+^n×(0,2)) for any q> n/n-1 and 1≤ r <2/1-2a, * ∇ p∈ L_x,t^q,r(ℝ_+^n×(0,2)) for any q>1 and 1≤ r<2/1-2a, * ∇^k p∈ L_x,t^q,r(ℝ_+^n×(0,2)) for any k≥ 2, q≥ 1 and 1≤ r<2/1-2a. We now discuss the lower bounds of the second normal derivatives of the tangential components of the solutions. Let n≥ 3 and 1≤ k≤ n-1. Suppose that u is the solution of the Stokes equation (<ref>)-(<ref>) defined by (<ref>) with the boundary data g given in (<ref>) with a_j=δ_kj, and g_k^𝒮∈ C_c^3(B_1^'), g_k^𝒯∈ C_c(ℝ)∩ C^1(0,1), where (g_k^𝒯)⊂[1/4, 1],  g_k^𝒯(s)=(1-s)^a  (s∈[1/2,  1],  0<a≤1/2). Furthermore, choose g_k^𝒮 to be the following product form g_k^𝒮(y')=∏_j=1^n-1𝒢(y_j), where 𝒢:ℝ→ℝ is smooth, even, supported in (-4/5√(n-1), 4/5√(n-1))=:(-r,r), 𝒢(x)=1 for |x|<1/2√(n-1)=:p and 𝒢'(x)≤ 0 for x>0 (see ). Then the followings hold: 1) If i<n,  i≠ k, then for any α, β>0, and |x'|≥ 3 with x_i≥α, x_k≥β, x_n≤ 1 (see ), we have |∂_x_n^2 u_i(x,1)|≥C_1 αβ/|x'|^n+2log2/x_n-C_2/|x'|^n-2 a=1/2, C_1αβ/|x'|^n+2x_n^1-2a-C_2/|x'|^n-2 0<a<1/2, where C_1 and C_2 are positive constants independent of x, α, β. 2) If i=k, and we further assume that 𝒢”(x)=0 at x=13/20√(n-1)=:q, 𝒢”(2q-x)=-𝒢”(x) for p≤ x≤ r and 𝒢”(x)=0 on (p,p+1/20√(n-1))∪(q-1/20√(n-1),q) (see ). Then for |x'|≥ 3 and either x_k<p, or x_k>r and (∑_i≠ k,nx_i^2)^1/2<3/2 (see ), we have |∂_x_n^2 u_i(x,1)|≥C_1/|x'|^n+2log2/x_n-C_2/|x'|^n-2 a=1/2, C_1/|x'|^n+2 x_n^1-2a-C_2/|x'|^n-2 0<a<1/2, where C_1 and C_2 are positive constants independent of x. The blow-up estimates in Theorem <ref> can be also compared to <cit.>. Different comparison is also due to opposing contributions via the tangential components and the normal components of the boundary data, respectively. We organize this paper as follows: In Section <ref>, we remind some known formulas and results and introduce useful lemmas to use later. Section <ref>, Section <ref> and Section <ref> are devoted to presenting the proofs of Theorem <ref>, Theorem <ref> and Theorem <ref>, respectively. The proof of Theorem <ref> is provided separately in Section <ref> for the Stokes system and in Section <ref> for the Navier-Stokes equations. In Appendix, details are given for the proofs of some lemmas introduced in Section <ref> and some figures are drawn for the localized boundary data and the regions of singularity for the second derivatives of the solution presented in Theorem <ref>. § PRELIMINARIES We recall that the heat kernel Γ and the fundamental solution E of -Δ are given by Γ(x,t)={ (4π t)^-n/2e^-|x|^2/4t for t>0, 0 for t≤ 0, .     and    E(x)=1/n(n-2)|B_1|1/|x|^n-2 for n≥ 3, -1/2πlog |x| for n=2. We also remind the following functions: for x∈ℝ^n and t∈ℝ, A(x,t):=∫_ΣΓ(z',0,t)E(x-z')dz'=∫_ΣΓ(x'-z',0,t)E(z',x_n)dz', B(x,t):=∫_ΣΓ(x-z',t)E(z',0)dz'=∫_ΣΓ(z',x_n,t)E(x'-z',0)dz', where Σ:=ℝ^n-1 and for x∈ℝ_+^n and t∈ℝ, C_i(x,t):=∫_0^x_n∫_Σ∂_n Γ(x-z,t)∂_i E(z)dz, 1≤ i≤ n. The following relations are known between above functions and their proofs are given in <cit.>: ∂_nC_i(x,t) =∂_iC_n(x,t)+∂_i∂_nB(x,t), i≠ n, ∂_nC_n(x,t) =-∑_k=1^n-1∂_kC_k(x,t)-1/2∂_n Γ(x,t). We consider the following nonstationary Stokes system in ℝ_+^n×(0,∞): ∂_t w-Δ w+∇π = f div w = 0 in ℝ_+^n × (0,∞), with zero initial data and non-zero boundary data: w|_t=0=0, w|_x_n=0=g. We collect some estimates of the solution w which will be used later. Let 1<p, q<∞. Suppose that w is the solution of (<ref>)-(<ref>) with g=0. 
Then, w satisfies the following: * If f=∇· F with F_in|_x_n=0=0, then w(t)_L^p(ℝ_+^n)≤ C∫_0^t(t-s)^-1/2F(s)_L^p(ℝ_+^n)ds, 0<t<∞. * If f=∇· F with F∈ L^q(0,∞;L^p(ℝ_+^n)), F|_x_n=0∈ L^q(0,∞; Ḃ_pp^-1/p(ℝ^n-1)), then ∇ w_L^q(0,∞;L^p(ℝ_+^n))≤ C(F_L^q(0,∞;L^p(ℝ_+^n))+F|_x_n=0_L^q(0,∞; Ḃ_pp^-1/p(ℝ^n-1))). * If f=∇· F with F∈ L^q(ℝ^n×ℝ_+), then w_L^q(ℝ_+^n×(0,T))≤ CT^1/2F_L^q(ℝ^n× (0, T)). The Golovkin tensor K_ij(x,t) : ℝ_+^n×ℝ→ℝ is the Poisson kernel of the nonstationary Stokes system (<ref>) in the half-space ℝ_+^n with zero external force. A solution w of (<ref>)-(<ref>) with f=0 is given by w_i(x,t)=∑_j=1^n∫_-∞^∞∫_Σ K_ij(x-y',t-s)g_j(y',s)dy'ds, where we extend g_j(x',t)=0 for t<0. Here the Golovkin tensor K_ij(x,t) is K_ij(x,t)=-2δ_ij∂_nΓ(x,t)-4∂_j C_i(x,t)-2δ_nj∂_i E(x)δ(t), i, j=1, 2, ⋯, n and the associated pressure tensor k_j is given by k_j(x,t)=2∂_j∂_n E(x)δ(t)+2δ_njE(x)δ '(t)+2/t∂_j A(x,t), j=1, 2, ⋯, n. We remark that the last term of the above formula is not integrable and thus the formula has to be understood in either one of the following senses: p(x,t) =2∑_i=1^n∂_i∂_n∫_ΣE(x-y')g_i(y',t)dy'+2∫_ΣE(x-y')∂_t g_n(y',t)dy' +∑_j=1^n∂_j∫_-∞^∞∫_Σ2/t-sA(x-y',t-s)[g_j(y',s)-g_j(y',t)]dy'ds, or p(x,t) =2∑_i=1^n∂_i∂_n∫_ΣE(x-y')g_i(y',t)dy'+2∫_ΣE(x-y')∂_t g_n(y',t)dy' -4∑_j=1^n(∂_t-Δ_x')∫_-∞^∞∫_Σ∂_j A(x-y',t-s)g_j(y',s)dy'ds. The proof of the above representations for n=3, is given in <cit.>. We now recall some useful estimates of the previous functions and their proofs can also be found in <cit.>. |∂_x^l∂_t^m Γ(x,t)| ≲1/(|x|^2+t)^l+n/2+m, |∂_x^l∂_t^m A(x,t)| ≲1/t^m+1/2(|x|^2+t)^l+n-2/2 l+n≥ 3, |∂_x'^l∂_x_n^k∂_t^m B(x,t)| ≲{1/ (|x|^2+t)^l+n-2/2(x_n^2+t)^k+1/2+m m ≥ 0, e^-x_n^2/10t/(|x|^2+t)^l+n-2/2t^k+1/2 m =0, . |∂_x'^l∂_x_n^k∂_t^m C_i(x,t)| ≲1/ t^m+1/2(|x|^2+t)^l+n-1/2(x_n^2+t)^k/2 1≤ i ≤ n. We finally list some useful lemmas. Let Γ_1(x,t):=1/(4π t)^1/2e^-x^2/4t be the 1-dimensional heat kernel and B be the function defined in (<ref>). Then, for 1≤ j ≤ n-1 and |x'|≥ 1 we have ∂_j∂_n B(x,t) =-∂_n Γ_1(x_n, t)(∂_j E(x',0)+J(x',t)) and |J(x',t)|≤ c_nt^1/2. For positive L, a, d and k we have ∫_0^Lr^d-1/(r+a)^kdr≲ L^d(a+L)^-k k<d, L^d(a+L)^-d(1+log_+L/a) k=d, L^d(a+L)^-da^-(k-d) k>d. Let a>0, b>0, k>0, m>0 and k+m>d. Let 0≠ x∈ℝ^d and I:=∫_ℝ^ddz/(|z|+a)^k(|z-x|+b)^m. Then, with R=max{|x|, a, b}∼ |x|+a+b, I≲ R^d-k-m+δ_kdR^-mlogR/a+δ_mdR^-klogR/b+1_k>dR^-ma^d-k+1_m>dR^-kb^d-m. Let n≥ 3 and K(x',t)=∫_Σe^-|x'-z'|^2/4t/|z'|^n-2dz' for x'∈Σ=ℝ^n-1. For any m∈[2,∞), we have K(x',s) ≥(m/(m+1)|x'|)^n-2s^n-1/2(4π)^n-1/2(1-2^n-1/4e^-|x'|^2/8m^2s), K(x',s) ≤(m/(m-1)|x'|)^n-2s^n-1/2(4π)^n-1/2+C/|x'|^n-2s^n-1/2e^-|x'|^2/8m^2s, where C=C(n)>0 is a constant. If 0<s<t<1, 0<a<1 and c>0, then e^-x_n^2/cs/(1-t+s)^1-a≲1/(x_n^2+1-t+s)^1-a. The proof will be provided in the Appendix <ref>. Finally we will frequently use the following integral estimates. For any c>0, ∫_0^t1/s^ke^-x_n^2/csds≲e^-x_n^2/2ct/x_n^2k-2 k>1, e^-x_n^2/ctlog(1+ct/x_n^2 ) k=1, e^-x_n^2/2ct 0<k<1. The case k=1 follows from the inequality (5) in <cit.>. The case k>1 follows from the integration by parts after a suitable change of variables with the aid of the estimate u^k-1e^-u≲ e^-u/2. Lastly, the case 0<k<1 can be similarly obtained as the case k>1. For b, c>0 and α∈[1/2, 1), β∈[1/2, 1], ∫_0^11/(u+c)^α (u+b)^βdu≲log(2+1/√(b+c)) α=β=1/2, 1/(c+b)^α+β-1 1/2≤α <1, 1/2≤β <1, 1/(b+c)^αlog(c/b+1) 1/2≤α <1, β=1. The proof will be provided in the Appendix 8.2. 
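Although the proofs of these lemmas are deferred, a quick numerical sanity check of the last bound may be helpful; the following sketch (assuming SciPy, sampling a few values with b ≤ c) merely compares the integral with the stated majorant and is of course no substitute for the proof in Appendix 8.2.

```python
# Sanity check (not a proof): ratio of the integral to the stated majorant stays bounded.
import numpy as np
from scipy.integrate import quad


def lhs(alpha, beta, b, c):
    return quad(lambda u: (u + c) ** (-alpha) * (u + b) ** (-beta), 0.0, 1.0, limit=500)[0]


def rhs(alpha, beta, b, c):
    if alpha == beta == 0.5:
        return np.log(2.0 + 1.0 / np.sqrt(b + c))
    if beta < 1.0:
        return (b + c) ** (1.0 - alpha - beta)
    return (b + c) ** (-alpha) * np.log(c / b + 1.0)


for alpha, beta in [(0.5, 0.5), (0.75, 0.9), (0.75, 1.0)]:
    ratios = [lhs(alpha, beta, b, c) / rhs(alpha, beta, b, c)
              for b in (1e-2, 1e-4, 1e-6) for c in (1e-2, 1e-4, 1e-6) if b <= c]
    print((alpha, beta), round(max(ratios), 2))  # ratios remain O(1) for these samples
```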
§ PROOF OF THEOREM <REF> FOR THE STOKES EQUATIONS Since the Stokes system is linear, we treat the cases of the normal and tangential components of the boundary data separately. ∙ (Step 1) (Case that g_j=0 for j≠ n and g_n ≠ 0) We first consider the case that only the normal component of the boundary data is not zero, i.e. j=n. 1) We first estimate the tangential components of the velocity w. Let 1≤ i ≤ n-1. Reminding that K_in(x,t)=-4∂_n C_i(x,t)-2∂_i E(x)δ(t) =-4∂_i C_n(x,t)-4∂_i∂_n B(x,t)-2∂_i E(x)δ(t), we note that w_i is decomposed as follows. w_i(x,t)= ∫_0^t∫_ΣK_in(x-y',t-s)g_n(y',s)dy'ds = -4∫_0^t∫_Σ∂_i C_n(x-y',t-s)g_n(y',s)dy'ds -4∫_0^t∫_Σ∂_i∂_n B(x-y',t-s)g_n(y',s)dy'ds -2∫_Σ∂_i E(x-y')g_n(y',t)dy' := w_i^𝒞(x,t)+w_i^ℬ(x,t)+w_i^ℰ(x,t). We control w_i^𝒞, w_i^ℬ and w_i^ℰ separately. (Estimate of w_i^ℰ) We note that w_i^ℰ(x,t)=-2g_n^𝒯(t)∫_Σ∂_i E(x-y')g_n^𝒮(y')dy'. Thus we find that ∂_x^l∂_t^mw_i^ℰ(x,t)=-2∂_t^mg_n^𝒯(t)∫_Σ∂_x^l∂_i E(x-y')g_n^𝒮(y')dy', and we get that using |x'-y'|≳ 1 for any x'∈ B'(0,1) and y'∈ A, |∂_x^l∂_t^m w_i^ℰ(x,t)| ≲|∂_t^m g_n^𝒯(t)|∫_A|∂_x^l∂_i E(x-y')||g_n^𝒮(y')|dy'ds ≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1. Thus we obtain that ∂_x^l∂_t^m w_i^ℰ_L^∞(Q_+^1)<∞. (Estimate of w_i^𝒞) We note that w_i^𝒞(x,t)=-4∫_0^tg_n^𝒯(t-s)∫_Σ∂_i C_n(x-y',s)g_n^𝒮(y')dy'ds and since ∂_t^k g_n^𝒯(0)=0 for any k≥ 0, we find that ∂_x'^l∂_t^m w_i^𝒞(x,t)=-4∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_x'^l∂_i C_n(x-y',s)g_n^𝒮(y')dy'ds. Thus we find that |∂_x'^l∂_t^m w_i^𝒞(x,t)| ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A|∂_x'^l∂_i C_n(x-y',s)||g_n^𝒮(y')|dy'ds. ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A1/s^1/2(|x-y'|^2+s)^l+n/2|g_n^𝒮(y')|dy'ds ≲√(t)∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1, where we used that (|x-y'|^2+s)^α/2≥ |x'-y'|^α≳ 1 for any α>0. Next, the normal derivatives of w_i^𝒞 are calculated as follows using (<ref>): ∂_x_n∂_x'^l∂_t^m w_i^𝒞(x,t)= -4∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_x_n∂_x'^l∂_i C_n(x-y',s)g_n^𝒮(y')dy'ds = 4∑_k=1^n-1∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_x_k∂_x'^l∂_i C_k(x-y',s)g_n^𝒮(y')dy'ds +2∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_x'^l∂_i Γ(x-y',s)g_n^𝒮(y')dy'ds. As shown above in the case of tangential derivatives, similar computations show that |∂_x_n∂_x'^l∂_t^m w_i^𝒞(x,t)| ≲ (t+√(t))∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1. Hence, we obtain ∂_x_n∂_x'^l∂_t^m w_i^𝒞_L^∞(Q_+^1)≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1. Next, the second normal derivatives of w_i^𝒞 is estimated as follows using (<ref>) and (<ref>): ∂_x_n^2∂_x'^l∂_t^m w_i^𝒞(x,t) =4∑_k=1^n-1∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_k∂_x'^l∂_i∂_k C_n(x-y',s)g_n^𝒮(y')dy'ds +4∑_k=1^n-1∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_k∂_x'^l∂_i∂_n∂_k B(x-y',s)g_n^𝒮(y')dy'ds +2∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_n∂_x'^l∂_i Γ(x-y',s)g_n^𝒮(y')dy'ds =I_1+I_2+I_3. Here each I_k is estimated as follows: |I_1| ≲∑_k=1^n-1∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A|∂_x_k∂_x'^l∂_i∂_k C_n(x-y',s)||g_n^𝒮(y')|dy'ds ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A1/s^1/2(|x-y'|^2+s)^l+n+2/2|g_n^𝒮(y')|dy'ds ≲√(t)∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1, |I_2| ≲∑_k=1^n-1∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A|∂_x_k∂_x'^l∂_i∂_n∂_k B(x-y',s)||g_n^𝒮(y')|dy'ds ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_Ae^-x_n^2/10s/s(|x-y'|^2+s)^l+n+1/2|g_n^𝒮(y')|dy'ds ≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1∫_0^te^-x_n^2/10s/sds ≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1log(1+10t/x_n^2), where the last inequality follows from Lemma <ref>. Thus we conclude that |I_2|≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1log(1+10t/x_n^2). Unfortunately, we cannot directly estimate the L^∞ norm of I_2. However using the above estimate for I_2 and the estimate ∫_0^1log(1+10t/x_n^2)^pdx_n≤ c(p) for any p∈ [1,∞) and t∈ (0,1), we see that for any p∈[1,∞) we have the bound I_2_L^p(Q_+^1)<∞. 
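The integral fact invoked above for I_2 is also easy to confirm numerically; the following sketch (assuming SciPy) is only a sanity check of the bound ∫_0^1 log(1+10t/x_n^2)^p dx_n ≤ c(p).

```python
# Sanity check: int_0^1 log(1 + 10 t / x^2)^p dx stays bounded uniformly in t in (0,1).
import numpy as np
from scipy.integrate import quad


def c_p(p, t):
    return quad(lambda x: np.log1p(10.0 * t / x**2) ** p, 0.0, 1.0, limit=200)[0]


for p in (1, 2, 5):
    print(p, max(c_p(p, t) for t in (0.1, 0.5, 0.99)))  # finite for each p, increasing in t
```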
Finally, |I_3| ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A|∂_n∂_x'^l∂_i Γ(x-y',s)||g_n^𝒮(y')|dy'ds ≲∫_0^t|∂_t^mg_n^𝒯(t-s)|∫_A1/(|x-y'|^2+s)^l+n+2/2|g_n^𝒮(y')|dy'ds ≲ t∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1. Hence we conclude that for p ∈ (1,∞), ∂_x_n^2∂_x'^l∂_t^m w_i^𝒞_L^p(Q_+^1)≲∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1. Next, the we estimate the third normal derivatives of w_i^𝒞. Using (<ref>), (<ref>) and the fact that the function B(x,t) satisfies the heat equation, ∂_x_n^3∂_x'^l∂_t^m w_i^𝒞(x,t) =-4∑_k=1^n-1∑_j=1^n-1∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_k∂_x'^l∂_i∂_k∂_j C_j(x-y',s)g_n^𝒮(y')dy'ds -2∑_k=1^n-1∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_k∂_x'^l∂_i∂_k∂_n Γ(x-y',s)g_n^𝒮(y')dy'ds -4∑_k=1^n-1∑_j=1^n-1∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_k^2∂_x'^l∂_i∂_j^2B(x-y',s)g_n^𝒮(y')dy'ds +4∑_k=1^n-1∫_0^t∂_t^m+1g_n^𝒯(t-s)∫_Σ∂_k^2∂_x'^l∂_i B(x-y',s)g_n^𝒮(y')dy'ds +2∫_0^t∂_t^mg_n^𝒯(t-s)∫_Σ∂_n^2∂_x'^l∂_i Γ(x-y',s)g_n^𝒮(y')dy'ds =J_1+J_2+J_3+J_4+J_5. Since there are only the tangential derivatives in the functions B and C_j (1≤ j≤ n-1), using the estimates of the functions given in Preliminaries, we see that ∂_x_n^3∂_x'^l∂_t^m w_i^𝒞_L^∞(Q_+^1)≤∑_k=1^5 |J_k|≲(∂_t^m g_n^𝒯_L^∞+∂_t^m+1 g_n^𝒯_L^∞)g_n^𝒮_L^1. For arbitrary kth normal derivative, we divide the case where k is even or odd. If k is odd, after removing all the normal derivatives applied to the functions C_j (1≤ j ≤ n) and B, we obtain that ∂_x_n^k∂_x'^l∂_t^m w_i^𝒞_L^∞(Q_+^1)≲∑_s=0^k-1/2∂_t^m+s g_n^𝒯_L^∞g_n^𝒮_L^1. On the other hand, if k is even, we cannot remove the first normal derivative applied to the function B and this leads to the time singularity at 0 as shown in (<ref>). Thus we cannot obtain the L^∞ estimates directly from the pointwise estimates of the functions C_k and B given in the Preliminaries. However we still have the following L^p estimates for any p∈ [1,∞): ∂_x_n^k∂_x'^l∂_t^m w_i^𝒞_L^p(Q_+^1)≲∑_s=0^k-2/2∂_t^m+s g_n^𝒯_L^∞g_n^𝒮_L^1 k ≥ 2, ∂_t^m g_n^𝒯_L^∞g_n^𝒮_L^1 k=0. Then from the interpolation inequality f_L^∞(Q_+^1)≤ C(∇ f_L^∞(Q_+^1)^n+1/p+n+1f_L^p(Q_+^1)^p/p+n+1+f_L^p(Q_+^1)), 1≤ p <∞, we obtain that ∂_x_n^k∂_x'^l∂_t^m w_i^𝒞_L^∞(Q_+^1) <∞. (Estimate of w_i^ℬ) We note that w_i^ℬ(x.t)=-4∫_0^tg_n^𝒯(t-s)∫_Σ∂_i∂_n B(x-y', s)g_n^𝒮(y')dy'ds. Thus ∂_x_n^k ∂_x'^l ∂_t^mw_i^ℬ(x.t)=-4∫_0^t∂_t^m g_n^𝒯(t-s)∫_Σ∂_x_n^k ∂_x'^l∂_i∂_n B(x-y', s)g_n^𝒮(y')dy'ds. We already have discussed how to estimate the above integral when we estimated w_i^𝒞 and thus we obtain ∂_x_n^k∂_x'^l∂_t^m w_i^ℬ_L^∞(Q_+^1) <∞. 2) We now estimate the normal component of the velocity w. Recalling that K_nn(x,t) =-2∂_n Γ(x,t)-4∂_n C_n(x,t)-2∂_n E(x)δ(t) =4∑_k=1^n-1∂_k C_k(x,t)-2∂_n E(x)δ(t), we note that w_n is decomposed as follows w_n(x,t) =∫_0^t∫_ΣK_nn(x-y',t-s)g_n(y',s)dy'ds =4∑_k=1^n-1∫_0^t∫_Σ∂_k C_k(x-y',t-s)g_n(y',s)dy'ds-2∫_Σ∂_n E(x-y')g_n(y',t)dy' =w_n^𝒞(x,t)+w_n^ℰ(x,t). (Estimates of w_n^ℰ and w_n^𝒞) w_n^ℰ and w_n^𝒞 enjoy a similar estimate as those of w_i^ℰ and w_i^𝒞 for i<n respectively, and therefore, it follows that ∂_x^l∂_t^m w_n^ℰ_L^∞(Q_+^1)+∂_x_n^k∂_x'^l∂_t^m w_n^𝒞_L^∞(Q_+^1) <∞. ∙ (Step 2) (Case that g_j ≠ 0 for j≠ n and g_n=0) Secondly, we treat the case where only the tangential component of the boundary data is not zero, i.e., 1≤ j ≤ n-1. 1) We first estimate the tangential components of the velocity w. Let 1≤ i ≤ n-1. Since K_ij(x,t)=-2δ_ij∂_n Γ(x,t)-4∂_j C_i(x,t), w_i is expressed as follows: w_i(x,t) =-2δ_ij∫_0^t∫_Σ∂_n Γ(x-y',t-s)g_j(y',s)dy'ds-4∫_0^t∫_Σ∂_j C_i(x-y',t-s)g_j(y',s)dy'ds =w_i^ℋ(x,t)+w_i^𝒞(x,t). 
(Estimate of w_i^ℋ and w_i^𝒞) For w_i^ℋ we note that w_i^ℋ(x,t)=-2δ_ij∫_0^tg_j^𝒯(t-s)∫_Σ∂_nΓ(x-y',s)g_j^𝒮(y')dy'ds and since ∂_t^k g_j^𝒯(0)=0 for any k≥ 0, we find that ∂_x_n^k∂_x'^l∂_t^mw_i^ℋ(x,t)=-2δ_ij∫_0^t∂_t^m g_j^𝒯(t-s)∫_Σ∂_x_n^k∂_x'^l∂_n Γ(x-y',s)g_j^𝒮(y')dy'ds. Thus we find that |∂_x_n^k∂_x'^l∂_t^m w_i^ℋ(x,t)| ≲∫_0^t|∂_t^m g_j^𝒯(t-s)|∫_A|∂_x_n^k∂_x'^l∂_n Γ(x-y',s)||g_j^𝒮(y')|dy'ds ≲∫_0^t|∂_t^m g_j^𝒯(t-s)|∫_A1/(|x-y'|^2+s)^n+k+l+1/2|g_j^𝒮(y')|dy'ds ≲ t∂_t^m g_j^𝒯_L^∞g_j^𝒮_L^1. Since w_i^𝒞 enjoys the same estimates as the Step 1, we find that ∂_x_n^k∂_x'^l∂_t^m w_i^ℋ_L^∞(Q_+^1)+ ∂_x_n^k∂_x'^l∂_t^m w_i^𝒞_L^∞(Q_+^1)<∞. 2) We now estimate the normal component of the velocity w. Recalling that K_nj(x,t)=-4∂_j C_n(x,t), we note that w_n is written as follows w_n(x,t)=-4∫_0^t∫_Σ∂_jC_n(x-y',t-s)g_j(y',s)dy'ds. By the calculations given in Step 1, we find that w_n satisfies the estimate ∂_x^l∂_t^m w_n^𝒞_L^∞(Q_+^1)<∞. ∙ (Step 3) We now estimate the pressure p. We use the second formula (<ref>) for the pressure to obtain that p(x,t) =2∂_n^2∫_ΣE(x-y')g_n(y',t)dy'+2∫_ΣE(x-y')∂_t g_n(y',t)dy' -4∂_t∫_0^t∫_Σ∂_n A(x-y',t-s)g_n(y',s)dy'ds+4Δ_x'∫_0^t∫_Σ∂_n A(x-y',t-s)g_n(y',s)dy'ds =I_1+I_2+I_3+I_4. The estimates of I_1 and I_2 are easy and the result is |I_1|+|I_2|≲g_n^𝒮_L^1(g_n^𝒯_L^∞+∂_t g_n^𝒯_L^∞). For I_3 we have that using g_n^𝒯(0)=0, I_3=-4∫_0^t∫_A∂_n A(x-y',s)∂_t g_n(y',t-s)dy'ds. Thus we have that |I_3| ≲∫_0^t∫_A|∂_n A(x-y',s)||∂_t g_n(y',t-s)|dy'ds ≲∫_0^t∫_A1/s^1/2(|x-y'|^2+s)^n-1/2|∂_t g_n(y',t-s)|dy'ds ≲√(t)g_n^𝒮_L^1∂_t g_n^𝒯_L^∞. Finally for I_4, we have that I_4=4∫_0^t∫_ΣΔ_x'∂_n A(x-y',s)g_n(y',t-s)dy'ds. Thus we have that |I_4| ≲∫_0^t∫_Σ|Δ_x'∂_n A(x-y',s)||g_n(y',t-s)|dy'ds. ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+1/2|g_n(y',t-s)|dy'ds ≲√(t) g_n^𝒯_L^∞g_n^𝒮_L^1. Hence we conclude that |p(x,t)|≲g_n^𝒮_L^1(g_n^𝒯_L^∞+∂_t g_n^𝒯_L^∞). For the time derivatives of p, we find that since ∂_t^k g_n^𝒯(0)=0 for any k≥ 0, ∂_t^m p(x,t) =2∂_n^2∫_ΣE(x-y')∂_t^m g_n(y',t)dy'+2∫_ΣE(x-y')∂_t^m+1g_n(y',t)dy' -4∫_0^t∫_Σ∂_n A(x-y',t-s)∂_t^m+1g_n(y',t-s)dy'ds +4Δ_x'∫_0^t∫_Σ∂_n A(x-y',s)∂_t^m g_n(y',s)dy'ds. And we obtain the estimates |∂_t^m p(x,t)|≲g_n^𝒮_L^1(∂_t^m g_n^𝒯_L^∞+∂_t^m+1 g_n^𝒯_L^∞), by following the same proof for the estimate of p. The higher mixed derivatives of p follows from the previous estimates of w and the Stokes equations. § PROOF OF THEOREM <REF> FOR THE STOKES EQUATIONS ∙ (Step 1) Estimate of w_i and ∇ w_i. We recall that for 1≤ i ≤ n, w_i(x,t) =-2δ_ij∫_0^t∫_Σ∂_n Γ(x-y',t-s)g_j(y',s)dy'ds-4∫_0^t∫_Σ∂_j C_i(x-y',t-s)g_j(y',s)dy'ds =I_1+I_2. It is rather straightforward that |I_1| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+1/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1, |I_2| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1. Thus we find that w_i_L^∞(Q_2^+)≲g_j^𝒯_L^∞g_j^𝒮_L^1. Next, we estimate the tangential derivatives. For 1≤ k≤ n-1, we note that ∂_k w_i(x,t) =-2δ_ij∫_0^t∫_Σ∂_k∂_n Γ(x-y',s)g_j(y',t-s)dy'ds-4∫_0^t∫_Σ∂_k∂_j C_i(x-y',s)g_j(y',t-s)dy'ds =J_1+J_2. Similarly as in the above computations, it follows that |J_1| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+2/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1, |J_2| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+1/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1. Therefore, we obtain for 1≤ k ≤ n-1, ∂_k w_i_L^∞(Q_2^+)≲g_j^𝒯_L^∞g_j^𝒮_L^1. Next, we estimate the normal derivatives. 
We compute ∂_n w_i(x,t) =-2δ_ij∫_0^t∫_Σ∂_n^2 Γ(x-y',s)g_j(y',t-s)ds-4∫_0^t∫_Σ∂_n∂_j C_j(x-y',s)g_j(y',t-s)dy'ds =2δ_ij∫_0^t∫_Σ∂_n^2 Γ(x-y',s)g_j(y',t-s)ds-4∫_0^t∫_Σ∂_j^2 C_n(x-y',s)g_j(y',t-s)dy'ds -4∫_0^t∫_Σ∂_j^2∂_n B(x-y',s)g_j(y',t-s)dy'ds =K_1+K_2+K_3. The term K_1 and K_2 are controlled as follows: |K_1| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+2/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1, |K_2| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+1/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1. For K_3, with the aid of Lemma <ref>, we have K_3 =-4∫_0^t∫_Σ∂_nΓ_1(x_n,s)(∂_j E(x'-y',0)+J(x'-y',s))∂_j g_j(y',t-s)dy'ds and thus, it follows that by Lemma <ref>, |K_3| ≲∫_0^t∫_Σ|∂_n Γ_1(x_n,s)|(|∂_j E(x'-y',0)|+|J(x'-y',s)|)|∂_j g_j(y',t-s)| dy'ds ≲g_j^𝒯_L^∞∫_0^t∫_Ax_n/se^-x_n^2/4s(1/|x'-y'|^n-1+s^1/2)|∂_j g_j^𝒮(y')|dy'ds ≲g_j^𝒯_L^∞∇ g_j^𝒮_L^1∫_0^t(x_n/s^1/2e^-x_n^2/4s+x_n/se^-x_n^2/4s)ds ≲g_j^𝒯_L^∞∇ g_j^𝒮_L^1(1+x_nlog(1+t/4x_n^2)) ≲g_j^𝒯_L^∞∇ g_j^𝒮_L^1. This leads to conclude that ∂_n w_i_L^∞(Q_2^+)≲g_j^𝒯_L^∞g_j^𝒮_W^1,1. ∙ (Step 2) Estimate of ∇^2 w_i. We now consider the second derivatives of w_i. We begin with two the tangential derivatives. If 1≤ l, k ≤ n-1 we have that ∂_l∂_k w_i =-2δ_ij∫_0^t∫_Σ∂_l∂_k∂_n Γ(x-y',t-s)g_j(y',s)dy'ds-4∫_0^t∫_Σ∂_l∂_k∂_j C_i(x-y',t-s)g_j(y',s)dy'ds =I_1+I_2. We seperately estimate I_1 and I_2. |I_1| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+3/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1, |I_2| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+2/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1. Thus we find that ∂_l∂_k w_i_L^∞(Q_2^+)≲g_j^𝒯_L^∞g_j^𝒮_L^1. We now estimate the second derivatives with one tangential derivative and one normal derivative. If 1≤ k≤ n-1, we have that ∂_n∂_k w_i =-2δ_ij∫_0^t∫_Σ∂_k∂_n^2 Γ(x-y',s)g_j(y',t-s)dy'ds-4∫_0^t∫_Σ∂_k∂_j∂_n C_i(x-y',s)g_j(y',t-s)dy'ds =J_1+J_2. For J_1, we note that |J_1|≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+3/2|g_j(y',t-s)|dy'ds ≲ tg_j^𝒯_L^∞g_j^𝒮_L^1. For J_2, we divide into the following cases: 1≤ i≤ n-1 and i=n. If 1≤ i ≤ n-1, then J_2 =-4∫_0^t∫_Σ∂_k∂_j (∂_i C_n(x-y',s)+∂_i∂_n B(x-y',s))g_j(y',t-s)dy'ds =-4∫_0^t∫_Σ∂_k∂_j∂_i C_n(x-y',s)g_j(y',t-s)dy'ds-4∫_0^t∫_Σ∂_i∂_n B(x-y',s)∂_k∂_j g_j(y',t-s)dy'ds =J_21^(1)+J_22^(1). Continuing computations, we obtain |J_21^(1)| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+2/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1, |J_22^(1)| ≲g_j^𝒯_L^∞∇^2 g_j^𝒮_L^1, where we last estimate can be obtained using the same argument for estimating K_3 in the previous step. If i=n, then J_2 =-4∫_0^t∫_Σ∂_k∂_j∂_n C_n(x-y',s)g_j(y',t-s)dy'ds =4∑_l=1^n-1∫_0^t∫_Σ∂_k∂_j∂_l C_l(x-y',s)g_j(y',t-s)dy'ds+2∫_0^t∫_Σ∂_k∂_j∂_n Γ(x-y',s)g_j(y',t-s)dy'ds =J_21^(2)+J_22^(2). We compute that |J_21^(2)| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+2/2|g_j(y,t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1, |J_22^(2)| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+3/2|g_j(y,t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1. Thus we find that ∂_n∂_k w_i_L^∞(Q_2^+)≲g_j^𝒯_L^∞g_j^𝒮_W^2,1. Finally we now estimate the second derivatives with two normal derivatives. If i=n, using ∇· w=0, we have that ∂_n^2 w_n=-∑_k=1^n-1∂_n∂_kw_k and thus we have the L^∞-boundedness of ∂_n^2 w_n from the previous case. If i≠ n, then ∂_n^2 w_i =-2δ_ij∫_0^t∫_Σ∂_n^3 Γ(x-y',s)g_j(y',t-s)dy'ds-4∫_0^t∫_Σ∂_n^2∂_j C_i(x-y',s)g_j(y',t-s)dy'ds =K_1+K_2. Continuing computations, we get |K_1|≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+3/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1. For K_2, we have K_2 =-4∫_0^t∫_Σ∂_n∂_j(∂_i C_n(x-y',s)+∂_i∂_n B(x-y',s))g_j(y',t-s)dy'ds =4∑_k=1^n-1∫_0^t∫_Σ∂_i∂_j∂_k C_k(x-y',s)g_j(y',t-s)dy'ds+2∫_0^t∫_Σ∂_i∂_j∂_nΓ(x-y',s)g_j(y',t-s)dy'ds -4∫_0^t∫_Σ∂_n^2 B(x-y',s)∂_i∂_j g_j(y',t-s)dy'ds =K_21^(2)+K_22^(2)+K_23^(2). 
First two terms are estimated as follows: |K_21^(2)| ≲∫_0^t∫_Σ1/s^1/2(|x-y'|^2+s)^n+2/2|g_j(y',t-s)|dy'ds≲√(t)g_j^𝒯_L^∞g_j^𝒮_L^1, |K_22^(2)| ≲∫_0^t∫_Σ1/(|x-y'|^2+s)^n+3/2|g_j(y',t-s)|dy'ds≲ tg_j^𝒯_L^∞g_j^𝒮_L^1. For K_23^(2) we find that Lemma <ref> gives K_23^(2) =-4∫_0^t∫_Σ∂_n∂_i∂_nB(x-y',s)∂_jg_j(y',t-s)dy'ds =4∫_0^t∫_Σ∂_n^2Γ(x_n,s)(∂_i E(x'-y',0)+J(x'-y',s))∂_jg_j(y',t-s)dy'ds =K_231^(2)+K_232^(2). Here using calculations similar to one for the integral appeared in the estimate of first I_2 in the proof of Theorem <ref> and Lemma <ref>, we find that |K_232^(2)|≲(1+log(1+t/x_n^2))g_j^𝒯_L^∞g_j^𝒮_L^1. It is shown in <cit.>, that for any p∈(1,∞), if g_j^𝒯 satisfies the hypothesis given in Theorem <ref>, K_231^(2)_L^p(Q_1/2^+) is unbounded. Our construction of such an example for g_j^𝒯 satisfying the hypothesis is rather direct. We let g_j^𝒯 to be the sum of all characteristic functions of the interval ((2k+1)^-a, (2k)^-a) where k∈ℤ_+ and a∈(0,1) is to be determined. Then since each interval above is pairwise disjoint, g_k^𝒯∈ L^∞(ℝ) is immediate. To show that this function is not in the stated Besov space, we will use its standard integral characterization. Then we can see that the integral corresponding to g_k^𝒯 is divergent for all a∈ (0,1) for p ≥ 3 and for some a for p <3. (see <cit.> for more details). Since we have ∂_n^2 w_i=K_1+K_21^(2)+K_22^(2)+K_231^(2)+K_231^(2), the required blow-up follows. § PROOF OF THEOREM <REF> FOR THE STOKES EQUATIONS ∙ (Step 1) Estimate of w_i. We have that for 1≤ k≤ n-1, K_ik(x,t)=-2δ_ik∂_n Γ(x,t)-4∂_k C_i(x,t). Thus, we remind that w_i(x,t) =-2δ_ik∫_0^t∫_|y'|≤ 1∂_n Γ(x-y',s) g_k(y',t-s)dy'ds-4∫_0^t∫_|y'|≤ 1∂_k C_i(x-y',s)g_k(y',t-s)dy'ds =I_1 +I_2. Assume first that 1≤ i ≤ n-1. We first estimate I_1. Since |x-y'|≥|x|/2 for |x|>2, we find that for |x|>2, |I_1| ≲∫_0^t∫_|y'|≤ 11/(|x-y'|^2+s)^n+1/2dy'dsg_k_L^∞ ≲t/|x|^n+1g_k_L^∞. For |x|<2, we find that from Lemma <ref>, |I_1| ≲∫_0^t∫_|y'|≤ 1|∂_n Γ(x-y',s)||g_k(y',t-s)|dy'ds ≲∫_0^t|∂_n Γ_1(x_n,s)|∫_|y'|≤ 1|Γ'(x'-y',s)|dy'ds g_k_L^∞ ≲∫_0^tx_n/s^3/2e^-x_n^2/4sds g_k_L^∞ ≲ g_k_L^∞. Thus we obtain that |I_1|≲1/<x>^n+1g_k_L^∞. We now estimate I_2. Since |x-y'|≤ 3 for |x|<2, we find that for |x|<2, |I_2| ≲∫_0^t∫_|y'|≤ 1| C_i(x-y',s)||∂_kg_k(y',t-s)|dy'ds ≲∫_0^t∫_|y'|≤ 11/s^1/2(|x-y'|^2+s)^n-1/2dy'ds ∇_x' g_k_L^∞ ≲∫_|y'|≤ 11/|x-y'|^n-2∫_0^t/|x-y'|^21/u^1/2(u+1)^n-1/2∇_x' g_k_L^∞ ≲∫_|y'|≤ 11/|x-y'|^n-2∇_x' g_k_L^∞ ≲∫_0^3r^n-2/(r+x_n)^n-2dr∇_x' g_k_L^∞≲∇_x' g_k_L^∞. For |x|>2, we find that |I_2| ≲∫_0^t∫_|y'|≤ 11/s^1/2(|x-y'|^2+s)^n/2dy'ds g_k_L^∞ ≲∫_0^t1/s^1/2∫_|y'|≤ 11/(|x|^2+s)^n/2dy'dsg_k_L^∞≲√(t)/|x|^n g_k_L^∞. Thus we obtain that |I_2|≲g_k_L^∞+∇_x' g_k_L^∞/<x>^n. Hence we conclude that |w_i(x,t)|≤ |I_1|+|I_2|≲1/<x>^n( g_k_L^∞+∇_x' g_k_L^∞) for 1≤ i ≤ n-1. Now let i=n. Then I_1=0 and thus we only need to estimate I_2, which enjoys the same estimate as that of I_2 in the previous case. Hence we find that w_L^∞(Q_1^+)≲N_1/<x>^n. ∙ (Step 2) Estimate of ∂_j w_i. We have that ∂_j w_i(x,t) =-2δ_ik∫_0^t∫_|y'|≤ 1∂_j∂_n Γ(x-y',s)g_k(y',t-s)dy'ds -4∫_0^t∫_|y'|≤ 1∂_j∂_k C_i(x-y',s)g_k(y',t-s)dy'ds =J_1+J_2. 1. We first estimate J_1. For j≤ n-1 reminding that J_1=-2δ_ik∫_0^t∫_|y'|≤ 1∂_n Γ(x-y',s)∂_j g_k(y',t-s)dy'ds, and using the same method for estimating I_1, we get |J_1|≲t/<x>^n+2∇_x' g_k_L^∞. For j=n, we have that for |x|<2, J_1 =-2δ_ik∫_0^t∫_|y'|≤ 1∂_n^2 Γ(x-y',t)g_k(y',t-s)dy'ds ≲ -2δ_ik∫_0^t∫_|y'|≤ 1Γ(x-y',s)Δ_y'g_k(y',t-s)ds+∫_0^tΓ(x-y',s)∂_t g_k(y',t-s)dy'ds. 
Here the first integral is bounded by ∇^2 g_k_L^∞ and the second integral is bounded by g_k_L^∞. For |x|>2, as done for I_1, we find that |J_1|≲tg_k_L^∞/|x|^n+2. Thus we obtain |J_1|≲1/<x>^n+2[1+1_|x|<2{1/(x_n^2+|t-1|)^1/2-a1_0<a<1/2 + (1+log(2+1/x_n^2+|t-1|))1_a=1/2}]g_k_W^2,∞. 2. We now estimate J_2. We divide into the following cases: * 1≤ i,  j ≤ n-1, * i=n, 1≤ j≤ n-1, * 1≤ i ≤ n-1, j=n, * i=j=n. (1) If 1≤ i, j ≤ n-1, then for |x|≥2, |J_2| ≲∫_0^t∫_|y'|≤ 1|∂_j∂_k C_i(x-y',s)||g_k(y',t-s)|dy'ds ≲∫_0^t∫_|y'|≤ 11/s^1/2(|x-y'|^2+s)^n+1/2dy'dsg_k_L^∞≲√(t)/|x|^n+1g_k_L^∞, and for |x|<2, using the same method for estimating I_2, we get |J_2|≲∇_x' g_k_L^∞. Thus we see that for 1≤ i, j ≤ n-1 |∂_j w_i(x,t)|≤ |J_1|+|J_2|≲ g_k_W^1,∞/<x>^n+1. (2) If i=n,  1≤ j ≤ n-1, since C_i and C_n share the same estimates we obtain that by the same method of estimates in the previous case, we get that |∂_j w_n(x,t)|≲ g_k_W^1,∞/<x>^n+1. (3) If 1≤ i ≤ n-1,  j=n, then J_2 =-4∫_0^t∫_|y'|≤ 1∂_n∂_k C_i(x-y',s)g_k(y',t-s)dy'ds =-4∫_0^t∫_|y'|≤ 1∂_i∂_k C_n(x-y',s)g_k(y',t-s)dy'ds-4∫_0^t∫_|y'|≤ 1∂_i∂_k∂_n B(x-y',s)g_k(y',t-s)dy'ds =J_21+J_22. For J_21, using the same method of estimates in the case (1), we get that |J_21|≲1/<x>^n+1( g_k_L^∞+∇_x' g_k_L^∞). For J_22, we find that if |x|≥ 2, then using ∂_n B(x,t)=-x_n/2tB(x,t), |J_22| ≲∫_0^t∫_|y'|≤ 1∂_i∂_k∂_n B(x-y',s)g_k(y',t-s)dy'ds =2∫_0^t∫_|y'|≤ 1x_n/s∂_i∂_j B(x-y',s)g_k(y',t-s)dy'ds. Thus we see that by Lemma <ref>, |J_22| ≲∫_0^t∫_|y'|≤ 1x_n/se^-x_n^2/10s/(|x-y'|^2+s)^n/2s^1/2dy'dsg_k_L^∞ =∫_0^tx_n e^-x_n^2/10s/s^3/2∫_|y'|≤ 11/(|x-y'|^2+s)^n/2dy'ds g_k_L^∞ ≲1/|x|^ng_k_L^∞. If |x|<2, using the same method as above and integration by parts, we find that |J_22| ≲∇_x'g_k_L^∞. Thus we get |J_22|≲g_k_W^1,∞/<x>^n. Hence we conclude that |∂_j w_n(x,t)| ≤ |J_1|+|J_21|+|J_22|≲(1/<x>^n+1+1/<x>^n) g_k_W^1,∞. (4) If i=j=n, then ∇· w=0 gives the same estimate as in the case (1). (Step 3) Estimate of ∂_l ∂_j w_i. We recall that ∂_l∂_j w_i(x,t) =-2δ_ik∫_0^t∫_|y'|≤ 1∂_l∂_j∂_n Γ(x-y',s)g_k(y',t-s)dy'ds -4∫_0^t∫_|y'|≤ 1∂_l∂_j∂_k C_i(x-y',s)g_k(y',t-s)dy'ds =K_1+K_2. 1. We first estimate K_1. For |x|≥ 2, we get |K_1| ≲∫_0^t∫_|y'|≤ 11/(|x-y'|^2+s)^n+3/2dy'dsg_k_L^∞≲g_k_L^∞/|x|^n+3. For |x|<2, if l, j<n, then |K_1|≲∫_0^t∫_|y'|≤ 1|∂_n Γ(x-y',s)||∂_l∂_j g_k(y',t-s)|dy'ds≲∇_x'^2 g_k_L^∞, where the last estimate follows from the same method for I_1. If l=n,  j≠ n or l≠ n,  j=n, then using integration by part and performing the similar estimates leading (<ref>), we get |K_1|≲1/<x>^n+2[1+1_|x|<2{1/(x_n^2+t-1)^1/2-a1_0<a<1/2 + (1+log(2+1/x_n^2+|t-1|))1_a=1/2}]∇ g_k_W^2,∞. If l=j=n, then using integration by parts, K_1 =-∫_0^t∫_|y'|≤ 1∂_nΓ(x-y',s)Δ_y'g_k(y',t-s)dy'ds+∫_0^t∫_|y'|≤ 1∂_n Γ(x-y',s)∂_t g_k(y',t-s)dy'ds =-∫_0^t∂_nΓ_1(x_n,s)(1-t+s)_+^a∫_|y'|≤ 1Γ'(x'-y',s)Δ_y'g_k^𝒮(y')dy'ds +a∫_0^t∂_n Γ_1(x_n,s)(1-t+s)_+^a-1∫_|y'|≤ 1Γ'(x'-y',s)g_k^𝒮(y')dy' ds. Here the first integral is bounded by ∇^2 g_k^𝒮_L^∞ and the second integral, if t<1, it is bounded by log(1+16t/x_n)/(x_n^2+1-t)^1-ag_k^S_L^∞, where we have used Lemma <ref> and Lemma <ref> to get the above bounds. And if t>1, then it is bounded by ∫_t-1^t1/se^-x_n^2/8s(1-t+s)^a-1ds ≲∫_t-1^t1/(x_n^2+s)(1-t+s)^1-ads =∫_0^1du/(x_n+t-1+u)u^1-a ≲1/(x_n^2+t)^a(x_n^2+t-1)^1-a, where we have used the inequality e^-θ≲1/1+θ and Lemma <ref>. Thus |K_1|≲(1+log(1+16t/x_n)/(x_n^2+1-t)^1-a1_t<1+1/(x_n^2+t)^a (x_n^2+t-1)^1-a1_t>1)g_k_W^2,∞. 2. We now estimate K_2. 
We divide into the following cases: * 1≤ i, j, l ≤ n-1, * 1≤ j, l ≤ n-1,  i=n, * 1≤ i, j≤ n-1, l=n, or 1≤ i, l≤ n-1,  j=n * 1≤ j≤ n-1,  i=l=n, or 1≤ l≤ n-1,  i=j=n, * 1≤ i≤ n-1, j=l=n, * i=j=l=n. Note that only the roles of j and l are switched for the respective subcases of the case (3) and (4). Thus the subcases of (3) and (4) will give the same estimate and therefore we only focus one of them respectively. (1) If 1≤ i, j, l ≤ n-1, using the same method for estimating I_2 in Step 1, we obtain |K_2|≲g_k_W^2,∞/<x>^n+2. (2) If 1≤ j, l ≤ n-1, i=n, since C_i and C_n share the same estimates we obtain that from the previous case, |K_2|≲g_k_W^2,∞/<x>^n+2. (3) If 1≤ i, j≤ n-1, l=n, we have that K_2 =-4∫_0^t∫_|y'|≤ 1∂_n ∂_j∂_k C_i(x-y',s)g_k(y',t-s)dy'ds =-4∫_0^t∫_|y'|≤ 1∂_j∂_k (∂_i C_n(x-y',s)+∂_i∂_n B(x-y',s))g_k(y',t-s)dy'ds =K_21+K_22. Here K_21 is estimated similarly as in the case (1), and we get that |K_21|≲g_k_W^2,∞/<x>^n+2. For K_22 we find that using ∂_n B(x,t)=-x_n/2tB(x,t) K_22=2∫_0^t∫_|y'|≤ 1x_n/s∂_j∂_k∂_i B(x-y',s)g_k(y',t-s)dy'ds. Then using the same method as estimating J_22, we get that |K_22|≲g_k_W^1,∞/<x>^n+1. (4) If 1≤ j≤ n-1,  i=l=n, we have that K_2 =4∫_0^t∫_|y'|≤ 1∑_p=1^n-1∂_j∂_k∂_pC_p(x-y',s)g_k(y',t-s)dy'ds +2∫_0^t∫_|y'|≤ 1∂_j∂_k∂_n Γ(x-y',s)g_k(y',t-s)dy'ds =K_23+K_24. Here K_24 shares the same estimate as that of K_1 and we get |K_24|≲g_k_W^2,∞/<x>^n+3. And for K_23, we see that it shares the same estimate as that of K_2 in the case (1), and we thus have |K_23|≲g_k_W^2,∞/<x>^n+2. (5) If 1≤ i≤ n-1, j=l=n, we have that, K_2 =-4∫_0^t∫_|y'|≤ 1∂_k∂_n^2 C_i(x-y',s)g_k(y',t-s)dy'ds =-4∫_0^t∫_|y'|≤ 1∂_k∂_n (∂_i C_n(x-y',s)+∂_i ∂_nB(x-y',s))dy'ds =-4∫_0^t∫_|y'|≤ 1∂_k∂_i(∑_p=1^n-1∂_p C_p(x-y',s)-1/2∂_n Γ(x-y',s))g_k(y',t-s)dy'ds -4∫_0^t∫_|y'|≤ 1∂_i∂_k∂_n^2 B(x-y',s)g_k(y',t-s)dy'ds =K_25+K_26-4∫_0^t∫_|y'|≤ 1∂_i∂_k∂_n^2 B(x-y',s)g_k(y',t-s)dy'ds =K_25+K_26+4∫_0^t∫_|y'|≤ 1∂_i∂_k(-∑_p=1^n-1∂_p^2 B(x-y',s)+∂_s B(x-y',s))g_k(y',t-s)dy'ds =K_25+K_26+K_27+K_28. Here K_25 enjoys the same estimate as that of K_23 and K_26 enjoys the same estimate as that of K_24. Thus we get that |K_25|+|K_26|≲g_k_W^2,∞/<x>^n+2. Also K_27 has the similar estimate as that of K_22 and we get |K_27|≲g_k_W^2,∞/<x>^n+2. We finally estimate K_28. We have that K_28 =4∫_0^t∫_|y'|≤ 1∂_i∂_k B(x-y',s)∂_tg_k(y',t-s)dy'ds. First assume that |x|≥ 2 and t<1. We then have that using Lemma <ref>, and Lemma <ref>, |K_28| ≲∫_0^t∫_|y'|≤ 1e^-x_n^2/10s/s^1/2(|x-y'|^2+s)^n/2|∂_t g_k(y',t-s)|dy'ds ≲∫_0^t∫_|y'|≤ 1e^-x_n^2/10s/s^1/2(|x-y'|^2+s)^n/2(1-t+s)^a-1dy'ds g_k^𝒮_L^∞ ≲1/|x|^n∫_0^te^-x_n^2/10s/s^1/2(1-t+s)^1-ads g_k^𝒮_L^∞≲1/|x|^n∫_0^t1/s^1/2(x_n^2+1-t+s)^1-ads g_k^𝒮_L^∞ ≲1/|x|^n1/x_n+1(log(2+1/x_n^2+|t-1|)1_a=1/2+1/(x_n^2+|1-t|)^1/2-a1_a<1/2) g_k^𝒮_L^∞ = g_k^𝒮_L^∞/|x|^n(x_n+1)≲ g_k^𝒮_L^∞/|x|^n(x_n+1)^2a, (∵ 2a≤ 1). Assuming |x|≥ 2, t>1, since g_k(y',t-s) is supported in t-1<s<t, it follows that |K_28| ≲∫_t-1^t∫_|y'|≤ 1e^-x_n^2/10s/s^1/2(|x-y'|^2+s)^n/2|∂_t g_k(y',t-s)|dy'ds ≲∫_t-1^t∫_|y'|≤ 1e^-x_n^2/10s/s^1/2(|x-y'|^2+s)^n/2(1-t+s)^a-1dy'ds g_k^𝒮_L^∞ ≲1/|x|^n∫_t-1^te^-x_n^2/10s/s^1/2(1-t+s)^1-ads g_k^𝒮_L^∞≲1/|x|^n∫_t-1^t1/(x_n^2+1-t+s)^1-a(x_n^2+s)^1/2ds g_k^𝒮_L^∞ ≤1/|x|^n∫_0^1u^a-1/(x_n^2+u+t-1)^1/2du g_k^𝒮_L^∞≲/|x|^n(x_n+1)^2a g_k^𝒮_L^∞, where we have used the estimate e^-x_n^2/10s≲(s/x_n^2+s)^1/2. Assume |x|<2 and n≥ 3. 
By performing similar calculations as above, |K_28| ≲∫_0^t∫_|y'|≤ 1|B(x-y',s)||∂_i∂_k∂_t g_k(y',t-s)|dy'ds ≲∫_(t-1)_+^t∫_|y'|≤ 1e^-x_n^2/10s/(|x-y'|^2+s)^n-2/2s^1/2(1-t+s)^a-1dy'ds∇^2g_k^𝒮_L^∞ ≲∫_(t-1)_+^te^-x_n^2/10s/√(s)(1-t+s)^a-1ds∇^2g_k^𝒮_L^∞ ≲∇^2g_k^𝒮_L^∞. Finally assume |x|<2 and n=2. Then, this is essentially the same integral as I_22 in (3.26) of <cit.> and the result follows by following the proof of Proposition 3.1 in <cit.>. (6) If i=j=l=n then ∇· w=0 gives that it reduces to the case (5). (Step 4) L^p estimates. 1. The estimate of u is immediate from its pointwise bound. 2. For the estimate of ∇ u, we only need to consider the term 1_|x|<2 from the pointwise bound of ∇ u as the other terms can be bounded by the bounds of u, which belong to L^p(ℝ_+^n× (0,2)) if and only if p∈ (1,∞]. If a<1/2, then ∫_0^2∫_|x|<2||^p dxdt ≲∫_0^1∫_0^2 (x_n^2+t)^p(a-1/2)dtdx_n =1/1+p(a-1/2)∫_0^2((x_n^2+1)^p(a-1/2)+1-x_n^2p(a-1/2)+2)dx_n. If 1+p(a-1/2)≥ 0, then the integral is obviously finite and if 1+p(a-1/2)<0, then the integral converges if and only if 2p(a-1/2)+2>-1, i.e., p<3/2a-1. Thus the integral is convergent if and only if p<3/2a-1. If a=1/2, then ∫_0^2∫_|x|<2||^p dxdt ≲∫_0^2∫_0^1 |log(2+1/x_n^2+t)|^p dtdx_n ≲∫_0^2|log(2+1/x_n^2)|^p dx_n, which is finite for all p≥ 1. 3. For the estimate of ∇^2 u, we only need to consider the estimates of the followings /<x>^n(x_n+1)^2a+1_|x|<2log(1+t/x_n)/(x_n^2+|t-1|)^1-a, n≥ 3, and log(2+1/x_2^2+|t-1|), n=2, since the other terms appearing in the pointwise estimate are previously estimated. In this proof, we shall estimate the second term in (<ref>) only, which is locally the most singular. We have that ∫_0^2∫_0^2log^p(1+t/x_n)/(x_n^2+|t-1|)^(1-a)pdtdx_n ≤∫_0^2∫_0^2log^p(1+2/x_n)/(x_n^2+t)^(1-a)p dtdx_n ≲1/1-(1-a)p∫_0^2 log^p(1+2/x_n)((x_n^2+2)^1-(1-a)p-x_n^2-2(1-a)p)dx_n. Similarly as the above calculations, we find that the integral above is finite if and only if p<3/2(1-a). (Step 5) Pressure estimate. We next estimate the pressure. We remind that p(x,t) =2∂_k∂_n ∫_ΣE(x-y')g_k(y',t)dy'-4(∂_t-Δ_x')∫_-∞^∞∫_Σ∂_k A(x-y',t-s)g_k(y',s)dy'ds =I_1+I_2. For I_1 we note that I_1=∫_Σ2∂_n E(x-y')∂_kg_k^𝒮(y')dy'g_k^𝒯(t) and the above integral is the solution of the following Dirichlet problem: Δ u =0 in ℝ_+^n, u(x',0) = ∂_k g_k^𝒮(x') on Σ, and thus we have the estimate |I_1|≲∇_x'g_k_L^∞1_1/4≤ t≤ 1(t). For |x|≥ 2, we find that since |x-y'|≤|x|/2 for |y'|≤ 1, |I_1|≲∫_|y'|≤ 11/|x-y'|^ndy'g_k_L^∞1_1/4≤ t≤ 1(t)≲1_1/4≤ t≤ 11/|x|^ng_k_L^∞ and thus we have that |I_1|≲1_1/4≤ t≤ 11/<x>^n∇_x'g_k_L^∞. For I_2, we divide into the following cases: * 0<t<1/4, * 1/4≤ t<1, * t≥ 1. (1) If 0<t<1/4 then since supp(g_k^𝒯)⊂[1/4, 1] and A(x,t)=0 for t<0, we find that ∫_-∞^∞∫_Σ∂_k A(x-y',t-s)g_k(y',s)dy'ds =∫_1/4^∞∫_Σ∂_k A(x-y',t-s)g_k^𝒮(y')g_k^𝒯(s)dy'ds=0, and thus I_2=0. (2) If 1/4≤ t <1, then |I_2| =|∫_-∞^∞∫_Σ∂_k A(x-y',t-s)(g_k^𝒮(y')∂_s g_k^𝒯(s)-Δ_y'g_k^𝒮(y')g_k^𝒯(s))dy'ds| ≤∫_-∞^∞∫_Σ|∂_k A(x-y',t-s)|(|g_k^𝒮(y')||∂_s g_k^𝒯(s)|+|Δ_y'g_k^𝒮(y')||g_k^𝒯(s)|)dy'ds ≲∫_1/4^t∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2dy'ds×g_k^𝒯_L^∞∇_x'^2g_k^𝒮_L^∞ +∫_1/4^t∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds×g_k^𝒮_L^∞. We first claim the following estimate, which will be used throughout this proof. ∫_|y'|≤ 11/(|x-y'|^2+(t-s))^n-1/2dy'≲1_|x|<2log(2+1/√(t-s))/(|x|+√(t-s)+1)^n-1+1_|x|>21/|x|^n-1. 
Indeed, if |x|<2, then using that |y'|≤ 1 implies |x-y'|≤ |x|+1 and Lemma <ref>, we see that ∫_|y'|≤ 11/(|x-y'|^2+(t-s))^n-1/2dy' ≲∫_|x-y'|≤ |x|+11/(|x-y'|^2+(t-s))^n-1/2dy' ≲∫_0^|x|+1r^n-2/(r+√(t-s))^n-1dr ≲(|x|+1)^n-1/(|x|+√(t-s)+1)^n-1(1+log_+|x|+1/√(t-s)) ≲log(2+1/√(t-s))/(|x|+√(t-s)+1)^n-1, where in the last inequality we used the simple inequality 1+log_+(3a)≤ 4log(a+2) for a>0. If |x|≥ 2, then using |x-y'|≥|x|/2, we find that ∫_|y'|≤ 11/(|x-y'|^2+(t-s))^n-1/2dy' ≲1/|x|^n-1. This proves the claim (<ref>). i) If t≤1/2, then the term |∂_s g_k^𝒯(s)| is bounded and thus using (<ref>), we have that for any ϵ>0, both integrals of the RHS of (<ref>) are bounded by ∫_1/4^t∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2 dy'ds ≲1_|x|<2∫_1/4^t1/(t-s)^1/2log(2+1/√(t-s))/(|x|+1+√(t-s))^n-1ds +1_|x|>2∫_1/4^t1/(t-s)^1/21/|x|^n-1ds ≲1_|x|<2∫_0^√(t-1/4)log(2+1/u)/(|x|+1+u)^n-1du+1_|x|>21/|x|^n-1 ≲1_|x|<2∫_0^√(t-1/4)log(2+1/u)du+1_|x|>21/|x|^n-1 ≲1_|x|<2(t-1/4)^1-ϵ/2+1_|x|>21/|x|^n-1. Thus we have that for t≤1/2, |I_2|≲1_|x|<2+1_|x|>21/|x|^n-1. ii) We assume t≥1/2. Then, we have that the first integral in the RHS of (<ref>) is bounded by 1_|x|<2∫_1/4^tlog(2+1/√(t-s))/(t-s)^1/2(|x|+1+√(t-s))^n-1ds +1_|x|>21/|x|^n-1∫_1/4^t 1/√(t-s)ds ≲1_|x|<2∫_0^√(t-1/4)log(2+1/u)du+1_|x|>21/|x|^n-1 ≲1_|x|<2 +1_|x|>21/|x|^n-1. Also the second integral in the RHS of (<ref>) is bounded by ∫_1/4^t∫_|y'|≤ 1 1/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds =∫_1/4^1/2∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds +∫_1/2^t∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds ≲1_|x|<21/(1-t)^1+ϵ/2-a+1_|x|>21/|x|^n-1 +1_|x|<2∫_1/2^t1/(t-s)^1/2log(2+1/√(t-s))/(|x|+1+√(t-s))^n-11/(1-s)^1-a ds+1_|x|>2∫_1/2^t1/(t-s)^1/2(1-s)^1-ads1/|x|^n-1, where the last inequality follows since the integral with the range s∈[1/4,1/2] is bounded by the integral estimated in the case t≤1/2. To estimate the first integral on the RHS of (<ref>) (thus we may assume that |x|<2 here), we use the estimate log(1+x)≤ C_ϵ x^ϵ for any 0<ϵ<1 to get ∫_1/2^t1/(t-s)^1/2 log(2+1/√(t-s))/(|x|+1+√(t-s))^n-11/(1-s)^1-a ds = ∫_0^√(t-1/2)log(2+1/u)/(|x|+1+u)^n-11/(u^2+1-t)^1-adu ≲∫_0^√(t-1/2)(1+1/u)^ϵ/(u+√(1-t))^2-2a du≲1/(1-t)^1+ϵ/2-a and the second integral on the RHS of (<ref>) is bounded by ^*. Thus we conclude that |I_2|≲1_|x|<2(1+1/(1-t)^1+ϵ/2-a) +1_|x|>2(1/|x|^n-1+^*/|x|^n-1).  3) Finally if t≥ 1, then as in the previous case, |I_2| ≲∫_1/4^1∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2dy'ds×g_k^𝒯_L^∞∇_x'^2g_k^𝒮_L^∞ +∫_1/4^1∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds×g_k^𝒮_L^∞. Here the first integral in the RHS of (<ref>) can be estimated as ∫_1/4^1∫_|y'|≤ 1 1/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2dy'ds≲∫_1/4^11/(t-s)^1/2log(2+1/√(t-s))/(|x|+1+√(t-s))^n-1ds ≲∫_√(t-1)^√(t-1/4)log(2+1/u)/(|x|+u+1)^n-1du≲1/<x>^n-1∫_√(t-1)^√(t-1/4)log(2+1/u)du ≲1/<x>^n-1∫_√(t-1)^√(t-1/4)(1+1/u)^ϵdu≲1/<x>^n-1∫_√(t-1)^√(t-1/4)1/u^ϵdu ≲(t-1/4)^1-ϵ/2/<x>^n-1. For the second integral in the RHS of (<ref>), we have that ∫_1/4^1∫_|y'|≤ 1 1/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds ≲∫_1/4^1/2∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds +∫_1/2^1∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds. The first integral of (<ref>) can be estimated as follows: for any ϵ>0, ∫_1/4^1/2∫_|y'|≤ 1 1/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_s g_k^𝒯(s)|dy'ds ≲∫_1/4^1/21/(t-s)^1/2log(2+1/√(t-s))/(|x|+1+√(t-s))^n-1ds ≲1/<x>^n-1∫_√(t-1/2)^√(t-1/4)log(2+1/u)du≲1/<x>^n-1∫_√(t-1/2)^√(t-1/4)1/v^ϵdv ≲(t-1/2)^1-ϵ/2/<x>^n-1. 
The second integral of (<ref>) can be estimated as follows: ∫_1/2^1∫_|y'|≤ 1 1/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2|∂_sg_k^𝒯(s)|dy'ds ≲∫_1/2^1∫_|y'|≤ 11/(t-s)^1/2(|x-y'|^2+(t-s))^n-1/2(1-s)^1-ady'ds ≲∫_1/2^11/(t-s)^1/2log(2+1/√(t-s))/(|x|+√(t-s)+1)^n-11/(1-s)^1-ads1_|x|<2+∫_1/2^11/(t-s)^1/2(1-s)^1-ads1/|x|^n-11_|x|>2 ≲∫_√(t-1)^√(t-1/2)log(2+1/u)/(1-t+u^2)^1-adu1_|x|<2+^*/|x|^n-11_|x|>2 ≲∫_√(t-1)^√(t-1/2)(1+1/u)^ϵ/(1-t+u^2)^1-adu1_|x|<2+^*/|x|^n-11_|x|>2 ≲∫_√(t-1)^√(t-1/2)1/u^ϵ(1-t+u^2)^1-adu1_|x|<2+^*/|x|^n-11_|x|>2 ≲1/(t-1)^ϵ/2∫_0^1/2dv/v^1-a(t-1+v)^1/21_|x|<2+^*/|x|^n-11_|x|>2 ≲^*/(t-1)^ϵ/21_|x|<2+^*/|x|^n-11_|x|>2. Thus we conclude that |I_2|≲(t-1/4)^1-ϵ/2/<x>^n-1+1_|x|<2^*/(t-1)^ϵ/2+1_|x|>2^*/|x|^n-1. Hence (<ref>), (<ref>) and (<ref>) give |I_2| ≲1_|x|<2(1_t≤1/2(t-1/4)^1-ϵ/2+1_1/2≤ t<1(1+1/(1-t)^1+ϵ/2-a)+1_t> 1^*/(t-1)^ϵ/2) +1_|x|>2(1_t≤1/21/|x|^n-1+(1_1/2≤ t<1+1_t>1)(1/|x|^n-1+^*/|x|^n-1)). For the estimates of the higher spatial derivatives, the similar argument as above gives the result, and this proves the pressure estimate. § PROOF OF THEOREM <REF> From the proof of the previous theorem, we find that |∂_l∂_j w_i(x,t)-σ K_28|≲CN_3/<x>^n for x∈ℝ_+^n and t∈[0, 2] and thus assuming σ=1, we find that |∂_n^2 w_i(x,t)|≥|K_28|-|∂_n^2 w_i(x,t)-K_28|≳ |K_28|-CN_3/<x>^n. We now show that |K_28(x,1)| is unbounded as x_n→ 0. Note that using 4B(x,s)=e^-x_n^2/4sC_n/s^n/2K(x',s) where C_n=4(4π)^-n/21/n(n-2)|B_1|, K_28(x,1) =4∫_0^1∫_|y'|≤ 1∂_i∂_kB(x-y',s)∂_s g_k(y',1-s)dy'ds =4∫_0^1∫_|y'|≤ 1B(x-y',s)∂_i∂_k∂_s g_k(y',1-s)dy'ds =4∫_0^1∫_|y'|≤ 1B(x-y',s)∂_i∂_kg_k^𝒮(y')∂_s g_k^𝒯(1-s)dy'ds =∫_0^1∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2K(x'-y',s)∂_i∂_k g_k^𝒮(y')∂_s g_k^𝒯(1-s)dy'ds =-a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2K(x'-y',s)∂_i∂_k g_k^𝒮(y')s^a-1dy'ds +∫_1/2^3/4∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2K(x'-y',s)∂_i∂_k g_k^𝒮(y')∂_s g_k^𝒯(1-s)dy'ds =K_281(x,1)+K_282(x,1). 1) First assume that i≠ k. Note that K_28(x, 1) is odd in x_i and x_k and thus we may assume that x_i<0 and x_k>0. The other cases then follow with a sign change. We also note that since ∂_s g_k^𝒯(1-s) is bounded on [1/2, 3/4] and using Lemma <ref>, |K_282(x,1)| ≲1/|x'|^n-2. We now estimate K_281(x,1). We have, with d:=|x'-y'|, that K_281(x,1) =-a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1K(x'-y',s)((∂_i∂_k g_k^𝒮)_+(y')-(∂_i∂_k g_k^𝒮)_-(y'))dy'ds ≥ -a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1((m/(m-1)d)^n-2s^n-1/2(4π)^n-1/2+C/d^n-2s^n-1/2e^-d^2/8m^2s)(∂_i∂_k g_k^𝒮)_+(y')dy'ds +a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1(m/(m+1)d)^n-2s^n-1/2(4π)^n-1/2(1-2^n-1/4e^-d^2/8m^2s)(∂_i∂_k g_k^𝒮)_-(y')dy'ds ≥ a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^-a+3/2(4π)^n-1/2{(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')-(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')}dy'ds -a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC/s^-a+3/21/d^n-2e^-d^2/8m^2s|∂_i∂_kg_k^𝒮(y')|dy'ds. Let J be the second integral of the RHS above, then K_281(x,1)+J/aC_n (4π)^n-1/2 ≥∫_0^1/2e^-x_n^2/4ss^a-3/2ds ×∫_|y'|≤ 1{(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')-(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')}dy'. Here since x_n≤ 1, if a<1/2, ∫_0^1/2e^-x_n^2/4ss^a-3/2ds =(x_n^2/4)^a-1/2∫_x_n^2/2^∞e^-uu^-a-1/2du ≥(x_n^2/4)^a-1/2∫_1/2^∞e^-uu^-a-1/2du∼(x_n^2/4)^a-1/2, and if a=1/2, ∫_0^1/2e^-x_n^2/4ss^a-3/2ds =∫_x_n^2/2^∞e^-uu^-1du≳log2/x_n^2≥log2/x_n. We now estimate the space integral which we denote as I. 
From our construction of the boundary data, we find that (∂_i∂_k g_k^𝒮)_-(y')=0 on {y_i y_k≥ 0} and (∂_i∂_k g_k^𝒮)_+(y')=0 on {y_i y_k≤ 0} and thus I =∫_|y'|≤ 1 y_iy_k<0(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')dy' -∫_|y'|≤ 1 y_iy_k>0(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy' =∫_|y'|≤ 1 y_i>0 ,  y_k<0(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')dy' +∫_|y'|≤ 1 y_i<0,  y_k>0(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')dy' -∫_|y'|≤ 1 y_i>0,  y_k>0(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy' -∫_|y'|≤ 1 y_i<0,  y_k<0(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy' =:I_1+I_2-I_3-I_4. Now we denote y_(j)^*:=(y_1,⋯, -y_j,⋯, y_n-1) and d_j:=|x'-y_(j)^*| for j=1,2,⋯, n-1 and y_(i,k)^*:=(y_1,⋯, y_i-1, -y_i, y_i+1⋯,y_k-1, -y_k, y_k+1,⋯, y_n-1) with d_ik:=|x'-y_(i,k)^*|. Then we find that I_1 =∫_|y'|≤ 1 y_i>0 ,  y_k<0(m/(m+1)d)^n-2(∂_i∂_k g_k^𝒮)_-(y')dy' =∫_|y'|≤ 1 y_i>0 ,  y_k>0(m/(m+1)d_k)^n-2(∂_i∂_k g_k^𝒮)_-(y_(k)^*)dy' =∫_|y'|≤ 1 y_i>0 ,  y_k>0(m/(m+1)d_k)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy', where the last equality follows since ∂_i∂_k g_k^𝒮(y_(k)^*)=-∂_i∂_k g_k^𝒮(y'). Similarly we find that I_2 =∫_|y'|≤ 1 y_i>0,  y_k>0(m/(m+1)d_i)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy' I_3 =∫_|y'|≤ 1 y_i>0,  y_k>0(m/(m-1)d)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy' I_4 =∫_|y'|≤ 1 y_i>0,  y_k>0(m/(m-1)d_ik)^n-2(∂_i∂_k g_k^𝒮)_+(y')dy'. Thus we find that I=∫_|y'|≤ 1 y_i>0,  y_k>0{(m/(m+1)d_k)^n-2+(m/(m+1)d_i)^n-2-(m/(m-1)d)^n-2-(m/(m-1)d_ik)^n-2}|∂_i ∂_k g_k^𝒮(y')|dy'. Now consider the integral I':=∫_|y'|≤ 1 y_i>0,  y_k>0(1/d_k^n-2+1/d_i^n-2-1/d^n-2-1/d_ik^n-2)|∂_i ∂_k g_k^𝒮(y')|dy'. Then we find that |I-I'| ≤∫_|y'|≤ 1 y_i>0,  y_k>0[(1-(m/m+1)^n-2)1/d_k^n-2+(1-(m/m+1)^n-2)1/d_i^n-2. .+((m/m-1)^n-2-1)1/d^n-2+((m/m-1)^n-2-1)1/d_ik^n-2]|∂_i ∂_k g_k^𝒮(y')|dy'. Here using the inequalities 1-(m/m+1)^n-2≤n-2/m+1,      (m/m-1)^n-2-1≤2^n-2/m-1, and the fact that min{d, d_i, d_k, d_ik}=d_i≥2/3|x'| since x_i<0, x_k>0, we find that |I-I'|≤ C_n(1/m+1+1/m-1)1/|x'|^n-2∫_|y'|≤ 1 y_i>0,  y_k>0|∂_i ∂_k g_k^𝒮(y')|dy', where C_n is a constant depending only on n, which will vary line by line. Now for any given ϵ>0, take m≥C_m|x'|^4/ϵ where C_m is a sufficiently large positive number independent of x' and ϵ, we then note that 1/m+1+1/m-1≤1/m+9C_m/9C_m-11/m≤ϵ C_n/C_m1/|x'|^4 (we will only need C_m≥2/9 to obtain the above estimate). Hence we obtain that |I-I'|≤ϵ C_n/C_m1/|x'|^n+2∫_|y'|≤ 1 y_i>0,  y_k>0|∂_i ∂_k g_k^𝒮(y')|dy'≤ϵ C_1/C_m1/|x'|^n+2, where the above C_1 depends only on n and g_k^𝒮. We now estimate I'. For this we need a lemma which generalizes Lemma A.3 of <cit.>, for which we skip its proof. Let n≥ 3, H(x,y)=(x^2+y^2)^-n-2/2, 0≤ t<u and 0≤ a<b. Then H(t,a)-H(t,b)-H(u,a)+H(u,b)≥n(n-2)/4(u^2-t^2)(b^2-a^2)H(u,b)^n+2/n-2. Let us define the variable x”:=(x_1,⋯, x_k-1, x_k+1,⋯, x_n-1)∈ℝ^n-2. Then letting a=|x_i+y_i|, b=|x_i-y_i|, u=|x”-y”|+|x_k+y_k|, t=|x”-y”|+|x_k-y_k|, and R:=|x”-y”|, we find that the above lemma implies 1/d_k^n-2+1/d_i^n-2-1/d^n-2-1/d_ik^n-2 ≥n(n-2)/4(4|x_i|y_i)(4x_k y_k+4Rx_k)1/d_k^n+2 ≥ 4n(n-2)|x_i|x_k y_i y_k 1/d_k^n+2≥Cαβ/|x'|^n+2, where in the last inequality we used that d_k=|x'-y_(k)^*|≥ |x'|-1≥2/3|x'|. We then conclude that I'≥C_2αβ/|x'|^n+2. Thus, it follows that I≥ I'-|I-I'|≥C_2αβ/|x'|^n+2-ϵ C_1/C_m1/|x'|^n+2, and by choosing ϵ>0 such that C_2αβ-ϵC_1/C_m>Cαβ for some C>0, we find that I≥Cαβ/|x'|^n+2. Thus we conclude that K_281(x,1)+J/aC_n(4π)^n-1/2≥(1_a<1/2(x_n^2/4)^a-1/2+1_a=1/2log2/x_n)Cαβ/|x'|^n+2. 
We now estimate J: J/aC_n(4π)^n-1/2 =∫_0^1/2∫_|y'|≤ 1 e^-x_n^2/4s1/s^-a+3/21/d^n-2e^-d^2/8m^2 s|∂_i∂_k g_k^𝒮(y')|dy'ds ≤∫_0^1/2∫_|y'|≤ 11/s^-a+3/21/d^n-2e^-d^2/8m^2 s|∂_i∂_k g_k^𝒮(y')|dy'ds = ∫_|y'|≤ 11/d^n-2|∂_i∂_k g_k^𝒮(y')|∫_0^1/21/s^-a+3/2e^-d^2/8m^2 sdsdy'. Similarly as before, we have that for m≥ d, ∫_0^1/21/s^-a+3/2e^-d^2/8m^2 sds=1_a<1/2(d^2/8m^2)^a-1/2+1_a=1/2log4m^2/d^2. Thus, by choosing m=2|x'|>d, we get J ≲∫_|y'|≤ 11/d^n-2[1_a<1/2(d^2/8m^2)^a-1/2+1_a=1/2log4m^2/d^2.]|∂_i∂_k g_k^𝒮(y')|dy' ≲1_a<1/2m^1-2a/|x'|^n-1-2a+1_a=1/21/|x'|^n-2log(3m/|x'|)≲1/|x'|^n-2. Thus we obtain that K_281(x,1)≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n^2)Cαβ/|x'|^n+2-C/|x'|^n-2. Then finally we see that |K_28(x,1)| =|K_281(x,1)+K_282(x,1)| ≥ |K_281(x,1)|-|K_282(x,1)| ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)Cαβ/|x'|^n+2-C/|x'|^n-2. Thus for 1≤ i ≤ n-1, (<ref>) gives |∂_n^2 w_i(x,1)| ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)Cαβ/|x'|^n+2-C/|x'|^n-2-CN_3/<x>^n ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)Cαβ/|x'|^n+2-1/|x'|^n-2. 2) We now consider the case i=k. In this case the boundary data g are symmetric in x_k and thus we will assume that x_k>0. Also in this case the integral K_281 is not necessarily positive and thus we obtain both a lower bound and upper bound for this integral. First for a lower bound (assuming K_281 to be positive), as done in the case i≠ k, K_281 can be written as follows: K_281(x,1) ≥ a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^-a+3/2(4π)^n-1/2{(m/(m+1)d)^n-2(∂_k^2 g_k^𝒮)_-(y')-(m/(m-1)d)^n-2(∂_k^2 g_k^𝒮)_+(y')}dy'ds -J, where J is now given by J:=a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC/s^-a+3/21/d^n-2e^-d^2/8m^2s|∂_k^2 g_k^𝒮(y')|dy'ds. Thus we obtain that K_281(x,1)+J/aC_n (4π)^n-1/2 ≥∫_0^1/2e^-x_n^2/4ss^a-3/2ds ×∫_|y'|≤ 1{(m/(m+1)d)^n-2(∂_k^2 g_k^𝒮)_-(y')-(m/(m-1)d)^n-2(∂_k^2 g_k^𝒮)_+(y')}dy'. The time integral in the RHS above is already estimated in the i≠ k case and we denote the space integral of the RHS above as I. Then using the symmetry of ∂_k^2 g_k^𝒮 with respect to y_k=0 and y_k=± q (see Appendix 8.3), I is given as I =∫_|y'|≤ 1 p≤|y_k|≤ q(m/(m+1)d)^n-2(∂_k^2g_k^𝒮)_-(y')dy'-∫_|y'|≤ 1 q≤|y_k|≤ r(m/(m-1)d)^n-2(∂_k^2 g_k^𝒮)_+(y')dy' =∫_|y'|≤ 1 p≤ y_k≤ q((m/(m+1)d)^n-2+(m/(m+1)d_k)^n-2)(∂_k^2g_k^𝒮)_-(y')dy' -∫_|y'|≤ 1 p≤ y_k≤ q((m/(m-1)d_q)^n-2+(m/(m-1)d_q k)^n-2)(∂_k^2g_k^𝒮)_+(2qe_k-y')dy' =∫_|y'|≤ 1 p≤ y_k≤ q((m/(m+1)d)^n-2+(m/(m+1)d_k)^n-2-(m/(m-1)d_q)^n-2-(m/(m-1)d_q k)^n-2)|∂_k^2g_k^𝒮(y')|dy', where the last equality follows since (∂_k^2g_k^𝒮)_+(2qe_k -y')=(∂_k^2g_k^𝒮)_-(y') for p≤ y_k≤ q, and d_q:=|x'-y_k^(*)-2q e_k|, d_q k:=|x'-y'+2qe_k|. Now consider the integral I':=∫_|y'|≤ 1 p≤ y_k≤ q(1/d^n-2+1/d_k^n-2-1/d_q^n-2-1/d_q k^n-2)|∂_k^2g_k^𝒮(y')|dy'. Then we find that |I-I'| ≤∫_|y'|≤ 1 p≤ y_k ≤ q[(1-(m/m+1)^n-2)1/d^n-2+(1-(m/m+1)^n-2)1/d_k^n-2. .+((m/m-1)^n-2-1)1/d_q^n-2+((m/m-1)^n-2-1)1/d_q k^n-2]|∂_k^2 g_k^𝒮(y')|dy'. Here using the inequalities 1-(m/m+1)^n-2≤n-2/m+1,          (m/m-1)^n-2-1≤2^n-2/m-1, and the fact that min{d, d_k, d_q, d_q k}=d_α≥2/3(1-13α/20√(n-1))|x'| since |x'|≥ 3, we find that |I-I'|≤ C_n (1/m+1+1/m-1)1/|x'|^n-2∫_|y'|≤ 1 0≤ y_k ≤α|∂_k^2 g_k^𝒮(y')|dy'. Now for any given ϵ>0, take m≥C_m|x'|^2/ϵ where C_m is a sufficiently large number independent of x' and ϵ, we note that 1/m+1+1/m-1≤1/m+9C_m/9C_m-11/m≤ϵ C_n/C_m1/|x'|^2 (we will only need C_m≥2/9 to obtain the above estimate). Hence we obtain that |I-I'|≤ϵ C_n/C_m1/|x'|^n∫_|y'|≤ 1 0≤ y_k ≤α|∂_k^2 g_k^𝒮(y')|dy'≤ϵ C'/|x'|^n. We now estimate I'. Assume that x_k<p. 
Note the following chain of inequalities 0<y_k-x_k≤min{x_k+y_k, 2q-x_k-y_k}≤max{x_k+y_k,  2q-x_k-y_k}≤ x_k-y_k+2q. Denote t:=y_k-x_k, a:=2q-2y_k, b:=2x_k. Then from the above chain of inequalities we have that t>0,  a>2δ,  b≥ 0,  t+a=2q-y_k-x_k<q<1<|x'|. Now define the function f(x)=1/(R^2+x^2)^n-2/2 (x>0), where R:=|x”-y”|≤ |x'|+1≤4/3|x'|. Then since f'(x)=-(n-2)x/(R^2+x^2)^n/2, it follows that 1/d^n-2+1/d_k^n-2-1/d_q^n-2-1/d_q k^n-2 =f(t)+f(t+b)-f(t+a)-f(t+a+b) =(n-2)(∫_t^t+a1/(R^2+s^2)^n/2ds+∫_t+b^t+a+b1/(R^2+s^2)^n/2ds) ≥(n-2)a/(R^2+(t+a)^2)^n/2≥2δ(n-2)/(16/9|x'|^2+|x'|^2)^n/2=C_n/|x'|^n, and thus we see that I'≥C_n/|x'|^n and hence by taking ϵ>0 sufficiently small, |I|≥ |I'|-|I-I'|≥C/|x'|^n-C'ϵ/|x'|^n≥C/|x'|^n. We now consider an upper bound for K_281 (assuming K_281 to be negative), K_281(x,1) =-a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1K(x'-y',s)((∂_k^2 g_k^𝒮)_+(y')-(∂_k^2 g_k^𝒮)_-(y'))dy'ds ≤ -a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1(m/(m+1)d)^n-2s^n-1/2(4π)^n-1/2(1-2^n-1/4e^-d^2/8m^2s)(∂_k^2 g_k^𝒮)_+(y')dy'ds +a∫_0^1/2∫_|y'|≤ 1e^-x_n^2/4sC_n/s^n/2-a+1((m/(m-1)d)^n-2s^n-2/2(4π)^n-1/2+C/d^n-2s^n-1/2e^-d^2/8m^2s)(∂_k^2 g_k^𝒮)_-(y')dy'ds ≤ -a∫_0^1/2 e^-x_n^2/4sC_n/s^3/2-a(4π)^n-1/2 ds∫_|y'|≤ 1{(m/(m+1)d)^n-2(∂_k^2g_k^𝒮)_+(y')-(m/(m-1)d)^n-2(∂_k^2g_k^𝒮)_-(y')}dy' +J, where J is the integral previously defined in (<ref>). Let I be the space integral of RHS above. Then I is given as follows I=∫_|y'|≤ 1 q≤ y_k≤ r((m/(m+1)d)^n-2+(m/(m+1)d_k)^n-2-(m/(m-1)d_q)^n-2-(m/(m-1)d_q k)^n-2)|∂_k^2g_k^𝒮(y')|dy'. Now consider the integral I'=∫_|y'|≤ 1 q≤ y_k≤ r(1/d^n-2+1/d_k^n-2-1/d_q^n-2-1/d_qk^n-2)|∂_k^2g_k^𝒮(y')|dy'. Similarly as in the previous case, we can show that for any ϵ >0, by taking m=m(ϵ) sufficiently large, |I-I'|≤C'ϵ/|x'|^n. We now estimate the integral I'. Assume that x_k>r and |x”|<3/2. Note the following chain of inequalities 0<x_k-y_k<x_k+y_k-2q<x_k-y_k+2q<x_k+y_k. Denote t:=x_k-y_k, a:=2y_k-2q, b=2q. Then from the above chain of inequalities we have that t, a, b>0. Then since f”(x)=(n-2)((n-1)x^2-R^2)/(x^2+R^2)^n+2/2, 1/d^n-2 +1/d_k^n-2-1/d_q^n-2-1/d_qk^n-2=f(t)+f(t+a+b)-f(t+a)-f(t+b) = -∫_t^t+af'(s)ds+∫_t+b^t+a+bf'(s)ds=∫_t^t+a(f'(s+b)-f'(s))ds = ∫_t^t+a∫_s^s+bf”(τ)dτ ds=(n-2)∫_t^t+a∫_s^s+b(n-1)τ^2-R^2/(τ^2+R^2)^n+2/2dτ ds. Now since R=|x”-y”|≤ |x”|+|y”|<3/2+√(n-2)r and τ≥ s≥ t=x_k-y_k>3/2-r, we have that (n-1)τ^2-R^2≥ (n-1)(3/2-r)^2-(3/2+√(n-2)r)^2). On the other hand since R≤ |x'|+1≤4/3|x'| and τ<t+a+b=x_k+y_k≤4/3|x'|, we have that (τ^2+R^2)^n+2/2≤ 2^n+2/2(4/3)^n+2|x'|^n+2. With these estimates (<ref>) and (<ref>), we find that ∫_t^t+a∫_s^s+b(n-1)τ^2-R^2/(τ^2+R^2)^n+2/2dτ ds ≥ C_n∫_t^t+a∫_s^s+b(n-1)(3/2-r)^2-(3/2+√(n-2)r)^2)/|x'|^n+2dτ ds ≥ C_nab (n-1)(3/2-r)^2-(3/2+√(n-2)r)^2)/|x'|^n+2=C_n qδ/|x'|^n+2, and thus we see that I' ≥C_n/|x'|^n+2 and hence by taking ϵ>0 sufficiently small, |I|≥ |I'|-|I-I'|≥C/|x'|^n+2-C'ϵ/|x'|^n+2≥C/|x'|^n+2. Summarizing above, we get that K_281(x,1)≥(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n^2)C/|x'|^n-C/|x'|^n-2 if x_k <p, ≤ -(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n^2)C/|x'|^n+2+C/|x'|^n-2 if x_k>r and |x”|<α. Then we finally see that |K_28(x,1)| =|K_281(x,1)+K_282(x,1)| ≥ |K_281(x,1)|-|K_282(x,1)| ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)C/|x'|^n+2-C/|x'|^n-2, and thus for i=k, (<ref>) gives |∂_n^2 w_i(x,1)| ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)C/|x'|^n+2-C/|x'|^n-2-CN_3/<x>^n ≳(1_a<1/21/x_n^1-2a+1_a=1/2log2/x_n)C/|x'|^n+2-1/|x'|^n-2. 
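To make the conclusion of this section concrete, we record the following specialization (ours, not in the original text). For n=3 and 0<a<1/2 the last display reads |∂_n^2 w_i(x,1)|≳C/(x_n^1-2a|x'|^5)-1/|x'|, so for every fixed x' in the region considered in the proof the second normal derivative blows up at the rate x_n^2a-1 as x_n→ 0^+; for a=1/2 the same bound gives a logarithmic blow-up. This is the sense in which the boundary data constructed above force ∂_n^2 w to be unbounded near the boundary at time t=1.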
§ PROOF OF THEOREM <REF> FOR THE NAVIER-STOKES EQUATIONS Let w be a solution of the Stokes equations (<ref>)-(<ref>) with f=0 and g given in Theorem 2. Let ϕ_1∈ C_c^∞(ℝ^n) be a cut-off function satisfying ϕ_1 ≥ 0, supp(ϕ_1)⊂ B_2 and ϕ_1 ≡ 1 in B_1. Also let ϕ_2∈ C_c^∞(-∞, ∞) be a cut-off function satisfying ϕ_2 ≥ 0, supp(ϕ_2)⊂ (-2,2) and ϕ_2 ≡ 1 in (-1,1). Set ϕ(x,t)=ϕ_1(x)ϕ_2(t) and define W=ϕ w. Then it is immediate that W=w in Q_1^+:=B_1^+×(0,1) and W|_x_n=0=W|_t=0=0. From the proof of Theorem <ref> for the Stokes equations, we find that W_L^r(ℝ_+^n×(0,1))+ ∇ W_L^r(ℝ_+^n×(0,1)) +∇∇' W_L^r(ℝ_+^n× (0,1)) ≤ C( w_L^r(B_2^+×(0,4))+ ∇ w_L^r(B_2^+×(0,4))+∇∇' w_L^r(B_2^+× (0,4))) ≤ Cg_L^∞((0,∞);W^2,1(A))≤ Ca for all 1≤ r≤∞, where ∇':=(∂_1,⋯,∂_n-1) and a>0 will be determined later. We now consider the following perturbed Navier-Stokes equations in Q_+:=ℝ_+^n× (0,1): ∂_t v-Δ v+∇ q+∇· (v⊗ v+v⊗ W+W⊗ v) =-∇·(W⊗ W), div v = 0 in Q_+, with zero initial and boundary data v|_x_n=0=v|_t=0=0. We first show that the solution v for (<ref>) satisfies v, ∇ v∈ L^r(Q_+)∩ L^∞(Q_+) and ∇^2 v∈ L^r(Q_+) for all n+2<r<∞. To show the existence we consider the following iteration: For a positive integer m≥ 1, ∂_t v^m+1-Δ v^m+1+∇ q^m+1 =-∇· (v^m⊗ v^m+v^m⊗ W+W⊗ v^m+W⊗ W), div v^m+1 = 0 in Q_+, with zero initial and boundary data v^m+1|_x_n=0=v^m+1|_t=0=0, and define v^1≡ 0 and q^1≡ 0. 1) We first show the uniform-in-m estimate. By Proposition <ref>, we have v^2(t)_L^r(ℝ_+^n)≤ C∫_0^t(t-s)^-1/2W(s)⊗ W(s)_L^r(ℝ_+^n)ds. Using the above estimate and Young's inequality for convolution, we find that for r∈ (1,∞), v^2_L^r(Q_+)≤ C_1W⊗ W_L^r(Q_+)≤ C_1W_L^r(Q_+)W_L^∞(Q_+). Also by Proposition <ref>, we have for r>n+2 that, v^2_L^∞(Q_+)≤ C_2W⊗ W_L^r(Q_+)≤ C_2W_L^r(Q_+)W_L^∞(Q_+). Finally by Proposition <ref>, we have for r∈(1,∞) that ∇ v^2_L^r(Q_+)≤ C_3W⊗ W_L^r(Q_+)≤ C_3W_L^r(Q_+)W_L^∞(Q_+). By the maximal regularity of the initial-boundary value problem for the Stokes equations in the half-space, we have ∂_t v^2_L^r(Q_+)+∇^2 v^2_L^r(Q_+)≤1/2n C_4∇· (W⊗ W)_L^r(Q_+)≤ C_4W_L^∞(Q_+)∇ W_L^r(Q_+). Then by the Sobolev embedding, it follows that ∇ v^2_L^∞(Q_+)≤C_5/C_4(∇^2 v^2_L^r(Q_+)+∂_t v^2_L^r(Q_+))≤ C_5 W_L^∞(Q_+)(W_L^r(Q_+)+∇ W_L^r(Q_+)). We now differentiate the equation (<ref>) in x_j (j=1, ⋯, n-1) to get ∂_t ∂_j v^m+1-Δ∂_j v^m+1+∇∂_j q^m+1 =-∇· (∂_j v^m⊗ v^m+ v^m⊗∂_j v^m+∂_j v^m⊗ W+v^m⊗∂_j W    +∂_j W⊗ v^m+W⊗∂_j v^m+∂_j W⊗ W+W⊗∂_jW), div ∂_j v^m+1 = 0 in Q_+, with zero initial and boundary data ∂_j v^m+1|_x_n=0=∂_j v^m+1|_t=0=0. By the maximal regularity for the Stokes equations, we find that ∂_t v^2_L^r(Q_+)+∇^2 ∂_j v^2_L^r(Q_+) ≤1/2nC_4∇·(∂_j W⊗ W+W⊗∂_j W)_L^r(Q_+) ≤1/2n C_4(∇ W_L^r(Q_+)∇ W_L^∞(Q_+)+∇∇' W_L^r(Q_+)W_L^∞(Q_+)). We then find that since ∇· v^2=0, ∇^2 ∂_n v_n^2_L^r(Q_+)≤∑_j=1^n-1∇^2 ∂_j v_j^2_L^r(Q_+)≤ C_4(∇ W_L^r(Q_+)∇ W_L^∞(Q_+)+∇∇' W_L^r(Q_+)W_L^∞(Q_+)). Then we have the following estimates: for j≠ n and 1≤ k ≤ n and r>n+2, ∂_j∂_k v^2_L^∞(Q_+) ≤C_5/C_4(∇^2∂_j v^2_L^r(Q_+)+∂_t∂_j v^2_L^r(Q_+)) ≤ C_5(W_L^∞(Q_+)+∇ W_L^∞(Q_+))(∇ W_L^r(Q_+)+∇∇'W_L^r(Q_+)) so that by letting C_6:=2n^2C_5, ∇∇'v^2_L^∞(Q_+)≤ C_6 (W_L^∞(Q_+)+∇ W_L^∞(Q_+))(∇ W_L^r(Q_+)+∇∇'W_L^r(Q_+)), and ∂_n^2 v_n^2_L^∞(Q_+) ≤∑_k=1^n-1∂_n∂_k v_k^2_L^∞(Q_+)≤ C_6(W_L^∞(Q_+)+∇ W_L^∞(Q_+))(∇ W_L^r(Q_+)+∇∇'W_L^r(Q_+)). Now let A:=W_L^r(Q_+)+W_L^∞(Q_+)+∇ W_L^r(Q_+)+∇ W_L^∞(Q_+)+∇∇'W_L^r(Q_+)≤ c^*a for some fixed constant c^*>0. Define C':=max_1≤ k≤ 6C_k. 
Then we find that by taking a>0 sufficiently small such that C'c^*a≤1/100, v^2_L^r(Q_+) +v^2_L^∞(Q_+)+∇ v^2_L^r(Q_+)+∇ v^2_L^∞(Q_+)+∇^2 v^2_L^r(Q_+)+∇∇'v^2_L^∞(Q_+)+∂_n^2 v_n^2_L^∞(Q_+) ≤ 7C'A^2≤ 7C'c^*aA≤7/100A<A. Suppose for m≥ 2 v^m_L^r(Q_+)+v^m_L^∞(Q_+)+∇ v^m_L^r(Q_+)+∇ v^m_L^∞(Q_+)+∇^2 v^m_L^r(Q_+)+∇∇'v^m_L^∞(Q_+)+∂_n^2 v_n^m_L^∞(Q_+)≤ A . We now estimate each term of the left hand side of the above inequality with v^m replaced by v^m+1. v^m+1_L^r(Q_+) +v^m+1_L^∞(Q_+) +∇ v^m+1_L^r(Q_+) ≤ (C_1+C_2+C_3)v^m⊗ v^m+v^m⊗ W+W⊗ v^m+W⊗ W_L^r(Q_+) ≤ (C_1+C_2+C_3)(v^m_L^r(Q_+)v^m_L^∞(Q_+)+v^m_L^r(Q_+)W_L^∞(Q_+). .+W_L^r(Q_+)v^m_L^∞(Q_+)+W_L^r(Q_+)W_L^∞(Q_+)) ≤ 4(C_1+C_2+C_3)A^2, ∂_t v^m+1_L^r(Q_+)+∇^2 v^m+1_L^r(Q_+) ≤C_4/2n∇·(v^m⊗ v^m+v^m⊗ W+W⊗ v^m+W⊗ W)_L^r(Q_+) ≤ C_4(∇ v^m_L^r(Q_+)v^m_L^∞(Q_+)+∇ v^m_L^r(Q_+)W_L^∞(Q_+). .+∇ W_L^r(Q_+)v^m_L^∞(Q_+)+∇ W_L^r(Q_+)W_L^∞(Q_+)) ≤ 4C_4A^2, and ∇ v^m+1_L^∞(Q_+) ≤C_5/C_4(∇^2 v^m+1_L^r(Q_+)+∂_t v^m+1_L^r(Q_+))≤ 4C_5/C_4C_4A^2=4C_5A^2. By the maximal regularity for the Stokes equations, we find that for 1≤ j ≤ n-1, ∂_t ∂_j v^m+1_L^r(Q_+)+∇^2 ∂_j v^m+1_L^r(Q_+) ≤C_4/2n∇· (∂_j v^m⊗ v^m+ v^m⊗∂_j v^m+∂_j v^m⊗ W+v^m⊗∂_j W. .+∂_j W⊗ v^m+W⊗∂_j v^m+∂_j W⊗ W+W⊗∂_jW_L^r(Q_+) ≤C_4/2n2n(v^m_L^r(Q_+)+∇ v^m_L^r(Q_+)+∇∇'v^m_L^r(Q_+). .+W_L^r(Q_+)+∇ W_L^r(Q_+)+∇∇'W_L^r(Q_+)) ×(v^m_L^∞(Q_+)+∇ v^m_L^∞(Q_+)+∇∇'v^m_L^∞(Q_+). .+W_L^∞(Q_+)+∇ W_L^∞(Q_+)+∇∇'W_L^∞(Q_+)) ≤ 4C_4 A^2. We then see that ∇^2∂_n v_n^m+1_L^r(Q_+)≤∑_j=1^n-1∇^2 ∂_j v_j^m+1_L^r(Q_+)≤ 4nC_4A^2. Then we have that for j≠ n and 1≤ k≤ n, ∂_j∂_k v^m+1_L^∞(Q_+) ≤C_5/C_4(∇^2∂_jv^m+1_L^r(Q_+)+∂_t∂_jv^m+1_L^r(Q_+))≤4C_4 C_5/C_4A^2= 4C_5 A^2, and we see that ∇∇'v^m+1_L^∞(Q_+)≤ 4n^2 C_5A^2=2C_6 A^2. Finally we find that ∂_n^2 v_n^m+1_L^∞(Q_+) ≤∑_k=1^n-1∂_k^2 v_k^m+1_L^∞(Q_+)≤ 2C_6 A^2. Hence summing all the estimates above gives v^m_L^r(Q_+)+v^m_L^∞(Q_+)+∇ v^m_L^r(Q_+) +∇ v^m_L^∞(Q_+)+∇^2 v^m_L^r(Q_+)+∇∇'v^m_L^∞(Q_+)+∂_n^2 v_n^m_L^∞(Q_+) ≤ 24C'A^2 ≤ 24C'c^*aA≤6/25A<A. Thus (<ref>) holds for all m≥ 2. 2. We now show the Cauchy estimate. Denote V^m:=v^m+1-v^m and Q^m+1:=q^m+1-q^m for m≥ 1. Then (V^m+1, Q^m+1) solves ∂_t V^m+1-Δ V^m+1+∇ Q^m+1 =-∇· (V^m⊗ v^m+v^m-1⊗ V^m+V^m⊗ W+W⊗ V^m), ∇· V^m+1 = 0 in Q_+, with zero initial and boundary data V^m+1|_x_n=0=V^m+1|_t=0=0. We then obtain V^m+1_L^r(Q_+)+V^m+1_L^∞(Q_+) +∇ V^m+1_L^r(Q_+) ≤ (C_1+C_2+C_3)V^m⊗ v^m+v_m-1⊗ V^m+V^m⊗ W+W⊗ V_m_L^r(Q_+) ≤(C_1+C_2+C_3)(V^m_L^r(Q_+)(v^m_L^∞(Q_+)+W_L^∞(Q_+)). .+V^m_L^∞(Q_+)(v^m-1_L^r(Q_+)+W_L^r(Q_+))) ≤ 2(C_1+C_2+C_3)A(V^m_L^r(Q_+)+V^m_L^∞(Q_+)), ∂_t V^m+1_L^r(Q_+)+∇^2 V^m+1_L^r(Q_+) ≤1/2nC_4∇·(V^m⊗ v^m+v_m-1⊗ V^m+V^m⊗ W+W⊗ V_m)_L^r(Q_+) ≤C_4/2(∇ V^m_L^r(Q_+)v^m_L^∞(Q_+)+V^m_L^∞(Q_+)∇ v^m_L^r(Q_+). .+∇ v^m-1_L^r(Q_+)V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)v^m-1_L^∞(Q_+). .+2∇ W_L^r(Q_+)V^m_L^∞(Q_+)+2W_L^∞(Q_+)∇ V^m_L^r(Q_+)) ≤ 2C_4A (∇ V^m_L^r(Q_+)+V^m_L^∞(Q_+)), and ∇ V^m+1_L^∞(Q_+) ≤C_5/C_4(∂_t V^m+1_L^r(Q_+)+∇^2 V^m+1_L^r(Q_+)) ≤C_5/C_4(2C_3A+2C_4A)(V^m_L^r(Q_+)+∇ V^m_L^r(Q_+)+V^m_L^∞(Q_+)) =2C_5A(V^m_L^r(Q_+)+∇ V^m_L^r(Q_+)+V^m_L^∞(Q_+)). We now differentiate the equation (<ref>) in x_j (j=1, ⋯, n-1) to get ∂_t ∂_j V^m+1-Δ∂_j V^m+1+∇∂_j Q^m+1 =-∇· (∂_j V^m⊗ v^m+ V^m⊗∂_j v^m+∂_j v^m-1⊗ V^m+v^m-1⊗∂_j V^m    +∂_j V^m⊗ W+V^m⊗∂_j W+∂_j W⊗ V^m+W⊗∂_j V^m), div ∂_j V^m+1 = 0 in Q_+, with zero initial and boundary data ∂_j V^m+1|_x_n=0=∂_j V^m+1|_t=0=0. By the maximal regularity for the Stokes equations, we find that, ∂_t ∂_j V^m+1_L^r(Q_+) +∇^2 ∂_j V^m+1_L^r(Q_+) ≤C_4/2n∇· (∂_j V^m⊗ v^m+ V^m⊗∂_j v^m+∂_j v^m-1⊗ V^m+v^m-1⊗∂_j V^m. .   
+∂_j V^m⊗ W+V^m⊗∂_j W+∂_j W⊗ V^m+W⊗∂_j V^m)_L^r(Q_+) ≤C_4/2[(∇∂_j V^m_L^r(Q_+)+∇ V^m_L^r(Q_+))(v^m_L^∞(Q_+)+v^m-1_L^∞(Q_+)+2W_L^∞(Q_+).. .+∇ v^m_L^∞(Q_+)+∇ v^m-1_L^∞(Q_+)+2∇ W_L^∞(Q_+)) +(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+))(∇ v^m_L^r(Q_+)+∇ v^m-1_L^r(Q_+)+2∇ W_L^r(Q_+). ..+∇∂_j v^m_L^r(Q_+)+∇∂_j v^m-1_L^r(Q_+)+2∇∂_j W_L^r(Q_+))] ≤ 2C_4A(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇^2 V^m_L^r(Q_+)). Hence we find that for j≠ n and 1≤ k≤ n, ∂_j∂_k V^m+1_L^∞(Q_+) ≤C_5/C_4(∇^2∂_jV^m+1_L^r(Q_+)+∂_t∂_jV^m+1_L^r(Q_+)) ≤ 2C_5 A(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇^2 V^m_L^r(Q_+)), and thus ∇∇' V^m+1_L^∞(Q_+) ≤ 2n^2C_5A(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇∂_j V^m_L^r(Q_+)) ≤ C_6A(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇^2 V^m_L^r(Q_+)). Finally we obtain that ∂_n^2 V_n^m+1_L^∞(Q_+) ≤∑_k=1^n-1∂_n∂_k V_k^m+1_L^∞(Q_+) ≤ C_6 A(V^m_L^∞(Q_+)+∇ V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇^2 V^m_L^r(Q_+)). Thus summing all the above estimates gives V^m+1_L^r(Q_+) +V^m+1_L^∞(Q_+)+∇ V^m+1_L^r(Q_+)+∇ V^m+1_L^∞(Q_+) +∇^2 V^m+1_L^r(Q_+)+∇∇'V^m+1_L^∞(Q_+)+∂_n^2 V_n^m+1_L^∞(Q_+) ≤ 12C'A( V^m_L^r(Q_+)+V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇ V^m_L^∞(Q_+)+∇^2 V^m_L^r(Q_+)) ≤ 12C'c^*a( V^m_L^r(Q_+)+V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇ V^m_L^∞(Q_+)+∇^2 V^m_L^r(Q_+)) ≤1/2( V^m_L^r(Q_+)+V^m_L^∞(Q_+)+∇ V^m_L^r(Q_+)+∇ V^m_L^∞(Q_+)+∇^2 V^m_L^r(Q_+)). This implies that (v^m, ∇ v^m, ∇∇'v^m, ∇^2 v^m, ∂_n^2 v^m) converges to (v, ∇ v, ∇∇'v,∇^2 v, ∂_n^2v) in L^∞(Q_+)× L^∞(Q_+)× L^∞(Q_+)× L^r(Q_+)× L^r(Q_+) such that v solves (<ref>) with an appropriate distribution q. We now set u:=v+W and p:=q+π. Then u becomes a weak solution of the Navier-Stokes equations (<ref>) in Q_+ with the no-slip boundary condition (<ref>). Also, then the claim that u satisfies (<ref>) with Q_1^+ in place of Q_2^+ and (<ref>) follows directly from the construction. § APPENDIX §.§ Proof of Lemma 5 Lemma <ref> If 0<s<t<1, 0<a<1 and c>0, then e^-x_n^2/cs/(1-t+s)^1-a≲1/(x_n^2+1-t+s)^1-a. First note that the function f(t)=(t+1)e^-t/c (t>0), has its maximum at t=c-1 and thus f(t)≤ ce^-c-1/c=:C. Then letting u=x_n^2/s gives e^-x_n^2/cs≤ Cs/x_n^2+s≤ C(s/x_n^2+s)^1-a, since 0<a≤1/2. Clearly e^-x_n^2/cs≲ C, which implies that e^-x_n^2/cs≲ Cmin{1,(s/x_n^2+s)^1-a}. Now we note that min{1, s/x_n^2+s}1/1-t+s≤min{1/1-t+s, s/1-t+s1/x_n^2+s}≤min{1/1-t+s, 1/x_n^2+s}, and min{1/1-t+s, 1/x_n^2+s}≤2/x_n^2+1-t+s. The above inequality follows since if x_n^2+s>1-t+s, then 1/x_n^2+s=2/x_n^2+x_n^2+2s<2/x_n^2+1-t+2s<2/x_n^2+1-t+s, while if x_n^2+s<1-t+s, then 1/1-t+s=2/(1-t)+(1-t)+2s<2/x_n^2+1-t+2s<2/x_n^2+1-t+s. Thus we have that min{1, s/x_n^2+s}≤2(1-t+s)/x_n^2+1-t+s, and hence e^-x_n^2/cs≲2^1-a(1-t+s)^1-a/(x_n^2+1-t+s)^1-a. We obtain the desired inequality. §.§ Proof of Lemma 7 Lemma <ref> For b, c>0 and α∈[1/2, 1), β∈[1/2 ,1], ∫_0^11/(u+c)^α (u+b)^βdu≲log(2+1/√(b+c)) α=β=1/2, 1/(c+b)^α+β-1 1/2≤α <1, 1/2≤β <1, 1/(b+c)^αlog(c/b+1) 1/2≤α <1, β=1. We first show when b , c <1. 1) If α=β=1/2, it is direct that ∫_0^11/(u+b)^β(u+c)^αdu =2log√(b+1)+√(c+1)/√(b)+√(c), which gives the last formula in the lemma. 2) Next, we consider the case that α, β∈[1/2, 1). Via the change of variable, we rewrite the integral as follows: ∫_0^11/(u+b)^β(u+c)^αdu =1/b^α+β-1∫_0^1/b1/(u+c/b)^α(u+1)^βdu=:I. If c>1/2, then using b+c/4<c, I≤1/c^α∫_0^11/(u+b)^βdu≤(1+b)^1-β/c^α≲1/(b+c)^α+β-1. We now treat the case c<1/2. We split the integral I as follows: I=1/b^α+β-1∫_0^c/b1/(u+c/b)^α(u+1)^β du+1/b^α+β-1∫_c/b^1/b1/(u+c/b)^α(u+1)^βdu=:I_1+I_2. 
Estimating each term separately, we have that I_1 ≤1/b^α+β-1(b/c)^α∫_0^c/b1/(u+1)^βdu≲1/b^β-1c^α[(c/b+1 )^1-β-1 ] ≲(c+b)^1-β-b^1-β/c^α≲c^1-α/(c+b)^β≲1/(c+b)^α+β-1. On the other hand, we have that since c<1/2, I_2 ≤1/b^α-1(c+b)^β∫_c/b^1/b1/(u+c/b)^α du≲b^1-α/(c+b)^β(1+c)^1-α≲1/(c+b)^α+β-1. 3) It remains to treat the case when α∈[1/2, 1) and β=1. Indeed, again considering c>1/2, we have I≤1/c^αlog(1+1/b)≲1/c^αlog(1+c/b)≲1/(b+c)^αlog(1+c/b), and if c<1/2, we obtain that I_1 ≤1/b^α∫_0^c/b1/(u+c/b)^α(u+1)du. If c/b<1/2, then I_1 ≤1/b^α∫_0^∞1/(u+c/b)^α(u+1)du≲1/b^α≲1/(b+c)^α, since the integral above is convergent. If c/b≥1/2, it is straight forward that I_1 ≤1/c^αlog(1+c/b)≲1/(c+b)^αlog(c/b+1 ). For I_2, we have that if c/b≥1/2, then I_2≲1/b^α∫_c/b^1/b1/(u+1)^α+1du≲1/b^α≲1/(b+c)^α, and if c/b<1/2, then I_2≤1/b^α∫_c/b^1/b1/(u+c/b)^α(u+1)du≤1/b^α∫_0^∞1/(u+c/b)^α(u+1)du≲1/b^α≲1/(b+c)^α, since the last integral is convergent. We now show when b>1 or c>1. 1) If 1/2≤α<1 and 1/2≤β<1, we find that if c≥ b (so that c>1), then I≤1/c^α∫_0^11/(u+b)^βdu≲(1+b)^1-β/c^α≤(c+b)^1-β/c^α≲1/(c+b)^α+β-1. If b>c (so that b>1), we may switch the role of b and β to c and α to get the desired result. 2) If α=β=1/2, the result follows from the identity (<ref>). 3) If α∈[1/2,1) and β=1, we find that if c≥ b, then ∫_0^11/(u+c)^α(u+b)du≤1/c^α∫_0^11/u+bdu=log(1+1/b)/c^α≲log(1+c/b)/(c+b)^α, and finally if b>c, then ∫_0^11/(u+c)^α(u+b)≤1/b∫_0^11/(u+c)^αdu≲(1+c)^1-α/b≲1/(c+b)^α. This completes the proof. §.§ A figure of the spatial boundary data The following is a figure of a spatial boundary data 𝒢 satisfying all the assumptions given in Theorem 4. §.§ A figure of the domain We also include the domain for u such that the conclusions of Theorem 5 hold when n=3, k=1 and i=2 (i≠ k case), and n=3 and k=1 (i=k case). The "+" and "-" in the figures denote the respective signs of ∂_n^2 u(x,1). § ACKNOWLEDGEMENT K. Kang is supported by NRF-2019R1A2C1084685. C. Min is supported by NRF-2019R1A2C1084685. 9 ChangJin T. Chang and B. Jin, Initial and boundary values for L_α^q(L^p) solution of the Navier-Stokes equations in the half-space, J. Math. Anal. Appl. 439, no. 1, 70-90 (2016). Chang-Kang23 T. Chang and K. Kang, Singular weak solutions near boundaries in a half-space away from localized force for the Stokes and Navier-Stokes equations, preprint, arXiv:2303.05746. KangChang22 T. Chang and K. Kang, Local regularity near boundary for the Stokes and Navier-Stokes equations, to appear in Siam J. Math. Anal. CKca T. Chang and K. Kang, On Caccioppoli's inequalities of Stokes equations and Navier-Stokes equations near boundary, J. Differential Equations, 269 no. 9, 6732–6757 (2020). Gau Gautschi, W. Some Elementary Inequalities Relating to the Gamma and Incomplete Gamma Function. Journal of Mathematics and Physics, 38(1-4), 77–81. (1959) Gol60K. K. Golovkin, Potential theory for the non-stationary linear Navier-Stokes equations in the case of three space variables, Trudy Mat. Inst. Steklov., 59:87-99 (1960) Jin2013 B.-J. Jin, On the Caccioppoli inequality of the unsteady Stokes system, Int. J. Numer. Anal. Model. Ser. B, 4 no. 3, 215-223 (2013). Kang05 K. Kang, Unbounded normal derivative for the Stokes system near boundary, Math. Ann., 331 no. 1, 87–109 (2005). KangTsai22 K. Kang, B. Lai, C.-C. Lai, and T.-P. Tsai, The Green tensor of the nonstationary Stokes system in the half-space, Comm. Math. Phys. 399 no. 2, 1291–1372, (2023). KangTsai22FE K. Kang, B. Lai, C.-C. Lai, T.-P. 
Tsai, Finite energy Navier-Stokes flows with unbounded gradients induced by localized flux in the half-space, Trans. Amer. Math. Soc., 375 no. 9, 6701–6746 (2022). Seregin00 G. A. Seregin, Some estimates near the boundary for solutions to the non-stationary linearized Navier-Stokes equations, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 271 (2000). Seregin-Sverak10 G. A. Seregin and V. S̆verák, On a bounded shear flow in half-space, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 385 (2010). Sol68 V. A. Solonnikov, Estimates for solutions of a non-stationary linearized system of Navier-Stokes equations, Trudy Mat. Inst. Steklov., 70:213-317, 1964. In Russian; English translation in A.M.S. Translations, Series II 75:1-117, 1968. Sol153 V. A. Solonnikov, Estimates of the solutions of the nonstationary Navier-Stokes system, (Russian) Boundary value problems of mathematical physics and related questions in the theory of functions, 7. Zap. Naucn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LoMI) 38: 153-231. Translated in J. Soviet Math. 1977, 8: 47-529. Wolf2015 J. Wolf, On the local regularity of suitable weak solutions to the generalized Navier-Stokes equations, Ann. Univ. Ferrara Sez. VII Sci. Mat., 61 no. 1, 149-171 (2015).
http://arxiv.org/abs/2307.00786v1
20230703070525
An FPT Algorithm for Temporal Graph Untangling
[ "Riccardo Dondi", "Manuel Lafond" ]
cs.DS
[ "cs.DS", "cs.SI" ]
Several classical combinatorial problems have been considered and analysed on temporal graphs. Recently, a variant of Vertex Cover on temporal graphs, called MinTimelineCover, has been introduced to summarize timeline activities in social networks. The problem asks to cover every temporal edge while minimizing the total span of the vertices (where the span of a vertex is the length of the timestamp interval it must remain active in, minus one). While the problem has been shown to be NP-hard even in very restricted cases, its parameterized complexity has not been fully understood. The problem is known to be in FPT under the span parameter only for graphs with two timestamps, but the parameterized complexity for the general case is open. We settle this open problem by giving an FPT algorithm that is based on a combination of iterative compression and a reduction to the Digraph Pair Cut problem, a powerful problem that has received significant attention recently.
§ INTRODUCTION Temporal graphs are emerging as one of the main models to describe the dynamics of complex networks. They describe how relations (edges) change in a discrete time domain <cit.>, while the vertex set is not changing. The development of algorithms on temporal graphs has mostly focused on finding paths or walks and on analyzing graph connectivity <cit.>. However, several classical problems in computer science have been recently extended to temporal graphs, and one of the most relevant problems in graph theory and theoretical computer science, Vertex Cover, has been considered in this context <cit.>. In particular, here we study a variant of Vertex Cover, called Network Untangling, introduced in <cit.>. Network Untangling has applications in discovering event timelines and summarizing temporal networks. It considers a sequence of temporal interactions between entities (e.g. discussions between users in a social network) and aims to explain the observed interactions with few (and short) activity intervals of entities, such that each interaction is covered by at least one of the two entities involved (i.e. at least one of the two entities is active when an interaction between them is observed). Network Untangling can be seen as a variant of Vertex Cover, where we search for a minimum cover of the interactions, called temporal edges. The size of this temporal vertex cover is based on the definition of span of a vertex, that is, the length of the vertex activity. In particular, the span of a vertex is defined as the difference between the maximum and minimum timestamp where the vertex is active. Hence, if a vertex is active in exactly one timestamp, it has a span equal to 0. Four combinatorial formulations of Network Untangling have been defined in <cit.>, varying the definition of vertex activity (a single interval or h ≥ 2 intervals) and the objective function (minimization of the sum of vertex spans or minimization of the maximum vertex span). Here we consider the formulation, denoted by MinTimelineCover, where vertex activity is defined as a single interval and the objective function is the minimization of the sum of vertex spans. Hence, given a temporal graph, MinTimelineCover searches for a cover of the temporal edges that has minimum span and such that each vertex is active in one time interval. The problem is known to be NP-hard <cit.>.
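To make the definitions above concrete, the following small sketch (ours, not from the paper; the representation of temporal edges and of activity intervals is an assumption made only for illustration) checks whether a choice of activity intervals covers all temporal edges, computes its total span, and, for tiny instances, finds the optimal span by brute force.

    from itertools import product

    def total_span(activity):
        # activity maps a vertex to a closed interval (a, b) of timestamps, or to None if never active;
        # a vertex active on [a, b] has span b - a, so being active in a single timestamp costs 0
        return sum(iv[1] - iv[0] for iv in activity.values() if iv is not None)

    def is_temporal_cover(edges, activity):
        # edges is a list of temporal edges (u, v, t): u and v interact at timestamp t
        def active(v, t):
            iv = activity.get(v)
            return iv is not None and iv[0] <= t <= iv[1]
        # every temporal edge needs at least one endpoint active at its timestamp
        return all(active(u, t) or active(v, t) for (u, v, t) in edges)

    def brute_force_min_span(edges, T):
        # exhaustive search over one interval (or no activity) per vertex; exponential, for toy instances only
        verts = sorted({x for (u, v, _) in edges for x in (u, v)})
        choices = [None] + [(a, b) for a in range(1, T + 1) for b in range(a, T + 1)]
        best = None
        for combo in product(choices, repeat=len(verts)):
            activity = dict(zip(verts, combo))
            if is_temporal_cover(edges, activity):
                s = total_span(activity)
                best = s if best is None else min(best, s)
        return best

    # toy instance: v alone can cover everything by staying active on [1, 3], paying span 2,
    # but activating u at time 1, w at time 2 and v at time 3 gives a cover of span 0
    edges = [("u", "v", 1), ("v", "w", 2), ("u", "v", 3)]
    assert is_temporal_cover(edges, {"v": (1, 3)}) and total_span({"v": (1, 3)}) == 2
    assert brute_force_min_span(edges, T=3) == 0

The toy instance also illustrates the remark below that deciding whether a span-0 solution exists is a meaningful special case.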
The problem is hard also in very restricted cases when each timestamp contains at most one temporal edge <cit.>, when each vertex has at most two incident temporal edges in each timestamp and the temporal graph is defined over three timestamps <cit.>, and when the temporal graph is defined over two timestamps <cit.>. Note that, since the span of a vertex activity in exactly one timestamp is equal to 0, is trivially in P when the temporal graph is defined on a single timestamp, since in this case any solution of the problem has span 0. Furthermore, deciding whether there exists a solution of that has span equal to 0 can be decided in polynomial time via a reduction to 2-SAT <cit.>. has been considered also in the parameterized complexity framework. The definition of span leads to a problem where the algorithmic approaches applied to cannot be easily extended for the parameter span of the solution. Indeed, in for each edge we are sure than at least one of the endpoints must be included in the solution, thus at least one of the vertex contributes to the cost of the solution. This leads to the textbook FPT algorithm of branching over the endpoints of any edge. For , a vertex with span 0 may cover a temporal edge, as the vertex can be active only in the timestamp where the temporal edge is defined. This makes it more challenging to design FPT algorithms when the parameter is the span of the solution. In this case, is known to admit a parameterized algorithm only when the input temporal graph is defined over two timestamps <cit.>, with a parameterized reduction to the Almost 2-SAT problem. However, the parameterized complexity of for parameter the span of the solution on general instances has been left open <cit.>. The authors of <cit.> have also analyzed the parameterized complexity of the variants of proposed in <cit.>, considering other parameters in addition to the span of the solution: the number of vertices of the temporal graph, the length of the time domain, and the number of intervals of vertex activity. Our contributions. We solve the open question on the parameterized complexity of by showing that the problem is FPT in parameter k, the span of a solution, even if the number of timestamps is unbounded. Our algorithm takes time O^*(2^5k log k), where the O^* notation hides polynomial factors. Our algorithm is divided into two phases, each using a different technique. First, given a temporal graph G, we use a variant of iterative compression, where we start from a solution S of span at most k on a subgraph of G induced by a subset of vertices (taken across all timestamps), and then try to maintain such a solution after adding a new vertex of G to the graph under consideration. This requires us to reorganize which vertices involved in S should be in the solution or not, and in which timestamps. One challenge is that since the number of such timestamps is unbounded, there are too many ways to decide how to include or not include the vertices that are involved in S. We introduce the notion of a feasible assignment, which allows us to compute how the vertices in S can be reorganized (see text for definition). There are only 2^O(k log k) ways of reorganizing the vertices in S. We try each such feasible assignments X, and we must then find a temporal cover of the whole graph G that “agrees” with X. This leads to the second phase of the algorithm, which decides if such an agreement cover exists through a reduction to a variant of a problem called Digraph Pair Cut. 
In this problem, we receive a directed graph and forbidden pairs of vertices, and we must delete at most k arcs so that a specified source vertex does not reach both vertices from a forbidden pair. It is known that the problem can be solved in time O^*(2^k). In this work, we need a version where the input specifies a set of deletable and undeletable arcs, which we call . The Digraph Pair Cut problem and its variants have played an important role in devising randomized kernels using matroids <cit.> and, more recently, in establishing a dichotomy in the complexity landscape of constraint satisfaction problems <cit.>. Here, the problem is useful since it can model the implications of choosing or not a vertex in a solution and, in a more challenging way, allows implementing the notion of cost using our definition of span. We hope that the techniques developed for this reduction can be useful for other variants of temporal graph cover. Overview of the algorithm. Our approach is loosely inspired by some ideas from the FPT algorithm for two timestamps, which is a reduction to Almost 2-SAT <cit.>. In the latter, one is given a set of clauses with at most two variables and must delete a minimum number of them so that those remaining are satisfiable. We do not use Almost 2-SAT directly, but its usage for two timestamps may help understanding the origins of our techniques and the relevance of our reduction to Digraph Pair Cut. The reduction from on two timestamps to Almost 2-SAT associates each vertex v_i with a variable x(v_i), which is true when one should include v_i in a vertex cover and false otherwise; each edge u_iv_i is associated with a clause x(u_i) ∨ x(v_i) (here, v_i represents the occurrence of vertex v at timestamp i ∈{1, 2}). This corresponds to enforcing the inclusion of u_i or v_i in our vertex cover, and we can include enough copies of this clause to make it undeletable. Since our goal is to minimize the number of base vertices v with both v_1 and v_2 in the cover, we also add a clause ¬x(v_1) ∨ ¬x(v_2). Then there is a temporal cover of G of span at most k if and only if one can delete at most k clauses of the latter form to make all remaining clauses satisfiable. Even though this reduction produces clauses with only positive or only negative literals, does not appear to be much simpler than Almost 2-SAT in terms of FPT algorithms, and studying the SAT formulation seems more approachable. For T ≥ 3 timestamps, the clauses of the form x(u_i) ∨ x(v_i) can still be used to model the vertex cover requirements, but there seems to be no obvious way to model the span of a cover. One would need to devise a set of clauses of size two such that choosing an interval of t vertices in a cover corresponds to deleting t - 1 negative clauses. Our idea is to extend current FPT algorithms for Almost 2-SAT to accommodate our cost function. In <cit.>, the authors propose an iterative compression FPT algorithm that starts from a solution that deletes k + 1 clauses, and modifies it into a solution with k clauses, if possible. The algorithm relies on several clever, but complicated properties of the dependency graph of the clauses (in which vertices are literals and arcs are implications implied by the clauses). This algorithm seems difficult to adapt to our problem. To our knowledge, the only other FPT algorithm for Almost 2-SAT is that of <cit.>. This is achieved through a parameterized reduction to Digraph Pair Cut.
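To make the two-timestamp clause construction described above concrete, here is a minimal Python sketch that builds the two clause families: the undeletable edge clauses and the deletable, all-negative span clauses. The literal encoding and all identifiers are illustrative choices of ours rather than notation from the paper, and the resulting instance would still have to be handed to an Almost 2-SAT solver, which is not shown.

```python
def almost_2sat_clauses(base_vertices, temporal_edges):
    """Clause construction for the two-timestamp case sketched above.

    base_vertices: iterable of hashable vertex names.
    temporal_edges: iterable of triples (u, v, t) with t in {1, 2}.
    A literal is encoded as ((name, t), polarity); returns (hard, soft).
    """
    hard = []
    for (u, v, t) in temporal_edges:
        assert t in (1, 2)
        # x(u_t) OR x(v_t): at least one endpoint of the edge is in the cover.
        hard.append((((u, t), True), ((v, t), True)))
    soft = []
    for v in base_vertices:
        # NOT x(v_1) OR NOT x(v_2): violated exactly when v is kept active in
        # both timestamps, i.e. when v contributes one unit of span.
        soft.append((((v, 1), False), ((v, 2), False)))
    return hard, soft
```

Deleting one of the soft clauses corresponds exactly to letting one base vertex stay active in both timestamps, i.e. to paying one unit of span.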
At a high level, the idea is to start from an initial guess of assignment for a well-chosen subset of variables, then to construct the dependency graph of the clauses. A certain chain of implications is enforced by our initial guess, the vertex pairs to separate correspond to contradictory literals, and deleting arcs corresponds to deleting clauses. It turns out that, with some work, we can skip the Almost 2-SAT formulation and reduce to (a variant of) Directed Pair Cut directly by borrowing some ideas from this reduction. This is not immediate though. The first challenge is that the aforementioned “well-chosen initial guess” idea cannot be used in our context, and we must develop new tools to enumerate a bounded number of initial guesses from a partial solution (which we call feasible assignment). The second challenge is that our reduction to our variant of Directed Pair Cut needs a specific gadget to enforce our cost scheme, while remaining consistent with the idea of modeling the dependency graph of the Sat instance corresponding to the vertex problem at hand. § PRELIMINARIES For an integer n, we denote [n] = {1,…,n} and for two integers i, j, with i < j, we denote [i,j] = {i, i+1, …, j-1,j}. Temporal graphs are defined over a discrete time domain , which is a sequence 1, 2 …, T of timestamps. A temporal graph is also defined over a set of vertices, called base vertices, that do not change in the time domain and are defined in all timestamps, and are associated with vertices, which are base vertices defined in specific timestamps. We use subscripts to denote the timestamp to which a vertex belongs to, so, for a base vertex v and t ∈ [T], we use v_t to denote the occurrence of v in timestamp t. A temporal edge connects two vertices, associated with distinct base vertices, that belong to the same timestamp. A temporal graph G = (V_B,E,) consists of * A time domain = {1,2…,T}; * A set V_B of base vertices; V_B has a corresponding set V(G) of vertices, which consists of base vertices in specific timestamps, defined as follows: V(G) = { v_t: v ∈ V_B ∧ t ∈ [T]}. * A set E = E(G) of temporal edges, which satisfies: E ⊆{u_t v_t : u, v ∈ V_B, t ∈ [T] ∧ u ≠ v }. For a directed (static) graph H, we denote by (u,v) an arc from vertex u to vertex v (we consider only directed static graphs, not directed temporal graphs). Given a temporal graph G = (V_B, E, ) and a set of base vertices B ⊆ V_B, we define the set B of all vertices of B across all times: B = {v_t : v ∈ B ∧ t ∈ [T]}. If B = {v}, we may write v instead of {v}. Given a set W ⊆ V(G), we denote by G[W] the subgraph induced by vertices W, i.e. V(G[W]) = W and E(G[W]) = {u_t v_t ∈ E : u_t, v_t ∈ W}. For a subset W_B ⊆ V_B of base vertices, we denote G[W_B] = G[W_B]. We also use the notation G - W_B = G[V_B ∖ W_B]. Observe that G[W_B] and G - W_B are temporal graphs over the same time domain as G. In order to define the problem we are interested in, we need to define the assignment of a set of base vertices. Consider a temporal graph G = (V_B,E,) and a set W_B ⊆ V_B of base vertices. An assignment of W_B is a subset X ⊆W_B such that if u_p ∈ X and u_q ∈ X, with p, q ∈ [T], then u_t ∈ X, for each t ∈ [T] with p ≤ t ≤ q. For a base vertex u ∈ W_B such that there exists t ∈ [T] with u_t ∈ X, we denote by δ(u,X), Δ(u,X), respectively, the minimum and maximum timestamp, respectively, such that u_δ(u,X), u_Δ(u,X)∈ X. If u_t does not exist, then δ(u, X) = Δ(u, X) = 0. 
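The notation just introduced is lightweight enough to be restated as code. In the following sketch (a plain-Python illustration with identifiers chosen by us) a temporal graph is kept as a triple (base_vertices, T, edges), with temporal edges written as triples (u, v, t); the helpers mirror the sets written with a bar, G[W_B] and G - W_B defined above.

```python
def all_vertices(B, T):
    """The set written with a bar in the text: every occurrence (v, t) of a
    base vertex v in B over the time domain 1..T."""
    return {(v, t) for v in B for t in range(1, T + 1)}

def induced_on_base(base_vertices, edges, W_B):
    """G[W_B]: the temporal subgraph induced by the base vertices W_B,
    over the same time domain."""
    W_B = set(W_B) & set(base_vertices)
    return W_B, {(u, v, t) for (u, v, t) in edges if u in W_B and v in W_B}

def minus(base_vertices, edges, W_B):
    """G - W_B = G[V_B \\ W_B] (the time domain is unchanged)."""
    return induced_on_base(base_vertices, edges, set(base_vertices) - set(W_B))

# Toy check with V_B = {a, b, c} and two temporal edges.
V_B = {"a", "b", "c"}
E = {("a", "b", 1), ("b", "c", 3)}
assert minus(V_B, E, {"c"}) == ({"a", "b"}, {("a", "b", 1)})
```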
If W_B is clear from the context or not relevant, then we may say that X is an assignment, without specifying W_B. Note that, given an assignment X and a set v, for some v ∈ V_B, then X ∩v = { v_t : v_t ∈ X ∧ v_t ∈v} contains vertices for v that belong to a contiguous interval of timestamps. Consider a set I ⊆ [T] of timestamps. An assignment X intersects I if there exists v_t ∈ X such that t ∈ I. Now, we give the definition of temporal cover. Given a temporal graph G=(V_B, , E) a temporal cover of G is an assignment X of V_B such that the following properties hold: * For each v ∈ V_B there exists at least one v_t ∈ X, for some t ∈. * For each u_t v_t ∈ E, with t ∈ [T], at least one of u_t, v_t is in X. For a temporal cover X of G, the span of v in X is defined as: sp(v, X) = Δ(v, X) - δ(v, X). Note that if a temporal cover X contains, for a base vertex v ∈ V_B, a single vertex v_t, then sp(v,X) = 0. The span of X, denoted by sp(X), is then defined as: sp(X) = ∑_v ∈ V_B sp(v, X). Now, we are able to define (an example is presented in Fig. <ref>). () Input: A temporal graph G = (V_B,,E). Question: Does there exist a temporal cover of G of span at most k? A temporal cover S ⊆ V(G) of span at most k will sometimes be called a solution. Our goal is to decide whether is FPT in parameter k. § AN FPT ALGORITHM In this section we present our FPT algorithm, which consists of two parts: * The iterative compression technique. * A reduction to the problem. Before presenting the main steps of our algorithm, we present the main idea and some definitions. Recall that our parameter, that is the span of a solution of , is denoted by k. Consider a temporal graph G and assume we have a temporal cover S of span at most k of the subgraph G - {w}, for some base vertex w ∈ V_B. The idea of the iterative compression step is, starting from S, to show how to decide in FPT time whether there exists a solution of for G. This is done by solving a subproblem, called , where we must modify S to consider w. A solution to this subproblem is computed by branching on the assignments of base vertices having a positive span in S and on w, and then reducing the problem to . is defined as follows. () Input: A temporal graph G = (V_B,,E), a vertex w ∈ V_B, a temporal cover S of G - {w} of span at most k. Output: Does there exist a temporal cover of G of span at most k? For technical reasons that will become apparent later, we will assume that the temporal graph contains no edge at timestamps 1 and T, i.e. G[{v_1, v_T : v ∈ V_B}] is an edgeless graph (as in Fig. <ref>). It is easy to see that if this is not already the case, we can add two such “dummy” timestamps, where G does not contain any temporal edge. Indeed, since there are no temporal edges in these two timestamps, then G has a temporal cover of span at most k if and only the same graph with dummy timestamps has a temporal cover of span at most k. Informally, if we are able to solve in FPT time, then we can obtain an FPT algorithm for as well. Indeed, we can first compute a temporal cover on a small subset of base vertices (for example a single vertex), and then we can add, one at a time, the other vertices of the graph. This requires at most |V_B| iterations, and each time a vertex is added, we compute a solution of to check whether it is possible to find a temporal cover of span at most k after the addition of a vertex. §.§ Iterative Compression We now present our approach based on iterative compression to solve the problem. 
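Before moving on to the compression step, the following standalone sketch restates the notions of assignment, δ, Δ, temporal cover and span as executable checks on the same (base_vertices, edges) representation as above; again, every name is an illustrative choice of ours and not notation from the paper.

```python
def delta(v, X):
    """δ(v, X): earliest timestamp t with (v, t) in X (0 if none)."""
    ts = [t for (u, t) in X if u == v]
    return min(ts) if ts else 0

def Delta(v, X):
    """Δ(v, X): latest timestamp t with (v, t) in X (0 if none)."""
    ts = [t for (u, t) in X if u == v]
    return max(ts) if ts else 0

def is_assignment(X):
    """X (a set of pairs (v, t)) is an assignment if, for every base vertex,
    its timestamps in X form a contiguous interval."""
    for v in {u for (u, _) in X}:
        ts = {t for (u, t) in X if u == v}
        if ts != set(range(min(ts), max(ts) + 1)):
            return False
    return True

def span(X):
    """sp(X): sum over base vertices of Δ(v, X) - δ(v, X)."""
    return sum(Delta(v, X) - delta(v, X) for v in {u for (u, _) in X})

def is_temporal_cover(base_vertices, edges, X):
    """The two conditions of a temporal cover, plus the assignment property:
    every base vertex appears at least once and every temporal edge (u, v, t)
    has an endpoint in X."""
    covered_bases = {u for (u, _) in X}
    return (is_assignment(X)
            and covered_bases == set(base_vertices)
            and all((u, t) in X or (v, t) in X for (u, v, t) in edges))

# Toy check: edges ab at time 1 and bc at time 3 are covered by keeping b
# active on [1, 3]; the total span is 2.
X = {("a", 1), ("b", 1), ("b", 2), ("b", 3), ("c", 3)}
assert is_temporal_cover({"a", "b", "c"}, {("a", "b", 1), ("b", "c", 3)}, X)
assert span(X) == 2
```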
Given a solution S for G - {w}, we focus on the vertices of V_B that have a positive span in S and vertex w. An example of our approach, that illustrates the sets of base vertices and vertices used by the algorithm, is presented in Fig. <ref>. Consider the input of that consists of a temporal graph G = (V_B,,E), a vertex w ∈ V_B, and a temporal cover S of G - {w} of span at most k. Define the following sets associated with S: V_S = { v ∈ V_B : ∃ p, q ∈ [T] , p < q, }∪{ w } V'_S = { v_t : v_t ∈ S, v ∈ V_S ∖{ w }}∪{ w_t: t ∈ [T] }. The set V_S is defined as the set of base vertices having span greater than 0 in S, plus vertex w. V'_S contains the vertices in V(G) associated with V_S, in particular: (1) the vertices corresponding to the base vertices in V_S ∖{ w } that are included in S and (2) vertices corresponding to the base vertex w in every timestamp. Define the following set I_S of timestamps associated with V_S ∖{ w }: I_S = {t ∈ [T]: u_t ∈ V'_S for some u ∈ V_S ∖{ w }}. Essentially, I_S contains those timestamps where the base vertices of V_S ∖{ w }, that is of span greater than zero, have associated vertices in S. These timestamps are essential for computing a solution of , that is to compute whether there exists a temporal cover of G[V_B] of span at most k starting from S. We define now the sets of base vertices and vertices associated with S and having a span equal to 0: Z_S = V_B ∖ V_S Z'_S = S ∖ V'_S. First, we show two easy properties of S and I_S on the temporal graph G - {w}. Let S be a solution of on instance G -{w} and let I_S be the associated set of timestamps. Then |I_S| ≤ 2k. Let S be a solution of on instance G -{w}. Then, sp(Z'_S) = 0. Moreover, Z'_S covers each temporal edge of G -{w} not covered by V'_S∖w. Now, we introduce the concept of feasible assignment, which is used to “guess” how S is rearranged in a solution of . Recall that an assignment X intersects a set I_S of timestamps if there exists v_t ∈ X such that t ∈ I_S. Consider an instance of that consists of a temporal graph G = (V_B,,E), a vertex w ∈ V_B, a temporal cover S of G -{w} of span at most k, and sets V_S, V'_S and I_S associated with S. We say that an assignment X ⊆V_S of V_S is a feasible assignment (with respect to G, S, and I_S) if all of the following conditions hold: * the span of X is at most k; * every edge of G[V_S] is covered by X; * X ∩w is a non-empty assignment of {w}; * for every v ∈ V_S ∖{w}, at least one of the following holds: (1) X ∩v is empty; (2) X ∩v is an assignment of {v} that intersects with I_S; or (3) X ∩v contains a vertex v_t such that v_t w_t ∈ E and w_t ∉ X ∩w. Given a feasible assignment X, we denote M_S(X) = {v ∈ V_S : X ∩v≠∅} N_S(X) = {v ∈ V_S : X ∩v = ∅} Informally, notice that point 4 considers the possible cases for a feasible assignment of the vertices of a base vertex v ∈ V_S ∖{w} : none of the associated vertices in I_S belongs to the computed solution (case 4.(1)), or some of its associated vertices in I_S belongs to the solution, case 4.(2) and case 4.(3), where the latter case is forced by the need of covering temporal edge v_t w_t, with t ∈ I_s, not covered by w_t. Note that M_S(X) and N_S(X) form a partition of V_S. Also note that G, S, and I_S are fixed in the remainder, so we assume that all feasible assignments are with respect to G, S, and I_S without explicit mention. We now relate feasible assignments to temporal covers. Let X^* be a temporal cover of G and let X be a feasible assignment. 
We say that X^* agrees with X if: * for each v ∈ M_S(X), X^* ∩v = X ∩v; * for each v ∈ N_S(X) and each t ∈ I_S, X^* contains every neighbor u_t of v_t such that u_t ∈Z_S. The intuition of X^* agreeing with X is as follows. For v ∈ M_S(X), X “knows” which vertices of v should be in the solution, and we require X^* to contain exactly those. For v ∈ N_S(X), we interpret that X does not want any vertex v_t with t ∈ I_S. Thus, to cover the edges incident to v_t that go outside of V_S, we require X^* to contain the other endpoint. Note an important subtlety: we act “as if” X^* should not contain v_t or other vertices of N_S(X) with timestamp in I_S, but the definition does not forbid it. Hence, X^* can contain a vertex of N_S(X) in some timestamps of I_S, as long as X^* contains also its neighbors (in I_S) outside V_S. The main purpose of feasible assignments and agreement is as follows. Let X^* be a temporal cover of G of span at most k. Then there exists a feasible assignment X such that X^* agrees with X. Construct X ⊆ X^* as follows: add X^* ∩w to X, and for v ∈ V_S ∖{w}, add X^* ∩v to X if and only if X^* ∩v intersects with the set I_S, or if it contains a vertex v_t incident to an edge v_tw_t ∈ E such that w_t ∉ X^* ∩w. Note that since X^* is an assignment of V_B, X is an assignment of V_S. We first focus on arguing that X satisfies each condition of a feasible assignment (Definition <ref>). For Condition 1, since X^* has span at most k and X ⊆ X^*, it is clear that X also has span at most k. For Condition 3, X^* ∩w is non-empty by the definition of a temporal cover, and we added X^* ∩w to X. For Condition 4, we explicitly require in our construction of X that for each v ∈ V_S ∖{w}, if X ∩v is non-empty, then it is equal to X^* ∩v and it either intersects with I_S or covers an edge not covered by X ∩w = X^* ∩w. Let us focus on Condition 2. Let u_t v_t ∈ E(G[V_S]). If u = w, then if we did not add w_t to X, X^* must contain v_t and we added X^* ∩v to X, thereby covering the edge. The same holds if v = w. Assume u ≠ w, v ≠ w, and suppose without loss of generality that X^* contains u_t to cover the edge. Suppose for contradiction that X does not cover u_tv_t. Then we did not add X^* ∩u to X, which implies that X^* ∩u does not intersect with I_S. In particular, t ∉ I_S. Recall that S, the temporal cover of G - {w}, only intersects with u and v in timestamps contained in I_S. Hence, S cannot cover u_tv_t, a contradiction. We deduce that X covers every edge. Therefore, X is a feasible assignment. It remains to show that X^* agrees with X. For v ∈ M_S(X), X^* ∩v = X ∩v by the construction of X. For v ∈ N_S(X), there is no v_t ∈ X^* with t ∈ I_S, as otherwise we would have added X^* ∩v to X. For every such v_t, X^* must contain all of its neighbors in Z_S to cover the edges, as required by the definition of agreement. It remains to show that the number of feasible assignments has bounded size and can be enumerated efficiently. We first show the latter can be achieved through the following steps. Start with X as an empty set and then apply the following steps: (1) Branch into every non-empty assignment X_w of {w} of span at most k. 
In each branch, add the chosen subset X_w to X; (2) For every edge v_t w_t ∈ E(G[V_S]) such that w_t ∉ X_w, add v_t to X; (3) For every v ∈ V_S ∖{ w }, such that X ∩v = ∅ at this moment, branch into |I_S| + 1 options: either add no vertex of v to X, or choose a vertex v_t and add it to X, where t ∈ I_S; (4) For every v ∈ V_S ∖{w} such that X ∩v≠∅ at this moment, branch into every assignment X_v of {v} of span at most k that contains every vertex of X ∩v (if no such assignment exists, abort the current branch). For each such branch, add every vertex of X_v ∖ X to X. The above steps enumerate every feasible assignment in time O(2^4 k log k T^3 n), where n = |V_B|. §.§ Reducing to Our objective is now to list every feasible assignment and, for each of them, to verify whether there is a temporal cover that agrees with it. More specifically, consider a feasible assignment X ⊆V_S. Our goal is to decide whether there is a temporal cover X^* of span at most k that agrees with X. Since we branch over every possible feasible assignment X, if there is a temporal cover X^* of G of span at most k, then by Theorem <ref> our enumeration will eventually consider an X that X^* agrees with, and hence we will be able to decide of the existence of X^*. We show that finding X^* reduces to the problem, as we define it below. For a directed graph H, we denote its set of arcs by A(H) (to avoid confusion with E(G), which is used for the edges of an undirected graph G). For F ⊆ A(H), we write H - F for the directed graph with vertex set V(H) and arc set A(H) ∖ F. () Input: A directed graph H = (V(H),A(H)), a source vertex s ∈ V(H), a set of vertex pairs P ⊆V(H) 2 called forbidden pairs, a subset of arcs D ⊆ A(H) called deletable arcs, and an integer k'. Output: Does there exist a set of arcs F ⊆ D of H such that |F| ≤ k' and such that, for each {u,v}∈ P, at least one of u, v is not reachable from s in H - F? It is known that can be solved in time O^*(2^k') <cit.>, but a few remarks are needed before proceeding. In <cit.>, the authors only provide an algorithm for the vertex-deletion variant, and do not consider deletable/undeletable arcs. It is easy to make an arc undeletable by adding enough parallel paths between the two endpoints, and we show at the end of the section that our formulation of reduces to the simple vertex-deletion variant. The vertex-deletion variant also admits a randomized polynomial kernel, and other FPT results are known for weighted arc-deletion variants <cit.>. So let us fix a feasible assignment X for the remainder of the section. We will denote M_S = M_S(X) and N_S = N_S(X). We also consider the following set of vertices associated with N_S: N'_S = {v_2 : v ∈ N_S} N”_S = {v_t ∈N_S : t ∈ I_S}. For each base vertex v ∈ N_S, we need N'_S to contain any vertex of v in time [2, T - 1], so we choose v_2 arbitrarily. Then, N”_S contains those vertices v_t, with t ∈ I_S, not chosen by the feasible assignment X. Note that according to our definition of agreement, a solution X^* should contain all the neighbors of N”_S vertices that are in Z_S. Recall that we have defined Z_S = V_B ∖ V_S and Z'_S = S ∖ V'_S. By Lemma <ref> we know that Z'_S covers each temporal edge of G[V_B ∖{w}] not covered by S ∩ V'_S, and that sp(Z'_S) = 0. We may assume that for each v ∈ Z_S, there is exactly one t ∈ [T] such that v_t ∈ Z'_S (there cannot be more than one since Z'_S has span 0, and if there is no such t, we can add any v_t without affecting the span). 
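As a concrete (but deliberately naive) reference for the problem just defined, the sketch below tries every set of at most k' deletable arcs and checks reachability. It only illustrates the definition: it is exponential in the number of deletable arcs and is not the O^*(2^k') algorithm of the cited work. All identifiers are ours.

```python
from itertools import combinations

def _reachable(vertices, arcs, s):
    """Set of vertices reachable from s in the digraph (vertices, arcs)."""
    adj = {v: [] for v in vertices}
    for (u, v) in arcs:
        adj[u].append(v)
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def digraph_pair_cut_naive(vertices, arcs, s, pairs, deletable, k):
    """Exhaustive check of the definition above: look for a set F of at most
    k deletable arcs such that s reaches at most one vertex of every forbidden
    pair in H - F. Returns F (possibly empty) or None if no such set exists,
    so callers should compare the result with None."""
    deletable = [a for a in deletable if a in set(arcs)]
    for size in range(k + 1):
        for F in combinations(deletable, size):
            removed = set(F)
            remaining = [a for a in arcs if a not in removed]
            reach = _reachable(vertices, remaining, s)
            if all(not (u in reach and v in reach) for (u, v) in pairs):
                return removed
    return None
```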
Furthermore, we will assume that for each v ∈ Z_S, the vertex v_t in Z'_S is not v_1 nor v_T. Indeed, since we assume that the first and last timestamps of G have no edges, if v_t = v_1 or v_t = v_T, then v_t covers no edge and we may safely change v_p to another vertex of v. The following observation will be useful for our reduction to . Let u_t v_t ∈ E(G) such that u ∈ N_S and v ∉ M_S. Then v ∈ Z_S and, if u_t ∉ N”_S, we have v_t ∈ Z'_S. Now, given a feasible assignment X ⊆ V'_S, sets M_S, N_S, N'_S, N”_S, Z_S, and Z'_S, we present our reduction to the problem. We construct an instance of this problem that consists of the directed graph H = (V(H), A(H)), the set of forbidden (unordered) pairs P ⊆V(H) 2, and the deletable arcs D ⊆ A(H) by applying the following steps. The second step in the construction is the most important and is shown in Figure <ref>. The intuition of these steps is provided afterwards. * add to H the source vertex s; * for each v ∈ Z_S ∪ N_S, let v_i be the vertex of Z'_S ∪ N'_S, where i ∈ [2,T-1]. Add to H the vertices 1, …, i-1, i, i+1, …, T, the vertices b_v,j, c_v,j, d_v,j, for j ∈ [T] ∖{i}, and the set of arcs shown in Figure <ref>, that is there are arcs (j,b_v,j), (j,c_v,j), (c_v,j, d_v,j), (d_v,j, j), for each j ∈ [T] ∖{i} and four directed paths: (1) from b_v,i-1 to b_v,1, (2) from c_v,1 to c_v,i-1, (3) from b_v,i+1 to b_v,T and (4) from c_v,T to c_v,i+1. Add to D the set of deletable arcs (c_v, j, d_v,j), for j ∈ [T] ∖{i}. Then add the following pairs to P: * {d_v,h, b_v,j}, with 1 ≤ h < j ≤ i-1; * {d_v,h, b_v,j}, with i+1 ≤ j < h ≤ T; * {c_v,h, d_v,j}, with 1 ≤ h ≤ i-1 ≤ i+1 ≤ j ≤ T; * {c_v,h, d_v,j}, with 1 ≤ j ≤ i-1 ≤ i+1 ≤ h ≤ T. Note that we have created T + 3(T - 1) = 4 T-3 vertices in H in this step. The subgraph of H induced by these vertices will be called the gadget corresponding to v. * for each temporal edge u_t v_t ∈ E(G) such that u_t, v_t ∈Z_S∪ (N_S∖ N”_S), there are three cases. First note that at least one of u_t or v_t is in Z'_S. Indeed, if u, v ∈ Z_S, this is because an element of Z'_S must cover the temporal edge, and if u ∈ N_S, then v_t ∈ Z'_S by Observation <ref> (or if v ∈ N_S, u_t ∈ Z'_S). The subcases are then: * if u_t, v_t ∈ Z'_S ∪ N'_S, add the pair {t, t} to P; * if u_t ∈ Z'_S ∪ N'_S, v_t ∉ Z'_S ∪ N'_S, add the arc (t, t) to H; * if v_t ∈ Z'_S ∪ N'_S, u_t ∉ Z'_S ∪ N'_S, add the arc (t, t) to H; * for each temporal edge u_t v_t ∈ E(G) such that u_t ∈ (M_S∖ X) ∪ N”_S and v_t ∈Z_S, there are two cases: * if v_t ∉ Z'_S, add the arc (s, t) to H; * if v_t ∈ Z'_S, add the pair {s, t} to P. Define k' = k - sp(X). This concludes the construction. We will refer to the elements 1, 2, 3, 4 of the above enumeration as the Steps of the construction. Note that the only deletable arcs in D are the arcs (c_v,j, d_v,j) introduced in Step 2. From here, the interpretation of H is that if we delete arc set F, then (p1) For v_t ∉ Z'_S ∪ N'_S we should include v_t in X^* if and only if s reaches t in H - F; (p2) For v_t ∈ Z'_S ∪ N'_S we should include v_t in X^* if and only if s does not reach t in H - F. The idea behind the steps of the construction is then as follows (and is somewhat easier to describe in the reverse order of steps). Step 4 describes an initial set of vertices that s is forced to reach, which correspond to vertices that are forced in X^*. A vertex v_t in Z_S is forced in X^* if it is in an edge u_t v_t and u_t ∈M_S but u_t ∉ X. By our definition of agreement, v_t is also forced if u_t ∈ N_S”. 
Step 4 handles both situations: if v_t ∉ Z'_S, we force s to reach t with the arc (s, t), which is not deletable. If v_t ∈ Z'_S, then t∈ V(H), and s is forced to not reach t by adding {s, t} to P. By (p1) and (p2), both cases correspond to including v_t in X^*. Then, Step 3 ensures that each temporal edge is “covered”: for a temporal edge u_t v_t, a pair of the form {t, t} in P requires that s does not reach one of the two, i.e. that we include one in X^*, and an undeletable arc of the form (t, t) enforces that if s reaches t (i.e. u_t ∉ X^*), then s reaches t (i.e. v_t ∈ X^*). The reason why Z'_S is needed in our construction is that each edge has at least one negative corresponding vertex, so that no other case needs to be considered in Step 3. Finally, Step <ref> enforces the number of deleted arcs to correspond to the span of a solution. That is, it ensures that if we want to add to X^* a set of h vertices of base vertex v ∈ Z_S to our solution of (so with a span equal to h-1), then we have to delete h-1 deletable arcs of the corresponding gadget of H in order to obtain a solution to (and vice-versa). Indeed, consider the gadget in Fig. <ref>. If v_i is not included in X^*, then in the gadget s reaches h positive vertices v_l^+, … , v_r^+ (and i). It follows that vertices b_v,l, …, b_v,r, c_v,l, …, c_v,r and d_v,l, …, d_v,r are all reachable from s. The pairs { d_v,x, b_v,y} defined at Step 2, where either l ≤ x ≤ y ≤ r-1 if r < i, or l+1 ≤ x ≤ y ≤ r if l > i, ensures that arcs (c_v,j, d_v,j), with j ∈ [l,r-1] in the former case or with j ∈ [l+1,r] in the latter case, are deleted. If v_i is included in X^*, then in the gadget s reaches h-1 positive vertices v_l^+, …, v_r^+, with i ∈ [l , r], and must not reach negative vertex i. It follows that vertices b_v,l, …, b_v,r, c_v,l, …, c_v,r and d_v,l, …, d_v,r are all reachable from s. Then h-1 arcs (c_v,j, d_v,j), with j ∈ [l,r] ∖{i}, must be deleted, due to the pairs { d_v,x, b_v,y}, { c_v,x, d_v,y} defined at Step 2. Note that Step 2 is the reason we added dummy timestamps 1 and T. If v_1 or v_T were allowed to be in Z'_S ∪ N'_S, we would need a different gadget for these cases as they behave a bit differently, along with more cases in the proofs. Adding the edgeless timestamps lets us bypass these cases. We now proceed with the details. There exists a solution of that agrees with X if and only if there is F ⊆ D with |F| ≤ k' such that s does not reach a forbidden pair in H - F. Moreover, given such a set F, a solution of can be computed in polynomial time. (⇒) Suppose that there exists a solution X^* of that agrees with X. By definition of , X^* has span at most k. Note that for v ∈ M_S, the agreement requires that X^* ∩v = X ∩v, and so the span of v in X^* is the same as the span of v in X. Thus ∑_v ∈ Z_S ∪ N_S sp(v, X^*) ≤ k - sp(X) = k'. We may assume that for every v ∈ V_B, at least one of v_2, …, v_T-1 is in X^*, as otherwise we add one arbitrarily without affecting the span (if only v_1 or v_T is in X^*, remove it first). For each v ∈ Z_S ∪ N_S, consider the gadget corresponding to v in H and delete some of its dashed arcs as follows (we recommend referring to Figure <ref>). First, if only one of v is in X^*, no action is required on the gadget. So assume that X^* ∩v has at least two vertices; in the following we denote v_l = v_δ(v,X^*) and v_r = v_Δ(v,X^*) the vertices associated with v having minimum and maximum timestamp, respectively, contained in X^*. We assume that l,r ∈ [2, T-1] and l < r. 
Note that X^* ∩v = { v_l, v_l+1, …, v_r}. Let v_i ∈ Z'_S ∪ N'_S, where i ∈ [2, T-1]. Then * suppose that l,r ∈ [2,i-1], then: delete every arc (c_v,q, d_v,q), with l ≤ q ≤ r-1 * suppose that with l,r ∈ [i+1,T-1], then: delete every arc (c_v,q, d_v,q), with l+1 ≤ q ≤ r * suppose that l ∈ [2,i] and r ∈ [i,T-1], then: delete every arc (c_v,q, d_v,q), with l ≤ q ≤ i-1, and delete every arc (c_v,q, d_v,q), with i+1 ≤ q ≤ r. We see that by construction for all v ∈ Z_S ∪ N_S, the number of arcs deleted in the gadget corresponding to v is equal to the number of vertices in X^* ∩v minus one, that is the span of v in X^*. Since these vertices have span at most k', it follows that we deleted at most k' arcs from H. Denote by H' the graph obtained after deleting the aforementioned arcs. We argue that in H', s does not reach a forbidden pair. To this end, we claim the following. For v ∈ Z_S ∪ N_S and t ∈ [T], if s reaches t in H', then v_t ∈ X^*, and if s reaches t in H', then v_t ∉ X^*. Now, armed with the above claim, we can prove that in H', s does not reach both vertices of a forbidden pair q ∈ P, thus concluding this direction of the proof. (⇐) Suppose that there is a set F ⊆ D with at most k' arcs such that s does not reach a forbidden pair in H - F. Denote H' = H - F. We construct X^* from F, which will also show that it can be reconstructed from F in polynomial time. Define X^* ⊆ V(G) as follows: * for each v ∈ M_S, add every element of X ∩M_S to X^*; * for each v_t ∈ V(G) ∖M_S, we add v_t to X^* if and only if one of the following holds: (1) t∈ V(H) and s reaches t in H'; or (2) t∈ V(H), and s does not reach t in H'; * for each v_j, v_h ∈ X^* with j < h, add v_t to X^* for each t ∈ [j+1,h-1]. Note that X^* agrees with X. Indeed, for v ∈ M_S, there is no gadget corresponding to v in the construction and thus we only add X ∩v to X^*. For u ∈ N_S, consider u_t ∈ N_S” and a neighbor v_t of u_t in Z_S. If v_t ∉ Z'_S, Step 4 adds an undeletable arc from s to t, hence s reaches that vertex and we put v_t in X^*. If v_t ∈ Z'_S, Step 4 adds {s, t} to P, and thus s does not reach t in H', and again we add v_t to X^*. Therefore, we add all the Z_S neighbors of u_t to X^*, and so it agrees with X. We can prove that X^* covers every temporal edge of G and that sp(X^*) ≤ k. §.§ Wrapping up Before concluding, we must show that we are able to use the results of <cit.> to get an FPT algorithm for , as we have presented it. As we mentioned, the FPT algorithm in <cit.> studied the vertex-deletion variant and does not consider undeletable elements, but this is mostly a technicality. Roughly speaking, in our variant, it suffices to replace each vertex with enough copies of the same vertex, and replace each deletable arc (u, v) with a new vertex, adding arcs from the u copies to that vertex, and arcs from that vertex to the v copies. Deleting (u, v) corresponds to deleting that new vertex. For undeletable arcs, we apply the same process but repeat it k' + 1 times. The problem can be solved in time O^*(2^k), where k is the number of arcs to delete. We are able now to prove the main result of our contribution. on a temporal graph G=(V_B, E, ) can be solved in time O^*(2^5k log k). First, we discuss the correctness of the algorithm we presented. Assume that we have an ordering on the base vertices of G and that v is the first vertex of this ordering. A solution S of on G[{v}] is equal to S = ∅. Then for i, with i ∈ [2,|V_B|], let G_i be the temporal graph induced by the first i vertices and let w be the i+1-th vertex. 
Given a solution S of on instance G_i of span at most k, we can decide whether there exists a solution of on instance G_i+1 by computing whether there exists a solution X^* of the problem on instance G_i, w, S. By Lemma <ref> and by Theorem <ref> if there exists such an X^*, then there exists a feasible assignment X that agrees with X^*. By Lemma <ref> we can compute, via the reduction to , whether there exists a solution of on instance on instance G_i, w, S, and if so obtain such a solution (if no such solution X^* exists, then Lemma <ref> also says that we will never return a solution, since every feasible assignment X that we enumerate will lead to a negative instance of ). Thus the subproblem is solved correctly, and once it is solved on G_|V_B|, we have a solution to . Now, we discuss the complexity of the algorithm. We must solve |V_B| times. For each iteration, by Theorem <ref> we can enumerate the feasible assignments in O(2^4k log k T^3 n) time. For each such assignment, the reduction from to requires polynomial time, and each generated instance can be solved in time O^*(2^k). The time dependency on k is thus O^*(2^4k log k· 2^k), which we simplify to O^*(2^5k log k). § CONCLUSION We have presented a randomized FPT algorithm for the problem, a variant of on temporal graph recently considered for timeline activities summarizations. We point out some relevant future directions on this topic: (1) to improve, if possible, the time complexity of by obtaining a single exponential time algorithm (of the form O^*(c^k)); (2) to establish whether admits a polynomial kernel, possibly randomized (which it might, since famously admits a randomized polynomial kernel); and (3) to extend the approach to other variants of . § APPENDIX §.§ Proof of Lemma <ref> Let S be a solution of on instance G[V_B ∖{w}] and let I_S be the associated set of timestamps. Then |I_S| ≤ 2k. The result follows from the fact that, since |V_S ∖{ w}| ≤ k and V_S ∖{w} contains only vertices of positive span, 2k timestamps contribute at least k to the span of S. §.§ Proof of Lemma <ref> Let S be a solution of on instance G -{w}. Then, sp(Z'_S) = 0. Moreover, Z'_S covers each temporal edge of G -{w} not covered by V'_S∖w. Since S is a cover of G -{w}, it follows that the temporal edges not covered by V'_S∖{w} must be covered by vertices of Z'_S. Furthermore, by the definition of Z'_S, it follows that sp(Z'_S)=0. §.§.§ Proof of Theorem <ref> The above steps enumerate every feasible assignment in time O(2^4 k log k T^3 n), where n = |V_B|. We first argue that every feasible assignment is enumerated. Consider a feasible assignment X. First, consider the (non-empty) intersection X_w = X ∩w. Since in Step (1) we branch into every non-empty assignment of {w}, then we eventually enumerate X_w. In what follows, we assume that we are in the branch where X_w is added. Now, consider a vertex v ∈ V_S ∖{ w } and let X_v = X ∩v. If no vertex of v belongs to X (hence X_v is empty), then Step (3) enumerates this case (note that Step (2) does not add a vertex of v either: X_v empty implies that there is no edge of the form v_tw_t with w_t ∉ X, since X must cover this edge). Assume that X_v is non-empty. By the definition of a feasible assignment, some v_t is in X_v to cover an edge incident to a vertex of w not covered by X_w, or v_t is in a timestamp of I_S. In the former case, v_t is added in Step (2), and in the latter case, one of the branches of Step (3) will add v_t to the set under construction. 
Since the span of X and hence of X_v is at most k, it follows that X_v is an assignment of {v} of span at most k, and Step (4) will branch into a case where it adds X ∩v. It follows that X will be enumerated at some point. Now, we discuss the number of feasible assignments enumerated by Steps (1) – (4). Step (1) is computed in O(T^2) time, as there are O(T^2) possible non-empty intervals X_w. For each branch defined at Step (1), Step (2) can be computed in Tn time as the vertices in w can have at most Tn neighbors. Step (3), for each vertex v ∈ V_S ∖{w}, branches into at most 2k + 1 cases, as from Lemma <ref> we have that |I_S| ≤ 2k. Since there are at most k vertices in V_S ∖{w}, the number of branches explored in Step (3) is at most (2k + 1)^k. As for Step (4), for each branch defined in Step (3) for v ∈ V_S ∖{w} such that X ∩v≠∅, it branches over O(k^2) possible choices of timestamps a, b that are endpoints of X_v. Again since |V_S| ≤ k, the number of branches explored in Step (4) is at most O(k^2k). Thus the overall time to enumerate the feasible assignments with Steps (1) – (4) is O(T^2 · Tn · (2k+1)^k k^2k), which is O(2^4 k log k T^3 n) for k ≥ 3 (and if k ≤ 2, it is constant and this still holds). §.§ Proof of Observation <ref> Let u_t v_t ∈ E(G) such that u ∈ N_S and v ∉ M_S. Then v ∈ Z_S and, if u_t ∉ N”_S, we have v_t ∈ Z'_S. There cannot be an edge u_t v_t between two vertices of N_S, since X must cover every edge in G[V_S] and contains no vertices of N_S. Thus v ∉ N_S, and since v ∉ M_S, we have v ∈ Z_S. Next suppose that u_t ∉ N”_S. Then t ∉ I_S, implying that u_t ∉ S. Hence, the edge must be covered by v_t, which is in S ∖ V'_S = Z'_S. §.§ Proof of Lemma <ref> There exists a solution of that agrees with X if and only if there is F ⊆ D with |F| ≤ k' such that s does not reach a forbidden pair in H - F. Moreover, given such a set F, a solution of can be computed in polynomial time. (⇒) Suppose that there exists a solution X^* of that agrees with X. By definition of X^* has span at most k. Note that for v ∈ M_S, agreement requires that X^* ∩v = X ∩v, and so the span of v in X^* is the same as the span of v in X. Thus ∑_v ∈ Z_S ∪ N_S sp(v, X^*) ≤ k - sp(X) = k'. We may assume that for every v ∈ V_B, at least one of v_2, …, v_T-1 is in X^*, as otherwise we add one arbitrarily without affecting the span (if only v_1 or v_T is in X^*, remove it first). For each v ∈ Z_S ∪ N_S, consider the gadget corresponding to v in H and delete some of its dashed arcs as follows (we recommend referring to Figure <ref>). First, if only one of v is in X^*, no action is required on the gadget. So assume that X^* ∩v has at least two vertices; in the following we denote v_l = v_δ(v,X^*) and v_r = v_Δ(v,X^*) the vertices associated with v having minimum and maximum timestamp, respectively, contained in X^*. We assume that l,r ∈ [2, T-1] and l < r. Note that X^* ∩v = { v_l, v_l+1, …, v_r}. Let v_i ∈ Z'_S ∪ N'_S, where i ∈ [2, T-1]. Then * suppose that l,r ∈ [2,i-1], then: delete every arc (c_v,q, d_v,q), with l ≤ q ≤ r-1 * suppose that with l,r ∈ [i+1,T-1], then: delete every arc (c_v,q, d_v,q), with l+1 ≤ q ≤ r * suppose that l ∈ [2,i] and r ∈ [i,T-1], then: delete every arc (c_v,q, d_v,q), with l ≤ q ≤ i-1, and delete every arc (c_v,q, d_v,q), with i+1 ≤ q ≤ r. We see that by construction for all v ∈ Z_S ∪ N_S, the number of arcs deleted in the gadget corresponding to v is equal to the number of vertices in X^* ∩v minus one, that is the span of v in X^*. 
Since these vertices have span at most k', it follows that we deleted at most k' arcs from H. Denote by H' the graph obtained after deleting the aforementioned arcs. We argue that in H', s does not reach a forbidden pair. To this end, we claim the following. For v ∈ Z_S ∪ N_S and t ∈ [T], if s reaches t in H', then v_t ∈ X^*, and if s reaches t in H', then v_t ∉ X^*. The proof is by induction on the distance between s and the vertex. As a base case, consider the out-neighbors of s in H'. Suppose that t is such that (s, t) ∈ A(H'). This arc was added to H by Step <ref> because G contains a temporal edge u_t v_t with u_t ∈ (M_S∖ X) ∪ N”_S and v_t ∈Z_S. If u_t ∈M_S∖ X, then u_t ∉ X and u_t ∉ X^* either, since it agrees with X. Thus we must have v_t ∈ X^* to cover the edge. If u_t ∈ N”_S, then v_t ∈ X^* holds by the definition of agreement. By inspecting the construction of H, we see that s does not have an out-neighbor of the form t, and so this suffices for the base case. Now consider a vertex of the form t or t at distance greater than 1 from s in H', and assume by induction that the claim holds for vertices of this form at a smaller distance. Suppose that this vertex is t. By inspecting the construction, we see that the only possible in-neighbors of t are either s, or some t. The s case was handled as a base case, and so we assume the latter. Thus any shortest path from s to t ends with an arc (t, t). Since s reaches t with a shorter path, we know by induction that u_t ∉ X^*. Moreover, the arc (t, t) must have been created on Step <ref>, and so u_t v_t ∈ E(G). Therefore, v_t ∈ X^* must hold to cover u_tv_t, as desired. So consider instead a vertex of the form t that s reaches in H'. Assume for contradiction that v_t ∈ X^*. By inspecting the construction, we see that the only in-neighbors of t belong to the gadget corresponding to v. Moreover, the only way to reach t from s is to go through some j, where j ∈ [T] ∖{t}, and then through some other vertices of the gadget. Consider a shortest path from s to t in H', and let j be the first vertex of the gadget corresponding to v in this path. By induction, we may assume that v_j ∈ X^*, and hence the span of v is at least one, since we are currently assuming that v_t ∈ X^*. Now, the existence of t implies that v_t ∈ Z'_S ∪ N'_S. Consider {v_l,v_l+1, …, v_r}⊆ X^*. Then we have l ∈ [2,t], r ∈ [t, T - 1], and j ∈ [l,r] ∖{t}. In this case, all the arcs (c_v,y, d_v,y), with y ∈ [l,r] ∖{t} are removed. Moreover, by inspecting the steps of the construction, we see that the only out-neighbors of j are b_v,j and c_v, j. Thus j can only reach t through a c_v, y and then a d_v, y vertex. However, because of our deletions, v_j^+ cannot reach any vertex d_v,y, thus it cannot reach t, leading to a contradiction. We deduce that v_t ∉ X^*, which concludes the proof of the claim. Now, armed with the above claim, assume for contradiction that in H', s reaches both vertices of a forbidden pair q ∈ P. If q was created on Step <ref>, then q = {t, t}, where u_t v_t is an edge of G. By Claim <ref>, this implies that u_t, v_t ∉ X^*, a contradiction since X^* would not cover the edge. If q was created on Step <ref>, then q = {s, t}, where there is an edge u_t v_t ∈ E(G) such that u_t ∈ (M_S∖ X) ∪ N”_S and v_t ∈Z_S. Under the assumption that s reaches t, Claim <ref> implies that v_t ∉ X^*. If u_t ∈ N”_S, this contradicts the fact that X^* agrees with X, since v_t should be in X^*. 
If u_t ∈M_S∖ X, then u_t ∉ X implies that u_t ∉ X^* , reaching a contradiction since the edge u_t v_t is not covered. We may thus assume that q was created on Step <ref>. Let v ∈ Z_S ∪ N_S be the base vertex for which the corresponding gadget contains the two vertices of q. Let v_i ∈ Z'_S ∪ N'_S, where i ∈ [2, T - 1]. Assume that q is {d_v,h, b_v,j} with h < j < i. Then s must reach l, c_v,l, c_v,h (possibly identical to c_v,l) and d_v,h, with l ≤ h. On the other hand, since s reaches b_v,j, we have that s must reach r and b_v, r, where r ∈ [j, i - 1]. By Claim <ref>, v_l, v_r ∈ X^*, in which case we deleted the deletable arc (c_v,h, d_v,h) (since h < j, and either the first or third situation arises in our list of three cases of arc deletions). Thus s cannot reach d_v,h. Likewise, assume that q is {d_v,h, b_v,j} with i < j < h. Then s must reach c_v,r, c_v,h (possibly identical to c_v,r) and d_v,h, with r ≥ h. On the other hand, s must reach l, with l ≤ j <h. By Claim <ref>, v_l, v_r ∈ X^*, in which case we deleted the deletable arc (c_v,h, d_v,h). Thus s cannot reach d_v,h. Assume that q is {c_v,h, d_v,j} or {c_v,j, d_v,h} with h < i < j. Then s must reach l, c_v,l, c_v,h, possibly identical to c_v,l and d_v,h, if (c_v,h, d_v,h) is not deleted. On the other hand, s must reach c_v,r, c_v, j, possibly identical to c_v,r, and d_v,j, if (c_v,i, d_v,j) is not deleted. By Claim <ref>, v_l, v_r ∈ X^*, in which case we deleted the deletable arcs (c_v,h, d_v,h) and (c_v,j, d_v,j). Thus s cannot reach d_v,h and d_v,j. Having handled every forbidden pair, we deduce that we can remove at most k' edges from H so that s does not reach any of them. (⇐) Suppose that there is a set F ⊆ D with at most k' arcs such that s does not reach a forbidden pair in H - F. Denote H' = H - F. We construct X^* from F, which will also show that it can be reconstructed from F in polynomial time. Define X^* ⊆ V(G) as follows: * for each v ∈ M_S, add every element of X ∩M_S to X^*; * for each v_t ∈ V(G) ∖M_S, we add v_t to X^* if and only if one of the following holds: (1) t∈ V(H) and s reaches t in H'; or (2) t∈ V(H), and s does not reach t in H'; * for each v_j, v_h ∈ X^* with j < h, add v_t to X^* for each t ∈ [j+1,h-1]. Note that X^* agrees with X. Indeed, for v ∈ M_S, there is no gadget corresponding to v in the construction and thus we only add X ∩v to X^*. For u ∈ N_S, consider u_t ∈ N_S” and a neighbor v_t of u_t in Z_S. If v_t ∉ Z'_S, Step 4 adds an undeletable arc from s to t, hence s reaches that vertex and we put v_t in X^*. If v_t ∈ Z'_S, Step 4 adds {s, t} to P, and thus s does not reach t in H', and again we add v_t to X^*. Therefore, we add all the Z_S neighbors of u_t to X^*, and so it agrees with X. We claim that X^* covers every temporal edge of G. Since X is a feasible assignment, every temporal edge u_t v_t ∈ E(G) with u, v ∈ V_S is covered by X, and thus also by X^* since X ⊆ X^*. Next consider a temporal edge u_t v_t ∈ E(G) with u_t ∈M_S∪ N”_S and v_t ∈Z_S. If u_t ∈ X, the temporal edge is covered since X ⊆ X^*. So assume that u_t ∉ X, i.e. u_t ∈ (M_S∖ X) ∪ N”_S. If v_t ∉ Z'_S, Step <ref> adds an undeletable arc from s to t, which implies that this arc is in H'. Thus s reaches t and v_t ∈ X^* by construction, and u_t v_t is covered. If v_t ∈ Z'_S, then {s, t} is in P owing to Step <ref>, and thus s does not reach t in H'. Again by construction, v_t ∈ X^*. Finally, consider a temporal edge u_t v_t ∈ E(G) with u_t, v_t ∈Z_S∪ (N_S∖ N”_S). 
As argued in Step 3, we know that at least one of u_t or v_t is in Z'_S. Suppose that both u_t, v_t ∈ Z'_S ∪ N'_S. Then by Step <ref>, {t, t}∈ P, and there is at least one of the two that s does not reach. By construction, one of u_t or v_t is in X^* and the temporal edge is covered. So suppose, without loss of generality, that u_t ∈ Z'_S ∪ N'_S, v_t ∉ Z'_S ∪ N'_S. Then t∈ V(H) and t∈ V(H). If s does not reach t in H', then u_t ∈ X^* and the temporal edge is covered. Thus we may assume that s reaches t in H'. By Step <ref>, there is an undeletable arc (t, t) in H, and thus in H', which implies that s reaches t. Thus v_t ∈ X^* and the temporal edge is covered. Note that we have covered every case of a possible temporal edge, and we deduce that X^* covers every temporal edge. We next claim that sp(X^*) ≤ k. Since X^* ∩M_S = X, the vertices in M_S have span equal to sp(X). We must argue that the vertices of V_B ∖ M_S have a span of at most k' = k - sp(X). Consider a vertex v ∈ V_B ∖ M_S = N_S ∪ Z_S that has span sp(v, X^*) more than 0 in X^* (recall that sp(v, X^*) denotes the span of v in X^*). We want to show that sp(v, X^*) edges of H were deleted in the gadget corresponding to v. In the following we denote by v_l and v_r, with l,r ∈ [2, T - 1] the minimum and maximum timestamp, respectively, such that v_l ∈ X^* and v_r ∈ X^*. Let v_i ∈ Z'_S ∪ N'_S, where i ∈ [l, r]. Suppose that r < i. Then by the construction of X^*, s reaches l and r, hence s reaches (1) c_v,l and thus c_v,j, for each j ∈ [l,i-1], and (2) b_v,r and thus b_v,j, for each j ∈ [1,r]. Thus the arcs (c_v,j, d_v,j), with j ∈ [l,r-1], have to be deleted due the forbidden pairs {d_v,j, b_v,y}, with l ≤ j < y ≤ r. This amounts to r -l deletions, which is the span of v in X^*. Suppose instead that with i < l. Similarly to the previous case, by the construction, s reaches l and r, hence s reaches (1) c_v,r and thus c_v,j, for each j ∈ [i+1,r], and (2) b_v,l and thus b_v,j, for each j with j ∈ [l, T]. Thus the arcs (c_v,j, d_v,j), for each j ∈ [l+1,r], have to be deleted due the forbidden pairs {d_v,j, b_v,y}, with l ≤ y < j ≤ r. Again, this amounts to r - l deletions, which is the span of v in X^*. Finally, suppose that l ≤ i ≤ r. We have three cases depending on the fact that l = i, r = i or l < i < r. Consider the first case l = i < r. Thus by the construction of X^*, s reaches v_r^+ but does not reach v_i^-. Moreover, s reaches c_v,r thus s reaches c_v,j, for each j ∈ [i+1,r]. Thus arcs (c_v,j, d_v,j), for each j ∈ [i+1,r], have to be deleted in order to make i not reachable from s. This amounts to r- i = r- l deletions, which is the span of v in X^*. Consider the second case l < i = r. Similarly to the previous case, s reaches l but does not reach i. Moreover, s reaches c_v,j, for each j with j ∈ [l,i-1]. Thus arcs (c_v,j, d_v,j), for each j ∈ [l,i-1] have to be deleted in order to make i not reachable from s. This amounts to i - j = r - l deletions, which is the span of v in X^*. Finally, consider the third case l < i < r. Then arcs (c_v,j, d_v,j), with j ∈ [l,i-1] and (c_v,j, d_v,j), with j ∈ [i+1,r], have to be deleted due to forbidden pairs {c_v,j, d_v,z}, with j < i < z and forbidden pairs {c_v,z, d_v,j}, with z < i < j. This requires i - l + r - i = r - l deletions, which is the span of v in X^*. We thus see that each vertex v of V_B ∖ M_S has a span that is at most the number of arcs deleted in the gadget of H corresponding to v. 
Therefore, X^* is a temporal cover of span at most sp(X) + k' ≤ k, thus completing the proof. §.§ Proof of Lemma <ref> The problem can be solved in time O^*(2^k), where k is the number of arcs to delete. We call Vertex-Deletion Digraph Pair Cut the problem in which, given a directed graph H, a source s ∈ V(H), pairs ⊆V 2, and integer k, we must decide whether there is R ⊆ V(H) ∖{s} with |R| ≤ k such that in H - R, s does not reach both u and v for every {u, v}∈ (note that H - R removes vertices here, not arcs). In <cit.>, this problem was shown to be solvable in time O^*(2^k). We show that as we defined it reduces to Vertex-Deletion Digraph Pair Cut, with the same parameter value k. Suppose that we have an instance of Digraph Pair Cut, with directed graph H, source s, pairs , deletable arcs D, and integer k. From this instance, obtain an instance of Vertex-Deletion Digraph Pair Cut with directed graph H', source s', pairs ', and integer k as follows (note that k is unchanged). First for each u ∈ V(H), add k + 1 copies u^1, …, u^k+1 of u to V(H'). Also add a new vertex s' to V(H'), which serves as the source for the modified instance. Add to A(H') the set of arcs (s', s^1), …, (s', s^k+1). Then for each deletable arc (u, v) ∈ D, add to H' a new vertex uv1, and the set of arcs {(u^i, uv1) : i ∈ [k+1]}∪{(uv1, v^j : j ∈ [k+1]} Finally, for each undeletable arc (u, v) ∈ A(H) ∖ D, add to H' the k + 1 new vertices uv1, uv2, …, uvk+1, and then add to A(H') the set of arcs {(u^i, uvl) : i ∈ [k+1], l ∈ [k+1]}∪{(uvl, v^j) : l ∈ [k+1], j ∈ [k+1]} In other words, each vertex of H has k + 1 corresponding vertices in H', making the latter pointless to delete. For (u, v) ∈ D, deleting the arc corresponds to deleting uv1 since it removes the path of length 2 from every u^i to every v^j. For (u, v) ∈ A(H) ∖ D, there are too many uvl copies, making them pointless to delete. Finally, for each {u, v}∈, we add to ' all the pairs {u^i, v^j} for every i, j ∈ [k+1]. Assume that there is F ⊆ D with |F| ≤ k such that s reaches no pair of P in H - F. In H', we delete the set of vertices R = {uv1 : (u, v) ∈ F }. Note that |R| = |F| ≤ k. Suppose for contradiction that in H' - R, s' reaches both {u^i, v^j}∈'. Since every arc of H' is incident to a vertex of the form uvl, in H', the path from s' to u^i in H' - R has the form s' → s^b_0→sx_1a_1→ x_1^b_1→x_1x_2a_2→ x_2^b_2→x_2x_3a_3→…→x_lua_l+1→ u^i for some vertices x_1, …, x_l ∈ V(H) and indices b_0, a_1, b_1, …, a_l+1. By our construction of R, this means that in H - F, all the arcs (s, x_1), (x_1, x_2), …, (x_l, u) are present, and that s reaches u in H - F. By the same logic, s also reaches v in H - F, a contradiction since {u^i, v^j}∈' implies that {u, v}∈. Thus R is a solution for H'. Conversely, assume that there is R ⊆ V(H') ∖{s'} with |R| ≤ k such that s reaches no pair of P' in H' - R. We may assume that R does not contain a vertex u^i with u ∈ V(H), since R cannot contain every copy of u. Likewise, for undeletable (u, v) ∈ A(H) ∖ D, we may assume that R does not contain a vertex uvl since R cannot contain every copy. Therefore, we may assume that R only contains vertices of the form uv1, where (u, v) ∈ D. Define F = {(u, v) : uv1∈ R }. Note that F has at most |R| ≤ k arcs, and they are all deletable. Now suppose for contradiction that s reaches both u, v for {u,v}∈ in H - F. Hence in H - F there are paths P_u, P_v from s to u and v, respectively. Note that for every arc (x, y) of P_u, the vertex xy1 is still present in H' - R. 
Thus by replacing every (x, y) with the subpath x^1, xy1, y^1, we can obtain a path form s' to u^1 in H' - R. Likewise, there is a path from s' to v^1 in H' - R. This is a contradiction since {u, v}∈ implies that {u^i, v^j}∈'. Hence F is a valid solution for H. To conclude, constructing H' can clearly be done in time polynomial in |V(H)| + |A(H)|. It follows that we can solve the instance H in time O^*(2^k) by constructing H' and solving the vertex-deletion variant on it.
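The construction in the proof above is easy to mechanize. The sketch below builds the vertex-deletion instance along those lines (k + 1 copies of every original vertex, a fresh source, one midpoint vertex per deletable arc, k + 1 midpoints per undeletable arc, and forbidden pairs lifted to all copies); the tuple-shaped vertex names are an illustrative encoding of ours.

```python
def to_vertex_deletion_instance(vertices, arcs, s, pairs, deletable, k):
    """Build the vertex-deletion Digraph Pair Cut instance from the proof
    above, given an arc-deletion instance (vertices, arcs, s, pairs,
    deletable, k)."""
    def copies(u):
        return [("copy", u, i) for i in range(k + 1)]

    new_vertices = [("source",)]
    new_arcs = []
    for u in vertices:
        new_vertices.extend(copies(u))
    for c in copies(s):
        new_arcs.append((("source",), c))
    deletable = set(deletable)
    for (u, v) in arcs:
        n_mid = 1 if (u, v) in deletable else k + 1
        for l in range(n_mid):
            mid = ("mid", u, v, l)
            new_vertices.append(mid)
            new_arcs.extend((c, mid) for c in copies(u))
            new_arcs.extend((mid, c) for c in copies(v))
    new_pairs = [(cu, cv) for (u, v) in pairs
                 for cu in copies(u) for cv in copies(v)]
    # Deleting ("mid", u, v, 0) for a deletable arc (u, v) plays the role of
    # deleting the arc (u, v) in the original instance; every other vertex has
    # k + 1 interchangeable copies and is never worth deleting.
    return new_vertices, new_arcs, ("source",), new_pairs, k
```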
http://arxiv.org/abs/2307.02592v1
20230705184153
Matter-antimatter asymmetry and dark matter stability from baryon number conservation
[ "Mar Císcar-Monsalvatje", "Alejandro Ibarra", "Jérôme Vandecasteele" ]
hep-ph
[ "hep-ph" ]
[email protected] Physik-Department, Technische Universität München, James-Franck-Straße, 85748 Garching, Germany [email protected] Physik-Department, Technische Universität München, James-Franck-Straße, 85748 Garching, Germany [email protected] Physik-Department, Technische Universität München, James-Franck-Straße, 85748 Garching, Germany There is currently no evidence for a baryon asymmetry in our Universe. Instead, cosmological observations have only demonstrated the existence of a quark-antiquark asymmetry, which does not necessarily imply a baryon asymmetric Universe, since the baryon number of the dark sector particles is unknown. In this paper we discuss a framework where the total baryon number of the Universe is equal to zero, and where the observed quark-antiquark asymmetry arises from neutron portal interactions with a dark sector fermion N that carries baryon number. In order to render a baryon symmetric universe throughout the whole cosmological history, we introduce a complex scalar χ, with opposite baryon number and with the same initial abundance as N. Notably, due to the baryon number conservation, χ is absolutely stable and could have an abundance today equal to the observed dark matter abundance. Therefore, in this simple framework, the existence of a quark-antiquark asymmetry is intimately related to the existence (and the stability) of dark matter. Matter-antimatter asymmetry and dark matter stability from baryon number conservation Jérôme Vandecasteele August 1, 2023 ===================================================================================== § INTRODUCTION The Standard Model (SM) of Particle Physics describes with outstanding precision the results of a myriad of experiments involving particle reactions. However, several cosmological observations suggest that the SM should be extended. Two of the most solid evidences for New Physics beyond the SM are the existence of dark matter in our Universe <cit.> Ω_ DM,0h^2=0.120± 0.001 , and the existence of a cosmic asymmetry between the number of SM matter particles and their antiparticles, commonly expressed as the difference between the number density of baryons and antibaryons normalized to the entropy density <cit.> Y_B,0=n_B-n_B̅/s|_0=(8.75± 0.23)× 10^-11 . Furthermore, observations have revealed that the density in the form of dark matter is comparable to the density of Standard Model Matter (SMM) Ω_ DM/Ω_ SMM∼ 5. In 1967, Sakharov presented three necessary conditions that must be simultaneously fulfilled in order to generate a baryon asymmetry in our Universe <cit.>: (i) baryon number violation; (ii) C and CP violation; (iii) departure from thermal equilibrium. Many concrete models fulfilling these three conditions have been proposed which generate a baryon asymmetry in qualitative agreement with observations (see e.g. <cit.>). On the other hand, in these models the dark matter is typically not accounted for and assumed to be produced through a different mechanism, involving different particles and interactions, and occurring at different cosmic times. Hence, the similarity between the densities of protons and dark matter is merely coincidental. A popular framework to explain this similarity consists in postulating that the dark sector is asymmetric under a global dark symmetry, U(1)_D, while the visible sector is asymmetric under a global baryon symmetry, U(1)_B. 
If appropriate conditions are fulfilled in the dark sector, analogous to the Sakharov conditions, an asymmetry between dark matter particle and antiparticle could be generated, which is then transferred to the visible sector <cit.>. Alternatively, both asymmetries could be generated simultaneously in the dark and the visible sectors <cit.>. For reviews on asymmetric dark matter, see <cit.>). From the observational standpoint one cannot conclude that the Universe is baryon asymmetric, since the baryon number of the dark sector particles is unknown. Instead, one can only asseverate that the visible sector is baryon asymmetric, or more strictly, that the Universe contains an asymmetry between the total number of quarks and antiquarks, given by Y_Δ q,0= (2.63± 0.07)× 10^-10, which is obtained from multiplying Eq. (<ref>) by 3. In this paper we will argue that not all the Sakharov conditions are necessary to generate a quark-antiquark asymmetry. We will assume that the baryon and lepton numbers are exact symmetries of Nature, and we will show that a cosmic quark-antiquark asymmetry can be generated when the three following conditions are satisfied: (i) C is violated in the dark sector, (ii) there is departure from thermal equilibrium, (iii) there are portal interactions between the dark sector and the quarks. These conditions are (arguably) less restrictive that the Sakharov conditions, and seem plausible in dark sector scenarios. To illustrate the idea, we will consider a simple framework where the dark matter particle is a complex scalar carrying baryon number, and that the dark sector interacts with the visible sector via a “neutron portal" <cit.>. Imposing an initial asymmetry between the number of dark matter particles and antiparticles, we will show that the asymmetry in the dark sector is transmitted to the visible sector thus generating an asymmetry between the number of quarks and antiquarks. In this way, the yield of baryons in the visible sector and the yield of dark matter particles are naturally comparable. Furthermore, in this simple framework the dark matter stability is ensured by the conservation of the baryon number, and does not require additional ad hoc symmetries. Therefore, in this scenario the existence of dark matter in our Universe today is intimately related to the existence of a quark-antiquark asymmetry. The paper is organized as follows. In Section <ref> we present our scenario and we qualitatively describe its main characteristics. In section <ref> we present the Boltzmann equations for the temperature evolution of the yields of the various particle species, and we estimate the present value of the dark matter abundance and the quark-antiquark asymmetries in terms of the initial conditions of our scenario. In Section <ref> we discuss the constraints on the neutron portal and the prospects of detecting signals from the dark sector, and in Section <ref> the prospects of detecting a dark matter signal. Finally, in Section <ref> we present our conclusions. § DARK SECTOR BARYONS AND THEIR IMPACT ON THE VISIBLE SECTOR We consider a hidden sector containing a complex scalar χ and a Dirac fermion N, both singlets under the Standard Model gauge group, with masses m_χ and m_N respectively. We assume that these fields transform under an exactly conserved U(1)_B symmetry, with charges B(χ)=-1 and B(N)=+1. The baryon numbers of the proton and the neutron are defined as usual as B(p)=+1, B(n)=+1 and so are the baryon numbers of the quarks and antiquarks, B(q)=+1/3, B(q̅)=-1/3. 
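The factor-of-three conversion between the measured baryon-to-entropy ratio and the quark-antiquark asymmetry quoted above can be checked in a couple of lines. The sketch below is plain Python with simple linear error scaling; the input numbers are the observed values quoted in the text.

```python
# Quark-antiquark asymmetry from the measured baryon-to-entropy ratio:
# each baryon carries three quarks, so Y_dq = 3 * Y_B (errors scale linearly).
Y_B, dY_B = 8.75e-11, 0.23e-11     # Y_B,0 = (8.75 +/- 0.23) x 10^-11

Y_dq, dY_dq = 3.0 * Y_B, 3.0 * dY_B
print(f"Y_dq,0 = {Y_dq:.3e} +/- {dY_dq:.1e}")
# -> 2.625e-10 +/- 6.9e-12, consistent with (2.63 +/- 0.07) x 10^-10 quoted above
```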
The kinetic and mass terms in the Lagrangian of the dark sector baryons read ℒ⊃|∂_μχ|^2+m_χ^2 χ^*χ + (i∂-m_N)N. Further, the Lagrangian contains interaction terms among the dark sector particles, which we describe via dimension-5 effective operators of the form ℒ⊃1/Λ_0χχ^* N+1/Λ_2(χχN^cN + h.c.). where the superscript c denotes charge conjugation. The first term does not change the baryon number in the fermionic current, while the second term changes the baryon number by two units, hence the notation for the suppression scale of the dimension-5 operators, Λ_0 and Λ_2 respectively. Lastly, the gauge symmetry and the baryon symmetry allow interaction terms between the dark sector and the visible sector of the form ℒ⊃λ_χ H|χ|^2 |H|^2+ 1/Λ_n^2( d_R u_R^c d_R+ h.c), with H the Standard Model Higgs doublet, and u_R and d_R the right-handed up and down quarks. These two terms respectively correspond to a Higgs portal interaction and to a neutron portal interaction <cit.>. Portals involving heavier generation quarks are also possible, but will not be discussed in what follows for simplicity. In order to generate an asymmetry in the visible sector, we assume that some unspecified C-violating mechanism in the dark sector generates an excess of N over N at high temperatures. Since this mechanism operates exclusively in the dark sector, the conservation of baryon number requires the generation of an excess of χ over χ^*, so that the total baryon number of the Universe remains zero, as depicted in Fig. <ref> (a concrete realization for that mechanism will be presented elsewhere <cit.>). Due to the neutron portal interaction in Eq. (<ref>), the scatterings N d̅↔ u d and N u̅↔ d d, as well as the decays N→ u d d, inject a net baryon number into the visible sector (which could be partially converted into a net lepton number via sphaleron transitions, depending on the temperature at which the asymmetry in the dark sector is generated). Therefore, the excess of dark sector fermionic baryons over antibaryons is leaked to the visible sector via the neutron portal, ultimately generating an excess of quarks over antiquarks. Note that the Higgs portal does not transmit baryon number from the dark sector to the visible sector. Hence, we will set it to zero for simplicity, although it could have phenomenological implications, as we will briefly discuss in Section <ref>. Notably, due to the Lorentz symmetry and the baryon number conservation, χ and χ^* are absolutely stable, although they can annihilate with one another generating a relic population of χ. Therefore, in this simple framework the existence of a quark-antiquark asymmetry in the visible sector is intimately related to the existence of dark matter in our Universe. We stress that the stability of the dark matter does not require any ad hoc new symmetry, but is simply due to the conservation of the total baryon number in the Universe[In the model presented in Eq. (<ref>-<ref>), the dark matter particle exhibits an accidental ℤ_2 symmetry, χ→ -χ, which however has no influence on the dark matter stability. As it has no importance for the phenomenology at hand, we will not address it further in the remaining of this work.]. In the next section, we will describe in detail the evolution of the yields of the different particles, and the expectations for the relic abundance of dark matter and the quark-antiquark asymmetry. 
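Since the whole construction rests on the baryon-number assignments of the dark-sector fields, it can be useful to verify explicitly that every interaction term above is neutral under U(1)_B. The sketch below is a minimal bookkeeping check; the operators are written schematically (Lorentz structure, color indices and charge conjugations are omitted), and the charge assignments are the ones defined in the text.

```python
# Baryon-number bookkeeping for the portal operators of the model.
# Each operator is listed schematically as (field, conj) pairs, where
# conj = True means the field enters conjugated and its baryon number flips.
B = {"chi": -1.0, "N": +1.0, "u": 1.0 / 3, "d": 1.0 / 3, "H": 0.0}

operators = {
    "|chi|^2 Nbar N    (1/Lambda_0)":  [("chi", False), ("chi", True),
                                        ("N", True), ("N", False)],
    # the bilinear (N^c)bar N ~ N N carries B = +2 and balances the two chi's
    "chi chi (N N)     (1/Lambda_2)":  [("chi", False), ("chi", False),
                                        ("N", False), ("N", False)],
    # schematic neutron portal coupling N to u d d
    "Nbar u d d        (1/Lambda_n^2)": [("N", True), ("u", False),
                                         ("d", False), ("d", False)],
    "|chi|^2 |H|^2     (lambda_chiH)": [("chi", False), ("chi", True),
                                        ("H", False), ("H", True)],
}

for name, fields in operators.items():
    total = sum(-B[f] if conj else B[f] for f, conj in fields)
    print(f"{name:34s} B_total = {total:+.3f}")
    assert abs(total) < 1e-12   # every term must conserve baryon number
```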
§ EVOLUTION OF THE PARTICLE NUMBER DENSITIES AND ASYMMETRIES The Boltzmann equations for the yields of the different particle species can be cast as d Y_χ/dx = - λ/x^2[ ⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - Y_χ^ eqY_χ^∗ ^ eq/Y_N^ eqY_N ^ eqY_NY_N)+2⟨σ v ⟩ _χχ→NN(Y_χ^2 - (Y_χ^ eq/Y_N^ eq)^2Y_N^2) +⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N - Y_χ^ eqY_N ^ eq/Y_χ^∗ ^ eqY_N ^ eqY_χ^∗Y_N)], d Y_χ^∗/dx = - λ/x^2[ ⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - Y_χ^ eqY_χ^∗ ^ eq/Y_N^ eqY_N ^ eqY_NY_N)+2⟨σ v ⟩ _χ^∗χ^∗→ N N(Y_χ^∗^2 - (Y_χ^∗^ eq/Y_N^ eq)^2Y_N^2) -⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N - Y_χ^ eqY_N ^ eq/Y_χ^∗ ^ eqY_N ^ eqY_χ^∗Y_N)], d Y_N/dx = - λ/x^2[ -⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - Y_χ^ eqY_χ^∗ ^ eq/Y_N^ eqY_N ^ eqY_NY_N)-2⟨σ v ⟩ _χ^∗χ^∗→ N N(Y_χ^∗^2 - (Y_χ^∗^ eq/Y_N^ eq)^2Y_N^2) +⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N - Y_χ^ eqY_N ^ eq/Y_χ^∗ ^ eqY_N ^ eqY_χ^∗Y_N) +1/𝔰⟨Γ_N⟩(Y_N - Y_N^ eq) +⟨σ v ⟩ _Nd̅→ ud(Y_N Y_d̅^ eq-Y_u^ eqY_d^ eq) +⟨σ v ⟩ _Nu̅→ dd(Y_N Y_u̅^ eq-(Y_d^ eq)^2 )], d Y_N/dx = - λ/x^2[ -⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - Y_χ^ eqY_χ^∗ ^ eq/Y_N^ eqY_N ^ eqY_NY_N)-2⟨σ v ⟩ _χχ→NN(Y_χ^2 - (Y_χ^ eq/Y_N^ eq)^2Y_N^2) -⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N - Y_χ^ eqY_N ^ eq/Y_χ^∗ ^ eqY_N ^ eqY_χ^∗Y_N)+1/𝔰⟨Γ_N⟩(Y_N - Y_N^ eq) +⟨σ v ⟩ _Nd→u̅d̅(Y_NY_d^ eq-Y_u̅^ eqY_d̅^ eq) +⟨σ v ⟩ _ N u→d̅d̅(Y_NY_u^ eq-(Y_d̅^ eq)^2 )] . Here x≡ m_χ/T, 𝔰=2π^2/45 g_⋆, 𝔰T^3 is the entropy density of the Universe at the temperature T, with g_⋆, 𝔰 the number of relativistic degrees of freedom. The overall numerical factor λ is defined as λ≡4 π/√(90) m_χ M_ Pl √(g_⋆, 𝔰) . The Boltzmann equations depend on the thermally averaged rates of different processes. Firstly, there are reactions involving only dark sector particles: χχ^*↔ N N, which is induced by the effective operator in the first term of Eq. (<ref>), suppressed by Λ_0; χχ↔ N N and the C-conjugated reaction χ^*χ^* ↔ N N, induced by the effective operator found in the second term of Eq. (<ref>), suppressed by Λ_2; and χ N ↔χ^* N, also induced by the same term and suppressed by Λ_2. The explicit expressions for the cross-sections of these processes read σ _χχ^∗→ N N = (s-4 m_N^2 )^3/2/8π s √(s-4 m_χ ^2) Λ_0^2, σ _χχ→NN =σ _χ^∗χ^∗→ N N= (s-4 m_N^2 )^3/2/8π s √(s-4 m_χ ^2) Λ_2^ 2, σ _χ N→χ^∗N =m_N^4-2 m_N^2 (m_χ^2 -3 s ) + (m_χ^2 - s)^2/32π s^2 Λ_2^2, with s the square of the center of mass energy. Secondly, one has reactions between dark sector particles and visible sector particles, N d̅↔ ud and N u̅↔ dd, as well as the decay N→ u d d, all mediated by the neutron portal interaction, suppressed by Λ_n. The corresponding cross-sections and decay rate read σ_ N d̅→ ud =m_N^2+14s/192π s^2 Λ_n^4, σ_ N u̅→ dd =2m_N^2+s/192π s^2 Λ_n^4, Γ_N =1/768π^3m_N^5/Λ_n^4. To simplify the discussion, we will assume that the neutron portal interaction is sufficiently strong to bring the dark sector baryons into thermal equilibrium with the visible sector. At a temperature T, this condition requires Λ_n≲ 5.8× 10^7 (T/10^5 GeV)^3/4 GeV. Therefore the equilibrium number density of the dark sector particle species i with mass m_i and number of internal degrees of freedom g_i is given as usual by n_i^eq=g_i/2π^2m_i^2 T K_2(m_i/T), where K_2(x) is the modified Bessel function of the 2^nd kind. It will be convenient to work with the total yields of the different particle species, along with their corresponding asymmetries, defined as Y_χ^tot=Y_χ+Y_χ^∗, Y_Δχ=Y_χ-Y_χ^∗, Y_N^tot=Y_N+Y_, Y_Δ N=Y_N-Y_. 
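The equilibrium yields entering these Boltzmann equations follow from the Maxwell-Boltzmann number density divided by the entropy density. A small numerical sketch (Python with scipy's modified Bessel function; the masses and the constant value of g_*s are illustrative assumptions, since in the full problem g_*s runs with temperature) evaluates Y_i^eq(x) and the Boltzmann-suppression ratio P = (Y_χ^eq/Y_N^eq)² that appears in the equations below.

```python
# Equilibrium yields Y^eq = n^eq / s, with n^eq = g m^2 T K_2(m/T) / (2 pi^2)
# and s = (2 pi^2 / 45) g_*s T^3.  g_*s is treated as an assumed constant here.
import numpy as np
from scipy.special import kn            # modified Bessel function K_n

def Y_eq(m, T, g, g_star_s=75.0):       # g_*s ~ 75 is a rough value for T of a few GeV
    n_eq = g * m**2 * T * kn(2, m / T) / (2.0 * np.pi**2)
    s = (2.0 * np.pi**2 / 45.0) * g_star_s * T**3
    return n_eq / s

m_chi, m_N = 3.4, 4.0                   # GeV, illustrative; g = 1 for chi alone, g = 2 for N alone
for x in (1.0, 5.0, 10.0, 20.0):        # x = m_chi / T
    T = m_chi / x
    P = (Y_eq(m_chi, T, 1) / Y_eq(m_N, T, 2))**2
    print(f"x = {x:5.1f}   Y_chi^eq = {Y_eq(m_chi, T, 1):.3e}   P = {P:.3e}")
```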
The Boltzmann equations describing their temperature evolution are d Y_χ^ tot/dx = - λ/x^2[ 2⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - P Y_NY_N)+2⟨σ v ⟩ _χχ→NN(Y_χ^2 +Y_χ^∗^2 - P (Y_N^2+Y_N^2))], d Y_Δχ/dx =- λ/x^2[ 2⟨σ v ⟩ _χχ→NN(Y_χ^2 -Y_χ^∗^2 - P (Y_N^2-Y_N^2)) +2⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N- Y_χ^∗Y_N)], d Y_N^ tot/dx = - λ/x^2[ -2⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - P Y_NY_N)-2⟨σ v ⟩ _χχ→NN(Y_χ^2 +Y_χ^∗^2 - P (Y_N^2+Y_N^2)) +1/𝔰⟨Γ_N⟩(Y_N +Y_N- Y_N^ eq- Y_N^ eq)+⟨σ v ⟩ _Nd̅→ ud(Y_N Y_d̅^ eq+Y_NY_d^ eq -Y_u^ eqY_d^ eq -Y_u̅^ eqY_d̅^ eq)+(Nu̅↔ dd)], d Y_Δ N/dx = - λ/x^2[ 2⟨σ v ⟩ _χχ→NN(Y_χ^2 -Y_χ^∗^2 - P (Y_N^2-Y_N^2))+2⟨σ v ⟩ _χ N→χ^∗N(Y_χY_N -Y_χ^∗Y_N) +1/𝔰⟨Γ_N⟩(Y_N -Y_N- Y_N^ eq+ Y_N^ eq)+⟨σ v ⟩ _Nd̅→ ud(Y_N Y_d̅^ eq-Y_NY_d^ eq -Y_u^ eqY_d^ eq +Y_u̅^ eqY_d̅^ eq)+(Nu̅↔ dd)]. The yields Y_χ, Y_χ^∗, Y_N and Y_ are implicit functions of the total yields and of the asymmetries. Further, we have introduced P ≡ Y_χ^ eqY_χ^∗ ^ eq/Y_N^ eqY_N ^ eq=(Y_χ^eq/Y_N^eq)^2=(Y_χ^∗^eq/Y_N^eq)^2 ≃(g_χ/g_N)^2(m_χ/m_N)^4(K_2(m_χ/T)/K_2(m_N/T))^2. Lastly, the asymmetry present in N is transmitted to the visible sector through the neutron portal, giving rise to an asymmetry between the total number of quarks and antiquarks Δ q≡∑_i Δ q_i. Its time evolution is described by d Y_Δ q/dx = c 3λ/x^2[ 1/𝔰⟨Γ_N⟩(Y_N -Y_N- Y_N^ eq+ Y_N^ eq) +⟨σ v ⟩ _Nd̅→ ud(Y_N Y_d̅^ eq-Y_NY_d^ eq -Y_u^ eqY_d^ eq +Y_u̅^ eqY_d̅^ eq) +(Nu̅↔ dd)] . The constant c in this expression characterises the efficiency of the conversion of the baryon asymmetry stored in the left-handed quarks into a lepton asymmetry stored in the left-handed leptons via sphaleron processes. If the asymmetry in the dark sector is generated at temperatures above 130 GeV, the point at which sphalerons drop out of thermal equilibrium <cit.>, then c= 36/111 <cit.>. In turn, the asymmetry between the total number of leptons and antileptons, Δℓ≡∑_i Δℓ_i can be calculated from d Y_Δℓ/dx = -3 1-c/c d Y_Δ q/dx. If the initial asymmetry is instead generated after sphaleron freeze-out, then we set c=1 and no asymmetry is transferred to the leptons. Let us consider a scenario where both the baryon and the lepton numbers are exactly conserved in Nature. As initial condition, we assume that at a temperature T_ in≫ 130 GeV there is no asymmetry between quarks and antiquarks (nor between leptons and antileptons) but that there is a C-violating mechanism in the dark sector that generates a primordial asymmetry between N and N. The conservation of the baryon number requires a corresponding asymmetry between χ and χ^*. This initial condition is sketched in Fig. <ref>, where we also show the different portals relating the various particle species in our model. Under this plausible assumption, the dark matter relic abundance and the quark-antiquark asymmetry are determined by the initial asymmetry in the dark sector, and by the energy scales Λ_0, Λ_2 and Λ_n, which determine the strengths of the different portal interactions. For this representative set-up, the various yields qualitatively evolve with temperature as shown Fig. <ref>. At the high temperature T_ in all particle species are ultra-relativistic and their equilibrium yields are related solely by their different number of internal degrees of freedom, see Eq. (<ref>). For the yields of the particles in the hidden sector, this implies Y_N^eq= 2 Y_χ^eq. Further, for the initial condition sketched in Fig. <ref> the yields of the asymmetries satisfy Y_Δ N^ in= Y_Δχ^ in, Y_Δ q^ in= Y_Δℓ^ in=0. 
Let us first discuss the evolution of the asymmetries with the temperature. At temperatures very close to T_ in, the scatterings N u̅↔ dd and N d̅↔ ud (with rates depending on Λ_n) effectively transfer a fraction of the baryon asymmetry from the dark sector to the visible sector, which is then distributed between leptons and quarks by sphaleron transitions. At the same time, the processes χχ↔NN and χ N↔χ^*N (with rates depending on Λ_2) contribute to the washout of the asymmetry within the hidden sector, thereby influencing the size of the asymmetries in the visible sector. The effect of the washout in Δ N follows from Eq. (<ref>). Given that at high temperatures all the yields can be well approximated by their equilibrium values, the Boltzmann equation for Y_Δ N simplifies to d Y_Δ N/dx≃ -3λ/x^2(⟨σ v ⟩ _χχ→NN+⟨σ v ⟩ _χ N→χ^∗N)Y_N^eqY_Δ N, and similarly for Y_Δχ. The analytical solution for this equations reads Y_Δ N(T)≃ Y_Δ N^inexp{3[n_N^eq/H .(⟨σ v ⟩ _χχ→NN+⟨σ v ⟩ _χ N→χ^∗N) ]|_T=T_inT-T_ in/T_in}. The requirement that no more than 50% of the asymmetry is washed out implies the lower limit Λ_2≳ 6× 10^10(T_in/10^5 GeV)^1/2 GeV. In order to simplify our discussion, we will assume in what follows that Λ_2 satisfies this lower limit and that the initial asymmetry in Δ N (and Δχ) is very weakly washed out. A weak washout has been displayed in Fig. <ref> as well as the efficient redistribution of the asymmetry within the visible sector. If that is the case, the asymmetries in the different particles species at a temperature slightly below T_ in take the simple expressions Y_Δ N(T)=42/79 Y_Δ N^ in, Y_Δ q(T)=36/79 Y_Δ N^ in, Y_Δℓ(T)=-25/79 Y_Δ N^ in. These asymmetries remain constant until a temperature T_ decay at which the decays of N and N inject an additional quark-antiquark asymmetry in the visible sector (and a lepton-antilepton asymmetry if the sphalerons are still in equilibrium at the epoch of decays). For the range of parameters of interest (see Section <ref>), we find that the decays typically occur when the sphalerons are out-ot-equilibrium, so that the remaining Δ N asymmetry is entirely converted into a quark-antiquark asymmetry, whereas the lepton-antilepton asymmetry stays frozen. The decrease in Y_Δ N at T∼ T_ in and then at T∼ T_ decay, along with the corresponding changes in Y_Δ q and Y_Δℓ, can be seen in Fig. <ref>. At the present time, the asymmetries in the visible sector read Y_Δ q,0 =342/79 Y_Δ N^ in+36/79 Y_Δ N^ in=162/79Y_Δ N^ in, Y_Δℓ,0 =-25/79 Y_Δ N^ in. In these equations, the factor of 3 arises from the fact that N generates three quarks in its decay. Let us now turn to the evolution of the total yields Y_χ^ tot and Y_N^ tot, Eqs. (<ref>) and (<ref>). Again, to simplify our discussion we will assume that Λ_2 satisfies the condition in Eq. (<ref>). Further, we will assume Λ_0≪Λ_n, so that the rate of conversion of N into quarks is slow compared to the rate of conversion of χ into N. This implies that the dark matter abundance is determined by the freeze-out of the annihilation process χχ^*→ NN. Under these assumptions the Boltzmann equations Eqs. (<ref>) and (<ref>) simplify to d Y_χ^ tot/dx = - λ/x^2[ 2⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - P Y_NY_N)], d Y_N^ tot/dx = - λ/x^2[ -2⟨σ v ⟩ _χχ^∗→ N N(Y_χ Y_χ^∗ - P Y_NY_N)], which amount to a secluded sector freeze-out scenario[If the effective neutron portal is above Λ_n≈ 4× 10^3 GeV, N conversions into quarks drop out of equilibrium before the time of dark matter freeze-out. 
Given the experimental constraints, see Section <ref>, the dark sector is indeed secluded from the visible sector at the time of dark matter freeze-out.], albeit with a hidden sector temperature identical to the one of the visible sector. The behaviour of Y_χ^tot and Y_N^tot is sketched in Fig. <ref>. The total dark matter abundance at the current epoch can be calculated using standard tools (see e.g. <cit.>), resulting in Ω_ DM,0 h^2≃ 2.8× 10^8 Y_χ^ tot(x_ f.o.) m_χ/ GeV. where the freeze-out temperature is determined by the condition Γ_χχ^∗→ N N(x_ f.o.)=H(x_ f.o.), and Y_χ^ tot(T_ f.o.) can be well approximated by its equilibrium value at freeze-out. Besides, due to the fact that χχ^* pairs can only annihilate into NN pairs, one finds that Y_N^ tot(x_ f.o.)≃ 3Y_N^eq(T_in), which could additionally lead to an epoch of early matter domination if N is sufficiently long-lived <cit.>. Some implications of this early phase of matter domination will be discussed in Section <ref>. In the simplest scenario, all dark matter antiparticles annihilate resulting in the final state sketched in the left diagram of Fig. <ref>. In this case, Y_χ^ tot(x_ f.o.)=Y_Δχ(x_ f.o.)=Y_Δχ^ in=Y_Δ N^ in where in the last step we have used Eq. (<ref>). Therefore, in this simple scenario the dark matter abundance and the quark-antiquark asymmetry are determined by the same parameter, the initial asymmetry in the dark sector, Y_Δ N^ in. More concretely, using Eqs. (<ref>) and (<ref>), one finds the relation Ω_ DM,0h^2≃ 1.4× 10^8 Y_Δ q,0m_χ/ GeV . One can then adjust the initial condition Y_Δχ^ in to generate the observed quark-antiquark asymmetry, Y_Δ q,0≃ 2.6× 10^-10, and the dark matter mass to generate the observed dark matter abundance, Ω_ DM,0 h^2≃ 0.12. We obtain Y_Δχ^ in ≃1.3× 10^-10, m_χ ≃ 3.4 GeV. In the case where not all dark matter antiparticles annihilate, then Y_χ^ tot(x_ f.o.)=Y_Δχ(x_ f.o.)+2Y_χ^*(x_ f.o.), and Eq. (<ref>) must be replaced by Ω_ DM,0h^2=2.8× 10^8(2Y_χ^*(x_ f.o.)+79/162Y_Δ q,0)m_χ/ GeV . This scenario is sketched in the right diagram of Fig. <ref>. In this case, the initial asymmetry in the dark sector necessary to reproduce the quark-antiquark asymmetry is still given by Eq. (<ref>). However, since the total dark matter yield is larger, the observed dark matter abundance is reproduced for a smaller value of the dark matter mass, m_χ|_Y_χ*(x_ f.o.)≠ 0=Y_Δχ(x_ f.o.)/Y_Δχ(x_ f.o.)+2Y_χ^*(x_ f.o.) m_χ|_Y_χ*(x_ f.o.)=0. Here Y_χ^*(x) can be calculated from particularizing Eq. (<ref>) to the weak wash-out regime, dY_χ^∗/dx = - λ/x^2⟨σ v ⟩ _χχ^∗→ N N(Y_χ^∗(Y_χ^∗+Y_Δχ^ in) - 9 P Y_χ^ eq, in). Since at freeze-out the dark matter antiparticles were still in thermal equilibrium, one can approximate Y_χ^*(x_ f.o.)=Y^ eq_χ^*(x_ f.o.), and then the equilibrium distribution can be easily obtained by setting the right hand side of Eq. (<ref>) to zero. We obtain Y^ eq_χ^*(x) ≃-Y^ in_Δχ/2+√(Y_Δχ^ in 2/4+9P(x)(Y_χ^eq,in)^2) . Let us stress that these conclusions are quite insensitive to the concrete values of the portal strengths Λ_2, Λ_0 and Λ_n, provided that (i) the washout of the asymmetry is weak, (ii) Λ_n is small enough to keep the hidden sector thermalized with the visible sector and (iii) Λ_0≪Λ_n, so that N is stable in the timescale of the freeze-out. Other scenarios are also possible, by adjusting the initial conditions in the Boltzmann equations. 
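The asymmetry bookkeeping and the two benchmark numbers quoted above can be reproduced with a few lines. The sketch below (plain Python; the fractions are expressed in units of the initial asymmetry Y_ΔN^in) checks that the sphaleron-era fractions conserve B−L, recovers Y_Δq,0 = (162/79) Y_ΔN^in, and then inverts the relations above for m_χ and the initial asymmetry using the observed inputs; small differences with the quoted 3.4 GeV come from the rounding of the 1.4×10^8 prefactor.

```python
# (i) Consistency of the sphaleron-era asymmetry fractions, in units of Y_dN_in;
# (ii) dark matter mass and initial asymmetry in the fully asymmetric case.
from fractions import Fraction as F

Y_dN, Y_dq, Y_dl, Y_dchi = F(42, 79), F(36, 79), F(-25, 79), F(1)

B = Y_dN - Y_dchi + F(1, 3) * Y_dq      # B(N) = +1, B(chi) = -1, quarks carry 1/3
L = Y_dl
print("B - L =", B - L)                  # 0: conserved by sphalerons
print("B + L =", B + L)                  # nonzero: sphalerons violate B+L

Y_dq_today = Y_dq + 3 * Y_dN             # each remaining N decays into three quarks
print("Y_dq,0 =", Y_dq_today, "x Y_dN_in")   # 162/79, as used above

omega_h2, Y_dq_obs = 0.120, 2.63e-10     # observed inputs
m_chi   = omega_h2 / (1.4e8 * Y_dq_obs)  # from Omega h^2 ~ 1.4e8 * Y_dq,0 * m_chi/GeV
Y_dN_in = Y_dq_obs * 79.0 / 162.0
print(f"m_chi ~ {m_chi:.1f} GeV,  Y_dN_in ~ {Y_dN_in:.1e}")   # ~3.3 GeV and ~1.3e-10
```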
On the other hand, a crucial assumption of our scenario is the existence of a neutron portal that transmits the asymmetry in the dark sector to the visible sector. This neutron portal could lead to experimental signatures in our scenario, which will be discussed in detail in the next section. § CONSTRAINTS ON THE NEUTRON PORTAL The particle N can have implications in our visible sector through the neutron portal Eq. (<ref>), e.g. through the decay of N into quarks, the production of N in proton-proton collisions, or the generation of a mass mixing term with the neutron below the QCD confinement scale. The various constraints are summarized in Fig. <ref>, in the parameter space defined by the mass of N (m_N) and the energy scale of the neutron portal (Λ_n). The region allowed by all the constraints, shown in white, is bounded and could in principle be probed in its totality. Let us describe in detail the multiple constraints from the parameter space. The standard Big Bang Nucleosynthesis (BBN) scenario is extremely successful in describing the evolution of the Universe after ∼ 1 s. In particular, observations indicate that the quark-antiquark asymmetry at the time of BBN does not differ significantly from the quark-antiquark asymmetry at the time of recombination. In order to preserve the standard BBN scenario, we will require that the yield of N is largely depleted at ∼ 1 s, so that their decays have practically no impact at later times. Using the fact that the width of N is given by Γ^-1_N→ udd≈ 1.6 s(Λ_n/10^5 GeV)^4(GeV/m_N)^5, and requiring conservatively Γ^-1_N→ udd≲ 0.1 s, we exclude the region Λ_n≳ 10^5 GeV (m_N/ GeV)^5/4, indicated in Fig. <ref> as a hatched orange region. The neutron portal also leads to the production of N in proton-proton collisions through the partonic processes u d→ Nd̅ and d d→ Nu̅. We estimate the non-resonant production cross-section at √(s)=14 TeV to be σ_p p → N +jet≈ 2 fb(f_ PDF/10^-2) (10^4 GeV/Λ_n)^4 , where we have estimated the effect of the partonic distributions in the protons in the parameter to be f_ PDF≈ 10^-2 <cit.>. Depending on the lifetime of N, the signal at colliders could be in the form of missing p_T (if stable within the detector's volume), in the form of a displaced vertex (if the decay length is macroscopic), or in the form of dijets (if the decay length is microscopic). We show in Fig. <ref> the line for which the decay length lies outside of the ATLAS or CMS detectors, c t_ lab=100 m, where we have taken a Lorentz factor γ=√(ŝ)/(2m_N) with √(ŝ)∼ 2 TeV for the partonic center of mass energy. We also indicate in the plot the values of Λ_n corresponding to a production cross-section σ_p p → N+ jet=1 fb, 10 fb, 100 fb and 1 pb, and in green the ballpark area of values that can be probed at the LHC with an integrated luminosity of L=100 fb^-1, which corresponds to effective interactions with strength Λ_n≲ 6 TeV. We note that for small values of Λ_n the effective field theory breaks down at LHC energies, and instead a dedicated search for the new particles mediating the neutron portal should be performed. A detailed collider analysis is however beyond the scope of this paper. We also note that the collider constraints become weaker if the portal between the dark sector and the visible sector involves sea quarks, e.g. N s_R c_R^c s_R instead of N d_R u_R^c d_R; this variant of the portal could be probed in the decays of heavy mesons and baryons <cit.>. 
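The BBN constraint above follows directly from the decay width Γ_N = m_N^5/(768π³Λ_n^4) given earlier. A quick numerical sketch (Python; ħ ≈ 6.58×10^-25 GeV·s, conversion value assumed) reproduces the quoted benchmark lifetime and scans the corresponding bound on Λ_n; the scaling matches the Λ_n ∝ m_N^{5/4} exclusion line quoted above.

```python
# Lifetime of N from Gamma_N = m_N^5 / (768 pi^3 Lambda_n^4), and the largest
# Lambda_n compatible with the conservative requirement tau_N < 0.1 s.
import numpy as np

HBAR = 6.582e-25                  # GeV * s

def tau_N(m_N, Lambda_n):
    """Lifetime in seconds; m_N and Lambda_n in GeV."""
    gamma = m_N**5 / (768.0 * np.pi**3 * Lambda_n**4)   # GeV
    return HBAR / gamma

print(f"tau_N(1 GeV, 1e5 GeV) = {tau_N(1.0, 1e5):.2f} s")   # ~1.6 s benchmark

for m in (1.0, 2.0, 4.0):
    lam_max = (0.1 * m**5 / (768.0 * np.pi**3 * HBAR)) ** 0.25
    print(f"m_N = {m:.0f} GeV : Lambda_n < {lam_max:.2e} GeV   (tau_N < 0.1 s)")
```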
The mass of N is bounded from below from the requirement that the proton must be the lightest fermion carrying baryon number. Otherwise the proton could decay, e.g. p→ N π^+. This requirement translates into the lower limit m_N ≥ 938 MeV (this limit on the mass could be avoided if the width is suppressed by a large Λ_n, however this region is in tension with BBN <cit.>). Lastly, the mass of N is bounded from above from the requirement that the dark matter is not overproduced. More specifically, the annihilation process χχ^*→ N N must be efficient enough to deplete most of the dark matter density. Naively, this requires m_N< m_χ, however, as argued in <cit.>, the annihilation can also occur in a “forbidden” channel, due to the existence of sufficiently energetic dark matter particles in the tail of the Maxwell-Boltzmann distribution. The thermally averaged annihilation cross section in a forbidden channel is approximately given by <cit.> ⟨σ v⟩_χχ^∗→ N N≈ 8π f(Δ) ⟨σ v⟩_ N N→χχ^∗e^-2Δ x, where Δ=(m_N-m_χ)/m_χ is the relative mass splitting and f(Δ) is a function of Δ, which approximates to f(Δ) ≈ 1+Δ for Δ≫1. In Eq. (<ref>) the Boltzmann suppression of the dark matter annihilation cross section is explicit. A more rigorous upper limit on m_N is derived by ensuring that dark matter overproduction is avoided for the largest possible value of the annihilation cross-section, Eq. (<ref>). From the s-wave unitarity requirement ⟨σ v⟩_N N→χχ^∗≤ 4π/m_N^2 (x_f.o./π)^1/2, we obtain m_N≲ 4.8 GeV. For the milder requirement that the effective field theory remains valid, which corresponds to Λ_0∼ m_χ, we obtain m_N≲ 4.3 GeV. This limit is shown in Fig. <ref> in red, where the relaxation of constraint at large Λ_n values is attributed to the freeze-out of dark matter taking place during a period of matter domination. In this case, the value of the total annihilation cross section required to obtain the correct dark matter relic abundance is smaller than in the standard WIMP paradigm <cit.>. § CONSTRAINTS FROM DARK MATTER SEARCHES §.§ Dark matter signals via the Higgs portal As our dark matter candidate is a complex scalar, interactions between χ and the SM via the Higgs portal are not forbidden, see the first term in Eq. (<ref>). In turn, this could lead to potential signals for a dark matter discovery. In the present scenario, the predicted dark matter mass of 3.4 GeV allows for the Higgs to decay invisibly into a dark matter particle-antiparticle pair, h→χχ^∗. This search is the most sensitive probe of the portal coupling at colliders and excludes values of λ_χ H above 10^-2 <cit.>. The asymmetric nature of dark matter forbids any signals in indirect searches via the Higgs portal. Additionally, its low mass makes nuclear recoil direct detection experiments attain comparable sensitivity to the portal coupling when compared to the collider constraints <cit.>. One can also consider the following constraint on the portal coupling λ_χ H. After electroweak symmetry breaking, the square of the mass of the dark matter candidate will receive an extra contribution of the form λ_χ Hv^2, with v≈ 246 GeV being the vacuum expectation value of the Higgs field in the broken electroweak phase. In the simplest scenario, assuming dark matter to be fully asymmetric, we find that the portal coupling should be less than λ_χ H<2× 10^-4, as to prevent the Higgs portal contribution to the bare mass to lead to dark matter overabundance, see Eq. (<ref>). 
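The bound on the portal coupling from the dark matter mass is a one-line estimate; the sketch below follows the schematic λ_χH v² form of the mass contribution quoted in the text (possible O(1) normalization factors from the vev convention are ignored).

```python
# Upper bound on the Higgs portal coupling from requiring that the electroweak
# contribution to the chi mass squared, taken as lambda_chiH * v^2 as in the text,
# does not exceed m_chi^2 fixed by the fully asymmetric scenario.
v, m_chi = 246.0, 3.4            # GeV

lam_max = m_chi**2 / v**2
print(f"lambda_chiH < {lam_max:.1e}")     # ~1.9e-4, i.e. the 2e-4 quoted above
```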
Given the above constraints on λ_χ H, it can be verified that interactions between χ and the visible sector do not change the dynamics of dark matter freeze-out. Within our scenario, the possibility to detect χ via the Higgs portal is open. §.§ Dark matter signals via the neutron portal Within the present scenario, we expect that our dark matter candidate χ will dominantly interact with neutrons through scattering. The two possible processes are depicted in Fig. <ref>. At energies below the QCD confinement scale, the interactions between N (N) and (anti-)neutrons are mediated by a mass mixing δ m <cit.>, which is depicted in Fig. <ref> as a crossed circle. This mass mixing can be expressed in terms of the neutron portal effective scale as δ m = 1.4 × 10^-8( 10^3 GeV/Λ_n)^2 GeV. As mentioned in the case of dark matter scatterings via the Higgs portal, direct detection experiments searching for nuclear recoils quickly lose their sensitivity in the regime of light dark matter masses, m_DM≲ 10 GeV. Given the predicted dark matter mass of 3.4 GeV, we find that the strongest current limits on the DM-nucleon scattering cross section <cit.> do not place any constraints in our scenario. Recently, neutron stars have been put forward as a new laboratory for detecting dark matter. They effectively act as calorimeters, and the energy injected via annihilation of captured dark matter <cit.> and/or kinetic heating by dark matter <cit.> could provide a signal in the form of anomalous surface temperature. The neutron star sensitivity to dark matter scatterings is comparable to Earth-based experiments <cit.> and they can accommodate for much larger mass-splitting (≲ 300 MeV) in dark matter inelastic scattering processes <cit.>. A priori, neutron stars could be the most sensitive probe of dark matter scatterings within our realisation. A dark matter mass in the range of a few GeV is not significantly hindered by Pauli-blocking effects <cit.>, which become important when m_χ<m_n, where m_n is the neutron mass. In this scenario, the dominant capture process could be the inelastic scattering χ n →χ N, suppressed only by one insertion of N-n mass mixing. This holds for values of m_N reasonably close to the neutron mass. However, due to medium effects, the effective neutron mass inside the compact star is significantly reduced compared to its vacuum value <cit.> and the dark matter kinetic energy is insufficient to overcome the effective mass difference between N and n. Within our scenario, neutron stars are therefore not expected to yield any constraints on the parameters of the framework. Our scenario allows for exotic scatterings in which dark matter transmutes a neutron into an antineutron[Previous works have considered this effect in the context where N is the dark matter candidate <cit.>. Other scenarios in which dark matter and baryogenesis are linked, can also generate similar exotic decays of baryons <cit.>.], χ n →χ^∗n, see Fig. <ref> b). It is possible to constrain dark matter-induced neutron transmutation with an existing search for neutron-antineutron oscillation in Super-Kamiokande <cit.>. However, in the present realization, due to the large suppression coming from the values of Λ_n and Λ_2, the potential dark matter signal sits well below existing and future limits. The prospects for indirect detection of dark matter are strongly influenced by whether the symmetric component has been depleted at freeze-out. In the asymmetric case the only annihilation channel available is χχ→NN. 
This process is highly suppressed in the present realization due to the large effective scale Λ_2, and no constraints on the model parameters are expected. Nevertheless, it is noteworthy that an asymmetric complex scalar dark matter candidate could have an open direct annihilation channel today. In the symmetric scenario, it is expected that strong χχ^∗→ N N annihilations will occur, as χ is a light and thermal dark matter candidate. However, due to the conservation of baryon number, N must eventually decay into neutrons, whose masses would already account for a significant portion of the energy budget in the final state of the annihilation. The decay of N into semi-relativistic neutrons would emit a soft cascade of various hadrons and/or a gamma-ray of energy E_γ=m_N-m_n, which could provide a window into both the nature of dark matter and its link to the neutron portal. § CONCLUSIONS We have presented a scenario that accommodates both dark matter and a quark-antiquark asymmetry. We have postulated the existence of a spin-0 and a spin-1/2 particle in the dark sector, both singlets under the Standard Model gauge group, and carrying baryon number. As initial conditions, we have assumed that at very high temperatures the Universe has zero baryon and lepton numbers, but that the dark sector contains an asymmetry between particles and antiparticles. We have argued that the asymmetry in the spin 1/2 particle can be transmitted to the visible sector through (baryon conserving) neutron portal interactions, thus resulting into a quark-antiquark asymmetry (and possibly a lepton-antilepton asymmetry via sphalerons). On the other hand, the spin-0 particle is stable due to the baryon number conservation and constitutes a dark matter candidate. In this framework, the B-L number of the visible sector is exactly compensated by an opposite asymmetry in the dark sector, thus linking the observed quark-antiquark asymmetry to the existence of dark matter. Under reasonable assumptions, we expect the dark matter mass to be ∼ 3.4 GeV if it is fully asymmetric, or potentially lighter if there remains a population of dark matter antiparticles after freeze-out. The scenario also predicts the mass of the exotic spin-1/2 particle to be comparable to that of the dark matter. Such a particle could be produced at the LHC or in flavor physics experiments through the neutron portal, generically leaving the detector before decaying. This particle would then produce a signal of missing energy, and an apparent violation of baryon number, due to the imbalance in the baryon number of the visible sector particles involved in the reaction. Finally, we have briefly discussed the prospects for observing dark matter signals in our scenario. We find that the most promising avenue to detect signals lies in the Higgs portal, either through the invisible Higgs decay width or in direct detection experiments, akin to the singlet scalar dark matter model. § ACKNOWLEDGMENTS This work is supported by the Collaborative Research Center SFB1258 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. We thank Motoko Fujiwara, Florian Jörg, Robert McGehee and David Rousso for useful discussions. apsrev4-1
http://arxiv.org/abs/2307.01270v1
20230703180117
Minimal Inert Doublet Benchmark for Dark Matter and the Baryon Asymmetry
[ "María Dias Astros", "Sven Fabian", "Florian Goertz" ]
hep-ph
[ "hep-ph" ]
Minimal Inert Doublet Benchmark for Dark Matter and the Baryon Asymmetry ========================================================= § INTRODUCTION AND MODEL SETUP Thanks to the discovery at ATLAS <cit.> and CMS <cit.> at the Large Hadron Collider in 2012 of a resonance resembling the Higgs particle proposed in the 1960s <cit.>, the minimal Standard Model of Particle Physics (SM) was completed. Subsequent studies of the couplings of the Higgs particle to fermions and electroweak (EW) gauge bosons showed agreement with the SM predictions and thus demonstrated, once more, the powerful predictiveness of the theory. However, despite this success in understanding the properties of elementary particles and their interactions, it is well known that the SM fails to provide explanations for various phenomena, inter alia, the existence of dark matter (DM) and the observed baryon asymmetry of the Universe (BAU). In this article, we attempt to address these questions via an effective-field-theory (EFT) approach for the Inert Doublet Model (IDM), minimally extended at a beyond-IDM energy scale Λ. The IDM has already been widely studied as a model for DM and in the context of the electroweak phase transition (EWPhT) as a first step towards explaining baryogenesis <cit.>. Nonetheless, since the interactions between the additional inert scalars and the SM states preserve the CP symmetry, baryogenesis cannot be achieved in the non-modified `vanilla' IDM. Adding an effective CP-violating operator then allows us to (quantitatively) accommodate the missing Sakharov condition and explain the BAU within the framework. Moreover, due to its minimal nature, this effective IDM could serve as a realistic economic benchmark extension of the SM that solves prominent shortcomings and – with its new scalars being preferably rather light – could be seen as the low-energy limit of a larger class of viable completions residing at higher scales. The existence of DM is well established through a wide range of observations <cit.>, including colliding clusters (e.g. the bullet cluster), rotation curves of various galaxies, gravitational lensing, structure formation, big bang nucleosynthesis, and the cosmic microwave background. The energy density of the unknown DM component today is quantified by <cit.> Ω_ DM,refh^2 ≡ρ_ DM,ref/ρ_crit h^2 = 0.1200(12) with the critical energy density ρ_crit = 3H_0^2/(8π G_N) defined in terms of today's Hubble parameter H_0 and Newton's gravitational constant G_N. To avoid an overclosure of the Universe, the model under consideration should not predict a larger DM relic abundance than the reference value given above. Many candidates have been proposed to account for DM[The DM candidate must be (i) electrically neutral or milli-charged at the most, (ii) at most weakly interacting with SM particles and (iii) stable on cosmological time scales.], of which the weakly interacting massive particles (WIMPs) are among the most appealing. Their mass ranges between a few GeV and 𝒪(100) and they interact only weakly with the SM particles. WIMPs are thermally produced via freeze-out and their “final" comoving density can make up the entirety of the measurable DM relic abundance in Eq. (<ref>). The IDM naturally features a WIMP DM candidate. Concretely, the IDM (see, e.g., Refs. 
<cit.>) is an extension of the SM with an additional SU(2)_L doublet scalar H_2, odd under a new ℤ_2 symmetry and featuring a vanishing vacuum expectation value (vev) at zero temperature (which guarantees its inert nature, see below). The SU(2)_L doublets read H_1 = 1/√(2)[ √(2) G^+; v_1 + h + iG ] H_2 = 1/√(2)[ √(2) H^+; v_2 + H + iA ] , with the SM Higgs boson h and the vevs v_1≡ v = 246 GeV and v_2=0 at zero temperature. Both doublets are (2,1) representations of the EW SU(2)_L× U(1)_Y gauge group and the Goldstone bosons G, G^± are associated with the longitudinal modes of the respective EW gauge bosons Z, W^± after EW symmetry breaking. Similarly, H^± correspond to two new CP-even, electrically charged physical scalars, whereas H and A are two additional neutral scalars, the former being CP-even while the latter is CP-odd. We choose H to be the lightest scalar and therefore the DM candidate. Its stability is guaranteed by the aforementioned ℤ_2 symmetry, under which all SM fields are even but H_2 is odd. This is the prominent feature of the IDM and prohibits any interaction term between the inert doublet H_2 and SM fermions (and therefore perilous contributions to flavour-changing neutral currents <cit.>) at the renormalizable level. The scalar potential is given by V(H_1,H_2) = μ_1^2| H_1|^2 + μ_2^2| H_2|^2 + λ_1| H_1|^4 + λ_2| H_2|^4 + λ_3| H_1|^2| H_2|^2 + λ_4| H_1^†H_2|^2+ λ_5/2[ ( H_1^†H_2)^2+ h.c.] where all the couplings are real[The coupling parameter λ_5 can be complex in general. However, its complex phase can be removed by a suitable (global) Higgs doublet redefinition.] and the masses of the scalars are given by m_h^2 = 2λ_1 v^2 , m_H^2 = μ_2^2 + λ_345 v^2/2 , m_A^2 = μ_2^2 + λ̅_345v^2/2 , m_H^±^2 =μ_2^2 +λ_3 v^2/2 with the short-hand notations λ_345 = λ_3 + λ_4 + λ_5 , λ̅_345 = λ_3 + λ_4 - λ_5 . The theoretical and experimental constraints on the model, e.g. from perturbative unitarity, vacuum stability, invisible SM Higgs decays into a pair of inert scalars, or electroweak precision tests, as well as the parameter space allowing for the correct DM abundance can be found in Ref. <cit.> and references therein. Since real couplings prevent CP violation, the IDM must be augmented in order to become a realistic model of baryogenesis. To this end, we focus mostly on the dimension-six operator ℒ_CPv^IDM⊃ℒ_BAU=c̃_2 |H_2|^2 V_μνṼ^μν≡c̃_2/2ϵ^μναβ| H_2|^2 V_μν V_αβ , which plays a rather unique role within the set of potential operators, as we will explain further below.[See Refs. <cit.> for different explicit extensions of the IDM with new states to implement CP violation.] Here, V_μνṼ^μν=W_μν^aW̃^a,μν + B_μνB̃^μν represents the sum of products of the SU(2)_L isospin and U(1)_Y hypercharge field strength tensors and their respective duals. Lifting the assumption of equal coefficients does not change the result for the BAU, as this is governed only by the coefficient of the SU(2)_L term. As we will see, considering the field strength coupling to the inert doublet has phenomenological advantages over the alternative involving the `active' H_1 doublet, for example leading to suppressed electric dipole moments of leptons (ℓEDMs). Still, also the latter operator can lead to viable results for the BAU in corners of the parameter space, and we will analyze this below, too. However, the focus is on the role of the operator in Eq. (<ref>) in baryogenesis and its impact on the DM relic abundance, which will be studied in detail in the next sections.
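Given the standard tree-level mass relations quoted above, a short numerical sketch can map a choice of potential parameters onto the physical spectrum. The parameter values below are purely illustrative inputs, not a benchmark point of the paper.

```python
# Tree-level IDM spectrum from the potential parameters, using
# m_h^2 = 2 l1 v^2, m_H^2 = mu2sq + l345 v^2/2, m_A^2 = mu2sq + l345bar v^2/2,
# m_Hpm^2 = mu2sq + l3 v^2/2, with l345 = l3+l4+l5 and l345bar = l3+l4-l5.
import math

v = 246.0   # GeV

def idm_masses(mu2sq, l1, l3, l4, l5):
    l345, l345b = l3 + l4 + l5, l3 + l4 - l5
    m_h   = math.sqrt(2.0 * l1 * v**2)
    m_H   = math.sqrt(mu2sq + 0.5 * l345  * v**2)
    m_A   = math.sqrt(mu2sq + 0.5 * l345b * v**2)
    m_Hpm = math.sqrt(mu2sq + 0.5 * l3    * v**2)
    return m_h, m_H, m_A, m_Hpm

# Illustrative inputs (l1 chosen to reproduce m_h ~ 125 GeV):
m_h, m_H, m_A, m_Hpm = idm_masses(mu2sq=60.0**2, l1=0.129, l3=0.10, l4=-0.04, l5=-0.04)
print(f"m_h = {m_h:.1f}, m_H = {m_H:.1f}, m_A = {m_A:.1f}, m_H+- = {m_Hpm:.1f} GeV")
# here H comes out lightest, as required for it to be the DM candidate
```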
We point out that the IDM augmented by this operator delivers a minimal, yet versatile, benchmark model to study the simultaneous realization of DM and the BAU. Given stringent constraints on CP-violating operators from limits on ℓEDMs, together with the modest required corresponding coefficients to realize the BAU that we find, it is very reasonable that CP violation is generated at a higher scale.[See Refs. <cit.> for scenarios where also the EWPhT is lifted to such higher scales.] This makes the implementation via effective operators particularly suitable, being able to describe the effect of a set of potential completions. The structure of this article is as follows. After having introduced and motivated the model in Sec. <ref>, the baryogenesis mechanism is elaborated on in Sec. <ref>, including also a discussion on constraints from experimental limits on ℓEDMs. Consecutively, the dark matter abundance for suitable parameters is investigated in Sec. <ref>. We finally present our conclusions in Sec. <ref>, while a series of appendices contains technical details, in particular a two-loop analysis of the ℓEDM induced by the second Higgs. § BARYOGENESIS AT THE ELECTROWEAK SCALE In this work we consider the scenario of baryogenesis during the EWPhT, an idea that has been extensively studied in the past (see, e.g., Refs. <cit.> and references therein). The BAU is quantified by η_refn_B/n_γ = ( 6.143 ± 0.190 )· 10^-10 , where n_B is the difference of baryon and anti-baryon number densities and n_γ is the number density of photons. Assuming CPT invariance, the three vital ingredients for successful baryogenesis, elaborated by Sakharov <cit.> in 1967, must be fulfilled: in addition to violation of baryon number B and of charge conjugation symmetry C as well as of the combination CP of charge-conjugation and parity symmetry, the presence of an out-of-equilibrium process is a requisite. The first condition of this list is met by the fact that neither baryon nor lepton number is conserved in the SM because of the U(1)_B+L anomaly, as shown by 't Hooft in 1976 <cit.>. This violation is mediated by sphaleron processes which become effective at sufficiently high temperatures as later realized by Kuzmin, Rubakov and Shaposhnikov <cit.>. In fact, for temperatures below ∼ 10^13 <cit.>, sphalerons are expected to be active and in thermal equilibrium, effectively preventing the creation of a net baryon number. Below the EW scale, on the other hand, sphaleron processes are Boltzmann suppressed. However, a baryon asymmetry can be created during the EWPhT and it will then remain, provided that the sphalerons are quickly turned off thereafter. This is the so-called wash-out condition which requires of a strong first-order phase transition, generating the out-of equilibrium situation mentioned above, that we know is not provided by the SM since the Higgs mass of m_h≈ 125 GeV is too large. On top of that, the CP violation in the weak sector of the SM is too small to explain the measured BAU even if the other two Sakharov conditions were fulfilled (see Ref. <cit.> for instance). Hence, an SM extension must feature an additional source of CP violation and a strong first-order EWPhT.
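For comparison with analyses that normalize the asymmetry to the entropy density rather than to the photon number density, η can be converted into the yield Y_B = n_B/s using the present-day entropy-to-photon ratio s_0/n_γ,0 ≈ 7.04 (a standard value assumed here, not quoted in the text):

```python
# Convert eta = n_B / n_gamma into the entropy-normalized yield Y_B = n_B / s,
# using s_0 / n_gamma,0 ~ 7.04 for the present-day entropy-to-photon ratio.
eta, d_eta = 6.143e-10, 0.190e-10
s_over_ngamma = 7.04

Y_B = eta / s_over_ngamma
print(f"Y_B = {Y_B:.2e} +/- {d_eta / s_over_ngamma:.1e}")   # ~ 8.7e-11
```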
http://arxiv.org/abs/2307.02893v1
20230706095715
Walk-off induced dissipative breathers and dissipative breather gas in microresonators
[ "A. Villois", "D. N. Puzyrev", "D. V. Skryabin", "M. Onorato" ]
physics.optics
[ "physics.optics", "nlin.PS" ]
http://arxiv.org/abs/2307.01133v1
20230703161346
Spin model for the Honeycomb $\rm NiPS_3$
[ "Paula Mellado" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
School of Engineering and Sciences, Universidad Adolfo Ibáñez, Santiago, Chile In the Vander Waal material NiPS_3, Ni atoms have spin S=1 and realize a honeycomb lattice. Six sulfur atoms surround each Ni and split their d manifold into three filled and two unfilled bands, which differ in bulk and monolayer systems. Aimed to determine the spin Hamiltonian of NiPS_3, we study its exchange mechanisms using a two-band half-filled Hubbard model. Hopping between d orbitals is mediated by p orbitals of sulfur and gives rise to bilinear and biquadratic spin couplings in the limit of strong electronic correlations. In bulk, a ferromagnetic first nearest neighbor J_1 and a more significant antiferromagnetic third nearest neighbor spin coupling J_3 agreed with the literature, while in monolayer J_1 is positive and very small in comparison. The microscopic model exposed a ferromagnetic biquadratic spin interaction K_1 allowing the completion of a minimal J_1-J_3-K_1 spin Hamiltonian for NiPS_3. Using a variational scheme to find the phase diagram of the system, we found that a zig-zag antiferromagnetic order coexists with a ferroquadrupolar phase and is the most likely ground state of bulk samples. The zig-zag pattern is adjacent to commensurate and incommensurate spin spirals, which could hint at the puzzling results reported in NiPS_3 monolayers. Spin model for the Honeycomb NiPS_3 Paula Mellado August 1, 2023 =================================== Vander Waals compounds, particularly transition-metal thiophosphates <cit.>, are an exciting class of materials. Their negligible interlayer coupling reduces their dimensionality and promotes intriguing electronic and optical quantum effects <cit.> while allowing for easy tunability of magnetic exchange and anisotropy through ligand substitution <cit.>. Family members of transition-metal thiophosphates Vander Waals materials have monoclinic space group C/2m, with the transition metal atoms forming a planar honeycomb lattice in the ab planes <cit.>, Fig.<ref>(a). The metal atoms are enclosed in octahedra formed by sulfur atoms and have a phosphorus doublet at the center of the honeycomb hexagons <cit.>. Magnetic susceptibility measurements on single crystals reveal that the family member NiPS_3 has the smallest spin S = 1 <cit.> and the largest Neel temperature T_N=155 K. DFT calculations <cit.> and experimental measurements have suggested that below T_N Ni spins form a zig-zag antiferromagnetic ground state featured as double parallel ferromagnetic chains antiferromagnetically coupled within the single layer (see Fig.<ref>). Large spacing between adjacent layers suppresses interlayer exchange such that the antiferromagnetic order acquires a 2D character even in the bulk form <cit.>. Spin dynamics in NiPS_3 has been proved by high-resolution spectroscopy methods <cit.>, and linear spin-wave theory using a Heisenberg Hamiltonian with single-ion anisotropies was applied to determine its magnetic exchange parameters and the nature of its anisotropy <cit.>. In-plane magnetic exchange interactions up to third-nearest neighbors were required to account for the results. The nearest-neighbor exchange was found ferromagnetic with J_1∼ 2.5 meV and the dominant antiferromagnetic third-neighbor exchange J_3∼ 13 meV. The anisotropy was shown to be XY-like with a small uniaxial component, leading to two low-energy spin wave modes appearing in the spin-wave spectrum at the Brillouin zone center <cit.>. 
The anisotropic Heisenberg Hamiltonian with up to three nearest neighbor couplings could reproduce the spin-wave energies but was at odds with the calculated neutron intensities showing that the classical spin models accounting for its magnetism up to date are a subject of debate. <cit.>. Further, the presence of orbital degeneracy combined with the small magnitude of Ni spins suggests that quantum aspects could play a role in the magnetic properties of NiPS_3. Main results. Aimed to find the spin model responsible for the magnetism in NiPS_3, in this letter, we study the electron exchange mechanisms of a multi-band Hubbard model for the Ni atoms in NiPS_3 in the limit of strong Coulomb interactions. d orbitals in transition-metals are localized, and direct exchange hopping can only occur between orbitals on different atoms that are very close to each other <cit.>, which makes direct hopping unlikely in NiPS_3. Therefore the exchange mechanisms are extended by taking into account hopping via intermediate p orbitals located at the sulfur atoms in between two Ni sites. Integrating out the high energy states of the microscopic model, bilinear and biquadratic spin exchanges were computed. Besides ferromagnetic and antiferromagnetic bilinear spin interactions, we found that a ferromagnetic biquadratic spin coupling is important in NiPS_3, giving rise to the following bilinear-biquadratic effective spin Hamiltonian for the Ni atoms in NiPS_3: ℋ = -J_1∑_<ij>( S_i· S_j)+J_3∑_(ik)( S_i· S_k) - K_1∑_<ij>( S_i· S_j)^2 where J_1 and J_3 denote the first and third nearest neighbor spin exchange couplings, respectively, and K_1 is the first neighbor biquadratic spin exchange. The computed spin couplings are shown in Table <ref> and were used to evaluate the system's variational ground state energy considering the quantum nature of spins in NiPS_3. We found that the zig-zag magnetic order corresponds to the ground state of Eq.<ref> in the relevant space of parameters for NiPS_3. The zig-zag pattern coexists with a ferroquadrupolar order and competes with magnetic commensurate and incommensurate magnetic spirals. Microscopic. In bulk NiPS_3 the octahedral crystal field at Ni sites causes the 3d orbitals to split into a triplet of lower energy, d_xy,d_xz, d_yz and a doublet d_x^2-y^2, d_3z^2-r^2, Fig<ref>(b) <cit.>. Each Ni^2+ is in a d^8 electronic configuration. Consequently, d orbitals belonging to the triplet are fully occupied while the doublet is half filled. In monolayer, the trigonal crystal field splits d orbitals into the fully occupied triplet d_x^2-y^2,d_3z^2-y^2,d_xy, and the doublet d_xz, d_yz <cit.>. Consider the case of two atoms of Ni with two d orbitals, each forming a spin S=1 state, and two atoms of sulfur symmetrically located in between the Ni and with two fully occupied p orbitals, Fig.<ref>(a). Operator c_iασ^† creates a spin electron σ=↑,↓ in the α orbital at site i and n_i,α,σ=c_iασ^†c_iασ defines the number of electrons at site i and orbital α with spin σ. The difference in energy of p and d orbitals is denoted Δ. Onsite Coulomb repulsion of two electrons in a single d orbital is U_d; electron repulsion in p orbitals is neglected. The transfer of one electron from p to d orbital has associated the energy U=U_d+Δ. t_αβ^ij denotes the hopping amplitude from orbital α at site i to orbital β at site j. Intrasite orbital hopping cancels since orbitals at the same site are orthogonal. 
States of interest have a fixed magnetic moment S=1 per site (atom); therefore, Hund's couplings J_H and J_H^' are included at Ni and sulfur sites respectively <cit.>. Altogether the microscopic model becomes: H = ∑_i≠ j α≠β σt_αβ^ij(c_iασ^† c_jβσ+H.c.) + U∑_i,α n_iα↑n_iα↓-J_H∑_i,σ,α≠β n_iασn_iβσ - J_H^'∑_i,α≠β (c_iα↑^†c_iα↓c_iβ↓^†c_iβ↑+H.c.) In NiPS_3 the spin exchange at the nearest neighbor level results from a competence between direct overlap of d orbitals and indirect hopping mediated by sulfur atoms <cit.>. In the direct process, electrons hop between Ni orbitals at different sites of the honeycomb lattice. The indirect exchange mechanism known as superexchange <cit.> is mediated by the virtual hopping to two sulfur ions in between the two Ni atoms. This is a more realistic situation for NiPS_3 because in transition metals compounds, the overlap between d orbitals separated a distance r decays as ∼ 1/r^5 <cit.>. Consider the two initial states of Fig.<ref>(c) where spins at different Ni sites are antiparallel. They are denoted |1,0⟩|1,0⟩ and |1,1⟩|1,-1⟩, according to the notation |S^1,m^1⟩|S^2,m^2⟩ where S^k and m^k represent respectively the total spin quantum number of site k and the z component of the total spin. With four available half-filled d-orbitals, up to four electrons could hop. The electron hopping from the initial states with S^k=1 gives rise to intermediate states with one, two, three, and four double-occupied d-orbitals where S^k≠ 1. Indirect interactions across intermediate states mediate interactions between |1,0⟩|1,0⟩ and |1,1⟩|1,-1⟩. To integrate out high energy states, the electron Hamiltonian matrix from Eq.<ref> was separated into H=H̃_00+H̃+T̃. H̃_00 and H̃ contain the on-site contributions due to Coulomb interactions and Hund's exchange of the low-energy states with no double occupied d orbitals, (H̃_00), and the high energy states with at least one double occupied d-orbital (H̃). T̃ contains off-diagonal terms due to electron hopping (details in supplementary file <cit.>). H was partitioned into blocks where diagonal matrices contain the energy of the basis states with k double occupied d-orbitals (H̃_kk). Off diagonal blocks are hopping matrices T̃_k-1k that connect states with k-1 and k double occupied d-orbitals: H=[ H̃_00 T̃_01 0̃ 0̃ 0̃; T̃_10 H̃_11 T̃_12 0̃ 0̃; 0̃ T̃_21 ... ... ...; 0̃ 0̃ ... H̃_k-1k-1 T̃_k-1k; 0̃ 0̃ ... T̃_kk-1 H̃_kk ] Effective Spin Hamiltonian. The subspaces are decoupled through a canonical transformation using the perturbative approach of Lodwin <cit.> and Schrieffer-Wolff <cit.> where H̃ and T̃ are treated as perturbation (details in the supplementary file <cit.>). In this way, high energy states are down-folded into the energetically well-separated sector of interacting spins of constant quantum number at each site <cit.>. This is a consequence of Hund coupling and onsite Coulomb interactions, which are large with respect to the hopping amplitudes as in transition metal compounds <cit.>. For nearest neighbor Ni atoms (1-nn) in NiPS_3, sulfur ions form a ninety degrees bridge between the two Ni sites, Fig.<ref>(a) and Fig.<ref>(a). By symmetry, there is only hopping between d and p orbitals that point to each other. Therefore in the superexchange process at 1-nn level at each Ni site, one of the d-orbitals will overlap with the p_x orbital of one of the sulfur atoms, and the other will overlap with the p_y orbital of the second, Fig<ref>(a) <cit.>. 
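The downfolding of this block structure onto the low-energy sector can be illustrated with a generic toy example: Löwdin partitioning replaces the coupled problem by an energy-dependent effective Hamiltonian H_eff(E) = H̃_00 + T̃_01(E − H̃_11)^{-1}T̃_10 acting on the states with no doubly occupied d-orbitals, which at lowest order reproduces the second-order Schrieffer-Wolff corrections. The sketch below uses random matrices and keeps only one high-energy block, so it is not the actual NiPS_3 orbital problem (which is carried to fourth order, as described next); it merely checks that the lowest eigenvalues of the full Hamiltonian are recovered when the two sectors are well separated.

```python
# Toy Loewdin / Schrieffer-Wolff downfolding:
#   H = [[H00, T], [T^T, H11]],  H_eff(E) = H00 + T (E - H11)^{-1} T^T.
import numpy as np

rng = np.random.default_rng(1)
n_low, n_high, Delta = 2, 6, 50.0

H00 = np.diag([0.0, 0.5])                                  # low-energy sector near E ~ 0
H11 = Delta * np.eye(n_high) + np.diag(rng.uniform(0, 2, n_high))
T   = rng.normal(scale=1.0, size=(n_low, n_high))          # weak coupling, |T| << Delta

H_full = np.block([[H00, T], [T.T, H11]])
exact  = np.sort(np.linalg.eigvalsh(H_full))[:n_low]

E0    = 0.0                                                # expansion point in the low sector
H_eff = H00 + T @ np.linalg.inv(E0 * np.eye(n_high) - H11) @ T.T
approx = np.sort(np.linalg.eigvalsh(H_eff))

print("exact lowest levels :", np.round(exact, 3))
print("downfolded levels   :", np.round(approx, 3))   # agree up to higher orders in |T|/Delta
```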
The superexchange Hamiltonian matrix contains five diagonal blocks with the energy of: the two initial states, H̃_00, the eight excited states with one double occupied d-orbital, H̃_11, the twelve states with two double occupied d-orbitals, H̃_22, the eight with three double occupied d-orbitals, H̃_33, and the two excited states with four double occupied d-orbitals, H̃_44. The off diagonal matrix elements consist of hopping matrices (T̃_01, T̃_12, T̃_23, T̃_34) between the basis states. Direct exchange between 1-nn is also considered <cit.>. The direct overlap is negligible for the third nearest neighbor Ni atoms (3-nn), and the angle between two 3-nn Ni and the two sulfur ions in between is larger than ninety degrees, Fig.<ref>. Consequently, superexchange between two 3-nn Ni is mediated by a single p orbital <cit.>, which serves the two Ni ions, Fig<ref>(b). At 3-nn level, we only consider bilinear spin exchanges; therefore, only virtual hopping of one and two electrons are considered <cit.>. Second neighbors Ni atoms (2-nn) do not share a common S ion in NiPS_3, (Fig.<ref>); therefore, the electron exchange proceeds via direct overlap, which for 2-nn we neglected. Downfolding the high energy states by going up to fourth order in perturbation theory <cit.> and by writing spin operators in terms of electron operators c_iασ,c_iασ^† <cit.> yields the effective Hamiltonian of the electron system <cit.>. Mediated by two orthogonal p orbitals, the spin couplings J_1^s<0 and K_1<0 are ferromagnetic, and they correspond respectively to bilinear and biquadratic 1-nn spin interactions originated from four, six and eight virtual hoppings. |J_1^D|<|J_1^s| originates from a direct exchange between Ni atoms and is antiferromagnetic. J_3>0 originates from four virtual hopping mediated by a single p orbital and consequently is antiferromagnetic <cit.>. Explicit expressions for J_1=J_1^s+J_1^D, J_3 and K_1 are shown in Table<ref>. There, terms 𝒪(t_pd^4) arise from two double occupied d-orbitals. Terms proportional to 𝒪(t_pd^6) correspond to three double occupied d orbitals from six p-d hopping processes, and the term 𝒪(t_pd^8) accounts for four double occupied d-orbitals. Terms proportional to 𝒪(t_dd^2) are due to one doubly occupied d-orbital due to direct exchange. While in bulk NiPS_3 the octahedral crystal field favors the doublet d_x^2-y^2, d_3z^2-r^2, in monolayer the trigonal crystal favors the doublet d_xz, d_yz Fig.<ref>. We assume that the different symmetry of the participating d-orbitals in bulk and monolayer does not change the superexchange mechanism presented above; however, it affects the values of the hopping integrals according to the Slater-Koster scheme <cit.> as shown in Table<ref>. Taking into account Slater-Koster rules, considering all possible combinations of d and p orbital hopping <cit.> and using values of U, Δ and J_H from the literature <cit.> the spin couplings from Table<ref> in bulk and monolayer NiPS_3 are evaluated and shown in Table<ref>. Phase diagram at T=0. To study magnetic orders of Eq.<ref> at T=0, we consider the trial ground state |ψ⟩=⊗_i|ψ_i⟩ <cit.>. It consists of an entanglement-free direct product of arbitrary wavefunctions with spin S=1 at each site <cit.>. 
A general single spin state can be written as the coherent state |ψ_i⟩=b_ix|x⟩+b_iy|y⟩+b_iz|z⟩ where b_i is an arbitrary complex vector satisfying the normalization constraint b_i^*·b_i=1, and we have chosen the time-reversal invariant basis of the SU(3) (S=1) fundamental representation |x⟩=i|1⟩-i|1̃⟩/√(2) , |y⟩=i|1⟩+i|1̃⟩/√(2) , |z⟩=-i|0⟩ where |1⟩,|1̃⟩ and |0⟩ are the three cartesian spin-1 states quantized along the z axis <cit.>. The basis states satisfy S^x|x⟩=S^y|y⟩=S^z|z⟩=0. The magnetization of the system is defined through the expectation value of the spin at each site <cit.> M=∑_k⟨ψ_k| S_k|ψ_k⟩=-i∑_k b_k^*× b_k In terms of the complex vectors b_i the expectation value of the J_1-J_3-K_1 spin Hamiltonian of Eq.<ref> becomes ⟨ψ|ℋ|ψ⟩ = ∑_<ij>[-J_1| b_i^*· b_j|^2+(J_1-K_1)| b_i· b_j|^2] + J_3∑_(ij)[| b_i^*· b_j|^2-| b_i· b_j|^2] Because of the biquadratic interaction, we investigate a possible quadrupolar order, QP in the system. To that purpose we introduce the quadrupolar operator QP, a tensor with five components Q_i=[Q_i^x^2-y^2,Q_i^3z^2-r^2,Q_i^xy,Q_i^yz,Q_i^xz]=[(S_i^x)^2-(S_i^y)^2,(2(S_i^z)^2-(S_i^x)^2-(S_i^y)^2)/√(3),S_i^xS_i^y+S_i^yS_i^x,S_i^yS_i^z+S_i^zS_i^y,S_i^xS_i^z+S_i^zS_i^x]. QP can be expressed in terms of the complex b vectors, ⟨ψ_i| Q_iμ,ν|ψ_i⟩=1/3δ_μ,ν-1/2(b_iμ^*b_iν+b_iν^*b_iμ) Using the identity Q_i· Q_j=2( S_i· S_j)^2+ S_i· S_j-2/3, in terms of the QP operators the J_1-J_3-K_1 Hamiltonian, Eq.<ref> can be written as: ℋ = -(J_1-K_1/2)∑_<ij>( S_i· S_j) +J_3∑_(ik)( S_i· S_k) - K_1/2∑_<ij>( Q_i· Q_j)-∑_i4/3K To find variational ground states of the spin Hamiltonian, Eq.<ref> was minimized respect to all b_i vectors in a hexagonal cluster of twenty-two spins. Fig.<ref> presents the corresponding -J_3/J_1 v/s K_1/J_1 phase diagram of Eq.<ref> at T=0. We considered the range of J_3 and K_1 according to results in bulk samples from Tables<ref> and <ref>. Four magnetic phases and one quadrupolar order have been identified. For all values of K_1 and for J_3 in the range 0≤|J_3|≤0.4J_1 the system settles in a ferromagnetic state F, but as J_3 enters in the range 0.4J_1≤|J_3|≤0.6J_1 spins rearrange into incommensurate spirals IS. For J_3≥ 0.6J_1 two magnetic phases can arise. At K_1>0.4 a zig-zag phase with spins canted out of the x-y plane competes with a commensurate spiral magnetic order CS with wavevector q_x^cs=π/2. The zig-zag phase with zero average spin moment coexists with a uniaxial ferroquadrupolar order FQ <cit.>. The CS order (inset of Fig.<ref>) is a noncoplanar spiral where generally spins rotate out of the x-y plane and give rise to non zero QP moments <cit.>. Magnetic phases and quadrupolar order were identified by inspecting the ground state spin textures obtained from the variational results and were confirmed by computing the space correlation functions through the static structure factor of the magnetization and quadrupolar order parameters C, S( q)=e^i r_ij· q⟨ C_iC_j⟩ for q∈ (0,π). Phase boundaries were identified by computing the second derivative of the ground state energy, with respect to the couplings J_3 and K_1, and looking for singular features indicative of changes in the ground state phases. Quadrupolar and spin correlations grow toward larger values of K_1/J_1, and from IS, toward larger values of |J_3/J_1|. Inspecting Eq.<ref>, in the case of J_3=0 a ferromagnetic ground state is expected as long as (J_1-K_1/2)>0. Finite values of K_1 favor a FQ order. In the case (J_1-K_1/2)<0 and J_3≥ 0 the ground state could become a Neel state. 
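The variational search described above can be sketched as follows. This is not the authors' code: the cluster, bond lists, and coupling values are placeholders (the paper minimizes over a twenty-two-site hexagonal cluster with the couplings of Table<ref>). The energy function implements ⟨ψ|ℋ|ψ⟩ written in terms of the complex b_i vectors, and the site magnetizations are recovered from M_i=-i b_i^*×b_i.

```python
import numpy as np
from scipy.optimize import minimize

# Schematic minimization of <psi|H|psi> over normalized single-site vectors b_i.
J1, K1, J3 = -1.0, -0.1, 0.5          # placeholder values, NOT the ab-initio couplings
nn1 = [(0, 1), (1, 2), (2, 3)]        # first-neighbor bonds of a toy 4-site fragment (placeholder)
nn3 = [(0, 3)]                        # a "third-neighbor" bond (placeholder)
nsite = 4

def unpack(x):
    b = (x[:3 * nsite] + 1j * x[3 * nsite:]).reshape(nsite, 3)
    return b / np.linalg.norm(b, axis=1, keepdims=True)    # enforce b_i^* . b_i = 1

def energy(x):
    b = unpack(x)
    e = 0.0
    for i, j in nn1:                  # -J1 |b_i^* . b_j|^2 + (J1 - K1) |b_i . b_j|^2
        e += -J1 * abs(np.vdot(b[i], b[j]))**2 + (J1 - K1) * abs(b[i] @ b[j])**2
    for i, j in nn3:                  # J3 ( |b_i^* . b_j|^2 - |b_i . b_j|^2 )
        e += J3 * (abs(np.vdot(b[i], b[j]))**2 - abs(b[i] @ b[j])**2)
    return e

res = minimize(energy, np.random.default_rng(1).normal(size=6 * nsite), method="BFGS")
b_opt = unpack(res.x)
M = (-1j * np.cross(b_opt.conj(), b_opt)).real     # site magnetizations M_i = -i b_i^* x b_i
print(round(res.fun, 4), M.round(3))
```

Repeating the minimization from several random starting points, and scanning the couplings, is how the phase boundaries of the diagram discussed above can be traced in practice.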
Results from Monte Carlo simulations of an anisotropic Heisenberg model with first and third nearest-neighbor interactions using ab-initio parameters support the hypothesis that a Neel order could compete with the zig-zag in NiPS_3 <cit.>. In our case, Table<ref> shows K_1 to be about one order of magnitude smaller than J_1; thus, a Neel state is unlikely to occur, at least in bulk. For J_3≥ 0 and (J_1-K_1/2)>0, magnetic frustration could lead to disordered phases. If J_3 is large enough, a solution is the zig-zag state, where a given spin is aligned ferromagnetically with two of its nearest neighbors in its chain and antiferromagnetically with the third one in a neighboring chain. Nevertheless, the zig-zag state becomes more difficult to accomplish as J_3 and K_1 approach the limit |J_1-K_1/2|∼ |J_3|. In this case the IS seems a plausible alternative. To further investigate the spiral order, we introduced the ansatz b_i=(sinθcos( q· r_i)e^iϕ,sinθsin( q· r_i)e^iϕ,cosθ) <cit.> for the vector of amplitudes b_i describing the wavefunction at a single site located at position r_i in the honeycomb lattice. Now Eq.<ref> was minimized with respect to q, θ, ϕ in the hexagonal cluster with twenty-two spins introduced before, and the results were compared with the variational ones, confirming the zig-zag, F, and QP orders, as well as the wavevector q_x^cs=π/2. In the range of parameters of the IS we found ordering wavevectors 1.10≲|q^is|≲ 1.15. The orientation of the magnetic moments in NiPS_3 is influenced by a biaxial magnetocrystalline anisotropy consisting of a dominant easy-plane anisotropy that locks the orientation of the spins to a magnetic x-y plane, slightly inclined from the crystallographic ab plane, and a secondary weaker anisotropy that orients the spins in the magnetic x-y plane along the x-axis <cit.>. To determine its effects on the phase diagram of Fig.<ref> we added to Eq.<ref> the term A∑_j(S_j^z)^2=A∑_j[i (b_j^*× b_j)·ẑ]^2, where A plays the role of the anisotropy coupling. Variational ground states were subsequently computed for the range of parameters of Fig.<ref> and 0≤ A≤ 2× 10^-2J_1, according to reported values of A <cit.>. The resulting magnetic phases coincide with the ones of Fig.<ref>. Now F, zig-zag, CS, and IS settle near the x-y plane, and the QP phase develops a uniaxial director vector along the ẑ axis. At A∼ 10^-1J_1, the zig-zag phase settles in the x-y plane, slightly inclined out of it by an angle of ∼ 8 degrees. The boundary between the zig-zag and spiral phases moved slightly toward smaller K_1 and J_3 after the inclusion of anisotropy <cit.>. Discussion. Depending on the relative strengths of the spin couplings, Eq.<ref> gives rise to four magnetic phases at T=0. The zig-zag phase, which has been found in bulk samples of NiPS_3, is the most likely to occur and competes with spiral magnetic patterns when the available values for the Coulomb repulsion, Hund coupling and d-p gaps from DFT calculations are used in the computed bulk spin exchange couplings. One aspect that has remained controversial is whether or not the zig-zag order survives down to the monolayer limit <cit.>. The crystal fields of bulk and monolayer NiPS_3 differ <cit.>, such that the active d orbitals in the two cases have different symmetry. We have shown that a possible consequence is a relative decrease of the spin couplings in monolayer systems. 
Crucially, the positive and very small J_1 of monolayers (Table<ref>) could drive samples to an IS in the best-case scenario or to a disordered phase; which path monolayers take is unknown at this point: for such a small J_1, interactions such as second-nearest-neighbor exchange could play a role, and effects such as disorder could become relevant <cit.>. One important aspect not considered here is the deviation from ninety degrees of the angle between Ni and sulfur atoms in monolayers. According to the Goodenough-Kanamori rules <cit.>, such deviations could favor a superexchange mediated by a single p orbital, which would increase the magnitude of the antiferromagnetic J_1 in monolayer samples. If that were the case, a scenario different from the one shown in Fig.<ref> would arise. Preliminary variational calculations hint that a Neel order could be favored. Acknowledgments. This work was supported by Fondecyt under Grant No. 1210083. The author thanks Je-Geun Park, Joerg Schmalian, and Alexander Mirlin for valuable discussions.
[Burch et al. (2018)] K. S. Burch, D. Mandrus, and J.-G. Park, Nature 563, 47 (2018).
[Seifert et al. (2022)] U. F. Seifert, M. Ye, and L. Balents, Physical Review B 105, 155138 (2022).
[Kim et al. (2018)] S. Y. Kim, T. Y. Kim, L. J. Sandilands, S. Sinn, M.-C. Lee, J. Son, S. Lee, K.-Y. Choi, W. Kim, B.-G. Park, et al., Physical Review Letters 120, 136402 (2018).
[Gu et al. (2019)] Y. Gu, Q. Zhang, C. Le, Y. Li, T. Xiang, and J. Hu, Physical Review B 100, 165405 (2019).
[Rosenblum et al. (1994)] S. Rosenblum, A. Francis, and R. Merlin, Physical Review B 49, 4352 (1994).
[Rosenblum and Merlin (1999)] S. Rosenblum and R. Merlin, Physical Review B 59, 6317 (1999).
[Basnet et al. (2021)] R. Basnet, A. Wegner, K. Pandey, S. Storment, and J. Hu, Physical Review Materials 5, 064413 (2021).
[Afanasiev et al. (2021)] D. Afanasiev, J. R. Hortensius, M. Matthiesen, S. Mañas-Valero, M. Šiškins, M. Lee, E. Lesne, H. S. van der Zant, P. G. Steeneken, B. A. Ivanov, et al., Science Advances 7, eabf3096 (2021).
[Belvin et al. (2021)] C. A. Belvin, E. Baldini, I. O. Ozel, D. Mao, H. C. Po, C. J. Allington, S. Son, B. H. Kim, J. Kim, I. Hwang, et al., Nature Communications 12, 1 (2021).
[Kang et al. (2020)] S. Kang, K. Kim, B. H. Kim, J. Kim, K. I. Sim, J.-U. Lee, S. Lee, K. Park, S. Yun, T. Kim, et al., Nature 583, 785 (2020).
[Ergeçen et al. (2022)] E. Ergeçen, B. Ilyas, D. Mao, H. C. Po, M. B. Yilmaz, J. Kim, J.-G. Park, T. Senthil, and N. Gedik, Nature Communications 13, 98 (2022).
[Basnet et al. (2022)] R. Basnet, K. M. Kotur, M. Rybak, C. Stephenson, S. Bishop, C. Autieri, M. Birowska, and J. Hu, Physical Review Research 4, 023256 (2022).
[Mak et al. (2019)] K. F. Mak, J. Shan, and D. C. Ralph, Nature Reviews Physics 1, 646 (2019).
[Lançon et al. (2018)] D. Lançon, R. Ewings, T. Guidi, F. Formisano, and A. Wildes, Physical Review B 98, 134414 (2018).
[Kertesz and Hoffmann (1984)] M. Kertesz and R. Hoffmann, Journal of the American Chemical Society 106, 3453 (1984).
[Hempel and Miller (1981)] J. C. Hempel and M. E. Miller, The Journal of Chemical Physics 75, 2959 (1981).
[Kim and Park (2021)] T. Y. Kim and C.-H. Park, Nano Letters 21, 10114 (2021).
[Chittari et al. (2016)] B. L. Chittari, Y. Park, D. Lee, M. Han, A. H. MacDonald, E. Hwang, and J. Jung, Physical Review B 94, 184428 (2016).
[Lane and Zhu (2020)] C. Lane and J.-X. Zhu, Physical Review B 102, 075124 (2020).
[Ushakov et al. (2013)] A. Ushakov, D. Kukusta, A. Yaresko, and D. Khomskii, Physical Review B 87, 014418 (2013).
[Wildes et al. (2015)] A. R. Wildes, V. Simonet, E. Ressouche, G. J. McIntyre, M. Avdeev, E. Suard, S. A. Kimber, D. Lançon, G. Pepe, B. Moubaraki, et al., Physical Review B 92, 224408 (2015).
[Wildes et al. (2022)] A. Wildes, J. Stewart, M. Le, R. Ewings, K. Rule, G. Deng, and K. Anand, Physical Review B 106, 174422 (2022).
[Hwangbo et al. (2021)] K. Hwangbo, Q. Zhang, Q. Jiang, Y. Wang, J. Fonseca, C. Wang, G. M. Diederich, D. R. Gamelin, D. Xiao, J.-H. Chu, et al., Nature Nanotechnology 16, 655 (2021).
[Brec (1986)] R. Brec, Solid State Ionics 22, 3 (1986).
[Olsen (2021)] T. Olsen, Journal of Physics D: Applied Physics 54, 314001 (2021).
[Chandrasekharan and Vasudevan (1994)] N. Chandrasekharan and S. Vasudevan, Journal of Physics: Condensed Matter 6, 4569 (1994).
[Kim et al. (2019)] K. Kim, S. Y. Lim, J.-U. Lee, S. Lee, T. Y. Kim, K. Park, G. S. Jeon, C.-H. Park, J.-G. Park, and H. Cheong, Nature Communications 10, 1 (2019).
[Harrison (2012)] W. A. Harrison, Electronic Structure and the Properties of Solids: The Physics of the Chemical Bond (Courier Corporation, 2012).
[Autieri et al. (2022)] C. Autieri, G. Cuono, C. Noce, M. Rybak, K. M. Kotur, C. E. Agrapidis, K. Wohlfeld, and M. Birowska, The Journal of Physical Chemistry C 126, 6791 (2022).
[Koo et al. (2021)] H.-J. Koo, R. Kremer, and M.-H. Whangbo, Molecules 26, 1410 (2021).
[Anderson (1950)] P. W. Anderson, Physical Review 79, 350 (1950).
[Koch (2012)] E. Koch, Correlated Electrons: From Models to Materials 2, 1 (2012).
[Mila and Zhang (2000)] F. Mila and F.-C. Zhang, The European Physical Journal B - Condensed Matter and Complex Systems 16, 7 (2000).
[Supplemental material] See supplemental material for details.
[Löwdin (1951)] P.-O. Löwdin, The Journal of Chemical Physics 19, 1396 (1951).
[Schrieffer and Wolff (1966)] J. R. Schrieffer and P. A. Wolff, Physical Review 149, 491 (1966).
[Bravyi et al. (2011)] S. Bravyi, D. P. DiVincenzo, and D. Loss, Annals of Physics 326, 2793 (2011).
[Hoffmann and Blügel (2020)] M. Hoffmann and S. Blügel, Physical Review B 101, 024418 (2020).
[Slater and Koster (1954)] J. C. Slater and G. F. Koster, Physical Review 94, 1498 (1954).
[Läuchli et al. (2006)] A. Läuchli, F. Mila, and K. Penc, Physical Review Letters 97, 087205 (2006).
[Ivanov and Kolezhuk (2003)] B. A. Ivanov and A. K. Kolezhuk, Physical Review B 68, 052401 (2003).
[Stoudenmire et al. (2009)] E. Stoudenmire, S. Trebst, and L. Balents, Physical Review B 79, 214436 (2009).
[Goodenough (1955)] J. B. Goodenough, Physical Review 100, 564 (1955).
[Kanamori (1959)] J. Kanamori, Journal of Physics and Chemistry of Solids 10, 87 (1959).
http://arxiv.org/abs/2307.00260v1
20230701075054
Bootstrapping the Cross-Validation Estimate
[ "Bryan Cai", "Fabio Pellegrini", "Menglan Pang", "Carl de Moor", "Changyu Shen", "Vivek Charu", "Lu Tian" ]
stat.ME
[ "stat.ME", "math.ST", "stat.ML", "stat.TH" ]
Bootstrapping the Cross-Validation Estimate
Bryan Cai^1, Fabio Pellegrini^2, Menglan Pang^2, Carl de Moor^2, Changyu Shen^2, Vivek Charu^3, and Lu Tian^4
^1: Department of Computer Science, Stanford University
^2: Biogen Inc.
^3: Department of Medicine, Stanford University
^4: Department of Biomedical Data Science, Stanford University
August 1, 2023
Cross-validation is a widely used technique for evaluating the performance of prediction models. It helps avoid the optimism bias in error estimates, which can be significant for models built using complex statistical learning algorithms. However, since the cross-validation estimate is a random value dependent on observed data, it is essential to accurately quantify the uncertainty associated with the estimate. This is especially important when comparing the performance of two models using cross-validation, as one must determine whether differences in error estimates are a result of chance fluctuations. Although various methods have been developed for making inferences on cross-validation estimates, they often have many limitations, such as stringent model assumptions. This paper proposes a fast bootstrap method that quickly estimates the standard error of the cross-validation estimate and produces valid confidence intervals for a population parameter measuring average model performance. Our method overcomes the computational challenge inherent in bootstrapping the cross-validation estimate by estimating the variance component within a random effects model. It is just as flexible as the cross-validation procedure itself. To showcase the effectiveness of our approach, we employ comprehensive simulations and real data analysis across three diverse applications. Keywords: random effects model, mean absolute prediction error, c-index, individualized treatment response score § INTRODUCTION Predictive modeling has emerged as a prominent tool in biomedical research, encompassing diverse applications such as disease diagnosis, patient risk stratification, and personalized treatment recommendations, as seen in studies such as <cit.>. A wide range of methods have been employed to create prediction models, from basic linear regression to sophisticated deep learning algorithms. Once the models have been developed, it is crucial to assess their performance for a number of reasons. Firstly, the results from a model cannot be effectively utilized or interpreted without understanding its accuracy. For instance, a positive result from a model with a positive predictive value (PPV) of 20% will be treated differently from a positive result with a PPV of 1% by both physicians and patients. Secondly, with a wealth of prediction tools at hand, choosing the best model from a set of models can be a challenge, with multiple factors influencing the decision, including cost, interpretability, and, most importantly, the model's future performance in a population. This performance can be measured in various ways, depending on the intended application of the model. 
If the model aims to predict clinical outcomes, its accuracy can be measured by a prediction accuracy metric, such as mean squared prediction error for continuous outcomes or a receiver operating characteristics (ROC) curve for classification. In other cases, the performance measure can be more complex. If the model is used to recommend treatment to individual patients, the model performance can be measured by the observed treatment benefit among patients who are recommended to receive a particular treatment according to the model. Lastly, even in the model construction step, evaluating the model performance is often needed for optimal tuning. For example, in applying neural networks, the network structure needs to be specified by the analyst to have best prediction performance. Cross-validation is a commonly used technique to assess the performance of a predictive model and overcome the over-optimistic bias that results from using the same data for both training and evaluating the model <cit.>. The approach involves using out-of-sample observations to evaluate model performance, thus avoiding optimism bias. The resulting cross-validation estimator is a random quantity, dependent on both the random splitting of data into training and testing sets, and the observed data itself. To reduce the randomness due to the splitting of data, one can repeat the training and testing process multiple times and average the prediction performance on the testing data. The randomness inherent in the observed data reflects the fact that if a new set of data were randomly sampled from the underlying population, the cross-validation results would be different from those based on the current observed data. In essence, the cross-validation estimator is a statistic, or a function of random data, despite its complex construction. Realizing this fact, it is important to derive and estimate the distribution of this statistic so that we may (1) understand what population parameter the cross-validation procedure estimates and (2) attach an appropriate uncertainty measure to the cross-validation estimate <cit.>. For the simple case of large sample size and small number of parameters, the asymptotic distribution of the cross-validation estimator has been studied in depth <cit.>. For example, when the model training and validation are based on the same loss function, the cross-validation estimator is asymptotically Gaussian <cit.>. A computationally efficient variance estimator for the cross-validated area under the ROC curve has also been proposed, if the parameters used in the classifier are estimated at the root n rate <cit.>. More recently, Bayle et al. have established the asymptotic normality of general K-fold cross-validated prediction error estimates and proposed a consistent variance estimator under a set of stability conditions <cit.>. The learning algorithm can be general and flexible, but the error estimate in the validation set needs to be in the form of sum of identically independent distributed random elements. The validity of their proposed variance estimate requires that the variation from model training is dominated by that in estimating the prediction error in the testing set. Along a similar line, Austern and Zhou (2020) have also studied the asymptotic distribution of K fold cross-validation estimates allowing K increases with the sample size <cit.>. The proposed variance estimator is very similar to that in <cit.>. 
Asymptotic normality of the cross-validation estimator has been established when the prediction model is trained on the same loss as is used for evaluation, the same case as in <cit.>. A nested cross-validation method has been introduced to automatically quantify the uncertainty of the cross-validation estimate and construct confidence intervals for the model performance <cit.>. The key is to use an additional loop of cross-validations to correct the finite-sample bias of the variance estimator proposed in <cit.>. However, this method still requires specific forms for the performance measure of interest and may not be applicable in certain applications. The majority of previous work on cross-validation assumes a simpler form for the performance measure, such as the average of a random variable, and requires that the prediction model be trained using the same loss function. However, there are applications of cross-validation not covered by these conventional approaches, as demonstrated in Example 3 of the paper. Additionally, the validity of the proposed confidence intervals relies on suitable stability conditions and large sample approximations. Although resampling methods are a well-known approach for estimating the variance of a statistic and can provide fairly accurate approximations even with small to moderate sample sizes, the main challenge with using them in this setting is the computational burden. This paper aims to characterize the underlying population parameter estimated by the cross-validation procedure and to propose a general, computationally efficient resampling method for constructing a confidence interval for this population parameter, removing as many restrictions as possible. § METHOD In a very general setup, we use a random vector X to denote individual observations, and the observed data consist of n independent identically distributed (i.i.d.) copies of X, i.e., D={X_1,⋯, X_n}. The output of the training procedure is a parameter estimate, which can be a finite-dimensional vector or an infinite-dimensional function, denoted by ψ̂(D) to emphasize its dependence on the observed data D and the fact that it is a random quantity. In evaluating the “performance” of ψ̂(D) in a new testing set consisting of m i.i.d. observations D̃={X̃_1, ⋯, X̃_m}, a summary statistic is calculated as a function of the testing data and ψ̂(D), which can be written as L{D̃, ψ̂(D)}. It is possible to make inferences and derive confidence intervals for L{D̃, ψ̂(D)} by treating D̃ or both D and D̃ as random. In most applications, however, we only have a single dataset, and the cross-validation procedure is needed in order to objectively evaluate the model performance. Specifically, in cross-validation, we randomly divide the observed data D into two non-overlapping parts denoted by D_train and D_test, and calculate L{D_test, ψ̂(D_train)}. In order to reduce the variability of random splits, the aforementioned step is oftentimes repeated many times and the average performance measure is obtained as the final cross-validation estimator of the model performance: Err^CV=1/B_CV∑_b=1^B_CV L{D_test^b, ψ̂(D_train^b)}, where D=D_train^b∪ D_test^b represents the bth split. The number of replications, B_CV, needs to be relatively large to reduce the Monte-Carlo variation due to random splits. It is often in the range of several hundreds in practice. 
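As an illustration of the repeated random-split estimator Err^CV just defined, the following minimal Python sketch (not from the paper) plugs in ordinary least squares as the training algorithm ψ̂ and the mean absolute prediction error as the performance functional L, anticipating Application 1 below; the function names, the simulated data, and the choice of B_CV are placeholders rather than the authors' implementation.

```python
import numpy as np

def cv_estimate(X, y, fit, evaluate, m, B_CV=200, seed=0):
    """Repeated random-split cross-validation: average of L{D_test^b, psi_hat(D_train^b)}."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for _ in range(B_CV):
        perm = rng.permutation(n)
        train, test = perm[:m], perm[m:]
        psi_hat = fit(X[train], y[train])
        scores.append(evaluate(X[test], y[test], psi_hat))
    return np.mean(scores)

# Placeholder choices: OLS training and mean absolute prediction error.
def fit_ols(X, y):
    Xt = np.column_stack([np.ones(len(y)), X])   # add intercept, i.e. Z-tilde
    return np.linalg.lstsq(Xt, y, rcond=None)[0]

def mae(X, y, beta):
    Xt = np.column_stack([np.ones(len(y)), X])
    return np.mean(np.abs(y - Xt @ beta))

rng = np.random.default_rng(1)
Z = rng.normal(size=(90, 10))
Y = Z[:, :4].sum(axis=1) + rng.normal(size=90)   # simulated data in the spirit of the toy example below
print(cv_estimate(Z, Y, fit_ols, mae, m=80, B_CV=200))
```

Any training routine and any evaluation functional with the same call signatures can be substituted for `fit_ols` and `mae`, which is the flexibility the general framework above is meant to capture.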
Note that although we use Err to represent the model performance, consistent with the notation used in the literature <cit.>, the performance measure is not necessarily a prediction error. It can be another metric, with a higher value indicating good performance, as discussed in the following sections. §.§ Applications Many cross-validation applications can fit into the very general framework described above. In this paper, we will focus on several typical examples. §.§.§ Application 1 In the first example, we are interested in evaluating the performance of predicting continuous outcomes, measured by the mean absolute prediction error <cit.>. The result can help us to determine, for example, whether a newly developed prediction algorithm significantly outperforms an existing model. The prediction model can be constructed by fitting a standard linear regression model, i.e., calculating a regression coefficient vector β̂(D_train) by minimizing the L_2 loss function ∑_X_i∈ D_train(Y_i- β'Z̃_i )^2 based on a training dataset D_train, where Z_i is the baseline covariate vector for the ith patient and Z̃_i=(1, Z_i')' includes an intercept. If nonlinear prediction models are considered, then one may construct the prediction model via a more flexible machine learning algorithm such as a random forest or a neural network. In all cases, the prediction error in a testing set D_test is calculated as θ̂(D_train, D_test)=1/n_test∑_X_i ∈ D_test|Y_i-β̂(D_train)'Z̃_i|, where n_test is the number of observations in the testing set. In cross-validation, we may repeatedly split an observed dataset D into training and testing sets of given sizes, (D_train, D_test), and obtain the corresponding cross-validated mean absolute prediction error estimators, θ̂(D_train, D_test). In the end, the sample average of those resulting cross-validated mean absolute prediction error estimators becomes the final estimator measuring the predictive performance of the linear model. In this application, X=(Z, Y) with Z and Y being the predictor and outcome of interest, respectively, ψ̂(D)=β̂(D), and the summary statistic measuring the prediction performance is: L(D_test, ψ)=1/n_test∑_X_i∈ D_test|Y_i-ψ'Z̃_i|. §.§.§ Application 2 In the second example, we are interested in evaluating the performance of a prediction model in predicting binary outcomes by its c-index for new patients, which is the area under the ROC curve <cit.>. The result can help us to determine, for example, whether the c-index of a new prediction model is significantly higher than 0.5 or another desirable level. The prediction model can be constructed via fitting a logistic regression model, i.e., calculating a regression coefficient vector β̂(D_train) by maximizing the log-likelihood function ∑_( Z_i,Y_i)∈ D_train[ β'Z̃_i Y_i-log{1+exp(β'Z̃_i) }] based on a training dataset D_train. If the dimension of the predictor Z_i is high, a lasso regularization can be employed in estimating β. In any case, a concordance measure, the c-index, in a testing set D_test can be calculated as θ̂(D_train, D_test)=1/(ñ_test, 0ñ_test, 1)∑_X_i ∈ D_test(0)∑_X_j∈ D_test(1)I(β̂(D_train)'Z̃_i<β̂(D_train)'Z̃_j ), where ñ_test,g is the number of observations in the set D_test(g)={(Z_i,Y_i)∈ D_test: Y_i=g}, g∈{0, 1}. In cross-validation, we may repeatedly split the observed dataset D into training and testing sets, (D_train, D_test), and obtain the corresponding cross-validated c-indexes θ̂(D_train, D_test). 
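For concreteness, a small numpy sketch of the c-index estimator just defined is given below. It takes risk scores β̂'Z̃ from any fitted model; the function name is ours, and ties are handled with the common 1/2 convention, whereas the display above uses only the strict inequality.

```python
import numpy as np

def c_index(score, y):
    """Empirical c-index: proportion of (control, case) pairs with score_control < score_case."""
    s0, s1 = score[y == 0], score[y == 1]
    diff = s1[None, :] - s0[:, None]         # all pairwise differences case - control
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)   # ties counted as 1/2 (a common convention)

# Illustrative use with scores from an arbitrary fitted linear predictor beta_hat:
# risk = Z_tilde_test @ beta_hat; print(c_index(risk, y_test))
```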
In the end, the sample average of those resulting cross-validated c-index estimator is our final estimator measuring the classification performance of the logistic regression. In this application, X=(Z, Y) with Z and Y being the predictor and a binary outcome of interest, respectively, ψ̂(D)=β̂(D), and L(D_test, ψ)=1/ñ_test, 0ñ_test, 1∑_X_i∈ D_test(0)∑_X_j∈ D_test(1) I(ψ'Z̃_i<ψ'Z̃_j ). Remark 1 In evaluating the performance of the logistic regression model for predicting binary outcomes, we may choose to use the entire ROC curve to measure the model performance. Since the construction of a stable ROC curve requires sufficient number of cases, i.e, observations with Y_i=1, and controls, i.e., observations with Y_i=0, one may want to construct the ROC curve with as many observations from testing set as possible. In particular, one can implement the K-fold cross-validation, i.e., randomly splitting the observed dataset D in to K parts of approximately equal sizes: D=D_1∪ D_2 ∪⋯ D_K. Let β̂(D^(-k)) be the regression coefficient estimated based on observations not in the kth part of the observed data. We may then construct the predicted risk score for patients in the kth part as β̂(D^(-k))'Z_i. Cycling through k=1,⋯, K, we can obtain a predicted risk score for every patient as R̂_i( D)={∑_k=1^K β̂(D^(-k))I(X_i∈ D_k)}'Z_i, i=1, ⋯, n, where D represents the particular division of the observed dataset into K parts. The ROC curve can then be calculated as ROC(u | D)=Ŝ_1 D{Ŝ_0 D^-1(u)}, where Ŝ_g D(·) is the empirical survival function of {R̂_i( D)| Y_i=g, i=1, ⋯, n}, g∈{0, 1}, depending on the particular division of D. In cross-validation, we may repeatedly split the observed dataset D into K parts and obtain the corresponding cross-validated ROC curve ROC(u| D), and the sample average of those resulting ROC curves becomes the final estimator of the ROC curve measuring the predictive performance of the logistic regression. In this example, L(D_test, ψ)(u)=Ŝ_1{Ŝ_0^-1(u|ψ, D_test ) |ψ, D_test}, where Ŝ_g(·|ψ, D_test) is empirical survival function of {ψ'Z_i | Y_i=g, X_i∈ D_test}, g∈{0, 1}. Our proposed method will cover inference for estimator from K fold cross-validation as well. §.§.§ Application 3 In the third example, we are interested in developing a precision medicine strategy and evaluating its performance in a randomized control setting. Specifically, the precision medicine strategy here is a binary classification rule to recommend a treatment to a patient based on his or her baseline characteristics to maximize the treatment benefit for individual patient as well as in a patient population of interest. Before applying this recommendation to clinical practice, it is important to estimate the uncertainty of the treatment effect in the identified subgroup who would be recommended for the treatment, to make sure the anticipated stronger treatment effect is real. There are many different ways of constructing such a treatment recommendation classifier <cit.>. For example, one may first construct a individualized treatment response (ITR) score by minimizing a loss function based on a training dataset D_train, ∑_X_i∈ D_train{Y_i-γ'Z̃_i-(G_i-π)β'Z̃_i }^2 , where X_i=(Z_i,G_i,Y_i), Y_i is the response of interest with a higher value being desirable, G_i ∈{0, 1} is a binary treatment indicator and independent of the baseline covariate Z_i (i.e., the treatment is randomly assigned to patients in the training set), and π=P(G_i=1). 
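A minimal sketch of how the ITR working model just displayed can be fit is shown below: minimizing the loss over (γ, β) is an ordinary least-squares problem on the augmented design [Z̃, (G-π)Z̃]. This is not the authors' code; the helper names are ours, π is estimated by the observed treatment fraction rather than taken as known, and the subgroup evaluation mirrors the statistics Δ_1 and Δ_0 defined below.

```python
import numpy as np

def fit_itr(Z, G, Y):
    """Minimize sum {Y - gamma'Z_tilde - (G - pi) beta'Z_tilde}^2 by ordinary least squares
    on the augmented design [Z_tilde, (G - pi) Z_tilde]; returns (gamma_hat, beta_hat)."""
    n, p = Z.shape
    pi = G.mean()                          # in a randomized trial pi = P(G = 1); estimated here
    Zt = np.column_stack([np.ones(n), Z])
    design = np.column_stack([Zt, (G - pi)[:, None] * Zt])
    coef = np.linalg.lstsq(design, Y, rcond=None)[0]
    return coef[: p + 1], coef[p + 1 :]    # gamma_hat, beta_hat

def subgroup_effect(Z, G, Y, beta, recommend_treat=True):
    """Observed treatment effect among patients with beta'Z_tilde > 0 (or <= 0)."""
    Zt = np.column_stack([np.ones(len(Y)), Z])
    mask = (Zt @ beta > 0) if recommend_treat else (Zt @ beta <= 0)
    y, g = Y[mask], G[mask]
    return y[g == 1].mean() - y[g == 0].mean()   # assumes both arms are represented in the subgroup

# Illustrative cross-validated use: fit on D_train, then evaluate both subgroups on D_test.
```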
Let the resulting minimizers of γ and β be γ̂(D_train) and β̂(D_train), respectively <cit.>. Note that we have the decomposition that E{(Y-γ'Z̃-(G-π)β'Z̃)^2} = P(G=1)E{(Y^(1)-γ'Z̃-(1-π)β'Z̃)^2}+P(G=0)E{(Y^(0)-γ'Z̃+πβ'Z̃)^2} = E[ (Y-γ'Z̃)^2 +π(1-π) (Y^(1)-Y^(0)-β'Z̃)^2-π(1-π)(Y^(1)-Y^(0))^2 ], where Y^(g) is the potential outcome if the patient receives treatment g ∈{0, 1}, and the observed outcome Y=Y^(1)G+Y^(0)(1-G). Therefore, minimizing the original loss function with respect to β amounts to approximating the conditional average treatment effect (CATE), Δ(z)=E(Y^(1)-Y^(0)| Z=z), via (1, z')β̂(D_train), a linear function of Z=z, because the solution β(D_train) minimizes a loss function, whose population limit is E{(Y^(1)-Y^(0)-β'Z̃)^2}. The constructed ITR score is Δ(z| D_train)=(1, z')β̂(D_train), which can be used to guide the treatment recommendation for individual patient. There are other ways of constructing the ITR score approximating the CATE. Once an estimated ITR score is available, one may recommend treatment G=1 to patients whose Δ(Z| D_train)>c_0 and treatment G=0 to patients whose Δ(Z| D_train)≤ c_0, where c_0 is a constant reflecting the “cost” of the treatment. Here, we choose c_0=0 for simplicity. In the testing set, one may evaluate the performance of this recommendation system by estimating the average treatment effect among the subgroup of patients recommended to receive the treatment G=1, i.e, D_test^(1)={X ∈ D_test|Δ(Z| D_train)>0} and among the subgroup of patients recommended to receive the treatment G=0, i.e., D_test^(0)={X ∈D_test|Δ(Z| D_train)≤ 0}. Specifically, we may consider the observed treatment effects Δ_1(D_train,D_test)= ∑_X_i∈D̂_test^(1)Y_iG_i/∑_X_i∈D̂_test^(1) G_i-∑_X_i∈D̂_test^(1)Y_i(1-G_i)/∑_X_i∈D̂_test^(1) (1-G_i) and Δ_0(D_train,D_test)= ∑_X_i∈D̂_test^(0)Y_iG_i/∑_X_i∈D̂_test^(0) G_i-∑_X_i∈D̂_test^(0)Y_i(1-G_i)/∑_X_i∈D̂_test^(0) (1-G_i) If Δ_1(D_train, D_test) takes a “large" positive value and Δ_0(D_train, D_test) takes a “large” negative value, (in other words, the treatment effect is indeed estimated to be greater among those who are recommended to receive the treatment based on the constructed ITR score), then we may conclude that Δ(Z| D_train)>0, is an effective treatment recommendation system. In cross-validation, we may repeatedly divide the observed data set D into training and testing sets, (D_train, D_test), and obtain the corresponding cross-validated treatment effect difference Δ_g(D_train, D_test), g∈{0, 1}. In the end, the sample average of those resulting cross-validated treatment effect estimators is our final cross-validation estimator measuring the performance of the treatment recommendation system. In this application X=(Z,G,Y) with Z, G and Y being predictors, treatment assignment indicator, and a binary outcome, respectively, ψ̂(D)=β̂(D), and L(D_test, ψ)=∑_X_i∈ D_testY_iG_iI(ψ'Z_i>0)/∑_X_i∈ D_test G_i I(ψ'Z_i>0)-∑_X_i∈ D_testY_i(1-G_i)I(ψ'Z_i>0)/∑_X_i∈ D_test(1-G_i)I(ψ'Z_i>0) or ∑_X_i∈ D_testY_iG_iI(ψ'Z_i≤ 0)/∑_X_i∈ D_test G_i I(ψ'Z_i≤ 0)-∑_X_i∈ D_testY_i(1-G_i)I(ψ'Z_i≤ 0)/∑_X_i∈ D_test(1-G_i)I(ψ'Z_i≤ 0). §.§ The Estimand of Cross-validation The first important question is what population parameter the cross-validation procedure estimates. As discussed in <cit.>, there are several possibilities. 
The first obvious population parameter is Err(D_n)= lim_N →∞ L(D̃_N, ψ̂(D_n)), where D_n is the training set of sample size n and D̃_N is a new independent testing set of sample size N drawn from the same distribution as the training dataset D_n. This parameter depends on the training set D_n, and directly measures the performance of the prediction model obtained from the observed data D_n in a future population. The second population parameter of interest is Err_n=E{Err(D_n)}, the average performance of prediction models trained based on “all possible” training sets of size n sampled from the same distribution as the observed dataset D_n. The subscript n emphasizes the fact that this population parameter only depends on the sample size of the training set D_n. While Err(D_n) is the relevant parameter of interest in most applications, where we want to know the future performance of the prediction model at hand, Err_n is a population parameter reflecting the expected performance of prediction models trained via a given procedure. The prediction performance of the model from the observed dataset D_n can be better or worse than this expected average performance. It is known that cross-validation targets the evaluation of a training procedure rather than the particular prediction model obtained from that procedure. Specifically, the cross-validation estimator Err^CV_m actually estimates Err_m in the sense that Err_m^CV≈ Err_m+ϵ and Err(D_n)≈ Err_n+ζ, where m is the sample size of the training set used in constructing the cross-validation estimate, i.e., the dataset D_n is repeatedly divided into a training set of size m and a testing set of size (n-m) in cross-validation. Here, ϵ and ζ are two mean zero random noises that are oftentimes nearly independent. In many cases Err_n≈ Err_m, when m is not substantially smaller than n. If we ignore their differences, then Err^CV_m can also be viewed as an approximately unbiased estimator for Err(D_n), because Err^CV_m-Err(D_n)=(Err_m-Err_n)+(ϵ-ζ), whose mean is approximately zero. On the other hand, the variance of Err^CV_m-Err(D_n) tends to be substantially larger than the variance of Err^CV_m-Err_m, since ϵ and ζ are often independent and the variance of ζ is nontrivial relative to that of ϵ. This is analogous to the phenomenon that the sample mean of observed data is a natural estimator of the population mean. It can also be viewed as an unbiased “estimator” of the sample mean of a set of future observations, because the expectation of the sample mean of future observations is the same as the population mean, which can be estimated by the sample mean of the observed data. In this paper, we take Err_m as the population parameter of interest, because Err(D_n) is approximately Err_m plus a random noise ζ, which may be independent of the cross-validation estimate. In other words, we take the view that the cross-validation estimate evaluates the average performance of a training procedure rather than the performance of a particular prediction model. As the training sample size goes to infinity, we write Err=lim_n→∞Err_n. When n is sufficiently large, Err≈ Err_n ≈ Err(D_n). In the following, we use a simple example to demonstrate the relationship among the aforementioned quantities. §.§.§ A Toy Example Consider the linear prediction model in section <ref>. Suppose that the covariate Z_i∼ N(0, I_10) and the response Y_i=α_0+β_0'Z_i+ϵ_i, i=1,⋯, n, where I_10 is the 10 by 10 identity matrix, α_0=0, β_0=(1, 1, 1, 1, 0, 0, 0, 0, 0, 0)', ϵ_i∼ N(0, 1) and n=90. 
We were interested in constructing a prediction model via fitting a linear regression model and evaluating its performance in terms of the mean absolute prediction error. To this end, for each simulated dataset D_n={X_i=(Z_i, Y_i), i=1,⋯, n}, we estimated the regression coefficients of the linear model by the ordinary least squares method and denoted the estimators of α_0 and β_0 by α̂(D_n) and β̂(D_n), respectively. Then we calculated the true mean absolute prediction error as the expectation of |G|, where G∼ N(α̂(D_n), 1+‖β̂(D_n)-β_0‖_2^2) is a random variable. This expectation was Err(D_n), the prediction error of the model trained based on dataset D_n in a future population. We also constructed the cross-validation estimate of the prediction error by repeatedly splitting D_n into a training set of size m=80 and a testing set of size n-m=10. The resulting estimator of the prediction error was Err_m^CV. Repeating these steps, we obtained 1000 pairs of Err(D_n) and Err_m^CV from simulated datasets. The empirical average of the 1000 Err(D_n)s was an approximation to Err_n=E{Err(D_n)}. Figure <ref> is the scatter plot of Err(D_n) vs. Err^CV_m based on 1000 simulated datasets. It was clear that Err^CV_m and Err(D_n) were almost independent but shared a similar center. Specifically, the empirical average of Err^CV_m was 0.859, and the empirical average of Err(D_n) was 0.851. Therefore, the cross-validated error estimator Err^CV_m can be viewed as a sensible estimator for Err_n=E{Err(D_n)}≈ 0.851, and more precisely an unbiased estimator for Err_m, whose value was estimated as 0.861 using the same simulation described above. Note that (m, n)=(80, 90), and n and m are fairly close. The distribution of the cross-validated error estimators, along with Err_m and Err_n, is plotted in Figure <ref>, suggesting that the small difference between Err_80 and Err_90 is negligible relative to the variation of the cross-validation estimator Err^CV_m itself. In addition, Err^CV_m can also be thought of as a “prediction” of Err(D_n), since the latter was approximately Err_n plus an independent mean zero “measurement error”. §.§ Statistical Inference on Err_m In this section, we aim to construct a valid confidence interval for Err_m based on the cross-validated estimate. First, we define the cross-validated estimate with repeated training and testing splits as Err^CV_m = E[L{D_test^b, ψ̂(D_train^b)}], where the expectation is with respect to the random division into training and testing sets. One may anticipate that Err^CV_m is a “smooth" functional with respect to the empirical distribution of the observed data D_n, because each individual observation's contribution to the final estimator is “averaged” across different training and testing divisions. Therefore, we expect that Err^CV_m is a root-n regular estimator of Err, i.e., √(n)(Err^CV_m-Err) → N(0, σ_D^2) in distribution as n, the sample size of D_n, goes to infinity and lim_n→∞ m/n ∈ (0, 1). Indeed, under the following regularity conditions, this weak convergence holds in general. C1 ψ̂-ψ_0 has a first order expansion, i.e., ψ̂-ψ_0=1/n∑_i=1^n ξ(X_i)+o_p(n^-1/2), where ψ̂ is a consistent estimator for a population parameter ψ_0 based on D_n={X_1,⋯, X_n}, and ξ(X_i), i=1,⋯, n, are independent mean zero random elements with a finite variance. C2 The process L(D_n, ψ)-E{L(D_n, ψ)} is stochastically equicontinuous in ψ, i.e., √(n)[L(D_n, ψ_1)-E{L(D_n, ψ_1)}]-√(n)[ L(D_n, ψ_2)-E{L(D_n, ψ_2)}]=o_p(1) as ‖ψ_2-ψ_1‖=o(1), where ‖·‖ is an appropriate norm measuring the distance between ψ_1 and ψ_2. 
<cit.> C3 The random sequence √(n)[L(D_n, ψ_0)-E{L(D_n, ψ_0)}] =1/√(n)∑_i=1^n η(X_i)+o_p(1) converges to a mean zero Gaussian distribution, where η(X_i), i=1,⋯, n, are independent mean zero random elements with a finite variance. C4 There exists a linear functional l_ψ(·) indexed by ψ such that E{L(D_n, ψ_2)}-E{L(D_n, ψ_1)}=l_ψ_1(ψ_2-ψ_1)+o(‖ψ_2-ψ_1‖). In the Appendix, we have provided an outline to show that √(n){L{D_test^b, ψ̂(D_train^b)}-Err} = 1/√(n)∑_i=1^n [{I(X_i∈ D_test)/(1-π̂)}η(X_i)+{I(X_i∈ D_train)/π̂} l_ψ_0{ξ(X_i)}] + o_p(1), where π̂=m/n. After taking the expectation with respect to the random training and testing division, i.e., the indicators {I(X_i ∈ D_train), i=1, ⋯, n}, and noting that E{I(X_i∈ D_test)/(1-π̂)}=E{I(X_i∈ D_train)/π̂}=1, we obtain √(n){Err^CV_m-Err}=1/√(n)∑_i=1^n [η(X_i)+l_ψ_0{ξ(X_i)}]+o_p(1), which converges weakly to a mean zero Gaussian distribution with a variance of σ_D^2= E([η(X)+l_ψ_0{ξ(X)}]^2). For a finite sample size n, we expect that the difference between Err_m and Err may be negligible. Specifically, the distribution of √(n)(Err^CV_m-Err_m) can be approximated by N(0, σ_D^2), if the following condition holds. C5 E[lim_N→∞ L(D̃_N, ψ_n)]-Err=o_p(n^-1/2), where ψ_n is the estimate based on a training set of size n, and the expectation is with respect to ψ_n. This assumption is in general true, since L(D̃_N, ψ_n) often converges to a smooth functional of ψ_n as N→∞, and the expectation of the limit can be approximated by Err=lim_N→∞ L(D̃_N, ψ_0) with an approximation error bounded by O{|E(ψ_n-ψ_0)|}, which is of order o(n^-1/2) following condition C1. Since E{Err^CV_m} is closer to Err_m than to Err in the finite-sample setting, the Gaussian approximation for √(n)(Err^CV_m-Err_m) is likely to be more accurate. As a consequence, an asymptotic confidence interval for Err_m can be constructed as [Err^CV_m-1.96 σ̂_D/√(n), Err^CV_m+1.96 σ̂_D/√(n)], where σ̂_D is a consistent estimator of the standard error σ_D. However, in general it is difficult to obtain such a consistent variance estimator when complex procedures such as lasso-regularized regression or random forests are used for constructing the prediction model, as in the three examples discussed above. An appealing solution is to use the nonparametric bootstrap to estimate σ_D^2 <cit.>. The rationale is that, under the same set of assumptions, √(n){Err^CV*_m-Err_m}=1/√(n)∑_i=1^n [η(X_i)+l_ψ_0{ξ(X_i)}]W_i+o_P^*(1), where Err^CV*_m is the cross-validated estimator based on the bootstrapped data D^*_n, (W_1, ⋯, W_n)∼ Multn(n, (1/n, ⋯, 1/n)), W_i is the number of copies of observation X_i in D^*_n, and P^* is the product probability measure with respect to the random data and the independent random weights (W_1,⋯, W_n). Therefore, conditional on the observed data D_n, √(n){Err^CV*_m-Err^CV_m}=1/√(n)∑_i=1^n [η(X_i)+l_ψ_0{ξ(X_i)}](W_i-1)+o_P^*(1) converges weakly to a mean zero Gaussian distribution with a variance of σ̂^2_D=1/n∑_i=1^n[η(X_i)+l_ψ_0{ξ(X_i)}]^2=σ_D^2+o_P(1). Operationally, the following naive bootstrap procedure is expected to generate a consistent variance estimator of σ_D^2, σ̂_D^2. The variance σ_D^2 can be estimated by n times the empirical variance of B_BOOT bootstrapped cross-validation estimates {Err_1,m^CV*, ⋯, Err_B_BOOT,m^CV*}. However, there are several concerns in this naive resampling procedure, which may result in poor performance in practice. * The bootstrap procedure samples observations with replacement and results in potential duplicates of the same observation in the bootstrapped dataset. 
Naively splitting the bootstrapped dataset into training and testing sets may result in overlap between them, which can induce nontrivial optimistic bias in evaluating the model performance. If we apply the naive bootstrap method to analyze the Toy Example described in section <ref>, then the empirical average of bootstrapped cross-validation estimates Err_b, m^CV* was downward biased in comparison with Err_m^CV by 0.80 standard deviation of cross-validation estimates Err_m^CV. The practical influence of overlap on the variance estimation is less clear but can be potentially nontrivial. * The training set of size m in the bootstrapped dataset D^* contains substantially fewer than m distinct observations, which reduces the “effective" sample size for training a prediction model and induces a downward bias in evaluating the average model performance. This downward bias may be smaller or greater than the optimism bias induced by the overlap between training and testing sets depending on specific applications, but is undesirable in general. * To obtain a cross-validated estimate for each bootstrap sample, one needs to perform the cross-validation multiple times to reduce the Monte-Carlo variation due to random training/testing divisions, i.e., B_CV needs to be sufficiently big such as ≥ 200. In addition, the number of bootstraps also can not be too small. The conventional recommendation for estimating variance of a statistic using bootstrap is to set the number of bootstraps B_BOOT∼ 400-1000. In such a case, one needs to train and evaluate the prediction model more than 80,000 times and the corresponding computational burden can be prohibitive for complex training algorithm. In this paper, we present a modified bootstrap procedure to overcome aforementioned difficulties. First, in implementing cross-validation on a bootstrapped dataset, we view bootstrapped data as a weighted samples of the original data, i.e., observation X_i is weighted by a random weight W_i, which is the number of this observation selected in this bootstrap iteration. In cross-validation, we first split the original dataset into training and testing sets, D_n=D_train∪ D_test, and bootstrapped training and testing sets denoted by D^*_train and D^*_test, respectively, are then constructed by collecting all observations in D_train and D_test, respectively, but with their respective bootstrap weights. Since D_train and D_test have no overlap, D^*_train∩ D^*_test=ϕ as well. Therefore, we don't allow that the same observation used in both training and testing. One consequence is that the sample sizes of D^*_train and D^*_test are not fixed across different bootstrap samples. But their average sample sizes remain the same as those for D_train and D_test. Second, we note that the effective sample size of the training set based on the bootstrapped data can be substantially smaller than m. Specifically, the number of distinct observations in D_train^* is 0.632m on average <cit.>. Therefore, it is desirable to increase the number of distinct observations of D_train^* by allocating more observations to D_train, which is used to generate D_train^* in a bootstrapped dataset. Ideally, we may want to increase the sample size of D_train such that the number of distinct observations used for training is close to m in bootstrapped cross-validation, i.e., # D_train^*≈ m, which requires to increase the sample size in D_train from m to m/0.632. 
On the other hand, the sample size of D_test and thus D_test^* would be reduced by using a larger training set in the bootstrapped cross-validation, and such a large reduction in the testing sample size may increase the variance of the resulting estimate. In summary, while we want to increase the sample size of the training set to reduce the bias in estimating the model performance in bootstrapped cross-validation, due to the fact that fewer distinct observations are used to train the prediction model, we also want to limit the reduction in the number of testing samples so that the variance of the cross-validation estimate is not greatly affected by this adjustment of the training and testing sample sizes. A compromise is to find an adjusted sample size m_adj by minimizing the loss function {m_adj/(0.632m)-1}^2+λ_0 {(n-m)/(n-m_adj)-1}^2, where the first and second terms control, respectively, the closeness of the “effective" sample size in the bootstrapped training set to m and the relative change in the sample size of the testing set after the adjustment. Here, λ_0 controls the relative importance of these two tasks in determining the final adjustment. In our limited experience, we have found that the performance of the resulting resampling procedure is not very sensitive to the selection of this penalty parameter within a wide range, and we recommend setting λ_0=1-0.632=0.368 in practice. More importantly, to alleviate the computational demand of the bootstrap procedure, we propose the following algorithm: The rationale is that the center of θ^*_bk, θ_0+ϵ_b^*, is approximately the cross-validation estimate based on the bootstrapped dataset D^*_b as the number of random training and testing divisions increases to infinity. Under this framework, σ_BT^2 measures the between-bootstrap variance, which is the bootstrap variance estimator we aim to calculate, and τ_0^2 measures the within-bootstrap variance, i.e., the variance due to random training and testing divisions. The empirical variance of {θ̅^*_1, ⋯, θ̅^*_B_BOOT} based on a very large B_CV is approximately unbiased for σ_BT^2, corresponding to the naive bootstrap procedure. However, this naive approach is very inefficient, and there is no need to choose a very large B_CV to eliminate the Monte-Carlo variance in estimating the cross-validation prediction error for every bootstrapped dataset. Alternatively, an accurate moment estimate for the variance component in the random effects model can be constructed with a moderate B_CV, say 10-20, and a reasonably large B_BOOT, say 400. This can substantially reduce the computational burden from 80,000 model trainings to 8,000 model trainings. Remark 2 The total number of model trainings is B_BOOT× B_CV. A natural question is how to efficiently allocate the number of bootstraps and the number of cross-validations per bootstrap given the total number of model trainings. The variance estimator σ̂_BT^2 is itself a random statistic with a variance <cit.>, which can be approximated by 2(σ̂_BT^2+B_CV^-1τ̂_0^2)^2/(B_BOOT-1)+2(B_CV^-1τ̂_0^2 )^2/{B_BOOT(B_CV-1)}, where τ̂_0^2= [1/{B_BOOT(B_CV-1)}]∑_b=1^B_BOOT∑_k=1^B_CV(θ^*_bk-θ̅^*_b)^2 is an estimator for τ_0^2. It is not difficult to show that, fixing B_BOOT× B_CV=N_T, the variance is minimized when B_BOOT≈(σ̂^2_BT/τ̂^2_0) N_T and B_CV≈τ̂^2_0/σ̂^2_BT. 
This suggests that the optimal number of cross-validations per bootstrap should be approximately constant; its value may depend on the specific problem but does not change with the budget for the total number of model trainings. Normally, τ̂^2_0 can be substantially greater than σ̂_BT^2, and B_CV should be set close to their ratio. In the toy example, this ratio is approximately 20. On the other hand, we can always increase the number of bootstraps to improve the precision in approximating the bootstrap variance estimator. Remark 3 The number of distinct observations used in training the bootstrapped prediction model is smaller than m_adj. Specifically, the number of distinct observations in the bootstrapped training set is on average only 0.632× m_adj. Therefore, there is a tendency for the “effective total sample size" in the bootstrap procedure to be smaller than n, which may cause an upward bias in estimating the variance of Err_m^CV using the bootstrap variance estimator σ̂_BT^2. To correct this bias, we can consider an adjusted variance estimator (σ̂_m,adj^CV)^2=σ̂_BT^2{(n-m_adj+0.632m_adj)/n}=σ̂_BT^2{(n-0.368m_adj)/n}, where the factor (n-0.368m_adj)/n is introduced to account for the reduced sample size in the bootstrapped training set. Note that we do not recommend a similar adjustment for the sample size of the testing set, even though the number of distinct observations in the testing set is also smaller than (n-m_adj) in the bootstrap, because in general this reduction in the number of distinct observations does not affect the variance estimation. Sometimes, training the prediction model can be very expensive in terms of computation, and it may not be feasible to conduct even the accelerated bootstrap. For example, one may only be able to train the prediction model 50-100 times. In such a case, regardless of the selection of B_BOOT and B_CV, the Monte-Carlo error in estimating the bootstrap variance may not be negligible. Consequently, √(n)(Err_m^CV-Err_m)/σ_m^CV or √(n)(Err_m^CV-Err_m)/σ_m,adj^CV may not follow a standard normal distribution. On the other hand, if we can empirically approximate this distribution, then one can still construct a 95% confidence interval for Err_m based on (Err_m^CV, σ_m^CV). One analogy is that the confidence interval for the population mean of a normal distribution can be constructed using the t-distribution rather than the normal distribution in the small-sample setting. With a slight abuse of notation, let σ_m^CV(∞) be the bootstrap variance estimator when both B_BOOT and B_CV→∞, i.e., the bootstrap variance estimator without any Monte-Carlo error, and we have √(n)(Err_m^CV-Err_m)/σ_m^CV=√(n)(Err_m^CV-Err_m)/σ_m^CV(∞)×σ_m^CV(∞)/σ_m^CV. The first term on the right-hand side of (<ref>) should be approximated well by a standard Gaussian distribution since the “ideal" bootstrap variance estimator is used. The second term is independent of the first term and reflects the Monte-Carlo variation of approximating σ_m^CV(∞) via a small number of bootstrap and cross-validation iterations. To approximate the distribution of this ratio, we can bootstrap the variance estimator based on fitting the random effects model. This observation motivated the additional steps presented in algorithm <ref>, inserted after lines (<ref>)-(<ref>) of algorithm <ref>, when very small B_BOOT and B_CV are used. The resulting confidence interval is expected to be wider than that generated from algorithm <ref>, since c_0.975>1.96. 
However, this is a necessary price to pay for using a small number of bootstrap and cross-validation iterations. Note that although two bootstraps have been used in the modified algorithm, the increase in computational burden is minimal. These two bootstrap steps are not nested and the second bootstrap only involves repeated estimation of the variance component of a simple random effects model, which can be completed relatively fast, especially with small or moderate B_BOOT and B_CV. The performance of this method depends on the normal approximation to the distribution of (Err_m^CV-Err_m)/σ_m^CV(∞) and the bootstrap approximation to the distribution of σ_m^CV(∞)/σ_m^CV. The second bootstrap is a calibration step for producing a confidence interval of Err_m with a coverage level comparable to that based on (Err_m^CV-Err_m)/σ_m^CV(∞). If the latter yields a confidence interval which is either too conservative or too liberal, then the new confidence interval based on additional bootstrap calibration would suffer the same limitation. Operationally, one may choose for example (B_BOOT, B_CV)=(20, 25) or (20, 40). A slightly bigger value for B_CV can prevent a negative or zero variance component estimator in fitting the random effects model. § APPLICATIONS §.§ Application 1 §.§.§ Theoretical Properties In the first example, we are interested in estimating the mean absolute prediction error from a linear regression model via cross-validation. In this case, it is not difficult to verify conditions C1-C5 under reasonable assumptions. For example, under the condition that the matrix A_0=E(Z̃Z̃') is non-singular, the least squares estimator of the regression coefficient in the linear regression model, β̂, converges to a deterministic limit β_0 as n →∞, and √(n)(β̂-β_0)=1/√(n)∑_i=1^n A_0^-1Z̃_i(Y_i-β_0'Z̃_i)+o_p(1), and thus C1 is satisfied. Second, the class of functions {|y-β'z̃| |β∈Ω} is Donsker, where Ω⊂ R^p+1 is a compact set and p=dim(Z). This fact suggests that the empirical process U(β)=√(n)[L(D_n, β)-E{|Y-β'Z̃|}] is stochastically continuous in β, where L(D_n, β)=1/n∑_i=1^n |Y_i-β'Z̃_i|. As a consequence, condition C2 is satisfied. It is clear that E(|Y-β'Z̃|) is differentiable in β in a small neighborhood of β_0, if the random variable β_0'Z̃ has a differentiable density function, which suffices for condition C3. The central limit theorem implies that √(n)[1/n∑_i=1^n |Y_i-β_0'Z̃_i|-E{|Y-β_0'Z̃|}] converges weakly to a mean zero Gaussian distribution as n →∞ under the assumption that E{(Y-β_0'Z̃)^2} is finite. Lastly, it is obvious that E|Y-β̂'Z̃|-E|Y-β_0'Z̃|=O{|E(β̂)-β_0|}=o_p(n^-1/2) and C5 is also satisfied. Therefore, we expect that (Err_m^CV-Err_m) can be approximated by a mean zero Gaussian distribution whose variance can be consistently estimated by the proposed bootstrap method. Note that we don't need to assume that the linear regression model is indeed correct for the relationship between Y_i and Z_i. §.§.§ Simulation Study In the numerical study, we first considered a simple setting, where Z_i followed a 10-dimensional standard multivariate Gaussian distribution and a continuous outcome Y_i was generated via the linear model Y_i=β_0'Z̃_i+ϵ_i, where β_0=(0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0)' and ϵ_i∼ N(0, 1). The regression coefficient was selected such that the proportion of the variation explained by the true regression model was 80%.
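For concreteness, a sketch of this data-generating mechanism and of the random-split cross-validation estimator of the mean absolute prediction error is given below; the helper names are ours and details such as the number of splits are illustrative.

import numpy as np

def simulate_data(n, rng, p=10, beta=(0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0)):
    Z = rng.standard_normal((n, p))
    Zt = np.column_stack([np.ones(n), Z])                  # add intercept column
    Y = Zt @ np.asarray(beta, dtype=float) + rng.standard_normal(n)
    return Zt, Y

def cv_mae(Zt, Y, m, B_CV, rng):
    # average mean absolute prediction error over B_CV random training/testing splits
    n = len(Y)
    errs = []
    for _ in range(B_CV):
        idx = rng.permutation(n)
        tr, te = idx[:m], idx[m:]
        beta_hat, *_ = np.linalg.lstsq(Zt[tr], Y[tr], rcond=None)   # least squares fit on the training set
        errs.append(np.mean(np.abs(Y[te] - Zt[te] @ beta_hat)))
    return float(np.mean(errs))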
We let the sample size n=90 and considered the cross-validation estimate Err_m^CV for the mean absolute prediction error Err_m, m∈{40, 45, 50, 55, 60, 65, 70, 75, 80}. The true value of Err_m was obtained by averaging the empirical mean absolute prediction error of 5,000 estimated prediction models in an independent testing set consisting of 200,000 generated observations. Each prediction model was trained in a simulated training set of size m. Both training and testing sets were generated according to the linear regression model specified above. Next, we generated 1,000 independent datasets, D_n, of size n. For each dataset, we constructed the cross-validation estimator of Err_m. To this end, we divided the simulated dataset D_n into training and testing sets B_CV=400 times and calculated the resulting Err_m^CV as the average of the B_CV obtained mean absolute prediction errors in the testing set. We also implemented the fast bootstrap method to estimate the variance of the cross-validated estimates with B_BOOT=400 bootstraps and a relatively small number of cross-validations for each bootstrap, i.e., B_CV=20. Thus, constructing one confidence interval required 8,000 model fits. Based on 1,000 simulated datasets, we calculated the empirical mean and standard deviation of Err_m^CV, and the empirical coverage level of 95% confidence intervals based on the bootstrap variance estimator with and without the sample size adjustment. The results were reported in Table <ref>. Next, we examined the performance of bootstrap calibration in algorithm <ref> for constructing 95% confidence intervals with a very small number of bootstraps. In particular, we set (B_BOOT, B_CV)=(20, 25), and the results are summarized in terms of the empirical coverage probability of constructed confidence intervals with and without the bootstrap calibration. In this setting, constructing a confidence interval requires only 500 model fits, in contrast to the 8,000 fits required by the proposed bootstrap procedure and the 80,000 fits required by the regular bootstrap procedure. The results can be found in Table <ref>. The empirical bias of Err_m^CV in estimating Err_m was almost zero, relative to either the standard deviation of Err_m^CV or the true value of Err_m. The empirical coverage level of the 95% confidence interval based on B_BOOT=400 bootstraps was fairly close to its nominal level as expected after the sample size adjustment. The confidence intervals were slightly conservative without the adjustment for the “effective” sample size. Ignoring the difference between Err_m and Err_n for (m, n)=(80, 90), we also examined the empirical coverage level of the constructed confidence interval with respect to Err(D_n), which was the parameter of interest in <cit.>. The empirical coverage level was 92.5%. Therefore, in this case, the constructed confidence interval based on the bootstrap method not only covered Err_m with sufficient probability as proposed, but also Err(D_n). In this setting, the standard deviation of Err_m^CV, i.e., ϵ, was 0.073, while the standard deviation of Err(D_n), i.e., ζ, was much smaller: 0.025. See Figure <ref> for a graphical representation of this phenomenon. Therefore, the coverage levels of the constructed confidence intervals for Err_n and Err(D_n) were similar.
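Putting the pieces together, the fast bootstrap variance estimate used here can be sketched as follows, reusing the cv_mae and variance_components helpers sketched above; the resampling scheme follows the description in the text, with the adjusted training size m_adj treated schematically.

import numpy as np

def fast_bootstrap_se(Zt, Y, m_adj, B_BOOT, B_CV, rng):
    n = len(Y)
    theta = np.empty((B_BOOT, B_CV))
    for b in range(B_BOOT):
        star = rng.integers(0, n, size=n)                  # resample D_n with replacement
        Zb, Yb = Zt[star], Y[star]
        for k in range(B_CV):
            theta[b, k] = cv_mae(Zb, Yb, m_adj, 1, rng)    # one random split per inner iteration
    sigma2_BT, _ = variance_components(theta)
    return sigma2_BT ** 0.5                                # bootstrap standard error of Err_m^CV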
Lastly, one may construct the confidence interval for Err(D_n) using the nested cross-validation method proposed in <cit.>, since the estimator of mean absolute prediction error in the testing set was a simple average of random statistics. In the simulation, the empirical coverage level of 95% confidence intervals based on nested cross-validation was 91% for Err(D_n). It was interesting to note that its coverage level for Err_m, m=80, was 91.8%, quite close to that for Err(D_n). For confidence intervals constructed with a small number of bootstraps (B_BOOT=20), the empirical coverage level was substantially lower than that based on B_BOOT=400, if no calibration for the confidence interval was made (Table <ref>). After the additional bootstrap calibration outlined in algorithm <ref>, however, the empirical coverage level became similar to or higher than that based on a large number of bootstraps. As a price of recovering the proper coverage level, the median width of calibrated confidence intervals increased 11-37% depending on the training size m. In the second set of simulations, we let p=1000 corresponding to a high dimensional case. In order to construct a prediction model in this case, we used the lasso regularization <cit.>, i.e., minimizing the loss function 1/n∑_i=1^n (Y_i-α-β_Z'Z_i)^2+λ_0|β_Z|_1, where the penalty parameter λ_0 is fixed at 0.20 to save computational time. Letting the minimizer of the regularized loss be denoted by α̂ and β̂_Z, the outcome of a future patient with covariate Z was predicted by β̂'Z̃, where β̂=(α̂, β̂_Z')'. For each of 1,000 simulated datasets, we computed Err_m^CV, its variance estimator σ̂_m^CV via proposed bootstrap method, and the true prediction error Err(D_n) as in the low-dimensional case. We reported the true value of Err_m, empirical mean and standard deviation of Err_m^CV, and the empirical coverage level of the 95% confidence interval based on bootstrap variance estimate in Table <ref>. In addition, we examined the performance of the confidence intervals constructed via a small number of bootstraps, i.e., (B_BOOT, B_CV)=(20, 25). The corresponding results were summarized in Table <ref>. Similar to the low dimensional case, the empirical bias of Err_m^CV in estimating Err_m was almost zero and the empirical coverage level of the 95% confidence interval was slightly higher than the nominal level of 95%. The over-coverage of confidence intervals based on σ_m, adj^CV was slightly smaller. The empirical coverage level of those confidence intervals was 98.8% with respect to Err(D_n). Part of the reason of the high coverage level in this setting was the high correlation between Err(D_n) and Err_m^CV, which was 0.40 (Figure <ref>). In the low dimensional setting, this correlation was almost zero. Similar findings have been reported in <cit.> as well. Lastly, the empirical coverage level of 95% confidence intervals based on nested cross-validation was 93.9% for Err(D_n) and 91.2% for Err_m, where (n, m)=(90, 80). Without the bootstrap calibration, the empirical coverage level of confidence intervals constructed with a small number of bootstrap iterations was lower than those with B_BOOT=400. After the bootstrap calibration, the empirical coverage level became similar to or higher than that based on a large number of bootstraps. The median width of calibrated confidence intervals increased 15-23% depending on the training size m. 
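The lasso prediction rule used in this high-dimensional setting could be implemented, for instance, with scikit-learn as sketched below; note that scikit-learn scales the squared-error term by 1/(2n), so alpha=λ_0/2 corresponds to the penalty λ_0=0.20 in the loss above (treat the exact mapping and the names as illustrative).

import numpy as np
from sklearn.linear_model import Lasso

def lasso_mae(Z_train, y_train, Z_test, y_test, lam=0.20):
    # scikit-learn minimizes ||y - a - Zb||^2 / (2n) + alpha * |b|_1, with an unpenalized intercept
    model = Lasso(alpha=lam / 2.0, fit_intercept=True, max_iter=10000)
    model.fit(Z_train, y_train)
    return float(np.mean(np.abs(y_test - model.predict(Z_test))))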
The lasso regularized estimator of the regression coefficient clearly does not follow a Gaussian distribution, and the theoretical justification for the Gaussian approximation of the cross-validated estimates provided for low dimensional setting is not applicable here. However, the empirical distribution of Err^CV_m was quite “Gaussian" with its variance being approximated reasonably well by the proposed bootstrap method. This phenomena can be visualized by the QQ plot of Err_m^CV with the smallest sample size of the training set, m=40, in Figure <ref>, where the role of lasso regularization was expected to be the biggest. It is clear that the Gaussian approximation holds well empirically for both low and high dimensional settings. One possible explanation was that the mean absolute prediction error is a smooth function of the testing data, and thus the cross-validation estimator Err^CV_m is still a regular root n estimator of Err_m. This observation suggests a broader application of the proposed method for constructing confidence intervals for Err_m. In summary, the proposed confidence interval has a reasonable coverage level but slightly conservative. Empirically, the interval can be viewed as confidence interval for both Err(D_n) and Err_m when m≈ n, although the procedure was designed for the latter. In this case, the confidence interval based on nested cross-validation also had a proper coverage level for both Err(D_n) and Err_m, even though the procedure was designed for the former. Furthermore, the bootstrap calibration can be used to maintain proper coverage level of confidence intervals constructed with a very small number of bootstraps at the cost of enlarging the resulting intervals. §.§.§ Real Data Examples The example is from the UCI machine learning repository. The dataset contains per capita violent crime rate and 99 prediction features for their plausible connection to crime in 1,994 communities after dropping communities and features with missing data. The crimes rate was calculated as the number of violated crimes per 100,000 residents in the community of interest. The violent crimes included murder, rape, robbery, and assault. The crime rate ranged from 0 to 1.0 with a median of 0.15. The predictors in the analysis mainly involved the community of interest, such as the percent of the urban population, the median family income, and law enforcement, such as per capita number of police officers, and percent of officers assigned to drug units. The objective of the analysis was to use these 99 community features to predict the violent crime rate. For demonstration purpose, we considered a subset consisting of first 600 observations as the observed dataset. First, we constructed the prediction model by fitting a linear regression model with lasso regularization, where the penalty parameter λ was fixed at 0.005 for convenience. We applied our method for m∈{60, 120, 180, 240, 300, 360, 420, 480, 540}, i.e., the proportion of the observations used for training varied from 10%, to 20%, 30%, ⋯, 90%. Based on 500 cross-validations, the cross-validation estimate of the mean absolute prediction error for different training sizes was 0.141, 0.121, 0.115, 0.113, 0.111, 0.110, 0.109, 0.109 and 0.108, respectively. The 95% confidence interval for Err_m^CV was then constructed based on the proposed bootstrap method. The number of bootstraps was 500 and the number of cross validations per bootstrap was 20. The results are summarized in Table <ref>. 
We also constructed the confidence interval for Err(D_n) using nested cross-validation. The resulting 95% confidence interval was [0.100, 0.116], which was fairly close to the bootstrap-based confidence interval for Err_540^CV where the training size m was the closest to n=600. We have also considered a different prediction model trained using random forest. The output the random forest algorithm was an ensemble of 200 regression trees constructed based on bootstrapped samples. Based on 500 cross-validations, the cross-validation estimator of the mean absolute prediction error for different training sizes was 0.121, 0.116, 0.113, 0.112, 0.111, 0.110, 0.109, 0.109, and 0.108, respectively, smaller than those from lasso regularized linear regression model in general. The 95% confidence intervals using the proposed bootstrap method were reported in Table <ref>. Furthermore, the confidence interval for Err(D_600) based on nested cross-validation was [0.100, 0.115], which was also fairly close to the bootstrap-based confidence interval for Err_540^CV as expected. It appears that when sample size of the training data is small, the random forest generates more accurate predictions than the lasso-regularized linear model. When the sample size m of the training set is greater than 200, however, the differences in prediction performance become very small. To account for the uncertainty, we also used the proposed bootstrap method to construct the 95% confidence interval for the difference in mean absolute prediction error between lasso regularized linear model and random forest. Indeed, when m=60, the 95% confidence interval for the difference in Err_m^CV was [1.50, 2.76] suggesting a statistically significantly better prediction performance of the random forest. The superiority of random forest also holds for m=120. The results for other m were reported in Table <ref> showing no statistically significant difference between two prediction models. §.§ Application 2 §.§.§ Theoretical Properties In the second application, we are interested in estimating the c-index from a logistic regression model via cross-validation. In this case, it is not difficult to verify the conditions C1-C5 under conventional assumptions. For example, under the condition that there is no β such that the hyperplane β'z=a_0 can perfectly separate observations with Y_i=1 from those with Y_i=0 and the matrix A_0=E[Z̃'Z̃exp(β'Z̃)/{1+exp(β'Z̃)}^2] is positive definite for all β, the maximum likelihood estimator based on the logistic regression, β̂, converges to a deterministic limit β_0 in probability as n →∞ and √(n)(β̂-β_0)=1/√(n)∑_i=1^n A_0^-1(Y_i-exp(β_0'Z̃_i)/1+exp(β_0'Z̃_i))+o_p(1), and thus C_1 is satisfied <cit.>. Second, the class of functions {I(β'z̃<0)|β∈Ω} is Donsker, where Ω is a compact set in R^p+1. This fact suggests that the U-process U(β)=√(n)[L(D, β)-P(β'(Z̃_1-Z̃_2)<0| Y_1=0, Y_2=1)] is stochastically continuous, where L(D, β)=1/m_0m_1∑_Y_i=0∑_Y_j=1I(β'Z̃_i<β'Z̃_j), m_g=∑_i=1^n I(Y_i=g). As a consequence, condition C2 is satisfied as n →∞ and 0<P(Y=1)<1. It is clear that P(β'(Z̃_1-Z̃_2)<0| Y_1=0, Y_2=1) is differentiable in β in a small neighborhood of β_0, if β_0'Z̃ has a differentiable density function, which suffices for condition C3. Next, the central limit theorem for U-statistics implies that √(n)[L(D, β_0)-P(β_0'(Z̃_1-Z̃_2)<0| Y_1=0, Y_2=1)] converges weakly to a mean zero Gaussian distribution as n →∞, if 0<P(Y=1)<1. 
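(As an aside, the c-index evaluation in the testing set is a simple two-sample U-statistic; a direct computation, included only for concreteness and using our own notation, reads as follows.)

import numpy as np

def c_index(score, y):
    # proportion of (control, case) pairs with a lower score for the control:
    # (1 / (m0 * m1)) * sum_{Y_i = 0} sum_{Y_j = 1} I(score_i < score_j)
    s0, s1 = score[y == 0], score[y == 1]
    return float((s0[:, None] < s1[None, :]).mean())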
Lastly, E{P(β̂'(Z̃_1-Z̃_2)<0| Y_1=0, Y_2=1)}-P(β_0'(Z̃_1-Z̃_2)<0| Y_1=0, Y_2=1)=O(|E(β̂)-β_0|)=o_p(n^-1/2) and C5 is satisfied. Therefore, we expect that (Err_m^CV-Err_m) can be approximated by a mean zero Gaussian distribution whose variance can be consistently estimated by the proposed bootstrap method. Note that we don't need to assume that the logistic regression model is indeed correct for the conditional probability P(Y=1|Z). §.§.§ Simulation Study In the numerical study, we considered two settings corresponding to low and high dimensional covariates vector Z_i. In the first setting, Z_i followed a 10 dimensional standard multivariate normal distribution and the binary outcome Y_i followed a Bernoulli distribution P(Y_i=1|Z_i)=exp(β_0'Z̃_i)/1+exp(β_0'Z̃_i), where β_0=(0, 1.16, 1.16, 1.16, 1.16, 0, ⋯, 0)'. This regression coefficient was selected such that the mis-classification error of the optimal Bayesian classification rule was approximately 20%. We let the sample size n=90 and considered the cross-validation estimator Err_m^CV for the c-index Err_m, m∈{40, 45, 50, 55, 60, 65, 70, 75, 80}. The true value of Err_m was obtained by averaging c-indexes of 5,000 logistic regression models trained in different training sets of size m in an independent testing set consisting of 200,000 observations. We also constructed the cross-validation estimator of Err_m from 1,000 simulated datasets D_n of size n=90 each. For each generated dataset, we divided the dataset into training and testing sets 400 times and calculated the resulting Err_m^CV as the average of 400 c-indexes from testing sets. We first implemented the fast bootstrap method to estimate the variance of the cross-validated estimates with B_BOOT=400 bootstraps and B_CV=20 cross-validations per bootstrap. The 95% confidence interval for Err_m was constructed accordingly. Based on results from 1,000 datasets, we summarized the empirical average and standard deviation of Err_m^CV for c-index and the empirical coverage level of 95% confidence intervals based on bootstrap variance estimates. The results were reported in Table <ref>. We also examined the performance of the bootstrap calibration in algorithm 2 for constructing 95% confidence intervals with a very small number of bootstraps. To this end, we set (B_BOOT, B_CV)=(20, 50), and the corresponding results were summarized in Table <ref>. The cross-validation estimator Err_m^CV was almost unbiased in estimating Err_m with its empirical bias negligible in comparison with the standard deviation of Err_m^CV. The empirical coverage level of 95% confidence intervals with B_BOOT=400 was fairly close to the nominal level. The coverage levels of the confidence intervals based on σ_m, adj^CV were approximately 90%, lower than 95%, for m=75, 80. We also examined the empirical coverage level of confidence intervals based on σ_m, adj^CV with respect to Err(D_n) for m=80. The coverage level was 89.5%, similar to that for Err_m. When a small number of bootstraps was used, the coverage level of the confidence interval was markedly lower than those constructed via a large number of bootstraps. However, with the proposed bootstrap calibration, the coverage level became comparable to those using B_BOOT=400 bootstraps. For example, when m=80, the empirical coverage level of confidence intervals based on σ_m^CV was 94.2% for B_BOOT=400, 90.5% for B_BOOT=20 without calibration, but 98.2% for B_BOOT=20 after calibration. 
The median width the calibrated confidence intervals increased 11 - 40% depending on m. In this particular setting, we have also examined the choice of B_CV=25 as in other simulation studies but observed a nontrivial proportion of zero variance component estimator for large m, suggesting insufficient number of bootstraps and cross-validation for differentiating intrinsic bootstrap variance from the Monte-Carlo variance due to cross-validation. In the second set of numerical experiments, we have considered a high dimensional case where the dimension of Z, p is set at 1000. To estimate β_0, we have employed the lasso-regularized logistic regression analysis, where β̂ was the maximizer of 1/n∑_i=1^n [ (α+β_Z'Z_i) Y_i-log{1+exp(α+β_Z'Z_i) }]-λ_0|β_Z|_1. To save computational time, the penalty parameter λ_0 was fixed at 0.10 in this simulation study. In this case, the lasso-regularized estimator β̂ was not a root n “regular” estimator and its distribution could not be approximated well by a Gaussian. Therefore, the regularity condition C1 was not satisfied. However, the estimation of the c-index in the testing set was a very “smooth” step based on U-statistic and therefore, we still expected that the cross-validation estimator of the c-index to approximately follow a Gaussian distribution, whose variance could be estimated well by the proposed bootstrap method. Letting the maximizer of the regularized objective function be denoted by α̂ and β̂_Z, the c-index in the testing set is estimated by the concordance rate between Y_i<Y_j and β̂_Z'Z_i<β̂_Z'Z_j. For each of 1,000 simulated datasets, we computed Err_m^CV for c-index, its variance estimator σ̂_m^CV via bootstrap, and the true prediction error Err(D_n). We calculated and reported the true value of Err_m, empirical mean and standard deviation of Err_m^CV, and the empirical coverage level of 95% confidence intervals based on the proposed bootstrap variance estimator in Table <ref>. Similar to the low-dimensional case, Table <ref> summarized the empirical coverage level of confidence intervals constructed only using a very small number of bootstraps. Similar to the low-dimensional case, the empirical bias of Err_m^CV in estimating Err_m was almost zero and the empirical coverage level of the 95% confidence interval was very close to the nominal level of 95%. There was a moderate over-coverage of confidence intervals based on bootstrap variance estimates σ_m^CV. On the other hand, the empirical coverage level of those confidence intervals of Err_m, m=80 with respect to Err(D_n) was 96.1%. The empirical correlation coefficient between Err(D_n) and σ_m^CV was as high as 0.52 in this setting. When only a small number of bootstraps was used, the proposed bootstrap calibration recovered the coverage level to a comparable level of those using a large number of bootstraps. For example, when m=70, the empirical coverage level of the confidence interval based on σ_m, adj^CV was 93.9% for B_BOOT=400, 91.9% for B_BOOT=20 before the bootstrap calibration, and 94.3% for the bootstrap B_BOOT=20 after calibration. The median width the calibrated confidence interval increased 10-14% depending on m. In summary, the proposed confidence intervals have a reasonable coverage. The interval can be viewed as confidence interval for both Err(D_n) and Err_m when m≈ n, although the procedure was designed for the latter. 
In this case, the nested cross-validation is not directly applicable for c-index or ROC curve, since the parameter estimator in the test set doesn't take a form of sum of independent, identically distributed random elements. Lastly, the bootstrap calibration can be used to effectively account for the variability of bootstrap variance estimator due to small number of bootstrap iterations in constructing confidence intervals for Err_m. §.§.§ Real Data Examples In the first example, we tested our proposed method on the MI dataset from UCI machine learning repository. The dataset contained 1700 patients and up to 111 predictors collected from the hospital admission up to 3 days after admission for each patient. We were interested in predicting all cause mortality. After removing features with more than 300 missing values, there were 100 prediction features available at the day 3 after admission including 91 features available at the admission. The observed data consisted of 652 patients with complete information on all 100 features. There were 72 binary, 21 ordinal, and 7 continuous features. Out of 652 patients, there were 62 deaths corresponding to a cumulative mortality of 9.5%. We considered training size m∈{196, 261, 326, 391, 456, 522, 587}, which represented 30% to 90% of the total sample size. We considered four prediction models, all trained by fitting a lasso regularized logistic regression. Model 1 was based on 91 features collected at the time of admission; Model 2 was based on 100 features collected up to day 3 after hospital admission; Model 3 was based on 126 features collected at the time of admission after converting all ordinal features into multiple binary features; and Model 4 was based on 159 features collected up to day 3 after converting all ordinal features into multiple binary features. For simplicity, we fixed the lasso penalty parameter at 0.01 for fitting Model 1 and Model 3, and 0.0075 for fitting Model 2 and Model 4. First, we estimated the cross-validated c-index, which was the AUC under the ROC curve based on 500 random cross-validations. We then constructed the 95% confidence interval based on standard error estimated via the proposed bootstrap method. The number of the bootstraps and the number of cross validations per bootstrap were set at 400 and 20, respectively. The results were reported in Table <ref>. Model 1 and Model 2 had similar predictive performance with a small gain in c-index by including 8 additional features collected after hospital admission. Likewise, Model 3 and Model 4 had similar predictive performance, which, however, was inferior to that of Models 1 and 2, suggesting that converting ordinal predictive features into multiple binary features may had a negative impact on the prediction performance of the regression model. On the one hand, converting ordinal features into binary features allowed more flexible model fitting. On the other hand, this practice increased the number of features and ignored the intrinsic order across different levels of an ordinal variable. Therefore, it was not a surprise that Models 3 and 4 were not as accurate as Models 1 and 2. We then formally compared the model performance by constructing the 95% confidence interval for the difference in c-index between Model 1 and Model 2; between Model 1 and Model 3; and between Model 2 and Model 4. The detailed comparison results for all training sizes were reported in Table <ref>. 
It was interesting to note that all confidence intervals included zero, suggesting that none of the observed differences in c-index between the models were statistically significant at the 0.05 level. In the second example with a binary outcome, we tested our proposal on the “red wine” data set studied in <cit.>. The data set contained measurements for 1,599 red wine samples and was also available in the UCI Repository. Each of the wine samples was evaluated by wine experts for its quality, which was summarized on a scale from 0 to 10, with 0 and 10 representing the poorest and highest quality, respectively. Eleven features, including fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates and alcohol, were also measured for all wine samples. <cit.> compared different data mining methods aiming to predict the ordinal quality score using these eleven features. Here, we conducted a simpler analysis to identify wine samples with a quality score above 6. To this end, we coded a binary outcome Y = 1, if the quality was ≥ 7 and 0, otherwise. We selected an observed data set consisting of the first 400 wine samples from the “red wine” data set. Although the sample size was not small relative to the number of predictive features, there were only 40 observations with Y=1 in this subset. For the cross-validation, the training size m∈{200, 240, 280, 320, 360}. We considered two prediction models: Model 1 was based on a regular logistic regression, and Model 2 was based on a random forest with 200 classification and regression trees. We estimated the c-index based on 500 random cross-validations for each training size m based on the “observed” data set including n=400 wine samples. We then constructed 95% confidence intervals for the c-index using the proposed bootstrap method with (B_BOOT, B_CV)=(400, 20). The results were reported in Table <ref>. It was clear that the random forest generated a substantially better prediction model than the logistic regression across all training sizes considered. The confidence intervals of the difference in c-index were above zero when m=320 and 360, suggesting that the random forest was statistically significantly more accurate than the logistic regression model. It was also interesting to note that the performance of the random forest became better with increased training size, while the c-index of the logistic regression was relatively insensitive to m. This was anticipated considering the fact that the random forest fitted a much more complex model than the simple logistic regression and could make better use of the information provided by more training data for improving the prediction accuracy at local regions of the covariate space. We then used cross-validation to estimate the entire ROC curve based on the random forest with 200 regression and classification trees. Due to the small number of cases in the data set, we used the pre-validation method described in Remark 1 of section <ref>, i.e., constructing the ROC curve based on risk scores for all n samples estimated via 10-fold cross-validation. To remove the Monte-Carlo variability in randomly dividing the data into 10 parts, the final ROC curve was obtained by averaging over ROC curves from 500 10-fold cross-validations. We also plotted the ROC curve without cross-validation, which was clearly overly optimistic, since the AUC of the ROC curve was 1, representing a perfect prediction!
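A sketch of this pre-validation step (one 10-fold pass; the paper averages the resulting ROC curves over 500 such passes) using scikit-learn might look as follows, with hypothetical variable names.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

def prevalidated_roc(Z, y, n_splits=10, seed=0):
    # each sample's risk score comes from a forest trained without that sample's fold
    score = np.zeros(len(y))
    for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(Z):
        rf = RandomForestClassifier(n_estimators=200, random_state=seed)
        rf.fit(Z[tr], y[tr])
        score[te] = rf.predict_proba(Z[te])[:, 1]
    fpr, tpr, _ = roc_curve(y, score)
    return fpr, tpr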
The same bootstrap method could be used to estimate the point-wise confidence intervals of the ROC curve. Specifically, 500 bootstrapped data sets were generated and ROC curves from 20 12-fold cross-validations per bootstrapped data set were calculated. We employed 12-fold instead of 10-fold cross-validation to adjust for the reduced effective sample size in the training set. The bootstrap variance estimators for sensitivities corresponding to specificity levels of 5%, 10%, ⋯, 95% were then obtained. The resulting confidence intervals representing the underlying predictive performance of the trained random forest were plotted in Figure <ref>, suggesting that the real sensitivity in external data would not reach 100% even for low specificity levels. §.§ Application 3 §.§.§ Theoretical Properties In the third application, we are interested in evaluating the performance of a precision medicine strategy. In this case, it is not difficult to verify conditions C1-C5 under suitable assumptions. For example, if the matrix A_0=E(Z̃Z̃') is nonsingular, P(G=1)=π∈ (0, 1), and G⊥ Z, i.e., the treatment assignment is randomized, then γ̂ and β̂ converge to deterministic limits γ_0 and β_0, respectively, as n →∞, and especially √(n)(β̂-β_0)=1/√(n)∑_i=1^n {π(1-π)A_0}^-1(G_i-π)Z̃_i{Y_i-γ_0'Z̃_i-(G_i-π)β_0'Z̃_i}+o_p(1), where β_0 is a unique minimizer of the loss function m(β)=E{(Y^(1)-Y^(0)-β'Z̃)^2}. Thus condition C1 is satisfied. Second, the classes of functions {yI(β'z̃>0) |β∈Ω}, {yI(β'z̃≤ 0) |β∈Ω}, {I(β'z̃>0)|β∈Ω}, and {I(β'z̃≤ 0)|β∈Ω} are Donsker, where Ω is a compact set in R^p+1. This fact suggests that the stochastic processes 1/√(n)∑_i=1^n[Y_iI(β'Z̃_i>0)-E{YI(β'Z̃>0)}] 1/√(n)∑_i=1^n[Y_iI(β'Z̃_i≤ 0)-E{YI(β'Z̃≤ 0)}] 1/√(n)∑_i=1^n[I(β'Z̃_i>0)-P(β'Z̃>0) ] 1/√(n)∑_i=1^n[I(β'Z̃_i≤ 0)-P(β'Z̃≤ 0) ] are all stochastically continuous in β. Therefore, the processes U_1(β)=√(n){∑_i=1^n Y_iG_iI(β'Z̃_i>0)/∑_i=1^n G_i I(β'Z̃_i>0)-∑_i=1^n Y_i(1-G_i)I(β'Z̃_i>0)/∑_i=1^n (1-G_i)I(β'Z̃_i>0)-E(Y^(1)-Y^(0)|β'Z̃>0 )} and U_0(β)=√(n){∑_i=1^n Y_iG_iI(β'Z̃_i≤ 0)/∑_i=1^n G_i I(β'Z̃_i≤ 0)-∑_i=1^n Y_i(1-G_i)I(β'Z̃_i≤ 0)/∑_i=1^n (1-G_i)I(β'Z̃_i≤ 0)-E(Y^(1)-Y^(0)|β'Z̃≤ 0 )} are also stochastically continuous in β, i.e., U_g(β_1)-U_g(β_2)=o_p(1) for β_2-β_1=o(1), g∈{0, 1}. As a consequence, condition C2 is satisfied. It is clear that l_1(β)=E(Y^(1)-Y^(0)|β'Z̃>0 ) and l_0(β)=E(Y^(1)-Y^(0)|β' Z̃≤ 0 ) are differentiable in β in a small neighborhood of β_0, if the random variable β_0'Z̃ has a continuously differentiable bounded density function and E(Y^(g)|β_0'Z̃=s) is smooth in s. This suffices for condition C3. Next, the central limit theorem and delta-method together imply that U_g(β_0) converges weakly to a mean zero Gaussian distribution as n →∞, where g∈{0, 1}. Lastly, |E{l_1(β̂)}-l_1(β_0)|+|E{l_0(β̂)}-l_0(β_0)|=O(|E(β̂)-β_0|)=o(n^-1/2), and C5 is satisfied. Therefore, the cross-validation estimator Err^CV_m is a consistent estimator for Err_m and asymptotically follows a Gaussian distribution, whose variance can be estimated by the proposed bootstrap method. §.§.§ Simulation Study In the simulation study, the covariate Z_i was generated from a p-dimensional standard multivariate Gaussian distribution and the continuous outcome Y_i^(g) was generated via two linear regression models: Y_i^(1)=β_1'Z̃_i+ϵ_i^(1), and Y_i^(0)=β_0'Z̃_i+ϵ_i^(0), where β_1=(0, 0.25, 0.25, 0.25, 0.25, 0, ⋯, 0)', β_0=(0, 0.25, -0.25, 0.25, -0.25, 0, ⋯, 0)' and ϵ_i^(g)∼ N(0, 1), g∈{0, 1}.
The treatment assignment indicator {G_1, ⋯, G_n} was a random permutation of {1, ⋯, 1, 0, ⋯, 0} consisting of n/2 ones and n/2 zeros. The observed outcome Y_i=Y_i^(1)G_i+Y_i^(0)(1-G_i), i=1, ⋯, n. The generated data D_n={X_i=(Y_i, G_i, Z_i), i=1, ⋯, n}. In the first set of simulations, we let p=10 and the sample size n=90× 2=180. We considered the cross-validation estimator Err_m^CV for Err_m, m∈{80, 90, 100, 110, 120, 130, 140}. Due to symmetry, we only considered the case where Err_m was the average treatment effect among patients recommended to receive treatment G=1, denoted as the high value subgroup consisting of responders to the treatment G=1. The true value of Err_m was computed by averaging the treatment effect among identified responders based on the estimated ITR scores from 5,000 generated training sets of size m. The true treatment effect among responders was calculated with an independently generated testing set consisting of 200,000 patients. The true Err_m was 0.37, 0.39, 0.40, 0.42, 0.43, and 0.44 for m= 80, 90, 100, 110, 120, 130, and 140, respectively. The increasing trend in Err_m reflected the improved quality of the estimated ITR score based on a bigger training set. Note that in this setting, the average treatment effect among responders based on true individualized treatment effects was 0.56. We constructed the cross-validation estimates of Err_m from 1,000 datasets D_n of size n=180 each. For each simulated data set D_n, we divided the data set into a training set of size m and a testing set of size n-m. The ITR score Δ(z| D_train) was estimated based on the training set and responders in the testing set were identified as patients whose Δ(z| D_train)> 0. The average treatment effect estimator among responders in the testing set was simply the average difference in Y between responders, who received the active treatment G=1, and responders, who received the control treatment G=0. This process was repeated 400 times and the resulting Err_m^CV was the sample average of 400 average treatment effect estimators among identified responders in the testing set. In addition, we used the proposed bootstrap method to compute the standard error estimators σ̂_m^CV and σ̂_m,adj^CV. In computing them, we set the number of bootstraps B_BOOT to be 400 and the number of cross-validations per bootstrap to be B_CV=20. The 95% confidence interval for Err_m was also constructed for each simulated data set D_n. We also examined the performance of constructed confidence intervals using only a very small number of bootstrap iterations by choosing (B_BOOT, B_CV)=(20, 25). In the second set of simulations, we let p=1000. In calculating the estimated regression coefficient β̂ in the ITR score Δ(z| D_train), we implemented the lasso-regularized method. Specifically, we estimated β by minimizing the regularized loss function ∑_X_i∈ D_train[Y_i-γ_0-γ_Z'Z_i-(G_i-π)β'Z̃_i]^2 +λ_1|γ_Z|_1+λ_2|β|_1, where λ_1 and λ_2 were appropriate penalty parameters. To save computational time, both penalty parameters were fixed at 0.10 in all simulations instead of being adaptively selected via cross-validation within the training set. The resulting minimizer of β was denoted by β̂(D_train) and Δ(z| D_train)=β̂(D_train)'z̃. Similar to the low dimensional case, we simulated 1,000 datasets and for each generated dataset D_n, we calculated Err(D_n), Err_m^CV, the bootstrap standard error estimators σ_m^CV and σ_m,adj^CV, and the corresponding 95% confidence intervals.
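In the low-dimensional case the ITR score and its test-set evaluation can be sketched as below: ordinary least squares on the working model with regressors Z̃ and (G-π)Z̃, followed by the difference in mean outcomes between treated and control responders in the testing set. The names are ours and edge cases such as empty treatment arms among responders are ignored for brevity.

import numpy as np

def itr_ate_among_responders(train, test, pi=0.5):
    # train/test: dicts with arrays "Y", "G" (0/1) and covariate matrix "Z"
    Zt = np.column_stack([np.ones(len(train["Y"])), train["Z"]])
    X = np.column_stack([Zt, (train["G"] - pi)[:, None] * Zt])       # [gamma | beta] design matrix
    coef, *_ = np.linalg.lstsq(X, train["Y"], rcond=None)
    beta = coef[Zt.shape[1]:]                                        # ITR score coefficients
    Zs = np.column_stack([np.ones(len(test["Y"])), test["Z"]])
    responders = Zs @ beta > 0                                       # recommended to receive G = 1
    Y, G = test["Y"][responders], test["G"][responders]
    return Y[G == 1].mean() - Y[G == 0].mean()                       # ATE among identified responders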
Similar to the low dimensional case, we investigated the performance of the confidence intervals constructed using both a large number and a small number of bootstraps. The simulation results including the true value of Err_m, the empirical average and standard deviation of cross-validation estimator Err_m^CV, and the empirical coverage of 95% confidence intervals based on B_BOOT=400 were summarized in Table <ref>. In addition, the empirical coverage levels of the constructed 95% confidence intervals based on B_BOOT=20 were summarized in Table <ref>. For both low and high dimensional cases, the cross-validated estimator Err^CV_m was almost unbiased in estimating Err_m, especially relative to the empirical standard deviation of Err^CV_m. The empirical coverage level of 1000 constructed 95% confidence intervals for Err_m based on bootstrap variance estimator from a large number of bootstrap iterations was quite close to its nominal level. After sample size adjustment in variance estimation, the constructed confidence intervals based on σ_m,adj^CV slightly under-covered the true parameter with empirical coverage levels between 90% and 93%. When a small number of bootstrap iterations was used (B_BOOT=20), the proposed bootstrap calibration can be used to maintain the proper coverage level as Table <ref> showed. As a price, the median width of calibrated confidence interval increased 14-28% depending on the training size m and dimension p. Note that the theoretical justification for the Gaussian approximation to the cross-validated estimator in high dimensional case was not provided as in the previous two examples. However, the empirical distribution of Err^CV_m was quite “Gaussian" with its variance being approximated well by bootstrap method. This observation ensured the good performance of resulting 95% confidence intervals. In addition, the empirical coverage levels of the 95% confidence intervals of Err_m with respect to Err(D_n) were 92.9% and 95.4% in low- and high- dimensional settings, respectively, where (m, n)=(140, 180). In summary, proposed confidence intervals based on bootstrap standard error estimator σ_m^CV have a good coverage level. The bootstrap calibration can effectively correct the under-coverage of the confidence intervals based on a very small number of bootstraps. The constructed confidence interval can be viewed as a confidence interval for both Err(D_n) and Err_m, when m≈ n. In this case, due to the complexity of the evaluation procedure in the testing set, no existing method is readily available for studying the distribution of the cross-validation estimator for the average treatment effect among “responders”. §.§.§ Real Data Example In this section, we consider a recent clinical trial “Prevention of Events with Angiotensin Converting Enzyme Inhibition” (PEACE) to study whether the ACE inhibitors (ACEi) are effective in reducing future cardiovascular-related events for patients with stable coronary artery disease and normal or slightly reduced left ventricular function <cit.>. While a total of 8290 patients are enrolled in the study, we focus on a subgroup of 7865 patients with complete covariate information, in which 3947 and 3603 patients were assigned to receive the ACEi and placebo, respectively. One main endpoint of the study is the patient’s survival time. By the end of the study, there were 315 and 292 deaths observed in the control and treatment groups, respectively. 
Under a proportional hazards model, the estimated hazard ratio was 0.92 with a 95% confidence interval of (0.78, 1.08) and a two-sided p-value of 0.30. Based on the results of this study, it was inconclusive whether ACEi therapy would reduce the mortality in the overall patient population. However, with further analysis of the PEACE survival data, Solomon et al. (2006) reported that ACEi might significantly prolong the survival for patients whose baseline kidney functions were abnormal <cit.>. This finding could be quite important in guiding the individualized treatment recommendation in practice. The objective of our analysis here was systematic identification of a high value subgroup of patients who may benefited from ACEi, even though the average treatment effect of ACEi in the entire study population was neither clinically nor statistically significant. The outcome of interest was the time to all-cause mortality. To build a candidate ITR scoring system capturing the individualized treatment effect, we first used 4 covariates previously identified as statistically and clinically important predictors of the overall mortality in the literature <cit.>. These covariates are age, gender, eGFR measuring baseline renal function, and left ventricular ejection fraction measuring heart’s ability to pump oxygen-rich blood out to the body. We considered an ITR score derived from a training set of size m=6292, i.e., 80% of the entire study population. The ITR score was the estimated difference in restricted mean survival time given baseline covariates <cit.>. To this end, we fit Cox regression in two treatment arms separately: P(Y_i > t| Z_i=z, G_i=g)=S_g(t)^exp(β_g'z) and denote the resulting estimators by {Ŝ_g(·| D_train), β̂_g(D_train)}, where S_g(t) is the baseline survival function and β_g is the regression coefficient in treatment g group, g∈{0, 1}. Specifically, β_g can be estimated by maximizing the partial likelihood function and S_g(·) can be estimated by a transformation of the Breslow estimator of the cumulative baseline hazard function. Under the assumed Cox regression model, the ITR score for a patient with covariate z is Δ(z | D_train)=∫_0^τ[Ŝ_1(t| D_train)^exp(β̂_1(D_train)'z)-Ŝ_0(t | D_train)^exp(β̂_0(D_train)'z)]dt, where τ=2000 days is a pre-specific cutoff time point. The average treatment effect in the testing set can be measured in different ways. First, we considered the average treatment effect as the difference in restricted mean survival time, representing by the area between the two survival curves restricted between time zero and the cutoff time point τ, i.e., Δ̂_g(D_train, D_test)= ∫_0^τŜ_1(t |D̂_test^(g))dt- ∫_0^τŜ_0(t |D̂_test^(g))dt, where g∈{0, 1}, D̂_test^(1)={ X∈ D_test|Δ̂(Z | D_train)≥δ̂_0}, D̂_test^(0)={ X∈ D_test|Δ̂(Z | D_train)< δ̂_0}, δ̂_0 was the sample median of {Δ(Z| D_train) | X∈ D_train}, Ŝ_1(t|D̂_test^(g)) was the Kaplan-Meier estimator of the survival function based on patients in D̂_test^(g) who received the ACEi, and Ŝ_0(t |D̂_test^(g)) was the Kaplan-Meier estimator of the survival function based on patients in D̂_test^(g) who received the placebo. In other words, we were interested in estimating the average treatment effect in the high value subgroup and its complement. Furthermore, we are also interested in estimating the difference in average treatment effect between the high value subgroup and its complement, i.e., the interaction between treatment and subgrouping based on ITR scores. 
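A sketch of this RMST-based ITR score using the lifelines package is given below; the column names "time", "event" and "G" are hypothetical, and the integral over [0, τ] is approximated by a simple Riemann sum rather than any particular routine used by the authors.

import numpy as np
from lifelines import CoxPHFitter

def rmst_itr_score(train_df, z_new, tau=2000.0, n_grid=200):
    # fit separate Cox models in the two arms and contrast the model-based RMSTs up to tau
    times = np.linspace(0.0, tau, n_grid)
    dt = times[1] - times[0]
    rmst = {}
    for g in (0, 1):
        arm = train_df[train_df["G"] == g].drop(columns=["G"])
        cph = CoxPHFitter().fit(arm, duration_col="time", event_col="event")
        surv = cph.predict_survival_function(z_new, times=times)     # rows: times, columns: subjects
        rmst[g] = surv.values.sum(axis=0) * dt                       # Riemann approximation of the RMST
    return rmst[1] - rmst[0]                                         # ITR score Delta(z | D_train)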
Here, the high value subgroup was identified by whether the estimated ITR score Δ̂(z| D_train)>δ_0. Based on 500 cross-validations, the cross-validated estimate Err^CV_m was 21.1 days for Δ̂_1(D_train, D_test), the average treatment effect in the high value subgroup, and -13.2 days for Δ̂_0(D_train, D_test), the average treatment effect in the complement of the high value subgroup. Their difference, i.e., the interaction with treatment, was 34.3 days. Now, we implemented the proposed bootstrap method to estimate the standard errors of those estimators. Specifically, we choose the number of bootstrap to be 500 and the number of cross-validation per bootstrap to be 20. The 95% confidence interval for the average treatment effect in the high value subgroup was [-1.3, 45.5] days, and the associated p value for testing no treatment effect was 0.064, nearly statistically significant at the conventional 0.05 level. The 95% confidence interval for the average treatment effect in the complement of the high value subgroup was [-31.5, 5.2] days and the associated p value for testing no treatment effect in that subgroup was 0.161. The 95% confidence interval for their difference was [4.3, 64.3] and the corresponding p value for testing no difference was 0.025, suggesting that the ITR score constructed from the training set of 6,292 patients had a statistically significant interaction with the treatment, i.e., the average treatment effect in the high value subgroup was greater than that in the remaining patients. The detailed results were summarized in Table <ref>. Next, we also summarized the average treatment effect via the commonly used hazard ratio and the results were also reported in Table <ref>. The 95% confidence intervals of hazard ratios were transformed from confidence intervals of the log-transformed hazard ratio, whose distribution can be approximated better by a Gaussian distribution. Similarly, the comparison of two hazard ratios was based on the difference between two log-transformed hazard ratios or equivalently the ratio of two hazard ratios. Based on the hazard ratio, while the interaction between the identified high value subgroup and treatment was only marginally statistically significant at the 0.10 level, the average treatment effect in the high value subgroup was statistically significantly favoring ACEi. Lastly, we repeated the analysis with 7 baseline covariates: age, gender, eGFR, left ventricular ejection fraction, hypertension, diabetes, and history of myocardial infarction. The results were summarized in Table <ref>. The general directions of the result were similar as those based on 4 baseline covariates: the average treatment effect in the high value subgroup tended to be positive, while the average treatment effect in the complement of the high value subgroup tended to be negative after cross-validations. But no difference was statistically significant, which was expected because the three additional covariates didn't demonstrate strong interactions with the treatment in previous analysis, and might dilute the predictiveness of the constructed ITR scores in capturing the individualized treatment effect <cit.>. There are other measures for the performance of a precision medicine strategy. For example, one may use the average utility (e.g., the RMST) when the strategy is applied to the entire patient population. 
One may also use the AD curve, representing the average treatment effect in subgroups of patients with predicted treatment benefits above different cutoffs, to gauge the performance of a strategy <cit.>. These are not regular prediction errors, but should also be estimated via cross-validation to avoid overfitting. The bootstrap method proposed in this paper can be used to make rigorous statistical inference for them as well. § DISCUSSION In this paper, we propose a bootstrap method for making statistical inferences on summary statistics obtained from cross-validation, which is commonly used to evaluate the performance of statistical procedures. We clarify the population parameter that cross-validation estimates and provide asymptotic justification for the bootstrap procedure under certain regularity conditions. The proposed method substantially reduces the computational demands of a regular bootstrap using results from a random effects model and can be applied to quantify the uncertainty of almost any cross-validation-based estimate. Our approach complements the work of <cit.>, which focuses on constructing confidence intervals for the random quantity Err(D_n). However, there is a significant gap between the empirical performance of the proposed inference in finite samples and its theoretical justification, which requires large sample approximations and root n regular estimates for all relevant parameters. For example, our simulation study shows that the distribution of the cross-validated estimate of the c-index of the logistic regression model with high-dimensional covariates fitted via lasso regularization is reasonably Gaussian, and the associated bootstrap confidence interval performs well even when the sample size is moderate. However, the current theoretical justification cannot confirm the validity of the proposed procedure in this setting, which is often the most important application of the cross-validation procedure. Therefore, further research in this direction is warranted. We also note an interesting observation that although the bootstrap-based confidence intervals are constructed to cover the population parameter Err_m, they demonstrate a comparable coverage level with respect to Err(D_n) for m≈ n. When the dimension is low, this can be explained by the fact that the variance of Err(D_n) is substantially smaller than that of Err_m^CV, and the correlation coefficient between Err(D_n) and Err_m^CV is close to zero. When the dimension is high, however, the variance of Err(D_n) can be comparable to that of Err_m^CV, and the correlation between Err(D_n) and Err_m^CV can be high. It is not clear when the constructed confidence interval can be interpreted as a confidence interval with respect to Err(D_n). § APPENDIX: OUTLINE OF THE THEORETICAL JUSTIFICATION FOR THE ASYMPTOTIC NORMALITY OF THE CROSS-VALIDATION ESTIMATOR ERR_M^CV First, based on condition C1, ψ̂(D_train)-ψ_0 =1/m∑_i∈ D_trainξ(X_i)+o_p(m^-1) =1/n∑_i=1^n I(X_i∈ D_train)/π̂ξ(X_i)+o_p(n^-1), where π̂=m/n∈ [l, u] and 0<l<u<1. Next, it follows from conditions C2 and C3 that √(n-m)[L(D_test, ψ̂(D_train))-E{L(D_test, ψ̂(D_train))}] = √(n-m)[L(D_test, ψ_0)-E{L(D_test, ψ_0)}]+o_p(1) = 1/√(n-m)∑_i ∈ D_testη(X_i)+o_p(1). Note that E{L(D_test, ψ̂(D_train))} = E{L(D_test, ψ_0)}+l_ψ_0(ψ̂(D_train)-ψ_0)+o(ψ̂(D_train)-ψ_0) = E{L(D_test, ψ_0)}+1/n∑_i=1^n I(X_i∈ D_train)/π̂ l_ψ_0{ξ(X_i)}+o_p(n^-1) based on condition C4 and (<ref>).
Lastly, √(n)[L(D_test, ψ̂(D_train))-E{L(D_test, ψ_0)}] = √(n)[L(D_test, ψ̂(D_train))-E{L(D_test, ψ̂(D_train))}] +√(n)[E{L(D_test, ψ̂(D_train))}-E{L(D_test, ψ_0)}] = √(n)/(n-m)∑_i ∈ D_testη(X_i)+ 1/√(n)∑_i=1^n I(X_i∈ D_train)/π̂ l_ψ_0{ξ(X_i)}+ o_p(1) = 1/√(n)∑_i=1^n [ I(X_i∈ D_test)/(1-π̂)η(X_i)+I(X_i∈ D_train)/π̂ l_ψ_0{ξ(X_i)}]+o_p(1). Conditional on the observed data and taking the expectation with respect to I(X_i∈ D_train) and I(X_i ∈ D_test), i=1, ⋯, n, we obtain √(n)(Err^CV_m-E{L(D_test, ψ_0)})=1/√(n)∑_i=1^n [ η(X_i)+l_ψ_0{ξ(X_i)}]+o_p(1). Since lim_n→∞ E{L(D_test, ψ_0)}=Err, it follows that √(n)(Err^CV_m-Err) converges weakly to a mean zero Gaussian distribution with a variance of σ_D^2=E( [ η(X_i)+l_ψ_0{ξ(X_i)}]^2). Finally, under condition C5, √(n)(Err^CV_m-Err_m)= √(n)(Err^CV_m-Err) +o_p(1) also converges weakly to N(0, σ_D^2).
http://arxiv.org/abs/2307.01885v1
20230704190650
Generalised linear response theory for the full quantum work statistics
[ "Giacomo Guarnieri", "Jens Eisert", "Harry J. D. Miller" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK We consider a quantum system driven out of equilibrium via a small Hamiltonian perturbation. Building on the paradigmatic framework of linear response theory, we derive an expression for the full generating function of the dissipated work. Remarkably, we find that all information about the distribution can be encoded in a single accessible quantity known as the relaxation function, thus opening up new ways to use phenomenological models to study non-equilibrium fluctuations in complex quantum systems. Our results establish a number of refined thermodynamic constraints on the work statistics that apply to regimes of small but arbitrarily fast protocols, and do not require assumptions such as slow driving or weak coupling to an environment. Finally, our approach uncovers a distinctly quantum signature in the work statistics that originates from underlying zero-point energy fluctuations. This causes an increased dispersion of the probability distribution at short driving times, a feature that can be probed in efforts to witness non-classical effects in quantum thermodynamics. Generalised linear response theory for the full quantum work statistics Harry J. D. Miller ======================================================================= Irreversible processes are ubiquitous in nature, and their most fundamental signature is entropy production. Thermodynamically, the latter is the pivotal quantity characterising the fraction of energy which is irretrievably lost, and its positivity on average expresses the famous second law <cit.>. At the microscopic level, however, the laws of thermodynamics fall short of providing an accurate description due to the fact that fluctuations of all thermodynamic quantities, such as work and heat, start playing a preponderant role <cit.>. Besides being of fundamental importance, these thermodynamic fluctuations have several direct implications for the performance of small-scale engines and bio-chemical reactions <cit.>, due to the fact that trying to reduce them comes at the cost of additional entropy production. For all these reasons, understanding their properties and their connection with dissipation has been one of the overarching themes of the field of stochastic thermodynamics <cit.>. Furthermore, quantum mechanics provides an additional source of fluctuations even in the absence of any thermal agitation. While quantum properties have often been shown to lead to advantages over classical counterparts with regard to expectation values or speed-ups in total process time <cit.>, many open questions still remain surrounding the thermodynamic cost associated with quantum fluctuations (i.e., the process precision) <cit.>. Recent studies of slowly driven quantum systems have uncovered a wealth of strong results in this direction, ranging from finite-time thermodynamic bounds and trade-offs <cit.> and general optimal control strategies <cit.> to geometric phases <cit.> and broad identifications of quantum signatures in work statistics <cit.>.
However, analogous general, system- and model-independent results valid for finite-time processes beyond slow driving are still relatively unknown to date. Crucially, however, rapid non-equilibrium processes are of much greater utility to real-world realisations of quantum devices where shorter operation times are desired, such as in the context of quantum computing; it therefore remains a pressing question to deeply understand quantum thermodynamics <cit.> in finite-time regimes. Perhaps the biggest challenge to further progress lies in the fact that finite-time processes require precise knowledge about the underlying dynamics; this is especially difficult to obtain when a system is driven via a time-dependent Hamiltonian or in contact with an external environment. One way around this is through linear response theory (LRT), which allows one to utilise phenomenological models to make thermodynamic predictions based on the system's response to small perturbations. Since its formulation by Kubo in 1957 <cit.>, LRT has remained an indispensable tool for studying systems out of equilibrium with significant applications to quantum transport <cit.>, many-body quantum physics <cit.> and quantum field theory <cit.>. In the context of quantum thermodynamics <cit.>, one use of LRT has been to develop general optimisation strategies for protocols with minimal average work dissipation <cit.>. In this work, we set out to develop a detailed picture of quantum work and its stochastic fluctuations in LRT, going substantially beyond previous studies that have been restricted to descriptions of average quantities. In particular, we derive a universal and model-independent expression for all the statistical moments (average, variance, and higher fluctuations) of the dissipated work spent when driving a quantum system out of equilibrium by means of a finite-time, weak perturbation. Our result, remarkably, shows that all these higher order fluctuations can be directly obtained through a single well-known quantity: the system's relaxation function <cit.>. The latter object is one of the central quantities in LRT and often represents the basis for phenomenological thermodynamic descriptions of complex many-body systems. Our main result opens new routes to analyze all the properties of work statistics in complex systems by means of this single quantity, as we exemplify with a number of examples such as overdamped and underdamped Brownian motion. From this we are able to identify a new set of refined thermodynamic constraints, including a fluctuation theorem and positivity of all work cumulants. These are based on minimal assumptions that require only unitary dynamics on the full system and environment along with small variations to the local system Hamiltonian. Operationally speaking, our theory predicts the existence of a distinctly quantum effect on the work probability distribution, which results in a significant broadening of the dispersion at short timescales and low temperatures. We show that this non-classical signature is fundamentally related to the breakdown of energy equipartition in quantum statistical mechanics. In schemes aimed at probing quantum effects in quantum thermodynamics, say with trapped ions <cit.>, such quantum signatures could potentially be detected experimentally using work measurements <cit.>. Quantum dissipated work statistics in LRT.
We begin with a standard setup used in stochastic thermodynamics in which a quantum system is driven unitarily out of equilibrium by a time-dependent Hamiltonian, H_t:=H_0+λ_t V, over a finite interval t∈[0,τ]. Here, t↦λ_t is a dimensionless function characterising a particular protocol, and the operator V is treated as a perturbation that is turned on at time t=0 (i.e., λ_0=0). If necessary, one can partition the Hamiltonian H_0=H_S+H_E+H_int into an interacting subsystem S plus environment E and consider the perturbation V=V_S⊗𝕀_E as acting locally on the subsystem, though we are free to leave this implicit throughout our analysis. The composite system is initially in a thermal Gibbs state ρ_0=π_0 at inverse temperature β=1/(k_B T), where we denote π_t:=e^{-β H_t}/𝒵_t. This evolves to a final state ρ_τ=U_τπ_0 U^†_τ with U_τ=𝒯exp(-(i/ħ)∫^τ_0 dt' H_{t'}) the time-ordered unitary. The central quantity of interest is the dissipated work w_diss:=W-Δ F, with W the fluctuating work done on the system as it is driven out of equilibrium and Δ F=-β^{-1}ln(𝒵_τ/𝒵_0) the change in equilibrium free energy. By work W we refer to the conventional random variable that is determined from projective energy measurements of the Hamiltonian at the beginning and end of the driving <cit.> (for refined concepts, see also Ref. <cit.>). A complete description of the stochastic thermodynamics of the process is encapsulated by the resulting distribution of dissipated work P(w_diss). As shown in Ref. <cit.>, this information is quantified by the quantum Renyi divergence between the instantaneous equilibrium state π_τ and the non-equilibrium state ρ_τ, since the cumulant generating function (CGF) of the process is found to be K(η):=ln⟨ e^{-ηβ w_diss}⟩=(η-1)S_η(π_τ||ρ_τ), where S_α(ρ_1||ρ_2):= (α-1)^{-1}ln Tr[ρ_1^α ρ_2^{1-α}] is the Renyi divergence of order α>0, generalizing the quantum relative entropy. From the CGF, we can derive cumulants using the formula κ^k_W :=(-k_B T)^k lim_{η→ 0}∂^k K(η)/∂η^k. In Kubo's linear response theory, one assumes a weak perturbation such that |λ_t|≪ 1 for all t∈[0,τ], with normalization ‖V‖=1, resulting in a small deviation from the initial equilibrium state at all times given by <cit.> ρ_t= π_0-(i/ħ)∫^t_0 dt' λ_{t'}[V(t-t'),π_0]. Here, we denote operators such as A(t):=e^{iH_0 t/ħ}A e^{-iH_0 t/ħ} in the interaction picture. Under this approximation, it is known that the average dissipated work is related to the two-time integral <cit.> β⟨w_diss⟩=1/2∫^τ_0 dt ∫^τ_0 dt' Ψ_0(t-t')λ̇_t λ̇_{t'}. Here t↦Ψ_0(t) is known as the relaxation function <cit.>, defined using the Kubo covariance [see Eq. (3.11) of Ref. <cit.> with A = B = V] Ψ_0(t):= β∫^β_0 ds ⟨ V(-iħ s)V(t)⟩_0-β^2 ⟨ V ⟩_0^2, where ⟨ .⟩_0 is an average with respect to the thermal state π_0. In linear response theory, Ψ_0 plays a fundamental role as a measure of the rate at which a system coupled to an environment relaxes to equilibrium after introduction of the perturbation. Moreover, it defines a characteristic timescale τ_R:=∫^∞_0 dt Ψ_0(t)/Ψ_0(0) over which two-time correlations in t↦ V(t) decay. While the calculation of the relaxation function requires knowledge of the exact dynamics of the system, the power of LRT lies in the fact that one may use phenomenological models of Ψ_0 to investigate the generic behaviour of systems where this information may not be available. To do this one must impose certain constraints on any ansatz that ensure both dynamical and thermodynamic consistency.
The two key properties we require are, for all times t and all ω, Ψ_0(t)=Ψ_0(-t), Ψ̃_0(ω)≥ 0. The first property reflects the time-reversal symmetry of the underlying Hamiltonian dynamics, which should be satisfied whenever [H_t,Θ ]=0 with Θ the anti-unitary time-reversal operator. The second property in (<ref>) states that the Fourier transform of the relaxation function, Ψ̃_0(ω)=ℱ[Ψ_0](ω), should be positive. When combined with time-reversal symmetry, it can be shown to be a necessary and sufficient condition to guarantee validity of the second law of thermodynamics under linear response, ⟨w_diss⟩≥ 0 <cit.>. While thermodynamic consistency at the ensemble level is guaranteed solely by properties of the relaxation function, this is not necessarily the full picture. For small quantum systems it is crucial to consider the higher order statistics of work, and it remains to be seen how the full distribution is impacted by linear response perturbations. To facilitate this investigation, our main technical contribution is a derivation of a formula for the CGF (<ref>) under the linear response approximation (<ref>) and time-reversal symmetry Ψ_0(t)=Ψ_0(-t) for all t. Expanding the Renyi divergences in Eq. (<ref>) up to second order in the perturbation strength, the CGF can be well approximated by a two-time correlation function (see Appendix A) K(η)≃ -β^2∫^τ_0 dt ∫^t_0 dt' λ̇_t λ̇_{t'} ⟨⟨ V(t),V(t') ⟩⟩_0^η . The bi-linear form in (<ref>) is the quantum covariance ⟨⟨ A,B ⟩⟩_0^η:=∫^η_0 dx ∫^{1-x}_x dy Tr[π_0^y δ A π_0^{1-y} δ B], with δ A:=A-Tr[A π_0]. This can be thought of as a generalised version of the Kubo covariance (<ref>) and originates from the field of quantum information geometry <cit.>. It has previously appeared in the study of slow-driving quantum thermodynamics <cit.> and in the context of the locality of temperature <cit.>, though in the present context it is important to note a key difference. While in slow driving one neglects correlations over long times, the linear response regime can incorporate memory effects due to finite-time driving, as reflected by the double time integral in (<ref>). While not immediately clear, this complicated function can be related to the familiar relaxation function Ψ_0 of LRT. By adopting a Fourier transform method used in Ref. <cit.>, we can express (<ref>) instead as K(η)= -∫^τ_0 dt ∫^τ_0 dt' λ̇_t λ̇_{t'} [g_η∗Ψ_0](t-t') where g_η(t):=ℱ^{-1}[sinh(βħω(1-η)/2) sinh(βħωη/2)/(βħω sinh(βħω/2))](t). The key observation here is that the relaxation function fully characterises the stochastic thermodynamics of a process for small perturbations, with g_η acting as a universal generating function for the higher order fluctuations via its convolution with Ψ_0. The benefit of this expression is that we can now identify a number of general properties of the statistics. Firstly, it is immediate to see the symmetry K(η)=K(1-η) in Eq. (<ref>) for all η, which leads us to a refined fluctuation theorem upon converting the CGF into the distribution P(w_diss) via an inverse Laplace transform, P(w_diss)/P(-w_diss)=e^{β w_diss}. This type of relation is often referred to as the Evans-Searles fluctuation theorem <cit.> and imposes a much stronger constraint on the distribution than the usual detailed fluctuation theorem of Crooks <cit.>, which relates the work distribution to its time-reversed counterpart through P(w_diss)=e^{β w_diss} P_R(-w_diss).
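To make the last two formulas concrete, the following minimal Python sketch (not part of the original analysis) evaluates K(η) in the frequency domain for an overdamped ansatz Ψ_0(t)=Ψ_0(0)e^{-γ|t|} and a linear ramp λ_t=t/τ, and checks the symmetry K(η)=K(1-η) numerically. The parameter values, the integration cutoffs and the overall Fourier-convention prefactor (fixed here so that -∂_η K|_{η=0} reproduces β⟨w_diss⟩ of the cumulant formula below) are our own illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

hbar, beta, tau, gamma, Psi0 = 1.0, 1.0, 2.0, 1.0, 1.0       # illustrative values, k_B = 1

Psi_tilde = lambda w: Psi0*np.sqrt(2/np.pi)*gamma/(gamma**2 + w**2)   # overdamped ansatz in Fourier space
ramp_sq   = lambda w: (2.0/tau**2)*(1 - np.cos(w*tau))/w**2           # |integral of dλ/dt e^{iωt}|^2, linear ramp

def g_tilde(w, eta):                     # Fourier image of g_eta, written for ω != 0
    a = beta*hbar*w
    return np.sinh(a*(1 - eta)/2)*np.sinh(a*eta/2)/(a*np.sinh(a/2))

def K(eta):                              # frequency-domain reading of the convolution formula
    f = lambda w: g_tilde(w, eta)*Psi_tilde(w)*ramp_sq(w)/np.sqrt(2*np.pi)
    val, _ = quad(f, 1e-8, 200, limit=400)
    return -2*val                        # the integrand is even in ω, so fold ω<0 onto ω>0

print("K(0.3) =", K(0.3), "  K(0.7) =", K(0.7))               # symmetry K(η) = K(1-η)
print("beta<w_diss> from -dK/deta at 0:", -(K(1e-4) - K(0.0))/1e-4)
```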
While this has been known to apply to systems driven either slowly <cit.> or via time-symmetric driving protocols <cit.>, here we have confirmed that quantum linear response is another regime where it can arise. In contrast to what is typically expected in classical stochastic thermodynamics <cit.>, our result predicts a quantum work distribution that will be distinctly non-Gaussian at finite temperatures. This can be observed by looking at the non-vanishing, higher order cumulants. Taking derivatives of (<ref>), we find using the convolution theorem the expression β^kκ^k_W=∫_ℝ (dω/√(2π)) Ψ̃_0(ω)γ^k(ω)|∫^τ_0 dt λ̇_t e^{iω t}|^2, where γ^k(ω):=(1/2)(βħω)^{k-1}coth(βħω/2) for k even and γ^k(ω):=(1/2)(βħω)^{k-1} for k odd. Since Ψ̃_0(ω)≥ 0 from our consistency requirement on the relaxation function (<ref>), we can infer that all cumulants of the dissipated work are non-negative, κ^k_W ≥ 0 for integer k. This of course is a much tighter constraint on the process than the usual average second law ⟨w_diss⟩≥ 0. Overall, the constraints (<ref>) and (<ref>) give refined statements about the statistics of dissipated work in linear response, demonstrating that the distribution must have a positive skew, with positive amounts of dissipation exponentially favoured. Statistical interpretation of the quantum correction. As a final result, our linear response approach also highlights a quantum signature in the statistics via a fluctuation-dissipation relation (FDR). Let us define the Fano factor F_W, which quantifies the relative dispersion of the work distribution, as F_W:=Var(W)/(2⟨w_diss⟩). Classically one should expect F_W=k_B T for a system that remains close to equilibrium, such as in linear response <cit.> or slow driving <cit.>, with the work variance becoming proportional to the average dissipation. However, we show here that at finite temperatures F_W gets modified by quantum fluctuations, and we provide a clear-cut statistical interpretation of this effect. To achieve this goal, we first of all introduce a weighted response function in Fourier space, Ψ̃_0^eff(ω) :=Ψ̃_0(ω)|∫^τ_0 dt λ̇_t e^{iω t}|^2, which explicitly accounts for the specific choice of the driving protocol. Since by thermodynamic consistency we have Ψ̃_0(ω)≥ 0, it then follows that Eq. (<ref>) allows us to define a normalised probability distribution over the continuum of frequencies ω∈[0,∞) associated with the system dynamics, which we refer to as pseudo-modes, via P̃(ω):=Ψ̃_0^eff(ω)/∫^∞_0 dω' Ψ̃_0^eff(ω'). Eq. (<ref>) describes how these pseudo-modes are statistically distributed for a given driving protocol in LRT. Crucially, it now becomes apparent that the average energies of these modes determine the dispersion of the work distribution. First note that since Ψ_0(t)=Ψ_0(-t), Ψ̃_0(ω) will be an even function of ω. It is then straightforward to show from (<ref>) that the Fano factor Eq. (<ref>) can be expressed as F_W=(1/2)⟨ħω coth(βħω/2) ⟩=⟨ E_ω⟩, where the average is taken with respect to the pseudo-mode distribution ω↦P̃(ω) in Eq. (<ref>). We can now recognise the RHS as the average total energy E_ω of a harmonic oscillator at frequency ω in thermal equilibrium. Interestingly, this is true even if the original system at hand is not a harmonic system. This provides a remarkable statistical-mechanical connection between the physical dissipated-work fluctuations and the effective energy distributed amongst these pseudo-modes associated with the driving.
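As a cross-check of these statements, a short sketch under the same illustrative assumptions as before (overdamped ansatz, linear ramp, arbitrary parameter values) evaluates the first few cumulants from the frequency integral above and then obtains βF_W both as the cumulant ratio and as the pseudo-mode average of (βħω/2)coth(βħω/2); the two routes should agree.

```python
import numpy as np
from scipy.integrate import quad

hbar, beta, tau, gamma, Psi0 = 1.0, 1.0, 2.0, 1.0, 1.0     # illustrative values, k_B = 1

Psi_eff = lambda w: (Psi0*np.sqrt(2/np.pi)*gamma/(gamma**2 + w**2)   # overdamped ansatz ...
                     * (2.0/tau**2)*(1 - np.cos(w*tau))/w**2)        # ... times the driving weight

def gamma_k(w, k):                      # the weight γ^k(ω) defined above
    x = beta*hbar*w
    return 0.5*x**(k-1)/np.tanh(x/2) if k % 2 == 0 else 0.5*x**(k-1)

def cumulant(k):                        # β^k κ^k_W; the integrand is even, so fold ω<0 onto ω>0
    val, _ = quad(lambda w: Psi_eff(w)*gamma_k(w, k)/np.sqrt(2*np.pi), 1e-8, 200, limit=400)
    return 2*val

for k in (1, 2, 3):
    print("beta^%d kappa^%d =" % (k, k), cumulant(k))      # all come out non-negative

# Fano factor two ways: cumulant ratio vs. pseudo-mode average of (βħω/2) coth(βħω/2)
fano_ratio = cumulant(2)/(2*cumulant(1))
num, _ = quad(lambda w: Psi_eff(w)*0.5*beta*hbar*w/np.tanh(beta*hbar*w/2), 1e-8, 200, limit=400)
den, _ = quad(Psi_eff, 1e-8, 200, limit=400)
print("beta*F_W:", fano_ratio, "vs pseudo-mode average:", num/den)
```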
One can infer some key properties of this quantity, such as the chain of inequalities F_W≥1/2ħ⟨ω⟩Coth(βħ⟨ω⟩/2)≥ k_B T, The first lower bound follows from Jensen's inequality for the convex function x ↦ x Coth(β x) while the second follows from the simple bound x Coth(x)≥ 1 for real x. Thus, we see that the work distribution will exhibit larger dispersion than what is expected from classical stochastic thermodynamics. We can thus interpret the increased dispersion is analogous to the breakdown of the equipartition theorem in quantum statistical mechanics <cit.>. Traditionally, this applies to systems in equilibrium and expresses the fact that energy cannot be equally shared amongst all degrees of freedom of a quantum system due to its discrete nature, implying a frequency dependence on the average energy rather than the classical prediction of ⟨_ω⟩=kT<cit.>. In the present context, we see that a similar breakdown occurs for non-equilibrium processes in linear response, imparting a signature on the dispersion of the quantum dissipated work distribution. The genuine quantum origin of this effect is clear from the first inequality of Eq. (<ref>); the term ħ⟨ω⟩/2 represents in fact the average zero-point energy of the pseudo-modes with respect to P̃(ω) and is manifestly positive. In fact, at low temperatures ħωCoth(βħω/2)/2≃ħ |ω|/2 and so F_W ≈1/2ħ⟨ω⟩ , βħ/τ_R≫ 1. As a consequence, the non-vanishing work dispersion at zero temperature is of a purely quantum nature, and contrasts with what would be seen classically. This can be measured experimentally, possibly even in the temperature dependence. Note that our approximation is taken with respect to the characteristic timescale τ_R Eq. (<ref>). This reflects the fact that the most significant impact to the dispersion is expected to appear over short times since the distribution P̃ will be concentrated on higher modes. Similarly, the monotonicity of the function (<ref>) in β indicates that quantum signatures to the dispersion will be more significant at lower temperatures, as expected. Finally, consistency with classical thermodynamics is ensured in the high-temperature/long-time limit βħ/τ_R≪ 1, since x Coth(x)≃ 1 for x≪ 1 and one recovers F_W→ k_B T. While a similar quantum signature has been derived for slowly-driven Markovian systems <cit.>, the result obtained here goes significantly beyond this as they apply to generic driving protocols in linear response regime at any speed, as well as to open quantum systems dynamics described by global unitary evolution on system and environment, provided that the perturbation to the local Hamiltonian remains weak. This will be showcased in two case studies. Examples. One of the most powerful consequences of the main result Eq. (<ref>) is that it allows to fully characterize the statistics of dissipation in more complex systems where we may not have solutions to the full Hamiltonian dynamics. This is especially relevant for driven open quantum systems, whose dissipated work's statistics can still be described using our Eq. (<ref>) provided one is simply given an ansatz or an approximation of the relaxation function. Let us show this by considering the two phenomenological models of Ψ_0, Ψ^(1)_0(t):=Ψ^(1)_0(0)e^-γ |t|, Ψ^(2)_0(t):=Ψ^(2)_0(0)e^-γ |t|(cos(ν t)+(γ/ν)sin(ν |t|)). In practice, such models can be built from the Kramers-Kronig relations and sum rules by imposing a number of Hamiltonian constraints that adequately characterise the system behaviour <cit.>. 
In the first instance, Ψ^(1)_0 provides a reasonable description of an overdamped Brownian dynamics <cit.>, with free parameter γ=1/τ_R setting the characteristic timescale over which the relaxation function decays. The second model, i.e., Ψ^(2)_0, includes an oscillatory behaviour with an additional degree of freedom ν that quantifies the frequency of an external potential. This can be viewed as a model of underdamped Brownian motion <cit.>, though another example where this behaviour arises is in magnetic, weakly interacting spin systems <cit.>. It can be verified that these are valid models consistent with our requirements (<ref>) since their relaxation function in Fourier space is positive, Ψ̃^(1,2)_0(ω)≥ 0 for all ω. From these functions, cumulants can be calculated straightforwardly via the integral Eq. (<ref>) in the frequency domain. A remarkable feature of the linear response regime is that we can predict how the dispersion of the dissipated work distribution changes over time independent of the system-specific features contained in Ψ_0(0). For example, taking the overdamped relaxation model Ψ^(1)_0, along with a linear driving protocol λ_t=α t/τ, we find that the leading order quantum correction to the Fano factor is, analytically (see Appendix B), β F_W≃ 1+ħ^2β^2γ^2/6π(e^γτ-1)/1+(τγ-1)e^γτ. In this case, we see that the correction monotonically decays in the long time limit γτ→∞, indicating that the quantum fluctuations become less relevant at long times. Conversely, at short times we see a dramatic quantum signature with large dispersion in the work distribution above the classical FDR F_W=k_B T. This behaviour can be seen to persist outside the high temperature regime as we show in Fig. <ref> (a), where one can see the impact of the quantum correction increase at lower temperatures, as one would expect. While long time decay can be ensured via the damping terms in (<ref>), monotonicity is not necessarily guaranteed. This in fact can be clearly observed for the dynamics of Ψ^(2)_0, where we plot the corresponding Fano factor as a function of time for various values of ν in Fig. <ref> (b). Our analysis indeed shows non-monotonic changes in the dispersion at short times, indicating a non-trivial interplay between the quantum fluctuations and thermal relaxation from the environment. While it is known that average dissipated work and entropy production can become non-monotonic in time <cit.>, we see here that signatures of memory effects from the perturbation impact the higher order statistics of work as well. Conclusions. In summary, we have formulated a linear response theory that is able to describe the full work statistics of weak quantum processes and used it identify a number of refined thermodynamic constraints such as the fluctuation theorem (<ref>) and positivity of all work cumulants (<ref>). Much like the benefits of traditional linear response theory, which concerns ensemble quantities like average work, our general formula for the CGF opens a route towards studying properties of higher order work statistics of complex systems via phenomenological models of the relaxation function. This can be used to explore thermodynamic optimisation problems that revolve around stochastic fluctuations such as Pareto optimal work extraction <cit.> and free energy estimation <cit.>. 
The precise connection between the work distribution and relaxation function could also be used to explore the impact of non-Markovianity <cit.> and phase transitions <cit.> on work statistics from a general perspective. Our results have also predicted a clear quantum signature (<ref>) that causes an increase in the dispersion of the work distribution at short driving times and finite temperatures, and is universally applicable to composite systems that are driven by local perturbations. Since the Fano factor F_W can be measured experimentally from the work statistics <cit.>, this offers a clear route to detecting truly non-classical behaviour from thermodynamic variables. Acknowledgements: G. G. acknowledges fundings from European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement INTREPID, No. 101026667. J. E. thanks the DFG (FOR 2724) and the FQXI. H. J. D. M. acknowledges support from the Royal Commission for the Exhibition of 1851. apsrev4-1 § EXPRESSION FOR THE CUMULANT GENERATING FUNCTION FOR THE DISSIPATED WORK IN LINEAR RESPONSE REGIME In this section, we will detail the derivation of the two expression for the cumulant generating function (CGF) of the dissipated work in linear response regime of the main text, i.e., Eqs. (<ref>)-(<ref>). We first start by fixing the notation and introduce some of the main quantities used below. While we keep ħ explicit into the main text, we will set ħ = 1 in this section in order to ease the notation. Let us start by considering a quantum system initially prepared in a thermal state with respect to a given Hamiltonian H_0, i.e., ρ_0 = π_0 := e^-β H_0/Z_0, where β>0 denotes the inverse temperature of the prepared state. From t=0 onwards, the Hamiltonian H_t=H_0+λ_t V is externally driven, up to some arbitrary time τ, by means of a driving schedule t↦λ_t = α g_t (λ_0 = 0), where α denotes the intensity of the perturbation and t↦ g_t describes a generic time-dependent profile. As normalization. we pick the operator norm of the perturbation to take unity value, V=1. Before we make any linear response approximation, we need to express the CGF K(η)=ln⟨ e^-ηβ⟩ in a time local form, for η>0. This can be found by taking the time derivative of Renyi divergence, which has been determined in Ref. <cit.> (see Eqs. (I1)-(I10)) to be K(η) =(η-1)S_η(π_τ||ρ_τ) = ∫_0^τ dt d/dt(Tr[π_t^ηρ_t^1-η]) = -β∫_0^τ dt ∫_0^η[Tr[ π_t^x(Δ_t Ḣ_t) ρ_t^1-x]/Tr[ π_t^ηρ_t^1-η] +∫_0^x dy Tr[ π_t^y(Δ_t Ḣ_t) π_t^x-y(logπ_t - logϱ_t ) ϱ_t^1-x]/Tr[ π_t^ηϱ_t^1-η]] =: K_1(η) + K_2(η), where Δ_t Ḣ_t:=Ḣ_t-Tr[Ḣ_tπ_t], and for later convenience we have separated the two terms in the sum. In this work, we will not put any restriction on the form of the protocol t↦λ_t nor on the speed of the driving (such as, e.g., in Refs. <cit.>), but rather we will assume that the perturbation strength is small, i.e., α≪ 1. This means that we will perform a Taylor expansion in α of all the quantities entering Eq. (<ref>) and truncate to first order. This formal framework is known as linear response theory (LRT). §.§ Derivation of Eq. (<ref>) The first key ingredient from LRT is the linear order approximation for the instantaneous state of the system, which follows from a Dyson expansion of the unitarily evolved state ρ_t:=U_t π_0 U_t^†<cit.>, to get, in this approximation, ρ_t≃π_0+δρ_t, δρ_t=-i α∫^t_0 dt' g_t' [V(t-t'),π_0], where V(t):=e^i t H_0V e^-it H_0 denotes the perturbation evolved in the interaction picture. 
As well as this, we need to also consider how perturbation effects the instantaneous equilibrium state as well. The Duhamel identity provides a mean of determining this, which gives the Taylor expansion e^A+α B=e^A+α∫^1_0 dy e^s A B e^(1-s)A+𝒪(α^2), for an exponential operator <cit.>. It then follows that the thermal state at time t is given by π_t:=e^-β (H_0+α g_t V)/e^-β (H_0+α g_t V)=π_0-αβ g_t 𝕁_0(δ V) + 𝒪(α^2), where we have introduced the superoperator 𝕁_0(.):=∫^1_0 dy π_0^y (.)π_0^1-y, and have defined the shifted operator δ V:=V-Vπ_0. From this point on, in order to not over-cumber the notation, all the '=' signs in equations (unless explicitly stated otherwise) will signify an equality up to terms of the order 𝒪(α^2). Analogous calculations allow to obtain analytic expressions for the other quantities of interest entering Eq. (<ref>), such as the following. * The logarithm of the instantaneous state <cit.>, logρ_t ≃π_0+𝕁_0^-1(δρ), with 𝕁^-1_0 being the inverse superoperator of 𝕁_0, i.e., 𝕁_0^-1𝕁_0(.) = 𝕁_0𝕁_0^-1(.) = (.) . * Their matrix powers <cit.> ρ_t^x = (π_0+δρ)^x = π_0^x+𝕁_0^(x)(𝕁_0^-1(δρ)), π_t^x = π_0^x-βλ_t𝕁_0^(x)(δ V), where we have introduced the superoperator 𝕁_0^(x)(.):=∫^x_0 dy π_0^y (.)π^1-y_0, consistent with Eq. (<ref>), i.e., 𝕁_0(.) ≡𝕁^(1)_0(.); * The instantaneous deviation of the Hamiltonian from its thermal expectation value Δ_t Ḣ_t = λ̇_t δ V + βλ̇_t λ_t cov^1(V,V), where we have introduced the y-covariance between two operators A and B with respect to π_0 by cov^y(A,B):=π_0^y δ A π_0^1-y δ B, with δ A=A-A π_0. Making use of these expressions, it is straightforward to show that the denominator of Eq. (<ref>) reduces in linear response regime to Tr[ π_t^ηρ_t^1-η] = 1, where we remind the reader that we implicitly neglected the terms 𝒪(α^2). Further calculations allow to express the two terms in the expression for the CGF of the dissipated work Eq. (<ref>) as K_1(η)=-β^2 ηλ_τ^2/2 ∫^1_0 dy cov^y(V,V)+β^2λ^2_τ∫^η_0 dx ∫^x_0 dy cov^y(V,V), K_2(η)=-β∫^τ_0 dt λ̇_t ∫^η_0 dx ∫^1-x_x dy cov^y(𝕁_0^-1(δρ),V). A further insightful simplification stems from expanding the operators appearing in K_2(η) onto the eigenbasis of H_0=∑_n ϵ_n|n⟩⟨n|: Setting M=𝕁_0^-1(δ V) for convenience, one has K_2(η) =-β∫^η_0 dx ∫^1-x_x dy ∫^τ_0 dt λ̇_t ∫^t_0 dt' λ_t' ϕ_y(t-t') =-β∫^η_0 dx ∫^1-x_x dy ∫^τ_0 dt λ̇_t ∫^t_0 dt' λ_t-t' ϕ_y(t'), where we have defined ϕ_y(v):=i∑_n,m p_n^y p_m^1-ye^-iv β^-1log(p_n/p_m)(p_n-p_m) V_n,m M_m,n, and have diagonalised the thermal state π_0=∑_n p_n |n⟩⟨n| with p_n∝ e^-β E_n and set Δ E_n,m:=ϵ_n-ϵ_m. If we now consider its integral ψ_y(t')=∫^t'_0 dt” ϕ_y(t”) =-β∑_n,m p_n^y p_m^1-y(1-e^it'Δ E_n,m)(p_n-p_m)/log(p_n/p_m) V_n,m M_m,n, =β∑_n,m p_n^y p_m^1-y(⟨n|𝕁_0[V]|m⟩-⟨n|𝕁_0[V(t')]|m⟩)⟨m|𝕁_0^-1[δ V]|n⟩, =βπ^y Vπ^1-yδ V-βπ^y V(t')π^1-yδ V, =βcov^y(V,V)-βcov^y(V(t'),V), we can use this expression in Eq. (<ref>) once an integration by parts is performed, leading to K_2(η) =-β∫^η_0 dx ∫^1-x_x dy ∫^τ_0 dt λ̇_t (λ_0 ψ_y(t)-λ_t ψ_y(0)-∫^t_0 dt' dλ_t-t'/dt' ψ_y(t')) =β∫^η_0 dx ∫^1-x_x dy ∫^τ_0 dt λ̇_t ∫^t_0 dt' dλ_t-t'/dt' ψ_y(t'), =-β^2∫^τ_0 dt λ̇_t ∫^t_0 dt λ̇_t-t'∫^η_0 dx ∫^1-x_x dy cov^y(V(t'),V)+β^2λ_τ^2/2∫^η_0 dx ∫^1-x_x dy cov^y(V,V). To get the second line of the equation above, we have used the fact that λ_0 = 0 and ψ_y(0) = 0. Finally, we make use of this expression for K_2(η) and combine it with K_1(η) from Eq. (<ref>), and then exploit the fact that (∫^x_0-1/2∫^1_0+1/2∫^1-x_x) dy cov^y(V,V)=0 due to the symmetry cov^y(V,V)=cov^1-y(V,V). 
This lead us to obtain the following compact form for the CGF of the dissipated work in linear response regime K(η) =-β^2∫^τ_0 dt λ̇_t∫^t_0 dt' λ̇_t'∫^η_0 dx ∫^1-x_x dy cov^y (V(t-t'),V(0)) = -β^2∫^τ_0 dt∫^t_0 dt' λ̇_tλ̇_t'∫^η_0 dx ∫^1-x_x dy cov^y (V(t),V(t')), = -β^2∫^τ_0 dt ∫^t_0 dt' λ̇_t λ̇_t' ⟨⟨ V(t),V(t') ⟩⟩_0^η where we have used the time translational invariance of the covariance appearing in the integrand. This represents the first expression of the main result Eq. (<ref>). §.§ Derivation of Eq. (<ref>) In this section, we prove that our formula for the CGF (<ref>) can be rewritten in terms of the relaxation function, defined by Ψ_0(t-t') := β∫^β_0 ds ⟨ V(-i s)V(t-t')⟩_0-β^2 ⟨ V ⟩_0^2 =β^2 ∫^1_0 dy cov^y(V(t),V(t')). Our approach will be based on a method used by Shitara and Ueda who found an elegant connection between the linear response function and families of quantum covariances <cit.>. Let us first define the Fourier transform 𝒞_η(ω) := 1/√(2π)∫_ℝd (t-t') e^iω (t-t')⟨⟨ V(t),V(t') ⟩⟩^η_0 of the bilinear form in Eq. (<ref>) with respect to time difference t-t'. We also define a function f_η for a parameter η>0 as f_η(x):=(x^η-1)(x^1-η-1)/log^2(x). The expansion of Eq. (<ref>) on to the basis of the state π_0 gives 𝒞_η(ω) = 1/√(2π)∫_ℝd(t-t') e^iω (t-t') ∑_n,m(-∫^η_0 dx ∫^1-x_x dy p_n^y δ V_n,m e^i(ϵ_n-ϵ_m)(t-t')p_m^1-yδ V_m,n) = 1/√(2π)∫_ℝd(t-t') e^iω (t-t') ∑_n,m e^i(ϵ_n-ϵ_m)(t-t')[∫^η_0 dx ∫^x_1-x dy (p_n/p_m)^y ] δ V_n,m p_m δ V_m,n = 1/√(2π)∑_n,m( ∫_ℝd(t-t') e^i(ω-β^-1log (p_n/p_m))(t-t')) [((p_n/p_m)^η-1)((p_n/p_m)^1-η-1)/log^2(p_n/p_m)] δ V_n,m p_m δ V_m,n = ∑_n,m√(2π)δ(ω-β^-1log (p_n/p_m)) f_η(p_n/p_m) δ V_n,m p_m δ V_m,n = √(2π) f_η(e^-βω)∑_n,mδ(ω-β^-1log(p_n/p_m)) δ V_n,m p_m δ V_m,n, where we have used the relation ϵ_n-ϵ_m = -β^-1log (p_n/p_m) and the definition of the Dirac delta ∫_ℝdt e^iω t = 2πδ(ω) and its properties. The last term in the above expression, i.e., ∑_n,mδ(ω-β^-1log (p_n/p_m)) δ V_n,m p_m δ V_m,n, could be further simplified, but it will be more convenient to keep it in this form for the following calculations. Let us now consider the Fourier transform of the relaxation function Eq. (<ref>), i.e., Ψ_0(ω) = β^2/√(2π)∫_ℝd(t-t') e^iω (t-t') Ψ_0(t) = β^2/√(2π)∫_ℝd(t-t') e^iω (t-t') cov(V(t),V(t')). Analogous calculations can be made for this quantity (k_B T)^2Ψ_0(ω) = 1/√(2π)∫_ℝd(t-t') e^iω (t-t') ∑_n,m(∫^1_0 dy p_n^y δ V_n,m e^i(ϵ_n-ϵ_m)(t-t')p_m^1-yδ V_m,n) = 1/√(2π)∫_ℝd(t-t') e^iω (t-t') ∑_n,m e^i(ϵ_n-ϵ_m)(t-t')[∫^1_0 dy (p_n/p_m)^y ] δ V_n,m p_m δ V_m,n = 1/√(2π)∑_n,m(∫_ℝd(t-t') e^i(ω-β^-1log (p_n/p_m))(t-t')) [(p_n/p_m)-1/log(p_n/p_m)] δ V_n,m p_m δ V_m,n = √(2π) 1-e^-βω/βω∑_n,mδ(ω-β^-1log (p_n/p_m)) δ V_n,m p_m δ V_m,n. By direct comparison between Eqs. (<ref>) and (<ref>) (and putting ħ back in), we obtain the following result β^2𝒞_η(ω) = βħω f_η(e^-βħω)/(1-e^-βħω)Ψ_0(ω). Applying the inverse Fourier transform to both sides and using the convolution theorem gives us β^2⟨⟨ V(t),V(t') ⟩⟩^η_0=ℱ^-1[βħω f_η(e^-βħω)/(1-e^-βħω)]∗Ψ_0(t-t'). Plugging this into (<ref>) and using the symmetry under time-reversal, Ψ_0(t)=Ψ_0(-t), we get a new form for the CGF K(η) =∫^τ_0 dt ∫^t_0 dt' λ̇_t λ̇_t' ℱ^-1[βħω f_η(e^-βħω)/(e^-βħω-1)]∗Ψ_0(t-t') =1/2∫^τ_0 dt ∫^τ_0 dt' λ̇_t λ̇_t' ℱ^-1[βħω f_η(e^-βħω)/(e^-βħω-1)]∗Ψ_0(t-t') in terms of the relaxation function. Using the definition (<ref>), a straightforward rearrangement confirms our main result (<ref>). 
§ EXAMPLES: OVERDAMPED VERSUS UNDERDAMPED In this section, we look in greater detail at two different phenomenological models for the relaxation function, given by Ψ^(1)_0(t)=Ψ^(1)_0(0)e^-γ |t| , Ψ^(2)_0(t)=Ψ^(2)_0(0)e^-γ |t|(cos(ν t)+(γ/ν)sin(ν |t|)). These models represent, respectively, overdamped and underdamped Brownian motion. Their Fourier transforms are found to be Ψ̃^(1)_0(ω)=Ψ^(1)_0(0)√(2/π)γ/γ^2+ω^2, Ψ̃^(2)_0(ω)=Ψ^(2)_0(0)√(2/π) 2γ (ν^2+γ^2)/ν^4+2ν^2(γ-ω)(γ+ω)+(γ^2+ω^2)^2, which are clearly positive as required by (<ref>), implying that we can use them as valid linear response models. We choose a linear protocol λ_t=α t/τ so that |∫^τ_0 dt e^iω tλ̇_t|^2=α^2/τ^2|∫^τ_0 dt e^iω t|^2=2α^2/τ^2ω^2(1-Cos(τω))=4α^2/τ^2ω^2Sin^2(τω/2) . For ω↦Ψ̃^(1)_0(ω), we compute the average as β⟨⟩=γα^2/2πτ^2Ψ^(1)_0(0)∫_ℝdω4 Sin^2(τω/2)/γ^2ω^2+ω^4= α^2 Ψ^(1)_0(0)/γ^2 τ^2(γτ +e^-γτ-1). Going further, the variance is given by β^2Var(W) =γα^2/πτ^2Ψ^(1)_0(0)∫_ℝdω4 Sin^2(τω/2)/γ^2 ω^2+ω^4(βħω/2)Coth(βħω/2), =4βħγ α^2/πγ^2τ^2Ψ^(1)_0(0)∫_0^∞ dω̃ Sin^2(τγω̃/2)/ω̃+ω̃^3Coth(βħγω̃/2), This integral cannot be done analytically for arbitrary β>0, so we proceed numerically. The key point to note is that the Fano factor will become independent of Ψ_0(0) and depend only on the relaxation timescale γ>0 and temperature, with β F_W(x,y)=4 x/(y +e^-y-1)×∫_0^∞ dω̃ Sin^2(y ω̃/2)/ω̃+ω̃^3Coth(x ω̃/2) now a function of dimensionless variables x:=ħβγ and y:=τγ. At high temperatures, we can use x Coth(x)=1+x^2/3+ 𝒪(x^4), to get β F_W =1+2ħ^2β^2γ^2/3∫^∞_0 dω̃ Sin^2(τγω̃/2)/ω̃^2+1+𝒪(ħ^4β^4γ^4) =1+π/6ħ^2β^2γ^2 e^γτ-1/1+(τγ-1)e^γτ+𝒪(ħ^4β^4γ^4). For our second example, the average and variance become β⟨⟩=8α^2/πγ^2τ^2Ψ^(2)_0(0)∫^∞_0 dω̃ (ν̃^2+1)Sin^2(τγω̃/2)/ω̃^2 ν̃^4+2ν̃^2(ω̃-ω̃^2)(ω̃+ω̃^2)+(ω̃+ω̃^3)^2) , β^2Var(W)=8α^2 βħγ/πγ^2τ^2Ψ^(2)_0(0)∫^∞_0 dω̃ (ν̃^2+1)ω̃ Coth(βħγω̃/2)Sin^2(τγω̃/2)/ω̃^2 ν̃^4+2ν̃^2(ω̃-ω̃^2)(ω̃+ω̃^2)+(ω̃+ω̃^3)^2) where we have set ν:=γν̃. These integrals must again be computed numerically. Since in principle, the Fano factor F_W can be measured experimentally from the work statistics <cit.> at different inverse temperatures β>0, say, in settings that simulate environments in trapped ion settings <cit.>, one may be able to observe signatures of the characteristic β↦Coth(βħγω̃/2) dependence.
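For completeness, a minimal quadrature sketch of this last numerical step, built directly from the overdamped average and variance written above (α and Ψ^(1)_0(0) cancel in the ratio; the integration cutoffs and parameter values are our own choices):

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.0

def fano_overdamped(beta, gamma, tau, alpha=1.0, Psi0=1.0):
    """beta*F_W for the overdamped model, as the ratio of the variance and average given above."""
    avg = alpha**2*Psi0/(gamma**2*tau**2)*(gamma*tau + np.exp(-gamma*tau) - 1)   # beta*<w_diss>
    integrand = lambda w: np.sin(tau*gamma*w/2)**2/(w + w**3)/np.tanh(beta*hbar*gamma*w/2)
    I, _ = quad(integrand, 1e-10, 500.0, limit=800)
    var = 4*beta*hbar*gamma*alpha**2*Psi0/(np.pi*gamma**2*tau**2)*I              # beta^2*Var(W)
    return var/(2*avg)

# scan the dimensionless temperature x = hbar*beta*gamma at fixed y = tau*gamma
for x in (0.05, 0.5, 2.0, 5.0):
    print(x, fano_overdamped(beta=x, gamma=1.0, tau=1.0))
```

In the classical regime ħβγ≪1 the printed values approach βF_W→1, while they grow at lower temperatures, consistent with the discussion in the main text.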
http://arxiv.org/abs/2307.02529v1
20230705180001
Frustrating quantum batteries
[ "Alberto Giuseppe Catalano", "Salvatore Marco Giampaolo", "Oliver Morsch", "Vittorio Giovannetti", "Fabio Franchini" ]
quant-ph
[ "quant-ph", "cond-mat.str-el" ]
Institut Ruđer Bošković, Bijenička cesta 54, 10000 Zagreb, Croatia Université de Strasbourg, 4 Rue Blaise Pascal, 67081 Strasbourg, France Institut Ruđer Bošković, Bijenička cesta 54, 10000 Zagreb, Croatia CNR-INO and Dipartimento di Fisica dell’Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy [email protected] Institut Ruđer Bošković, Bijenička cesta 54, 10000 Zagreb, Croatia RBI-ThPhys-2023-02 We propose to use a quantum spin chain as a device to store and release energy coherently (namely, a quantum battery) and we investigate the interplay between its internal correlations and outside decoherence. We employ the quantum Ising chain in a transverse field, and our charging protocol consists of a sudden global quantum quench in the external field to take the system out of equilibrium. Interactions with the environment and decoherence phenomena can dissipate part of the work that the chain can supply after being charged, measured by the ergotropy. We find that the system shows overall remarkably better performances, in terms of resilience, charging time, and energy storage, when topological frustration is introduced by setting AFM interactions with an odd number of sites and periodic boundary conditions. Moreover, we show that in a simple discharging protocol to an external spin, only the frustrated chain can transfer work and not just heat. Frustrating quantum batteries F. Franchini August 1, 2023 ================================ § INTRODUCTION In recent years, there has been a global surge of interest in harnessing quantum phenomena at the microscopic level, driven by the rapid advancement of new quantum technologies <cit.>. Within this context, an intriguing area of exploration is the study of "quantum batteries" <cit.>, which are quantum mechanical systems designed for energy storage. Quantum batteries (QBs) utilize quantum effects to achieve more efficient and rapid charging processes compared to classical systems. This burgeoning field of research encompasses numerous intriguing questions, ranging from the stabilization of stored energy <cit.> to the investigation of optimal charging protocols <cit.>. One of the first practical implementations of this type of device is the quantum Dicke battery in Ref. <cit.> where the energy from a photonic cavity mode (acting as a charger) is transferred to a battery comprising N quantum units, each described by a two-level system. Such a model exhibits a collective speed-up <cit.> in the charging process. The Dicke battery has garnered significant interest due to its versatility in various implementation platforms (e.g., superconducting qubits <cit.>, quantum dots <cit.>, coupled with a microwave resonator, Rydberg atoms in a cavity <cit.>, etc.), leading to the exploration of numerous variations of this model <cit.>. To have a practical application, QBs must however not only rapidly store energy but also be able to provide useful energy (i.e. work) once charged <cit.>. A crucial aspect of the problem is to assess the stability of these models in realistic scenarios where they are subject to environmental noise. Preliminary studies in this direction have been obtained in Refs. <cit.> where various schemes have been proposed to stabilize QBs in the presence of specific types of perturbations. In Refs. <cit.> a general theory of work extraction for noisy QB models composed by large collections of non-interacting subsystems (quantum cells) have been presented. 
The fundamental theoretical tool for this type of study is provided by the ergotropy <cit.>, a non-linear functional that gauges the maximum amount of energy that can be extracted from an assigned input state of a quantum system under reversible, i.e. unitary operations that do not alter the system entropy. The present paper contributes to the development of the theory of QBs. Here, we analyze the work extraction from QBs made of N interacting spins which, once charged, undergo complete dephasing in the energy eigenbasis of the associated Hamiltonian. More precisely, our analysis is focused on many-body models which exhibit exotic behaviors when proper Frustrated Boundary Conditions (FBCs) are imposed. A typical example is represented by a linear chain of an odd number of spins arranged in a ring geometry (i.e. with periodic boundary conditions): when classically paired with antiferromagnetic (AFM) interactions, such a system cannot realize the perfect Neel ordering, hence exhibiting topological frustration due to the presence of a ferromagnetic (FM) kink along the chain. At the quantum level, the introduction of such frustration radically modifies the structure of both the ground-state manifold <cit.> and of the low-energy spectrum <cit.>, leading to a whole set of novel behaviors <cit.> which are potentially interesting for technological applications. One important example is that while in non-frustrated models (at least far from critical points) the ground state manifold is separated by a finite energy gap from the rest of the spectrum, for the topologically frustrated systems it belongs to a band (for the ring geometry discussed above the gap closes as N^-2). As a charging mechanism we consider a simple (relatively easy to implement) global quantum quench. Moreover, we show how topological frustration enhances the robustness to decoherence of a quantum battery: while in the non-frustrated case the ergotropy of the battery can be reduced to less than 30% of its initial value by decoherence phenomena, we observe that a frustrated battery manages to retain more than 90% of the original ergotropy in all the parameter ranges analyzed. Finally, we propose a simple discharging protocol that shows how it is possible to transfer energy from a many-body quantum battery charged with our protocol to an ancillary spin. Surprisingly, we observe that only frustrated batteries can transfer work efficiently, in the form of ergotropy, while the non-frustrated battery only manages to heat up the ancillary system. The manuscript is organized as follows: in Sec. <ref> we introduce the quantum spin models and the charging protocol that we consider to realize a quantum battery, as well as introducing the role of decoherence in these systems. In Sec. <ref> we compare the performances of frustrated and non-frustrated batteries under the assumption of fast charging, i.e. considering a purely coherent charging protocol. In Sec. <ref> we drop this assumption and analyze the effect of decoherence during the charging protocol introducing a non-unitary dynamics during the quantum quench. In Sec. <ref> we present a protocol for energy transfer from a many-body quantum battery to a single ancillary spin. Finally, we discuss our results and possible developments in Sec. <ref>. 
§ THE THEORETICAL FRAMEWORK §.§ The model While the phenomenology of topological frustration has already been described in detail for more general models like the XYZ chain <cit.>, without loss of generality we will focus here on the simplest case, i.e. a ring of an odd number N of spin-1/2 particles described by the quantum Ising chain in a transverse magnetic field. The Hamiltonian of such a model is H(J,h)= J∑_{l=1}^N σ^x_l σ^x_{l+1}-h∑_{l=1}^N σ^z_l , where σ_l^α (α=x, y, z) are Pauli operators acting on the l-th spin, σ_{N+1}^α= σ_1^α enforces the periodic boundary conditions, and h is the strength of the external field. The constant J governs the nature of the couplings among the spins: its modulus |J| determines the strength of the Ising interactions and in our analysis will be fixed equal to 1, while its sign allows us to tune from a frustrated AFM system for J>0 to a non-frustrated FM one for J<0. Regardless of the sign of J, the model is analytically integrable, and a detailed solution can be found in Appendix <ref>. Thanks to this, it is possible to observe that, while some properties of the system are not affected by the presence or absence of topological frustration, others behave very differently. An example of the latter is the existence of an energy gap between the ground state manifold and the closest set of excited states in the ordered phase |h|<1. While, in the non-frustrated case, in the thermodynamic limit the two-fold ground state manifold is separated from the band of excited states by a finite energy gap Δ_FM=1-|h|, this gap disappears completely in the presence of frustration. In fact, for J=1 at finite size, the ground state is part of a band made of 2N states in which the gap between the lowest-energy elements closes according to the law Δ_AFM=(2|h|/(1-|h|)) π^2/N^2+O(N^{-4}). Hence, the frustrated AFM model presents a gap that vanishes quadratically with the system size. It is worth noting that, fixing N to an odd integer and moving towards the critical points h_c=± 1, the gap of the non-frustrated model decreases while that of the frustrated one increases. §.§ The charging protocol To store energy in a spin system such as the one described in (<ref>), i.e. to use such a system as a QB, we propose a simple protocol based on quenches of the external magnetic field, sketched in Fig. <ref>. The starting point is the ground state |ϵ_0⟩, with associated energy ϵ_0, of the Hamiltonian H_0=H(J,h_0). Such a Hamiltonian admits a set of eigenstates that we denote as {|ϵ_ℓ⟩}, ordered in such a way that the associated eigenvalues satisfy ϵ_ℓ≤ϵ_{ℓ+1}. At time t=0, we perform a sudden global quench to the Hamiltonian H_1=H(J,h_1), whose eigenstates we denote by {|μ_k⟩}, ordered in such a way that μ_k≤μ_{k+1}. The system then evolves unitarily under the action of H_1 for a time interval τ, after which the Hamiltonian is quenched back to H_0 to close the charging cycle. Note that h_1 can also be greater than |J|=1, crossing the Ising quantum critical point, and thus the charging process can also take place in a different phase before we return to h_0. In the absence of external interference, the QB at the end of the charging process is described by the vector |ψ(τ)⟩= e^{-i H_1τ} |ϵ_0⟩, and the energy stored is given by E_in=⟨ψ(τ)| H_0 |ψ(τ)⟩-ϵ_0 = ∑_ℓ P_ℓ(τ) (ϵ_ℓ - ϵ_0) , where the populations P_ℓ(τ) are P_ℓ(τ) = |∑_k e^{-i μ_k τ}⟨μ_k|ϵ_0⟩⟨ϵ_ℓ|μ_k⟩|^2 .
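As an illustration of the model and of the charging protocol, the following exact-diagonalization sketch in plain Python (with small, illustrative values of N, h_0, h_1 and τ that are not those used for the figures) builds the ring Hamiltonian, performs the quench, and evaluates the populations P_ℓ(τ) and the stored energy E_in for both signs of J:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n):
    """Single-site operator acting on `site` of an n-spin register."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else np.eye(2, dtype=complex))
    return out

def ising_ring(J, h, n):
    """H(J,h) = J sum_l sigma^x_l sigma^x_{l+1} - h sum_l sigma^z_l with periodic boundaries."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for l in range(n):
        H += J*embed(sx, l, n) @ embed(sx, (l + 1) % n, n) - h*embed(sz, l, n)
    return H

N, h0, h1, tau = 7, 0.2, 1.5, 3.0                  # odd N; J=+1 -> frustrated AFM, J=-1 -> FM

for J in (+1.0, -1.0):
    H0, H1 = ising_ring(J, h0, N), ising_ring(J, h1, N)
    eps, V0 = np.linalg.eigh(H0)                   # eigenvalues in ascending order
    mu, V1 = np.linalg.eigh(H1)
    gs = V0[:, 0]                                  # |eps_0>

    # |psi(tau)> = exp(-i H1 tau)|eps_0>, expanded on the eigenbasis of H1
    psi = V1 @ (np.exp(-1j*mu*tau)*(V1.conj().T @ gs))

    P = np.abs(V0.conj().T @ psi)**2               # populations P_l(tau)
    E_in = np.real(psi.conj() @ H0 @ psi) - eps[0] # stored energy
    print("J=%+.0f  gap=%.4f  E_in=%.4f  sum(P)=%.6f" % (J, eps[1]-eps[0], E_in, P.sum()))
```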
Thanks to the integrability of the Hamiltonian in (<ref>), it is possible to derive analytically the populations P^(0)_ℓ(τ); see Appendix <ref> for details. In Fig. <ref> we have plotted the P^(0)_ℓ(τ) for both the frustrated and the non-frustrated case for a specific choice of the system parameters. From the figure, it is possible to observe that, at the level of the populations of the eigenstates of H_0, there is no clear difference between the two cases. As a consequence, the amount of energy stored in the system is also almost the same, with small differences that vanish as the system size increases. §.§ The role of decoherence As long as the evolution of the system remains unitary, it is always possible to reverse it, hence completely recovering the stored energy E_in. However, in the presence of decoherence, the dynamics of a quantum system becomes non-unitary, and hence there is no unitary transformation that can bring the system back to its initial state, thereby reducing the amount of energy that can be extracted from it <cit.>. One of the main problems in the study of decoherence is that the results obtained are, in general, strongly dependent on the model of non-unitary dynamics taken into account, which, in turn, depends on the specifics of the experimental apparatus in which the model is tested. Since we aim to carry out a theoretical analysis as independent as possible of the specific experimental realization, we consider, as a source of decoherence, a purely dephasing dynamics such as the one induced by the master equation proposed by Milburn <cit.>: ρ̇(t)=-i [H(t),ρ(t)]-(1/(2ν))[H(t),[H(t),ρ(t)]]. Here ρ(t) and H(t) are the instantaneous system density matrix and Hamiltonian, ρ̇(t) is the time derivative of the density matrix, and ν parametrizes the characteristic decoherence rate of the model. Taking into account the charging process that we have introduced, and hence the time dependence of the Hamiltonian, the second term on the r.h.s. of (<ref>) implies that all the off-diagonal terms of the density matrix are exponentially suppressed in the energy eigenbasis, with a characteristic decoherence time τ_{k,l}≈2ν/Δ E_{k,l}^2, where Δ E_{k,l} is the energy difference between the states |ϵ_k⟩ and |ϵ_l⟩ (or |μ_k⟩ and |μ_l⟩ when we consider a slow charging process). At this point, it is important to note that the global quench H_0↔ H_1 preserves all the symmetries of the Hamiltonian, most importantly the translational and the parity symmetry. Therefore, since the initial state |ϵ_0⟩ is an eigenstate of the momentum operator with zero momentum and fixed parity, the states |ϵ_ℓ⟩ with P_ℓ(τ)≠ 0 are also eigenstates with the same parity as the ground state and vanishing momentum <cit.>. Each of these states is associated with a different energy and hence, for such states, Δ E_{k,l} always differs from zero. As a consequence, after a sufficiently long time, due to the effect of the non-unitary dynamics, the state of the QB will be well approximated by a diagonal density matrix with zero coherence in the eigenbasis of the Hamiltonian. To understand how large the characteristic dephasing times of the system are, note from Fig. <ref> that the states with a non-vanishing population belong to different energy bands.
Hence, if the off-diagonal term ρ_k,l is generated from two states coming from two different bands, the timescale of its exponential suppression will be proportional to the fast decoherence time τ_1, which, by dimensional analysis, we expect of the order of ν/(J-h)^2 and independent of N. On the contrary, if the two states |ϵ_k⟩ and |ϵ_l⟩ belong to the same energy band, such timescale will be related to the slow decoherence time to τ_2≫τ_1, which depends on the average intraband level energy spacing between occupied levels and turns out to be proportional to the system's size N. The existence of two different timescales in the non-unitary dynamics induced by eq. (<ref>) can be appreciated by looking at Fig. <ref> in which, we have depicted the behavior of the relative entropy of coherence for the state ρ(t), i.e. the C_RE(ρ(t)) <cit.>, and provided an estimate of τ_1,2 for some parameter choice. The relative entropy of coherence is defined as C_RE(ρ(t))=S(ρ_D(t))-S(ρ(t)) where S(x)=-∑_i λ_i logλ_i is the von Neumann entropy of the density matrix x with eigenvalues {λ_i} and ρ_D(t) is the diagonal matrix obtained by ρ(t) artificially suppressing all the off-diagonal element in a given basis (the Hamiltonian eigenbasis in our case). From the plot, it is easy to see the existence of two very different time scales. The estimation of these times allows for identifying different operating regimes for the QB. Since, ideally, a QB should be able to store energy for a long time, we have decided that, in this article, we will focus on the worst-case scenario, i.e. one where you try to extract work after a time T ≫τ_2 has passed since the end of the charging process and we leave a detailed analysis of the time-scales τ_1,2 and of the behaviors for intermediate times for further works. In this long-time scenario, the decoherence leads to the complete collapse of the QB density matrix into a diagonal ensemble in the system's energy eigenbasis. On the other hand, regarding the charging process, we will specifically examine two distinct charging regimes. The first of these is the so-called fast-charging regime, in which the charging time τ is considered to be much faster than that of any decoherence time τ≪τ_1. As a consequence, the charging process can be considered a unitary process. On the other hand, the slow-charging regime, in which τ_1 and τ are comparable, a partial dephasing occurs also during the charging process. § BATTERIES IN THE FAST-CHARGING REGIME In the fast-charging regime (i.e. for τ_1 ≫τ) we can neglect the effect of the dephasing during the charging process. Under this hypothesis, the asymptotic state of the QB which emerges from Eq. (<ref>) at time T≫τ_2 corresponds to the completely incoherent (in the basis of the eigenstates of H_0) diagonal density matrix state ρ(T) = ∑_ℓ P_ℓ(τ) |ϵ_ℓ⟩⟨ϵ_ℓ| , where P_ℓ(τ) are the populations defined in Eq. (<ref>). Following the prescription of <cit.> the lowest energy state that can be reached with (reversible) unitary process acting on the density matrix ρ̃(T) is its passive counterpart ρ̃(T) =∑_ℓP̃_ℓ(τ) |ϵ_ℓ⟩⟨ϵ_ℓ| , where P̃_ℓ(τ) are the eigenvalues of ρ_D rearranged in decreasing order (P̃_ℓ(τ) ≥P̃_ℓ+1(τ)). The energy we can recover from the system via unitary operations can then be computed in terms of the system ergotropy, i.e. the difference between the input energy of ρ_D, and the mean energy of ρ̃_D, W = (ρ(T) H_0)-(ρ̃_D H_0) = ∑_ℓ (P_ℓ(τ) - P̃_ℓ(τ)) (ϵ_ℓ-ϵ_0) = E_in-∑_ℓP̃_ℓ(τ) (ϵ_ℓ-ϵ_0)=E_in-E_loss . 
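A minimal sketch of this bookkeeping, with toy populations standing in for the P_ℓ(τ) produced by the charging protocol (the numbers below are made up for illustration only):

```python
import numpy as np

def ergotropy_from_populations(P, eps):
    """W and E_in for the dephased state sum_l P_l |eps_l><eps_l| (eps in ascending order).
    The passive counterpart pairs the largest population with the lowest energy level."""
    P, eps = np.asarray(P, float), np.asarray(eps, float)
    P_passive = np.sort(P)[::-1]
    E_in = np.sum(P*(eps - eps[0]))
    E_loss = np.sum(P_passive*(eps - eps[0]))
    return E_in - E_loss, E_in

# toy populations over five levels
P_toy = np.array([0.55, 0.10, 0.20, 0.10, 0.05])
eps_toy = np.array([0.0, 0.30, 0.35, 0.40, 1.20])
W, E_in = ergotropy_from_populations(P_toy, eps_toy)
print("W =", W, " E_in =", E_in, " eta = W/E_in =", W/E_in)
```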
The quantity E_loss=∑_ℓP̃_ℓ(τ) (ϵ_ℓ-ϵ_0) represents the amount of energy that we cannot extract any more from the battery. Since E_loss is a positive quantity, we have that, due to the non-unitary dynamics acting after the end of the charge phase, the work W that we can extract from the battery is less than the energy E_in stored in it. To quantify how robust the QB is towards decoherence, we compute the ratio between the amount of work we can extract from it at time T≫τ_2 and the energy initially stored in the QB, i.e. η=W/E_in. The results obtained with a semi-analytical approach, see App. <ref>, are shown in Fig. <ref>. The results are obtained by maximizing η throughout the charging time τ in the interval [0,50] in the unit of 1/J. In the top panel, we depict the behavior of η for a fixed value of Δ h=h_1-h_0 as a function of h_0, while in the bottom one, we plot the result obtained keeping h_0 fixed and changing Δ h. As we can see, in the top panel, well below the critical point h_0=1, the frustrated AFM battery is very resilient to the decoherence processes and, for a wide range of h_0, η is close to 1 and well above 0.9. On the contrary, in the same range of parameters, the loss in the work extraction capability for an FM non-frustrated QB can go up to 80%. Moreover, in the bottom panel, the value of η for the frustrated battery is strikingly close to 1 in the whole range, while being considerably smaller for its non-frustrated counterpart. To understand the difference between the frustrated and non-frustrated systems, we have to consider the different characteristics of their energy spectrum. In the magnetically ordered phase of non-frustrated systems such as the one we are considering, the energy spectrum is characterized by two quasi-degenerate states separated from the first band of excited states by a finite energy gap. Conversely, in frustrated systems, the ground state belongs to a band made of 2N states whose width is related to the value of the external field. By comparing these behaviors, taking into account the definition of E_loss, it is easy to explain the different behavior. Indeed, in the case of non-topologically frustrated models, already the third term of the summation in the definition of E_loss provides a non-negligible contribution to the loss of extractable energy and, likewise, all the others that follow. Conversely, in frustrated systems, the contribution of the first 2N terms to E_loss, since all states belong to the same band, is small and decreases as |h_0| decreases. This greatly reduces the loss of energy that can be extracted from the battery and, consequently, increases η. However, when |h_0| increases, the bandwidth of the frustrated model increases, reducing the value of W and hence of η while the gap of the ferromagnetic model narrows, resulting in an increment of η. These two different dependences on |h_0| explain why, close to the quantum critical point, the performance of the two systems becomes comparable. Moreover, since the number of states in the first band of the frustrated systems is proportional to the size of the system itself, it is natural to expect that the effect of reduction of the relative weight of E_loss increases with N. This expected behavior is confirmed by the results shown in Fig. <ref>. In varying the system size, the value of η of the frustrated system remains approximately constant while it goes down with the system size for the non-frustrated model. 
A further parameter that is useful in characterising the performance of the QB is the time needed to complete the charging process. Since for our model the amount of energy stored in the system is not a monotonic function of the duration of the charging process, we define the charging time as the time at which E_in, as a function of time, reaches its first local maximum. This choice is quite natural, considering the need to keep the charging time as short as possible. In Fig. <ref> we show the results of this analysis, obtained by varying Δ h for a fixed value of h_0. From the figure, we observe that, regardless of the presence or absence of topological frustration in the system, the charging time generally decreases with increasing Δ h, while the energy stored in the system increases. This fact gives rise to a virtuous circle in which the time required for this storage decreases as the energy stored by the system increases. Moreover, for frustrated systems, the ratio η always remains greater than 0.8 and significantly higher than that of the non-frustrated counterpart. This means that by increasing the jump in the external magnetic field it is possible to charge the battery more, faster, and with higher resistance to decoherence. While this behavior holds for both frustrated and non-frustrated QBs, the data show that the performance of the former is always better than that of the latter. § BATTERIES IN THE SLOW-CHARGING REGIME The results presented so far were obtained under the hypothesis that the charging protocol is so fast that decoherence effects during it can be completely neglected. However, such a hypothesis is quite strong, and an analysis of what happens when the charging process is affected by decoherence is mandatory. Therefore, in this section, we study the performance of our QB model in the slow-charging regime, where the fast decoherence time τ_1 and the charging time τ are comparable. To this end, we numerically integrate Eq. (<ref>) during the charging process up to time τ. Even if, in this regime, the analysis is more complex, the basic concepts are the same as in the previous section, and we find that, waiting for a time T≫τ_2 after the end of the charging process, the state of the QB reduces to a completely incoherent state of the form ρ(T) ≃∑_ℓ P'_ℓ(τ) |ϵ_ℓ⟩⟨ϵ_ℓ| , where the populations P'_ℓ(τ) are P'_ℓ(τ) = ∑_{k,k'}⟨ϵ_ℓ| μ_k⟩⟨μ_k|ϵ_0⟩⟨ϵ_0| μ_{k'}⟩⟨μ_{k'}|ϵ_ℓ⟩ exp[-(μ_k-μ_{k'})^2 τ/(2ν) -i (μ_k-μ_{k'})τ ] (see Appendix <ref> for details), which correctly reduces to Eq. (<ref>) when all the exponential decays can be neglected. Note that, although the derivation of Eq. (<ref>) in <cit.> is not valid in the ν→ 0 limit, we can take this limit of fast dephasing by instantaneously removing all off-diagonal contributions. Similarly to what was done in the previous section, we have compared the performance of the frustrated and non-frustrated QB models using the ratio η and the charging speed. We show the outcomes of the analysis in Fig. <ref>. For several values of the decoherence frequency ν, we charged the battery as a function of Δ h, maximizing over time the robustness parameter η=W/E_in.
As expected, by decreasing the decoherence frequency ν, hence making the decoherence stronger and faster in destroying the coherence of the QB state, η decrease but does not disappear completely even in the limiting case ν=0 where all the off-diagonal elements of the density matrix are instantaneously suppressed soon after the quench of the external field from h_0 → h_1. However, once again, the frustrated battery shows a higher robustness with respect to its non-frustrated counterpart. Even in the limiting case of ν=0 (dashed line), the value of η is above 0.9 for a frustrated battery, while it drops below 0.5 for the non-frustrated one. These drops in η are more pronounced for larger values of the quench jump Δ h. Indeed, for smaller quenches, the most populated state is still the ground state. Hence, at least in principle, one could still try to get as close as possible to the initial state when discharging the battery. However, this becomes more difficult when increasing Δ h, as the number of excited states that are macroscopically populated increases, and therefore the loss of quantum coherence has a stronger impact on the value of the ergotropy and hence of η. The decoherence also affects the charging time, making the charging slower for both the frustrated and the non-frustrated batteries. Therefore, we extend the analysis carried out in the previous section also to the case of the slow charging process. The results in Fig. <ref> confirm the fact that the charging processes for both frustrated and non-frustrated systems are comparable, but, the virtuous circle that we have seen in the fast charging regime has disappeared. Indeed, while in the fast charging regime by increasing the quench amplitude Δ h we would increase the ergotropy of the battery and observe a reduction of the charging time τ, in the slow charging regime it is still true that the ergotropy increases with the quench amplitude, but instead the charging time tends to increase, reducing the perfomances of the quantum battery. § DISCHARGING THE BATTERY Up to now, we were mainly focused on the ergotropy of the system and how much it could be affected by the presence of an unavoidable non-unitary dynamic that continues to act even after the end of the charging process. However, such a quantity represents an upper bound of energy that can be extracted from a QB, which is hard to approach when this last is represented by a many-body system. Therefore, in this section, we have decided to analyze a more realistic situation. We take into account a situation in which a QB, after ending the charging process and waiting a time T≫τ_2 such that its state can be considered completely incoherent, is made to interact with an ancillary two-level system. The Hamiltonian of the total system will therefore read as H_W=J∑_k=1^N σ^x_kσ^x_k+1-h∑_k=1^Nσ^z_k + λ H_int + ωσ_S^z, where {σ_k^α}_k=1^N and {σ_S^α} (α=x,y,z) are respectively the spin operators of the k-th site of the QB and the ancillary spin, while ω is the characteristic energy of the ancillary spin. To simulate a realistic condition, we consider that only one of the spins of the battery directly interacts with the ancillary system. Moreover, for the same reason, we do not try to optimize the kind of interaction between the QB and the ancillary system, which can give rise to extremely hard-to-simulate interactions, but we directly take into account a realistic one as the hopping term. 
In accordance with these assumptions, H_int reads H_int=σ^+_1 σ^-_S+σ^-_1 σ^+_S, where σ^±_1=σ^x_1± i σ^y_1 and σ^±_S=σ^x_S± i σ^y_S, and the strength of the interaction is parametrized by λ. Therefore, the ancillary spin interacts with a single spin of the battery. The interaction that we have chosen breaks both the translational invariance and the parity symmetry of the battery, increasing the number of states accessible during the discharging process. Moreover, it is experimentally realizable in Rydberg atom systems <cit.>. Before going further, let us underline that, ideally, one would want an interaction term that commutes with the rest of the combined battery/system Hamiltonian. However, since our battery is a many-body system whose eigenstates are delocalized, this would require an interaction term that couples to the battery as a whole. Such an interaction, even if theoretically achievable, would be unrealistic from an experimental point of view. In our simulation, we consider that, at time zero, the battery and the ancillary system are brought into contact by turning on the interaction term H_int. Before this, the two systems are prepared separately. As far as the QB is concerned, we first charge it with the unitary protocol, stopping at the time of the first peak in η, for h_0=0.018 and h_1=1.5, and then let it relax to the diagonal ensemble. The ancillary system, instead, is initialized in its lowest energy state ρ_S, i.e. the spin-down configuration, for ω=2|J|=2. This value of ω is chosen in such a way as to resonate with the band gaps of the battery spectrum, which for h_0 close to the classical point are proportional to J. The strength of the interaction between the spin and the battery was chosen small enough that the interaction does not carry too much energy into the system, but strong enough to allow for energy transfer. We established numerically that λ=0.02 is a good compromise that ensures that no appreciable energy is absorbed or released by the interaction term. Soon after t=0, the global system, initialized in the product state ρ_B⊗ρ_S, is allowed to evolve under the action of the global Hamiltonian H_W, and energy starts to flow from the QB to the ancillary system. As in any system, when some energy is provided to the ancillary spin, a part of it will be stored as work, while the rest will be dissipated as heat. Hence, a critical point is whether and how much of this energy can be seen as work performed by the QB on the ancillary spin. One way to answer this question is to analyze the ratio κ between the ergotropy W_S acquired by the ancillary spin (which is initialized in a zero-ergotropy state, i.e. its ground state) and its maximal ergotropy, i.e. κ=W_S/(2ω). In other words, κ measures the fraction of its maximal ergotropy that the ancillary spin can later use to perform work. The results obtained through exact diagonalization for κ, for both the frustrated and non-frustrated battery, are shown in Fig. <ref>, for different values of Δ h and of the size of the chain. The plot shows that none of the energy transferred from the non-frustrated battery is translated into ergotropy for the ancillary spin. On the contrary, the frustrated battery manages to charge the ancillary spin up to 42% of its maximal ergotropy. This percentage decreases with increasing size of the battery, due to the very local nature of the interaction between the battery and the ancillary spin. 
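As an illustration of how the ratio κ can be evaluated in practice, the following sketch computes the ergotropy of the ancillary spin from its reduced density matrix and the Hamiltonian ω σ^z_S. It is a generic implementation of the standard ergotropy formula; the 2×2 matrix rho_s used here is a placeholder and not a state produced by the actual simulation.

import numpy as np

def ergotropy(rho, h):
    """Ergotropy W = tr(rho h) - tr(sigma_rho h), where sigma_rho is the passive
    state obtained by pairing decreasing populations with increasing energies."""
    energies, _ = np.linalg.eigh(h)              # ascending energies
    populations = np.linalg.eigvalsh(rho)[::-1]  # descending populations
    passive_energy = float(np.dot(populations, energies))
    return float(np.real(np.trace(rho @ h))) - passive_energy

omega = 2.0                                  # ancilla energy scale, 2|J| in the text
sz = np.diag([1.0, -1.0])
h_s = omega * sz                             # ancilla Hamiltonian, omega * sigma^z_S
# placeholder reduced state of the ancilla after the discharging dynamics
rho_s = np.array([[0.7, 0.2], [0.2, 0.3]])
w_s = ergotropy(rho_s, h_s)
kappa = w_s / (2 * omega)                    # kappa = W_S / (2 omega)
print(w_s, kappa)

In an actual run, rho_s would be the partial trace of the evolved battery-plus-ancilla state over the battery degrees of freedom, and κ would be tracked as a function of time.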
§ CONCLUSION AND DISCUSSION OF RESULTS We propose a quantum battery based on a quantum many-body system, namely the quantum Ising chain, designing a cyclic charging protocol based on a global quench in the external magnetic field to store energy in the battery. We used different figures of merit to characterize the efficiency of such devices. In every case, we observed that the frustrated batteries present a very strong resistance to decoherence effects. This remarkable result is related to the fact that the ground state of the frustrated system belongs to a gapless band, allowing for a more efficient energy extraction with respect to the non-frustrated models, where the presence of a finite gap between the ground state and the rest of the spectrum increases the energy of the final equilibrium state of the battery, therefore reducing the fraction of energy which is possible to extract. Thus, our results show that topologically frustrated systems can represent a much more efficient option for the realization of a quantum battery with respect to their non-frustrated counterparts. For the ergotropy, we tested the stability of these results by varying the parameters governing the model (N, h_0) and the charging protocol (Δ h). In all the measured ranges we always observe a higher robustness to decoherence for the frustrated model. In the range of parameters that we considered, the charging time of the frustrated and non-frustrated batteries are comparable, even though for some values of the parameter we have observed shorter charging times for the non-frustrated one. However, even when the non-frustrated battery is charged a bit faster, the frustrated battery still possesses a higher ergotropy and a larger value of η. We expect these results to be valid even after increasing the system size. Moving towards the thermodynamic limit, one might expect that the value of η might decrease since the system will start populating states in higher energy bands (this has to be better investigated). However, at the same time, the density of the states within a band will increase as the gaps tend to close as N^-2, and for the frustrated system, the degeneracy will always be larger than for the non-frustrated one. Therefore, also in the thermodynamic limit we expect the frustrated model to be more robust to decoherence than the non-frustrated one. Moreover, we analyzed what happens in a discharging process in which we connect an ancillary spin to the battery. We defined a protocol that allows energy transfer from the battery to the spin and measured the level of charge acquired by the spin, measured as the fraction of its maximal ergotropy κ. The results show that the energy transferred from the non-frustrated battery is not translated into ergotropy for the ancillary spin, while, within the considered parameters, the frustrated battery charges the spin up to 42% of the maximal ergotropy, which is 2ω. Therefore, once again we find that the performance of the frustrated battery, due to internal correlations of the chain, overcomes its non-frustrated counterpart. As a final remark, we would like to point out that spin models such as the 1D quantum Ising chain can be experimentally realized with Rydberg atoms. In the currently available experimental platforms, typical values of the couplings are J≈ħ· 672 MHz, h≈ħ· 25 MHz, and the system can be stabilized for times of the order of 20-70 μ s. 
The fastest decoherence time scale of the system can be estimated as the time it takes for the oscillating coherences to average out, i.e. τ_d≈ħ / (JΔ E), where Δ E is the largest dimensionless energy difference between the energy eigenstates populated after the quantum quench for the Ising chain described by the dimensionless Hamiltonian H(1,h̃/J). Since Δ E is of order unity, the typical decoherence time will be of the order of a nanosecond. For the parameters mentioned above, we would have τ_d≈ 1.4 ns. Since this time is considerably smaller than the time for which the system can be stabilized, decoherence effects might become relevant for a quantum battery realized on these platforms. Therefore, topologically frustrated quantum batteries, because of their high robustness to decoherence, might represent a valid alternative for the realization of these quantum devices. AGC and OM acknowledge support from the MOQS ITN programme, a European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement number 955479. SMG and FF acknowledge support from the QuantiXLie Center of Excellence, a project co-financed by the Croatian Government and the European Union through the European Regional Development Fund – the Competitiveness and Cohesion (Grant KK.01.1.1.01.0004). FF and SMG also acknowledge support from the Croatian Science Foundation (HrZZ) Projects No. IP–2019–4–3321. OM acknowledges support from the Julian Schwinger Foundation. VG and OM acknowledge financial support by MUR (Ministero dell’ Università e della Ricerca) through the PNRR MUR project PE0000023-NQSTI. § SOLUTION OF THE ISING MODEL: FRUSTRATED VS NON-FRUSTRATED CASE It is well known that the Ising model in equation (<ref>) can be diagonalized exactly. For the sake of simplicity, we limit our analysis to the case with 0≤ h < 1, but our results can be easily extended also to the other regions of parameter space. We will consider the case of an odd number of spins N and periodic boundary conditions, i.e. we apply frustrated boundary conditions (FBC). In such a way we can easily switch from the frustrated to the non-frustrated model by setting J=1 or J=-1 respectively. In particular, in the first case, in the presence of an antiferromagnetic interaction between nearest-neighbor spins, frustration has been proven to produce non-trivial modifications to the ground-state properties of the model. The standard procedure prescribes a mapping of spin operators into fermionic ones <cit.>, which are defined by the Jordan-Wigner transformation: σ^-_j=∏_l<jσ^z_lψ_j^†, σ^+_j=∏_l<jσ^z_lψ_j, σ^z_j=1-2ψ_j^†ψ_j, where ψ_j (ψ^†_j) are fermionic annihilation (creation) operators. In terms of such operators, taking into account the periodic boundary conditions and discarding constant terms, the Hamiltonian thus becomes H = J∑_j=1^N-1[ψ^†_j+1ψ_j+ψ^†_jψ_j+1+ψ^†_jψ^†_j+1+ψ_j+1ψ_j ] + 2h∑_j=1^Nψ^†_jψ_j - JΠ^z [ψ^†_1ψ_N+ψ^†_Nψ_1+ψ^†_Nψ^†_1+ψ_1ψ_N]. The latter expression is not quadratic itself, but reduces to a quadratic form in each of the parity sectors of Π^z. Therefore, it is convenient to write it in the form H=(1+Π^z)/2 H^+ (1+Π^z)/2 + (1-Π^z)/2 H^- (1-Π^z)/2, where both H^± are quadratic. Hence we can bring the Hamiltonian into a free fermion form by means of two final steps. First, we perform a Fourier transform ψ_q=e^-iπ/4/√(N)∑_j=1^N e^-i qjψ_j. 
It is worth noting that, due to the different quantization conditions, H^± are defined on two different sets of fermionic modes, respectively q∈Γ^-={2π n/N}_n=0^N-1 in the odd sector and q∈Γ^+={2π/N(n+1/2)}_n=0^N-1 in the even one. Finally, a Bogoliubov rotation in Fourier space b_q=cosθ_q ψ_q + sinθ_q ψ^†_-q, with momentum-dependent Bogoliubov angles θ_q=(1/2)arctan[sin q/(h+Jcos q)] for q≠0,π, and θ_0,π=0, leads to the Hamiltonians H^- = ∑_q∈Γ^-∖{0}Λ(q)(b^†_q b_q-1/2) +ϵ(0)(b^†_0 b_0-1/2), H^+ = ∑_q∈Γ^+∖{π}Λ(q)(b^†_q b_q-1/2) +ϵ(π)(b^†_π b_π-1/2). Here b_q (b^†_q) are the Bogoliubov annihilation (creation) fermionic operators. The dispersion relation Λ(q) for q≠0, π obeys Λ(q)=√((h+Jcos q)^2+sin^2q), while for the two specific modes q=0∈Γ^- and q=π∈Γ^+ we have ϵ(0)=h+J, ϵ(π)=h-J. It is important to observe that all fermionic modes are associated with a positive energy, except for the 0 and π modes, which carry negative energy for J=-1 and J=1 respectively, since we have chosen h<1. Let us start by considering the FM case J=-1. In this case, for 0≤ h < 1, the only mode that carries negative energy is the 0-mode in the odd sector, while all the modes in the even parity sector carry positive energy. Therefore, in this case the ground state is given by the state with the 0-mode populated, b^†_0|∅^-⟩, in the odd parity sector. One can observe that the vacuum states in each sector can be written in terms of the fermionic states |0,1⟩_q and of the Bogoliubov angles θ_q, and read |∅^+⟩ = |0_π⟩⊗_q∈Γ_2^+(cosθ_q|0⟩_q|0⟩_-q-sinθ_q|1⟩_q|1⟩_-q) |∅^-⟩ = |0_0⟩⊗_q∈Γ_2^-(cosθ_q|0⟩_q|0⟩_-q-sinθ_q|1⟩_q|1⟩_-q) where Γ^+_2 (Γ^-_2) is the subset of momenta q ∈Γ^+ ( q ∈Γ^-) that live in the interval q∈(0,π). One can explicitly check that the ground-state energy is given by E_0^-=-1/2∑_q∈Γ^-Λ(q). Let us now focus on the analysis of the AFM model obtained for J=1. In this case we observe that the π-mode is the only one that can be characterized by a negative excitation energy. However, the π-mode exists only in the even parity sector, where the addition of a single excitation is forbidden by the parity constraint. For this reason, the lowest energy state of the even sector in this region is still its Bogoliubov vacuum |∅^+⟩. In the odd sector, states with a single excitation are allowed, but all fermionic modes carry positive energy, and it is easy to check that each state that can be defined in this sector has an energy greater than the one associated with |∅^+⟩. Despite this, the lowest admissible states in this sector, those with one occupied mode with momentum closest to π (exactly π is not possible because of the quantization rule of this sector), have an energy gap with respect to |∅^+⟩ that closes as 1/N^2. The ground-state energy is given by E_0^+=Λ(π)-1/2∑_q∈Γ^+Λ(q). § PROJECTION COEFFICIENTS AFTER A GLOBAL QUENCH In this section we compute analytically the projection coefficients after a global quench from a Hamiltonian H_0≡ H(J,h_0) to H_1≡ H(J,h_1). Since the fermionic structure of the states is the same, the results hold both for the non-frustrated (FM) and frustrated (AFM) case. The initial state before the quench is considered to be the ground state of H_0. In the FM (J=-1) case this can be given by |G_0^+⟩=|∅^+⟩_0, or |G_0^-⟩=b^†_0|∅^-⟩_0=|0⟩_0, depending on the parity sector, while in the AFM (J=1) case the ground state is always in the even sector and we have |G_0^+⟩= |∅^+⟩_0. 
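As a compact numerical illustration of the sector structure derived above, the following sketch evaluates the dispersion Λ(q) on the momentum sets Γ^± and the ground-state energies E_0^∓ for the non-frustrated (J=-1) and frustrated (J=+1) chains. It only re-implements the closed-form expressions given in the text; the values of N and h are placeholders chosen for the example.

import numpy as np

def spectrum_data(J, h, N):
    # momentum sets of the odd (Gamma^-) and even (Gamma^+) parity sectors
    gamma_minus = 2 * np.pi * np.arange(N) / N
    gamma_plus = 2 * np.pi * (np.arange(N) + 0.5) / N
    lam = lambda q: np.sqrt((h + J * np.cos(q)) ** 2 + np.sin(q) ** 2)
    eps0 = h + J          # energy of the q = 0 mode (odd sector)
    eps_pi = h - J        # energy of the q = pi mode (even sector)
    return gamma_minus, gamma_plus, lam, eps0, eps_pi

N, h = 9, 0.4             # placeholder odd chain length and field, 0 <= h < 1
# FM / non-frustrated chain (J = -1): ground state b^dag_0 |vac^->, odd sector
gm, gp, lam, e0, epi = spectrum_data(-1.0, h, N)
E0_minus = -0.5 * lam(gm).sum()
# AFM / frustrated chain (J = +1): ground state |vac^+>, even sector
gm, gp, lam, e0, epi = spectrum_data(+1.0, h, N)
E0_plus = lam(np.pi) - 0.5 * lam(gp).sum()
print(E0_minus, E0_plus)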
In both situations, since the global quench in the magnetic field preserves the translational invariance and parity of the model, after the quench the initial state will have non-vanishing projection only onto those eigenstates of H_1 with its same parity and momentum, i.e. states with zero momentum. Moreover, since all of the eigenstates are constructed by addition of quasi-particles with a certain quasi-momentum q to a fermionic vacuum, it turns out that the projections will be non-zero only onto those states where excitations are added in pairs with opposite momenta, i.e. by applying the operator b^†_q b^†_-q to the ground state. Using simple combinatorics, one can hence easily see that in a system with N spins, starting from the initial states |∅^+⟩ or |0⟩, the number of states with non-zero projections will be M=∑_l=0^(N-1)/2 ((N-1)/2 choose l) = 2^(N-1)/2. The projections can explicitly be computed by evaluating scalar products between different states. These are easily evaluated when the states are expressed in the fermion basis rather than in the Bogoliubov one as in (<ref>), since the fermionic operators are independent of the parameters of the Hamiltonian, which only enter the Bogoliubov angles. Using the notation |∅_k⟩=cosθ_k|0⟩_k|0⟩_-k-sinθ_k|1⟩_k|1⟩_-k, we also have that b^†_k b^†_-k|∅_k⟩=sinθ_k|0⟩_k|0⟩_-k+cosθ_k|1⟩_k|1⟩_-k. Therefore, because of the selection rules imposed by the global quench, we have only four possibilities for the scalar products after the quench: ⟨∅_k^(1)|∅_k^(0)⟩=cosΔ_k, ⟨∅_k^(1)|b^†(0)_k b^† (0)_-k|∅_k^(0)⟩=-sinΔ_k, ⟨∅_k^(1)|b^†(1)_-k b^† (1)_k|∅_k^(0)⟩=sinΔ_k, ⟨∅_k^(1)|b^†(1)_-k b^† (1)_kb^†(0)_k b^† (0)_-k|∅_k^(0)⟩=cosΔ_k, where Δ_k=θ^(1)_k-θ^(0)_k. Finally, we can introduce the notation |P_0⟩=∏_p∈ P_0 b^†_p b^†_-p|G_0⟩ to describe a generic zero-momentum state, P_0 being a subset of Γ^+∖{π} or Γ^-∖{0} depending on the parity sector. With this in mind, the projection coefficient that we are looking for takes the form ⟨Q_1|P_0⟩=∏_k_1cosΔ_k_1∏_k_2cosΔ_k_2∏_k_3(-sinΔ_k_3)∏_k_4sinΔ_k_4, where k_1∈Γ∖ (Q_1∪ P_0 ∪{0,π}), k_2 ∈ Q_1∩ P_0, k_3 ∈ P_0∖ Q_1, and k_4∈ Q_1∖ P_0. These coefficients, for appropriate choices of the quasi-momenta, correspond to the ⟨ϵ_k|μ_ℓ⟩ appearing in Eq. (<ref>). Therefore, knowing them allows us to compute the populations P_ℓ. § FORMAL INTEGRATION OF EQ. (<REF>) In the case where the system Hamiltonian H is time independent, the master equation (<ref>) admits an analytical integration <cit.>. To see this, let us write H as H = ∑_ϵΠ_ϵϵ , where the ϵ are the eigenvalues of this operator, and {Π_ϵ}_ϵ is the set of orthogonal projectors which decompose the Hilbert space of the system into the associated energy eigenspaces. Exploiting the fact that ∑_ϵΠ_ϵ = 𝟙, Π_ϵΠ_ϵ'=δ_ϵ,ϵ'Π_ϵ, one can then verify that an explicit solution of (<ref>) is provided by ρ(t) = Φ^(H)_t [ρ(0)] = ∑_ϵ,ϵ'Π_ϵρ(0) Π_ϵ' e^-(ϵ -ϵ')^2 t/(2ν) - i (ϵ -ϵ')t , where Φ^(H)_t is the dynamical superoperator <cit.> Φ^(H)_t [⋯] = ∑_ϵ,ϵ'Π_ϵ⋯Π_ϵ' e^-(ϵ -ϵ')^2 t/(2ν) - i (ϵ -ϵ')t . Notice that for t≫τ_2, where τ_2 is the long dephasing time identified in the main text, such an evolution induces complete suppression of the off-diagonal terms that involve superpositions associated with energy eigenvectors of different eigenvalues, i.e. Φ^(H)_t [⋯]|_t≫τ_2 ⟶ D^(H) [⋯] = ∑_ϵΠ_ϵ⋯Π_ϵ . For the model we are considering, H is equal to H_1 for t ∈ ]0,τ[ and to H_0 for t≥τ. Accordingly, we can write ρ(t) = Φ_t^(H_1) [ρ(0)] for all t ∈ ]0,τ[, and ρ(t) = Φ_t-τ^(H_0)[ Φ_τ^(H_1)[ρ(0)]] for all t≥τ, 
which for t=T≥τ such that T≫τ_2, leads to ρ(T) ≃ D^(H_0)[ Φ_τ^(H_1)[ρ(0)]] , with D^(H_0) the dephasing map (<ref>) of H_0. Equation (<ref>) finally follows from (<ref>) observing that under the assumption the the initial state of the QB is the ground state of H_0, then all the eigenspaces involved in the writing of both of Φ_τ^(H_1) and Φ_t-τ^(H_0) only involves eigenspaces with zero momentum which turn out to be non-degenerate (i.e. their associated projectors are all rank one). 99 Acin2018 A. Acín, et al., New Journal of Physics, 20, 080201, (2018). Riedel2017 M. F. Riedel, D. Binosi, R. Thew, and T. Calarco, Quantum Science and Technology, 2, 030501, (2017). Alicki2013 R. Alicki and M. Fannes, Phys. Rev. E 87, 042123 (2013). Binder2015 F.C. Binder, S. Vinjanampathy, K. Modi, and J. Goold, New J. Phys. 17, 075015 (2015). Campaioli2017 F. Campaioli, F.A. Pollock, F.C. Binder, L. Céleri, J. Goold, S. Vinjanampathy, and K. Modi, Phys. Rev. Lett. 118, 150601 (2017). Ferraro2018 D. Ferraro, M. Campisi, G.M. Andolina, V. Pellegrini, and M. Polini, Phys. Rev. Lett. 120, 117702 (2018). Andolina2018 G. M. Andolina, D. Farina, A. Mari, V. Pellegrini, V. Giovannetti, and M. Polini, Phys. Rev. B, vol. 98, 205423 (2018). Farina2019 D. Farina, G. M. Andolina, A. Mari, M. Polini, and V. Giovannetti, Phys. Rev. B, 99, 035421 (2019). Andolina2019b G.M. Andolina, M. Keck, A. Mari, M. Campisi, V. Giovannetti, and M. Polini, Phys. Rev. Lett. 122, 047702 (2019). Hovh2013 K. V. Hovhannisyan, M. Perarnau-Llobet, M. Huber, and A.Acín, Phys. Rev. Lett., 111, 240401 (2013). Ghera2020 S. Gherardini, F. Campaioli, F. Caruso, and F. C. Binder, Physical Review Research, 2, 013095 (2020). Rosa2020 D. Rosa, D. Rossini, G. M. Andolina, M. Polini, and M. Carrega, J. High E. Physics 2020, 67 (2020). Tirone2021 S. Tirone, R. Salvia, and V. Giovannetti, Phys. Rev. Lett. 127, 210601 (2021). Tirone2022 S. Tirone, R. Salvia, S. Chessa, and V. Giovannetti, arXiv:2211.02685 [quant-ph]. Tirone2023a S. Tirone, R. Salvia, S. Chessa, and V. Giovannetti, arXiv:2304.01270 [quant-ph]. Tirone2023b S. Tirone, R. Salvia, S. Chessa, and V. Giovannetti, arXiv:2305.16803 [quant-ph]. Rodri2022 R. R. Rodriguez, B. Ahmadi, G. Suarez, P. Mazurek, S. Barzanjeh, and P. Horodecki, Eprint Arxive: quant-ph2207.00094, (2022). Pirmo2019 F. Pirmoradian and K. Mølmer, Physical Review A 100, 43833 (2019). Mazzoncini2022 F. Mazzoncini, V. Cavina, G. M. Andolina, P. A. Erdmann, and V. Giovannetti, Phys. Rev. A 107, 032218 (2022). Erdman2022 P. A. Erdmann, G. M. Andolina, V. Giovannetti, and F. Noé, arXiv:2212.12397 [quant-ph]. Andolina2019a G.M. Andolina, M. Keck, A. Mari, V. Giovannetti, and M. Polini, Phys. Rev. B 99, 205437 (2019). Wang2020 Z. Wang, et al., Phys. Rev. Lett. 124, 013601 (2020). Stock2017 A. Stockklauser, P. Scarlino, J.V. Koski, S. Gasparinetti, C.K. Andersen, C. Reichl, W. Wegscheider, T. Ihn, K. Ensslin, and A. Wallraff, Phys. Rev. X 7, 011030 (2017). Samk2018 N. Samkharadze, G. Zheng, N. Kalhor, D. Brousse, A. Sammak, U.C. Mendes, A. Blais, G. Scappucci, and L.M.K. Vandersypen, Science 359, 1123-1127 (2018). Haroche2012 S. Haroche, Rev. Mod. Phys. 85, 1083 (2013). Zhang2019 Y.-Y. Zhang, T.-R. Yang, L. Fu, and X. Wang, Phys. Rev. E 99, 052106 (2019). Crescente2020a A. Crescente, M. Carrega, M. Sassetti, and D. Ferraro, New J. Phys. 22, 063057 (2020). Crescente2020b A. Crescente, M. Carrega, M. Sassetti, and D. Ferraro, Phys. Rev. B 102, 245407 (2020). Crescente2020c A. Crescente, D. Ferraro, M. Carrega, and M. Sassetti, Phys. Rev. 
Research 4, 033216 (2022). Dou2022a F.-Q. Dou, Y.-Q. Lu, Y.-J. Wang, and J.-A. Sun, Phys. Rev. B 105, 115405 (2022). Dou2022b F.-Q. Dou, H. Zhou, and J.-A. Sun, Phys. Rev. A 106, 032212 (2022). Zhao2022 F. Zhao, F.-Q. Dou, and Q. Zhao, Phys. Rev. Research 4, 013172 (2022). Rossini2019 D. Rossini, G. M. Andolina, and M. Polini, Phys. Rev. B, 100, 115142 (2019). Rossini2020 D. Rossini, G. M. Andolina, D. Rosa, M. Carrega, and M. Polini, Phys. Rev. Lett. 125, 236402 (2020). Quach2022 J.Q. Quach, K.E. McGhee, L. Ganzer, D.M. Rouse, B.W. Lovett, E.M. Gauger, J. Keeling, G. Cerullo, D.G. Lidzey, and T. Virgili, Science Advances 8, eabk3160 (2022). Monsel2020 J. Monsel, M. Fellous-Asiani, B. Huard, and A. Auff`eves, Phys. Rev. Lett. 124, 130601 (2020). Maffei2021 M. Maffei, P.A. Camati, and A. Auff`eves, Phys. Rev. Research 3, L032073 (2021). Oppe2002 J. Oppenheim, M. Horodecki, P, Horodecki, and R. Horodecki, Phys. Rev. Lett. 89, 180402 (2002). Carrega2020 M. Carrega, A. Crescente, D. Ferraro, and M. Sassetti, New J. Phys. 22, 083085 (2020). Bai2020 S.-Y. Bai and J.-H. An, Phys. Rev. A 102, 060201 (2020). Tabesh2020 F. T. Tabesh, F. H. Kamin, and S. Salimi, Phys. Rev. A 102, 052223 (2020). Ghosh2021 S. Ghosh, T. Chanda, S. Mal, and A. Sen(De), Phys. Rev. A 104, 032207 (2021). Santos2021 A. C. Santos, Phys. Rev. E 103, 042118 (2021). Zaka2021 S. Zakavati, F. T. Tabesh, and S. Salimi, Phys. Rev. E 104, 054117 (2021). Landi2021 G. T. Landi, Entropy 23, 10.3390/e23121627 (2021). Morrone2023 D. Morrone, M. A. C. Rossi, A. Smirne, and M. G. Genoni, Quantum Sci. Technol. 8, 035007 (2023). Sen2023 K. Sen and U. Sen, arXiv:2302.07166 [quant-ph] Liu2019 J. Liu, D. Segal, and G. Hanna, The Journal of Physical Chemistry C 123, 18303 (2019). Santos2019 A. C. Santos, B. Cakmak, S. Campbell, and N. T. Zinner, Phys. Rev. E 100, 032107 (2019). Quach2020 J. Q. Quach and W. J. Munro, Phys. Rev. App. 14, 024092 (2020). Santos2020 A. C. Santos, A. Saguia, and M. S. Sarandy, Phys. Rev. E 101, 062114 (2020). Liu2021 J. Liu and D. Segal, arXiv:2104.06522 [quant-ph]. Arjmandi2022 M. B. Arjmandi., H. Mohammadi, and A. C. Santos, Phys. Rev. E 105, 054115 (2022). Alla2004 A.E. Allahverdyan, R. Balian, and T.M. Nieuwenhuize, Europhys. Lett. 67, 565 (2004). Niedenzu2019 W. Niedenzu, M. Huber, and E. Boukobza, Quantum 3, 195 (2019). Maric2020_induced V. Marić, S. M. Giampaolo and F. Franchini, "Quantum phase transition induced by topological frustration", Communications Physics 3, 220 (2020). Maric2020_destroy V. Marić, S. M. Giampaolo and F. Franchini, "The frustration of being odd: how boundary conditions can destroy local order", New Journal of Physics, 22, 083024 (2020). Catalano2022 A. G. Catalano, D. Brtan, F. Franchini and S. M. Giampaolo, "Simulating continuous symmetry models with discrete ones", Physical Review B, 106, 125145 (2022). Maric2022_fate V. Marić, S. M. Giampaolo, and Fabio Franchini, "Fate of local order in topologically frustrated spin chains", Physical Review B 105, 064408 (2022). Giampaolo2019 S. M. Giampaolo, F. B. Ramos and F. Franchini, "The Frustration of being Odd: Universal area law violation in local systems", Journal of Physics Communication 3, 081001 (2019). Maric2022_nature V. Marić, G. Torre, F. Franchini, and S. M. Giampaolo, "Topological Frustration can modify the nature of a Quantum Phase Transition", SciPost Physics 12, 075 (2022). Torre2022 G. Torre, V. Marić, D. Kuić, F. Franchini, and S. M. 
Giampaolo, "Odd thermodynamic limit for the Loschmidt echo", Physical Review B, 105, 184424 (2022) Odavic2022 J. Odavić, T. Haug, G. Torre, A. Hamma, F. Franchini, and S. M. Giampaolo, "Complexity of frustration: a new source of non-local non-stabilizerness", arXiv:2209.10541 (2022). ZENO B. Misra, and E.C.G. Sudarshan, J. Math. Phys. 18, 756 (1977). Open F. Petruccione and H.-P. Breuer "The Theory of Open Quantum Systems", (Oxford University Press, USA, 2007). Milburn1991 G. J. Milburn, "Intrinsic decoherence in quantum mechanics", Physical Review A, 44, 5401 (1991) Baumgratz2014 T. Baumgratz, M. Cramer, and M., B. Plenio, "Quantifying coherence", Physics Review Letters 113, 140401 (2014). Barredo2015 D. Barredo, H. Labuhn, S. Ravets, T. Lahaye, A. Browaeys, and C. S. Adams, "Coherent Excitation Transfer in a Spin Chain of Three Rydberg Atoms", Physiscal Review Letters, 114, 113002 (2015) Breuer H. P. Breuer, F. Petruccione, The theory of open quantum systems, Oxford University Press (2007).
http://arxiv.org/abs/2307.04816v1
20230701035032
Q-YOLO: Efficient Inference for Real-time Object Detection
[ "Mingze Wang", "Huixin Sun", "Jun Shi", "Xuhui Liu", "Baochang Zhang", "Xianbin Cao" ]
cs.CV
[ "cs.CV" ]
Beihang University, Beijing, China {wmz20000729,sunhuixin,ShiJun2020,1332671326,bczhang,xbcao}@buaa.edu.cn Q-YOLO: Efficient Inference for Real-time Object Detection † Equal contribution. ⋆ Corresponding author. Mingze Wang ^† Huixin Sun ^†Jun Shi ^† Xuhui Liu Baochang Zhang Xianbin Cao^⋆ ======================================================================================================== Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead. § INTRODUCTION Real-time object detection is a crucial component in various computer vision applications, such as multi-object tracking <cit.>, autonomous driving <cit.>, and robotics <cit.>. The development of real-time object detectors, particularly YOLO-based detectors, has yielded remarkable performance in terms of accuracy and speed. For example, YOLOv7-E6 <cit.> object detector achieves 55.9% mAP on COCO 2017, outperforming both transformer-based detector SWINL Cascade-Mask R-CNN <cit.> and convolutional based detector ConvNeXt-XL Cascade-Mask R-CNN <cit.> in both speed and accuracy. Despite their success, the computational cost during inference remains a challenge for real-time object detectors on resource-limited edge devices, such as mobile CPUs or GPUs, limiting their practical usage. Substantial efforts on network compression have been made towards efficient online inference <cit.>. Methods include enhancing network designs <cit.>, conducting network search <cit.>, network pruning <cit.>, and network quantization <cit.>. Quantization, in particular, has gained significant popularity for deployment on AI chips by representing a network using low-bit formats. There are two prevailing quantization methods, Quantization-Aware Training (QAT) <cit.> and Post-Training Quantization (PTQ) <cit.>. Although QAT generally achieves better results than PTQ, it requires training and optimization of all model parameters during the quantization process. The need for pretraining data and significant GPU resources makes QAT challenging to execute. On the other hand, PTQ is a more efficient approach for quantizing real-time object detectors. To examine low-bit quantization for real-time object detection, we first establish a PTQ baseline using YOLOv5 <cit.>, a state-of-the-art object detector. 
Through empirical analysis on the COCO 2017 dataset, we observe notable performance degradation after quantization, as indicated in Table <ref>. For example, a 4-bit quantized YOLOv5s employing Percentile achieves only 7.0% mAP, resulting in a performance gap of 30.4% compared to the original real-valued model. We find the performance drop of quantized YOLOs can be attributed to the activation distribution imbalance. As shown in Fig. <ref>, we observe high concentration of values close to the lower bound and the significant decrease in occurrences above zero. When employing fixed truncation values such as MinMax, representing activation values with extremely low probabilities would consume a considerable number of bits within the limited integer bit width, resulting in further loss of information. In light of the above issue, we introduce Q-YOLO, a fully end-to-end PTQ quantization architecture for real-time object detection, as depicted in Fig. <ref>. Q-YOLO quantizes the backbone, neck, and head modules of YOLO models, while employing standard MinMax quantization for weights. To tackle the problem of activation distribution imbalance, we introduce a novel approach called Unilateral Histogram-based (UH) activation quantization. UH iteratively determines the maximum truncation value that minimizes the quantization error through histograms. This technique significantly reduces calibration time and effectively addresses the discrepancy caused by quantization, optimizing the quantization process to maintain stable activation quantization. By mitigating information loss in activation quantization, our method ensures accurate object detection results, thereby enabling precise and reliable low-bit real-time object detection performance. Our contributions can be summarized as follows: * We introduce a fully end-to-end PTQ quantization architecture specifically designed for real-time object detection, dubbed as Q-YOLO. * A Unilateral Histogram-based (UH) activation quantization method is proposed to leverage histogram analysis to find the maximum truncation values, which can effectively minimize the MSE quantization error. * Through extensive experiments on various object detectors, we demonstrate that Q-YOLO outperforms baseline PTQ models by a significant margin. The 8-bit Q-YOLO model applied on YOLOv7 achieves a 3× acceleration while maintaining performance comparable to its full-precision counterpart on COCO, highlighting its potential as a general solution for quantizing real-time object detectors. § RELATED WORK §.§ Quantization Quantized neural networks are based on low-bit weights and activations to accelerate model inference and save memory. The commonly used model quantization methods include quantization-aware training (QAT) and post-training quantization (PTQ). In QAT, Zhang et al. <cit.> builds a binarized convolutional neural network based on a projection function and a new updated rule during the backpropagation. Li et al. <cit.> proposed an information rectification module and distribution-guided distillation to push the bit-width in a quantized vision transformer. TTQ <cit.> uses two real-valued scaling coefficients to quantize the weights to ternary values. Zhuang et al. <cit.> present a low-bit (2-4 bit) quantization scheme using a two-stage approach to alternately quantize the weights and activations, providing an optimal trade-off among memory, efficiency, and performance. 
In <cit.>, the quantization intervals are parameterized, and optimal values are obtained by directly minimizing the task loss of the network. ZeroQ <cit.> supports uniform and mixed-precision quantization by optimizing for a distilled dataset which is engineered to match the statistics of the batch normalization across different network layers. <cit.> enabled accurate approximation for tensor values that have bell-shaped distributions with long tails and found the entire range by minimizing the quantization error.While QAT often requires high-level expert knowledge and huge GPU resources for training or fine-tuning, especially the large-scale pre-trained model. To reduce the above costs of quantization, PTQ, which is training-free, has received more widespread attention and lots of excellent works arise. MinMax, EMA <cit.> methods are commonly used to compress or reduce the weights of the PTQ model. MinMax normalizes the weights and bias values in the model to a predefined range, such as [-1, 1], to reduce the storage space and increase the inference speed. MSE quantization involves evaluating and adjusting the quantized activation values to minimize the impact of quantization on model performance. §.§ Real-time Object Detection Deep Learning based object detectors can be generally classified into two categories: two-stage and single-stage object detectors. Two-stage detectors, such as Faster R-CNN <cit.>, RPN <cit.>, and Cascade R-CNN <cit.>, first generate region proposals and then refine them in a second stage. On the other hand, single-stage object detectors have gained significant popularity in real-time object detection due to their efficiency and effectiveness. These detectors aim to predict object bounding boxes and class labels in a single pass of the neural network, eliminating the need for time-consuming region proposal generation. One of the pioneering single-shot detectors is YOLO <cit.>, which divides the input image into a grid and assigns bounding boxes and class probabilities to predefined anchor boxes. The subsequent versions, YOLOv2 <cit.> and YOLOv3 <cit.>, introduced improvements in terms of network architecture and feature extraction, achieving better accuracy without compromising real-time performance. Another influential single-shot detector is SSD <cit.>, which employs a series of convolutional layers at different scales to detect objects of various sizes. By using feature maps at multiple resolutions, SSD achieves high accuracy while maintaining real-time performance. Variants of SSD, such as MobileNet-SSD <cit.> and Pelee <cit.>, further optimize the architecture to achieve faster inference on resource-constrained devices. Efficiency is a critical aspect of real-time object detection, especially for deployment on computationally limited platforms. MobileNet<cit.> and its subsequent variants, such as MobileNetV2<cit.> and MobileNetV3 <cit.>, have received significant attention for their lightweight architectures. These networks utilize depth-wise separable convolutions and other techniques to reduce the number of parameters and operations without significant accuracy degradation. ShuffleNet<cit.> introduces channel shuffling operations to exploit group convolutions, enabling a trade-off between model size and computational cost. ShuffleNetV2<cit.> further improves the efficiency by introducing a more efficient block design and exploring different network scales. § METHODOLOGY §.§ Preliminaries §.§.§ Network Quantization Process. 
We first review the main steps of the Post-Training Quantization (PTQ) process and provide the relevant details. Firstly, the network is either trained or provided as a pre-trained model using full precision and floating-point arithmetic for weights and activations. Subsequently, numerical representations of weights and activations are suitably transformed for quantization. Finally, the fully-quantized network is deployed either on integer arithmetic hardware or simulated on GPUs, enabling efficient inference with reduced memory storage and computational requirements while maintaining reasonable accuracy levels. §.§.§ Uniform Quantization. Assuming the quantization bit-width is b, the quantizer Q(𝐱|b) can be formulated as a function that maps a floating-point number 𝐱∈ℝ to the nearest quantization bin: Q(𝐱|b): ℝ→𝐱̂, with 𝐱̂∈{-2^(b-1),⋯ ,2^(b-1)-1} in the signed case and 𝐱̂∈{0,⋯ ,2^b-1} in the unsigned case. There are various quantizers Q(𝐱|b), among which the uniform one <cit.> is typically used. Uniform quantization is well supported on most hardware platforms. Its unsigned quantizer Q(𝐱|b) can be defined as: Q(𝐱|b)=clip(⌊𝐱/s_𝐱⌉+zp_𝐱, 0, 2^b-1), where s_𝐱 (scale) and zp_𝐱 (zero-point) are quantization parameters. In Eq. <ref>, u (upper) and l (lower) define the quantization grid limits: s_𝐱= (u-l)/(2^b-1), zp_𝐱=clip(⌊-l/s_𝐱⌉, 0, 2^b-1). The dequantization process can be formulated as: 𝐱=(𝐱̂-zp_𝐱) × s_𝐱. §.§ Quantization Range Setting Quantization range setting is the process of establishing the upper and lower clipping thresholds, denoted as u and l respectively, of the quantization grid. The crucial trade-off in range setting lies in the balance between two types of errors: clipping error and rounding error. Clipping error arises when data is truncated to fit within the predefined grid limits, as described in Eq. <ref>. Such truncation leads to information loss and a decrease in precision in the resulting quantized representation. On the other hand, rounding error occurs due to the imprecision introduced during the rounding operation, as described in Eq. <ref>. This error can accumulate and affect the overall accuracy of the quantized representation. The following methods provide different trade-offs between the two quantities. §.§.§ MinMax. In the experiments, we use the MinMax method for weight quantization, where the clipping thresholds l_𝐱 and u_𝐱 are formulated as: l_𝐱= min(𝐱), u_𝐱=max(𝐱). This leads to no clipping error. However, this approach is sensitive to outliers, as strong outliers may cause excessive rounding errors. §.§.§ Mean Squared Error (MSE). One way to mitigate the problem of large outliers is by employing MSE-based range setting. In this method, we determine the l_𝐱 and u_𝐱 that minimize the mean squared error (MSE) between the original and quantized tensor: (l_𝐱, u_𝐱) = arg min_l_𝐱, u_𝐱 MSE(𝐱, 𝐐_l_𝐱, u_𝐱), where 𝐱 represents the original tensor and 𝐐_l_𝐱, u_𝐱 denotes the quantized tensor produced using the determined clipping thresholds l_𝐱 and u_𝐱. The optimization problem is commonly solved using grid search, the golden section method, or analytical approximations with a closed-form solution. §.§ Unilateral Histogram-based (UH) Activation Quantization To address the issue of activation value imbalance, we propose a new approach called Unilateral Histogram-based (UH) activation quantization. We first provide an empirical study of the activation values after forward propagation through the calibration dataset. 
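Before turning to this empirical study, the following sketch illustrates the uniform quantizer and a simple grid-search variant of the MSE-based range setting described above. It is a schematic NumPy illustration rather than the Q-YOLO implementation, and the grid of candidate thresholds is an arbitrary choice made for the example.

import numpy as np

def quantize(x, scale, zero_point, bits=8):
    # Q(x|b) = clip(round(x / s) + zp, 0, 2^b - 1)   (unsigned, asymmetric)
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 2 ** bits - 1)

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

def qparams(lower, upper, bits=8):
    scale = (upper - lower) / (2 ** bits - 1)
    zero_point = np.clip(np.round(-lower / scale), 0, 2 ** bits - 1)
    return scale, zero_point

def mse_range(x, bits=8, n_grid=100):
    """Grid-search sketch of MSE-based range setting: shrink the MinMax range by
    a common fraction and keep the clipping thresholds with the lowest MSE."""
    lo, hi = float(x.min()), float(x.max())
    best = (np.inf, lo, hi)
    for frac in np.linspace(0.5, 1.0, n_grid):
        l, u = lo * frac, hi * frac
        s, zp = qparams(l, u, bits)
        err = np.mean((dequantize(quantize(x, s, zp, bits), s, zp) - x) ** 2)
        best = min(best, (err, l, u))
    return best[1], best[2]

x = np.random.default_rng(0).normal(size=10_000).astype(np.float32)
l, u = mse_range(x, bits=4)
s, zp = qparams(l, u, bits=4)
x_hat = dequantize(quantize(x, s, zp, bits=4), s, zp)
print(l, u, float(np.mean((x_hat - x) ** 2)))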
As depicted in Figure <ref>, we observe a concentrated distribution of values near the lower bound, accompanied by a noticeable decrease in occurrences above zero. Further analysis of the activation values reveals that the empirical value of -0.2785 serves as the lower bound. This phenomenon can be attributed to the frequent utilization of the Swish (SiLU) activation function in the YOLO series. Based on the empirical evidence, we introduce an asymmetric quantization approach called Unilateral Histogram-based (UH) activation quantization. In UH, we iteratively determine the maximum truncation value that minimizes the quantization error, while keeping the minimum truncation value fixed at -0.2785, as illustrated in the following: u_𝐱 = arg min_u_𝐱 MSE(𝐱, 𝐐_l_𝐱, u_𝐱), with l_𝐱=-0.2785. To evaluate the quantization error during the search for the maximum truncation value, we utilize the fp32 floating-point numbers derived from the center values of the gathered 2048 bins, as introduced in Algorithm <ref>. These numbers are successively quantized using the maximum truncation value currently under consideration. Through this iterative process, we identify the optimal truncation range. The UH activation quantization method offers two key advantages. Firstly, it significantly reduces calibration time. Secondly, it ensures stable activation quantization by allowing a larger set of integers to represent the frequently occurring activation values between -0.2785 and 0, thereby improving quantization accuracy. § EXPERIMENTS In order to assess the performance of the proposed Q-YOLO detectors, we conducted a comprehensive series of experiments on the widely recognized COCO 2017 <cit.> detection benchmark. As one of the most popular object detection datasets, COCO 2017 <cit.> has become instrumental in benchmarking state-of-the-art object detectors, thanks to its rich annotations and challenging scenarios. Throughout our experimental analysis, we employed standard COCO metrics on the bounding box detection task to evaluate the efficacy of our approach. §.§ Implementation Details We randomly selected 1500 training images from the COCO train2017 dataset <cit.> as the calibration data, which serve as the basis for determining the quantization parameters. Additionally, the performance evaluation took place on the COCO val2017 dataset <cit.>, comprising 5000 images. The image size is set to 640x640. In our experiments, unless otherwise noted, we employed symmetric channel-wise quantization for weights and asymmetric layer-wise quantization for activations. To ensure a fair and unbiased comparison, we consistently applied the MinMax approach for quantizing weights. The input and output layers of the model are more sensitive to quantization-induced accuracy loss; in order to maintain the overall performance of the model, the original precision of these layers is usually retained, and we also follow this practice. §.§ Main results We apply our proposed Q-YOLO to quantize YOLOv5s <cit.>, YOLOv5m <cit.>, YOLOv7 <cit.> and YOLOv7x <cit.>, which have an increasing number of parameters. The results of the full-precision model, as well as the 8-bit and 4-bit quantized models using MinMax, Percentile, and Q-YOLO methods, are all presented in Table <ref>. Table <ref> lists the comparison of several quantization approaches and detection methods in terms of computational complexity and storage cost. 
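Returning briefly to the UH scheme introduced above, a minimal sketch of the histogram-based search is given below before the results are discussed in more detail. The 2048-bin histogram and the fixed lower bound of -0.2785 follow the text, whereas the candidate set of upper truncation values and the binned MSE proxy are simplifying assumptions and not necessarily identical to Algorithm <ref>.

import numpy as np

def uh_activation_range(samples, bits=8, n_bins=2048, lower=-0.2785):
    """Sketch of Unilateral Histogram-based (UH) range setting: the lower
    truncation value is fixed at -0.2785 (Swish/SiLU lower bound) and the upper
    truncation value is searched over histogram bin centers, minimising an MSE
    proxy computed on the binned calibration statistics."""
    hist, edges = np.histogram(samples, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    levels = 2 ** bits - 1
    best_u, best_err = None, np.inf
    # candidate upper truncation values: bin centers above zero (assumption)
    for u in centers[centers > 0]:
        scale = (u - lower) / levels
        zp = np.clip(np.round(-lower / scale), 0, levels)
        q = np.clip(np.round(centers / scale) + zp, 0, levels)
        deq = (q - zp) * scale
        err = np.sum(hist * (deq - centers) ** 2) / hist.sum()
        if err < best_err:
            best_u, best_err = u, err
    return lower, float(best_u)

# placeholder calibration activations with a SiLU-like, right-skewed profile
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(0.0, 1.0, 100_000), -0.2785)
print(uh_activation_range(acts, bits=4))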
Similarly, in terms of detection accuracy, when using Q-YOLO to quantize the YOLOv5 series models to 8 bits, there is virtually no decline in the average precision (AP) value compared to the full-precision model. As the number of model parameters increases dramatically, quantizing the YOLOv7 series models to 8 bits results in an extremely slight decrease in accuracy. When quantizing models to 4 bits, the accuracy experiences a significant loss due to the reduced expressiveness of 4-bit integer representation. Particularly, when using the MinMax quantization method, the model loses all its accuracy; whereas the Percentile method, which roughly truncates 99.99% of the extreme values, fails to bring notable improvement. Differently, Q-YOLO successfully identifies a more appropriate scale for quantization, resulting in a considerable enhancement compared to conventional Post-Training Quantization (PTQ) methods. §.§ Ablation Study §.§.§ Symmetry in Activation Quantization. Nowadays, quantization schemes are often subject to hardware limitations; for instance, NVIDIA<cit.> only supports symmetric quantization, as it is more inference-speed friendly. Therefore, discussing the symmetry in activation value quantization is meaningful. Table. <ref> presents a comparison of results using Q-YOLO for symmetric and asymmetric quantization, with the latter exhibiting higher accuracy. The range of negative activation values lies between 0 and -0.2785, while the range of positive activation values exceeds that of the negative ones. If we force equal integer expression bit numbers on both positive and negative sides, the accuracy will naturally decrease. Moreover, this decline becomes more pronounced as the quantization bit number decreases. §.§.§ Quantization Type. In Table. <ref>, we analyze the impact of different quantization types on the performance of the YOLOv5s and YOLOv5m models, considering three cases: quantizing only the weights (only weights), quantizing only the activation values (only activation), and quantizing both weights and activation values (weights+activation). The results demonstrate that, compared to quantizing the activation values, quantizing the weights consistently induces larger performance degradation. Additionally, the lower the number of bits, the greater the loss incurred by quantization. In YOLO, the weights learned by a neural network essentially represent the knowledge acquired by the network, making the precision of the weights crucial for model performance. In contrast, activation values serve as intermediate representations of input data propagating through the network, and can tolerate some degree of quantization error to a certain extent. §.§ Inference speed To practically verify the acceleration benefits brought about by our quantization scheme, we conducted inference speed tests on both GPU and CPU platforms. For the GPU, we selected the commonly used desktop GPU NVIDIA RTX 4090 <cit.> and the NVIDIA Tesla T4 <cit.> , often used in computing centers for inference tasks. Due to our limited CPU resources, we only tested Intel products, the i7-12700H and i9-10900, both of which have x86 architecture. For deployment tools, we chose TensorRT <cit.> and OpenVINO <cit.>. The entire process involved converting the weights from the torch framework into an ONNX model with QDQ nodes and then deploying them onto specific inference frameworks. The inference mode was set to single-image serial inference, with an image size of 640x640. 
As most current inference frameworks only support symmetric and 8-bit quantization, we had to choose a symmetric 8-bit quantization scheme, which resulted in an extremely small decrease in accuracy compared to asymmetric schemes. As shown in Table <ref>, the acceleration is extremely significant, especially for the larger YOLOv7 model, for which the speedup ratio when using a GPU even exceeded 3× compared to the full-precision model. This demonstrates that applying quantization to real-time detectors can bring about a remarkable acceleration. § CONCLUSIONS Real-time object detection is crucial in various computer vision applications. However, deploying object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper introduces Q-YOLO, a highly efficient one-stage detector built using a low-bit quantization method to address the performance degradation caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO employs a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme. Extensive experiments conducted on the COCO dataset demonstrate the effectiveness of Q-YOLO. It outperforms other PTQ methods while achieving a favorable balance between accuracy and computational cost. This research significantly contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.
http://arxiv.org/abs/2307.01648v2
20230704110313
On the structural and combinatorial properties in 2-swap word permutation graphs
[ "Duncan Adamson", "Nathan Flaherty", "Igor Potapov", "Paul G. Spirakis" ]
math.CO
[ "math.CO", "cs.DM", "cs.DS", "G.2.1; G.2.2" ]
In this paper, we study the graph induced by the 2-swap permutation on words with a fixed Parikh vector. A 2-swap is defined as a pair of positions s = (i, j) where the word w induced by the swap s on v is v[1] v[2] … v[i - 1] v[j] v[i+1] … v[j - 1] v[i] v[j + 1] … v[n]. With these permutations, we define the Configuration Graph G(P) over a given Parikh vector. Each vertex in G(P) corresponds to a unique word with the Parikh vector P, with an edge between any pair of words v and w if there exists a swap s such that v ∘ s = w. We provide several key combinatorial properties of this graph, including the exact diameter of this graph, the clique number of the graph, and the relationships between subgraphs within this graph. Additionally, we show that for every vertex in the graph, there exists a Hamiltonian path starting at this vertex. Finally, we provide an algorithm enumerating these paths from a given input word of length n with a delay of at most O(log n) between outputting edges, requiring O(n log n) preprocessing. § INTRODUCTION In information theory and computer science, there are several well-known edit distances between strings which are based on insertions, deletions and substitutions of single characters or various permutations of several characters, including swaps of adjacent or non-adjacent characters, reverse operations, shuffling etc <cit.>. These operations are well motivated by problems in physical science, for example, the biological swaps which occur at a gene level are non-adjacent swap operations of two symbols (mutation swap operator) representing gene mutations <cit.>. In recent work on Crystal Structure Prediction the swap operation on a pair of symbols in a given word representing layers of atomic structures was used to generate new permutations of those layers, with the aim of exploring the configuration space of crystal structures <cit.>. In computer science string-to-string correction <cit.> has been studied for adjacent swaps <cit.> and also in the context of sorting networks <cit.>, motion on graphs and diameter of permutation groups <cit.>. In group theory, the distance between two permutations (the Cayley distance) measures the minimum number of transpositions of elements needed to turn one into the other <cit.>. In general, a configuration graph is a graph where words (also known as strings) are represented by vertices and operations by edges between the strings. For example, one may define the operations as the standard suite of edits (insertions, deletions, and substitutions), with each edge corresponding to a pair of words at an edit distance of one. 
In such a graph, the distance between any pair of words corresponds to the edit distance between these words. In this paper, we study the structural properties of such graphs defined by swap operations of two symbols on a given word (2-swap permutations), a permutation defined by a pair of indices (i, j) and changing a word w by substituting the symbol at position i with that at position j, and the symbol at position j with that at position i. As the number of occurrences of each symbol in a given word can not be changed under this operation, we restrict our work to only those words with a given Parikh vector[ The Parikh vector of a word w denotes a vector with the number of occurrences of the symbols in the word w. The standard permutation can be seen as a permutation of the word with all distinct symbols. ]. We focus on studying several fundamental properties of the structure of these graphs, most notably the diameter, clique number, number of cliques, and the Hamiltonicity of the graph. Similar problems have been heavily studied for Cayley graphs <cit.>, permutation graphs <cit.>. It has been conjectured that the diameter of the symmetric group of degree n is polynomially bounded in n, where only recently the exponential upper bound <cit.> was replaced by a quasipolynomial upper bound <cit.>. The diameter problem has additionally been studied with respect to a random pair of generators for symmetric groups <cit.>. In general finding the diameter of a Cayley graph of a permutation group is NP-hard and to find the distance between two permutations in directed Cayley graphs of permutation groups is PSPACE-hard <cit.>. To develop efficient exploration strategies of these graphs it is essential to investigate structural and combinatorial properties. As mentioned above the problem is motivated by problems arising in chemistry regarding Crystal Structure Prediction (CSP) which is computationally intractable in general <cit.>. In current tools <cit.>, chemists rely on representing crystal structures as a multiset of discrete blocks, with optimisation performed via a series of permutations, corresponding to swapping blocks operations. Understanding reachability properties under the swap operations can help to evaluate and improve various heuristic space exploration tools and extend related combinatorial toolbox <cit.>. We provide several key combinatorial properties of the graph defined by 2-swap permutations over a given word, including the exact diameter of this graph, the clique number of the graph, and the relationships between subgraphs within this graph. First, we show that this graph is locally isomorphic, that is, the subgraph of radius r centred on any pair of vertices w and u are isomorphic. We strengthen this by providing an exact diameter on the graph for any given Parikh vector. Finally, we show that, for every vertex v in the graph, there is a Hamiltonian path starting at v. We build upon this by providing a novel algorithm for enumerating the Hamiltonian path starting at any given vertex v in a binary graph with at most O(log n) delay between outputting the swaps corresponding to the transitions made in the graph. Our enumeration results correlate well with the existing work on the enumeration of words. This includes work on explicitly outputting each word with linear delay <cit.>, or outputting an implicit representation of each word with either constant or logarithmic delay relative to the length of the words <cit.>. 
The surveys <cit.> provide a comprehensive overview of a wide range of enumeration results. § PRELIMINARIES Let ℕ = {1, 2, …} denote the set of natural numbers, and ℕ_0 = ℕ∪{ 0 }. We denote by [n] the set {1, 2, …, n} and by [i, n] the set {i, i + 1, …, n}, for all i, n ∈ℕ_0, i ≤ n. An alphabet Σ is an ordered, finite set of symbols. We tacitly assume that the alphabet Σ = [σ] = {1, 2, …, σ}, where σ = |Σ|. We treat each symbol in Σ both as a symbol and as a numeric value, i.e. i ∈Σ represents both the symbol i and the integer i. A word is a finite sequence of symbols from a given alphabet. The length of a word w, denoted | w |, is the number of symbols in the sequence. The notation Σ^n denotes the set of n-length words defined over the alphabet Σ, and the notation Σ^* denotes the set of all words defined over Σ. For i ∈ [| w |], the notation w[i] is used to denote the i^th symbol in w, and for the pair i, j ∈ [| w |], w[i, j] is used to denote the sequence w[i] w[i + 1] … w[j]; such a sequence is called a factor of w. We abuse this notation by defining, for any pair i, j ∈ [| w |] such that j < i, w[i, j] = ε, where ε denotes the empty string. Given a word w ∈Σ^n and a pair i, j ∈ [n], i < j, such that w[i] ≠ w[j], the 2-swap of w by (i, j), denoted w ∘ (i, j), returns the word w[1, i - 1] w[j] w[i + 1, j - 1] w[i] w[j + 1, n]. Given the word w = 11221122 and pair (2, 7), w ∘ (2, 7) = 12221112. Given a word w ∈Σ^n, the Parikh vector<cit.> of w, denoted P(w), is the σ-length vector such that the i^th entry of P(w) contains the number of occurrences of symbol i in w, formally, for i ∈ [σ], P(w)[i] = |{j ∈ [n] | w[j] = i }|, where n=| w|. For example, the word w = 11221122 has Parikh vector (4,4). The set of words with a given Parikh vector P over the alphabet Σ is denoted Σ^P, formally Σ^P = {w ∈Σ^* | P(w) = P }. It is notable that |Σ^P | = n!/∏_i ∈ [σ]P[i]!, where n = ∑_i ∈ [σ] P[i]. For a given alphabet Σ and Parikh vector P, the configuration graph of Σ^P is the undirected graph G(P) = {V(P), E(P) } where: * V(P) = {v_w | w ∈Σ^P}. * E(P) = {{v_w, v_u}∈ V(P) × V(P) |∃ i,j ∈ [n] s.t. w ∘ (i, j) = u}. Informally, the configuration graph for a given Parikh vector P is the graph with each vertex corresponding to some word in Σ^P, and each edge connecting every pair of words w, u ∈Σ^P such that there exists some 2-swap transforming w into u. Figure <ref> provides an example of the configuration graph when P=(3,2). A path (also called a walk) in a graph is an ordered set of edges such that the second vertex in the i^th edge is the first vertex in the (i + 1)^th edge, i.e. p = {(v_1, v_2), (v_2, v_3), …, (v_| p |, v_| p | + 1)}. Note that a path of length i visits i + 1 vertices. A path p visits a vertex v if there exists some edge e ∈ p such that v ∈ e. A cycle (also called a circuit) is a path such that the first vertex visited is the same as the last. A Hamiltonian path p is a path visiting each vertex exactly once, i.e. for every v ∈ V, there exist at most two edges e_1, e_2 ∈ p such that v ∈ e_1 and v ∈ e_2. A cycle is Hamiltonian if it is a Hamiltonian path and a cycle. A path p covers a set of vertices V if, for every v ∈ V, there exists some e ∈ p such that v ∈ e. Note that a Hamiltonian path is a path cover of every vertex in the graph, and a Hamiltonian cycle is a cycle cover of every vertex in the graph. The distance between a pair of vertices v, u ∈ V in the graph G, denoted D(v, u), is the smallest value d ∈ℕ_0 for which there exists some path p of length d covering both v and u, i.e. 
The diameter of a graph G is the maximum distance between any pair of vertices in the graph, i.e. max_v, u ∈ V D(v, u). Given two graphs G = (V, E) and G' = (V', E'), G is isomorphic to G' if there exists a bijective mapping f : V ↦ V' such that, for every v, u ∈ V, if (v, u) ∈ E, then (f(v), f(u)) ∈ E'. The notation G ≅ G' is used to denote that G is isomorphic to G', and G ≇G' to denote that G is not isomorphic to G'. A subgraph of a graph G = (V, E) is a graph G' = (V', E') such that V' ⊆ V and E' ⊆ E. A clique G'=(V',E') is a subgraph G' ⊆ G which is complete (i.e. for all distinct u,v ∈ V', (u,v) ∈ E'). The clique number ω of a graph G is the size of the largest clique in G. § BASIC PROPERTIES OF THE CONFIGURATION GRAPH In this section, we provide a set of combinatorial results on the configuration graph. We provide a set of results showing that, for any vertex v of the configuration graph G(P) = (V, E) and any radius ℓ, the subgraphs with vertex set V'(v) = {u ∈ V | D(v, u) ≤ℓ} and edge set E' = (V'(v) × V'(v)) ∩ E are pairwise isomorphic. We build on this by providing a tight bound on the diameter of these graphs. First, we consider some local structures within the graph, starting with cliques. Let P be a Parikh vector. Each vertex v ∈ V(P) is part of ∑_j ∈Σ∏_i ∈Σ∖{j} P[i] maximal cliques, each of which has cardinality in the set {P[i]+1 | i ∈ [σ]}. Consider first the words with Parikh vector P = (k, 1). Note that every word in Σ^P consists of k copies of the symbol 1, and one copy of the symbol 2. Therefore, given any pair of words w, u ∈Σ^P such that w[i] = u[j] = 2, the 2-swap (i, j) transforms w into u and hence there exists some edge between w and u. Hence G(P) must be a complete graph, a clique, of size k + 1. In the general case, consider the word w with the Parikh vector P = (k_1, k_2, …, k_σ). Further, let Pos(w, i) = {j ∈ [| w |] | w[j] = i}. First, given i,j ∈ [σ], i ≠ j, let i_1, i_2 ∈ Pos(w, i) and j_1 ∈ Pos(w, j) be a set of indexes. Let v_1 = w ∘ (i_1, j_1) and v_2 = w ∘ (i_2, j_1). Then, v_1[i_1] = v_2[i_2], v_1[i_2] = v_2[i_1], and v_1[j_1] = v_2[j_1]. Further, for every ℓ∈ [| w |] such that ℓ∉{i_1, i_2, j_1}, v_1[ℓ] = v_2[ℓ], as these positions are unchanged by the swaps. Therefore, v_1 = v_2 ∘ (i_1, i_2), and hence these words are connected in G(P). Further, as this holds for any j_1 ∈ Pos(w, j), the set of words induced by the swaps (j,ℓ), for some fixed ℓ∈ Pos(w, i), corresponds to a clique of size P[j] + 1. Therefore, there exist ∏_i ∈Σ∖{j} P[i] cliques of size P[j] + 1 including w, for any j ∈Σ. We now show that the cliques induced by the set of swaps S(i, j) = {(i', j) | i' ∈ Pos(w, w[i])} are maximal. Let C(i, j, w) = { w }∪{w ∘ (i', j) | (i', j) ∈ S(i, j) }, i.e. the clique induced by the set of swaps in S(i, j). Consider a set of swaps (i_1, j_1), (i_2, j_1), (i_1, j_2) and (i_2, j_2), where i_1, i_2 ∈ Pos(w, i) and j_1, j_2 ∈ Pos(w, j). Let v_1, 1 = w ∘ (i_1, j_1), v_2, 1 = w ∘ (i_2, j_1), v_1, 2 = w ∘ (i_1, j_2) and v_2,2 = w ∘ (i_2, j_2). Note that {w, v_1, 1, v_2, 1}⊆ C(i, j_1), {w, v_1, 1, v_1, 2}⊆ C(j, i_1), {w, v_1, 2, v_2, 2}⊆ C(i, j_2) and {w, v_2, 1, v_2, 2}⊆ C(j, i_2). To show that v_2, 2 does not belong to any clique containing w, v_1, 1, v_1, 2 or v_2, 1, we claim there exists no swap transforming v_1, 1 into v_2, 2. Observe first that, for every ℓ∈ [| w |] such that ℓ∉{i_1, i_2, j_1, j_2}, v_1, 1[ℓ] = v_2, 2[ℓ].
As v_1, 1[i_1] = v_2, 2[j_1] and v_1, 1[j_1] = v_2, 2[i_1], the words v_1, 1 and v_2, 2 differ in all four positions i_1, i_2, j_1, j_2, and hence exactly two swaps are needed to transform v_1, 1 into v_2, 2. Therefore, for any pair of swaps (i_1, j_1), (i_2, j_2) ∈ Pos(w, i) × Pos(w, j), such that i_1 ≠ i_2 and j_1 ≠ j_2, the words w ∘ (i_1, j_1) and w ∘ (i_2, j_2) are not adjacent in G(P). Similarly, given a set of indices i' ∈ Pos(w, i), j' ∈ Pos(w, j) and ℓ' ∈ Pos(w, ℓ) and the swaps (i', j'), (i', ℓ'), observe that as w[j'] ≠ w[ℓ'], the distance between w ∘ (i', j') and w ∘ (i', ℓ') is 2. Therefore, every clique induced by the set of swaps S(i, j) = {(i', j) | i' ∈ Pos(w, w[i])} is maximal. Every vertex v corresponding to the word w ∈Σ^P belongs only to the maximal cliques corresponding to the set of words {w}∪{w ∘ (i, j) | i ∈ Pos(w, x)} for some fixed symbol x ∈Σ and position j ∈ [| w |], w[j] ≠ x, where Pos(w, x) = {i ∈ [| w |] | w[i] = x}. Now, since the edges in each maximal clique only swap two types of symbols, we have the following corollary for the number of cliques (here we define P[ε]:=0 and 0! :=1). There are ∑_(i,j) ∈Σ×Σ( ∑_k ∈Σ∖{i,j} P[k] )!/∏_k∈Σ∖{i,j} P[k]! maximal cliques in G(P). The clique number ω(G(P)) is equal to max_i∈ [σ] P[i] + 1. Let G_r(v) be the subgraph of G(P) induced by all vertices of distance at most r away from a given vertex v. Then, for any pair of vertices u,v ∈ V and given any r ∈ℤ^+, G_r(u) ≅ G_r(v). Let π∈ S_n be the permutation such that u ∘π = v. We can use the permutation π to define an isomorphism f: G_r(u) → G_r(v) such that f(w) = w ∘π. In order to show that f is an isomorphism we need to show that it preserves adjacency. We start by showing that, for every word w ∈ G_1(u), f(w) ∈ G_1(v). Let τ = (τ_1,τ_2) be the 2-swap such that w= u∘τ. We now have three cases for how π and τ interact: either none of the indices in τ are changed by π, just one of τ_1 or τ_2 is changed by π, or both τ_1 and τ_2 are changed by π. In the first case, f(w) is adjacent to v as v ∘τ = f(w). In the second case, let (τ_1, τ_2) be a swap such that π[τ_1] = τ_1, i.e. τ_1 is not changed by the permutation π. We define a new swap τ' such that v ∘τ' = f(w). Let x, y ∈ [n] be the positions in v such that π[x] = τ_2 and π[τ_2] = y. Now, let τ' = (τ_1, y). Observe that v[y] = w[τ_2], and v[τ_1] = w[τ_1]. Therefore, the word v ∘τ' = u ∘τ∘π. Note that as the ordering of the indexes in the swap does not change the swap, the same argument holds for the case when π[τ_2] = τ_2. In the final case, let τ' = (π[τ_1], π[τ_2]). Note that by the same arguments as above, u[π[τ_1]] = v[τ_1] and u[π[τ_2]] = v[τ_2], and hence v ∘τ' = u ∘τ∘π. Repeating this argument for each word at distance ℓ∈ [1, r] proves this statement. §.§ Diameter of the Graph We now provide the exact value of the diameter of any configuration graph G(P). First, Theorem <ref> provides the explicit diameter and the main result of this section. The remainder of this section is dedicated to proving this result. The diameter of the Configuration Graph, G(P), for a given Parikh vector P is n-max_i ∈ [σ] P[i]. Theorem <ref> is proven by first showing, in Lemma <ref>, that the upper bound matches n-max_i ∈ [σ] P[i]. Lemma <ref> shows that the lower bound on the diameter matches the upper bound, concluding our proof of Theorem <ref>. The diameter of the Configuration Graph, G(P), for a given Parikh vector P is at most n-max_i ∈ [σ] P[i]. This claim is proven by providing a procedure to determine a sequence of n-max_i ∈ [σ] P[i] swaps to transform any word w ∈Σ^P into any other word v ∈Σ^P.
We assume, without loss of generality, that P[1] ≥ P[2] ≥…≥ P[σ]. This procedure operates by iterating over the set of symbols in Σ, and the set of occurrences of each symbol in the word. At each step, we have a symbol x ∈ [2, σ] and an index k ∈ [1, P[x]]. The procedure finds the position i of the k^th appearance of symbol x in w, and the position j of the k^th appearance of x in v. Formally, i is the value such that w[i] = x and |{i' ∈ [1, i - 1] | w[i'] = x}| = k - 1, and j is the value such that v[j] = x and |{j' ∈ [1, j - 1] | v[j'] = x}| = k - 1. Finally, the algorithm adds the swap (i, j) to the set of swaps, and then moves to the next symbol. This procedure requires one swap for each occurrence of a symbol other than 1 in w, giving a total of n - max_i ∈ [σ] P[i] swaps. Note that after each swap, the symbol at position j of the word is the symbol v[j]. Therefore, after all swaps have been applied, the symbol at position j ∈{i ∈ [1, | w |] | v[i] ∈Σ∖{1 }} must equal v[j]. By extension, for any index i such that v[i] = 1, the symbol at position i must be 1, and thus equal v[i]. Therefore this procedure transforms w into v. In order to prove the lower bound on the diameter (i.e. that diam(G(P)) ≥ n- max_i ∈ [σ] P[i]) we introduce a new auxiliary data structure, the 2-swap graph. Informally, the 2-swap graph, defined for a pair of words w, v ∈Σ^P and denoted G(w, v) = {V(w, v), E(w, v)}, is a directed graph such that an edge exists from u_i to u_j, for u_i, u_j ∈ V(w, v), if and only if w[i] = v[j]. Note that this definition allows for self-loops. Let w, v ∈Σ^P be a pair of words. The 2-swap graph G(w, v) = {V(w, v), E(w, v)} contains the vertex set V(w, v) = (u_1, u_2, …, u_| w |) and the edge set E(w,v) = {(u_i, u_j) ∈ V(w, v) × V(w, v) | w[i] = v[j]}; that is, for all i,j ∈ [n] there exists an edge (i,j) ∈ E(w,v) if and only if w[i] = v[j]. An example of the 2-swap graph for w = aaabbc and v = bcbaaa is given in Figure <ref>. Let G(w,v) be a graph constructed as above for transforming w into v using 2-swaps. Then, any cycle cover 𝒞 of G(w,v) can be transformed into a set of 2-swaps to transform w into v. Let C ∈𝒞 be a cycle in the cycle cover such that C = (e^1, e^2, …, e^|C|), where e^i_2 = e^((i mod |C|) + 1)_1. The set of 2-swaps is constructed as follows: starting with i = 1 and in increasing value of i ∈ [|C| - 1], the 2-swap (e^1_1, e^i_2) is added to the set of 2-swaps S. Let us assume, for the sake of contradiction, that S does not correspond to a proper set of 2-swaps converting w into v. Then, there must exist some position i such that the symbol at position i of w ends up at some position j such that w_i ≠ v_j. However, as w_i must be placed at some position that is connected to node i by an edge, there must be an edge between i and j, hence w_i = v_j, contradicting the initial assumption. Therefore, S must correspond to a proper set of 2-swaps. Let 𝒞 be a cycle cover of G(w,v). Then there exists a set of ∑_c ∈𝒞 (|c| - 1) 2-swaps transforming w into v. Let S be the smallest set of 2-swaps transforming w into v; then S must correspond to a vertex disjoint cycle cover of G(w,v). For the sake of contradiction, let S be the smallest set of 2-swaps transforming w into v, such that S corresponds to the cycle cover 𝒞 where 𝒞 is not vertex disjoint. Let c_1, c_2 ∈𝒞 be a pair of cycles sharing some vertex u. Then, following the construction above, the symbol w_u must be used in two separate positions in v, contradicting the assumption that v can be constructed from w using 2-swaps.
Hence S must correspond to a vertex disjoint cycle cover. Let w,v ∈Σ^P be a pair of words sharing a common Parikh vector and let S be the smallest set of 2-swaps transforming w into v. Then S corresponds to a vertex disjoint cycle cover of G(w,v) containing the maximum number of vertex disjoint cycles. From Corollary <ref>, the number of 2-swaps corresponds to the sum of the lengths of the cycles minus the number of cycles. As each cycle c contains | V ∩ c| + 1 edges (i.e. the number of vertices in c + 1), the number of 2-swaps is minimised by maximising the number of cycles (and minimising the number of vertices in each cycle). Further, this cover must be vertex disjoint in order to satisfy Corollary <ref>. Let w,v ∈Σ^P be a pair of words sharing the same Parikh vector P such that w_i = v_i for some i ∈ [n]. Then D(w,v) = D(w_[1,i - 1] + w_[i + 1, n], v_[1,i - 1] + v_[i + 1, n]). Assume first, for the sake of contradiction, that D(w,v) > D(w_[1,i - 1] + w_[i + 1, n], v_[1,i - 1] + v_[i + 1, n]). Then, the set of 2-swaps used to transform w_[1,i - 1] + w_[i + 1, n] into v_[1,i - 1] + v_[i + 1, n] can be transformed into a set of 2-swaps to transform w into v. In the other direction, assume that D(w,v) < D(w_[1,i - 1] + w_[i + 1, n], v_[1,i - 1] + v_[i + 1, n]). Then, for any 2-swap involving position i, there must be a complementary 2-swap that returns the symbol w_i to position i, and hence these can be replaced, leading to a set of 2-swaps of the same length. The diameter of G(P) is at least n-max_i∈[σ]P[i]. We assume w.l.o.g. that P[1] ≥ P[2] ≥…≥ P[σ]. Let w,v ∈Σ^P be such that w = (1 2 3 …σ)^P[σ] (1 2 3 …σ - 1)^P[σ - 1]- P[σ] (1 2 3 …σ - 2)^P[σ - 2]- P[σ - 1]… (1)^P[1] - P[2] and v = (2 3 …σ 1)^P[σ] (2 3 …σ - 1 1)^P[σ - 1] - P[σ] (2 3 …σ - 2 1)^P[σ - 2] - P[σ - 1]… (1)^P[1] - P[2]. I.e. w is made up of P[1] subwords, each of which is of the form 12...k, and v is made up of the same subwords as w but each of them has been cyclically shifted by one (for example, when P=(3,2,1) we have w=123121 and v=231211). Following Lemma <ref>, the minimum number of 2-swaps needed to convert w into v can be derived from a vertex disjoint cycle cover of G(w,v) with the maximum number of cycles. Observe that any occurrence of symbol σ must have an outgoing edge in G(w,v) to an occurrence of symbol 1, and an incoming edge from an occurrence of symbol σ - 1. Repeating this logic, each instance of σ must be contained within a cycle of length σ. Removing each such cycle and repeating this argument gives a set of P[1] cycles, with P[σ] cycles of length σ, P[σ- 1] - P[σ] cycles of length σ - 1, and generally P[i] - P[i + 1] cycles of length i. This gives a minimum of n-P[1] = n-max_i∈[σ]P[i] 2-swaps needed to transform w into v. The proof of Theorem <ref> follows from the tight upper and lower bounds in Lemmas <ref> and <ref>.
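The diameter formula is easy to sanity-check by brute force on small instances. The following hedged Python sketch (ours, not the authors'; it reuses the hypothetical configuration_graph helper from the earlier sketch) computes the diameter by breadth-first search and compares it with n − max_i P[i].

```python
from collections import deque

def graph_diameter(vertices, edges):
    """Diameter of an undirected graph given as a vertex list and a set of frozenset edges."""
    adj = {v: [] for v in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    diameter = 0
    for source in vertices:              # BFS from every vertex of the (connected) graph
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for x in adj[u]:
                if x not in dist:
                    dist[x] = dist[u] + 1
                    queue.append(x)
        diameter = max(diameter, max(dist.values()))
    return diameter

for parikh in [(3, 2), (2, 2, 1), (1, 1, 1, 1)]:
    V, E = configuration_graph(parikh)   # helper from the earlier sketch
    assert graph_diameter(V, E) == sum(parikh) - max(parikh)
```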
§ HAMILTONICITY In this section, we provide a proof that the configuration graph contains Hamiltonian paths and, for binary alphabets, provide an efficient algorithm for enumerating a Hamiltonian path. We do so by first showing that every configuration graph over a binary Parikh vector is Hamiltonian. This is then generalised to the case of alphabets of size σ, using the binary case to build Hamiltonian paths with alphabets of size σ. §.§ Binary Alphabets For notational conciseness, given a symbol a, the notation ā is used to denote the other symbol of the binary alphabet, i.e. ā∈Σ, ā ≠ a; if a = 1, then ā = 2. We prove Hamiltonicity via a recursive approach that forms the basis for our enumeration algorithm. Our proof of Hamiltonicity works by taking an arbitrary word w in the graph, and constructing a path starting with w. At each step of the path, the idea is to find the shortest suffix of w such that both symbols in Σ appear in the suffix. Letting w = p s, the path is constructed by first forming a path containing every word p s', for every s' ∈Σ^P(s), i.e. a path from w transitioning through every word formed by maintaining the prefix p and permuting the suffix s. Once every such word has been added to the path, the algorithm repeats this process by performing some swap of the form (| p |, i) where i ∈ [| p | + 1, | w |], i.e. a swap taking the last symbol in the prefix p and replacing it with the opposite symbol, taken from some position in the suffix. This process is repeated, considering increasingly long suffixes, until every word has been covered by the path. By working in this suffix-first approach, we ensure that every word with the same prefix is added to the path first, before shortening the prefix. Algorithm <ref> outlines this logic within the context of the enumeration problem, where each transition is output while constructing the path. For every Parikh vector P ∈ℕ_0^2 and word w ∈Σ^P, there exists a Hamiltonian path starting at w in the configuration graph G(P) = (V(P), E(P)). We prove this statement in a recursive manner. As a base case, consider the three vectors of length 2, (2, 0), (0, 2) and (1, 1). Note that there exists only a single word with the Parikh vectors (2, 0) or (0, 2), and thus the graph must, trivially, be Hamiltonian. For the Parikh vector (1,1), there exist only the words 1 2 and 2 1, connected by the 2-swap (1,2); therefore there is also a Hamiltonian path, and it can be found starting at either word. In the general case, assume that for every Parikh vector P' = (P_1', P_2') with P_1' + P_2' < ℓ, the graph G(P') contains a Hamiltonian path, and further there exists such a path starting at every word in Σ^P'. Now, let P = (P_1, P_2) be an arbitrary Parikh vector such that P_1 + P_2 = ℓ. Given some word w ∈Σ^P, observe that there must exist some Hamiltonian path starting at the word w[2, ℓ] in the subgraph G'(P) = (V'(P), E'(P)) where V'(P) = {u ∈ V(P) | u[1] = w[1]} and E'(P) = (V'(P) × V'(P)) ∩ E(P). Let w' be the last word visited by the Hamiltonian path in G'(P), and let i be some position in w' such that w'[i] ≠ w[1]. Note that there must exist a Hamiltonian path starting at (w' ∘ (1, i))[2, ℓ] in the subgraph G”(P) = (V”(P), E”(P)) where V”(P) = {u ∈ V(P) | u[1] ≠ w[1]} and E”(P) = (V”(P) × V”(P)) ∩ E(P). As every vertex in G(P) is either in the subgraph G'(P) or G”(P), the Hamiltonian paths starting at w in G'(P) and at w' ∘ (1, i) in G”(P) cover the complete graph. Further, as these paths are connected, there exists a Hamiltonian path starting at the arbitrary word w ∈Σ^P, and therefore the Theorem holds. *Enumeration We now provide our enumeration algorithm. Rather than output each word completely, we instead maintain the current state of the word in memory and output the swaps taken at each step, corresponding to the transitions. This way, at any given step the algorithm may be paused and the current word fully output, while the full path can be reconstructed from only the output. There are two key challenges behind this algorithm. The first is the problem of deciding the next swap to be taken to move from the current word in the graph to the next word.
The second is the problem of minimising the worst-case delay in the output of these swaps, keeping in mind that each output is of constant size. High-Level Idea. From a given word w with Parikh vector P, the algorithm works by first finding the shortest suffix s of w such that there exists some pair of indexes i, j for which s[i] ≠ s[j]. Using this suffix and letting w = u s, we find a path through every vertex in G(P) with the prefix u. Note that following the same arguments as Theorem <ref>, such a path must exist. Once every word in G(P) with the prefix u has been visited by the path, the algorithm then enumerates every word with the prefix u[1, | u | - 1], extending the current path. When adding every word with the prefix u[1, | u | - 1] to the path, note that as every word with the prefix u has already been added, all that is left is to add those words with the prefix u[1, | u | - 1] followed by the symbol opposite to u[| u |], which is achieved via the same process as before. The swaps are determined as follows. From the initial word w, let R_1 (Rightmost 1) be the last occurrence of the symbol 1 in w, and let R_2 be the last occurrence of 2 in w. The first swap is made between min(R_1, R_2) and min(R_1, R_2) + 1, with the algorithm then iterating through every word with the Parikh vector P(w[min(R_1, R_2), | w |]) - P(w[min(R_1, R_2)]). In the general case, a call is made to the algorithm with a Parikh vector P=(P_1, P_2), with the current word w fixed, and the assumption that no word with the prefix w[1, | w | - (P_1 + P_2)] has been added to the path other than w. The algorithm, therefore, is tasked with iterating through every word with the current prefix. Let R_1 be the last occurrence of the symbol 1, and R_2 be the last occurrence of the symbol 2 in the current word. The algorithm first enumerates every word with the prefix w[1, min(R_1, R_2) - 1]. Noting that there exists only a single word with the prefix w[1, min(R_1, R_2)], it is sufficient to only enumerate those words with the prefix w[1, min(R_1, R_2) - 1] followed by the symbol opposite to w[min(R_1, R_2)]. The first swap made by this algorithm is between (min(R_1, R_2), min(R_1, R_2) + 1), allowing a single recursive call to be made to HamiltonianEnumeration(P((w ∘ (min(R_1, R_2), min(R_1, R_2) + 1))[min(R_1, R_2) + 1, | w |])). Note that this call asks the algorithm to enumerate every word with the prefix w[1, min(R_1, R_2) - 1] followed by the symbol opposite to w[min(R_1, R_2)]. As every word with the prefix w[1, min(R_1, R_2)] has already been output and added to the path, once this recursive call has been made, every word with the prefix w[1, min(R_1, R_2) - 1] will have been added to the path. Note that the word w is updated at each step, ending at the word w'. After every word with the prefix w'[1, min(R_1, R_2) - 1] has been added to the path, the next step is to add every word with the prefix w'[1, min(R_1, R_2) - 2] to the path. As every word with the prefix w[1, min(R_1, R_2) - 1] is already in the path, it is sufficient to add just those words with the prefix w'[1, min(R_1, R_2) - 2] in which position min(R_1, R_2) - 1 holds the opposite symbol. This is achieved by making the swap between min(R_1, R_2) - 1 and the smallest value i > min(R_1, R_2) - 1 such that w[i] ≠ w'[min(R_1, R_2) - 1], then recursively enumerating every word with the prefix w'[1, min(R_1, R_2) - 2]. This process is repeated in decreasing length of prefix until every word has been enumerated. In order to quickly determine the last position in the current word w containing the symbols 1 and 2, a pair of balanced binary search trees are maintained.
The tree T_1 corresponds to the positions of the symbol 1 in w, with each node in T_1 being labelled with an index and the tree sorted by the value of the labels. Analogously, tree T_2 corresponds to the positions of the symbol 2 in w. Using these trees, note that the last position in w at which either symbol appears can be determined in O(log n) time, and further each tree can be updated in O(log n) time after each swap. Let P be a Parikh vector of length n, and let w ∈Σ^P be a word. Algorithm <ref> outputs a path visiting every word in Σ^P starting at w. This lemma is proven via the same tools as Theorem <ref>. Explicitly, we show first that the algorithm explores every suffix in increasing length, relying on the exploration of suffixes of length 2 as a base case, then providing an inductive proof for the remaining cases. We assume that the starting word has been fully output as part of the precomputation. With this in mind, note that there are two cases for length 2 suffixes: either the suffix contains two copies of the same symbol, or one copy of each symbol. In the first case, as w has been output, so has every permutation of the length 2 suffix of w. Otherwise, the algorithm outputs the swap (n - 1, n) and returns to the previous call. In the general case, we assume that for some ℓ∈ [n], every permutation of w[n - ℓ + 1, n] has been visited by the path. Further, we assume the algorithm can, given any word v, visit every word of the form v[1, n - ℓ] u, for every u ∈Σ^P(v[n - ℓ + 1, n]), i.e. the algorithm is capable of taking any word v as an input, and visiting every word with the same Parikh vector P(v) and the prefix v[1, n - ℓ].
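The following is a hedged Python sketch of the suffix-first enumeration idea (ours, not the authors' Algorithm <ref>): it outputs the swaps of a Hamiltonian path starting at a given binary word, but uses plain recursion and a linear scan instead of the balanced search trees and tail recursion used to obtain the O(log n) delay bound. Positions are 0-based, and function names are our own.

```python
def hamiltonian_swaps(word):
    """Swaps (0-based) of a Hamiltonian path in the configuration graph, starting at `word`."""
    w, swaps = list(word), []

    def extend(b):
        # Visit every arrangement of w[b:] (prefix w[:b] fixed) exactly once,
        # starting from the current arrangement; the current word counts as visited.
        if b >= len(w) - 1 or len(set(w[b:])) == 1:
            return
        extend(b + 1)                      # all words keeping the current symbol at position b
        j = next(i for i in range(b + 1, len(w)) if w[i] != w[b])
        w[b], w[j] = w[j], w[b]            # flip position b to the opposite symbol
        swaps.append((b, j))
        extend(b + 1)                      # all words with the opposite symbol at position b

    extend(0)
    return swaps

# Small check: starting from 1122, the 5 swaps visit all 6 words exactly once.
path, current = ["1122"], list("1122")
for i, j in hamiltonian_swaps("1122"):
    current[i], current[j] = current[j], current[i]
    path.append("".join(current))
assert len(path) == len(set(path)) == 6
```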
Then, after every such word has been visited by the path, the algorithm returns to the previous state, with the goal of enumerating every word with the prefix v[1, n - ℓ]. As every word in Σ^P with the prefix v[1, n - ℓ + 1] has been visited, it is sufficient to show that only those words with the prefix v[1, n - ℓ] followed by the symbol opposite to v[n - ℓ + 1] are enumerated. The first swap made at this state is between n - ℓ + 1 and the smallest index j ∈ [n - ℓ + 1, n] such that v[n - ℓ + 1] ≠ v[j], which, as the algorithm has only visited words with the prefix v[1, n - ℓ + 1], has not previously been visited. After this swap, the algorithm enumerates every word with the prefix v[1, n - ℓ] followed by the symbol opposite to v[n - ℓ + 1], which, by the inductive assumption, is done without visiting the same word twice. Therefore, by induction, every word with the prefix v[1, n - ℓ] is visited by the path output by Algorithm <ref> exactly once. Given a Parikh vector P = (P_1, P_2) such that P_1 + P_2 = n, and a word w ∈Σ^P, Algorithm <ref> outputs a Hamiltonian path with at most O(log n) delay between the output of each edge after O(n log n) preprocessing. Following Lemmas <ref> and <ref>, the path output by Algorithm <ref> is Hamiltonian. In the preprocessing step, the algorithm constructs two balanced binary search trees T_1 and T_2. Every node in T_1 is labelled by some index i_1 ∈ [n] for which w[i_1] = 1, and the tree is sorted by the values of the labels. Similarly, every node in T_2 is labelled by some index i_2 ∈ [n] for which w[i_2] = 2, and the tree is sorted by the values of the labels. As each of these constructions requires at most O(n log n) time, the total complexity of the preprocessing is O(n log n). During each call, we have one of three cases. If either value of the Parikh vector (P_1, P_2) is 0, then the algorithm immediately returns to the last state without any output. If the Parikh vector is (1, 1), then the algorithm outputs a swap between the two symbols, updates the trees T_1 and T_2, requiring at most O(log n) time, then returns to the last state. In the third case, the Parikh vector (P_1, P_2) satisfies P_1 > 0, P_2 > 0. First, the algorithm determines the last position in the current state of the word w containing the symbol 1 and the last position containing the symbol 2, i.e. the values R_1 = max{j ∈ [1, n] | w[j] = 1} and R_2 = max{j ∈ [1, n] | w[j] = 2}. These values can be determined in O(log n) time using the trees T_1 and T_2. Using these values, the algorithm iterates through every length from min(R_1, R_2) to n - (P_1 + P_2 - 1), enumerating every word in Σ^P(w) with the prefix w[1, n - (P_1 + P_2 - 1)]. For each ℓ∈ [min(R_1, R_2), n - (P_1 + P_2 - 1)], the algorithm outputs the swap (ℓ, j), where j ∈ [n - ℓ, n] is the largest value for which w[j] ≠ w[ℓ]. After this output, the algorithm updates the trees T_1 and T_2. Note that both finding the value of j and updating the trees require O(log n) time. After this swap, the algorithm makes the next call to HamiltonianEnumeration. Note that after this call, HamiltonianEnumeration must either return immediately to the last state or output some swap before either returning or making the next recursive call. Therefore, ignoring the time complexity of returning to a previous state in the stack, the worst case delay between outputs is O(log n), corresponding to searching and updating the trees T_1 and T_2. To avoid having to check each state in the stack after returning from a recursive call, the algorithm uses tail recursion.
Explicitly, rather than returning to the state in the stack from which the algorithm was called, the algorithm is passed a pointer to the last state in the stack corresponding to a length ℓ such that some word with the prefix w[1, n - ℓ] has not been output. To do so, after the swap between n - (P_1 + P_2 - 1) and j is made, for the value j as defined above, the algorithm passes the pointer it was initially given, denoted in the algorithm as last_state, to the call to HamiltonianEnumeration, allowing the algorithm to skip over the current state during the recursion process. The pseudocode for our HamiltonianEnumeration algorithm is given in Algorithm <ref>. §.§ General Alphabets We now show that the graph is Hamiltonian for alphabets Σ of size σ > 2. The main idea here is to build a path based on recursively grouping together sets of symbols. Given a Parikh vector P = (P_1, P_2, …, P_σ), the algorithm operates in a set of σ recursive phases. At step i, the algorithm finds a Hamiltonian path in the graph G(P_i, P_i + 1, …, P_σ), then maps that to a path in G(P). This mapping is done by considering each word in G(P_i, P_i + 1, …, P_σ) as a permutation of every occurrence of the symbols i, i + 1, …, σ. The paths in G(P_i, P_i + 1, …, P_σ) are generated in turn by a recursive process. Starting with the word w, first, the algorithm outputs the swaps corresponding to a path visiting every vertex corresponding to a permutation of the symbols i + 1, …, σ in w. Explicitly, every word v in this path is of the form v[i] = w[i] if w[i] ∈{1, 2, …, i}, and v[i] = x_i ∈{i + 1, …, σ} if w[i] ∉{1, 2, …, i}, where x_i is some arbitrary symbol in {i + 1, …, σ}. Further, every such word is visited exactly once. After this path is output, the algorithm makes a single swap, corresponding to the first swap in G(P_i, P_i + 1 + P_i + 2 + … + P_σ), ensuring that this swap must involve some position in w containing the symbol i. After this swap, another path visiting exactly once every word corresponding to a permutation of the symbols i + 1, …, σ in w is output. By repeating this for every swap in G(P_i, P_i + 1 + P_i + 2 + … + P_σ), inserting a path visiting exactly once every word corresponding to a permutation of the symbols i + 1, …, σ in w between each such swap, note that every permutation of the symbols i, i + 1, …, σ in w is output exactly once. In other words, the path visits every word v ∈Σ^P of the form v[i] = w[i] if w[i] ∈{1, 2, …, i - 1}, and v[i] = x_i ∈{i, i + 1, …, σ} if w[i] ∈{i, i + 1, …, σ}, where x_i is an arbitrary symbol in {i, i + 1, …, σ}. Further, each such word is visited exactly once. Using the binary alphabet as a base case, this process provides an outline of the proof of the Hamiltonicity of G(P). A full proof of Theorem <ref> can be found in Appendix <ref>. Given an arbitrary Parikh vector P ∈ℕ^σ, there exists a Hamiltonian path starting at every vertex v in the configuration graph G(P). This is formally proven from the outline above via an inductive argument. As a base case, following Theorem <ref>, for any binary Parikh vector P and word w ∈Σ^P, there exists a Hamiltonian path in G(P) starting at w. We now assume that, for any Parikh vector p ∈ℕ_0^ℓ - 1 and word w ∈Σ^p, there exists some Hamiltonian path in G(p) starting at w. Let q = (q_1, q_2, …, q_ℓ) be a Parikh vector, and let v ∈Σ^q be an arbitrary word with the Parikh vector q. To construct the Hamiltonian path starting at v in G(q), we first form a Hamiltonian path P_1 in G(q_2, q_3, …, q_ℓ) starting at the word v' formed by deleting every symbol 1 from v.
We assume that we have a table T such that T[i] returns the index in [1, | v |] for which v'[i] = v[T[i]]. To avoid any repetition, we require T[1] < T[2] < … < T[| v' |]. With this table, each swap (s_1, s_2) in the path P_1 can be converted to the swap (T[s_1], T[s_2]) in the graph G(q), swapping the same symbols in v as in the reduced word v'. With this conversion, P_1 constructs a path in G(q) visiting exactly once each word where the symbol 1 appears only at the positions {i ∈ [1, | v |] | v[i] = 1}. Next, we construct a Hamiltonian path P_2 in the graph G(q_1, q_2 + q_3 + … + q_ℓ) starting at the word v' where v'[i] = 1 if v[i] = 1, and v'[i] = x if v[i] ≠ 1, for some new symbol x. This graph can be seen as an abstraction of G(q), considering only swaps between some position labelled 1 and any position with a different symbol. The first swap in P_2 is applied to the current word; however, rather than proceeding along this path, a new set of swaps is inserted corresponding to some Hamiltonian path in G(q_2, q_3, …, q_ℓ) generated in the same manner as before. Again, this new path corresponds to a permutation of every symbol in the set {2, 3, …, σ}, while fixing the positions of the symbol 1 in the word. This is repeated by taking a single swap from the path P_2, followed by a complete path corresponding to a Hamiltonian path in G(q_2, q_3, …, q_ℓ). By combining these paths, the new path must visit exactly once every word in Σ^(q_2, q_3, …, q_ℓ) where the positions of symbol 1 are fixed, each time a word w with a new permutation of the symbol 1 is visited. Similarly, every word in Σ^(q_1, q_2 + q_3 + … + q_σ) is visited exactly once, corresponding to a path through every permutation of the positions of the symbols 1 in the word. Therefore, the output path is Hamiltonian. § CONCLUSION Following the work on 2-swaps, the most natural next step is to consider these problems for k-swaps, based on two variants: swaps of exactly k symbols and swaps of at most k symbols. Note that a configuration graph for the exactly-k-swap permutation might not have a single component. We would also like to point to other attractive directions, namely permutations of multidimensional words <cit.> and of important combinatorial objects such as necklaces and bracelets <cit.>.
http://arxiv.org/abs/2307.00509v1
20230702080910
HeGeL: A Novel Dataset for Geo-Location from Hebrew Text
[ "Tzuf Paz-Argaman", "Tal Bauman", "Itai Mondshine", "Itzhak Omer", "Sagi Dalyot", "Reut Tsarfaty" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.IR", "cs.LG" ]
HeGeL: A Novel Dataset for Geo-Location from Hebrew Text ============================================================================================== The task of textual geolocation — retrieving the coordinates of a place based on a free-form language description — calls for not only grounding but also natural language understanding and geospatial reasoning. Even though there are quite a few datasets in English used for geolocation, they are currently based on open-source data (Wikipedia and Twitter), where the location of the described place is mostly implicit, such that the location retrieval resolution is limited. Furthermore, there are no datasets available for addressing the problem of textual geolocation in morphologically rich and resource-poor languages, such as Hebrew. In this paper, we present the Hebrew Geo-Location (HeGeL) corpus, designed to collect literal place descriptions and analyze lingual geospatial reasoning. We crowdsourced 5,649 literal Hebrew place descriptions of various place types in three cities in Israel. Qualitative and empirical analysis show that the data exhibits abundant use of geospatial reasoning and requires a novel environmental representation. Equal contribution.[For data and code see https://github.com/OnlpLab/HeGeL] § INTRODUCTION AND BACKGROUND Textual Geolocation Identification, a crucial component of Geographic Information Retrieval (GIR), is the task of resolving the location, i.e., the coordinates of a place, based on the reference to it in a text. It requires a combination of language and environmental knowledge. On top of the usual non-spatial linguistic challenges in Natural Language Understanding (NLU), such as named entity recognition (NER), anaphora resolution, bridging anaphora, etc., the textual geolocation task presents geospatial challenges that require multimodal processing and grounding <cit.>. Proper names, such as *Rabin Square, also known as named entities in Natural Language Processing (NLP), and as rigid designators in formal semantics <cit.>, can be easily grounded based on a gazetteer or a simple map. However, geolocating linguistic terms that involve spatial expressions without the explicit mention of a proper name still presents an open challenge. This interpretation challenge includes the understanding and resolution of (at least): (i) definite descriptions, such as *the school; (ii) geospatial terms, such as cardinal directions, e.g., *east of; and (iii) geospatial numerical reasoning, e.g., *two buildings away from the pharmacy. To address these and other challenges, we need to both ground entity mentions to their corresponding physical entities in the environment, and to reason about geospatial relations expressed between entities — these two processes being closely intertwined. To do so, we need a corpus for the geolocation task that maps rich geospatial place descriptions to their corresponding location coordinates. However, current corpora for geolocation are based on naturally-occurring open-source resources, such as Wikipedia articles <cit.>, which are not spatially oriented, i.e., the description of locations is implicit or absent in the corresponding text. Subsequently, the accuracy of retrieval is fairly low (around 100 km). Furthermore, all geolocation datasets previously studied in NLP are in English, with a dearth of corpora for low-resource languages, in particular, for morphologically rich languages, such as Hebrew.
To understand the geolocation challenges and build models that perform various spatial reasoning tasks, English cannot be our sole focus <cit.>. Hebrew, a Semitic, morphologically rich language, is notoriously difficult to parse <cit.>. Moreover, the resources that are available for Hebrew NLP research focus on traditional tasks, such as part-of-speech (POS) tagging, syntactic parsing, etc., and lack corpora for understanding and reasoning in real-world situations. In this work we present HeGeL, a novel dataset for Hebrew Geo-Location, the first ever Hebrew NLU benchmark involving both grounding and geospatial reasoning. To create HeGeL, we crowdsourced 5,649 geospatially-oriented Hebrew place descriptions of various place types from three cities in Israel. We designed our task based on a realistic scenario of human place description, relying on people's memory of the world, rather than, e.g., using a map <cit.>. Crucially, relying on environmental cognition results in various levels of geospatial knowledge <cit.> that are manifested in the descriptions and the geospatial reasoning that is required to resolve their location <cit.>. To avoid the much simpler task of grounding proper named entities, we explicitly restricted the use of proper names in the description of the place and adjacent landmarks. Unlike the text-based navigation task <cit.>, which requires representing an agent's current perspective, reflecting its route knowledge, we show that the HeGeL task requires a full-environment representation, thus capturing complex geospatial relations among multiple physical entities. Through a thorough linguistic and empirical analysis, we demonstrate the characteristics and challenges associated with Hebrew place descriptions, showing that HeGeL serves both as a challenging NLU benchmark and as a corpus for geospatial cognition research. § THE HEGEL TASK AND DATASET This work addresses the task of geolocating places on a map based on natural language (NL) geospatial descriptions that are given in colloquial language and based on participants' memory of the environment (i.e., cognitive map). The input to the HeGeL task is as follows: (i) an NL place description of the whereabouts of the place, and (ii) a map with rich details of the environment (e.g., physical entities' names, geospatial relations, and attributes). The output is a pair of coordinates (x,y) specifying the physical location of the place described in the text. Figure <ref> shows an example of a place description from HeGeL translated from Hebrew. To simplify the crowdsourcing task and encourage participants' engagement, we frame the data crowdsourcing process as a well-known game, the treasure hunt, in which the instructor-participant is required to describe in writing the location of the treasure, a known place in the city, to a different follower-participant who then needs to locate it on a map. Thus, the online assignment is divided into two tasks: the instructor's writing of place descriptions and the follower's validation. To avoid preconceived notions as to the *correct way to describe a place, we first presented the participants with the task of writing a place description, and once completed, the validation task was given.[Appendix <ref> includes additional data collection details. Appendix <ref> presents a display of the online assignment's UI translated from Hebrew to English.] We hereby provide the details of the two UI tasks: (i) Task 1.
Writing a place description. In this task we requested participants to describe, in free-form text, the location of a place known to them, to a third party who might not be familiar with the whereabouts of that place. To collect place descriptions based solely on people's memory, we did not visualize the area of the place, e.g., on a map. Instead, we ensured that the participants were well familiarized with the place by asking them to state how familiar they are with the place on a scale of 1-5. If this score was 1 or 2, we presented the participant with a different place to describe. To ensure diverse human-generated textual descriptions, places were chosen based on their type, position/location in the city (places were spread across the city), geometry, size, and context. To avoid the use of proper names, we developed a rule-based methodology to make sure that the name of the goal (place) or of the nearby landmarks (< 100 meters) would not appear explicitly in the description. The original description was saved, and the participants were asked to input another description without the above names. (ii) Task 2. Place description validation. To verify that a person who reads the text description will understand where the treasure is hidden, i.e., geolocate the place, we developed a map-based retrieval task. The participant in the follower role was asked to read the crowdsourced textual description and mark its location on the map, i.e., where the treasure is hidden. For marking the location, we implemented an interactive online map based on OpenStreetMap (OSM; http://www.openstreetmap.org),[OSM is a free, editable map of the whole world, that was built by volunteers, with millions of users constantly adding informative tags to the map.] which allows the participants to move and zoom in to precisely pin the described place on the map. The map supports the cognitive process needed to ground mentioned entities to physical entities, reason about the geospatial relations, and locate the described place. To familiarize participants with the interactive map tool and task, they had to first pass a simple map marking test, and only then could they start task 2 of reading place descriptions (given by other participants), marking place locations on the map, and rating the clarity of the textual description on a scale of 1-5. Target Selection and Retrieval Errors The treasure-hunt task we devised included 167 places in the three largest cities in Israel: Tel Aviv, Haifa, and Jerusalem. These three cities are differently shaped, and show different physical, morphological and topographic features, which potentially affect the legibility and imageability of urban components, and therefore also the place descriptions. These differences can be expressed in the use of various physical features and prepositions, e.g., frequent use of the physical object *landmark and the prepositions *above or *below in the hilly terrains that characterize Haifa and Jerusalem. To assess the quality and interpretability of the place descriptions, we calculate the shortest Euclidean distance between the coordinates of the goal's (physical element) shape (polygon, line or point) and the location marked by the 'follower' on the map (task 2); we term this distance the retrieval error. To determine the agreement rate among human participants, each textual place description is validated by at least two participants.
To ensure that we work with descriptions that can be geolocated, we set a hard distance threshold of 300 meters, based on an analysis of the descriptions' clarity scores that we had conducted on a prior (held-out) development corpus we collected for the task. § DATA STATISTICS AND ANALYSIS The resulting HeGeL dataset contains 5,649 validated descriptions paired with their coordinates on a map. The locations are divided among three cities: 2,142 in Tel Aviv, 1,442 in Haifa, and 2,065 in Jerusalem. 1,833 participants completed the writing task, inserting in total 10,946 place descriptions, and 2,050 participants completed 12,655 validation tasks. The dataset is balanced, with about 33 descriptions per place. Figure <ref> shows a Venn diagram representing the relation of the three sets of city-based vocabularies (formed from the unique lemmas produced by the lemmatization tool of <cit.>). The intersection of the three cities contains only 15.07% of the entire vocabulary (the union of the three cities' vocabularies). The shared language is not focused on city-specific terms, such as *Knesset. Instead, it includes rich spatial terms, such as *between, modified prepositions such as *next to, and non-definite entities, such as *street. From the Venn diagram we also conclude that almost half of the lemmas in each of the three city vocabularies are city-specific: 48.6%, 40.65%, and 49.3% for Tel Aviv, Haifa, and Jerusalem, respectively. As such, HeGeL enables a city-split setup, training on one city and testing on a different unseen city, where city-reserved named entities present an out-of-vocabulary (OOV) challenge for models trained on another city. Table <ref> shows an analysis of the linguistic phenomena manifested in the HeGeL dataset, demonstrating the spatial knowledge and reasoning skills required for solving the HeGeL task. We analyzed the frequency of the five types of elements in a city defined by <cit.>, along with the three types of spatial knowledge defined in <cit.>, and other spatial properties. The frequent use of cardinal directions, as well as the use of survey knowledge, suggests that any NLP model built to deal with the HeGeL task should not only represent a local view of the goal, or possible routes, but also take into consideration the full region, and mimic people's map-like view of the environment. Therefore, unlike navigation tasks where only the agent's current perspective is represented in the model, this task requires a full representation of the environment. We further perform a quantitative analysis of the word tokens and lemmas that appear in HeGeL, depicted in Table <ref>. Overall, the HeGeL dataset contains a large vocabulary of 9,207 unique tokens and 6,663 unique lemmas. There are mentions of physical entities, but as we limited the mentions of named entities of the described place and landmarks adjacent to it, these are relatively rare, and are mostly references to prominent city landmarks. Also, as most place descriptions are not route-based descriptions, there are only a few verbs used in the descriptions. Prepositions, on the other hand, are abundant. In Table <ref>, using a one-way analysis of variance (ANOVA) test, we found a significant (p<0.05) difference in the distribution of the following features across place types: number of named entities, number of verbs, human verification retrieval error, and clarity score. § EXPERIMENTS We create a zero-shot (ZS) city-based split, such that we train on one city and test on another.
The train, development, and test sets correspond to the descriptions collected in Tel Aviv, Haifa, and Jerusalem, respectively. We evaluate different baseline models for the geolocation task on the HeGeL dataset. We use three evaluation metrics based on the retrieval error: mean, median, and task completion (TC) accuracy – the percentage of place descriptions located within the 300-meter threshold. We provide three baselines for the HeGeL task. We first assess a brute-force NER approach; i.e., we test whether recognizing named entities in the text and retrieving their corresponding coordinates is sufficient for solving the HeGeL task of geolocation. To this end, we used the Google Maps API and produced two baseline models: (i) Google Maps API Query — we queried the API with the full raw text descriptions as input, with no preprocessing; and (ii) Oracle NER — we queried all 1-5 n-grams against the Google Maps API and retrieved the closest geolocation to the goal. In our second approach, we employ a dual-encoder model. One encoder encodes the text using a Hebrew monolingual pre-trained encoder, AlephBERT <cit.>, which produces a 768-dimensional vector representation of the text. The other encoder processes the environment, which is represented as a graph based on OSM data. Each point of interest in the graph is connected to an S2Cell[S2Cells are based on S2-geometry (https://s2geometry.io/), a hierarchical discretization of the Earth's surface <cit.>.], which contains its geometry and is based on S2-geometry. These S2Cells are encoded using a random-walk algorithm to produce a 64-dimensional vector for each cell. These vectors are then passed through a linear layer to produce 768-dimensional vectors. We calculate the cosine similarity score between the text and environment vectors and use it to align the respective representations via maximization of the cosine similarity score with a cross-entropy loss over the scores. Performing an ANOVA test, we found a significant (p<0.05) difference in the distribution of the Oracle NER retrieval error across place types. The mean retrieval errors of the Path and Node place types were the lowest in both human verification and Oracle NER. This suggests that both of these place types are easier for humans to geolocate. The results in Table <ref> show that our task is not solvable with adequate resolution by the Google Maps API. The human performance provides an upper bound for the HeGeL task performance, while the simple Google Maps API Query provides a lower bound. The Google API model's low performance suggests that NER- and gazetteer-based methods in and of themselves are insufficient to handle the HeGeL task successfully, and that geospatial reasoning is necessary. The Dual-encoder's low performance on the ZS split suggests that OOV is a major challenge. The few-shot (FS) split shows an improvement of the model after fine-tuning on additional samples from the test region (FS 20% and 80%). This suggests that a possible solution for the city-split setup might be data augmentation via generating grounded descriptions for the tested region – an approach we reserve for future research. § CONCLUSION The contribution of this paper is threefold. First, we present the first geolocation benchmark with Hebrew place descriptions. Second, to the best of our knowledge, this is the only crowdsourced geolocation dataset, thus eliciting explicit geospatial descriptions, allowing for better retrieval resolution.
Finally, our analysis shows that the dataset presents complex spatial reasoning challenges which require a novel environmental model representation. § LIMITATIONS While we aim for our HeGeL crowdsourcing methodology to be applicable to other languages, and in particular low-resource languages, the UI design and our analyses require knowledge of the intended language, as well as familiarity with the regions where it is spoken. Moreover, as our methodology relies on people's familiarity with the places, it limits the cities chosen for the task and the participants that could take part, restricting the demographics of the participants accordingly. In addition, relying on people's memory of the environment causes many of the descriptions to be too vague for humans to geolocate; thus, many of the descriptions were disqualified during the validation process as they could not be resolved. The relatively low percentage of place descriptions that were successfully validated raises the cost of collecting such a dataset. § ACKNOWLEDGEMENTS This research is funded by a grant from the European Research Council, ERC-StG grant number 677352, and a grant by the Israeli Ministry of Science and Technology (MOST), grant number 3-17992, for which we are grateful. § DATA COLLECTION DETAILS We used the services of an Israeli surveying company to distribute the assignment to native Hebrew-speaking participants in Israel only. The survey company was charged with distributing the assignments to a balanced set of participants in terms of their demographic and geographic characteristics (e.g., an equal number of males and females). All participants were given full payment, irrespective of whether they correctly completed the task. The first page the participants viewed contained a disclosure about the assignments being part of academic research and the purpose of the assignments. The assignment protocol was approved by a behavioral review board. This approval was also presented to the participants on the initial screen. Also, the participants were required to read an informed consent form and sign an agreement box. § PARTICIPANT INTERFACE The tasks are performed via an online assignment application, depicted in Figures <ref>-<ref>. § EXPERIMENTAL SETUP DETAILS The cross-entropy loss function was optimized with the Adam optimizer <cit.>. The hyperparameter tuning is based on the average results of runs with three different seeds. The learning rate was searched in [1e-5, 1e-4, 1e-3] and 1e-5 was chosen. The S2-cell level was searched in [13, 15, 17] and 13 was chosen. The number of epochs for early stopping was based on their average learning curve.
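To make the dual-encoder baseline described above more concrete, here is a minimal PyTorch-style sketch, assuming the 768-dimensional AlephBERT description embeddings and the 64-dimensional random-walk S2-cell embeddings are precomputed; the class and function names, the batching scheme, and the temperature are our own assumptions and not the authors' released code. Training minimizes a cross-entropy loss over the cosine-similarity scores so that each description's true S2 cell receives the highest score.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderGeolocator(nn.Module):
    """Scores a batch of description embeddings against a set of candidate S2 cells."""

    def __init__(self, text_dim: int = 768, cell_dim: int = 64):
        super().__init__()
        # Project the 64-d random-walk cell embeddings into the 768-d text space.
        self.cell_proj = nn.Linear(cell_dim, text_dim)

    def forward(self, text_vecs: torch.Tensor, cell_vecs: torch.Tensor) -> torch.Tensor:
        # text_vecs: (batch, 768) pooled description embeddings (e.g. from AlephBERT)
        # cell_vecs: (num_cells, 64) precomputed S2-cell embeddings
        cells = self.cell_proj(cell_vecs)                                    # (num_cells, 768)
        return F.cosine_similarity(text_vecs.unsqueeze(1),                   # (batch, 1, 768)
                                   cells.unsqueeze(0), dim=-1)               # (batch, num_cells)

def training_step(model, text_vecs, cell_vecs, target_cell_ids, temperature=0.1):
    """Cross-entropy over the cosine scores; the temperature scaling is our assumption."""
    scores = model(text_vecs, cell_vecs) / temperature
    return F.cross_entropy(scores, target_cell_ids)
```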
http://arxiv.org/abs/2307.02057v1
20230705064731
Benchmark computations of dynamic poroelasticity
[ "Mathias Anselmann", "Markus Bause", "Nils Margenberg", "Pavel Shamko" ]
math.NA
[ "math.NA", "cs.NA", "65M60, 65M55, 76S05" ]
Benchmark computations of dynamic poroelasticity Mathias Anselmann^†, Markus Bause^† ([email protected], corresponding author), Nils Margenberg^†, Pavel Shamko^† ^† Helmut Schmidt University, Faculty of Mechanical and Civil Engineering, Holstenhofweg 85, 22043 Hamburg, Germany ==================================================================================================================================================================================================================================================== We present benchmark computations of dynamic poroelasticity modeling fluid flow in deformable porous media by a coupled hyperbolic-parabolic system of partial differential equations. A challenging benchmark setting and goal quantities of physical interest for this problem are proposed. Computations performed by space-time finite element approximations with continuous and discontinuous discretizations of the time variable are summarized. By this work we intend to stimulate comparative studies by other research groups for the evaluation of dynamic poroelasticity solvers regarding the accuracy of discretization techniques, the efficiency and robustness of iterative methods for the linear systems and the arrangement of the model equations in terms of their variables (two-field or multi-field formulations). § INTRODUCTION AND MATHEMATICAL MODEL In this work we present benchmark computations for families of space-time finite element approximations (cf., e.g., <cit.>) to the coupled hyperbolic-parabolic problem ρ∂_t^2 u⃗ - ∇·(C⃗ ε⃗(u⃗)) + α∇ p = ρf⃗ , c_0∂_t p + α∇·∂_t u⃗ - ∇·(K⃗ ∇p) = g , in Ω×(0,T] , u⃗ (0) = u⃗_0 , ∂_t u⃗ (0) = u⃗_1 , p(0) = p_0 , in Ω×{0} , u⃗ = u⃗_D , on Γ_u⃗^D ×( 0,T] , -(C⃗ε⃗(u⃗) - αpE⃗) n⃗ = t⃗_N , on Γ_u⃗^N ×(0,T] , p = p_D , on Γ_p^D ×( 0,T] , - K⃗ ∇p ·n⃗ = p_N , on Γ_p^N ×(0,T] . Componentwise or directional boundary conditions for u⃗, given by u⃗·n⃗ = 0 and (σ⃗(u⃗)n⃗) ·t⃗_i = 0 , for i=1,…, d-1 , on Γ^d_u⃗× (0,T] , are applied further for the sake of physical realism; cf. Fig. <ref>. In (<ref>), Ω⊂ℝ^d, with d∈{2,3}, is an open bounded Lipschitz domain with outer unit normal vector n⃗ to the boundary ∂Ω and T>0 is the final time point. For (<ref>) and (<ref>), we let ∂Ω = Γ_u⃗^D∪Γ_u⃗^N and ∂Ω = Γ_p^D∪Γ_p^N with closed portions Γ_u⃗^D and Γ_p^D of non-zero measure. In (<ref>), we denote by t⃗_i, for i=1,…,d-1, the unit basis vectors of the tangent space at x⃗∈Γ^d_u⃗. For (<ref>), the decomposition ∂Ω = Γ_u⃗^N∪Γ^d_u⃗ of the boundary of an L-shaped domain Ω that is used for our computations is illustrated in Fig. <ref>. The unknowns in (<ref>) are the vector-valued variable u⃗ and the scalar function p. The quantity ε⃗(u⃗):= (∇u⃗ + (∇u⃗)^⊤)/2 denotes the symmetrized gradient and E⃗∈ℝ^d,d is the identity matrix. For brevity, the positive quantities ρ>0, α>0 and c_0 >0 as well as the tensors C⃗ and K⃗ are assumed to be constant in space and time. The tensors C⃗ and K⃗ are supposed to be symmetric and positive definite, ∃ k_0>0 ∀ξ⃗= ξ⃗^⊤∈ℝ^d,d: ∑_i,j,k,l=1^d ξ_ij C_ijklξ_kl≥ k_0 ∑_j,k=1^d |ξ_jk|^2 , ∃ k_1>0 ∀ξ⃗∈ℝ^d: ∑_i,j=1^d ξ_i K_ijξ_j≥ k_1 ∑_i=1^d |ξ_i|^2 . Under these assumptions, well-posedness of (<ref>) is ensured. This has been shown by different mathematical techniques and for several combinations of boundary conditions in, e.g., <cit.>. In <cit.>, well-posedness of a fully discrete space-time finite element approximation is proved for the boundary conditions in (<ref>), applied to the L-shaped domain Ω of Fig. <ref>.
Important and classical applications of the system (<ref>) arise in poroelasticity and thermoelasticity; cf., e.g., <cit.> and <cit.>. Recently, generalizations of the system (<ref>) to soft materials have strongly attracted researchers' interest in biomedicine; cf., e.g., <cit.> and the references therein. In neurophysiology, such generalizations are used to model, simulate and elucidate circulatory diseases, such as ischaemic stroke, or also Alzheimer's disease. In poroelasticity, Eqs. (<ref>) are referred to as the dynamic Biot model. The system (<ref>) is used to describe flow of a slightly compressible viscous fluid through a deformable porous matrix. The small deformations of the matrix are described by the Navier equations of linear elasticity, and the diffusive fluid flow is described by Duhamel’s equation. The unknowns are the effective solid phase displacement u⃗ and the effective fluid pressure p. The quantity ε⃗(u⃗) is the strain tensor. Further, ρ is the effective mass density, C⃗ is Gassmann’s fourth order effective elasticity tensor, α is Biot’s pressure-storage coupling tensor, c_0 is the specific storage coefficient and K⃗ is the permeability. In thermoelasticity, p denotes the temperature, c_0 is the specific heat of the medium, and K⃗ is the conductivity. Then, the quantity α∇ p arises from the thermal stress in the structure, and α∇·∂_t u⃗ models the internal heating due to the dilation rate. § SPACE-TIME FINITE ELEMENT APPROXIMATION We rewrite (<ref>) as a first-order in time system by introducing the new variable v⃗:= ∂_t u⃗. Then, we recover (<ref>) as ∂_t u⃗ - v⃗ = 0⃗ , ρ∂_t v⃗ - ∇· (C⃗ε⃗(u⃗)) + α∇⃗p = ρf⃗ , c_0∂_t p + α∇·v⃗ - ∇· (K⃗∇⃗p) = g , along with the initial and boundary conditions (<ref>) to (<ref>). We benchmark the application of space-time finite element methods to (<ref>), with continuous and discontinuous Galerkin methods for the discretization of the time variable and inf-sup stable pairs of finite elements for the approximation of the space variables. To introduce the scheme we need notation. For the time discretization, we decompose I:=(0,T] into N subintervals I_n=(t_n-1,t_n], n=1,…,N, where 0=t_0<t_1< ⋯ < t_N-1 < t_N = T such that I=⋃_n=1^N I_n. We put τ := max_n=1,…, Nτ_n with τ_n = t_n-t_n-1. The set ℳ_τ := {I_1,…, I_N} of time intervals is called the time mesh. For a Banach space B, any k∈_0 and ℙ_k(I_n;B) := {w_τ : I_n → B , w_τ(t) = ∑_j=0^k W^j t^j ∀ t∈ I_n , W^j ∈ B ∀ j } we define the space of piecewise polynomial functions in time with values in B by Y_τ^k (B) := {w_τ: I→ B | w_τ_|I_n∈ℙ_k(I_n;B) ∀ I_n∈ℳ_τ, w_τ(0)∈ B }⊂ L^2(I;B) . For any function w: I→ B that is piecewise sufficiently smooth with respect to the time mesh ℳ_τ, for instance for w∈ Y^k_τ (B), we define the right-hand sided and left-hand sided limit at a mesh point t_n by w^+(t_n) := lim_t→ t_n+0 w(t) for n<N and w^-(t_n) := lim_t→ t_n-0 w(t) for n>0. Further, for a Banach space B and any k∈ we define X_τ^k (B) := {w_τ∈ C(I;B) | w_τ_|I_n∈ℙ_k(I_n;B) ∀ I_n∈ℳ_τ} . For time integration, it is natural to use the right-sided (k+1)-point Gauss–Radau (GR) quadrature formula in a discontinuous Galerkin approach and the (k+1)-point Gauss–Lobatto (GL) quadrature formula in a continuous one. On I_n, they read as Q_n^q(w) := τ_n/2∑_μ=1^k+1ω̂_μ^q w(t_n,μ^q ) ≈∫_I_n w(t) t , for q∈{GR,GL} , where t_n,μ^q=T_n(t̂_μ^q), for μ = 1,…,k+1, are the quadrature points on I_n and ω̂_μ^q the corresponding weights of the respective quadrature formula. 
Here, T_n(t̂):=(t_n-1+t_n)/2 + (τ_n/2)t̂ is the affine transformation from Î = [-1,1] to I_n and t̂_μ^q are the quadrature points on Î. Formula (<ref>) is exact for all w∈ℙ_2k (I_n;) if q=GR, and for all w∈ℙ_2k-1 (I_n;) if q=GL . For the space discretization, let 𝒯_h={K} be the quasi-uniform decomposition of Ω into (open) quadrilaterals or hexahedrals, with mesh size h>0. These element types are chosen for our implementation (cf. Sec. <ref> and <ref>) that uses the deal.II library <cit.>. The finite element spaces used for approximating the unknowns u⃗, v⃗ and p of (<ref>) in space are of the form V⃗_h := {v⃗_h ∈C(Ω)^d |v⃗_h_|K∈V⃗(K) for all K ∈𝒯_h} , Q_h := {q⃗_h ∈L^2(Ω)|q⃗_h_|K∈Q(K) for all K ∈𝒯_h} . For the local spaces V⃗(K) and Q(K) we employ mapped versions of the inf-sup stable pair ℚ_r^d/ℙ_r-1^, for r≥ 2, of finite element spaces; cf. <cit.>. The pair ℚ_r^d/ℙ_r-1^ with a discontinuous approximation of p in the broken polynomial space Q_h has proved excellent accuracy and stability properties for higher-order approximations of mixed (or saddle point) systems like the Navier–Stokes equations and the applicability of geometric multigrid preconditioner for the algebraic systems; cf., e.g., <cit.>. For w⃗_h, χ⃗_h∈V⃗_h and q_h, ψ_h ∈ Q_h we define the bilinear forms A_γ(w⃗_h,χ⃗_h) := ⟨C⃗ ε⃗(w⃗_h),ε⃗(χ⃗_h)⟩- ⟨C⃗ε⃗(w⃗_h) n⃗, χ⃗_h⟩_Γ^D_u⃗+ a_γ(w⃗_h,χ⃗_h) , C (χ⃗_h,q_h) := -α⟨∇·χ⃗_h, q_h⟩+ α⟨χ⃗_h ·n⃗ , q_h ⟩_Γ^D_u⃗ , B_γ(q_h,ψ_h) := [ ∑_K∈𝒯_l ⟨K⃗ ∇q_h, ∇ψ_h ⟩_K - ∑_F∈ℱ_h (⟨K⃗ ∇q_h ·n⃗, ψ_h ⟩_F + ⟨q_h, K⃗ ∇ψ_h ·n⃗ ⟩_F) + b_γ(q_h,ψ_h) , ] where a_γ (·,·) in (<ref>) is given by a_γ (w⃗ ,χ⃗_h) := - ⟨w⃗, C⃗ε⃗(χ⃗_h) n⃗⟩_Γ^D_u⃗ + γ_ah_F^-1⟨w⃗, χ⃗_h ⟩_Γ^D_u⃗, for w⃗∈H⃗^1/2(Γ^D_u⃗), and b_γ (·,·) in (<ref>) is defined by b_γ (q_h,ψ_h) = ∑_F∈ℱ_hγ_bh_F^-1⟨ q_h , ψ_h⟩_F. The form B_γ yields a symmetric interior penalty discontinuous Galerkin discretization of the scalar variable p; cf., e.g., <cit.>. As usual, the average · and jump · for a function w of a broken space on an interior face F between two elements K^+ and K^-, such that F=∂ K^+ ∩∂ K^-, are defined by w := 1/2 (w^++ w^-) and w : = w^+ - w^-. For boundary faces F ⊂∂ K ∩∂Ω, we set w:= w_|K and w := w_|K. The set of all faces (interior and boundary faces) on 𝒯_h is denoted by ℱ_h. In (<ref>), the parameter γ_b in b_γ has to be chosen sufficiently large, such that the discrete coercivity of B_γ on Q_h is preserved. The local length h_F is chosen as h_F = h_F := 1/2 (|K^+|_d + |K^-|_d) with Hausdorff measure |· |_d; cf. <cit.>. For boundary faces we set h_F := |K|_d. In a_γ (·,·) , the quantity γ_a is the algorithmic parameter of the stabilization (or penalization) term in the Nitsche formulation <cit.> for incorporating Dirichlet boundary conditions in weak form which is applied here. To ensure well-posedness of the discrete systems, the parameter γ_a has to be chosen sufficiently large as well; cf. <cit.>. Based on our numerical experiments we choose the algorithmic parameter γ_a and γ_b as γ_a = 5· 10^4 · r · (r+1) and γ_b = 1/2· r · (r-1), where r is the polynomial degree of V⃗_h in (<ref>). Finally, for f⃗∈H⃗^-1(Ω), u⃗_D ∈H⃗^1/2(Γ^D_u⃗), t⃗_N ∈H⃗^-1/2(Γ^N_u⃗) and g∈ H^-1(Ω), p_D ∈ H^1/2(Γ^D_p), p_N∈ H^-1/2(Γ^N_p) we put F_γ(χ⃗_h) := ⟨f⃗ , χ⃗_h⟩- ⟨t⃗_N,χ⃗_h ⟩_Γ^N_u⃗ + a_γ(u⃗_D ,χ⃗_h) , G_γ(ψ_h) := ⟨g,ψ_h ⟩- ∑_F∈ℱ_h^D,u⃗ α⟨v⃗_D ·n⃗ , ψ_h ⟩_F - ∑_F∈ℱ_h^D,p⟨p_D, K⃗ ∇ψ_h ·n⃗ ⟩_F + ∑_F∈ℱ_h^D,p γ_bh_F^-1 ⟨p_D, ψ_h⟩_F - ∑_F⊂ℱ_h^N,p ⟨p_N, ψ_h ⟩_F . 
Here, we denote by ℱ_h^D,p⊂ℱ_h and ℱ_h^N,p⊂ℱ_h the set of all element faces on the boundary parts Γ_p^D and Γ_p^N, respectively; cf. (<ref>). The second of the terms on the right-hand side of G_γ(·,·) with v⃗_D = ∂_t u⃗_D, is added to ensure consistency of the form C(·,·) in (<ref>); cf. <cit.> for details. We use a temporal test basis that is supported on the subintervals I_n. Then, a time marching process is obtained. In that, we assume that the trajectories u⃗_τ,h, v⃗_τ,h and p_τ,h have been computed before for all t∈ [0,t_n-1], starting with approximations u⃗_τ,h(t_0) :=u⃗_0,h, v⃗_τ,h(t_0) :=u⃗_1,h and p_τ,h(t_0) := p_0,h of the initial values u⃗_0, u⃗_1 and p_0. Then, we consider solving the following local problems on I_n of the discontinuous (dG(k)) and continuous (cG(k)) Galerkin approximation in time; cf. <cit.>. [I_n-problem for dG(k)] Let k∈_0. For given u⃗_h^n-1:= u⃗_τ,h(t_n-1)∈V⃗_h, v⃗_h^n-1:= v⃗_τ,h(t_n-1)∈V⃗_h, and p_h^n-1:= p_τ,h(t_n-1) ∈ Q_h with u⃗_τ,h(t_0) :=u⃗_0,h, v⃗_τ,h(t_0) :=u⃗_1,h and p_τ,h(t_0) := p_0,h, find (u⃗_τ,h,v⃗_τ,h,p_τ,h) ∈ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;Q_h) such that for all (ϕ⃗_τ,h,χ⃗_τ,h,ψ_τ,h)∈ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;Q_h), Q_n^GR (⟨∂_t u⃗_τ,h , ϕ⃗_τ,h ⟩- ⟨v⃗_τ,h , ϕ⃗_τ,h ⟩) + ⟨u⃗^+_τ,h(t_n-1), ϕ⃗_τ,h^+(t_n-1)⟩= ⟨u⃗_h^n-1, ϕ⃗_τ,h^+(t_n-1)⟩ , Q_n^GR (⟨ρ∂_t v⃗_τ,h , χ⃗_τ,h ⟩+ A_γ(u⃗_τ,h, χ⃗_τ,h ) + C(χ⃗_τ,h,p_τ,h)) + ⟨ρv⃗^+_τ,h(t_n-1), χ⃗_τ,h^+(t_n-1)⟩ = Q_n^GR (F_γ(χ⃗_τ,h)) + ⟨ρv⃗_h^n-1, χ_τ,h^+(t_n-1)⟩ , Q_n^GR (⟨c_0 ∂_t p_τ,h,ψ_τ,h ⟩- C(v⃗_τ,h,ψ_τ,h)+ B_γ(p_τ,h, ψ_τ,h)) + ⟨c_0 p^+_τ,h(t_n-1), ψ_τ,h^+(t_n-1)⟩ = Q_n^GR ( G_γ(ψ_τ,h)) + ⟨c_0 p_h^n-1, ψ_τ,h^+(t_n-1)⟩ . The trajectories defined by Problem <ref>, for n = 1,…,N, satisfy that u⃗_τ,h,v⃗_τ,h∈ Y_τ^k(V⃗_h) and p_τ,h∈ Y_τ^k(Q_h); cf. (<ref>). Well-posedness of Problem <ref> is ensured; cf. <cit.>. [I_n-problem for cG(k)] Let k∈. For given u⃗_h^n-1:= u⃗_τ,h(t_n-1)∈V⃗_h, v⃗_h^n-1:= v⃗_τ,h(t_n-1)∈V⃗_h, and p_h^n-1:= p_τ,h(t_n-1) ∈ Q_h with u⃗_τ,h(t_0) :=u⃗_0,h, v⃗_τ,h(t_0) :=u⃗_1,h and p_τ,h(t_0) := p_0,h, find (u⃗_τ,h,v⃗_τ,h,p_τ,h) ∈ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;V⃗_h) ×ℙ_k (I_n;Q_h) such that u⃗_τ,h^+(t_n-1)=u⃗_h^n-1 , v⃗_τ,h^+(t_n-1)=v⃗_h^n-1 and p_τ,h^+(t_n-1)= p_h^n-1 and, for all (ϕ⃗_τ,h,χ⃗_τ,h,ψ_τ,h)∈ℙ_k-1 (I_n;V⃗_h) ×ℙ_k-1 (I_n;V⃗_h) ×ℙ_k-1 (I_n;Q_h), Q_n^GL (⟨∂_t u⃗_τ,h , ϕ⃗_τ,h ⟩- ⟨v⃗_τ,h , ϕ⃗_τ,h ⟩) = 0 , Q_n^GL (⟨ρ∂_t v⃗_τ,h , χ⃗_τ,h ⟩+ A_γ(u⃗_τ,h, χ⃗_τ,h ) + C(χ⃗_τ,h,p_τ,h)) = Q_n^GL (F_γ(χ⃗_τ,h)) , Q_n^GL (⟨c_0 ∂_t p_τ,h,ψ_τ,h ⟩- C(v⃗_τ,h,ψ_τ,h)+ B_γ(p_τ,h, ψ_τ,h)) = Q_n^GL ( G_γ(ψ_τ,h)) . The trajectories defined by Problem <ref>, for n = 1,…,N, satisfy that u⃗_τ,h,v⃗_τ,h∈ X_τ^k(V⃗_h) and p_τ,h∈ X_τ^k(Q_h); cf. (<ref>). Well-posedness of Problem <ref> can be shown following the arguments <cit.>. Precisely, the cG(k) scheme of Problem <ref> represents a Galerkin–Petrov approach since trial and test spaces differ from each other. Problem <ref> and <ref> lead to large linear algebraic systems with complex block structure, in particular for larger values of the piecewise polynomial order in time k. This puts a facet of complexity on their solution, in particular if three space dimensions are involved. To solve such type of block systems, we use GMRES iterations that are preconditioned by a V-cycle geometric multigrid method based on a local Vanka smoother; cf. <cit.> for details. § NUMERICAL CONVERGENCE TEST [4] Firstly, we investigate the schemes proposed in Problem <ref> and <ref> by a numerical convergence study. 
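For concreteness, the following minimal sketch lists the (k+1)-point right-sided Gauss–Radau and Gauss–Lobatto rules on Î=[-1,1] for k=2, the value used for the dG(2) scheme below, and verifies the exactness degrees 2k and 2k-1 stated above. It is an illustration only, independent of the actual implementation, and the final line shows the mapping to a subinterval I_n via the affine transformation T_n.

```python
# Minimal sketch: the (k+1)-point time quadrature rules entering Q_n^GR and Q_n^GL
# for k = 2 on the reference interval Ihat = [-1, 1], with a check of the
# exactness degrees 2k (Gauss-Radau) and 2k-1 (Gauss-Lobatto).
import numpy as np

s6 = np.sqrt(6.0)
# right-sided 3-point Gauss-Radau rule (includes the endpoint +1)
x_gr = np.array([-(1.0 + s6) / 5.0, -(1.0 - s6) / 5.0, 1.0])
w_gr = np.array([(16.0 - s6) / 18.0, (16.0 + s6) / 18.0, 2.0 / 9.0])
# 3-point Gauss-Lobatto rule (includes both endpoints)
x_gl = np.array([-1.0, 0.0, 1.0])
w_gl = np.array([1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0])

def exactness_degree(x, w, max_deg=8):
    """Largest m such that all monomials up to degree m are integrated exactly on [-1,1]."""
    deg = -1
    for m in range(max_deg + 1):
        exact = 2.0 / (m + 1) if m % 2 == 0 else 0.0
        if abs(np.dot(w, x**m) - exact) > 1e-12:
            break
        deg = m
    return deg

k = 2
assert exactness_degree(x_gr, w_gr) == 2 * k        # Gauss-Radau: exact up to degree 2k
assert exactness_degree(x_gl, w_gl) == 2 * k - 1    # Gauss-Lobatto: exact up to degree 2k-1

def Q_n(f, t0, t1, x, w):
    """Quadrature on I_n = (t0, t1] via the affine map T_n."""
    tau = t1 - t0
    t = 0.5 * (t0 + t1) + 0.5 * tau * x
    return 0.5 * tau * np.dot(w, f(t))

print(Q_n(np.sin, 0.0, 0.1, x_gr, w_gr), 1.0 - np.cos(0.1))  # close agreement
```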
From the point of view of numerical costs for solving the algebraic counterparts of (<ref>) and (<ref>), (<ref>), respectively, the dG(k) member of the family of schemes in Problem <ref> can be compared with the cG(k+1) scheme of Problem <ref>. If in cG(k) the Gauss–Lobatto quadrature points are used for building the Lagrange interpolation in time on I_n, the degrees of freedom at time t_n-1 are directly obtained from the vector identities corresponding to (<ref>). Using this, the algebraic system of cG(k) can be condensed. Then, the dimension of the resulting algebraic system for cG(k+1) coincides with the one obtained for dG(k). We study (<ref>) for Ω=(0,1)^2 and I=(0,2] and the prescribed solution u( x, t) = ϕ( x, t) E_2 and p( x, t) = ϕ( x, t) with ϕ( x, t) = sin(ω_1 t^2) sin(ω_2 x_1) sin(ω_2 x_2) and ω_1=ω_2 = π. We put ρ=1.0, α=0.9, c_0=0.01 and K= E_2 with the identity E⃗_2∈^2,2. For the fourth order elasticity tensor C, isotropic material properties with Young's modulus E=100 and Poisson's ratio ν=0.35, corresponding to the Lamé parameters λ = 86.4 and μ = 37.0, are chosen. For the space-time convergence test, the domain Ω is decomposed into a sequence of successively refined meshes of quadrilateral finite elements. The spatial and temporal mesh sizes are halved in each of the refinement steps. The step sizes of the coarsest space and time mesh are h_0=1/(2√(2)) and τ_0=0.1. For the dG(k) scheme of Problem <ref> we choose the polynomial degrees k=2 and r=4, such that discrete solutions u⃗_τ,h, v⃗_τ,h∈ Y_τ^2(V⃗_h), p_τ,h∈ Y_τ^2(Q_h) with local spaces ℚ_4^2/ P_3^disc are obtained. For the cG(k) scheme of Problem <ref> we choose the polynomial degrees k=3 and r=4, such that discrete solutions u⃗_τ,h, v⃗_τ,h∈ X_τ^3(V⃗_h), p_τ,h∈ X_τ^3(Q_h) with local spaces ℚ_4^2/ P_3^disc are obtained. The calculated errors and corresponding experimental orders of convergence are summarized in Tab. <ref> and <ref>, respectively. The error is measured in the quantities associated with the energy of the system (<ref>); cf. <cit.> and <cit.>. Table <ref> nicely confirms the optimal rates of convergence with respect to the polynomial degrees in space and time for the cG(3) scheme. The superiority of the cG(3) scheme over the dG(2) scheme is clearly observed. We note that r=3 would have been sufficient for dG(2) to ensure third order convergence in space and time; r=4 was only chosen to equilibrate the costs for solving the algebraic systems and, thereby, to compare both approaches fairly to each other. § BENCHMARK COMPUTATIONS Here we propose the two-dimensional case of our benchmark problem for dynamic poroelasticity. We intend to stimulate other research groups to use this benchmark for the evaluation of their schemes and implementations and to contribute to its dissemination and, possibly, further improvement. The test problem is expected to enable comparative studies of different formulations of (<ref>) in terms of unknowns (two-field versus multi-field arrangements) and to benchmark numerical approaches with respect to their accuracy and efficiency. We consider the L-shaped domain Ω⊂^2 sketched in Fig. <ref> along with the boundary conditions for u⃗ prescribed on the different parts of ∂Ω. Beyond the boundary conditions (<ref>), we apply the (homogeneous) directional boundary conditions (<ref>) on the portion Γ_u⃗^d of ∂Ω. For their implementation in the forms of Subsec. <ref> we refer to <cit.>.
We aim to compute goal quantities of physical interest that are defined by G_u⃗ = ∫_Γ_m u⃗·n⃗ do and G_p = ∫_Γ_m p do for Γ_m:= {0.75}× (0,0.5). On the left part of the upper boundary, i.e. for Γ^N_u⃗:=(0,0.5)×{1}, we impose the traction force t⃗_N = -64x^2(16x-3)sin(8π t) for x ∈ [0,1/8] and t⃗_N = 16/27(2x-1)^2(16x+1)sin(8π t) for x ∈ (1/8,0.5]. On the right boundary we put t⃗_N=0⃗. For the variable p we prescribe a homogeneous Dirichlet condition (<ref>) on the left upper part of ∂Ω, i.e., for (x_1,x_2)∈Γ_p^D:=[0,0.5]×{1}. On Γ_p^N:=∂Ω\Γ_p^D we impose the homogeneous Neumann condition p_N=0 in (<ref>). We put ρ=1.0, α=0.9, c_0=0.01 and K= E_2 with the identity E⃗_2∈^2,2. For the elasticity tensor C, isotropic material properties with Young's modulus E=20000 and Poisson's ratio ν=0.3 are used. The final time is T=8. Fig. <ref> and <ref> illustrate and compare the results of our computations for the schemes presented in Problem <ref> and <ref>, respectively. In Fig. <ref>, the convergence in space and time of the goal quantity G_u⃗ of (<ref>) is clearly observed for both families of discretizations. In Fig. <ref>, the superiority of the higher order members of the dG(k) and cG(k) schemes is illustrated. The scheme dG(1) and, on the coarser time mesh, the scheme cG(2) are strongly erroneous. For the poroelasticity system (<ref>), this illustrates the sensitivity of numerical predictions with respect to the time discretization and argues for higher order approaches that, however, put an additional facet of complexity on the efficient (iterative) solution of the algebraic equations; cf. <cit.>. Finally, in Tab. <ref> characteristics of the computed goal quantities are summarized. § ACKNOWLEDGEMENTS Computational resources (HPC-cluster HSUper) have been provided by the project hpc.bw, funded by dtec.bw — Digitalization and Technology Research Center of the Bundeswehr. dtec.bw is funded by the European Union — NextGenerationEU. AB23 M. Anselmann, M. Bause, A geometric multigrid method for space-time finite element discretizations of the Navier–Stokes equations and its application to 3d flow simulation, ACM Trans. Math. Softw., 49 (2023), Article No.: 5, pp. 1–25, https://doi.org/10.1145/3582492. ABMS23 M. Anselmann, M. Bause, N. Margenberg, P. Shamko, An energy-efficient GMRES–Multigrid solver for space-time finite element computation of dynamic poro- and thermoelasticity, Comput. Mech., submitted (2023), pp. 1–30; arXiv:2303.06742. AB22 M. Anselmann, M. Bause, Efficiency of local Vanka smoother geometric multigrid preconditioning for space-time finite element methods to the Navier–Stokes equations, PAMM Proc. Appl. Math. Mech., 22 (2022), doi:10.1002/pamm.202200088, pp. 1–6. Aetal21 D. Arndt, W. Bangerth, B. Blais, M. Fehling, R. Gassmöller, T. Heister, L. Heltai, U. Köcher, M. Kronbichler, M. Maier, P. Munch, J.-P. Pelteret, S. Proell, K. Simon, B. Turcksin, D. Wells, J. Zhang, The deal.II Library, Version 9.3, J. Numer. Math., 29 (2021), pp. 171–186. BAKR22 M. Bause, M. Anselmann, U. Köcher, F. A. Radu, Convergence of a continuous Galerkin method for hyperbolic-parabolic systems, Comput. Math. with Appl., submitted (2022), pp. 1–24; arXiv:2201.12014. B02 R. Becker, Mesh adaptation for Dirichlet flow control via Nitsche's method, Commun. Numer. Meth. Engrg., 18 (2002), pp. 669–680. B41 M. Biot, General theory of three-dimensional consolidation, J. Appl. Phys., 12 (1941), pp. 155–164. B55 M. Biot, Theory of elasticity and consolidation for a porous anisotropic solid, J. Appl. Phys., 26 (1955), pp.
182–185. B72 M. Biot, Theory of finite deformations of porous solids, Indiana Univ. Math. J., 21 (1972), pp. 597–620. BKNR22 J. W. Both, N. A. Barnafi, F. A. Radu, P. Zunino, A. Quarteroni, Iterative splitting schemes for a soft material poromechanics model, Comput. Methods Appl. Mech. Engrg., 388 (2022), 114183. CADQ22 M. Corti, P. F. Antonetti, L. Dede, A. M. Quarteroni, Numerical modelling of the brain poromechanics by high-order discontinuous Galerkin methods, arXiv:2210.02272, pp. 1–28. C72 D. E. Carlson, Linear thermoelasticity, Handbuch der Physik V Ia/2, Springer, Berlin, 1972. HST13 S. Hussain, F. Schieweck, S. Turek, An efficient and stable finite element solver of higher order in space and time for nonstationary incompressible flow, Internat. J. Numer. Methods Fluids, 73 (2013), pp. 927–952. HST11 S. Hussain, F. Schieweck, S. Turek, Higher order Galerkin time discretizations and fast multigrid solvers for the heat equation, J. Numer. Math., 19 (2011), pp. 41–61. JR18 S. Jiang, R. Racke, Evolution equations in thermoelasticity, CRC Press, Boca Raton, 2018. J16 V. John, Finite Element Methods for Incompressible Flow Problems, Springer, Cham, 2016. KM04 O. Karakashian, C. Makridakis, Convergence of a continuous Galerkin method with mesh modification for nonlinear wave equations, Math. Comp., 74 (2004), pp. 85–102. KB14 U. Köcher, M. Bause, Variational space-time methods for the wave equation, J. Sci. Comput., 61 (2014), pp. 424–453. L86 R. Leis, Initial boundary value problems in mathematical physics, Teubner, Stuttgart, John Wiley & Sons, Chichester, 1986. PE12 D. A. Di Pietro, A. Ern, Mathematical Aspects of Discontinuous Galerkin Methods, Springer, Berlin, 2012. STW22 C. Seifert, S. Trostorff, M. Waurick, Evolutionary Equations: Picard's Theorem for Partial Differential Equations, and Applications, Birkhäuser, Cham, 2022. S89 M. Slodička, Application of Rothe's method to integrodifferential equation, Comment. Math. Univ. Carolinae, 30 (1989), pp. 57–70.
http://arxiv.org/abs/2307.03141v2
20230706171020
Cosmological Interpretation for the Stochastic Signal in Pulsar Timing Arrays
[ "Yu-Mei Wu", "Zu-Cheng Chen", "Qing-Guo Huang" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph" ]
[email protected]
School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China
Corresponding author: [email protected]
Department of Astronomy, Beijing Normal University, Beijing 100875, China; Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China; Department of Physics and Synergistic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha, Hunan 410081, China
Corresponding author: [email protected]
School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China; CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
The pulsar timing array (PTA) collaborations have recently reported compelling evidence for the presence of a stochastic signal consistent with a gravitational-wave background. In this letter, we combine the latest data sets from the NANOGrav, PPTA and EPTA collaborations to explore the cosmological interpretations for the detected signal from first-order phase transitions, domain walls and cosmic strings, separately. We find that the first-order phase transitions and cosmic strings can give comparable interpretations compared to supermassive black hole binaries (SMBHBs) characterized by a power-law spectrum, but the domain wall model is strongly disfavored, with the Bayes factor compared to the SMBHB model being 0.009. Furthermore, the constraints on the parameter spaces indicate that: 1) a strong phase transition at temperatures below the electroweak scale is favored and the bubble collisions make the dominant contribution to the energy density spectrum; 2) the cosmic string tension is G μ∈ [1.46, 15.3]× 10^-12 at the 90% confidence interval and a small reconnection probability p<6.68× 10^-2 is preferred at the 95% confidence level, implying that strings in (super)string theory are strongly favored over classical field strings.
Cosmological Interpretation for the Stochastic Signal in Pulsar Timing Arrays Qing-Guo Huang
=============================================================================
Introduction. After the detection of gravitational waves (GWs) from compact binary coalescences by the ground-based detectors <cit.>, one of the most anticipated GW sources is the stochastic gravitational-wave background (SGWB), promisingly captured by pulsar timing arrays (PTAs) <cit.>. Several individual PTA collaborations, including the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) <cit.>, the Parkes PTA (PPTA) <cit.>, the European PTA (EPTA) <cit.>, and the joint collaboration International PTA (IPTA) <cit.>, have been endeavoring to search for the SGWB with increasing sensitivity by accumulating more than a decade's timing data from dozens of pulsars.
The emerging Chinese PTA (CPTA) <cit.>, Indian PTA (InPTA) <cit.>, and MeerKAT PTA <cit.> are also making significant contributions. Recently, NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and CPTA <cit.> have all found evidence supporting the existence of a stochastic signal consistent with the Hellings-Downs <cit.> inter-pulsar correlations, pointing to the GW origin of the signal. The next critical task is to identify the origin of the signal. It is believed that the large population of supermassive black hole binaries (SMBHBs) produces the brightest SGWB source at nanohertz frequencies <cit.>. If all contributing SMBHBs are inspiraling in circular orbits and their orbital evolution is dominated by gravitational radiation, the timing residual power spectral density induced by the SGWB can be very well modeled by a simple power law f^-13/3 <cit.>. Although the power-law spectral shape used to model the observed excess low-frequency residual powers in the NANOGrav 15-year data set deviates somewhat from the predicted value of 13/3, the discrepancy can be explained by considering a more realistic scenario for SMBHBs accounting for the effects of galactic environmental processes, such as dynamical friction and stellar scattering <cit.>, possible significant eccentricity in the orbits of SMBHBs <cit.>, and the intrinsic discrete nature of the sources <cit.>. In fact, further investigations have been conducted on SMBHB sources in the NANOGrav 15-year data set, utilizing astrophysically informed models <cit.>. Although the SGWB from SMBHBs is supposed to be the most promising source for the signal detected by the PTAs, the signal at nanohertz frequencies can also originate from cosmological processes <cit.>, such as a cosmological first-order phase transition <cit.>, cosmic strings <cit.>, domain walls <cit.>, and scalar-induced GWs <cit.> accompanying the formation of primordial black holes <cit.>. Each of these predicted sources exhibits a distinct spectral shape in the PTA frequency band. The NANOGrav and EPTA collaborations have already searched for signals from such new physics in their respective latest data sets <cit.>. Some recent studies have also explored the non-astrophysical interpretations of the SGWB signal and their potential implications for individual data sets <cit.>. In this letter, we aim to further investigate cosmological scenarios using the combined data sets from the NANOGrav, PPTA, and EPTA collaborations, with the goal of breaking the degeneracy among these models. fig:spectrum presents a comparison between the joint PTA posteriors on the GW energy density Ω_GW(f) and the spectra from both astrophysical and cosmological sources. It is noteworthy that, except for the domain-wall model, all other models share comparable consistency with the data. Hence, cosmological sources could serve as an alternative interpretation for the detected signal, and the implications for the fundamental physics underlying fig:spectrum will be explored. Data analyses. The spectrum of an isotropic SGWB can be described by the dimensionless GW energy density parameter per logarithmic frequency, Ω_GW(f) = (1/ρ_c) dρ_GW/d ln f, where ρ_c=3H_0^2/8π G is the critical energy density of the Universe. While the Hellings-Downs correlations <cit.> reflect the geometric, quadrupolar nature of an SGWB, the energy spectrum yields information about the source of the SGWB.
For example, the power-law spectrum predicted by the SMBHB sources takes the form of <cit.> Ω_PL(f) = (2π^2 A^2/3H_0^2) (f/f_yr)^(5-γ) f_yr^2, where A is the amplitude of the GW characteristic strain measured at f_yr=1/year and γ is the power-law index with the expected value of γ=13/3. While the NANOGrav 15-yr data set prefers a shallower slope than the expected value, the combined data sets are more compatible with the prediction; see the posteriors of the power-law model in PL_post. The 5% and 95% quantiles for the model parameters are: A ∈ [2.52,6.02]× 10^-15 and γ∈ [3.42, 4.27]. In addition, the Bayes factor (defined below) between the power-law model (with γ varied) and the SMBHB model (with γ=13/3) is about 0.57, indicating that these two models are comparable. Hence, when we compare how well different models fit the data, we still use the widely expected SMBHB sources as the fiducial model. The NANOGrav, PPTA and EPTA collaborations all analyzed their data sets by using a free spectrum, which allows the amplitude of the GW spectrum in each frequency bin to vary independently. Following the methodology outlined in <cit.>, we utilize the posterior distribution of the free spectrum with Hellings-Downs correlations to conduct a hierarchical Bayesian analysis on the SGWB parameters for various models. Initially, we construct kernel density estimates for the posterior distribution of Ω_GW(f_i) at each frequency using the latest NANOGrav, PPTA, and EPTA data sets. Subsequently, for each candidate model, we compute the logarithmic probability density functions (log-PDFs) at different frequencies by utilizing the kernel density estimates. These log-PDFs are then summed to obtain the overall log-likelihood function. We use the <cit.> package to evaluate the likelihood and employ the <cit.> implementation to perform the sampling needed for parameter estimation. Bayesian model comparison is employed to assess which model is more favored by the available data. In this letter, we adopt the SMBHB model as the fiducial model, and calculate the Bayes factor for a particular cosmological source ℳ_XX against the SMBHB model ℳ_SMBHB as BF_SMBHB^XX = Pr(𝒟|ℳ_XX)/Pr(𝒟|ℳ_SMBHB), where Pr(𝒟|ℳ) is the evidence that measures the probability of obtaining the data 𝒟 under the hypothesis of model ℳ. According to the interpretation of the Bayes factor <cit.>, if 0.33≤BF_SMBHB^XX≤ 3, then the evidence supporting ℳ_XX over ℳ_SMBHB (BF_SMBHB^XX>1) or ℳ_SMBHB over ℳ_XX (BF_SMBHB^XX<1) is “not worth more than a bare mention", while when BF_SMBHB^XX≤ 0.05 or BF_SMBHB^XX≥ 20, the model with lower evidence is strongly disfavored. In the following, we discuss the cosmological SGWB sources, namely first-order phase transitions, domain walls and cosmic strings, respectively. We report their respective Bayes factors compared to the SMBHB model and discuss their physical implications through the exploration of the parameter space. For later convenience, we summarize the parameters and their priors for each model in prior. SGWB from first-order phase transitions. Some extensions of the Standard Model predict the occurrence of a first-order phase transition <cit.>. It happens when, as the temperature drops to some level, the original symmetry is broken and the true vacuum state with the lower energy condenses as bubbles in the plasma, which is still in the false-vacuum background. These bubbles absorb energy from the false vacuum, which turns into the kinetic energy of the bubble walls, and expand in the false vacuum.
The collisions between nearby bubbles and the interaction between the bubbles and the surrounding plasma produce GWs. There are three main sources of GWs originating from first-order phase transitions: (i) collisions of bubble walls; (ii) sound waves in the plasma; (iii) turbulence in the plasma. Because the turbulence usually contributes subdominantly compared with the sound waves, we do not include it in this work. The contribution from the sound waves to the GW spectrum is <cit.> h^2 Ω_PT^SW(f) = 1.8× 10^-5 v_w (κ_sw α_PT/(1+α_PT))^2 (H_n/β) (10/g_*)^1/3 × (f/f_sw)^3 (7/(4+3(f/f_sw)^2))^7/2 Υ(τ_sw), where v_w is the bubble wall velocity; α_PT is the strength of the phase transition; κ_sw is the fraction of the vacuum energy transferred into the kinetic energy of the plasma and depends on v_w and α_PT <cit.>; β/H_n is the bubble nucleation rate; g_* is the effective number of relativistic degrees of freedom and takes different values at different nucleation temperatures T_n, i.e., g_*≈100 for T_n>0.2 GeV, g_*≈10 for 0.1 MeV<T_n<0.2 GeV and g_*≈ 3 for T_n<0.1 MeV <cit.>; Υ(τ_sw)=1-(1+2τ_sw H_n)^-1/2 is a suppression factor accounting for the effect of the finite lifetime of the sound waves <cit.>, which is approximately given by τ_sw≈ R_n/U̅_f <cit.>, with the average bubble separation R_n=(8π)^1/3 β^-1 Max(v_w,c_s) and the root-mean-square fluid velocity U̅_f=√(3 κ_sw α_PT/[4(1+α_PT)]) <cit.>. The value of the peak frequency f_sw is given by <cit.> f_sw≈ 6.1× 10^-10 Hz (1/v_w) (β/H_n) (T_n/10 MeV) (g_*/10)^1/6. Meanwhile, the contribution from the bubble collisions to the GW spectrum is given by <cit.> h^2 Ω_PT^BC(f) = 3.6× 10^-5 Δ(v_w) (κ_ϕ α_PT/(1+α_PT))^2 (H_n/β)^2 (10/g_*)^1/3 × S(f/f_bc), where Δ(v_w)=0.48 v_w^3/(1+5.3 v_w^2+5 v_w^4) <cit.> and κ_ϕ is the efficiency of the vacuum energy transformed directly into the field. The spectral shape S(x) can be parameterized as S(x)=(a+b)^c/(b x^-a/c+a x^b/c)^c. The parameters a, b, c vary with the models and the derivation methods, including the envelope approximation, the semi-analytic approach and lattice simulations <cit.>. In this letter, we take a=1, b=2.2, c=2, as allowed in the semi-analytic method <cit.>. The peak frequency is located at <cit.> f_bc≈ 1.1× 10^-9 Hz (f_*/β) (β/H_n) (T_n/10 MeV) (g_*/10)^1/6, with f_*/β≈ 0.1 from the semi-analytic method <cit.>. The relative contributions from the sound waves and the bubble collisions to the GW spectrum depend strongly on the dynamics of the phase transition <cit.>. In the “non-runaway" scenario, where bubbles expanding in the plasma reach a terminal velocity and most of the energy released during the phase transition is transferred to the surrounding plasma through its interaction with the expanding walls, the sound wave contribution dominates the GW spectrum. However, if the released energy is so large that the friction between the bubble walls and the plasma cannot prevent the walls from accelerating, the runaway scenario is reached, and the bubble collisions can also contribute significantly to the GW spectrum. Based on this picture, we introduce a friction efficiency parameter <cit.>, η, to describe the interaction strength between the bubbles and the surrounding plasma. We also relate the bubble wall velocity v_w and the efficiency factors κ_sw and κ_ϕ to the parameters α_PT and η, which determine the contributions of the plasma and the bubbles to the energy budget. The Bayes factor of the first-order phase transition model versus the SMBHB model is BF^PT_SMBHB=0.799, indicating that this cosmological interpretation receives support from the current data comparable to that of the SMBHB model.
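To make the two contributions concrete, the following minimal sketch evaluates h^2 Ω_PT^SW(f) and h^2 Ω_PT^BC(f) over the PTA band for one illustrative parameter point. The chosen values of T_n, α_PT, H_n/β, v_w, κ_sw and κ_ϕ are examples only and not the fit results reported below, and the efficiency factors are simply prescribed by hand here rather than derived from α_PT and the friction parameter η.

```python
# Illustrative evaluation (NumPy) of the sound-wave and bubble-collision
# contributions defined above. All parameter values are examples only.
import numpy as np

def pt_spectra(f, T_n_MeV=100.0, alpha=1.0, Hn_over_beta=0.1,
               v_w=0.9, kappa_sw=0.3, kappa_phi=0.5, g_star=10.0):
    c_s = 1.0 / np.sqrt(3.0)
    # peak frequencies (Hz), redshifted to today
    common = (1.0 / Hn_over_beta) * (T_n_MeV / 10.0) * (g_star / 10.0) ** (1.0 / 6.0)
    f_sw = 6.1e-10 * common / v_w
    f_bc = 1.1e-9 * 0.1 * common          # f_*/beta ~ 0.1 (semi-analytic value)

    # sound waves, with finite-lifetime suppression Upsilon(tau_sw)
    U_f = np.sqrt(3.0 * kappa_sw * alpha / (4.0 * (1.0 + alpha)))
    tau_sw_Hn = (8.0 * np.pi) ** (1.0 / 3.0) * max(v_w, c_s) * Hn_over_beta / U_f
    upsilon = 1.0 - (1.0 + 2.0 * tau_sw_Hn) ** (-0.5)
    S_sw = (f / f_sw) ** 3 * (7.0 / (4.0 + 3.0 * (f / f_sw) ** 2)) ** 3.5
    h2Omega_sw = (1.8e-5 * v_w * (kappa_sw * alpha / (1.0 + alpha)) ** 2
                  * Hn_over_beta * (10.0 / g_star) ** (1.0 / 3.0) * S_sw * upsilon)

    # bubble collisions, spectral shape S(x) with a=1, b=2.2, c=2
    a, b, c = 1.0, 2.2, 2.0
    x = f / f_bc
    S_bc = (a + b) ** c / (b * x ** (-a / c) + a * x ** (b / c)) ** c
    Delta = 0.48 * v_w**3 / (1.0 + 5.3 * v_w**2 + 5.0 * v_w**4)
    h2Omega_bc = (3.6e-5 * Delta * (kappa_phi * alpha / (1.0 + alpha)) ** 2
                  * Hn_over_beta**2 * (10.0 / g_star) ** (1.0 / 3.0) * S_bc)
    return h2Omega_sw, h2Omega_bc

f = np.logspace(-9, -7, 50)               # PTA band, Hz
sw, bc = pt_spectra(f)
print(f"peak h^2*Omega: sound waves ~ {sw.max():.2e}, bubble collisions ~ {bc.max():.2e}")
```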
The posterior distributions for the parameters in the model of first-order phase transitions are shown in PT_post. The 5% and 95% quantiles for the model parameters are: T_n∈ [2.46× 10^-2, 9.27] GeV, α_PT∈ [0.35,8.84], H_n/β∈ [0.048,0.83], and η∈ [0.013,2.17]. The results suggest that a strong phase transition (α_PT>0.1) is favored, and that the bubble collisions dominate the phase transition process (η<1). SGWB from domain walls. Domain walls are lamellar topological defects formed in the early Universe when a discrete symmetry is broken <cit.>. They arise in various well-motivated particle physics frameworks, such as Higgs models <cit.>, supersymmetry <cit.> and grand unification <cit.>. However, domain wall networks cannot remain stable and must begin annihilating before they dominate the total energy density of the Universe <cit.>. The motions and annihilations of the domain walls are accompanied by time-varying quadrupoles and thus act as an efficient source of GWs. The energy spectrum from the domain walls in a model-independent manner can be expressed as <cit.> h^2 Ω_DW(f) = ε̃ × 10^-10 (α_DW/0.01)^2 (10/g_*)^1/3 S_DW(f/f_dw). The peak frequency is located at f_dw=10^-9 Hz (g_*/10)^1/6 (T_a/10 MeV), where α_DW=ρ_DW/ρ_tot is the fraction of the total energy density in domain walls at the annihilation temperature T_a and ε̃ is an efficiency parameter to be extracted from numerical simulations; we fix ε̃=0.7 <cit.>. The spectral shape S_DW can also take the parameterized form of S(x) given above. Causality requires that the spectrum scales as Ω_DW∝ f^3 for f<f_dw, so we fix a=3. Although numerical simulations suggest that b≈ c≈ 1, we set b and c to be free parameters following <cit.>. The Bayes factor of the domain-wall model versus the SMBHB model is BF^DW_SMBHB=0.009, suggesting that domain walls are strongly disfavored and hence are unlikely to act as a viable interpretation of the signal in the PTA data sets. The posterior distributions for the parameters in the model of domain walls are shown in DW_post2. It is clear that the parameter space is tightly constrained, and the 5% and 95% quantiles for the model parameters are: T_a∈ [9.98, 197] MeV, α_DW∈ [0.051,0.096], b ∈ [0.518,0.96], and c ∈ [1.22,2.95]. Although the constraint α_DW<0.3 ensures that there are no deviations from radiation domination, and T_a>2.7 MeV guarantees that Big Bang Nucleosynthesis (BBN) is not affected, the spectral width c>1.22 is in conflict with the numerical simulation result c≈ 1 <cit.>. SGWB from cosmic strings. Cosmic strings are one-dimensional topological defects that may result from spontaneous symmetry breaking in phase transitions in the early Universe <cit.>. They can be the classical field strings predicted by well-motivated inflationary models <cit.>, or the fundamental strings of (super)string theory that naturally arise in brane inflation scenarios <cit.>. Owing to their huge tension, cosmic strings are highly relativistic. When two cosmic strings collide in three-dimensional space, they can reconnect with a characteristic probability p and form loops. While field strings always reconnect and exchange partners when they meet, and hence take p=1, the strings in (super)string theory are predicted to have a smaller reconnection probability, e.g., 10^-3<p<1 for fundamental strings and 0.1<p<1 for Dirichlet strings <cit.>, because they are actually moving in a higher-dimensional space. Once the loops are formed, they start oscillating and shrinking in size by radiating GWs <cit.>.
The energy density spectrum from the cosmic string network, for both the classical field strings and superstrings, can be characterized by the dimensionless string tension Gμ and the reconnection probability p <cit.>, Ω_CS = 8π G f/(3H_0^2 p) (Gμ)^2 ∑_k=1^∞ C_k P_k, where C_k(f)=(2k/f^2) ∫_0^t_0 dt/(1+z)^5 n(l,t). Here z is the redshift, k labels the harmonic modes of the cosmic-string loops, P_k is the radiation power spectrum of each loop, and n(l,t) is the number of loops per unit volume per unit range of loop length l existing at time t. The SGWB from a network of cosmic strings has been computed in <cit.> and the output of the expected energy density spectrum is publicly available[<http://cosmos.phy.tufts.edu/cosmic-string-spectra/>]. In the analysis, the prior of the reconnection probability is set as log_10 p ∈ [-3,0] to align with the constraints from the fundamental strings. The Bayes factor of the cosmic-string model versus the SMBHB model is BF^CS_SMBHB=1.699, indicating that cosmic strings are also a viable source of the PTA signal. The posterior distributions for the parameters are shown in CS_post. The 95% upper limit of the reconnection probability p is 6.68× 10^-2, and the 5% and 95% quantiles for the cosmic string tension are G μ∈ [1.46,15.3]× 10^-12. The result that a smaller reconnection probability is favored indicates that, if the detected signal in the PTA data sets originates from cosmic strings, it should come from strings in (super)string theory and is unlikely to be due to classical field strings. Conclusion and discussion. In this letter, we combine the NANOGrav 15-yr data set, PPTA DR3 and EPTA DR2 to explore possible cosmological interpretations, including first-order phase transitions, domain walls and cosmic strings, for the recently discovered stochastic signal in the PTA data sets. By computing the Bayes factors between these cosmological models and the fiducial SMBHB model (summarized in BFs), we find that the first-order phase transitions and cosmic strings receive support comparable to the SMBHB model, and thus their possibility as the source of the signal in the PTA data sets cannot be excluded. However, the Bayes factor of the domain wall model versus the SMBHB model is 0.009, indicating that domain walls are strongly disfavored by the combined data sets. This is a new implication, different from what the NANOGrav collaboration obtained when analyzing the new physics based on their own data set. A recent work also shows that the domain wall interpretation is hardly compatible with the stochastic signal because it leads to the overproduction of primordial black holes <cit.>. The exploration of the parameter space of these models also provides some interesting physical implications. For instance, we find that 1) when simultaneously considering both the sound wave and bubble collision contributions to the GW spectrum in the first-order phase transition, it turns out that the bubble collisions contribute more dominantly than the sound waves; additionally, this strong phase transition should take place at temperatures below the electroweak phase transition of the Standard Model; 2) cosmic strings are more likely to have a low reconnection probability, with a 95% upper limit of p<6.68×10^-2 and a 99.9% upper limit of p<0.311. This implies that the detected signal can only be explained by fundamental strings at the 2σ confidence level, but Dirichlet strings may also be a possible explanation within the 3σ confidence level. Note added.
While finalizing the manuscript, we notice that two parallel independent works <cit.> also investigate the possible cosmological origin of the signal detected by PTAs, by comparing the Bayes factors between models. Our analyses differ from theirs in the sense that <cit.> uses the NANOGrav, PPTA and EPTA data sets separately and employs the data from only the first five frequencies, <cit.> uses the combined NANOGrav+EPTA data and considers models different from us. Acknowledgements. We are grateful to Lang Liu and Yang Jiang for helpful discussions. We acknowledge the use of HPC Cluster of ITP-CAS. QGH is supported by the grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15). ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176 and No. 12247112) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429.
http://arxiv.org/abs/2307.10182v1
20230702110908
Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation
[ "Zeyu Tang", "Xiaodan Xing", "Guang Yang" ]
eess.IV
[ "eess.IV", "cs.AI", "cs.CV", "physics.med-ph" ]
Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation Zeyu Tang1,2, Xiaodan Xing1, Guang Yang1,3, Senior Member, IEEE This study was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC/NSFC/211235), the Imperial College UROP, the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), and the UKRI Future Leaders Fellowship (MR/V023799/1). 1. National Heart and Lung Institute, Imperial College London, SW7 2BX London, U.K. 2. Department of Bioengineering, Imperial College London, SW7 2AZ London, U.K. 3. Royal Brompton Hospital, SW3 6NP London, U.K. August 1, 2023 ============================================================ This study aims to develop and evaluate an innovative simulation algorithm for generating thick-slice CT images that closely resemble actual images in the AAPM-Mayo's 2016 Low Dose CT Grand Challenge dataset. The proposed method was evaluated using Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) metrics, with the hypothesis that our simulation would produce images more congruent with their real counterparts. Our proposed method demonstrated substantial enhancements in terms of both PSNR and RMSE over other simulation methods. The highest PSNR values were obtained with the proposed method, yielding 49.7369 ± 2.5223 and 48.5801 ± 7.3271 for D45 and B30 reconstruction kernels, respectively. The proposed method also registered the lowest RMSE with values of 0.0068 ± 0.0020 and 0.0108 ± 0.0099 for D45 and B30, respectively, indicating a distribution more closely aligned with the authentic thick-slice images. Further validation of the proposed simulation algorithm was conducted using the TCIA LDCT-and-Projection-data dataset. The generated images were then leveraged to train four distinct super-resolution (SR) models, which were subsequently evaluated using the real thick-slice images from the 2016 Low Dose CT Grand Challenge dataset. When trained with data produced by our novel algorithm, all four SR models exhibited enhanced performance. Thick-slice CT, Super-Resolution, Synthetic Data § INTRODUCTION Around the late 1980s, the concept of helical or spiral Computed Tomography (CT) emerged <cit.>. This technique involves gathering data without interruption as the patient is moved at a consistent pace through the gantry. The distance the patient travels per gantry rotation during the helical scan is known as the table speed. Since there are no breaks in data collection, the duty cycle of the helical scan is significantly enhanced, reaching close to 100%.
Nevertheless, a significant problem became evident: helical CT posed considerable strain on x-ray tubes. To address this problem of overheating, a potential solution was to optimize the utilization of the x-ray beam. By widening the x-ray beam in the z-direction (slice thickness) and employing multiple rows of detectors, it became possible to gather data for multiple slices simultaneously. Implementing this method would reduce the overall number of rotations required to cover the desired anatomy, consequently minimizing the x-ray tube usage. This concept forms the foundation of multi-slice helical CT <cit.>. §.§ Clinical Application Multi-slice helical CT has been extensively utilized as a non-intrusive imaging technique since then and has proven especially valuable in the examination of lung diseases like lung cancer <cit.>, Chronic Obstructive Pulmonary Disease (COPD) <cit.>, and Idiopathic Pulmonary Fibrosis (IPF) <cit.>. Moreover, the application of CT scans is not restricted to pulmonary investigations alone; it is greatly employed to examine other anatomical regions, including the head-neck <cit.>, heart <cit.>, kidney <cit.>, and liver <cit.>. CT has indeed become the preferred method in medical practice for imaging and identifying these conditions, thanks to its remarkable sensitivity and specificity. The effective execution of a lung disease screening protocol is deeply contingent upon the precise identification and evaluation of biomarkers, including pulmonary nodules, thickening of the airway walls, and traction bronchiectasis. However, such elusive radiomics features (RF) are highly affected by he acquisition thickness of CT images shown in Figure <ref>. Previous research showed that RFs derived from thin-slice (1.25 mm) CT scans performed considerably better in differentiating benign from malignant solitary pulmonary nodules than those from thick-slice CT scans (5 mm). This indicates that thin-slice CT scans provide a richer source of information for radiomics analyses <cit.>. Nevertheless, it is essential to acknowledge that the acquiring of thin-slice CT images augments not only the requisites for data storage, but also escalates the radiation dose. To reduce the radiation exposure thereby diminishing the lifelong risk of secondary malignancies <cit.> - notably among pediatric patients - thick slice CT images are customarily obtained across various anatomical sites of interest in clinical practice. Such practice further mitigates the complexity associated with contouring on an extensive series of slices. In summary, thin-slice CT scans provide richer information for identifying diseases but require more storage and increase radiation exposure, while thick-slice scans are safer and simpler but less detailed. §.§ Super-Resolution To get the best from both worlds, one potential strategy is the use of post-processing super-resolution (SR) algorithms. These algorithms are capable of generating high-resolution (HR) thin-slice CT images from low-resolution (LR) thick-slice CT images. Image SR is a sub-field of image processing that aims to generate a HR image from its corresponding LR counterpart. It is typically formulated as an inverse problem. The relationship between a LR image Y and its corresponding HR image X is often described using the following equation: I_LR = D · S ·I_HR + N Where S is the downsampling operator, D is the blurring operator (which represents the real-world degradation that often occurs when an image is captured), and N is the noise. 
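A minimal sketch of this forward degradation model, written in the common blur-then-downsample order, is given below. The Gaussian width, downsampling factor and noise level are arbitrary illustrative choices and not values used in this study or in the methods reviewed later.

```python
# Minimal sketch of the forward degradation model I_LR = D(S(I_HR)) + N,
# implemented here as blur -> downsample -> additive noise; all parameter
# values are arbitrary illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(i_hr: np.ndarray, sigma: float = 1.5, factor: int = 4,
            noise_std: float = 0.01, seed: int = 0) -> np.ndarray:
    blurred = gaussian_filter(i_hr, sigma=sigma)          # blurring operator D
    low_res = blurred[::factor, ::factor]                 # downsampling operator S
    rng = np.random.default_rng(seed)
    return low_res + noise_std * rng.standard_normal(low_res.shape)  # noise N

i_hr = np.random.default_rng(1).random((256, 256)).astype(np.float32)
i_lr = degrade(i_hr)
print(i_hr.shape, "->", i_lr.shape)                       # (256, 256) -> (64, 64)
```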
The aim of SR is to reverse this process, effectively trying to find the inverse of D(S(·) + N) to generate a super-resolved image I'_HR that is an approximation of the original I_HR. Ideally, we want I'_HR to minimize some loss function L over inverse function parameters θ with respect to I_HR: θ̂ = min_θ L(I'_HR, I_HR) In practice, I_HR is unknown during the inference phase, and the model is trained on pairs of HR and LR images to learn the mapping from I_LR to I_HR. A myriad of machine learning and deep learning models have been put forth to tackle the challenge of axial-plane SR. However, the lack of publicly available paired LR-HR training data forces researchers to create their own thick-slice CT images derived from thin-slice CT datasets. Regrettably, their simulation methods often overlook the crucial parameters of slice thickness and interval, leading to an inability to accurately capture the distribution of thick-slice images. This results in underperformance of the trained models when processing true thick-slice images. §.§ Aims and Objectives The objective of this research is to present an innovative simulation algorithm that generates images closely resemble the actual thick-slice CT images found in the AAPM-Mayo's 2016 Low Dose CT Grand Challenge (LDCT) dataset <cit.>. According to our current understanding, the AAPM-Mayo's LDCT dataset stands as the sole public resource providing thin-thick slice pairs. We intend to utilise metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Root Mean Square Error (RMSE) to evaluate the quality of thick-slice images generated by different algorithms. The initial hypothesis posits that our proposed simulation method will yield thick-slice images more congruent with their real counterparts. Furthermore, a secondary hypothesis suggests that Super Resolution (SR) models trained on our dataset will demonstrate enhanced efficacy when applied to actual thick-slice images, in contrast to models trained on alternative simulated data. The evaluation of the SR model's performance will also leverage PSNR and RMSE metrics. § RELATED WORKS Deep learning <cit.> has revolutionized the field of medical imaging super-resolution by enabling models to learn complex mappings from low to high-resolution images. Convolutional Neural Networks (CNNs) <cit.> and Generative Adversarial Networks (GANs) <cit.> are commonly used architectures in these tasks. These models are typically trained on pairs of low and high-resolution medical images, learning a function that can effectively upscale the low-resolution images. Their capacity to learn hierarchical features allows for the reconstruction of fine details in high-resolution images. Mansoor et al. <cit.> applied a 3D Gaussian smoothing filter on slices with 1mm thickness followed by downsampling to create slices with 4mm thickness on their in-house chest CT dataset. Subsequently, they utilized a VGG-like GAN model for the reconstruction of HR thin-slice images. Park et al. <cit.> employed a method for thick-slice image simulation that involved averaging five slices with a 3mm thickness to create a single slice with a 15mm slice thickness on their in-house Brain CT dataset. In this approach, the middle slice was selected as the ground truth high-resolution image. To reconstruct the HR thin-slice image from the LR thick-slice images, they employed a 2-D U-Net model. Wang et al. <cit.> generated thick slices by downsampling directly on the 1mm thin-slice images. 
They trained a CycleGAN in a self-supervised manner to recover the HR image. Xie et al. <cit.> employed a method for simulating thick-slice images by averaging three or seven slices with a 1mm thickness from their in-house brain CT dataset. This averaging process resulted in the creation of a single slice with a thickness of 3mm or 7mm, respectively. Additionally, they implemented a self-supervised CycleGAN model to synthesize HR thin-slice images. Park et al. <cit.> utilized a direct reconstruction approach from the sinogram in their in-house dataset to generate CT images with slice thicknesses of 1mm, 3mm, and 5mm. Their findings highlighted a significant variability in radiomic features across different CT slice thicknesses, particularly in the context of lung cancer. To address this variability, they proposed the use of a residual CNN-based SR algorithm. Wu et al. <cit.> simulated thick slices by downsampling directly on the 2.5mm thin-slice images. They proposed a parallel U-net architecture for CT slice reconstruction along the z-axis. Kudo et al. <cit.> simulated various combinations of slice thickness and slice interval by reducing the number of slices to either 1/4 or 1/8. They applied spline interpolation and random Gaussian noise to the reduced slices and utilized a 3D GAN model to convert the thick-slice images back to thin-slice images. In summary, prior simulation methodologies predominantly fall into three categories: Simple Averaging, Gaussian Averaging, and Direct Downsampling. Furthermore, the super-resolution models employed in these studies typically rely on CNN-based and GAN-based frameworks. § METHODS §.§ Dataset This study principally utilized two data sets: the TCIA LDCT-and-Projection-data <cit.> and the 2016 Low Dose CT Grand Challenge <cit.>. We simulated thick-slice data based on the TCIA LDCT-and-Projection-data, employing it for the training phase. The 2016 Low Dose CT Grand Challenge was then deployed for testing purposes. Important features of these two datasets have been compiled and are presented in Table <ref> and Table <ref> for reference. §.§.§ TCIA LDCT-and-Projection-data This compilation includes 99 neuro scans (denoted by N), 100 chest scans (denoted by C), and 100 liver scans (denoted by L). Half of each scan category comes from a SOMATOM Definition Flash CT scanner, a product of Siemens Healthcare from Forchheim, Germany. The remaining scans, consisting of 49 for the head, 50 for the chest, and 50 for the liver, were captured with a Lightspeed Volume Computed Tomography (VCT) CT scanner from GE Healthcare, based in Waukesha, WI. Some data in this collection might be utilized to reconstruct a human face. In order to protect the privacy of individuals involved, those accessing the data are required to sign and submit a TCIA Restricted License Agreement upon usage. §.§.§ 2016 Low Dose CT Grand Challenge The dataset consists of 30 deidentified contrast-enhanced abdominal CT patient scans, which were obtained using a Siemens SOMATOM Flash scanner in the portal venous phase. The data comprises two types: Full Dose (FD) data and Quarter Dose (QD) data. Full Dose data corresponds to scans acquired at 120 kV and 200 quality reference mAs (QRM), while Quarter Dose data refers to simulated scans acquired at 120 kV and 50 QRM. The provided dataset includes various components: 1) Projection data for all 30 patient scans, including 10 cases for training purposes (both FD and QD) and 20 cases for testing (QD only). 
2) DICOM images for the 10 training cases, encompassing FD and QD data, with reconstructions using 1 mm thick B30 and D45 kernels, as well as 3 mm thick B30 and D45 kernels. 3) DICOM images for the 20 testing cases, consisting of QD data only, with the same reconstruction configurations as the training cases (1 mm thick B30 and D45 kernels, and 3 mm thick B30 and D45 kernels). §.§ Evaluation Metrics The Peak Signal-to-Noise Ratio (PSNR) and the Root Mean Square Error (RMSE) are two popular metrics used to measure image quality, particularly for comparing the differences between the original and reconstructed data. PSNR is defined as the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Mathematically, PSNR is calculated using the following formula: PSNR = 20 ·log_10(MAX_I/√(MSE)) where MAX_I is the maximum possible pixel value of the image. MSE stands for Mean Squared Error, which is the mean of the squared differences between the original and the reconstructed image. The mathematical expression is as follows: MSE = 1/mn∑_i=0^m-1 ∑_j=0^n-1 (I(i,j) - Î(i,j))^2 where I(i,j) is the original value, Î(i,j) is the predicted value, and m × n is the image dimension (i.e., the total number of pixels in the image). RMSE is a quadratic scoring rule that measures the average magnitude of the error, and it is defined as the square root of the average of the squared differences between prediction and actual observation. The RMSE gives a relatively high weight to large errors, so it is especially useful when large errors are particularly undesirable. It can be calculated using equation <ref>: RMSE = √(MSE) PSNR is a higher-is-better metric, whereas RMSE is lower-is-better: a higher PSNR and a lower RMSE indicate a better match between the original and reconstructed data. §.§ Thick-slice Simulation Drawing inspiration from the Weighted Filtered Backprojection (wFBP) <cit.>, we have crafted a weighted-sum algorithm designed to simulate thick-slice images from their thin-slice counterparts. Our initial step involves determining the new positions of the simulated thick slices, which is based on the slice increment, d. This process is concisely delineated in Algorithm <ref>. Subsequently, our focus shifts to estimating the extent to which contiguous thin-slice images at location l ∈ L_thin contribute to the reconstruction of the thick-slice image at location p ∈ L_thick. To this end, we define a triangular weight function g(s), where s represents the slice thickness: g(s) = max(0, 1 - | p - l |/s) The generated thick slices are weighted sums of the thin slices, normalized by the total weight of the slices used (a minimal code sketch of this weighting scheme is given below). The detailed steps for the generation of thick-slice images are illustrated in Algorithm <ref>. We applied our simulation algorithm to the 2016 Low Dose CT Grand Challenge dataset, thereby establishing the superior efficacy of our method compared to existing alternatives. Subsequently, we produced thick-slice data for the chest CT images from the TCIA LDCT-and-Projection-data dataset using Simple Averaging and our proposed method. These synthesized thick-thin image pairs served as the training set for the super-resolution models. §.§ Super-Resolution Models To fairly evaluate the quality of our simulated dataset in model training, we selected four SR models to benchmark our simulated thick-slice data.
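The sketch below illustrates the weighted-sum thick-slice simulation described in the previous subsection, together with the PSNR and RMSE metrics. It is a simplified NumPy illustration of the triangular weighting, not the authors' released implementation, and the slice spacing and thickness values are examples only.

```python
# Minimal NumPy sketch of the thick-slice simulation described above: thin slices
# at locations L_thin are combined into thick slices at locations L_thick (spaced
# by the slice increment d) using the triangular weight g = max(0, 1 - |p - l|/s).
import numpy as np

def thick_slice_positions(l_thin, d):
    """Place simulated thick-slice centres every d mm across the thin-slice range."""
    return np.arange(l_thin.min(), l_thin.max() + 1e-9, d)

def simulate_thick(volume, l_thin, d, s):
    """volume: (n_thin, H, W) thin slices at z-locations l_thin (mm);
    d: slice increment (mm); s: slice thickness (mm)."""
    l_thick = thick_slice_positions(l_thin, d)
    out = np.zeros((len(l_thick),) + volume.shape[1:], dtype=np.float64)
    for i, p in enumerate(l_thick):
        w = np.maximum(0.0, 1.0 - np.abs(p - l_thin) / s)   # triangular weights
        out[i] = np.tensordot(w, volume, axes=1) / w.sum()   # normalised weighted sum
    return out, l_thick

def psnr(ref, test, max_i=1.0):
    mse = np.mean((ref - test) ** 2)
    return 20.0 * np.log10(max_i / np.sqrt(mse))

def rmse(ref, test):
    return np.sqrt(np.mean((ref - test) ** 2))

# toy example: 1 mm thin slices -> 3 mm thick slices with a 3 mm increment
rng = np.random.default_rng(0)
thin = rng.random((30, 64, 64))
l_thin = np.arange(30) * 1.0
thick, l_thick = simulate_thick(thin, l_thin, d=3.0, s=3.0)
print(thin.shape, "->", thick.shape)
```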
§.§.§ Convolutional Neural Network Convolutional Neural Networks (CNNs), a class of artificial neural networks, have been instrumental in the evolution of deep learning, particularly in image analysis tasks. The basic underlying principle of CNNs is the application of a convolution operation, which uses a kernel or filter that passes over the input data. This kernel convolves with the input layer to compute the dot product of their entries, thereby preserving the spatial relationships in the image. Layers of these convolutional operations are often interspersed with other layers such as pooling (downsampling) and normalization layers. Over time, the CNN learns the optimal values for this kernel, allowing it to detect complex patterns in the input data. CNN-based Super-Resolution models revolutionized this field, with the most famous being the Super-Resolution Convolutional Neural Network (SRCNN) introduced by Dong et al. in 2014 <cit.>. This model uses a three-layer CNN to learn an end-to-end mapping of low-resolution images to their high-resolution counterparts. It applies a patch extraction and representation layer, a non-linear mapping layer, and a reconstruction layer. The learning objective of SRCNN is to minimize the Mean Squared Error (MSE) between the original high-resolution image and the super-resolved output. Since then, newer models like the Very Deep Super-Resolution (VDSR) network <cit.> have further improved super-resolution performance. We have re-engineered the VDSR architecture to better suit our needs, transforming all 2-D operations into 3-D. Further, we streamlined the structure by reducing the convolutional layer count to twelve and integrated a global residual connection. This refined model will serve as our first SR model for benchmarking our simulated dataset. §.§.§ U-Net U-Net <cit.>, a type of CNN that was first developed for biomedical image segmentation, has also been utilized in super-resolution tasks due to its unique architecture that excels in capturing both local features and global context. The U-Net architecture consists of a contracting (downsampling) path and an expansive (upsampling) path with skip connections between layers of the same level, which help in localizing the high-resolution features. One example of a U-Net-based super-resolution model is the Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) model <cit.> that has been adapted into a U-Net-like structure. In this model, the downsampling path captures context information, while the upsampling path allows the model to make precise localization decisions to super-resolve the image. We have adapted the original U-Net architecture to better suit the 3-D medical image super-resolution task. Our modifications include transforming all 2-D operations into 3-D and introducing two additional residual blocks. This revised model now serves as our second benchmark SR model. §.§.§ Residual Neural Network ResNet, short for Residual Network, was introduced by He et al. in their 2015 paper <cit.>. This deep learning model, distinguished by its "skip" or "shortcut" connections, was developed to address the problem of vanishing gradients that plagued the training of very deep neural networks. These shortcut connections allow the model to learn identity functions that effectively prevent the higher layers from destroying the learned information of lower layers, thereby enabling the successful training of networks that are significantly deeper than those previously possible.
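As a minimal illustration of the shortcut connections just described, and of the 3-D re-engineering applied to our first benchmark model, the following PyTorch sketch shows a 3-D residual block and a VDSR-style network with twelve convolutional layers and a global residual connection. The channel width, kernel size, and the assumption that the input volume has already been interpolated to the target number of slices are illustrative choices, not a definitive description of the trained models.

import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    # Two 3-D convolutions plus an identity shortcut: the block only has to learn
    # the residual on top of its input, which eases optimisation of deep networks.
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class VDSR3D(nn.Module):
    # VDSR-style 3-D network with twelve convolutional layers and a global residual
    # connection; the input volume is assumed to be already interpolated to the
    # target number of slices so that only the missing detail must be predicted.
    def __init__(self, channels=64, num_layers=12):
        super().__init__()
        layers = [nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv3d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)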
ResNet has had a significant impact on the field of computer vision, including the domain of super-resolution. SRResNet, or Super-Resolution Residual Network, is a direct adaptation of the principles of ResNet to super-resolution tasks. SRResNet was proposed by Ledig et al. in the SRGAN (Super-Resolution Generative Adversarial Networks) paper and serves as the backbone for the generator network in SRGAN <cit.>. SRResNet incorporates residual blocks to learn the residual mapping between the low-resolution and high-resolution images. It is designed with a shallow network at the beginning and end (for feature extraction and reconstruction) while having a deeper body composed of residual blocks. Building upon the success of SRResNet, researchers proposed an enhanced version known as Enhanced SRResNet (ESRResNet) <cit.>. The enhanced version incorporates several advancements in deep learning, like the use of dense connections (as seen in DenseNet <cit.>) and parameter-efficient convolutions to improve the network's learning capacity while keeping the computational load manageable. In addition, one key feature of the ESRResNet is the use of Residual-in-Residual Dense Blocks (RRDBs) instead of the simple residual blocks in the original SRResNet. The RRDB is essentially a stack of residual blocks where the output of each residual block is added to its input (forming a local skip connection), and the output of the last residual block in the stack is also added to the input of the stack (forming a long skip connection). This architecture allows for better propagation of features and gradients through the network, making it easier for the network to learn the mapping from low-resolution to high-resolution images. The ESRResNet has proven to be highly effective in super-resolution tasks, leading to significant improvements over the original SRResNet in terms of both quantitative metrics and perceptual quality of the super-resolved images. Consequently, we selected the ESRResNet architecture, modifying all its 2-D operations to 3-D. This adaptation makes it more suitable for the specific demands of the medical imaging super-resolution task. This adaptation now constitutes our third super-resolution model for benchmarking purposes. §.§.§ Generative Adversarial Network Generative Adversarial Networks (GANs) are a type of neural network architecture introduced by Ian Goodfellow et al. in 2014 <cit.>. GANs consist of two parts: a generator network, which produces synthetic data, and a discriminator network, which tries to distinguish between real and synthetic data. The two networks are trained together, with the generator network attempting to produce data that the discriminator network cannot distinguish from real data, and the discriminator network attempting to get better at distinguishing real data from the data generated by the generator. This adversarial process leads to the generator network producing high-quality synthetic data. In the context of image super-resolution, GANs have shown great potential. Ledig et al. introduced the Super-Resolution Generative Adversarial Network (SRGAN) in 2016 <cit.>, which uses a GAN in combination with a super-resolution network (SRResNet, described in section <ref>). In this architecture, the generator is a super-resolution model that transforms a low-resolution input into a high-resolution output, and the discriminator is trained to distinguish the super-resolved images from real high-resolution images. 
The result is super-resolved images that are perceptually more similar to real high-resolution images compared to those produced by traditional super-resolution methods. ESRGAN <cit.>, or Enhanced Super-Resolution Generative Adversarial Networks, is a direct and substantial improvement of SRGAN. It keeps the GAN structure of the original SRGAN model, with a generator and discriminator, but upgrades the generator with RRDB (described in section <ref>). RRDB allows the model to effectively capture more complex texture and detail information by encouraging both short and long skip connections. The discriminator in these networks is typically constructed as a deep CNN, with several layers of convolution, batch normalization, and activation functions such as LeakyReLU, culminating in a final decision layer that outputs a probability indicating how likely it is that the input image is real. There are more advanced architectures such as PatchGAN <cit.>. Unlike traditional discriminators that try to classify the entire image as real or fake, a PatchGAN discriminator employs the concept of Markov Random Field and classifies individual patches of an image. The underlying assumption is that natural images are locally coherent, meaning that the authenticity of an image can be determined by examining its subregions or patches. In this study, we included the PatchGAN discriminator during the training of ESRResNet, adding it as our fourth SR model for benchmarking. §.§ Implementation Details The scripts used in this study were developed in Python3 and the PyTorch framework, executed on Imperial College's HPC Clusters. Computations were run on an NVIDIA RTX 6000 GPU with 24 GB of memory. Four distinct Super-Resolution (SR) models (VDSR, U-Net, ESRResNet, and ESRGAN, i.e., ESRResNet with a PatchGAN discriminator) were trained over 1000 epochs using the Adam solver (learning rate = 1e-4) and augmented with random horizontal flips during training. The source code can be accessed at: https://github.com/Feanor007/Thick2Thin. § RESULTS The results from all conducted experiments, represented as mean ± standard deviation, are tabulated in this section. We assessed the performance of our proposed simulation method against Simple Averaging, Gaussian Averaging, and Direct Downsampling. This was accomplished by simulating images with a thickness of 3mm from those with a thickness of 1mm, utilizing the 2016 Low Dose CT Grand Challenge dataset. The results outlined in Table <ref> provide a comparative analysis of different thick-slice simulation methods used in two datasets from the 2016 Low Dose CT Grand Challenge. Both the PSNR and the RMSE were used as key performance indicators for these methods. The data clearly demonstrate that the proposed method significantly outperformed Simple Averaging, Gaussian Averaging, and Direct Downsampling in both datasets (D45 and B30). The highest PSNR values were obtained with the proposed method, yielding 49.7369 ± 2.5223 and 48.5801 ± 7.3271 for the D45 and B30 datasets, respectively. The proposed method also registered the lowest RMSE with values of 0.0068 ± 0.0020 and 0.0108 ± 0.0099 for D45 and B30, respectively. These results indicate a superior level of accuracy and reliability in the proposed method. The statistically significant differences were confirmed by a Wilcoxon signed-rank test with p-value < 0.05, implying that the improvements from the proposed method were not due to random chance.
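As a sketch of how the reported significance test can be carried out, the following snippet compares per-case metric values for two simulation methods with a Wilcoxon signed-rank test (scipy.stats.wilcoxon); the function name, variable names, and the 0.05 threshold are illustrative assumptions.

import numpy as np
from scipy.stats import wilcoxon

def compare_methods(scores_proposed, scores_baseline, alpha=0.05):
    # scores_* are paired per-case metric values (e.g. PSNR or RMSE computed case by
    # case for the proposed simulation method and for a baseline such as Simple Averaging)
    scores_proposed = np.asarray(scores_proposed, dtype=float)
    scores_baseline = np.asarray(scores_baseline, dtype=float)
    stat, p_value = wilcoxon(scores_proposed, scores_baseline)
    print(f"proposed: {scores_proposed.mean():.4f} ± {scores_proposed.std():.4f}")
    print(f"baseline: {scores_baseline.mean():.4f} ± {scores_baseline.std():.4f}")
    print(f"Wilcoxon p = {p_value:.4f}; significant at {alpha}: {p_value < alpha}")
    return p_value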
These findings support our first hypothesis that the proposed simulation method provides a more efficient and precise approach to thick-slice simulations compared to traditional methods. To provide a more comprehensive evaluation, visual comparisons from the axial, coronal and sagittal planes were also undertaken, as depicted in Figures <ref> to <ref>. In summary, our proposed method demonstrated substantial enhancements in terms of both PSNR and RMSE, indicating a distribution more closely aligned with the authentic thick-slice image. We investigated our second hypothesis by training four different SR models using the data generated by the top two simulation methods, as referenced in <ref>, which are our proposed method and Simple Averaging. The results in Table <ref> highlight the comparison of various super-resolution (SR) models trained by different simulation methods on two distinct datasets: the 2016 Low Dose CT Grand Challenge (D45 and B30). In each case, the SR model trained with the proposed simulation method outperforms the same model trained with the Simple Averaging simulation method in terms of both PSNR and RMSE, indicating improved image quality and lower error rates, respectively. The differences observed were statistically significant as determined by the Wilcoxon signed-rank test (p-value < 0.05). For the D45 dataset, the PSNR and RMSE of the proposed method ranged from 37.2176 ± 3.0876 and 0.0296 ± 0.0128 (for VDSR) to 37.9786 ± 2.4597 and 0.0264 ± 0.0089 (for ESRGAN). Similarly, for the B30 dataset, the proposed method's PSNR and RMSE varied between 38.5657 ± 4.6613 and 0.0280 ± 0.0196 (for VDSR) and 40.5083 ± 3.8736 and 0.0215 ± 0.0144 (for ESRGAN). The SR model that demonstrated the best performance using the proposed method on the B30 dataset was the ESRGAN model, yielding the highest PSNR (40.5083 ± 3.8736) and the lowest RMSE (0.0215 ± 0.0144). In contrast, the lowest performance in the proposed method was observed in the VDSR model for the D45 dataset, exhibiting the lowest PSNR (37.2176 ± 3.0876) and the highest RMSE (0.0296 ± 0.0128). Regardless of the dataset and the SR model, the proposed simulation method consistently outperformed the Simple Averaging method, signifying its effectiveness in training super-resolution models in the task of thick-to-thin CT image transformation. § DISCUSSION AND CONCLUSION The findings from our study underscore the potential effectiveness of our proposed simulation algorithm in generating images that closely resemble authentic thick-slice CT images from the AAPM-Mayo's 2016 Low Dose CT Grand Challenge dataset. According to the comparison metrics used, specifically PSNR and RMSE, our proposed method presented a significant improvement over previous simulation methods, implying an image distribution more closely aligned with the actual thick-slice images. This superior performance over the other methods (Simple Averaging, Gaussian Averaging and Direct Downsampling) demonstrates its potential as a reliable tool for training super-resolution (SR) models for the task of thick-to-thin CT image transformation. The positive results reaffirm our initial hypothesis and add to the growing body of evidence that highlights the importance of using high-fidelity simulated data in training deep learning models, especially in the realm of medical imaging. However, it is essential to acknowledge the limitations of our study.
One of the significant limitations stems from the reliance on the AAPM-Mayo's LDCT dataset, which, to the best of our knowledge, is the only public resource providing thin-thick slice pairs. The performance of our proposed method has been evaluated primarily with respect to recreating the specific image characteristics of the AAPM-Mayo's LDCT dataset. It remains unclear how well our algorithm would perform with images obtained from patients with different health conditions. Furthermore, its exclusive provision of 3mm-1mm thick-thin slice pairs presents an inherent constraint in our study. In clinical settings, various slice thicknesses, including 5mm, 10mm, or even 15mm, are commonly used depending on the particularities of the clinical question and the body region being scanned. The unique features and details that are present in thicker slices may be vital for diagnosing certain conditions or predicting disease progression. Since our algorithm has only been validated for a 3mm-1mm pairing, it remains uncertain how well it can generate and maintain these crucial features in thicker slices. In addition, the evaluation metrics used (PSNR and RMSE) may not fully capture perceptual image quality and radiomics features as perceived by human experts, thereby potentially limiting the perceived accuracy of the simulation. Looking forward, further research could explore more advanced SR models, such as diffusion-based generative models <cit.>. Additionally, researchers could validate the utility of super-resolved CT images by applying segmentation models that delineate anatomical structures such as the airway tree <cit.> and lung nodules <cit.>. Finally, these results should prompt a reconsideration of the role of simulated data in training SR models, urging further exploration and innovation in this area. By providing a robust simulation method, we can help ensure the best possible data for training these models, which in turn has the potential to improve the overall quality and accuracy of medical imaging, particularly in the realm of CT scanning. In conclusion, this study successfully presented an innovative simulation algorithm capable of generating thick-slice CT images closely resembling the actual images in the AAPM-Mayo's 2016 Low Dose CT Grand Challenge dataset. Furthermore, our method demonstrated its potential utility for training super-resolution models aimed at the task of thick-to-thin CT image transformation.
http://arxiv.org/abs/2307.03184v1
20230706175828
The split majoron model confronts the NANOGrav signal
[ "Pasquale Di Bari", "Moinul Hossain Rahat" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.HE", "hep-th" ]
Pasquale Di Bari and Moinul Hossain Rahat, School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, U.K. In the light of the evidence of a gravitational wave background from the NANOGrav 15yr data set, we reconsider the split majoron model as a new physics extension of the standard model able to generate a needed contribution to solve the current tension between the data and the standard interpretation in terms of inspiraling supermassive black hole binaries. In the split majoron model the seesaw right-handed neutrinos acquire Majorana masses from spontaneous symmetry breaking of global U(1)_B-L in a strong first order phase transition of a complex scalar field occurring above the electroweak scale. The final vacuum expectation value couples to a second complex scalar field undergoing a low scale phase transition occurring after neutrino decoupling. Such a coupling enhances the strength of this second low scale first order phase transition and can generate a sizeable primordial gravitational wave background contributing to the NANOGrav 15yr signal. Moreover, the free streaming length of light neutrinos can be suppressed by their interactions with the resulting Majoron background and this can mildly ameliorate existing cosmological tensions, thus providing a completely independent motivation for the model. § INTRODUCTION The NANOGrav collaboration has found evidence for a gravitational wave (GW) background at ∼ nHz frequencies in the 15-year data set <cit.>. This strongly relies on the observed correlations among 67 pulsars following an expected Hellings-Downs pattern for a stochastic GW background <cit.>. A simple baseline model is provided by a standard interpretation in terms of inspiraling supermassive black hole binaries (SMBHBs) with a fiducial f^-2/3 characteristic strain spectrum. Such a baseline model provides a poor fit to the data and some deviation is currently favoured. In particular, models where in addition to SMBHBs one also has a contribution from new physics provide a better fit of the NANOGrav data than the baseline model, resulting in Bayes factors between 10 and 100 <cit.>[see Refs. <cit.> for some recent new physics approaches.]. First order phase transitions at low scales can provide such an additional contribution. For temperatures of the phase transition in the range 1 MeV – 1 GeV, the resulting GW background can explain the entire NANOGrav signal <cit.>. However, when a realistic model is considered, one needs also to take into account the cosmological constraints on the amount of extra radiation from big bang nucleosynthesis (BBN) and CMB anisotropies. A phase transition associated with the spontaneous breaking of a U(1)_L_I symmetry, where a Majorana mass term is generated, has been previously discussed <cit.> as a potential origin for the NANOGrav signal from the 12.5-year data set <cit.>. In this case a complex scalar field gets a non-vanishing vacuum expectation value at the end of the phase transition and a right-handed neutrino, typically the lightest, coupling to it acquires a Majorana mass.
The phase transition involves only a few additional degrees of freedom forming a dark sector, and some of them can decay into ordinary neutrinos potentially producing extra radiation so that cosmological constraints need to be considered. It has been shown that these can be respected if the phase transition occurs after neutrino decoupling since in this case the dark sector thermalises only with decoupled ordinary neutrinos and the amount of extra radiation does not exceed upper bounds from big bang nucleosynthesis and CMB temperature anisotropies. In <cit.> it was concluded that the amplitude of the NANOGrav signal was too high to be explained by such a phase transition since the peak of the predicted spectrum was two order of magnitudes below the signal. This conclusion was based on 12.5-year data and on a way to calculate the sound wave contribution to the GW spectrum valid for values of the strength of the phase transition α≲ 0.1 that is now outdated <cit.>. In this paper we reexamine this conclusion in the light of the 15 year data set and adopting an improved description of the sound wave contribution, applicable for larger values of α. We confirm that such a phase transition can hardly reproduce the whole signal but can well be combined to the contribution from the SMBHBs baseline model to improve the fit of the signal. The paper is structured as follows. In Section 2 we discuss the split Majoron model. In Section 3 we discuss the cosmological constraints deriving by the presence of extra radiation in the model. In Section 4 we review the calculation of the GW spectrum and show the results we obtain and compare them to the NANOGrav 15 year-data set. In Section 5 we draw our conclusions and discuss future developments . § THE SPLIT MAJORON MODEL We discuss now a model that was sketched in <cit.> and that can be regarded as an extension of the multiple majoron model proposed in <cit.>. Compared to the traditional majoron model <cit.>, we have two complex scalar fields each undergoing its own first order phase transition, one at high scale, above the electroweak scale, and one at much lower scale, dictated by the possibility to address the NANOGrav signal. If we denote by ϕ and ϕ' the two complex scalar fields, respectively, we can write the Lagrangian as (I=1,…,N and I'=N,…,N+N'): - L_ N_I+N_I'+ϕ+ϕ' = L_ h_ I N_I Φ + λ_I 2 ϕ N_I^c N_I + L_ h_ I' N_I' Φ + λ_I' 2 ϕ' N_I'^c N_I' + V_0(ϕ,ϕ') + h.c. , where Φ is the SM Higgs doublet and Φ its dual and the N_I, N_I' are the RH neutrinos coupling to ϕ,ϕ'. Imposing that the Lagrangian (<ref>) obeys a U(1)_∑_I L_I× U(1)_∑_I' L_I' symmetry, we can take as (renormalisable) tree level potential (with no ϕ-Φ and ϕ'-Φ couplings) V_0(ϕ, ϕ') = -μ^2 |ϕ|^2 + λ |ϕ|^4 -μ'^2|ϕ'|^2 + λ' |ϕ'|^4 + ζ |ϕ|^2 |ϕ'|^2 . We will assume that ϕ undergoes a phase transition, breaking a U(1)_L_1+… +L_N global symmetry, at some scale above the electroweak scale. In the broken phase we can rewrite ϕ as ϕ = e^iθ√(2) (v_0 + S + i J ) , where v_0 is the ϕ vacuum expectation value, S is a massive boson field with mass m_S= √(2 λ) v_0 and J is a Majoron, a massless Goldstone field. The vacuum expectation value of ϕ generates RH neutrino masses M_I = λ_I v_0/√(2). After electroweak symmetry breaking, the vacuum expectation value of the Higgs generates Dirac neutrino masses m_D I = v_ ew h_ I /√(2) and m_D I' = v_ ew h_ I' /√(2), where v_ ew = 246 GeV is the standard Higgs vacuum expectation value. 
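As a small numerical illustration of the mass relations just stated, the following snippet evaluates M_I = λ_I v_0/√2 and m_D = h v_ew/√2. The coupling values and the breaking scale v_0 are hypothetical, chosen only to display the orders of magnitude; they are not benchmark values of the model.

import numpy as np

v_ew  = 246.0      # GeV, electroweak vacuum expectation value (as in the text)
v0    = 1.0e7      # GeV, hypothetical U(1) breaking scale, for illustration only
lam_I = 0.1        # hypothetical Majorana Yukawa coupling lambda_I
h     = 1.0e-4     # hypothetical Dirac Yukawa coupling

M_I = lam_I * v0 / np.sqrt(2.0)   # Majorana mass  M_I = lambda_I v_0 / sqrt(2)
m_D = h * v_ew / np.sqrt(2.0)     # Dirac mass     m_D = h v_ew / sqrt(2)
print(f"M_I ~ {M_I:.2e} GeV,  m_D ~ {m_D:.2e} GeV")
# rough type-I seesaw scale for the light neutrinos (seesaw formula given below):
print(f"m_nu ~ m_D^2 / M_I ~ {1e9 * m_D**2 / M_I:.2e} eV")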
In the case of the RH neutrinos N_I, their Majorana masses lead, via the type-I seesaw mechanism, to a light neutrino mass matrix given by the seesaw formula, (m_ν)_αβ = - (v_ ew^2/2) h_α I h_β I/M_I . Notice that we are writing the neutrino Yukawa matrices in the flavour basis where both charged leptons and Majorana mass matrices are diagonal. The Yukawa couplings h_ I' have to be taken much smaller than the usual massive fermion Yukawa couplings or even vanishing, as we will point out. Eventually, at a lower scale, much below the electroweak scale, ϕ' also undergoes a first order phase transition breaking the U(1)_L_1'+… +L_N' symmetry. In the broken phase we can rewrite ϕ' as ϕ' = e^iθ/√(2) (v'_0 + S' + i J' ) , where v'_0 is the ϕ' vacuum expectation value and again S' is a massive boson field with mass m_S'= √(2 λ') v'_0 and J' is a massless Majoron. The vacuum expectation value of ϕ' generates RH neutrino masses M_I' = λ_I' v'_0/√(2). Let us now outline the two different cases we will consider. We have a minimal case with N=2 and N'=1. In this case one would have that the seesaw formula generates the atmospheric and solar neutrino mass scales while the lightest neutrino would be massless. However, after the electroweak symmetry breaking and before the ϕ' phase transition, the small Yukawa couplings h_α 3 generate a small Dirac neutrino mass for the lightest neutrino in a way to have a hybrid case where two neutrino mass eigenstates are Majorana neutrinos and the lightest is a Dirac neutrino. Finally, at the ϕ' phase transition a Majorana mass M_3 is generated and one has a second seesaw giving a lightest neutrino mass m_1 = ∑_α |m_D α 3|^2/M_3.[Notice that N_3 is the lightest RH neutrino and not the heaviest.] In a second case one has N=3 and a generic N'. In this case the Yukawa couplings h_ I' can even vanish. The RH neutrinos N_I' acquire a Majorana mass at the ϕ' phase transition but they do not contribute to the ordinary neutrino masses. They can be regarded as massive neutral leptons in the dark sector, with no interactions with the visible sector (including the seesaw neutrinos). As we will explain better in Section 4, the vacuum expectation value of the complex scalar field ϕ will increase the strength of the ϕ' phase transition and this will be crucial in enhancing the amplitude of the generated GW spectrum. § COSMOLOGICAL CONSTRAINTS Let us now consider the impact on the model of cosmological constraints on the amount of extra-radiation coming from big bang nucleosynthesis and CMB anisotropies. Let us first of all calculate the evolution of the number of degrees of freedom in the model. The number of energy density ultra-relativistic degrees of freedom g_ρ(T) is defined as usual by ρ_R(T) ≡ g_ρ(T) (π^2 /30) T^4, where ρ_R(T) is the energy density in radiation. In our case it receives contributions from the SM sector and from the dark sector, so that we can write g_ρ(T) = g_ρ^ SM(T) + g_ρ^ dark(T). At the ϕ phase transition, occurring at a phase transition temperature T_⋆ above the electroweak scale, one has for the SM contribution g_ρ^ SM(T_⋆) =106.75 and for the dark sector contribution g_ρ^ dark(T_⋆) = g_ρ^J+S + (7/4) N , where g_ρ^J+S = 2. Notice here we are assuming the N seesaw neutrinos all thermalise at the phase transition.
This is something that can always be realised since all their decay parameters, defined as K_I ≡ (h^† h)_II v̅_ ew^2 /(M_I m_ eq) with the effective equilibrium neutrino mass m_ eq = [16π^5/2√(g^⋆_ρ)/(3√(5))] (v_ eq/M_ P) and v̅_ ew = v_ ew/√(2) = 174 GeV, can be larger than unity in agreement with neutrino oscillation experiments. Therefore, at the high scale phase transition the dark sector is in thermal equilibrium with the SM sector thanks to the seesaw neutrino Yukawa couplings. After the phase transition all massive particles in the dark sector, S plus the N seesaw neutrinos will quickly decay, while the massless majoron J will contribute to extra radiation. We can then track the evolution of g_ρ(T) at temperatures below T_⋆ and prior to the low scale phase transition occurring at a temperature T_⋆' and also prior to any potential process of rethermalisation of the dark sector that we will discuss later. In particular, we can focus on temperatures T ≪ m_μ∼ 100 MeV. In this case the SM contribution can be written as <cit.> g^ SM_ρ(T ≪ 100 MeV) = g^+e^±+3ν_(T) = 2 + 7 8 [4 g_^e(T) + 6 r_ν^4(T) ] , where the number of energy density ultra-relativistic degrees of freedom of electrons per single spin degree is given by g_ρ^e(T) =90 7 π^4 ∫_0^∞ dx x^2 √(x^2 + z^2) e^√(x^2 + z^2) + 1 , where z ≡ m_e/T. Above the electron mass one has g_ρ^e(T ≫ m_e) = 1, while of course g_ρ^e(T ) 0 for T / m_e 0. The neutrino-to-photon temperature ratio r_ν(T) ≡ T_ν(T)/T can be as usual calculated using entropy conservation, r_ν(T) = (2 11)^1 3 [g_s^γ + e^±(T)]^1 3 , where g_s^γ + e^±(T) = 2 + 7 2 g^e_s(T) , having defined the contribution to the number of entropy density ultra-relativistic degrees of freedom of electrons per single spin degree given by g_s^e(T) = 8 745 4π^4 ∫_0^∞ dx x^2 √(x^2 + z^2) +1 3 x^4 √(x^2 + z^2) e^√(x^2 + z^2) + 1 . where z ≡ m_e/T. One can again verify that g_s^e(T ≫ m_e) = 1 and g_s^e(T) 0 for T / m_e 0 so that one recovers the well known result r_ν (T≪ m_e) = (4/11)^1/3. With this function one can write the SM number of entropy density ultra-relativistic degrees of freedom as g^ SM_s(T ≪ m_μ ) = g^+e^±+3ν_s(T) = 2 + 7 8 [4 g_s^e(T) + 6 r^3_ν(T) ] . For T ≪ m_e one recovers the well known result g^ SM_s(T ≪ m_e ) = 43/ 11 ≃ 3.91 and g^ SM_ρ(T ≪ m_e ) ≃ 3.36. Let us now focus on the dark sector contribution. This is very easy to calculate since one has simply g_ρ^ dark(T) = g_J [r_ dark(T)]^4, where g_J =1 and where the dark sector-to-photon temperature ratio can be again calculated from entropy conservation as r_ dark(T) = [ g^ SM_s(T) g^ SM_s(T_⋆)]^1/3 . For example, for m_μ≫ T ≫ m_e one finds r_ dark(T) = (43/427)^1/3≃ 0.465. We can also rewrite g_ρ^ dark(T) in terms of the extra-effective number of neutrino species Δ N_ν(T) defined as g_ρ^ dark(T) = 7 4 Δ N_ν^ eff [r_ν(T)]^4 . For example, at m_μ≫ T ≫ m_e one finds Δ N_ν^ eff(T) = 4 7 g_J [r_ dark(T)]^4 = 4 × 43 7 × 427^4 3≃ 0.05 . Such a small contribution to extra radiation is in agreement with all cosmological constraints that we can summarise as: * From primordial helium-4 abundance measurements combined with the baryon abundance extracted from cosmic microwave background (CMB) anisotropies placing a constraint on N_ν^ eff(t) at t=t_ f∼ 1 s, the time of freeze-out of the neutron-to-proton ratio <cit.>: N_ν^ eff(t_ f) ≃ -0.1 ± 0.3 ⇒ N_ν^ eff(t_ f) ≲ 0.5 (95% C.L.) . 
* From measurements of the primordial deuterium abundance at the time of nucleosynthesis, t_ nuc≃ 310 s, corresponding to T_ nuc≃ 65 keV <cit.>: N_ν^ eff(t_ nuc) = -0.2 ± 0.3 ⇒ N_ν^ eff(t_ nuc) ≲ 0.4 (95% C.L.) . * From CMB temperature and polarization anisotropies constrain N_ν^ eff(t) at recombination, when T ≃ T_ rec≃ 0.3 eV, and the Planck collaboration finds <cit.> N_ν^ eff(t_ rec) = -0.06 ± 0.17 ⇒ N_ν^ eff(t_ rec) ≲ 0.3 (95% C.L.) . Let us now consider the low scale phase transition, assuming this occurs at a temperature T'_⋆ above neutrino decoupling, so that r_ν(T'_⋆) = 1. At such low temperatures, below 1 GeV, Yukawa couplings are ineffective to rethermalise the dark sector <cit.>. Therefore, at the phase transition, the dark sector will have a temperature T'_ dark,⋆≃ 0.465 T'_⋆ and with such a small temperature one would obtain a GW production much below the NANOGrav signal. Notice that after the phase transition the second majoron J' would contribute to dark radiation with a contribution to Δ N_ν(T) equivalent to the contribution from J in a way that Δ N_ν(T) ≃ 0.1. One could envisage some interaction able to rethermalise the dark sector so that r_ dark(T'_ dark,⋆ = 1. However, in this case one would have that the J' abundance would yield Δ N_ν(T) ≃ 8/14 ≃ 0.6 throughout BBN and recombination, in disagreement with the cosmological constraints we just reviewed.[A possible interesting caveat to this conclusion is to modify the model introducing an explicit symmetry breaking term that would give J' a mass. In this way J' might decay prior to neutron-to-proton freeze out thus circumventing all constraints. We will be back in the final remarks on this possible scenario.] For this reason we now consider, as in <cit.>, the case when the phase transitions occurs below neutrino decoupling (T'_⋆≲ 1 MeV). In this case a rethermalisation can occur between the dark sector and just the decoupled ordinary neutrino background. Prior to the ϕ' phase transition one has the interactions - L_ν- dark = i 2 ∑_i=2,3λ_i ν_i ^5 ν_i J that can thermalise the majoron J with the ordinary neutrinos and via the coupling ζ J |ϕ'|^2 also the complex scalar field ϕ'. In this way ordinary neutrinos would loose part of their energy that is transferred to the dark sector in a way that they reach a common temperature such that <cit.> r_T = (4 11)^1 3 (3.044 3.044 + N' + 12/7)^1 4 . For N' =1 one finds r_T ≃ 0.6. One also has some extra radiation equivalent to Δ N_ν^ eff≃ 3.044 (3.044 + N' + 12/7 3.044 + N' + 12/7 - N_ h)^1 3 - 3.044≃ 0.5 , where N_ h is the number of massive states that after the phase transition decay and produce the excess radiation. In our case these states are given by S' and the N' RH neutrinos so that N_h = N' + 1. For N' =1 one gets Δ N_ν^ eff≃ 0.5. This model can also ameliorate the Hubble tension <cit.> compared to the ΛCDM model since one has a simultaneous injection of extra radiation together with a reduction of the neutrino free streaming length due to the interactions between the ordinary neutrinos and the majorons. Recently a new analysis of this model has been presented in <cit.> where it has been found that the improvement is more contained than previously found. Still however it is interesting that this model can nicely link the NANOGrav signal, that we are going to discuss in the next section, to the cosmological tensions. 
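A short numerical cross-check of the radiation bookkeeping above, using only the entropy-conservation relations quoted in the text; for N' = 1 it reproduces the quoted values r_dark ≃ 0.465, r_T ≃ 0.6 and ΔN_eff ≃ 0.5.

# dark-sector-to-photon temperature ratio after e+e- annihilation, from entropy
# conservation: r_dark = (g_s^SM(T) / g_s^SM(T_*))^(1/3) with g_s^SM = 43/4 for
# m_mu >> T >> m_e and g_s^SM(T_*) = 106.75 above the electroweak scale
r_dark = (10.75 / 106.75) ** (1.0 / 3.0)
print(f"r_dark = (43/427)^(1/3) = {r_dark:.3f}")

# low-scale phase transition below neutrino decoupling: the dark sector
# rethermalises with the decoupled neutrinos only; for N' = 1, N_h = N' + 1 = 2
Np = 1
Nh = Np + 1
r_T = (4.0 / 11.0) ** (1.0 / 3.0) * (3.044 / (3.044 + Np + 12.0 / 7.0)) ** 0.25
dN_eff = 3.044 * ((3.044 + Np + 12.0 / 7.0)
                  / (3.044 + Np + 12.0 / 7.0 - Nh)) ** (1.0 / 3.0) - 3.044
print(f"r_T = {r_T:.2f},  Delta N_eff = {dN_eff:.2f}")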
§ GW SPECTRUM PREDICTIONS CONFRONTING THE NANOGRAV SIGNAL We first briefly review how the first order phase transition parameters relevant for the production of GW spectrum in the split Majoron model are calculated and refer the interested reader to Ref. <cit.> for a broader discussion. The finite-temperature effective potential for the scalar ϕ' can be calculated perturbatively at one-loop level and is the summation of zero temperature tree-level and one-loop Coleman-Weinberg potential and one-loop thermal potential. Using thermal expansion of the one-loop thermal potential, this can be converted in a dressed effective potential given by V^T_ eff(φ') ≃1 2 M_T^2 φ^'2 - (A T+C) φ^'3 + 1/4λ_T φ^'4 . Here, a zero-temperature cubic term C = ζ^2 v_0'/(2λ') is introduced due to the presence of the scalar ϕ with a high scale vacuum expectation value during the phase transition of ϕ' at a lower scale. This term significantly enhances the strength of the phase transition. The other parameters in Eq. (<ref>) are given by M_T^2≡ 2 D (T^2 - T_0^2) , where the destabilisation temperature T_0 is given by 2 D T_0^2 = λ' v_0^'2 +N' 8 π^2 M^'4 v_0^'2 -3 8 π^2λ^'2 v_0^'2 , and the dimensionless constant coefficients D and A are expressed as D = λ' 8 + N' 24 M^'2 v_0^'2 A = (3 λ')^3/2 12π . The dimensionless temperature dependent quartic coefficient λ_T is given by λ_T = λ' - N' M^'4/8 π^2 v_0^'4 loga_F T^2 e^3/2 M^'2 + 9λ^2 16 π^2 loga_B T^2 e^3/2 m_S^'2 . The cubic term is negligible at very high temperatures and the potential is symmetric with respect to ϕ'. But at lower temperatures it becomes important and a stable second minimum forms at a nonzero ϕ', and bubbles nucleate from the `false' vacuum to the `true' vacuum with a nonzero probability. We refer to T'_⋆ as the characteristic phase transition temperature and identify it with the percolation temperature when 1/e fraction of space is still in the false vacuum. Two other parameters relevant for the calculation of GW spectrum from phase transition are α and β/H_⋆, where the first denotes the strength of the phase transition and the latter describes the inverse of the duration of the phase transition. These parameters are defined as β H_⋆≃ T_⋆.d(S_3/T) dT|_T_⋆ , and ≡(T_⋆) ρ(T_⋆) , where S_3 is the spatial Euclidean action, (T_⋆) is the latent heat released during the phase transition and ρ(T_⋆) is the total energy density of the plasma, including both SM and dark sector degrees of freedom. An approximate analytical estimate for calculating S_3/T and T'_⋆ in terms of the model parameters can be found in Ref. <cit.>. In calculating α for phase transition at low temperatures, one must be careful about various cosmological constraints, as outlined in section 3. We now proceed to calculate the GW spectrum of the model relevant for nanoHZ frequencies. Assuming that first order phase transition occurs in the detonation regime where bubble wall velocity v_w > c_s = 1/√(3), the dominant contribution to the GW spectrum mainly comes from sound waves in the plasma, and is given by <cit.> h^2Ω_ sw 0(f) =1.45× 10^-6 Ω_ gw 10^-2 v_ w()/β/H_⋆[κ() α/1+α]^2 ( 106.75/ g_ρ^⋆)^1/3 S_ sw (f) Υ(α,β/H_⋆). Here the spectral shape function S_ sw (f) is given by S_ sw (f) = (f/f_ sw)^3 [7/4+3(f/f_ sw)^2]^7/2 , and the peak frequency f_ sw can be expressed as f_ sw =8.9 μ Hz 1/v_ wβ/H_⋆T_⋆/ 100 GeV( g_ρ^⋆/106.75)^1/6 . 
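The sound-wave spectrum above can be evaluated numerically as in the following sketch. The efficiency factor κ(α) and the Jouguet wall velocity are those given just below; the normalisation of H_⋆τ_sw inside the suppression factor Υ assumes the standard root-mean-square fluid velocity and follows the references cited in the text, so it should be regarded as an assumption rather than a transcription of the paper's formula. The example values are hypothetical and are not those of the benchmark tables.

import numpy as np

def kappa(alpha):
    # Jouguet efficiency factor (expression given below in the text)
    return alpha / (0.73 + 0.083 * np.sqrt(alpha) + alpha)

def v_jouguet(alpha):
    # Jouguet detonation wall velocity (expression given below in the text)
    return (np.sqrt(1.0 / 3.0) + np.sqrt(alpha ** 2 + 2.0 * alpha / 3.0)) / (1.0 + alpha)

def h2_omega_sw(f_Hz, alpha, beta_over_H, T_star_GeV, g_star=106.75, omega_gw=1e-2):
    vw, k = v_jouguet(alpha), kappa(alpha)
    # peak frequency: 8.9 muHz (1/v_w) (beta/H_*) (T_*/100 GeV) (g_*/106.75)^(1/6)
    f_sw = 8.9e-6 / vw * beta_over_H * (T_star_GeV / 100.0) * (g_star / 106.75) ** (1.0 / 6.0)
    x = f_Hz / f_sw
    shape = x ** 3 * (7.0 / (4.0 + 3.0 * x ** 2)) ** 3.5
    # suppression factor Upsilon = 1 - 1/sqrt(1 + 2 H_* tau_sw); the form of
    # H_* tau_sw below assumes Ubar_f^2 = (3/4) kappa alpha / (1 + alpha)
    ubar_f = np.sqrt(0.75 * k * alpha / (1.0 + alpha))
    H_tau = (8.0 * np.pi) ** (1.0 / 3.0) * vw / beta_over_H / ubar_f
    upsilon = 1.0 - 1.0 / np.sqrt(1.0 + 2.0 * H_tau)
    amp = (1.45e-6 * (omega_gw / 1e-2) * vw / beta_over_H
           * (k * alpha / (1.0 + alpha)) ** 2 * (106.75 / g_star) ** (1.0 / 3.0))
    return amp * shape * upsilon

f = np.logspace(-9.5, -7.0, 200)                       # Hz, around the PTA band
spectrum = h2_omega_sw(f, alpha=0.5, beta_over_H=100.0, T_star_GeV=1.0e-4)  # T_* ~ 100 keV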
We adopt Jouguet detonation solution for which the efficiency factor is given by  <cit.> κ() ≃α 0.73+0.083√(α)+ , and the bubble wall velocity v_ w() = v_ J(), where v_ J() ≡√(1/3) + √(α^2 +2α/3)/1+α . There are two suppression factors in Eq. (<ref>). One is the prefactor Ω_ gw= 10^-3–10^-2<cit.>, whose exact value depends on various simulation parameters. We will show the spectrum for Ω_ gw=10^-2 with solid/dashed lines below which there is a band covering up to Ω_ gw=10^-3 in our GW spectrum plots in Figs. <ref> and <ref>. The other suppression factor Υ(α,β/H_⋆) ≤ 1 and is given by <cit.>: Υ(α,β/H_⋆) = 1- 1√(1+ 2 H_⋆τ_ sw) , where we can write H_⋆τ_ sw = (8 π)^1 3v_ wβ/H_⋆[ 1 + κ() α]^1/2 . We refer the interested reader to Ref. <cit.> for a detailed discussion about the known issues and caveats in using the above expressions for calculating GW spectrum. Nevertheless, we show the GW spectrum for a set of benchmark points given in Table <ref> and <ref> in Figs. <ref> and <ref>, respectively. The left panel of Fig. <ref> shows GW spectra peaked in the frequency range probed by NANOGrav for N'=1. The parameters α and β decreases and increases, respectively, from the benchmark point A to C, weakening the GW peak amplitude. In Fig. <ref>, we show two more benchmark points D and E which have the same α as the benchmark point A, and slightly different β/H_⋆. The resulting spectra is very similar, showing that the maximum GW signal that can be achieved in this model in the NANOGrav frequencies is somewhat independent of N'. In the right panel of Figs. <ref> and <ref> we show the dimensionless strain h_c(f) of the GW signals, given by h_c (f) = √(2 H_0^2 Ω_ GW(f)/2π^2 f^2), where H_0 ≈ 68 km/s/Mpc is today's Hubble rate. We compare the results with the spectral slope β = dlog h_c(f)/d log f modeling the NANOGrav strain spectrum with a simple power law of the form h_c(f) = A_ GW (f/f_ PTA)^β. Expressing β in terms of another parameter γ_ GW = 3-2β, the 1σ fit to NANOGrav 15-yr data gives γ_ GW≃ 3.2 ± 0.6 around f∼ 1/(10 yr)<cit.>. This favorable range is shown with gray bands superimposed on our strain plot. We find that the spectral tilt of the phase transition signal is in tension in some range of the frequency band probed by NANOGrav. § FINAL REMARKS Here we want to draw some final remarks on the results we obtained and how these can be further extended and improved. * Our results are compatible with those presented in <cit.>. The differences can be all understood in terms of the different expression we are using to describe the GW spectrum from sound waves, the Eq. (<ref>). This supersedes the expression used in <cit.> based on <cit.>. The suppression factor taking into account the shorter duration of the stage of GW production compared to the duration of the phase transition is somehow compensated by the fact that the new expression we are using is extended to higher values of α. However, our description of bubble velocity in terms of Jouguet solutions should be clearly replaced by a more advanced one taking into account friction though we do not expect huge changes. Another important difference is that compared to <cit.> the peak frequency is more than doubled and this explains why we obtain higher values of T'_⋆∼ 100 keV for the peak value to be in the nHz range spanned by the NANOGrav signal. * Our peak amplitude can be at most h^2Ω_ gw(f) ∼ 10^-11 at the NANOGrav frequencies and cannot reproduce the whole NANOGrav signal. 
However, it can certainly help the contribution from SMBHBs to improve the fit, one of the two options for the presence of new physics suggested by the NANOGrav collaboration analysis. This is sufficient to make greatly interesting our results. Clearly a statistical analysis would be required to find the best fit parameters in our model and to quantify the statistical significance. * The values T'_⋆∼ 100 keV that we found in our solutions that enter the NANOGrav frequency range, imply a Δ N_ν^ eff≃ 0.5 at the time of nucleosynthesis, for N' =1. This is in marginal agreement with the constraint Eq. (<ref>) from primordial Deuterium measurements and, therefore, future measurements might be determinant for our solution. On the other hand, the solutions we found can ameliorate the Hubble tension. * We have not explored the possibility to have T'_⋆≫ 1 MeV with a massive majoron J' quickly decaying before big bang nucleosynthesis thus avoiding all cosmological constraints. This requires an extension of the model introducing explicit symmetry breaking terms. The introduction of these terms can potentially jeopardise the phase transition as noticed in <cit.> and for this reason it requires special care. In conclusion the split majoron model is an appealing possibility to address the NANOGrav signal and it can have connections with many different phenomenologies. It is certainly an example of how the evidence of a GW cosmological background from NANAOGrav are a clear demonstration of how the discovery of GWs have opened a new era in our quest of new physics, that should not too much let us regret for the non-evidence of new physics at the LHC (so far). §.§ Acknowledgments The authors acknowledge financial support from the STFC Consolidated Grant ST/T000775/1. JHEP99 NANOGrav:2023gor G. Agazie et al. [NANOGrav], The NANOGrav 15-year Data Set: Evidence for a Gravitational-Wave Background, Astrophys. J. Lett. 951 (2023) no.1, L8 [arXiv:2306.16213 [astro-ph.HE]]. Hellings:1983fr R. w. Hellings and G. s. Downs, UPPER LIMITS ON THE ISOTROPIC GRAVITATIONAL RADIATION BACKGROUND FROM PULSAR TIMING ANALYSIS, Astrophys. J. Lett. 265 (1983), L39-L42 NANOGrav:2023hvm A. Afzal et al. [NANOGrav], The NANOGrav 15-year Data Set: Search for Signals from New Physics, Astrophys. J. Lett. 951 (2023) no.1, L11 [arXiv:2306.16219 [astro-ph.HE]]. DiBari:2021dri P. Di Bari, D. Marfatia and Y. L. Zhou, Gravitational waves from first-order phase transitions in Majoron models of neutrino mass, JHEP 10 (2021), 193 [arXiv:2106.00025 [hep-ph]]. NANOGrav:2020bcs Z. Arzoumanian et al. [NANOGrav], The NANOGrav 12.5 yr Data Set: Search for an Isotropic Stochastic Gravitational-wave Background, Astrophys. J. Lett. 905 (2020) no.2, L34 [arXiv:2009.04496 [astro-ph.HE]]. NANOGrav:2021flc Z. Arzoumanian et al. [NANOGrav], Searching for Gravitational Waves from Cosmological Phase Transitions with the NANOGrav 12.5-Year Dataset, Phys. Rev. Lett. 127 (2021) no.25, 25 [arXiv:2104.13930 [astro-ph.CO]]. Caprini:2015zlo C. Caprini, M. Hindmarsh, S. Huber, T. Konstandin, J. Kozaczuk, G. Nardini, J. M. No, A. Petiteau, P. Schwaller and G. Servant, et al. Science with the space-based interferometer eLISA. II: Gravitational waves from cosmological phase transitions, JCAP 04 (2016), 001 [arXiv:1512.06239 [astro-ph.CO]]. DiBari:2023mwu P. Di Bari, S. F. King and M. H. Rahat, Gravitational waves from phase transitions and cosmic strings in neutrino mass models with multiple Majorons, [arXiv:2306.04680 [hep-ph]]. Chikashige:1980ui Y. Chikashige, R. N. 
Mohapatra and R. D. Peccei, Are There Real Goldstone Bosons Associated with Broken Lepton Number?, Phys. Lett. B 98 (1981), 265-268. Fujikura:2023lkn K. Fujikura, S. Girmohanta, Y. Nakai and M. Suzuki, NANOGrav Signal from a Dark Conformal Phase Transition, [arXiv:2306.17086 [hep-ph]]. Yi:2023mbm Z. Yi, Q. Gao, Y. Gong, Y. Wang and F. Zhang, [arXiv:2307.02467 [gr-qc]]. Kuang:2023ygc Y. T. Kuang, J. Z. Zhou, Z. Chang, X. Zhang and Q. H. Zhu, [arXiv:2307.02067 [astro-ph.CO]]. Figueroa:2023zhu D. G. Figueroa, M. Pieroni, A. Ricciardone and P. Simakachorn, [arXiv:2307.02399 [astro-ph.CO]]. Unal:2023srk C. Unal, A. Papageorgiou and I. Obata, [arXiv:2307.02322 [astro-ph.CO]]. Abe:2023yrw K. T. Abe and Y. Tada, [arXiv:2307.01653 [astro-ph.CO]]. Cacciapaglia:2023kat G. Cacciapaglia, D. Y. Cheong, A. Deandrea, W. Isnard and S. C. Park, [arXiv:2307.01852 [hep-ph]]. Anchordoqui:2023tln L. A. Anchordoqui, I. Antoniadis and D. Lust, [arXiv:2307.01100 [hep-ph]]. Li:2023bxy S. P. Li and K. P. Xie, [arXiv:2307.01086 [hep-ph]]. Xiao:2023dbb Y. Xiao, J. M. Yang and Y. Zhang, [arXiv:2307.01072 [hep-ph]]. Lu:2023mcz B. Q. Lu and C. W. Chiang, [arXiv:2307.00746 [hep-ph]]. Zhang:2023lzt C. Zhang, N. Dai, Q. Gao, Y. Gong, T. Jiang and X. Lu, [arXiv:2307.01093 [gr-qc]]. Konoplya:2023fmh R. A. Konoplya and A. Zhidenko, [arXiv:2307.01110 [gr-qc]]. Chowdhury:2023opo D. Chowdhury, G. Tasinato and I. Zavala, [arXiv:2307.01188 [hep-th]]. Niu:2023bsr X. Niu and M. H. Rahat, [arXiv:2307.01192 [hep-ph]]. Liu:2023ymk L. Liu, Z. C. Chen and Q. G. Huang, [arXiv:2307.01102 [astro-ph.CO]]. Westernacher-Schneider:2023cic J. R. Westernacher-Schneider, J. Zrake, A. MacFadyen and Z. Haiman, [arXiv:2307.01154 [astro-ph.HE]]. Gouttenoire:2023nzr Y. Gouttenoire, S. Trifinopoulos, G. Valogiannis and M. Vanvlasselaer, [arXiv:2307.01457 [astro-ph.CO]]. Ebadi:2023xhq R. Ebadi, S. Kumar, A. McCune, H. Tai and L. T. Wang, [arXiv:2307.01248 [astro-ph.CO]]. Ghosh:2023aum T. Ghosh, A. Ghoshal, H. K. Guo, F. Hajkarim, S. F. King, K. Sinha, X. Wang and G. White, [arXiv:2307.02259 [astro-ph.HE]]. Datta:2023vbs S. Datta, [arXiv:2307.00646 [hep-ph]]. Borah:2023sbc D. Borah, S. Jyoti Das and R. Samanta, [arXiv:2307.00537 [hep-ph]]. Barman:2023fad B. Barman, D. Borah, S. Jyoti Das and I. Saha, [arXiv:2307.00656 [hep-ph]]. Bi:2023tib Y. C. Bi, Y. M. Wu, Z. C. Chen and Q. G. Huang, [arXiv:2307.00722 [astro-ph.CO]]. Wang:2023ost S. Wang, Z. C. Zhao, J. P. Li and Q. H. Zhu, [arXiv:2307.00572 [astro-ph.CO]]. Broadhurst:2023tus T. Broadhurst, C. Chen, T. Liu and K. F. Zheng, [arXiv:2306.17821 [astro-ph.HE]]. Yang:2023qlf A. Yang, J. Ma, S. Jiang and F. P. Huang, [arXiv:2306.17827 [hep-ph]]. Eichhorn:2023gat A. Eichhorn, R. R. Lino dos Santos and J. L. Miqueleto, [arXiv:2306.17718 [gr-qc]]. Huang:2023chx H. L. Huang, Y. Cai, J. Q. Jiang, J. Zhang and Y. S. Piao, [arXiv:2306.17577 [gr-qc]]. Gouttenoire:2023ftk Y. Gouttenoire and E. Vitagliano, [arXiv:2306.17841 [gr-qc]]. Cai:2023dls Y. F. Cai, X. C. He, X. Ma, S. F. Yan and G. W. Yuan, [arXiv:2306.17822 [gr-qc]]. Inomata:2023zup K. Inomata, K. Kohri and T. Terada, [arXiv:2306.17834 [astro-ph.CO]]. Lazarides:2023ksx G. Lazarides, R. Maji and Q. Shafi, [arXiv:2306.17788 [hep-ph]]. Depta:2023qst P. F. Depta, K. Schmidt-Hoberg and C. Tasillo, [arXiv:2306.17836 [astro-ph.CO]]. Blasi:2023sej S. Blasi, A. Mariotti, A. Rase and A. Sevrin, [arXiv:2306.17830 [hep-ph]]. Bian:2023dnv L. Bian, S. Ge, J. Shu, B. Wang, X. Y. Yang and J. Zong, [arXiv:2307.02376 [astro-ph.HE]]. Franciolini:2023wjm G. Franciolini, D. 
Racco and F. Rompineve, [arXiv:2306.17136 [astro-ph.CO]]. Shen:2023pan Z. Q. Shen, G. W. Yuan, Y. Y. Wang and Y. Z. Wang, [arXiv:2306.17143 [astro-ph.HE]]. Lambiase:2023pxd G. Lambiase, L. Mastrototaro and L. Visinelli, [arXiv:2306.16977 [astro-ph.HE]]. Han:2023olf C. Han, K. P. Xie, J. M. Yang and M. Zhang, [arXiv:2306.16966 [hep-ph]]. Guo:2023hyp S. Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu and B. Zhu, [arXiv:2306.17022 [hep-ph]]. Wang:2023len Z. Wang, L. Lei, H. Jiao, L. Feng and Y. Z. Fan, [arXiv:2306.17150 [astro-ph.HE]]. Ellis:2023tsl J. Ellis, M. Lewicki, C. Lin and V. Vaskonen, [arXiv:2306.17147 [astro-ph.CO]]. Vagnozzi:2023lwo S. Vagnozzi, [arXiv:2306.16912 [astro-ph.CO]]. Fujikura:2023lkn K. Fujikura, S. Girmohanta, Y. Nakai and M. Suzuki, [arXiv:2306.17086 [hep-ph]]. Kitajima:2023cek N. Kitajima, J. Lee, K. Murai, F. Takahashi and W. Yin, [arXiv:2306.17146 [hep-ph]]. Franciolini:2023pbf G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae, [arXiv:2306.17149 [astro-ph.CO]]. Megias:2023kiy E. Megias, G. Nardini and M. Quiros, [arXiv:2306.17071 [hep-ph]]. Ellis:2023dgf J. Ellis, M. Fairbairn, G. Hütsi, J. Raidal, J. Urrutia, V. Vaskonen and H. Veermäe, [arXiv:2306.17021 [astro-ph.CO]]. Bai:2023cqj Y. Bai, T. K. Chen and M. Korwar, [arXiv:2306.17160 [hep-ph]]. Yang:2023aak J. Yang, N. Xie and F. P. Huang, [arXiv:2306.17113 [hep-ph]]. Ghoshal:2023fhh A. Ghoshal and A. Strumia, [arXiv:2306.17158 [astro-ph.CO]]. Deng:2023btv H. Deng, B. Bécsy, X. Siemens, N. J. Cornish and D. R. Madison, [arXiv:2306.17130 [gr-qc]]. Athron:2023mer P. Athron, A. Fowlie, C. T. Lu, L. Morris, L. Wu, Y. Wu and Z. Xu, [arXiv:2306.17239 [hep-ph]]. Addazi:2023jvg A. Addazi, Y. F. Cai, A. Marciano and L. Visinelli, [arXiv:2306.17205 [astro-ph.CO]]. Oikonomou:2023qfz V. K. Oikonomou, [arXiv:2306.17351 [astro-ph.CO]]. Kitajima:2023vre N. Kitajima and K. Nakayama, [arXiv:2306.17390 [hep-ph]]. Mitridate:2023oar A. Mitridate, D. Wright, R. von Eckardstein, T. Schröder, J. Nay, K. Olum, K. Schmitz and T. Trickle, [arXiv:2306.16377 [hep-ph]]. NANOGrav:2023icp A. D. Johnson et al. [NANOGrav], [arXiv:2306.16223 [astro-ph.HE]]. NANOGrav:2023gor G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L8 (2023) doi:10.3847/2041-8213/acdac6 [arXiv:2306.16213 [astro-ph.HE]]. NANOGrav:2023hfp G. Agazie et al. [NANOGrav], [arXiv:2306.16220 [astro-ph.HE]]. NANOGrav:2023hde G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L9 (2023) doi:10.3847/2041-8213/acda9a [arXiv:2306.16217 [astro-ph.HE]]. NANOGrav:2023tcn G. Agazie et al. [NANOGrav], [arXiv:2306.16221 [astro-ph.HE]]. King:2023cgv S. F. King, D. Marfatia and M. H. Rahat, [arXiv:2306.05389 [hep-ph]]. Liu:2023hte J. Liu, [arXiv:2305.15100 [astro-ph.CO]].
http://arxiv.org/abs/2307.01973v1
20230705010526
Holographic Einstein rings of a black hole with a global monopole
[ "Xiao-Xiong Zeng", "Li-Fang Li", "Peng Xu" ]
hep-th
[ "hep-th", "gr-qc" ]
=1
http://arxiv.org/abs/2307.03043v1
20230706150748
A Near-Linear Time Algorithm for the Chamfer Distance
[ "Ainesh Bakshi", "Piotr Indyk", "Rajesh Jayaram", "Sandeep Silwal", "Erik Waingarten" ]
cs.DS
[ "cs.DS", "cs.CG", "cs.GR", "cs.LG" ]
[ [ August 1, 2023 ================== For any two point sets A,B ⊂^d of size up to n, the Chamfer distance from A to B is defined as (A,B)=∑_a ∈ Amin_b ∈ B d_X(a,b), where d_X is the underlying distance measure (e.g., the Euclidean or Manhattan distance). The Chamfer distance is a popular measure of dissimilarity between point clouds, used in many machine learning, computer vision, and graphics applications, and admits a straightforward d n^2-time brute force algorithm. Further, the Chamfer distance is often used as a proxy for the more computationally demanding Earth-Mover (Optimal Transport) Distance. However, the quadratic dependence on n in the running time makes the naive approach intractable for large datasets. We overcome this bottleneck and present the first (1+ϵ)-approximate algorithm for estimating the Chamfer distance with a near-linear running time. Specifically, our algorithm runs in time nd log (n)/ϵ^2 and is implementable. Our experiments demonstrate that it is both accurate and fast on large high-dimensional datasets. We believe that our algorithm will open new avenues for analyzing large high-dimensional point clouds. We also give evidence that if the goal is to report a (1+)-approximate mapping from A to B (as opposed to just its value), then any sub-quadratic time algorithm is unlikely to exist. § INTRODUCTION For any two point sets A,B ⊂^d of sizes up to n, the Chamfer distance[This is the definition adopted, e.g., in <cit.>. Some other papers, e.g., <cit.>, replace each distance term d_X(a,b) with its square, e.g., instead of a-b_2 they use a-b_2^2. In this paper we focus on the first definition, as it emphasizes the connection to Earth Mover Distance and its relaxed weighted version in  <cit.>.] from A to B is defined as (A,B) = ∑_a ∈ Amin_b ∈ B d_X(a,b) where d_X is the underlying distance measure, such as the Euclidean or Manhattan distance. The Chamfer distance, and its weighted generalization called Relaxed Earth Mover Distance <cit.>, are popular measures of dissimilarity between point clouds. They are widely used in machine learning (e.g., <cit.>), computer vision (e.g., <cit.>) and computer graphics <cit.>. Subroutines for computing Chamfer distances are available in popular libraries, such as Tensorflow <cit.>, Pytorch <cit.> and PDAL <cit.>. In many of those applications (e.g., <cit.>) Chamfer distance is used as a faster proxy for the more computationally demanding Earth-Mover (Optimal Transport) Distance. Despite the popularity of Chamfer distance, the naïve algorithm for computing it has quadratic n^2 running time, which makes it difficult to use for large datasets. Faster approximate algorithms can be obtained by performing n exact or approximate nearest neighbor queries, one for each point in A. By utilizing the state of the art approximate nearest neighbor algorithms, this leads to (1+ϵ)-approximate estimators with running times of n (1/ϵ)^𝒪(d)log n in low dimensions <cit.> or roughly dn^1+1/2 (1+ϵ)^2 -1 in high dimensions <cit.>. Alas, the first bound suffers from exponential dependence on the dimension, while the second bound is significantly subquadratic only for relatively large approximation factors. §.§ Our Results In this paper we overcome this bottleneck and present the first (1+ϵ)-approximate algorithm for estimating Chamfer distance that has a near-linear running time, both in theory and in practice. 
Concretely, our contributions are as follows: * When the underlying metric d_X is defined by the ℓ_1 or ℓ_2 norm, we give an algorithm that runs in time nd log (n)/ϵ^2 and estimates the Chamfer distance up to 1± with 99 % probability (see Theorem <ref>). In general, our algorithm works for any metric d_X supported by Locality-Sensitive Hash functions (see Definition <ref>), with the algorithm running time depending on the parameters of those functions. Importantly, the algorithm is quite easy to implement, see Figures <ref> and <ref>. * For the more general problem of reporting a mapping g:A → B whose cost ∑_a ∈ A d_X(a,g(a)) is within a factor of 1+ from (A,B), we show that, under a popular complexity-theoretic conjecture, an algorithm with a running time analogous to that of our estimation algorithm does not exist, even when d_X(a,b)=a-b_1. Specifically, under a Hitting Set Conjecture <cit.>, any such algorithm must run in time Ω(n^2-δ) for any constant δ>0, even when the dimension d=Θ(log^2 n) and =Θ(1)/d. (In contrast, our estimation algorithm runs in near-linear time for such parameters). This demonstrates that, for the Chamfer distance, estimation is significantly easier than reporting. * We experimentally evaluate our algorithm on real and synthetic data sets. Our experiments demonstrate the effectiveness of our algorithm for both low and high dimensional datasets and across different dataset scales. Overall, it is much faster (>5x) than brute force (even accelerated with KD-trees) and both faster and more sample efficient (5-10x) than simple uniform sampling. We demonstrate the scalability of our method by running it on billion-scale Big-ANN-Benchmarks datasets <cit.>, where it runs up to 50x faster than optimized brute force. In addition, our method is robust to different datasets: while uniform sampling performs reasonably well for some datasets in our experiments, it performs poorly on datasets where the distances from points in A to their neighbors in B vary significantly. In such cases, our algorithm is able to adapt its importance sampling probabilities appropriately and obtain significant improvements over uniform sampling. We begin by stating our main algorithmic result: Given d-dimensional point sets A and B, each of size n, and 0<<1, there exists an algorithm that runs in ndlog(n)/^2 time and outputs an estimate ẑ such that with probability at least 9/10, (1-)(A,B) ≤ẑ≤ (1+)(A,B). A natural extension of our result is to ask for a sub-quadratic time algorithm that outputs a mapping from A to B such that the resulting map is a relative error approximation to (A,B). We rule out the existence of such an algorithm, under a well-studied complexity-theoretic conjecture. Under the Hitting Set Conjecture (see Conjecture <ref>), there is no sub-quadratic time algorithm for outputting a map f: A → B such that ∑_a ∈ A a - f(a) is a (1+) relative-error approximation to (A,B), for sufficiently small . § ALGORITHM AND ANALYSIS In this section, we establish our main result for estimating Chamfer distance: Given as input two datasets A, B ⊂ℝ^d such that |A|, |B| ≤ n, and an accuracy parameter 0< <1, runs in time ndlog(n)/^2 and outputs an estimator η such that with probability at least 99/100, 1-(A,B) ≤η≤ 1+(A,B), when the underlying metric is Euclidean (ℓ_2) or Manhattan (ℓ_1) distance. For ease of exposition, we make the simplifying assumption that the underlying metric is Manhattan distance, i.e. d_X (a,b) = a-b_1. 
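For reference, a direct quadratic-time implementation of the definition under this ℓ1 assumption is given below; it is the brute-force baseline against which the near-linear estimator is compared. Function and variable names are illustrative.

import numpy as np

def chamfer_bruteforce(A, B):
    # exact CH(A, B) = sum_{a in A} min_{b in B} ||a - b||_1, in O(|A| |B| d) time
    total = 0.0
    for a in A:
        total += np.abs(B - a).sum(axis=1).min()
    return total

# example usage on random point sets
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 32))
B = rng.normal(size=(1200, 32))
print(chamfer_bruteforce(A, B))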
Our algorithm still succeeds whenever the underlying metric admits a locality-sensitive hash function (see Definition <ref>). Uniform vs Importance Sampling. A natural algorithm for estimating (A,B) proceeds by uniform sampling: sample an a ∈ A uniformly at random and explicitly compute min_b ∈ Ba-b_1. In general, we can compute the estimator ẑ for (A,B) by averaging over s uniformly chosen samples, resulting in runtime nds. It is easy to see that the resulting estimator is un-biased, i.e. ẑ = (A,B). However, if a small constant fraction of elements in A contribute significantly to (A,B), then s = Ω(n) samples could be necessary to obtain, say, a 1% relative error estimate with constant probability. Since each sample requires a linear scan to find the nearest neighbor, this would result in a quadratic runtime. While such an approach has good empirical performance for well-behaved datasets, it does not work for data sets where the distribution of the distances from points in A to their nearest neighbors in B is skewed. Further, it is computationally prohibitive to verify the quality of the approximation given by uniform sampling. Towards proving Theorem <ref>, it is paramount to obtain an algorithm that works regardless of the structure of the input dataset. A more nuanced approach is to perform importance sampling where we sample a ∈ A with probability proportional to its contribution to (A,B). In particular, if we had access to a distribution, _a, over elements a∈ A such that, min_b∈ Ba-b_1 ≤_a ≤λmin_b∈ Ba-b_1, for some parameter λ>1, then sampling Oλ samples results in an estimator ẑ that is within 1% relative error to the true answer with probability at least 99%. Formally, we consider the estimator defined in Algorithm <ref>, where we assume access to (A, B), a sub-routine which receives as input A and B and outputs estimates _a ∈_≥ 0 for each a ∈ A which is guaranteed to be an upper bound for min_b ∈ B a - b_1. Based on the values {_a }_a ∈ A we construct an importance sampling distribution supported on A. As a result, we obtain the following lemma: Let n, d ∈ and suppose A, B are two subsets of ^d of size at most n. For any T∈, the output of (A, B, T) satisfies [ ] = (A, B), [ ] ≤1/T·(A, B)^2 ( /(A, B) - 1 ), for from Line <ref> in Figure <ref>. The expectations and variance are over the randomness in the samples of Line <ref> of (A, B, T). In particular, [ | - (A,B) | ≥·(A, B) ] ≤1/^2 · T(/(A, B) - 1). The proof follows from a standard analysis of importance sampling and is deferred to Appendix <ref>. Observe, if ≤λ(A,B), it suffices to sample T = Oλ/^2 points in A, leading to a running time of Ondλ/^2. Obtaining importance sampling probabilities. It remains to show how to implement the (A, B) subroutine to obtain the distribution over elements in A which is a reasonable over-estimator of the true probabilities. A natural first step is to consider performing an log n-approximate nearest neighbor search (NNS): for every a' ∈ A, find b' ∈ B satisfying a' - b'_1/min_b ∈ Ba' - b_1 = log n. This leads to the desired guarantees on {_a}_a ∈ A. Unfortunately, the state of the art algorithms for log n-approximate NNS, even under the ℓ_1 norm, posses extraneous (log n) factors in the runtime, resulting in a significantly higher running time. These factors are even higher for the ℓ_2 norm. Therefore, instead of performing a direct reduction to approximate NNS, we open up the approximate NNS black-box and give a simple algorithm which directly satisfies our desired guarantees on {_a}_a ∈ A. 
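Before turning to how the per-point upper bounds are produced, the sampling-and-reweighting step itself is easy to sketch. The code below is illustrative (our own names; it assumes the upper bounds have already been computed by the crude-estimation subroutine) and mirrors the unbiased importance-sampling estimator analyzed in the lemma above.

```python
import numpy as np

def estimate_chamfer(A, B, D, T, rng=None):
    """Importance-sampling estimate of CH(A, B) under the l1 metric.

    D[i] is assumed to upper-bound min_b ||A[i] - b||_1; points of A are
    sampled with probability proportional to D[i] and re-weighted so that
    the estimator is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    D = np.asarray(D, dtype=float)
    probs = D / D.sum()
    idx = rng.choice(len(A), size=T, p=probs)
    est = 0.0
    for i in idx:
        nn_dist = np.abs(A[i] - B).sum(axis=1).min()  # exact NN for the sample
        est += nn_dist / (T * probs[i])               # unbiased re-weighting
    return est
```

Sampling T of order λ/ϵ^2 points, where λ bounds the ratio between the sum of the upper bounds and (A,B), keeps the relative error below ϵ with constant probability, as in the lemma.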
To begin with, we assume that the aspect ratio of all pair-wise distances is bounded by a fixed polynomial, (n/) (we defer the reduction from an arbitrary input to one with polynomially bounded aspect ratio to Lemma <ref>). We proceed via computing log(n/) different (randomized) partitions of the dataset A ∪ B. The i-th partition, for 1 ≤ i ≤log (n/), can be written as A ∪ B = ∪_j 𝒫^i_j and approximately satisfies the property that points in A ∪ B that are at distance at most 2^i will be in the same partition 𝒫^i_j with sufficiently large probability. To obtain these components, we use a family of locality-sensitive hash functions, whose formal properties are given in Definition <ref>. Intuitively, these hash functions guarantee that: * For each a' ∈ A, its true nearest neighbor b' ∈ B falls into the same component as a' in the i_0-th partition, where 2^i_0 = Θ(a' - b'_1) [Recall we assumed all distances are between 1 and (n) resulting in only log n different partitions], and * Every other extraneous b b' is not in the same component as a' for each i < i_0. It is easy to check that any hash function that satisfies the aforementioned guarantees yields a valid set of distances {_a}_a ∈ A as follows: for every a' ∈ A, find the smallest i_0 for which there exists a b' ∈ B in the same component as a' in the i_0-th partition. Then set _a' = a'-b'_1. Intuitively, the b' we find for any fixed a' in this procedure will have distance that is at least the closest neighbor in B and with good probability, it won't be too much larger. A caveat here is that we cannot show the above guarantee holds for 2^i_0 = Θ(a' - b'_1). Instead, we obtain the slightly weaker guarantee that, in the expectation, the partition b' lands in is a log n-approximation to the minimum distance, i.e. 2^i_0 = Θ(log n ·a' - b'_1). Therefore, after running (A, B), setting λ=log n suffices for our nd log(n)/^2 time algorithm. We formalize this argument in the following lemma: Let (X, d_X) be a metric space with a locality-sensitive hash family at every scale (see Definition <ref>). Consider two subsets A, B ⊂ X of size at most n and any ∈ (0, 1) satisfying 1 ≤min_a ∈ A, b ∈ B a ≠ b d_X(a,b) ≤max_a ∈ A, b ∈ B d_X(a,b) ≤(n/). Algorithm <ref>, (A,B), outputs a list of (random) positive numbers {_a }_a ∈ A which satisfy the following two guarantees: * With probability 1, every a ∈ A satisfies _a ≥min_b ∈ B d_X(a, b). * For every a ∈ A, [_a] ≤log n·min_b∈ Bd_X(a,b). Further, Algorithm <ref>, runs in time dnlog(n/) time, assuming that each function used in the algorithm can be evaluated in d time. Given the lemmas above, it is straight-forward to complete the proof of Theorem <ref>. First, we reduce to the setting where the aspect ratio is (n/) (see Lemma <ref> for a formal reduction). We then invoke Lemma <ref> and apply Markov's inequality to obtain a set of distances _a such that with probability at least 99/100, for each a ∈ A, min_b∈ B a- b _1 ≤_a and ∑_a∈ A_a ≤log(n)A, B. We then invoke Lemma <ref> and set the number of samples, T=log(n)/^2. The running time of our algorithm is then given by the time of (A, B), which is O(nd log(n/)), and the time needed to evaluate the estimator in Lemma <ref>, requiring ndlog(n)/^2 time. Refer to Section <ref> for the full proof. § EXPERIMENTS We perform an empirical evaluation of our Chamfer distance estimation algorithm. Summary of Results Our experiments demonstrate the effectiveness of our algorithm for both low and high dimensional datasets and across different dataset sizes. 
Overall, it is much faster than brute force (even accelerated with KD-trees). Further, our algorithm is both faster and more sample-efficient than uniform sampling. It is also robust to different datasets: while uniform sampling performs well for most datasets in our experiments, it performs poorly on datasets where the distances from points in A to their neighbors in B vary significantly. In such cases, our algorithm is able to adapt its importance sampling probabilities appropriately and obtain significant improvements over uniform sampling. §.§ Experimental Setup We use three different experimental setups, small scale, outlier, and large scale. They are designed to `stress test' our algorithm, and relevant baselines, under vastly different parameter regimes. The datasets we use are summarized in Table <ref>. For all experiments, we introduce uniform sampling as a competitive baseline for estimating the Chamfer distance, as well as (accelerated) brute force computation. All results are averaged across 20+ trials and 1 standard deviation error bars are shown when relevant. Small Scale These experiments are motivated from common use cases of Chamfer distance in the computer vision and NLP domains. In our small scale experiments, we use two different datasets: (a) the ShapeNet dataset, a collection of point clouds of objects in three dimensions <cit.>. ShapeNet is a common benchmark dataset frequently used in computer graphics, computer vision, robotics and Chamfer distance is a widely used measure of similarity between different ShapeNet point clouds <cit.>. (b) We create point clouds of words from text documents from <cit.>. Each point represents a word embedding obtained from the word-to-vec model of <cit.> in ^300 applied to the Federalist Papers corpus. As mentioned earlier, a popular relaxation of the common Earth Mover Distance is exactly the (weighted) version of the Chamfer distance <cit.>. Since ShapenNet is in three dimensions, we implement nearest neighbor queries using KD-trees to accelerate the brute force baseline as KD-trees can perform exact nearest neighbor search quickly in small dimensions. However, they have runtime exponential in dimension meaning they cannot be used for the text embedding dataset, for which we use a standard naive brute force computation. For both these datasets, we implement our algorithms using Python 3.9.7 on an M1 MacbookPro with 32GB of RAM. We also use an efficient implementation of KD trees in Python and use Numpy and Numba whenever relevant. Since the point clouds in the dataset have approximately the same n value, we compute the symmetric version (A,B) + (B,A). For these experiments, we use the ℓ_1 distance function. Outliers This experiment is meant to showcase the robustness of our algorithm. We consider two point clouds, A and B, each sampled from Gaussian points in ^100 with identity covariance. Furthermore, we add an "outlier" point to A equal to 0.5n ·1, where 1 is the all ones vector. This example models scenarios where the distances from points in A to their nearest neighbors in B vary significantly, and thus uniform sampling might not accurately account for all distances, missing a small fraction of large ones. Large Scale The purpose of these experiments is to demonstrate that our method scales to datasets with billions of points in hundreds of dimensions. We use two challenging approximate nearest neighbor search datasets: DEEP1B <cit.> and Microsoft Turing-ANNS <cit.>. 
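The outlier instance just described is easy to reproduce; the sketch below (our own code, with illustrative sizes and seed) plants the single far-away point in A that uniform sampling misses unless it draws Ω(n) samples.

```python
import numpy as np

def make_outlier_instance(n=10_000, d=100, seed=0):
    """Synthetic 'Outliers' instance: A, B ~ N(0, I_d), plus one planted
    outlier 0.5 * n * (1, ..., 1) appended to A, which carries a large
    share of CH(A, B) and is missed by uniform sampling w.h.p."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, d))
    B = rng.normal(size=(n, d))
    outlier = 0.5 * n * np.ones((1, d))
    return np.vstack([A, outlier]), B
```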
For the DEEP1B and Microsoft Turing-ANNS datasets, the set A is the query data associated with each dataset. Due to the asymmetric sizes, we compute (A,B). These datasets are normalized to have unit norm and we consider the ℓ_2 distance function. These datasets are too large to handle using the prior configurations. Thus, we use a proprietary in-memory parallel implementation of the SimHash algorithm, which is an ℓ_2 LSH family for normalized vectors according to Definition <ref> <cit.>, on a shared virtual compute cluster with 2x64 core AMD Epyc 7763 CPUs (Zen3) with 2.45GHz - 3.5GHz clock frequency, 2TB DDR4 RAM and 256 MB L3 cache. We also utilize parallelization on the same compute cluster for the naive brute force search. §.§ Results Small Scale First we discuss configuring parameters. Recall that in our theoretical results, we use log n different scales of the LSH family in the crude-estimation subroutine, which then computes (over)estimates of the nearest neighbor distance from points in A to B (in near-linear time); these estimates are then used for importance sampling in the second stage. Concretely, for the ℓ_1 case this LSH family corresponds to imposing log n grids with progressively smaller side lengths. In our experiments, we treat the number of levels of grids to use as a tuneable parameter in our implementation and find that a very small number suffices for high quality results in the importance sampling phase. Figure <ref> (b) shows that only using 3 grid levels is sufficient for the crude estimates _a to be within a factor of 2 away from the true nearest neighbor values for the ShapeNet dataset, averaged across different point clouds in the dataset. Thus for the rest of the Small Scale experiments, we fix the number of grid levels to be 3. Figure <ref> (a) shows the sample complexity vs accuracy trade-offs of our algorithm, which uses importance sampling, compared to uniform sampling. Accuracy is measured by the relative error to the true value. We see that our algorithm possesses a better trade-off as we obtain the same relative error using only 10 samples as uniform sampling does using 50+ samples, resulting in at least a 5x improvement in sample complexity. For the text embedding dataset, the performance gap between our importance sampling algorithm and uniform sampling grows even wider, as demonstrated by Figure <ref> (b), leading to a > 10x improvement in sample complexity. In terms of runtimes, we expect the brute force search to be much slower than either importance sampling or uniform sampling. Furthermore, our algorithm has the overhead of first estimating the values _a for a ∈ A using an LSH family, which uniform sampling does not. However, this is compensated by the fact that our algorithm requires far fewer samples to obtain accurate estimates. Indeed, Figure <ref> (a) shows the average time of 100 Chamfer distance computations between randomly chosen pairs of point clouds in the ShapeNet dataset. We set the number of samples for uniform sampling and importance sampling (our algorithm) such that they both output estimates with (close to) 2% relative error; this means we used 100 samples for importance sampling and 500 for uniform sampling. Note that our runtime includes the time to build our LSH data structures. The brute force KD-tree algorithm (which reports exact answers) is approximately 5x slower than our algorithm. At the same time, our algorithm is 50% faster than uniform sampling.
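For completeness, the accelerated exact baseline referenced above can be sketched with SciPy KD-trees; this is our own illustrative code, not the paper's implementation, and it is only competitive in low dimensions.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_symmetric_kdtree(A, B, p=1):
    """Exact CH(A, B) + CH(B, A) via KD-tree nearest-neighbor queries.

    Fast for low-dimensional clouds such as 3-d ShapeNet shapes; KD-trees
    degrade in high dimensions, where plain brute force is used instead.
    """
    d_ab, _ = cKDTree(B).query(A, k=1, p=p)  # NN in B for every a in A
    d_ba, _ = cKDTree(A).query(B, k=1, p=p)  # NN in A for every b in B
    return d_ab.sum() + d_ba.sum()
```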
For the Federalist Papers dataset (Figure <ref> (b)), our algorithm only required 20 samples to get a 2% relative error approximation, whereas uniform sampling required at least 450 samples. As a result, our algorithm achieved 2x speedup compared to uniform sampling. Outliers We performed similar experiments as above. Figure <ref> (c) shows the sample complexity vs accuracy trade off curves of our algorithm and uniform sampling. Uniform sampling has a very large error compared to our algorithm, as expected. While the relative error of our algorithm decreases smoothly as the sample size grows, uniform sampling has the same high relative error. In fact, the relative error will stay high until the outlier is sampled, which typically requires Ω(n) samples. Large Scale We consider two modifications to our algorithm to optimize the performance of on the two challenging datasets that we are using; namely, note that both datasets are standard for benchmarking billion-scale nearest neighbor search. First, in the algorithm, when computing _a for a∈ A, we search through the hash buckets h_1(a),h_2(a),… containing a in increasing order of i (i.e., smallest scale first), and retrieve the first W (window size) distinct points in B from these buckets. Then, the whole process is repeated k times, with k independent LSH data structures, and _a is set to be the distance from a to the closest among all Wk retrieved points. Note that previously, for our smaller datasets, we set _a to be the distance to the first point in B colliding with a, and repeated the LSH data structure once, corresponding to W=k=1. In our figures, we refer to these parameter choices as k × W and test our algorithm across several choices. For the DEEP and Turing datasets, Figures <ref> (d) and <ref> (e) show the sample complexity vs relative error trade-offs for the best parameter choice (both 64 × 10^6) compared to uniform sampling. Qualitatively, we observe the same behavior as before: importance sampling requires fewer samples to obtain the same accuracy as uniform sampling. Regarding the other parameter choices, we see that, as expected, if we decrease k (the number of LSH data structures), or if we decrease W (the window size), the quality of the approximations {_a}_a ∈ A decreases and importance sampling has worse sample complexity trade-offs. Nevertheless, for all parameter choices, we see that we obtain superior sample complexity trade-offs compared to uniform sampling, as shown in Figure <ref>. A difference between these parameter choices are the runtimes required to construct the approximations {_a}_a ∈ A. For example for the DEEP dataset, the naive brute force approach (which is also optimized using parallelization) took approximately 1.3· 10^4 seconds, whereas the most expensive parameter choice of 64 × 10^6 took approximately half the time at 6.4 × 10^3 and the cheapest parameter choice of 8 × 10^5 took 225 seconds, leading to a 2x-50x factor speedup. The runtime differences between brute force and our algorithm were qualitative similar for the Turing dataset. Similar to the small scale dataset, our method also outperforms uniform sampling in terms of runtime if we require they both output high quality approximations. If we measure the runtime to get a 1% relative error, the 16× 2 · 10^5 version of our algorithm for the DEEP dataset requires approximately 980 samples with total runtime approximately 1785 seconds, whereas uniform sampling requires > 1750 samples and runtime >2200 seconds, which is >23% slower. 
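The k × W retrieval heuristic can be illustrated with a plain SimHash sketch for unit-norm vectors. The code below is a deliberately simplified, single-granularity version (it omits the sweep over bucket scales described above; all names and default parameters are ours), but it conveys how the number of repetitions k and the window size W trade preprocessing time against the quality of the crude estimates.

```python
import numpy as np
from collections import defaultdict

def build_simhash_tables(B, n_bits=16, k=8, rng=None):
    """k independent SimHash (sign-random-projection) tables over B."""
    rng = np.random.default_rng() if rng is None else rng
    tables = []
    for _ in range(k):
        G = rng.normal(size=(n_bits, B.shape[1]))  # random hyperplanes
        codes = B @ G.T > 0                        # one bit pattern per point
        buckets = defaultdict(list)
        for j, code in enumerate(codes):
            buckets[code.tobytes()].append(j)
        tables.append((G, buckets))
    return tables

def windowed_crude_estimate(a, B, tables, window=32):
    """Distance from a to the closest of (at most) the first `window`
    points of B found in a's bucket of each table; +inf if none collide."""
    best = np.inf
    for G, buckets in tables:
        code = G @ a > 0
        for j in buckets.get(code.tobytes(), [])[:window]:
            best = min(best, np.linalg.norm(a - B[j]))
    return best
```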
This runtime gap widens further if we desire approximations with even smaller relative error, as the overhead of obtaining the approximations {_a}_a ∈ A is increasingly dominated by the time needed to compute the exact answer for our samples. Additional Experimental Results We perform additional experiments to show the utility of our approximation algorithm for the Chamfer distance in downstream tasks. For the ShapeNet dataset, we show that we can efficiently recover the true exact nearest neighbor of a fixed point cloud A in Chamfer distance among a large collection of different point clouds. In other words, our algorithm is useful for finding the `nearest neighboring point cloud'. Recall that the ShapeNet dataset contains approximately 5 · 10^4 different point clouds. We consider the following simple (and standard) two-step pipeline: (1) use our algorithm to compute an approximation of the Chamfer distance from A to every other point cloud B in our dataset. More specifically, compute an approximation to (A,B) + (B,A) for all B using 50 samples and the same parameter configurations as the small scale experiments. Then filter the dataset of point clouds and prune down to the top k closest point cloud candidates according to our approximate distances. (2) Find the closest point cloud among the top k candidates via exact computation. We measure the accuracy of this pipeline via the standard recall@k measure, which computes the fraction of times the exact nearest neighbor B of A, averaged over multiple choices of A, is within the top k candidates. Figure <ref> (a) shows that the true exact nearest neighbor of A, that is the point cloud B which minimizes (A,B) + (B,A) among our collection of multiple point clouds, is within the top 30 candidates >98% of the time (averaged over multiple different choices of A). This represents a more than 1000x reduction in the number of point clouds we do exact computation over compared to the naive brute force method, demonstrating the utility of our algorithm for downstream tasks. § LOWER BOUND FOR REPORTING THE ALIGNMENT We presented an algorithm that, in time nd log(n)/ϵ^2, produces a (1+ϵ)-approximation to (A,B). It is natural to ask whether it is also possible to report a mapping g:A → B whose cost ∑_a ∈ Aa-g(a)_1 is within a factor of 1+ϵ from (A,B). (Our algorithm relies on random sampling and thus does not give such a mapping.) This section shows that, under a popular complexity-theoretic conjecture called the Hitting Set Conjecture <cit.>, such an algorithm does not exist. For simplicity, we focus on the case when the underlying metric d_X is induced by the Manhattan distance, i.e., d_X(a,b)= a-b_1. The argument is similar for the Euclidean distance, Euclidean distance squared, etc. To state our result formally, we first define the Hitting Set (HS) problem. The input to the problem consists of two sets of vectors A, B ⊆{0,1}^d, and the goal is to determine whether there exists some a ∈ A such that a · b ≠ 0 for every b ∈ B. If such an a ∈ A exists, we say that a hits B. It is easy to see that the Hitting Set problem can be solved in time n^2 d. The Hitting Set Conjecture <cit.> postulates that this running time is close to optimal. Specifically: Suppose d = Θ(log^2 n). Then for every constant δ > 0, no randomized algorithm can solve the Hitting Set problem in n^2-δ time. Our result can now be phrased as follows.
Let T(N,D,) be the running time of an algorithm ALG that, given sets of A",B" ⊂{0,1}^D of sizes at most N, reports a mapping g:A" → B" with cost (1+)A",B"), for D = Θ(log^2 N) and =Θ(1)/D. Assuming the Hitting Set Conjecture, we have that T(N,D,) is at least Ω(N^2-δ) for any constant δ>0. § CONCLUSION We present an efficient approximation algorithm for estimating the Chamfer distance up to a 1+ε factor in time nd log(n)/ε^2. The result is complemented with a conditional lower bound which shows that reporting a Chamfer distance mapping of similar quality requires nearly quadratic time. Our algorithm is easy to implement in practice and compares favorably to brute force computation and uniform sampling. We envision our main tools of obtaining fast estimates of coarse nearest neighbor distances combined with importance sampling can have additional applications in the analysis of high-dimensional, large scale data. alpha § DEFERRED ANALYSIS FROM SECTION <REF> The proof follows from a standard analysis of importance sampling. The fact that our estimator is unbiased holds from the definition of _ℓ, since we are re-weighting samples according to the probability with which they are sampled in (in particular, the estimator is unbiased for all distributions where _a > 0 for all a ∈ A). The bound on the variance is then a simple calculation: [ ] ≤1/T·([ ∑_a ∈ A(/_a) min_b ∈ Ba - b_2^2] - (A,B)^2) ≤1/T·[ ∑_a ∈ Amin_b ∈ Ba-b_2 ·] - (A,B)^2/T ≤1/T·(A,B)^2 ( /(A,B) - 1) . The final probability bound follows from Chebyshev's inequality. Locality Sensitive Hashing at every scale. We now discuss how to find such partitions. For the ℓ_1 distance, each partition i is formed by imposing a (randomly shifted) grid of side length 2^i on the dataset. Note that while the grid partitions the entire space ^d into infinitely many components, we can efficiently enumerate over the non empty components which actually contain points in our dataset. To this end, we introduce the following definition: There exists a fixed constant c_1 > 0 and a parameterized family (r) of functions from X to some universe U such that for all r > 0, and for every x, y ∈ X * Close points collide frequently: _∼(r)[ (x) ≠(y) ] ≤x -y _1/r, * Far points collide infrequently: _∼(r)[ (x) = (y) ] ≤exp(- c_1 · x - y_1 /r). We are now ready to make this approach concrete via the following lemma: Let (X, d_X) be a metric space with a locality-sensitive hash family at every scale (see Definition <ref>). Consider two subsets A, B ⊂ X of size at most n and ∈ (0, 1) satisfying 1 ≤min_a ∈ A, b ∈ B a ≠ b d_X(a,b) ≤max_a ∈ A, b ∈ B d_X(a,b) ≤(n/). Algorithm <ref>, (A,B), outputs a list of (random) positive numbers {_a }_a ∈ A which satisfy the following two guarantees: * With probability 1, every a ∈ A satisfies _a ≥min_b ∈ B d_X(a, b). * For every a ∈ A, [_a] ≤log n·min_b∈ Bd_X(a,b). Further, Algorithm <ref>, runs in time dnlog(n/) time, assuming that each function used in the algorithm can be evaluated in d time. Finally, we show that it always suffices to assume bounded aspect ratio: Given an instance A, B⊂ℝ^d such that |A|, |B| ≤ n, and 0<<1 there exists an algorithm that runs in time ndlog(n)/^2 and outputs a partition A_1, A_2, … A_T of A and B_1, B_2, … B_T of B such that T= n and for each t ∈ [T], 1 ≤min_a ∈ A_t , b ∈ B_t a ≠ ba-b_1 ≤max_a ∈ A_t, b ∈ B_ta-b_1 ≤(n/). Further, 1-A,B) ≤∑_t ∈ [T]A_t,B_t) ≤1+A,B). We defer the proofs of Lemma <ref> and Lemma <ref> to sub-sections <ref> and <ref> respectively. 
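To make the randomly shifted grids concrete, here is a compact sketch (our own illustrative code) of how they yield the crude upper bounds: it assumes all pairwise distances already lie in [1, poly(n/ϵ)], scans the scales r = 2^i from finest to coarsest, and records for each a ∈ A the distance to the first point of B sharing a's grid cell.

```python
import numpy as np
from collections import defaultdict

def crude_upper_bounds(A, B, max_dist, rng=None):
    """Crude over-estimates D[a] >= min_b ||a - b||_1 for every a in A,
    via randomly shifted grids of side 2^i (an l1 LSH family)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    D = np.full(n, np.inf)
    for i in range(int(np.ceil(np.log2(max_dist))) + 1):
        r = 2.0 ** i
        shift = rng.uniform(0.0, r, size=d)
        cells = defaultdict(list)  # grid cells occupied by points of B
        for j, b in enumerate(B):
            cells[tuple(np.ceil((b + shift) / r).astype(np.int64))].append(j)
        for a_idx, a in enumerate(A):
            if np.isfinite(D[a_idx]):
                continue  # already matched at a finer scale
            key = tuple(np.ceil((a + shift) / r).astype(np.int64))
            if key in cells:
                D[a_idx] = np.abs(a - B[cells[key][0]]).sum()
        if np.isfinite(D).all():
            break
    missing = ~np.isfinite(D)  # never collided at any scale (rare)
    D[missing] = np.abs(A[missing] - B[0]).sum(axis=1)
    return D
```

By construction every D[a] is the distance to some point of B, so the first (deterministic) guarantee of the lemma holds; the expected overestimation factor is controlled by the collision probabilities of the hash family.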
We are now ready to complete the proof of Theorem <ref>. Observe, by Lemma <ref>, we can partition the input into pairs (A_t, B_t)_t∈[T] such that each pair has aspect ratio at most (n/) and the (A,B) is well-approximated by the direct sum of (A_t, B_t). Next, repeating the construction from Lemma <ref>, and applying Markov's inequality, we have list {_a}_a∈ A such that with probability at least 99/100, for all a∈ A, _a ≥min_b∈ Ba -b _1 and = ∑_a∈ A_a ≤log(n)A,B. Invoking Lemma <ref> with the aforementioned parameters, and T = log(n)/^2 suffices to obtain an estimator η which is a (1±) relative-error approximation to (A,B). Since we require computing the exact nearest neighbor for at most log(n)/^2 points, the running time is dominated by ndlog(n)/^2, which completes the proof. §.§ Analysis for In this subsection, we focus analyze the algorithm and provide a proof for Lemma <ref>. A construction of hash family satisfying Definition <ref> is given in Section <ref>. Each function from the family can be evaluated in d time per point. We are now ready to prove Lemma <ref>. We note that the first item is trivially true, since (A,B) always sets _a to be some distance between a and a point in B. Thus, this distance can only be larger than the true minimum distance. The more challenging aspect is obtaining an upper bound on the expected value of _a. Consider a fixed setting of a ∈ A, and the following setting of parameters: b = _b' ∈ B d_X(a, b') γ_a = d_X(a,b) i_0 = ⌈log_2 γ_a ⌉, and notice that since γ_a is between 1 and (n/), we have i_0 is at least 0 and at most L = log (n/). We will upper bound the expectation of _a by considering a parameter c > 1 (which will later be set to log n)), and integrating over the probability that _a is at least γ, for all γ≥ c ·γ_a: [ _a ] ≤ c ·γ_a + ∫_cγ_a^∞[ _a ≥γ] dγ. We now show that for any a ∈ A, the probability that _a is larger than γ can be appropriately bounded. Consider the following two bad events. * _1(γ): This event occurs when there exists a point b' ∈ B at distance at least γ from a and there exists an index i ≤ i_0 for which _i(a) = _i(b'). * _2(γ): This event occurs when there exists an index i > i_0 such that: * For every i' ∈{ i_0, …, i-1}, we have _i'(a) ≠_i'(b) for all b ∈ B. * There exists b' ∈ B at distance at least γ from a where _i(a) = _i(b'). We note that whenever (A,B) set _a larger than γ, one of the two events, _1(γ) or _2(γ), must have been triggered. To see why, suppose (A,B) set _a to be larger than γ because a point b' ∈ B with d_X(a,b') ≥γ happened to have _i(a) = _i(b'), for an index i ∈{0, …, L}, and that the index i was the first case where it happened. If i ≤ i_0, this is event _1(γ). If i > i_0, we claim event _2(γ) occurred: in addition to _i(a) = _i(b'), it must have been the case that, for all i' ∈{i_0, …, i - 1}, _i'(a) ≠_i'(b) (otherwise, i would not be the first index). We will upper bound the probability that either event _1(γ) or _2(γ) occurs. We make use of the tail bounds as stated in Definition <ref>. The upper bound for the probability that _1(γ) is simple, since it suffices to union bound over at most n points at distance larger than γ, using the fact that r_i_0 = 2^i_0 is at most 2·γ_a: [ _1(γ) ] ≤ n ·exp( - c_1 ·γ/2γ_a). We will upper bound the probability that event _2(γ) a bit more carefully. We will use the fact that for all i, the parameter r_i is always between 2^i - i_0γ_a and 2^i-i_0 + 1γ_a. 
[ _2(γ) ] ≤∑_i > i_0(∏_i'=i_0^i-1γ_a/r_i') ·max{ n ·exp( - c_1 ·γ/r_i), 1} ≤∑_i > i_0 2^- (0 + … + (i-1-i_0))max{exp( ln(n) -c_1 ·γ/2^i - i_0 + 1·γ_a) , 1 } ≤∑_k ≥ 0 2^-Ω(k^2)·max{exp( ln(n) - c_1 ·γ/2^k+2·γ_a), 1 }. With the above two upper bounds in place, we upper bound (<ref>) by dividing the integral into the two contributing summands, from _1(γ) and _2(γ), and then upper bounding each individually. Namely, we have ∫_γ:c γ_a^∞[ _a ≥γ]dγ ≤∫_γ:cγ_a^∞[ _1(γ) ] dγ + ∫_γ:cγ_a^∞[ _2(γ) ]dγ. The first summand can be simply upper bounded by using the upper bound from (<ref>), where we have ∫_γ:cγ_a^∞[ _1(γ) ] dγ≤∫_γ:cγ_a^∞ n exp( - c_1 ·γ/2 γ_a) dγ≤n · 2γ_a/c_1· e^-c_1 c / 2≤γ_a for a large enough c = Θ(log n). The second summand is upper bounded by the upper bound in (<ref>), while being slightly more careful in the computation. In particular, we first commute the summation over k and the integral; then, for each k ≥ 0, we define α_k := 2^k+3ln(n) ·γ_a / c_1, and we break up the integral into the interval [c ·γ_a, α_k] (if α_k < c γ_a, the interval is empty), as well as [α_k, ∞): ∫_γ:cγ_a^∞[ _2(γ) ] dγ ≤∑_k≥ 0 2^-Ω(k^2)∫_γ:c γ_a^∞max{exp( ln(n) - c_1 ·γ/2^k+2·γ_a), 1 } dγ ≤∑_k≥ 0 2^-Ω(k^2)( (α_k - c ·γ_a )^+ + ∫_γ:α_k^∞exp( -c_1/2·γ/2^k+2γ_a)dγ), where in the second inequality, we used the fact that the setting of α_k, the additional ln(n) factor in the exponent can be removed up to a factor of two. Thus, ∫_γ:cγ_a^∞[ _2(γ) ]dγ≤∑_k≥ 0 2^-Ω(k^2)( γ_a ·2^klog n + γ_a ·2^k) = log n·γ_a. Finally, the running time is dominated by the cost of evaluating log (n/) functions on n points in dimension d. Since each evaluation takes d time, the bound follows. §.§ Locality-Sensitive Hashing at Every Scale For any r ≥ 0 and any d ∈, there exists a hash family (r) such that for any two points x, y ∈^d, _∼(r)[ (x) ≠(y)] ≤x-y_1/r _∼(r)[ (x) = (y) ] ≤exp( - x- y_1/r). In addition, for any ∼(r), (x) may be computed in d time. The construction proceeds in the following way: in order to generate a function ^d →^d sampled from (r), * We sample a random vector ∼ [0, r]^d. * We let (x) = ( ⌈x_1 + _1/r⌉, ⌈x_2 + _2/r⌉, … , ⌈x_d + _d/r⌉). Fix x, y ∈^d. If (x) ≠(y), there exists some coordinate k ∈ [d] on which (x)_k ≠(y)_k. This occurs whenever (i) |x_k - y_k| > r, or (ii) |x_k - y_k|≤ r, but _k happens to fall within an interval of length |x_k - y_k|, thereby separating x from y. By a union bound, _∼(r)[ (x) ≠(y) ] ≤∑_k=1^d |x_k - y_k|/r = x-y_1/r. On the other hand, in order for (x) = (y), it must be the case that every |x_k - y_k| ≤ r, and in addition, the threshold _k always avoids an interval of length |x_k - y_k|. The probability that this occurs is _∼(r)[ (x) = (y)] = ∏_k=1^d max{ 0, 1 - |x_k - y_k|/r}≤exp( - ∑_k=1^d |x_k-y_k|/r) ≤exp(-x-y_1/r). Extending the above construction to ℓ_2 follows from embedding the points A ∪ B into ℓ_1 via a standard construction. Let ∈ (0,1) and define : ^d →^k by (x)_i = 1/β k∑_j=1^d Z_ijx_j, i= 1, …, k where β = √(2/π). Then for every vector x ∈^d, we have [(1-)x_2 ≤(x)_1 ≤ (1+) x_2] ≥ 1-e^c ^2 k, where c > 0 is a constant. The map ^d →^k with k = log n/^2 gives an embedding of A ∪ B into ℓ_1^k of distortion (1±) with high probability. Formally, with probability at least 1 - 1/n over the draw of with t = log n/^2, every a ∈ A and b ∈ B satisfies (1-) a - b_2 ≤(a) - (b) _1 ≤ (1+) a - b_2. This embedding has the effect of reducing ℓ_2 to ℓ_1 without affecting the Chamfer distance of the mapped points by more than a (1±)-factor. 
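The ℓ_2-to-ℓ_1 embedding in the last display is a one-line matrix product; the sketch below (our own code) draws the Gaussian matrix Z and applies the map with β = √(2/π). Applying the same draw to A and B reduces the ℓ_2 case to the ℓ_1 case while perturbing the Chamfer distance by at most a (1±ϵ) factor once k is of order log(n)/ϵ^2.

```python
import numpy as np

def l2_to_l1_embedding(X, k, rng=None):
    """phi(x)_i = (1 / (beta * k)) * sum_j Z_ij x_j with Z_ij iid N(0, 1)
    and beta = sqrt(2 / pi); preserves l2 distances as l1 distances up to
    a (1 +/- eps) factor w.h.p. once k is of order log(n) / eps**2."""
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.normal(size=(k, X.shape[1]))
    beta = np.sqrt(2.0 / np.pi)
    return (X @ Z.T) / (beta * k)
```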
In addition, the embedding incurs an extra additive factor of nd log n/^2 to the running time in order to perform the embedding for all points. §.§ Reduction to (n/) Aspect Ratio for ℓ_p, p ∈ [1,2] In this section, we discuss how to reduce to the case of a (n/) aspect ratio. The reduction proceeds by first obtaining a very crude estimate of (A, B) (which will be a (n)-approximation), applying a locality-sensitive hash function in order to partition points of A and B which are significantly farther than (n) ·(A, B). Finally, we add log n coordinates and add random vector of length (/n) ·(A, B) in order to guarantee that the minimum distance is at least ( /n) ·(A, B) without changing (A,B) significantly. Partitioning Given Very Crude Estimates. In particular, suppose that with an nd + nlog n time computation, we can achieve a value of ∈_≥ 0 which satisfies (A, B) ≤≤ c·(A, B), with high probability (which we will show how to do briefly with c = (n)). Then, consider sampling ∼(c n ·) and partitioning A and B into equivalence classes according to where they hash to under . The probability that two points at distance farther than log n· cn· collide under is small enough to union bound over at most n^2 many possible pairs of vectors. In addition, the probability that there exists a ∈ A for which b ∈ B minimizing a - b_p satisfies (a) ≠(b) is at most (A, B) / (cn ·) ≤ 1/n. This latter inequality implies that computing the Chamfer distance of the corresponding parts in the partition and summing them is equivalent to computing (A, B). Getting Very Crude Estimates. We now show how to obtain a (n)-approximation to (A, B) in time nd + n log n for points in ^d with ℓ_p distance. This is done via the p-stable sketch of Indyk <cit.>. In particular, we sample a vector ∈^d by independent p-stable random variables (for instance, is a standard Gaussian vector for p = 2 and a vector of independent Cauchy random variables for p = 1). We may then compute the scalar random variables {⟨ a, ⟩}_a ∈ A and {⟨ b, ⟩}_b ∈ B, which give a projection onto a one-dimensional space. By p-stability, for any a ∈ A and b ∈ B, the distribution of ⟨ a, ⟩ - ⟨ b, ⟩ is exactly as a - b_p ·', where ' is an independent p-stable random variable. Hence, we will have that for every a ∈ A and b ∈ B, a - b_p/(n)≤| ⟨ a , ⟩ - ⟨ b, ⟩| ≤a - b_p ·(n), with probability 1-1/(n) and hence ({⟨ a, ⟩}_a ∈ A, {⟨ b , ⟩}_b ∈ B), which is computable by 1-dimensional nearest neighbor search (i.e., repeatedly querying a binary search tree), gives a (n)-approximation to (A,B). Adding Distance Finally, we now note that / c gives us a lower bound on (A, B). Suppose we append log n coordinates to each point and in those coordinates, we add a random vector of norm · / (cn). With high probability, every pair of points is now at distance at least · / (cn). In addition, the Chamfer distance between the new set of points increases by at most an additive / c, which is at most ·(A, B), proving Lemma <ref>. § DEFERRED ANALYSIS FROM SECTION <REF> To set the notation, we let n_A=|A|, n_B=|B|. The proof mimics the argument from <cit.>, which proved a similar hardness result for the problem of computing the Earth-Mover Distance. In particular, Lemma 4.3 from that paper shows the following claim. For any two sets A, B ⊆{0,1}^d, there is a mapping f:{0,1}^d →{0,1}^d" and a vector v ∈{0,1}^d", such that d"=d and for any a ∈ A, b ∈ B: * If a · b=0 then f(a)-f(b)_1 = 4d+2, * If a · b>0 then f(a)-f(b)_1 ≥ 4d+4, * f(a)-v_1 = 4d+4. 
Furthermore, each evaluation f(a) can be performed in d time. We will be running ALG on sets A"={f(a): a ∈ A} and B"={f(b): b ∈ B}∪{v}. It can be seen that, given a reported mapping g, we can assume that for all a" ∈ A" we have a"-g(a")_1 ≤ 4d+4, as otherwise g can map a” to v. If for all a ∈ A there exists b ∈ B such that a · b=0, i.e., A does not contain a hitting vector, then the optimal mapping cost is n_A(4d+2). More generally, let H be the set of vectors a ∈ A hitting B, and let h=|H|. It can be seen that (A",B") = h (4d+4) + (n_A -h)(4d+2) = n_A(4d+2) + 2h. Thus, if we could compute (A",B") exactly, we would determine if h=0 and solve HS. In what follows we show that even an approximate solution can be used to accomplish this task as long as is small enough. Let t=c log (n)/ for some large enough constant c>1. Consider the algorithm HittingSet(A, B) that solves HS by invoking the algorithm ALG. It can be seen that the first three steps of the algorithm take at most ntd time. Furthermore, if the algorithm terminates, it reports the correct answer, as only vectors a that are guaranteed not to be hitting are removed in the recursion. It remains to bound the total number and cost of the recursive steps. To this end, we will show that, with high probability, in each recursive call we have |A-M| ≤ |A|/2. This will yield a total time of log n [(n t d) + T(n+1, d, )]. Since t=c log (n)/, d = log^2 n and = Θ(1)/d, it follows that the time is at most n log^5 (n) + log (n) T(n+1, d, ), and the theorem follows. To show that |A-M| ≤ |A|/2, first observe that if the algorithm reaches step (2), then for a large enough constant c>1 it holds, with high probability, that the set H of hitting vectors a has cardinality at most · n_A, as otherwise one such vector would have been sampled. Thus, the subroutine ALG returns a map where the vast majority of the points f(a) have been matched to a point f(b) such that f(a)-f(b)_1=4d+2. More formally, the cost of the mapping g is C = ∑_a" ∈ A"a" -g(a")_1 ≤ (1+) [ n_A(4d+2) + 2 |H|] ≤ (1+) [ n_A(4d+2) + 2 n_A] ≤ n_A(4d+2) + 4 n_A (d+2) ≤ n_A(4d+2) + n_A where in the last step we used the assumption about . Denote m=|M|. Observe that the cost C of the mapping g can be alternatively written as: C = m(4d+2) + (n_A-m)(4d+4) = n_A (4d+4)-2m This implies m=(n_A (4d+4) - C)/2. Since we showed earlier that C ≤ n_A (4d+2)+n_A, we conclude that m=(n_A (4d+4) - C)/2 ≥ (2 n_A - n_A)/2 = n_A/2 . Thus, |A-M|=n_A-m ≤ n_A/2, completing the proof.
http://arxiv.org/abs/2307.02673v1
20230705220446
Panel Data Nowcasting: The Case of Price-Earnings Ratios
[ "Andrii Babii", "Ryan T. Ball", "Eric Ghysels", "Jonas Striaukas" ]
econ.EM
[ "econ.EM", "stat.AP", "stat.CO", "stat.ML" ]
Panel Data Nowcasting: The Case of Price-Earnings Ratios[We benefited from comments by Rudy De Winne, Geert D'Haene, Max Farrell, Christian Hafner, Peter Reinhard Hansen, Dacheng Xiu, and participants at the 2021 SoFiE UC San Diego conference, 26th International Panel Data Conference, Data Science and Machine Learning workshop at the University of Amsterdam, the 2022 IAAE Conference, King's College, London, and the 2022 Vienna Copenhagen Conference on Financial Econometrics. This work was in part completed when Jonas Striaukas was a Research Fellow at Fonds de la Recherche Scientifique FNRS.] Andrii BabiiUniversity of North Carolina at Chapel Hill - Gardner Hall, CB 3305 Chapel Hill, NC 27599-3305. Email: [email protected]. Ryan T. BallStephen M. Ross School of Business, University of Michigan, 701 Tappan Street, Ann Arbor, MI 48109. Email: [email protected]. Eric GhyselsDepartment of Economics and Kenan-Flagler Business School, University of North Carolina–Chapel Hill. Email: [email protected]. Jonas StriaukasDepartment of Finance, Copenhagen Business School, Frederiksberg, Denmark. Email: [email protected]. August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= The paper uses structured machine learning regressions for nowcasting with panel data consisting of series sampled at different frequencies. Motivated by the problem of predicting corporate earnings for a large cross-section of firms with macroeconomic, financial, and news time series sampled at different frequencies, we focus on the sparse-group LASSO regularization which can take advantage of the mixed frequency time series panel data structures. Our empirical results show the superior performance of our machine learning panel data regression models over analysts' predictions, forecast combinations, firm-specific time series regression models, and standard machine learning methods. Keywords: Corporate earnings, nowcasting, data-rich environment, high-dimensional panels, mixed frequency data, textual news data, sparse-group LASSO. empty § INTRODUCTION Nowcasting is intrinsically a mixed frequency data problem as the object of interest is a low-frequency data series — observed say quarterly — whereas real-time information — daily, weekly or monthly — during the quarter can be used to assess and potentially continuously update the state of the low-frequency series, or put differently, nowcast the series of interest. Traditional methods being used for nowcasting rely on dynamic factor models which treat the underlying low-frequency series of interest as a latent process with high-frequency data noisy observations. These models are naturally cast in a state-space form, and inference can be performed using standard techniques (in particular the Kalman filter, see <cit.> for a recent survey). Things get more complicated when we are operating in a data-rich environment and we have many target variables. 
Put differently, we are no longer interested in nowcasting a single key series such as the GDP growth where we could devote a lot of resources to that particular series. A good example is corporate earnings nowcasting for a large cross-section of corporate firms. The fundamental value of equity shares is determined by the discounted value of future payoffs. Every quarter investors get a glimpse of firms' potential payoffs with the release of corporate earnings reports. In a data-rich environment, stock analysts have many indicators regarding future earnings that are available much more frequently. <cit.> took a first stab at automating the process using MIDAS regressions. Since their original work, much progress has been made on machine learning (ML) regularized mixed frequency regression models. In the context of earnings, we are potentially dealing with a large set of individual firms for which there are many predictors. From a practical point of view, this is clearly beyond the realm of nowcasting using state space models. In the current paper, we significantly expand the tools of nowcasting in a data-rich environment by exploiting panel data structures. Panel data regression models are well suited for the firm-level data analysis as both the time series and cross-sectional dimensions can be exploited. In such models, time-invariant firm-specific effects are typically used to capture cross-sectional heterogeneity in the data. This is combined with regularized regression machine learning methods which are becoming increasingly popular in economics and finance as a flexible way to model predictive relationships via variable selection. We focus on the panel data regressions in a high-dimensional data setting where the number of covariates could be large and potentially exceed the available sample size. This may happen when the number of firm-specific characteristics, such as textual analysis news data or firm-level stock returns, is large, and/or the number of aggregates, such as market returns, macro data, etc., is large. Our paper relates to several existing papers in the literature. <cit.> consider low-dimensional dynamic mixed frequency panel data models but do not deal with high-dimensional data situations in the context of nowcasting or forecasting. Similarly, <cit.> consider nowcasting with a mixed-frequency VAR panel data model, but not in the context of a high-dimensional data-rich environment that we are interested in here. <cit.> introduce the sparse-group LASSO (sg-LASSO) regularization machine learning methods for heavy-tailed dependent panel data regressions potentially sampled at different time series frequencies. They derive oracle inequalities for the pooled and fixed effects models, the debiased inference for pooled regression, and consider an application to the Granger causality testing. In this paper, we explore how to use their framework for nowcasting large panels of low-frequency time series. We focus on nowcasting current quarter firm-specific price-earnings ratios (henceforth P/E ratios). This means we focus on evaluating model-based within-quarter predictions for very short horizons. It is widely acknowledged that P/E ratios are a good indicator of the future performance of a company and, therefore, are used by analysts and investment professionals to base their decisions on which stocks to pick for their investment portfolios. Typically investors rely on consensus forecasts of earnings made by a pool of analysts. 
We, therefore, choose such consensus forecasts as the benchmark for our proposed machine learning methods. <cit.> and <cit.> documented that analysts tend to focus on their firm/industry when making earnings predictions while not fully taking into account the impact of macroeconomic events. <cit.> tested formally in a high-dimensional data setting the hypothesis that systematic and predictable errors occur in analyst forecasts and confirmed empirically that they leave money on the table. The analysis in the current paper is therefore an logical extension of this prior work. In addition, we also compare our proposed new methods with the MIDAS regression forecast combination approach used by <cit.> as well as a simple random walk model. Our high-frequency regressors include traditional macro and financial series as well as non-standard series generated by textual analysis of financial news. We consider structured pooled and fixed effects sg-LASSO panel data regressions with mixed frequency data (sg-LASSO MIDAS). By “structured” we mean that the ML procedure is set up such that it recognizes the time series and panel structure of the data. This is a departure from standard ML which is rooted in a tradition of i.i.d. covariates and therefore time series and panel data structures are not recognized. For the purpose of comparison, we include elastic net estimators in our analysis, as a representative example of standard ML. In our empirical analysis we study nowcasting the firm-level P/E ratio for a large set of firms. Moreover, we decompose the (log of) the P/E ratio into the return for firm i and analyst prediction errors. Therefore, nowcasting the log P/E ratio could also be achieved via nowcasting its two components. The decomposition corresponds to the distinction between analyst assessments of firm i's earnings and market/investor assessments of the firm. Our empirical results can be summarized as follows. Predictions based on analyst consensus exhibit significantly higher mean squared forecast errors (MSEs) compared to model-based predictions. These model-based predictions involve either direct log P/E ratio nowcasts or their individual components. The MSE for the random walk model and analysts' concensus are quite similar, and therefore random walk predictions are outperformed by the model-based ones as well. A substantial proportion of firms (approximately 60%) exhibit low MSE values, indicating a high level of prediction accuracy. However, there are a few firms for which the MSEs are relatively larger, suggesting lower prediction performance for these specific cases. Comparing direct log P/E ratio nowcasts versus those based on its components, we observe a substantial improvement in prediction accuracy when using the individual components. This improvement is consistently evident across individual, pooled, and fixed effects regression models. Moreover, the sparsity patterns differ significantly across the direct versus component prediction models. Our framework allows us to go beyond providing quarterly nowcasts and generate daily updates of earnings series. Leveraging the daily influx of information throughout the quarter, we continuously re-estimate our models and produce nowcast updates as soon as new data becomes available. 
We report the distribution of Mean Squared Errors (MSEs) across firms for five distinct nowcast horizons: 20-day, 15-day, 10-day, and 5-day ahead, as well as the end of the quarter and show that as the horizons become shorter, both the median and upper quartile of MSEs decrease. The sg-LASSO estimator we employ in our study is well-suited for incorporating grouped fixed effects. This approach involves grouping firm-specific intercepts based on either statistical procedures or economic reasoning, as outlined in <cit.>. In our analysis, we utilize the Fama French industry classification to form 10 distinct groups for grouping fixed effects. Our findings suggest that grouped fixed effects strike a better balance between capturing heterogeneity and pooled parameters, resulting in more accurate nowcast predictions. These results support the notion that incorporating group fixed effects enhances the overall performance of our forecasting model. Next we address the challenge of missing earnings data, which can complicate the analysis. We examine the performance of parameter imputation methods in computing nowcasts, see, e.g, <cit.>, even when earnings and/or earnings forecasts are missing for certain observations in the sample. The results obtained through parameter imputation outperform the analyst consensus nowcasts in terms of prediction accuracy. The paper is organized as follows. Section <ref> introduces the models and estimators. A simulation study reporting the finite sample nowcasting performance of our proposed methods appears in Section <ref>. The results of our empirical application analyzing price-earnings ratios for a panel of individual firms are reported in Section <ref>. Section <ref> concludes. All technical details and detailed data descriptions appear in the Appendix and the Online Appendix. § HIGH-DIMENSIONAL MIXED FREQUENCY PANEL DATA In this section, we describe the methodological approach of the paper. Motivated by our application, we will refer to the cross-sectional observations as firms, the low-frequency observations as quarterly while the high-frequency observations are daily or monthly. However, the notation presented in this section is generic and can correspond to other entities and frequencies. The objective is to nowcast {y_i,t:i∈[N],t∈[T]} (where for a positive integer p, we put [p]={1,2,…,p}), in our case a panel of P/E ratios (or its decomposition into returns and analyst forecast errors) for N firms observed at T time periods. The covariates consist of K time-varying predictors measured potentially at higher frequencies {x_i,t-j/n_k^H,k: i∈[N],t∈[T],j=0,…,n^L_kn_k^H-1,k∈[K]}, where n_k^H is the number of high-frequency observations for the k^ th covariate in a low-frequency time period t, and n^L_k is the number of low-frequency time periods used as lags. For instance, n^L_k=1 corresponds in our application to a quarter of high-frequency lags used as covariates and n_k^H=3 corresponds to monthly data with 3 month of data available per quarter. Note that we can think of mixtures of say annual, quarterly, monthly and weekly data, and therefore n_k^H represents different high frequency sampling frequencies and associated lags n_k^L n_k^H. In our empirical analysis we examine three types of regression model specifications: (a) regularized single equation regressions for each individual firm, (b) regularized panel regressions with pooling, and (c) regularized panel regressions with fixed effects. 
Hence, in (a) we do not explore the panel structure of the data, whereas in (b) and (c) we do. To discuss the model specifications, we focus here on (b) and (c), keeping in mind that the single regression case is a straightforward simplification of the panel regression models. Consider the mixed frequency panel data regression for y_i,t|τ, that is observation i for low-frequency nowcasting y at time t using information up to τ: y_i,t|τ = α_i + ∑_k=1^Kψ(L^1/n_k^H;β_k)x_i,τ,k + u_i,t|τ, where α_i is the entity-specific intercept (depending on τ but we suppress this detail to simplify notation), and ψ(L^1/n_k^H;β_k)x_i,τ,k = 1/k_max∑_j=0^k_max-1β_j,kL^j/n_k^Hx_i,τ,k where k_max is the maximum lag length which may depend on the covariate k, and for each high frequency covariate x_i,τ,k we have the most up to date information available at time τ. This may imply that for some high frequency regressors this is stale information as they have not been updated yet, but presumably at least some of the high frequency data are fresh real-time information at the time τ the nowcast is being made. For instance, in our quarterly/monthly application we can have τ = (t - 1) + 1/3 in which case we nowcast quarter t with information available at the end of the first month of that quarter. In this example, some high frequency series for the first month may be available while some may not due to say publication lags. Likewise, with τ = (t - 1) + 2/3 we can revise the previous nowcast with one extra month of information, which taking into account publication lags may include observations from the first month as the most recent releases. It should parenthetically be noted that for τ ≤ t - 1, we are dealing with a forecasting situation and therefore our analysis applies to both nowcasting and - ceteris paribus - forecasting. To reduce the dimensionality of the high-frequency lag polynomial, we follow the MIDAS ML literature, see <cit.>, and estimate a weight function ω parameterized by a relatively small number of coefficients L ψ(L^1/n_k^H;β_k)x_i,τ,k = 1/k_max∑_j=0^k_max-1ω(j/n_k^H;β_k)x_i,τ,k, where the MIDAS weight function is ω(s;β_k) = ∑_l=0^L-1β_l,kw_l(s), (w_l)_l≥ 0 is a collection of L approximating functions, called the dictionary, and β_k∈^L is the unknown parameter. An example of a dictionary used in the MIDAS ML literature is the set of orthogonal Legendre polynomials. To streamline notation it will be convenient to assume, without loss of generality, a common lag length, i.e. k̅_max = k_max ∀ k ∈ [K]. The linear in parameters dictionaries map the MIDAS regression to a standard linear regression framework. In particular, define 𝐱_i = (X_i,1W,…,X_i,KW), where for each k∈[K], X_i,k = (x_i,τ-j/n_k^H,k,j = 0, …, k̅_max - 1)_τ∈[T] is a T×k̅_max matrix of covariates and k̅_max W = (w_l(j/n_k^H; β_k)_0≤ l≤ L-1, 0 ≤ j≤k̅_max is a k̅_max× L matrix corresponding to the dictionary. In addition, let 𝐲_i = (y_i,t|τ, t, τ∈ [T])^⊤ and 𝐮_i = (u_i,t|τ,t, τ∈ [T])^⊤. The regression equation after stacking time series observations for each firm i∈[N] is as follows 𝐲_i = ια_i + 𝐱_iβ + 𝐮_i, where ι∈^T is the all-ones vector and β∈^LK is a vector of slope coefficients. Lastly, put 𝐲 = (𝐲_1^⊤,…, 𝐲_N^⊤)^⊤, 𝐗=(𝐱_1^⊤, …, 𝐱_N^⊤)^⊤, and 𝐮 = (𝐮_1^⊤,…,𝐮_N^⊤)^⊤. Then the regression equation after stacking all cross-sectional observations is 𝐲 = Bα + 𝐗β + 𝐮, where B=I_N⊗ι, I_N is N× N identity matrix, and ⊗ is the Kronecker product. 
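To illustrate how the dictionary turns the MIDAS polynomial into a linear regression, the sketch below builds a Legendre dictionary W and the aggregated block X_{i,k}W. It is our own illustrative code; the exact lag grid and normalization conventions may differ from the authors' implementation.

```python
import numpy as np

def legendre_dictionary(n_lags, L):
    """n_lags x L dictionary W: shifted Legendre polynomials of degrees
    0, ..., L-1 evaluated on the high-frequency lag grid j / n_lags."""
    grid = np.arange(n_lags) / n_lags
    cols = [np.polynomial.legendre.Legendre.basis(l, domain=[0.0, 1.0])(grid)
            for l in range(L)]
    return np.column_stack(cols)

def midas_block(X_lags, W):
    """Map a (T x n_lags) matrix of high-frequency lags of one covariate
    into its (T x L) MIDAS-aggregated block (1 / n_lags) * X_lags @ W."""
    return X_lags @ W / X_lags.shape[1]
```

Stacking one such block per covariate column-wise produces the regressor matrix 𝐱_i, so that the low-dimensional vector β_k of Legendre coefficients replaces the full set of high-frequency lag coefficients.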
Given that the number of potential predictors K can be large, additional regularization can improve the predictive performance in small samples. To that end, we take advantage of the sg-LASSO regularization, suggested by <cit.>. The fixed effects sg-LASSO estimator ρ̂=(α̂^⊤,β̂^⊤)^⊤ solves min_(a,b)∈^N+p𝐲 - Ba - 𝐗b _NT^2 + 2 λΩ(b), where Ω is the sg-LASSO regularizing functional. It is worth stressing that the design matrix 𝐗 does not include the intercept and that we do not penalize the fixed effects which are typically not sparse. In addition, ._NT^2 = |.|^2/(NT) is the empirical norm and Ω(b) = γ|b|_1 + (1-γ)b_2,1, is a regularizing functional. It is a linear combination of the ℓ_1 LASSO and ℓ_2,1 group LASSO norms. Note that for a group structure 𝒢 described as a partition of [p]={1,2,…,p}, the group LASSO norm is computed as b_2,1=∑_G∈𝒢|b_G|_2, while |.|_q denotes the usual ℓ_q norm. The group LASSO penalty encourages sparsity between groups whereas the ℓ_1 LASSO norm promotes sparsity within groups and allows us to learn the shape of the MIDAS weights from the data. The parameter γ∈[0,1] determines the relative weights of the ℓ_1 (sparsity) and the ℓ_2,1 (group sparsity) norms, while the amount of regularization is controlled by the regularization parameter λ≥ 0. In Section <ref>, we called our approach structured ML because the group structure allows us to embed the time series structure of the data. More specifically, these structures are represented by groups covering lagged dependent variables and groups of lags for a single (high-frequency) covariate. Throughout the paper, we assume that groups have fixed size, and the group structure is known by the econometrician. Both are reasonable assumptions to make in the context of our empirical application. For pooled regressions, we assume that all entities share the same intercept parameter α_1=…=α_N=α. The pooled sg-LASSO estimator ρ̂=(α̂,β̂^⊤)^⊤ solves min_r=(a,b)∈^1+p𝐲 - aι - 𝐗b_NT^2 + 2 λΩ(r). Pooled regressions are attractive since the effective sample size NT can be huge, yet the heterogeneity of individual time series may be lost. If the underlying series have a substantial heterogeneity over i∈[N], then taking this into account might reduce the projection error and improve the predictive accuracy. <cit.> provide the theoretical analysis of predictive performance of regularized panel data regressions with the sg-LASSO regularization, including as special cases (a) standard LASSO, (b) group LASSO regularizations as well as (c) generic high-dimensional panels not involving mixed frequency data. Finally, <cit.> also develop the debiased inferential methods and Granger causality tests for pooled panel data regressions. § MONTE CARLO EXPERIMENTS It is not clear that the aforementioned theory is of practical use in the context of nowcasting using modestly sized samples of data. For this reason, we investigate in this section the finite sample nowcasting performance of the machine learning methods covered so far. We consider the standard (unstructured) elastic net with UMIDAS (called Elnet-U), where UMIDAS refers to unconstrained MIDAS proposed by <cit.> in a classic non-ML context, and sg-LASSO with MIDAS. Both methods require selecting two tuning parameters λ and γ. In the case of sg-LASSO, γ is the relative weight of LASSO and group LASSO penalties while in the case of the elastic net γ interpolates between LASSO and ridge. In both cases we report results on a grid γ∈{0, 0.2, …, 1}. 
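The roles of the two tuning parameters are easiest to see in code. The sketch below evaluates the sg-LASSO penalty and the fixed effects criterion (with unpenalized intercepts, as in the text); it is illustrative, uses our own names, and does not implement an optimizer.

```python
import numpy as np

def sg_lasso_penalty(b, groups, gamma):
    """Omega(b) = gamma * |b|_1 + (1 - gamma) * sum_{G in groups} ||b_G||_2."""
    l1 = np.abs(b).sum()
    l21 = sum(np.linalg.norm(b[np.asarray(g)]) for g in groups)
    return gamma * l1 + (1.0 - gamma) * l21

def fixed_effects_criterion(y, X, firm_idx, alpha, b, groups, lam, gamma):
    """||y - B alpha - X b||^2_NT + 2 * lam * Omega(b); the firm-specific
    intercepts alpha enter through B alpha = alpha[firm_idx] and are not
    penalized, only the slope vector b is."""
    resid = y - alpha[firm_idx] - X @ b
    return resid @ resid / len(y) + 2.0 * lam * sg_lasso_penalty(b, groups, gamma)
```

Here `groups` collects, for instance, the L Legendre coefficients of each high-frequency covariate into one group, so that γ interpolates between within-group sparsity (LASSO) and sparsity across covariates (group LASSO).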
In addition to evaluating the performance over the grid of γ tuning parameter values, we need to select the λ tuning parameter. To do so, we consider several approaches. First, we adapt the K-fold cross-validation to the panel data setting. To that end, we resample the data by blocks respecting the time-series dimension and creating folds based on cross-sectional units instead of the pooled sample. We use the 5-fold cross-validation both in the simulation experiments and the empirical application. We also consider the following three information criteria: BIC, AIC, and corrected AIC (AICc) of <cit.>. Assuming that y_i,t|x_i,t are i.i.d. draws from N(α_i + x_i,t^⊤β, σ^2), the log-likelihood of the sample is ℒ(α,β,σ^2) ∝ - 1/(2σ^2)∑_i=1^N∑_t=1^T(y_i,t - α_i - x_i,t^⊤β)^2. Then, the BIC criterion is BIC = 𝐲 - μ̂- 𝐗β̂_NT^2/σ̂^2 + log(NT)/NT×df, where df denotes the degrees of freedom, σ̂^2 is a consistent estimator of σ^2, μ̂=α̂ι for the pooled regression, and μ̂=Bα̂ for the fixed effects regression. The degrees of freedom are estimated as df = |β̂|_0+1 for the pooled regression and df = |β̂|_0+N for the fixed effects regression, where |.|_0 is the ℓ_0-norm defined as the number of non-zero coefficients; see <cit.> for more details. The AIC is computed as AIC = 𝐲 - μ̂- 𝐗β̂_NT^2/σ̂^2 + 2/NT×df, and the corrected Akaike information criterion is AICc = 𝐲 - μ̂- 𝐗β̂_NT^2/σ̂^2 + 2df/(NT - df - 1). The AICc is typically a better choice when p is large relative to the sample size. We report the results for each of the tuning parameter selection criteria for λ, along with the grid choice for γ. §.§ Simulation Design To assess the predictive performance of pooled panel data models, we simulate the data from the following DGP with a quarterly/monthly frequency mix in mind and k̅_max = k_max with n_k^H = n^H ∀ k: y_i,t|τ = α + ∑_k=1^K k̅_max^-1∑_j=0^k̅_max-1ω(j/n^H;β_k)x_i,τ-j/n^H,k + u_i,t|τ, where i∈[N], t∈[T], α is the common intercept, k̅_max^-1∑_j=0^k̅_max-1ω(j/n^H;β_k) is the weight function for the k-th high-frequency covariate, and the error term is either u_i,t|τ∼_i.i.d.N(0,1) or u_i,t|τ∼_i.i.d.student-t(5). We are interested in a quarterly/monthly data mix, and use four quarters of data for the high-frequency regressors which covers 12 high-frequency lags for each regressor. In terms of information sets we start with τ = t - 1, which corresponds to a prediction setting, and then have τ = t - 1 + 1/3, i.e. nowcasting with one month's worth of information. We set the number of relevant high-frequency regressors K = 6. The high-frequency regressors are generated as K i.i.d. realizations of the univariate autoregressive (AR) process x_h = ρ x_h-1+ ε_h, where ρ=0.6 and either ε_h∼_i.i.d.N(0,1) or ε_h∼_i.i.d.student-t(5), where h denotes the high-frequency sampling. We rely on a commonly used weighting scheme in the MIDAS literature, namely ω(s;β_k) for k=1,2,…,6 are determined by beta densities respectively equal to Beta(1,3) for k=1,4, Beta(2,3) for k=2,5, and Beta(2,2) for k=3,6; see <cit.> or <cit.> for further details. The MIDAS regressions are estimated using Legendre polynomials of degree L=3. We consider DGPs featuring pooled panels and fixed effects. For the pooled panel regression DGPs we simulate the common intercept as α∼Uniform(-4,4). For the fixed effects models the individual fixed effects are simulated as α_i ∼_i.i.d.Uniform(-4,4) and are kept fixed throughout the experiment.
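As an illustration of this simulation design, the sketch below generates one panel draw with Beta-density MIDAS weights and AR(1) high-frequency regressors. It is not the authors' code: the weights are normalized to sum to one (absorbing the 1/k̅_max scaling), the intercept is omitted, only the τ = t - 1 information set is used, and function names are ours.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)

def beta_weights(k_max, a, b):
    """Beta-density MIDAS weights on an equispaced interior grid of (0, 1)."""
    s = (np.arange(1, k_max + 1) - 0.5) / k_max
    w = beta_dist.pdf(s, a, b)
    return w / w.sum()                     # normalized; absorbs the 1/k_max scaling

def simulate_panel(N=25, T=50, K=6, k_max=12, rho=0.6, n_noise=24):
    shapes = [(1, 3), (2, 3), (2, 2)] * 2  # Beta(1,3) for k=1,4; Beta(2,3) for k=2,5; Beta(2,2) for k=3,6
    n_high = 3 * (T + 5)                   # monthly observations, including a lag buffer
    y = np.zeros((N, T))
    X_all = []
    for i in range(N):
        x = np.zeros((K + n_noise, n_high))
        eps = rng.standard_normal((K + n_noise, n_high))   # swap for student-t(5) scenario
        for h in range(1, n_high):
            x[:, h] = rho * x[:, h - 1] + eps[:, h]
        X_all.append(x)
        for t in range(T):
            end = 3 * (t + 4)              # last month available at tau = t - 1
            lags = x[:K, end - k_max:end][:, ::-1]          # most recent lag first
            y[i, t] = sum(beta_weights(k_max, *shapes[k]) @ lags[k] for k in range(K))
    y += rng.standard_normal((N, T))       # u ~ N(0, 1)
    return y, X_all

y_sim, X_sim = simulate_panel()
```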
For τ = t - 1, the Baseline scenario, in the estimation procedure we add 24 noisy covariates which are generated in the same way as the relevant covariates, use 4 low-frequency lags and the error terms u_i,t|τ and ε_h are Gaussian. In the student-t(5) scenario we replace the Gaussian error terms with a student-t(5) distribution while in the large dimensional scenario we add 94 noisy covariates. For each scenario, we simulate N=25 i.i.d. time series of length T=50; next we increase the cross-sectional dimension to N=75 and time series to T=100. Finally, for τ = t - 1 + 1/3 the thought experiment in the simulation design is one where the first high-frequency observations during low frequency t are available. The nowcaster of course does not know which of the covariates are relevant nor does she know the parameters of the prediction rule. We will call this scheme “one-step ahead” nowcasts. §.§ Simulation results Tables <ref> and <ref> cover the average mean squared forecast errors (MSFE) for one-step ahead nowcasts for the three simulation scenarios. We report results for sg-LASSO with MIDAS weights (left block) and elastic net with UMIDAS (right block) using both pooled panel models (Table <ref>) and fixed effects ones (Table <ref>). We report results for the best choice of the γ tuning parameter.[Results for the grid of γ∈{0.0, 0.2, …, 1.0} are reported in the Online Appendix Tables <ref>-<ref>.] Firstly, structured sg-LASSO-MIDAS consistently outperforms unstructured Elnet-U for all DGPs and in both pooled and fixed effects cases. The most significant discrepancy between the two methods is observed in situations with small N and small T, specifically when N = 25 and T = 50. As either N or T increases, this gap gradually diminishes. When comparing the results of pooled and fixed effects, it becomes evident that the difference between the two approaches — structured sg-LASSO-MIDAS versus Elnet UMIDAS — widens further in the case of fixed effects with student-t(5) data. This indicates that our structured approach yields higher quality estimates for the fixed effects and thus more accurate nowcasts. In the case of sg-LASSO-MIDAS, the best performance is achieved for γ∉{0,1} for both pooled panel data and fixed effects cases, while γ=0, i.e. ridge regression, seems to be dominated by estimators that γ∉{0,1} in both pooled and fixed effects cases. For the student-t(5) and large dimensional DGP, we observe a decrease in the performance for all methods. However, the decrease in the performance is larger for the student-t(5) DGP, revealing that heavy-tailed data have — as expected — a stronger impact on the performance of the estimators. For the pooled panel data case, increasing N from 25 to 75 seems to have a larger positive impact on the performance than an increase in the time-series dimension from T=50 to T=100. The difference appears to be larger for student-t(5) and large dimensional DGPs and/or for the elastic net case. Turning to the fixed effects results, the differences seem to be even sharper, in particular for student-t(5) and large dimensional DGPs. When comparing the results across the different model selection methods, i.e., cross-validation and the three information criteria, we find that almost always cross-validation leads to smaller prediction errors in both pooled and fixed effects panel data cases. Notably, the gains appear to be larger for the large N and T values. 
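The unit-blocked cross-validation used above can be sketched as follows; `fit` and `predict` are placeholder wrappers around any of the estimators discussed in the text, and the fold construction keeps each firm's entire time series inside a single fold.

```python
import numpy as np

def unit_folds(firm_ids, n_folds=5, seed=0):
    """Assign cross-sectional units (not pooled rows) to folds."""
    rng = np.random.default_rng(seed)
    units = np.unique(firm_ids)
    assignment = rng.permutation(len(units)) % n_folds
    fold_of_unit = dict(zip(units, assignment))
    return np.array([fold_of_unit[i] for i in firm_ids])

def cv_lambda(y, X, firm_ids, lambdas, fit, predict, n_folds=5):
    """Pick lambda minimizing out-of-fold MSE; `fit`/`predict` wrap any estimator."""
    folds = unit_folds(firm_ids, n_folds)
    scores = []
    for lam in lambdas:
        errs = []
        for f in range(n_folds):
            train, test = folds != f, folds == f
            model = fit(y[train], X[train], lam)
            errs.append(np.mean((y[test] - predict(model, X[test])) ** 2))
        scores.append(np.mean(errs))
    return lambdas[int(np.argmin(scores))]
```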
Comparing BIC, AIC, and AICc information criteria, the results appear to be similar for AIC and AICc across DGPs and different sample sizes, while the BIC performance is slightly worse than AIC and AICc. § NOWCASTING PRICE-EARNINGS RATIOS <cit.>, <cit.> and <cit.> documented that analysts make systematic and predictable errors in their P/E forecasts. We therefore consider nowcasting the P/E ratios using a set of predictors that are sampled at mixed frequencies for a large cross-section of firms. A natural question one may ask is whether we should nowcast the P/E ratio directly or its components. We therefore decompose the log of the P/E ratio for firm i as follows: pe_i,t+1≡log (P_i,t+1/E_i,t+1) = log ((P_i,t+1/P_i,t)/(E_i,t+1/P_i,t)) = r_i,t+1 - log ((E_i,t+1/E^a_i,t+1|t)/(P_i,t/E^a_i,t+1|t)) = r_i,t+1 - e^a_i,t+1|t + log (P_i,t/E^a_i,t+1|t) where r_i,t+1 is the log return from t to t + 1 for firm i, E^a_i,t+1|t the analyst's prediction at time t pertaining to t + 1 earnings, and e^a_i,t+1|t ≡ log (E_i,t+1) - log (E^a_i,t+1|t) is the log earnings forecast error of analysts pertaining to their end of period t prediction for t + 1. Finally, log (P_i,t/E^a_i,t+1|t) is perfectly known at time t. The above defines an additive decomposition of the log P/E ratio into the return for firm i and the analyst prediction error, plus a term known at time t. Therefore, nowcasting the log P/E ratio could also be achieved via nowcasting its two components. The decomposition corresponds to the distinction between analyst assessments of firm i's earnings and market/investor assessments of the firm. There is a considerable literature on using machine learning to predict returns, see e.g. <cit.>, <cit.>, <cit.>, <cit.>, among others. Here we are dealing with a slightly modified setting where we are nowcasting quarterly returns with information during quarter t + 1. Nevertheless, prediction and nowcasting are closely related. The second component, e^a_i,t+1|t, has been explored by <cit.>, who revisit a topic raised by <cit.> and <cit.>, and confirmed in a rich data setting that analysts tend to focus on their firm/industry when making earnings predictions while not fully taking into account the impact of macroeconomic events. Put differently, one can forecast and nowcast analyst prediction errors. It should also parenthetically be noted that equation (<ref>) can be rewritten as a decomposition of returns, namely: r_i,t+1 = pe_i,t+1 + e^a_i,t+1|t - log (P_i,t/E^a_i,t+1|t), which can be viewed as an alternative decomposition of returns compared to <cit.>. They propose forecasting separately the three components of stock market returns: (a) the dividend price ratio, (b) earnings growth, and (c) price-to-earnings ratio growth. <cit.> argue that predicting the separate components yields better return predictions compared to the usual models producing direct forecasts of returns. They estimate the expected earnings growth using a 20-year moving average of the growth in earnings per share. The expected dividend price ratio is estimated by the current dividend price ratio. This implicitly assumes that the dividend price ratio follows a random walk. While our application is different in many regards, the arguments being considered are similar.
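A quick numerical check of the additive decomposition above, using made-up prices, realized earnings, and an analyst forecast for a single firm:

```python
import numpy as np

# toy numbers for one firm: prices, realized earnings, and an analyst forecast
P_t, P_t1 = 100.0, 106.0      # prices at the end of quarters t and t+1
E_t1 = 5.2                    # realized earnings for quarter t+1
E_a = 5.0                     # analyst forecast of E_{t+1} made at time t

pe = np.log(P_t1 / E_t1)               # target: log P/E at t+1
r = np.log(P_t1 / P_t)                 # log return over the quarter
e_a = np.log(E_t1) - np.log(E_a)       # analyst log forecast error
known = np.log(P_t / E_a)              # component known at time t

assert np.isclose(pe, r - e_a + known)  # the additive decomposition holds exactly
```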
It is worth reminding ourselves that if the nowcast pe_i,t+1 is constructed from individual component nowcasts, then MSE(p̂e_i,t+1) = MSE(r̂_i,t+1) + MSE(ê^a_i,t+1|t) - 2 𝔼[(r_i,t+1 - r̂_i,t+1)(e^a_i,t+1|t - ê^a_i,t+1|t)]. Hence, depending on the co-movements between returns for firm i, r_i,t+1, and analyst earnings prediction errors e^a_i,t+1|t, we are better off directly predicting pe_i,t+1 or predicting its components. If the latter are positively correlated, then direct forecasting is preferred. Given the aforementioned decomposition, we are interested in the following LHS variables: pe_i,t+1, r_i,t+1 and e^a_i,t+1|t. First, we estimate the individual sg-LASSO MIDAS regressions for each firm i=1,…,N, namely: 𝐲_i = ια_i + 𝐱_iβ_i + 𝐮_i, where the firm-specific predictions are computed as ŷ_i,t+1 = α̂_i + x_i,t+1^⊤β̂_i. As noted in Section <ref>, 𝐱_i contains lags of the low-frequency target variable and high-frequency covariates to which we apply Legendre polynomials of degree L=3. Next, we estimate the following pooled and fixed effects sg-LASSO MIDAS panel data models 𝐲 = αι + 𝐗β + 𝐮 Pooled 𝐲 = Bα + 𝐗β + 𝐮 Fixed Effects and compute predictions as ŷ_i,t+1 = α̂+ x_i,t+1^⊤β̂ Pooled ŷ_i,t+1 = α̂_i + x_i,t+1^⊤β̂ Fixed Effects. Once we compute the forecast for the log of the P/E ratio (pe_i,t+1), log returns (r_i,t+1) and the log earnings forecast error (e^a_i,t+1|t), we compute the final prediction accuracy metrics by either taking directly the log P/E nowcast or the sum of its components, i.e., Ŝ = r̂_i,t+1 - ê^a_i,t+1|t + log (P_i,t/E^a_i,t+1|t). We benchmark firm-specific and panel data regression-based nowcasts against two simple alternatives. First, we compute forecasts for the RW model as ŷ_i,t+1|t = y_i,t. Second, we consider predictions of P/E implied by analysts' earnings nowcasts using the information up to time t+1, i.e. ŷ_i,t+1|t = y̅_i,t+1|t^a, where the predicted/nowcasted log of the P/E ratio is based on consensus earnings forecasts pertaining to the end of the t+1 quarter using the stock price at the end of quarter t. To measure the forecasting performance, we compute the mean squared forecast errors (MSE) for each method. Let 𝐲̅_i = (y_i,T_is+1,…, y_i,T_os)^⊤ represent the out-of-sample realized P/E ratio values, where T_is and T_os denote the last in-sample observation for the first prediction and the last out-of-sample observation respectively, and let ŷ_i = (ŷ_i,T_is+1,…, ŷ_i,T_os) collect the out-of-sample forecasts. Then, the mean squared forecast errors are computed as MSE = 1/N∑_i=1^N 1/(T-T_is+1) (𝐲̅_i-ŷ_i)^⊤ (𝐲̅_i-ŷ_i). We look at 210 US firms and use 24 predictors, including traditional macro and financial series as well as non-traditional series from textual analysis of financial news. We apply (a) single regression individual firm high-dimensional regressions, (b) pooled and (c) individual fixed effects sg-LASSO MIDAS panel data models and report results for several choices of the tuning parameters. We compare these three types of models with several benchmarks, which include a random walk (RW) model and analysts' consensus forecasts. The remainder of the section is structured as follows. We start with a short review of the data followed by a summary of the empirical results. §.§ Data description The full sample consists of observations between the 1^st of January, 2000 and the 30^th of June, 2017. Due to the lagged dependent variables in the models, our effective sample starts in the third fiscal quarter of 2000.
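The MSE aggregation above, which averages each firm's out-of-sample squared errors and then averages across firms, amounts to a few lines; this is a sketch assuming a balanced out-of-sample window.

```python
import numpy as np

def panel_mse(actual, predicted):
    """Average each firm's out-of-sample squared errors, then average across firms.

    `actual` and `predicted` are N x H arrays holding, for every firm, the H
    out-of-sample realizations and the corresponding nowcasts.
    """
    per_firm = np.mean((actual - predicted) ** 2, axis=1)
    return per_firm.mean()

# relative accuracy is then reported as a ratio to a benchmark, e.g.
# panel_mse(y_out, yhat_model) / panel_mse(y_out, yhat_consensus)
```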
We use the first 25 observations for the initial sample, and use the remaining 42 observations for evaluating the out-of-sample forecasts, which we obtain by using an expanding window forecasting scheme. We collect data from CRSP and I/B/E/S to compute the quarterly P/E ratios and firm-specific financial covariates; RavenPack is used to compute daily firm-level textual-analysis-based data; real-time monthly macroeconomic series are from the FRED-MD dataset, see <cit.> for more details; FRED is used to compute daily financial markets data and, lastly, monthly news attention series extracted from the Wall Street Journal articles are retrieved from <cit.>.[The dataset is publicly available at http://www.structureofnews.com/http://www.structureofnews.com/.] Online Appendix Section <ref> provides a detailed description of the data sources.[In particular, firm-level variables, including P/E ratios, are described in Online Appendix Table <ref>, and the other predictor variables in Online Appendix Table <ref>. The list of all firms we consider in our analysis appears in Online Appendix Table <ref>.] Our target variable is the P/E ratio for each firm. To compute it, we use CRSP stock price data and I/B/E/S earnings data. Earnings data are subject to release delays of 1 to 2 months depending on the firm and quarter. Therefore, to reflect the real-time information flow, we compute the target variable using stock prices that are available in real-time. We also take into account that different firms have different fiscal quarters, which also affects the real-time information flow. For example, suppose for a particular firm the fiscal quarters are at the end of the third month in a quarter, i.e. end of March, June, September, and December. The consensus forecast of the P/E ratio is computed using the same end-of-quarter price data which is divided by the earnings consensus forecast value. The consensus is computed by taking all individual prediction values up to the end of the quarter and aggregating those values by taking either the mean or the median. To compute the target variable, we adjust for publication lags and use prices of the publication date instead of the end of fiscal quarter prices. More precisely, suppose we predict the P/E ratio for the first quarter. As noted earlier, earnings are typically published with 1 to 2 months delay; say for a particular firm the data is published on the 25th of April. In this case, we record the stock price for the firm on 25th of April, and divide it by the earnings announced on that date. §.§ Models and main results To simplify the exposition, we denote y as one of the three target variables we consider. The main findings from our analysis are presented in Table <ref>. Column p̂e_i,t+1 reports results for directly nowcasting the log P/E ratio, column Ŝ reports the results of nowcasting and summing up the components, column r_i,t+1 reports results for the log return component and column ê^a_i,t+1|t reports results for the log earnings forecast error of analysts component. Row RW reports results for the random walk, while row Consensus for the median consensus nowcast. Panels Individual, Pooled and Fixed effects report results for different panel data models relative to the consensus MSE (columns p̂e_i,t+1 and Ŝ) and for the components (columns r_i,t+1 and ê^a_i,t+1|t) we report ratios relative to the RW MSE since there are obviously no concensus series notably for the analyst forecast errors. 
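Before turning to the results, the real-time target and consensus construction described above can be sketched as follows; the column names (firm, quarter, publication_date, value, date, close, forecast_date) are hypothetical and only illustrate the matching logic, not the actual database schema.

```python
import numpy as np
import pandas as pd

def real_time_log_pe(prices, earnings):
    """Log P/E target using the price on the earnings publication date.

    `earnings` has columns firm, quarter, publication_date, value;
    `prices` has columns firm, date, close.  Each release is matched with the
    closing price observed on its publication date.
    """
    merged = earnings.merge(
        prices, left_on=["firm", "publication_date"], right_on=["firm", "date"]
    )
    return np.log(merged["close"] / merged["value"])

def consensus_forecast(analyst_forecasts, quarter_end, how="median"):
    """Aggregate all individual analyst forecasts made up to the end of the quarter."""
    upto = analyst_forecasts[analyst_forecasts["forecast_date"] <= quarter_end]
    grouped = upto.groupby(["firm", "quarter"])["value"]
    return grouped.median() if how == "median" else grouped.mean()
```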
Nowcasting Performance In light of the simulation evidence, we report the empirical results using cross-validation in Table <ref> and provide the full set of results in Online Appendix Table <ref>. The entries in the top panel of Table <ref> reveal that predictions based on analyst consensus exhibit significantly higher mean squared forecast errors (MSEs) compared to model-based predictions since all the ratios with respect to the concensus are less than one (see first two columns). These model-based predictions involve either direct log P/E ratio nowcasts (first column) or their individual components (second column). Since the MSE for RW and concensus are quite similar, this also implies that RW predictions are outperformed by the model-based ones. The substantial improvement in the accuracy of model-based predictions compared to analyst-based predictions underscores the value of employing machine learning techniques for nowcasting log P/E ratios. Across various machine learning methods, including single-firm and panel data regressions, we consistently observe enhanced performance. When comparing the first and second columns, which correspond to direct log P/E ratio nowcasts versus those based on its components, we observe a substantial enhancement in prediction accuracy when using the individual components. This improvement is consistently evident across individual, pooled, and fixed effects regression models. To shed light on these findings, we computed the pooled correlation between returns and earnings for the entire sample, i.e. Corr(r_i,t+1, e^a_i,t+1|t) = -0.206. The correlation indicates a (weak) negative relationship between returns and earnings. Consequently, the prediction errors of each component tend to offset each other, resulting in more accurate aggregated nowcasts (recall equation (<ref>)). The last two columns of Table <ref> present the prediction results for these components. We observe that analyst earnings prediction errors appear to be more predictable than those of log returns. We also report <cit.> test statistic p-values comparing each model against the RW and consensus benchmarks, pooling all the nowcasting errors across firms. Using one-sided test critical values we observe that our models outperform both the RW and consensus benchmarks, particularly when we use the component approach. While we cannot compare the p̂e_i,t+1 component with the consensus, judging by the RW benchmark it is clear that the second component is the most important in terms of nowcasting gains. When we use individual MIDAS regressions the evidence is less compelling, underscoring the importance of using panel data models.[We also experimented with the forecast combination of MIDAS regressions used by <cit.> and found them to be inferior to the individual MIDAS ML regressions as well as the panel data models. We therefore refrain from reporting the details here. ] Sparsity Patterns Figure <ref> illustrates the sparsity patterns of selected covariates for the most effective methods in predicting either log P/E ratios (Panel a) or their components (Panels b and c). It is worth noting that the sparsity patterns differ significantly across the three panels. For instance, firm volatility is often chosen as a relevant covariate across all targets, albeit not consistently throughout the entire out-of-sample period. In the case of log P/E ratios, news series related to earnings are frequently selected, along with firm and market volatility series. 
Conversely, for log returns, a denser pattern of covariate selection is observed, distinct from the other two cases. Interestingly, none of the news-based firm series are chosen for this target. Regarding log analyst earnings forecast errors, macroeconomic series such as the unemployment rate, short-term rates, and TED rate are frequently selected. Moreover, unlike log P/E ratios and returns, news-based firm series occasionally appear in the selected covariates for this target. The fact that macroeconomic series are drivers for nowcasting the e^a_i,t+1|t component is a confirmation of the findings reported in <cit.>, <cit.> and <cit.>. Figure <ref> depicts the histogram of mean squared errors (MSEs) across firms. Notably, a substantial proportion of firms (approximately 60%) exhibit low MSE values, indicating a high level of prediction accuracy. However, there are a few firms for which the MSEs are relatively larger, suggesting lower prediction performance for these specific cases. The largest MSE is for Crown castle international corporation (CCI) which appears as a strong outlier. Removing the single outlier firm has a dramatic impact on the nowcasting performance evaluation as shown in the lower panel of Table <ref>. We now have very strong evidence that the panel regression models dominate analyst predictions. Again the component nowcasts are the best, but even the individual regression models do significantly better when the component specification is used. Daily Updates of Nowcasts Our framework allows us to go beyond providing quarterly nowcasts and generate daily updates of earnings series. Leveraging the daily influx of information throughout the quarter, we continuously re-estimate our models and produce nowcast updates as soon as new data becomes available. In Figure <ref>, we present the distribution of Mean Squared Errors (MSEs) across firms for five distinct nowcast horizons: 20-day, 15-day, 10-day, and 5-day ahead, as well as the end of the quarter. We report the best model based on Table <ref>. Notably, as the horizons become shorter, both the median and upper quartile of MSEs decrease. Therefore, updating nowcasts with daily information appears to significantly enhance the prediction performance of log earnings ratios. The largest errors persist for the same firm, CCI. Grouped Fixed Effects based on Fama-French Industry Classification The sg-LASSO estimator we employ in our study is well-suited for incorporating grouped fixed effects. This approach involves grouping firm-specific intercepts based on either statistical procedures or economic reasoning, as outlined in <cit.>. In our analysis, we utilize the Fama French industry classification to form 10 distinct groups for grouping fixed effects. Rather than assuming a common fixed effect for all firms within a group, we apply a group penalty to the fixed effects of firms belonging to the same industry. This allows us to capture industry-specific heterogeneity while avoiding overfitting. We present the findings in Table <ref>, which highlight several key observations. Similar to previous analyses, our results suggest that predicting individual components of the log price-earnings ratio leads to more accurate aggregate nowcasts compared to a direct nowcast approach. Furthermore, we observe that the use of group fixed effects improves the accuracy of our nowcasts when forecasting individual components. This can be seen in column 2 of both Tables <ref> and <ref>. 
Comparatively, when considering the best tuning parameter choice, grouped fixed effects outperform other panel models, including the pooled panel model. Therefore, our findings suggest that grouped fixed effects strike a better balance between capturing heterogeneity and pooled parameters, resulting in more accurate nowcast predictions. These results support the notion that incorporating group fixed effects enhances the overall performance of our forecasting model. In Figure <ref>, we present the distribution of (MSEs) across firms for five industries, based on the best model specification from Table <ref>. The industries we focus on are the ones with the highest number of firms in our sample. The results reveal variations in performance among different industries. Specifically, the firms categorized as Consumer Durables exhibit the lowest accuracy in terms of the median MSE, although the quartiles are comparatively lower compared to the other industries. On the other hand, the nowcasts for firms in the Consumer Nondurables and Others categories demonstrate the highest accuracy at the median. However, it is important to note that the largest errors occur within the firms classified as Others. Nowcasting with Missing Data — Parameter Imputation Method Next we address the challenge of missing earnings data, which can complicate the analysis. We examine the performance of parameter imputation methods in computing nowcasts, see, e.g, <cit.>, even when earnings and/or earnings forecasts are missing for certain observations in the sample. We identify a subset of 117 firms for which at least one earnings observation is available in our out-of-sample period, and for which we have matched daily news data. To handle missing data, we match these firms with missing observations to firms in our main sample using the Fama French industry classification. We then utilize the parameter estimates obtained from the best group fixed effects model, as shown in Table <ref>, to compute the nowcasts of log earnings ratios, either directly or based on its components. The results of this analysis appear in Table <ref>. Firstly, the results obtained through parameter imputation support the conclusion that nowcasting the components of the log earnings ratio yields higher quality predictions. This indicates that incorporating the individual components of the ratio improves the accuracy of the nowcasts. Secondly, the panel models with the parameter imputation method outperform the analyst consensus nowcasts in terms of prediction accuracy. This suggests that employing machine learning panel data models along with parameter imputation could be a straightforward yet effective approach in situations where earnings data is not available. Overall, these findings highlight the potential benefits of leveraging machine learning techniques and imputation methods for improving nowcasting accuracy, particularly in cases where earnings data may be missing. § CONCLUSIONS This paper uses a new class of high-dimensional panel data nowcasting models with dictionaries and sg-LASSO regularization which is an attractive choice for the predictive panel data regressions, where the low- and/or the high-frequency lags define a clear group structure. Our empirical results showcase the advantages of using regularized panel data regressions for nowcasting corporate earnings either directly or using a decomposition which separates stock market return predictions and analyst assessments of a firm's performance. 
While nowcasting earnings is a leading example of applying panel data MIDAS machine learning regressions, one can think of many other applications of interest in finance. Beyond earnings, analysts are also interested in sales, dividends, etc. Our analysis can also be useful for other areas of interest, such as regional and international panel data settings. econometrica ONLINE APPENDIX § ADDITIONAL SIMULATION RESULTS § DATA DESCRIPTION §.§ Firm-level data The full list of firm-level data is provided in Table <ref>. We also add two daily firm-specific stock market predictor variables: stock returns and a realized variance measure, which is defined as the rolling sample variance over the previous 60 days (i.e. 60-day historical volatility). §.§.§ Firm sample selection We select a sample of firms based on data availability. First, we remove all firms from I/B/E/S which have missing values in earnings time series. Next, we retain firms that we are able to match with CRSP dataset. Finally, we keep firms that we can match with the RavenPack dataset. §.§.§ Firm-specific text data We create a link table of RavenPack ID and PERMNO identifiers which enables us to merge I/B/E/S and CRSP data with firm-specific textual analysis generated data from RavenPack. The latter is a rich dataset that contains intra-daily news information about firms. There are several editions of the dataset; in our analysis, we use the Dow Jones (DJ) and Press Release (PR) editions. The former contains relevant information from Dow Jones Newswires, regional editions of the Wall Street Journal, Barron's and MarketWatch. The PR edition contains news data, obtained from various press releases and regulatory disclosures, on a daily basis from a variety of newswires and press release distribution networks, including exclusive content from PRNewswire, Canadian News Wire, Regulatory News Service, and others. The DJ edition sample starts at 1^st of January, 2000, and PR edition data starts at 17^th of January, 2004. We construct our news-based firm-level covariates by filtering only highly relevant news stories. More precisely, for each firm and each day, we filter out news that has the Relevance Score (REL) larger or equal to 75, as is suggested by the RavenPack News Analytics guide and used by practitioners, see for example <cit.>. REL is a score between 0 and 100 which indicates how strongly a news story is linked with a particular firm. A score of zero means that the entity is vaguely mentioned in the news story, while 100 means the opposite. A score of 75 is regarded as a significantly relevant news story. After applying the REL filter, we apply a novelty of the news filter by using the Event Novelty Score (ENS); we keep data entries that have a score of 100. Like REL, ENS is a score between 0 and 100. It indicates the novelty of a news story within a 24-hour time window. A score of 100 means that a news story was not already covered by earlier announced news, while subsequently published news story score on a related event is discounted, and therefore its scores are less than 100. Therefore, with this filter, we consider only novel news stories. We focus on five sentiment indices that are available in both DJ and PR editions. They are: Event Sentiment Score (ESS), for a given firm, represents the strength of the news measured using surveys of financial expert ratings for firm-specific events. The score value ranges between 0 and 100 - values above (below) 50 classify the news as being positive (negative), 50 being neutral. 
Aggregate Event Sentiment (AES) represents the ratio of positive events reported on a firm compared to the total count of events measured over a rolling 91-day window in a particular news edition (DJ or PR). An event with ESS > 50 is counted as a positive entry while ESS < 50 as negative. Neutral news (ESS = 50) and news that does not receive an ESS score does not enter into the AES computation. As ESS, the score values are between 0 and 100. Aggregate Event Volume (AEV) represents the count of events for a firm over the last 91 days within a certain edition. As in AES case, news that receives a non-neutral ESS score is counted and therefore accumulates positive and negative news. Composite Sentiment Score (CSS) represents the news sentiment of a given news story by combining various sentiment analysis techniques. The direction of the score is determined by looking at emotionally charged words and phrases and by matching stories typically rated by experts as having short-term positive or negative share price impact. The strength of the scores is determined by intra-day price reactions modeled empirically using tick data from approximately 100 large-cap stocks. As for ESS and AES, the score takes values between 0 and 100, 50 being the neutral. News Impact Projections (NIP) represents the degree of impact a news flash has on the market over the following two-hour period. The algorithm produces scores to accurately predict a relative volatility - defined as scaled volatility by the average of volatilities of large-cap firms used in the test set - of each stock price measured within two hours following the news. Tick data is used to train the algorithm and produce scores, which take values between 0 and 100, 50 representing zero impact news. For each firm and each day with firm-specific news, we compute the average value of the specific sentiment score. In this way, we aggregate across editions and groups, where the later is defined as a collection of related news. We then map the indices that take values between 0 and 100 onto [-1,1]. Specifically, let x_i ∈{ESS, AES, CSS, NIP} be the average score value for a particular day and firm. We map x_i ↦x̅_i∈[-1,1] by computing x̅_i = (x_i - 50)/50. r |ll cc Ticker Firm name PERMNO RavenPack ID 1 MMM 3M 22592 03B8CF 2 ABT Abbott labs 20482 520632 3 AUD Automatic data processing 44644 66ECFD 4 ADTN Adtran 80791 9E98F2 5 AEIS Advanced energy industries 82547 1D943E 6 AMG Affiliated managers group 85593 30E01D 7 AKST A K steel holding 80303 41588B 8 ATI Allegheny technologies 43123 D1173F 9 AB AllianceBernstein holding l.p. 75278 CB138D 10 ALL Allstate corp. 79323 E1C16B 11 AMZN Amazon.com 84788 0157B1 12 AMD Advanced micro devices 61241 69345C 13 DOX Amdocs ltd. 86144 45D153 14 AMKR Amkor technology 86047 5C8D61 15 APH Amphenol corp. 84769 BB07E4 16 AAPL Apple 14593 D8442A 17 ADM Archer daniels midland 10516 2B7A40 18 ARNC Arconic 24643 EC821B 19 ATTA AT&T 66093 251988 20 AVY Avery dennison corp. 44601 662682 21 BHI Baker hughes 75034 940C3D 22 BAC Bank of america corp. 59408 990AD0 23 BAX Baxter international inc. 27887 1FAF22 24 BBT BB&T corp. 71563 1A3E1B 25 BDX Becton dickinson & co. 39642 873DB9 26 BBBY Bed bath & beyond inc. 77659 9B71A7 27 BHE Benchmark electronics inc. 76224 6CF43C 28 BA Boeing co. 19561 55438C 29 BK Bank of new york mellon corp. 49656 EF5BED 30 BWA BorgWarner inc. 79545 1791E7 31 BP BP plc 29890 2D469F 32 EAT Brinker international inc. 23297 732449 33 BMY Bristol-Myers squibb co. 19393 94637C 34 BRKS Brooks automation inc. 
81241 FC01C0 35 CA CA technologies inc. 25778 76DE40 36 COG Cabot oil & gas corp. 76082 388E00 37 CDN Cadence design systems inc. 11403 CC6FF5 38 COF Capital one financial corp. 81055 055018 39 CRR Carbo ceramics inc. 83366 8B66CE 40 CSL Carlisle cos. 27334 9548BB 41 CCL Carnival corporation & plc 75154 067779 42 CERN Cerner corp. 10909 9743E5 43 CHRW C.H. robinson worldwide inc. 85459 C659EB 44 SCHW Charles schwab corp. 75186 D33D8C 45 CHKP Check point software technologies ltd. 83639 531EF1 46 CHV Chevron corp. 14541 D54E62 47 CI CIGNA corp. 64186 86A1B9 48 CTAS Cintas corp. 23660 BFAEB4 49 CLX Clorox co. 46578 719477 50 KO Coca-Cola co. 11308 EEA6B3 51 CGNX Cognex corp. 75654 709AED 52 COLM Columbia sportswear co. 85863 5D0337 53 CMA Comerica inc. 25081 8CF6DD 54 CRK Comstock resources inc. 11644 4D72C8 55 CAG ConAgra foods inc. 56274 FA40E2 56 STZ Constellation brands inc. 69796 1D1B07 57 CVG Convergys corp. 86305 914819 58 COST Costco wholesale corp. 87055 B8EF97 59 CCI Crown castle international corp. 86339 275300 60 DHR Danaher corp. 49680 E124EB 61 DRI Darden restaurants inc. 81655 9BBFA5 62 DVA DaVita inc. 82307 EFD406 63 DO Diamond offshore drilling inc. 82298 331BD2 64 D Dominion resources inc. 64936 977A1E 65 DOV Dover corp. 25953 636639 66 DOW Dow chemical co. 20626 523A06 67 DHI D.R. horton inc. 77661 06EF42 68 EMN Eastman chemical co. 80080 D4070C 69 EBAY eBay inc. 86356 972356 70 EOG EOG resources inc. 75825 A43906 71 EL Estee lauder cos. inc. 82642 14ED2B 72 ETH Ethan allen interiors inc. 79037 65CF8E 73 ETFC E*TRADE financial corp. 83862 28DEFA 74 XOM Exxon mobil corp. 11850 E70531 75 FII Federated investors inc. 86102 73C9E2 76 FDX FedEx corp. 60628 6844D2 77 FITB Fifth third bancorp 34746 8377DB 78 FISV Fiserv inc. 10696 190B91 79 FLEX Flex ltd. 80329 B4E00D 80 F Ford motor co. 25785 A6213D 81 FWRD Forward air corp. 79841 10943B 82 BEN Franklin resources inc. 37584 5B6C11 83 GE General electric co. 12060 1921DD 84 GIS General mills inc. 17144 9CA619 85 GNTX Gentex corp. 38659 CC339B 86 HAL Halliburton Co. 23819 2B49F4 87 HLIT Harmonic inc. 81621 DD9E41 88 HIG Hartford financial services group inc. 82775 766047 89 HAS Hasbro inc. 52978 AA98ED 90 HLX Helix energy solutions group inc. 85168 6DD6BA 91 HP Helmerich & payne inc. 32707 1DE526 92 HSY Hershey co. 16600 9F03CF 93 HES Hess corp. 28484 D0909F 94 HON Honeywell international inc. 10145 FF6644 95 JBHT J.B. Hunt transport services Inc. 42877 72DF04 96 HBAN Huntington bancshares inc. 42906 C9E107 97 IBM IBM corp. 12490 8D4486 98 IEX IDEX corp. 75591 E8B21D 99 IR Ingersoll-Rand plc 12431 5A6336 100 IDTI Integrated device technology inc. 44506 8A957F 101 INTC Intel corp. 59328 17EDA5 102 IP International paper co. 21573 8E0E32 103 IIN ITT corp. 12570 726EEA 104 JAKK Jakks pacific inc. 83520 5363A2 105 JNJ Johnson & johnson 22111 A6828A 106 JPM JPMorgan chase & co. 47896 619882 107 K Kellogg co. 26825 9AF3DC 108 KMB Kimberly-Clark corp. 17750 3DE4D1 109 KNGT Knight transportation inc. 80987 ED9576 110 LSTR Landstar system inc. 78981 FD4E8D 111 LSCC Lattice semiconductor corp. 75854 8303CD 112 LLY Eli lilly & co. 50876 F30508 113 LFUS Littelfuse inc. 77918 D06755 114 LNC Lincoln national corp. 49015 5C7601 115 LMT Lockheed martin corp. 21178 96F126 116 MTB M&T bank corp. 35554 D1AE3B 117 MANH Manhattan associates inc. 85992 031025 118 MAN ManpowerGroup inc. 75285 C0200F 119 MAR Marriott international inc. 85913 385DD4 120 MMC Marsh & mcLennan cos. 45751 9B5968 121 MCD McDonald's corp. 43449 954E30 122 MCK McKesson corp. 
81061 4A5C8D 123 MDU MDU resources group inc. 23835 135B09 124 MRK Merck & co. inc. 22752 1EBF8D 125 MTOR Meritor inc 85349 00326E 126 MTG MGIC investment corp. 76804 E28F22 127 MGM MGM resorts international 11891 8E8E6E 128 MCHP Microchip technology inc. 78987 CDFCC9 129 MU Micron technology inc. 53613 49BBBC 130 MSFT Microsoft corp. 10107 228D42 131 MOT Motorola solutions inc. 22779 E49AA3 132 MSM MSC industrial direct co. 82777 74E288 133 MUR Murphy oil corp. 28345 949625 134 NBR Nabors industries ltd. 29102 E4E3B7 135 NOI National oilwell varco inc. 84032 5D02B7 136 NYT New york times co. 47466 875F41 137 NFX Newfield exploration co. 79915 9C1A1F 138 NEM Newmont mining corp. 21207 911AB8 139 NKE NIKE inc. 57665 D64C6D 140 NBL Noble energy inc. 61815 704DAE 141 NOK Nokia corp. 87128 C12ED9 142 NOC Northrop grumman corp. 24766 FC1B7B 143 NTRS Northern trust corp. 58246 3CCC90 144 NUE NuCor corp. 34817 986AF6 145 ODEP Office depot inc. 75573 B66928 146 ONB Old national bancorp 12068 D8760C 147 OMC Omnicom group inc. 30681 C8257F 148 OTEX Open text corp. 82833 34E891 149 ORCL Oracle corp. 10104 D6489C 150 ORBK Orbotech ltd. 78527 290820 151 PCAR Paccar inc. 60506 ACF77B 152 PRXL Parexel international corp. 82607 EF8072 153 PH Parker hannifin corp. 41355 6B5379 154 PTEN Patterson-uti energy inc. 79857 57356F 155 PBCT People's united financial inc. 12073 449A26 156 PEP PepsiCo inc. 13856 013528 157 PFE Pfizer inc. 21936 267718 158 PIR Pier 1 imports inc. 51692 170A6F 159 PXD Pioneer natural resources co. 75241 2920D5 160 PNCF PNC financial services group inc. 60442 61B81B 161 POT Potash corporation of saskatchewan inc. 75844 FFBF74 162 PPG PPG industries inc. 22509 39FB23 163 PX Praxair inc. 77768 285175 164 PG Procter & gamble co. 18163 2E61CC 165 PTC PTC inc. 75912 D437C3 166 PHM PulteGroup inc. 54148 7D5FD6 167 QCOM Qualcomm inc. 77178 CFF15D 168 DGX Quest diagnostics inc. 84373 5F9CE3 169 RL Ralph lauren corp. 85072 D69D42 170 RTN Raytheon co. 24942 1981BF 171 RF Regions financial corp. 35044 73C521 172 RCII Rent-a-center inc. 81222 C4FBDC 173 RMD ResMed inc. 81736 434F38 174 RHI Robert half international inc. 52230 A4D173 175 RDC Rowan cos. inc. 45495 3FFA00 176 RCL Royal caribbean cruises ltd. 79145 751A74 177 RPM RPM international inc. 65307 F5D059 178 RRD RR R.R. donnelley & sons co. 38682 0BE0AE 179 SLB Schlumberger ltd. n.v. 14277 164D72 180 SCTT Scotts miracle-gro co. 77300 F3FCC3 181 SM SM st. mary land & exploration co. 78170 6A3C35 182 SONC Sonic corp. 76568 80D368 183 SO Southern co. 18411 147C38 184 LUV Southwest airlines co. 58683 E866D2 185 SWK Stanley black & decker inc. 43350 CE1002 186 STT State street corp. 72726 5BC2F4 187 TGNA TEGNA inc. 47941 D6EAA3 188 TXN Texas instruments inc. 15579 39BFF6 189 TMK Torchmark corp. 62308 E90C84 190 TRV The travelers companies inc. 59459 E206B0 191 TBI TrueBlue inc. 83671 9D5D35 192 TUP Tupperware brands corp. 83462 2B0AF4 193 TYC Tyco international plc 45356 99333F 194 TSN Tyson foods inc. 77730 AD1ACF 195 X United states Steel corp. 76644 4E2D94 196 UNH UnitedHealth group inc. 92655 205AD5 197 VIAV Viavi solutions inc. 79879 E592F0 198 GWW W.W. grainger inc. 52695 6EB9DA 199 WDR Waddell & reed financial inc. 85931 2F24A5 200 WBA Walgreens boots alliance inc. 19502 FACF19 201 DIS Walt disney co. 26403 A18D3C 202 WAT Waters corp. 82651 1F9D90 203 WBS Webster financial corp. 10932 B5766D 204 WFC Wells fargo & co. 38703 E8846E 205 WERN Werner enterprises inc. 
10397 D78BF1 206 WABC Westamerica bancorp 82107 622037 207 WDC Western digital corp. 66384 CE96E7 208 WHR Whirlpool corp. 25419 BDD12C 209 WFM Whole foods market inc. 77281 319E7D 210 XLNX Xilinx inc. 76201 373E85 Final list of firms – The table contains the information about the full list of firms: tickers, firm names, CRSP PERMNO code and RavenPack ID. Tickers and firm names are taken as of June, 2017. PERMNO and RavenPack ID columns are used to match firms and firm news data. § ADDITIONAL EMPIRICAL RESULTS
http://arxiv.org/abs/2307.01102v2
20230703152723
Implications for the non-Gaussianity of curvature perturbation from pulsar timing arrays
[ "Lang Liu", "Zu-Cheng Chen", "Qing-Guo Huang" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph" ]
[email protected] Department of Astronomy, Beijing Normal University, Beijing 100875, China; Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China. Corresponding author: [email protected] Department of Astronomy, Beijing Normal University, Beijing 100875, China; Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China; Department of Physics and Synergistic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha, Hunan 410081, China. Corresponding author: [email protected] Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China; School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China; School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China. The recently released data by pulsar timing array (PTA) collaborations present strong evidence for a stochastic signal consistent with a gravitational-wave background. Assuming this signal originates from scalar-induced gravitational waves, we jointly use the PTA data from the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to probe the small-scale non-Gaussianity. We put the first-ever constraint on the non-Gaussianity parameter, finding |F_NL|≲ 13.9 for a lognormal power spectrum of the curvature perturbations. Furthermore, we obtain -13.9 ≲ F_NL≲ -0.1 to prevent excessive production of primordial black holes. Moreover, the multi-band observations with the space-borne gravitational-wave detectors, such as LISA/Taiji/TianQin, will provide a complementary investigation of primordial non-Gaussianity. Our findings pave the way to constrain inflation models with PTA data. Implications for the non-Gaussianity of curvature perturbation from pulsar timing arrays Qing-Guo Huang ======================================================================================== Introduction. Various inflation models (see e.g. <cit.>) predict the existence of a sizable primordial non-Gaussianity, hence the non-Gaussianity plays an important role in exploring the early Universe <cit.>. How to probe the non-Gaussianity of the Universe is one of the key questions in modern physics. Over several decades, significant advancements have been made in precisely measuring a nearly scale-invariant power spectrum characterizing primordial density fluctuations. These measurements have been accomplished through the utilization of observational data from the cosmic microwave background (CMB) <cit.> and large-scale structure <cit.> surveys, offering valuable insights into the fundamental properties of the Universe. Although significant efforts have been dedicated to precisely characterizing power spectra of primordial perturbations on large scales, searching for new and independent probes becomes crucial when examining phenomena at the small scale. Gravitational waves (GWs) offer a fascinating avenue for acquiring insights into the history and composition of the Universe, serving as another probe of small-scale non-Gaussianity.
In fact, space-borne GW detectors, such as LISA <cit.>, Taiji <cit.>, and TianQin <cit.>, can explore the non-Gaussianity through scalar-induced GWs (SIGWs) <cit.> in the mHz frequency band. Pulsar timing arrays (PTA) <cit.>, on the other hand, are sensitive in the nHz frequency band, providing another opportunity to probe the early Universe. Recently, NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and CPTA <cit.> all announced the evidence for a stochastic signal in their latest data sets consistent with the Hellings-Downs <cit.> spatial correlations expected by a stochastic gravitational-wave background (SGWB). Although there can be a lot of sources <cit.> in the PTA window, whether this signal is of astrophysical or cosmological origin is still under investigation <cit.>. A possible explanation for this signal is the SIGW produced by the primordial curvature perturbations at small scales. When the primordial curvature perturbations reach significant magnitudes, they can generate a considerable SGWB through second-order effects resulting from the non-linear coupling of perturbations. Additionally, large curvature perturbations can trigger the formation of primordial black holes (PBHs) (<cit.>). PBHs have attracted a lot of attention in recent years <cit.> (see also reviews <cit.>) as a promising candidate for dark matter and can explain the binary black holes detected by LIGO-Virgo-KAGRA <cit.>. The formation rate of PBHs would be entirely altered if there is any significant non-Gaussianity, as PBHs are produced at the large amplitude tail of the curvature perturbation probability distribution <cit.>. In this letter, assuming that the signal detected by PTAs is from SIGWs, we jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to constrain the small-scale non-Gaussianity when the scalar modes re-enter the horizon. As a demonstration, we employ a lognormal power spectrum of curvature perturbations and constrain the non-Gaussianity parameter as -13.9 ≲ F_NL≲ -0.1. SIGWs and PBHs. We will briefly review the SIGWs that arise as a result of the local-type non-Gaussian curvature perturbations, a significant phenomenon that has been previously discussed in <cit.>. The local-type non-Gaussianities are characterized by the expansion of the curvature perturbation, ℛ(x⃗), in terms of the Gaussian component in real space. Specifically, the expansion up to the quadratic order can be written as <cit.> ℛ(x⃗) = ℛ_G(x⃗) + F_NL[ℛ_G^2(x⃗)- ⟨ℛ_G^2(x⃗) ⟩], where ℛ_G(x⃗) follows Gaussian statistics, and F_NL represents the dimensionless non-Gaussian parameter. It is worth noting that the non-Gaussianity parameter F_NL is related to the commonly used notation f_NL through the relation F_NL≡ 3/5 f_NL. The non-Gaussian contributions are incorporated by defining the effective curvature power spectrum, P^NG_ℛ(k), as <cit.> P^NG_ℛ(k)=P_ℛ(k)+F_NL^2 ∫_0^∞dv∫_|1-v|^1+vdu P_ℛ(uk)P_ℛ(vk)/(2u^2 v^2). In the conformal Newton gauge, the metric perturbations can be expressed as ds^2 = a^2(η){-(1+2ϕ)dη^2+[(1-2ϕ)δ_ij+h_ij]dx^i dx^j}, where η represents the conformal time, ϕ is the Newtonian potential, and h_ij corresponds to the tensor mode of the metric perturbation in the transverse-traceless gauge.
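As a numerical illustration (not the authors' code) of the effective spectrum just defined, the sketch below evaluates the double integral for the lognormal P_ℛ introduced later in the text; the truncation v_max and the example parameter values are our own choices.

```python
import numpy as np
from scipy.integrate import dblquad

def p_lognormal(k, A, delta, k_star):
    """Lognormal curvature power spectrum used later in the text."""
    return A / (np.sqrt(2.0 * np.pi) * delta) * np.exp(
        -np.log(k / k_star) ** 2 / (2.0 * delta ** 2)
    )

def p_ng(k, fnl, A, delta, k_star, v_max=50.0):
    """P_R(k) + F_NL^2 * int_0^inf dv int_{|1-v|}^{1+v} du P(uk) P(vk) / (2 u^2 v^2)."""
    inner, _ = dblquad(
        lambda u, v: p_lognormal(u * k, A, delta, k_star)
        * p_lognormal(v * k, A, delta, k_star) / (2.0 * u ** 2 * v ** 2),
        0.0, v_max,                                  # outer variable v
        lambda v: abs(1.0 - v), lambda v: 1.0 + v,   # inner variable u
    )
    return p_lognormal(k, A, delta, k_star) + fnl ** 2 * inner

# example: non-Gaussian correction at the peak scale for A = 0.01, Delta = 1, F_NL = 5
print(p_ng(1.0, fnl=5.0, A=0.01, delta=1.0, k_star=1.0))
```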
The equation of motion for h_ij can be obtained by considering the perturbed Einstein equation up to the second order, namely h_i j^''+2 ℋ h_i j^'-∇^2 h_i j=-4 𝒯_i j^ℓ m S_ℓ m, where the prime denotes a derivative with respect to the conformal time η, ℋ≡a'/a represents the conformal Hubble parameter, and 𝒯_i j^ℓ m is the transverse and traceless projection operator in Fourier space. The source term S_ij, which is of second order in scalar perturbations, reads S_i j=3 ϕ∂_i ∂_j ϕ-1/ℋ(∂_i ϕ^'∂_j ϕ+∂_i ϕ∂_j ϕ^')-1/ℋ^2∂_i ϕ^'∂_j ϕ^'. The characterization of SGWBs often involves describing their energy density per logarithmic frequency interval relative to the critical density ρ_c(η), Ω_GW(k, η) ≡1/ρ_c(η)dρ_GW(k, η)/dln k=k^3/(48 π^2)(k/ℋ)^2 ⟨|h_k(η)|^2⟩, where the overline represents an average over a few wavelengths. During the radiation-dominated era, GWs are generated by curvature perturbations, and their density parameter at the matter-radiation equality is denoted as Ω_GW(k)=Ω_GW(k,η→∞). Using the relation between curvature perturbations ℛ and scalar perturbations ϕ in the radiation-dominated era, ϕ=(2/3)ℛ, we can calculate Ω_GW(k) as <cit.> Ω_GW(k) =∫_0^∞d v ∫_|1-v|^|1+v|d u 𝒯 P^NG_ℛ(v k) P^NG_ℛ(u k), where the transfer function 𝒯=𝒯(u,v) is given by 𝒯(u,v)= 3/(1024 v^8 u^8)[4 v^2-(v^2-u^2+1)^2]^2(v^2+u^2-3)^2 {[(v^2+u^2-3) ln|(3-(v+u)^2)/(3-(v-u)^2)|-4 v u]^2+π^2(v^2+u^2-3)^2 Θ(v+u-√(3))}. According to Eqs. (<ref>) and (<ref>), Ω_GW(k) can be expanded as Ω_GW(k)= A^2Ω^(0)(k) + A^3 F_NL^2Ω^(2)(k) + A^4 F_NL^4Ω^(4)(k), where Ω^(0)(k), Ω^(2)(k), and Ω^(4)(k) represent the corresponding integral terms, and A ≡∫P_ℛ dln k is the amplitude of P_ℛ. From Eq. (<ref>), we see that positive and negative F_NL will generate identical SIGWs. In other words, positive and negative F_NL are degenerate regarding their impact on SIGWs. Using the relation between the wavenumber and frequency, k=2π f, we obtain the energy density fraction spectrum of SIGWs at the present time, Ω_GW, 0(f)=Ω_r, 0[g_*, r(T)/g_*, r(T_eq)][g_*, s(T_eq)/g_*, s(T)]^4/3Ω_GW(k). It is given by the product of Ω_GW(k), the present energy density fraction of radiation, Ω_r,0, and two factors involving the effective degrees of freedom for entropy density, g_*,s, and radiation, g_*,r. To demonstrate the method, we adopt a commonly used power spectrum for P_ℛ, taking the lognormal form <cit.> P_ℛ(k) = A/(√(2π)Δ) exp(-ln^2(k/k_*)/(2Δ^2)), where A is the amplitude and Δ characterizes the width of the spectrum. We note that a positive value of F_NL will increase the abundance of PBHs for a given power spectrum of curvature perturbations. Conversely, a negative value of F_NL will decrease the abundance of PBHs. This behavior highlights the impact of non-Gaussianity, quantified by F_NL, on the formation and abundance of PBHs. The Gaussian curvature perturbation ℛ_G can be determined by solving Eq. (<ref>) as <cit.> ℛ_G^± (ℛ) = 1/(2 F_NL)( -1 ±√(1+4 F_NLℛ+4 F_NL^2⟨ℛ_G^2⟩)). PBHs are expected to form when the curvature perturbation exceeds a certain threshold value ℛ_c∼ 1 <cit.>. The PBH mass fraction at formation time can be calculated as <cit.> β(M) ≃1/2[ erfc(ℛ_G^+(ℛ_c)/√(2 ⟨ℛ_G^2 ⟩))+erfc(-ℛ_G^-(ℛ_c)/√(2 ⟨ℛ_G^2 ⟩)) ] for F_NL>0, and β(M) ≃1/2[ erf(ℛ_G^+(ℛ_c)/√(2 ⟨ℛ_G^2 ⟩)) - erf(ℛ_G^-(ℛ_c)/√(2 ⟨ℛ_G^2 ⟩)) ] for F_NL<0. One can define the total abundance of PBHs in the dark matter at present as <cit.> f_PBH ≡Ω_PBH/Ω_CDM=2.7 × 10^8 ∫_-∞^∞dln M ×(g_*,r/10.75)^3/4(g_*,s/10.75)^-1(M/M_⊙)^-1/2β(M), where Ω_CDM is the cold dark matter density.
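A small numerical sketch of the threshold statistics above: it evaluates ℛ_G^± and the mass fraction β for a given F_NL and Gaussian variance ⟨ℛ_G^2⟩. The example numbers are arbitrary, and the zero returned when the square root turns imaginary reflects a threshold that cannot be reached for sufficiently negative F_NL.

```python
import numpy as np
from scipy.special import erf, erfc

def beta_pbh(fnl, var_g, r_c=1.0):
    """PBH mass fraction at formation from the threshold R_c, following the
    erfc/erf expressions above; var_g is the Gaussian variance <R_G^2>."""
    disc = 1.0 + 4.0 * fnl * r_c + 4.0 * fnl ** 2 * var_g
    if disc < 0.0:                       # threshold not reachable
        return 0.0
    rg_plus = (-1.0 + np.sqrt(disc)) / (2.0 * fnl)
    rg_minus = (-1.0 - np.sqrt(disc)) / (2.0 * fnl)
    s = np.sqrt(2.0 * var_g)
    if fnl > 0:
        return 0.5 * (erfc(rg_plus / s) + erfc(-rg_minus / s))
    return 0.5 * (erf(rg_plus / s) - erf(rg_minus / s))

# a negative F_NL suppresses the abundance relative to a positive one of equal size
print(beta_pbh(fnl=-2.0, var_g=0.02), beta_pbh(fnl=2.0, var_g=0.02))
```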
Data analyses and results. We jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to estimate the model parameters. The ongoing efforts of these PTAs have lasted for more than a decade. Specifically, the NANOGrav 15-yr data set contains observations of 68 pulsars with a time span of 16.03 years, PPTA DR3 contains observations of 32 pulsars with a time span of up to 18 years, and EPTA DR2 contains observations of 25 pulsars with a time span of 24.7 years. These PTA data sets all present a stochastic signal consistent with the Hellings-Downs spatial correlations expected for an SGWB. If this signal is truly of GW origin, it should share the same properties among these PTAs. Therefore, we combine the observations from these PTAs to estimate model parameters to increase the precision rather than using each individual PTA. In this letter, we use the free spectrum amplitude derived by each PTA with Hellings-Downs correlations. Given the time span T_obs of a PTA, the free spectrum starts with the lowest frequency 1/T_obs. NANOGrav, PPTA, and EPTA use 14, 28, and 24 frequency components in their SGWB searches, respectively. Combining these data together results in 66 frequencies of a free spectrum ranging from 1.28 nHz to 49.1 nHz. A visualization of the data used in the analyses is shown in Fig. <ref>. In this work, we also consider the constraints from the big-bang nucleosynthesis (BBN) and CMB for the integrated energy-density fraction defined by ∫_k_min^∞ d ln k h^2 Ω_GW, 0(k), where h = H_0 / ( 100 km s^-1 Mpc^-1)=0.674 <cit.> is the dimensionless Hubble constant. The upper limits are 1.3 × 10^-6 for BBN <cit.> and 2.9 × 10^-7 for CMB <cit.>. We use the time delay data released by each PTA. The time delay d(f) can be converted to the power spectrum S(f) by d(f)=√(S(f) / T_obs). We then convert S(f) to the characteristic strain, h_c(f), by h_c^2(f)=12 π^2 f^3 S(f). Further, we obtain the free spectrum energy density as Ω̂_GW(f)=2 π^2/(3 H_0^2) f^2 h_c^2(f) = (8π^4/H_0^2) T_obs f^5 d^2(f). For each frequency f_i, with the posteriors of Ω̂_GW(f_i) at hand, we can estimate the corresponding kernel density ℒ_i. Therefore, the total likelihood is ℒ(Λ) = ∏_i=1^66ℒ_i(Ω_GW(f_i, Λ)), where Λ≡{A, Δ, f_*, |F_NL|} is the collection of the model parameters. We use the <cit.> sampler wrapped in the <cit.> package to search over the parameter space. The model parameters and their priors are summarized in Table <ref>. We consider two models: one without non-Gaussianity, ℳ_G, and another with non-Gaussianity, ℳ_NG. The posterior distributions for the parameters are shown in Fig. <ref>, and the median and 90% credible interval values for each parameter are summarized in Table <ref>. We note that the ℳ_G model has been studied by NANOGrav with their 15-yr data set, which is called in their paper. While we obtain consistent results, the combined data from NANOGrav, PPTA, and EPTA can constrain the parameters to higher precision than using the NANOGrav data set alone, as expected. For the ℳ_NG model, the F_NL and A parameters are generally degenerate. The combined data can constrain the amplitude to be A = 1.06^+5.20_-1.02, therefore constraining |F_NL| ≲ 13.9. Since positive and negative values are degenerate, we have -13.9 ≲ F_NL≲ 13.9. Moreover, the abundance of PBHs cannot exceed that of dark matter, i.e., f_PBH≲ 1. Using Eqs. (<ref>) and (<ref>), this limitation allows us to break the degeneracy and obtain constraints on F_NL as -13.9 ≲ F_NL≲ -0.1.
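The chain of conversions above, from a time-delay posterior draw to the free-spectrum energy density, can be sketched as follows; the numerical value of H_0 in inverse seconds is our own unit conversion and the example inputs are arbitrary.

```python
import numpy as np

def omega_from_delay(d, f, T_obs, h=0.674):
    """Free-spectrum energy density from a time-delay draw d(f) [s] at frequency f [Hz],
    following the chain d -> S = T_obs d^2 -> h_c^2 = 12 pi^2 f^3 S
    -> Omega = 2 pi^2 f^2 h_c^2 / (3 H_0^2)."""
    H0 = h * 3.2407792897e-18          # 100 km/s/Mpc expressed in 1/s, times h
    S = T_obs * d ** 2
    hc2 = 12.0 * np.pi ** 2 * f ** 3 * S
    return 2.0 * np.pi ** 2 * f ** 2 * hc2 / (3.0 * H0 ** 2)

# example: a 100 ns delay at f = 3 nHz over a 16-year observation span
print(omega_from_delay(d=1e-7, f=3e-9, T_obs=16 * 365.25 * 86400))
```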
Summary and discussion. While the CMB and large-scale structure observations have provided increasingly precise measurements on the largest scales of the universe, our knowledge of small scales remains limited, except for the constraints imposed by PBHs. PTAs, on the other hand, are an invaluable tool to probe the small-scale non-Gaussianity through SIGWs. Assuming the stochastic signal detected by the PTA collaborations originates from SIGWs, we jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to constrain the SIGWs accounting for non-Gaussianity. For the first time, we constrain the non-linear parameter as |F_NL|≲ 13.9 for a lognormal power spectrum of the curvature perturbation. Furthermore, we obtain -13.9 ≲ F_NL≲ -0.1 to avoid overproduction of PBHs. Although we have only dealt with the lognormal power spectrum of curvature perturbations, the method and the framework proposed in this work can be easily extended to different types of power spectra. The constraints on primordial non-Gaussianity have significant implications for inflation models that involve scalar fields, other than the inflaton, in generating the primordial curvature perturbations. For instance, adiabatic curvaton models predict that <cit.> f_NL = (5/3) F_NL = 5/(4 r_D) - 5 r_D/6 - 5/3, when the curvaton field has a quadratic potential <cit.>. Here the parameter r_D = 3ρ_curvaton/(3ρ_curvaton + 4 ρ_radiation) represents the "curvaton decay fraction" at the time of curvaton decay under the sudden decay approximation. Our constraint |F_NL|≲ 13.9 implies r_D≳ 0.05 (95%), and the further constraint that F_NL≲ -0.1 yields r_D≳ 0.62 (95%), indicating that the curvaton field has a non-negligible energy density when it decays. Our findings, therefore, pave the way to constrain inflation models with PTA data. Furthermore, as indicated in Fig. <ref>, the energy density spectrum of SIGW can generally be extended to the frequency band of the space-borne GW detectors. Therefore, the multi-band observations of PTAs with the forthcoming space-borne GW detectors, such as LISA/Taiji/TianQin, will provide a complementary investigation of non-Gaussianity. Note added. While finalizing this work, we found two parallel independent works <cit.>, which also investigate the possibility of explaining the NANOGrav signal with second-order GWs related to the non-Gaussianity. However, these two works didn't perform parameter estimation for the non-Gaussianity parameter using the PTA data. Acknowledgments LL is supported by the National Natural Science Foundation of China (Grant No. 12247112 and No. 12247176) and the China Postdoctoral Science Foundation Fellowship No. 2023M730300. ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176 and No. 12247112) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429. QGH is supported by grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15).
http://arxiv.org/abs/2307.03254v1
20230706190856
Vision Language Transformers: A Survey
[ "Clayton Fields", "Casey Kennington" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "cs.LG" ]
Vision language tasks, such as answering questions about or generating captions that describe an image, are difficult for computers to perform. A relatively recent body of research has adapted the pretrained transformer architecture introduced in <cit.> to vision language modeling. Transformer models have greatly improved performance and versatility over previous vision language models. They do so by pretraining models on large generic datasets and transferring their learning to new tasks with minor changes in architecture and parameter values. This type of transfer learning has become the standard modeling practice in both natural language processing and computer vision. Vision language transformers offer the promise of producing similar advancements in tasks which require both vision and language. In this paper, we provide a broad synthesis of the currently available research on vision language transformer models and offer some analysis of their strengths, limitations and some open questions that remain. § INTRODUCTION Vision language modeling is the domain where computer vision and natural language processing intersect. An example of a VL task is visual question answering: given an image and a question about that image, a VL model must choose the correct answer out of a number of choices. Another example, and a more challenging task, is image captioning: given an image, a model must produce a sequence of text describing the picture. Though effortless for human beings, tasks of this nature have historically proven extremely challenging for computers to perform. Until fairly recently, deep learning models for VL tasks tended to be both conceptually convoluted and confined to a narrow range of uses. In the past few years a new class of models called VL transformers has greatly expanded the accuracy and versatility of vision language models. These models are based on the celebrated transformer architecture introduced in <cit.>. VL transformers improve on the previous paradigms by pretraining models on large datasets of image-text pairs before transferring them to other tasks, usually with minor changes to parameter values and architecture. In a very short space of time a bewildering array of these models has appeared in the literature. They vary widely in their intended uses, architectures, pretraining processes, as well as the data used to pretrain them. In this paper, we provide a comprehensive survey of the various VL transformer models found in the literature. These models were designed for a wide range of vision language tasks. Models such as CLIP <cit.> and ALIGN <cit.> are particularly well suited to vision language alignment tasks such as image retrieval. Models like UNITER <cit.>, ViLBERT <cit.> and METER <cit.>, on the other hand, specialize in understanding tasks such as the visual question answering (VQA) task described in the introductory paragraph. Transformers with suitable architectures, LEMON <cit.> and GIT <cit.> to name two, were designed to generate text such as captions for image inputs. There is even a series of VL transformers specializing in visual grounding tasks, in which a model must match words to the visual objects they describe.
Referring Transformer and mDETR are two such models that can perform object detection on image inputs and match these objects to text descriptions. In the interest of brevity, we will restrict our study to models using English as their primary language. This excludes not only models for text in other languages, but also multi-lingual models. We also exclude models that are designed exclusively for video-language tasks. It should be noted, however, that some of the models we reviewed process video inputs as well as images. One multi-lingual model, PaLI <cit.>, is included because of its superior performance on English-language VL benchmarks. The impressive range of tasks mentioned above is reflected by an equally impressive variety of embedding strategies, model architectures, pretraining tasks and training datasets. We will discuss these topics in some detail as well as the various ways these features can be adapted to the VL domain. Along the way we hope to provide some insight into the various design choices of these models and, when sufficient data exists, the corresponding effects on their performance. All of the models reviewed for this paper are listed in Table <ref> along with the references for the papers introducing each model and some basic information about their design. The remainder of the paper is organized as follows. In Section <ref>, we provide a brief explanation of the transformer model that forms the basis of the models that we have reviewed and how pretrained transformers have been adapted for NLP and CV tasks. In Section <ref> we discuss how VL models embed visual and linguistic data into their feature space, with special attention paid to how they create visual features. Section <ref> addresses the architectures of the reviewed models and how these design choices affect the interactions of visual and linguistic features. The various pretraining tasks and strategies these models use and how they affect downstream performance are summarized in Section <ref>. Section <ref> describes the models' downstream capabilities and Section <ref> describes the data used for pretraining. In the final section we provide a brief analysis of the strengths and limitations of the models discussed, explore some future directions for research and identify open questions that remain. § BACKGROUND: TRANSFORMERS In this section, we describe the transformer-style deep neural models that form the architectural basis of the VL models we discuss below. Transformers were first introduced in the seminal paper "Attention Is All You Need" by <cit.> in the context of using attention mechanisms for machine translation tasks. Since then, transformers have replaced recurrent neural networks (RNNs) as the standard model for most NLP tasks. NLP transformers have achieved their remarkable results by pretraining networks with large unlabelled sets of text and then transferring the pretrained networks to other tasks with small changes in architecture and minimal parameter updates. Pretrained transformer models, such as RoBERTa <cit.> and GPT-3 <cit.>, are now state-of-the-art in virtually every category of NLP tasks. Convolutional neural networks (CNNs) are still widely used for CV tasks as of the writing of this paper. However, recent efforts have shown that the transformer architecture can be adapted to CV tasks with relatively few modifications <cit.>.
When pretrained with a sufficiently large dataset, vision transformers can perform competitively with state-of-the-art CNNs whose architectures were designed for CV. Given their ability to perform at or near state-of-the-art in both domains, transformers have become the natural choice as the basis for pretrained VL models. Before we move on to discussing the design choices involved in adapting transformers to VL tasks, we will offer a brief overview of the transformer model and the attention mechanism that powers its remarkable results. Readers well versed in the workings of transformers and their applications to NLP and CV should feel free to proceed to the next section. §.§ Architecture of Transformers In this subsection we describe the architecture of the original transformer model. Of particular interest is the self-attention mechanism that underlies the model's success in sequence processing. Unless otherwise stated, the source for the exposition in this subsection is <cit.>. §.§.§ Encoder and Decoder Stacks The first implementation of the transformer model was based on an encoder-decoder design. Formally, a given input sequence of symbol representations 𝐱 = (x_1, ..., x_n) is mapped to an intermediate representation 𝐳 = (z_1, ..., z_n) by the encoder module. With 𝐳 as input, the decoder generates an output sequence 𝐲 = (y_1, ..., y_n). The transformer encoder stack consists of N transformer layers of identical dimension. Each transformer layer, in turn, consists of a multi-head attention (MHA) sub-layer and a feed-forward network (FFN) sub-layer, both of which are described in the following sub-sections. A residual connection <cit.> around each sub-layer is then followed by a layer normalization. Formally this amounts to LayerNorm(𝐱 + Sublayer(𝐱)), where Sublayer(𝐱) is the MHA or FFN sub-layer function itself. The decoder is also a stack of N layers of identical dimension. However, layers in the decoder stack contain a third multi-head attention sub-layer that attends to the output of the encoder stack. The self-attention mechanism in the decoder stack is also modified so that subsequent positions in the sequence are masked. Masking combined with an offset applied to the output embeddings ensures that decoder predictions are based only on previous outputs. These modifications make the decoder well suited to generative tasks. In the next subsection we describe the multi-head attention mechanism key to the transformer's operation. §.§.§ Multi-Head Attention Sub-Layer Broadly speaking, an attention mechanism is a function that maps a query vector and a set of key-value vector pairs to an output. The output is then computed as a weighted sum of the values. <cit.> call the attention mechanism they developed “scaled dot-product attention". The input consists of query and key vectors of dimension d_k and value vectors of dimension d_v. The weights for the values are obtained by applying a softmax to the dot product of the query with all keys divided by √(d_k). In practice, the component vectors of the attention function are packed together into matrices Q, K and V and computed simultaneously as: Attention(Q,K,V) = softmax(QK^T/√(d_k))V One of the key innovations of the transformer model is that rather than performing a single attention function with input vectors of size d_model (the dimension of the model's hidden size), the query, key and value vectors are linearly projected h times with different, learned linear projections to their respective dimensions of d_k, d_k and d_v.
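Before turning to the multi-head extension, the scaled dot-product attention just defined can be sketched in a few lines of PyTorch; the shapes below are illustrative rather than taken from any particular model.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k**0.5   # (..., seq_q, seq_k)
    weights = F.softmax(scores, dim=-1)           # weights over keys sum to 1
    return weights @ V                            # weighted sum of the values

# toy example: a sequence of 5 tokens with d_k = d_v = 64
x = torch.randn(5, 64)
out = scaled_dot_product_attention(x, x, x)       # self-attention
print(out.shape)  # torch.Size([5, 64])
```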
The above attention function is then performed over each set of inputs, yielding h different d_v-dimensional value vectors. Finally, these are concatenated and projected back to a single vector of dimension d_model. The creators of the transformer posit that the parallel representations allow the model to attend to information from different representation subspaces at different positions. Formally, the multi-head attention mechanism can be described as: MHA(Q,K,V) = Concat(head_1, ..., head_h)W^O where head_i = Attention(QW_i^Q, KW_i^K, VW_i^V) and the projections are learned parameter matrices: W_i^Q ∈ R^d_model× d_k W_i^K ∈ R^d_model× d_k W_i^V ∈ R^d_model× d_v W^O ∈ R^hd_v× d_model The output of the MHA sub-layer is then passed to the FFN sub-layer described in the next section. §.§.§ Feed-Forward Network Sub-Layer Each layer of the transformer contains a fully connected feed-forward network (FFN) sub-layer. The shallow network consists of two linear transformations around a ReLU activation. Formally, this is expressed as: FFN(x) = max(0, xW_1 + b_1)W_2 + b_2 W_1 projects the input from the model's hidden size d_model to an intermediate size d_ff and W_2 projects the transformed input back to d_model. b_1 and b_2 are bias terms. The dimensions of each network are identical; however, the parameters of the learned projection matrices W_1 and W_2 differ from layer to layer. The original transformer had N=6 layers with dimensions d_model = 512 and d_ff = 4*d_model = 2048. Because the attention mechanism has no inherent order, the original transformer and most subsequent models add a positional encoding to the input embeddings so the model can make use of the sequence information. §.§ Pretrained Transformers for Natural Language Processing After their introduction in <cit.>, transformer models were quickly adapted to the task of transfer learning for NLP tasks. The GPT (Generative Pretrained Transformer) set new state-of-the-art results on a variety of tasks upon its introduction by <cit.>. The GPT model consists of a large stack of transformer decoder blocks. The model is pretrained on the BooksCorpus dataset <cit.>, a large, unlabelled corpus of text, using a standard language modeling objective. That is, given an unsupervised corpus of tokens U = {u_1, ..., u_n}, the model maximizes the log-likelihood L = ∑_i log P(u_i | u_i-k, ..., u_i-1; Θ) where k is the context size and Θ are the network parameters. Less formally, the model must predict the next token, given the k previous tokens. After pretraining for 100 epochs, the model can be transferred to a supervised NLP task by replacing the output layer of the network and training for a few epochs. This process is known as fine-tuning. Shortly after GPT, BERT (Bidirectional Encoder Representations from Transformers) was introduced in <cit.>, and is now the most widely used NLP model available. In contrast to the generative GPT model, BERT consists of a stack of transformer encoder blocks. Crucially, BERT was the first transformer model pretrained with the masked language modeling (MLM) objective. In MLM, the model is presented with a sequence of text tokens, some percentage of which are replaced with a special "[MASK]" token, and must predict the masked tokens given the unmasked tokens. Unlike the standard language modeling objective, the prediction is conditioned on words that come before and after the token being predicted. After pretraining, the model can then be fine-tuned to downstream tasks with minimal changes to the model parameters and architecture.
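Putting the sub-layers of this subsection together, a single encoder layer of the kind stacked in BERT-style models might look as follows. This is an illustrative reimplementation with the original paper's default sizes (h = 8, d_model = 512, d_ff = 2048), not reference code from any of the models discussed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, h=8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        self.W_q = nn.Linear(d_model, d_model)   # packs the h learned projections W_i^Q
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)   # output projection W^O

    def forward(self, q, k, v):
        B, T, _ = q.shape
        def split(x):   # (B, T, d_model) -> (B, h, T, d_k)
            return x.view(B, -1, self.h, self.d_k).transpose(1, 2)
        Q, K, V = split(self.W_q(q)), split(self.W_k(k)), split(self.W_v(v))
        scores = Q @ K.transpose(-2, -1) / self.d_k**0.5
        heads = F.softmax(scores, dim=-1) @ V                 # (B, h, T, d_k)
        concat = heads.transpose(1, 2).reshape(B, T, -1)      # concatenate the h heads
        return self.W_o(concat)

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, h=8, d_ff=2048):
        super().__init__()
        self.mha = MultiHeadAttention(d_model, h)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        x = self.norm1(x + self.mha(x, x, x))    # LayerNorm(x + Sublayer(x))
        return self.norm2(x + self.ffn(x))

layer = EncoderLayer()
print(layer(torch.randn(2, 10, 512)).shape)      # torch.Size([2, 10, 512])
```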
Pretrained transformers have mostly displaced recurrent neural networks as the standard for NLP tasks. Though the pretraining tasks and domain are often quite different from the downstream tasks they are applied to, they generally outperform task-specific deep models. The GPT model is now in its fourth iteration, GPT-4 <cit.>. By augmenting traditional language modeling with reinforcement learning, it is capable of reliably producing human-quality text on demand. The functionality and performance of BERT have been extended by a variety of models <cit.>, and BERT has been compressed to improve inference time <cit.>. Inspired by the extraordinary success of using transformers for NLP, researchers have recently begun to adapt transformers to the CV domain. We will close this section by describing some of these efforts. §.§ Pretrained Transformers for Computer Vision The ViT (Vision Transformer) model was introduced in <cit.>. As the name suggests, it is a transformer-based model and its creators closely follow the design and dimensions of the BERT model. In place of textual tokens, features are created by breaking an image into a sequence of P× P patches, where P is the patch size, flattening the 2-D image patches and then linearly projecting them to match the transformer's embedding size. It is pretrained using supervised image classification on a labelled dataset. The model's creators show that ViT can match or exceed state-of-the-art CNN-based networks on common downstream image classification benchmarks when it is provided with sufficient data. One notable drawback of ViT is that it requires more pretraining data to achieve said results than CNN-based networks. The model's creators surmise that this is because transformers lack the image-friendly inductive biases of CNNs, such as locality, two-dimensional neighborhood structure and translation equivariance. One possible solution to the problems posed by the large pretraining data requirements of ViT was proposed in the BEIT model <cit.>. BEIT adopts a similar architecture and embedding strategy to ViT, but introduces a novel pretraining task, masked image modeling. The masked image modeling task is much like BERT's masked language modeling objective. Image patch representations are first "tokenized" to a discrete representation using an auto-encoder. The designated, abstract category of the masked image patch is then predicted by the model. The creators show that it can perform on par with ViT using substantially less pretraining data. Notably, the masked image modeling task is very similar to the masked region modeling objective used by some VL transformer models that is described in Section <ref>. Other approaches to improving the transformer's performance in vision tasks take their inspiration from convolutional neural networks. CoAtNet <cit.> employs depthwise convolution layers that combine the inductive biases of CNNs with the model capacity of transformers. Swin <cit.> and CSwin <cit.>, transformer-based vision models known as hierarchical transformers, are another approach. These models perform self-attention over a shifting two-dimensional segment of the input image. This mechanism creates a bias toward learning two-dimensional relationships and it allows hierarchical transformers to learn with less data than the vanilla vision transformer. In this section, we have covered sufficient background to fully explore our main topic.
The remainder of the paper will be devoted to describing pretrained transformer VL models, starting with the strategies they use to jointly embed visual and textual features. § EMBEDDING STRATEGIES In this section we discuss how VL transformer models encode textual and visual inputs into the model's feature space. Formally, textual and visual input must be encoded into a sequence of textual tokens {t_1, ..., t_T} and a sequence of visual features {v_1, ..., v_V} where each sequence element is a numerical vector. Virtually all of the models that we reviewed for this paper adopt the same embedding strategy for textual representations, described in detail in the subsection immediately below. The strategy for representing images, however, varies significantly and represents one of the key differences among pretrained VL models; we will discuss the subject in detail in the following section. §.§ Textual Embeddings Most of the VL models we discuss use the textual embedding strategy of BERT <cit.>. The input text is first tokenized using the WordPiece algorithm <cit.>. Each sequence begins with a special classification token "[CLS]" and sequences of text are separated using the special "[SEP]" token. Finally, learned embeddings representing a token's position within the sequence and which segment of text it belongs to are added to produce the token's input representation. The process is summarized visually in Figure <ref>. Some models, such as CLIP <cit.>, UNIMO <cit.>, OFA <cit.> and METER <cit.> to name a few, use the BPE encoding scheme <cit.> as opposed to WordPiece encoding. Other models, BEiT-3 <cit.>, VL-T5 <cit.> and Flamingo for example, make use of the SentencePiece encoding described in <cit.>. The essential embedding strategies of these models, however, are largely the same. §.§ Visual Embeddings §.§.§ Region Features One of the most common approaches to creating visual embeddings for VL transformers has been to use region-based features produced by an off-the-shelf object detection network. Object detection networks segment images into rectangular regions containing discrete visual objects and assign each region an appropriate label for the object it contains. Fast R-CNN <cit.> and YOLO <cit.> are popular examples. UNITER <cit.>, ViLBERT <cit.>, VL-BERT <cit.>, VisualBERT <cit.>, Oscar <cit.>, VinVL <cit.>, LXMERT <cit.>, VL-T5 <cit.>, Unicoder-VL <cit.> and UNIMO all make use of region-based features. Because the attention mechanism used by transformers is inherently unordered, an embedding representing the position of the region within the image is usually added to the feature embedding. As an illustrative example, consider the embeddings of VL-BERT. VL-BERT extracts region features from image input using a Fast R-CNN object detector <cit.>. A visual feature embedding is taken as the last hidden state vector prior to the final output layer for each Region of Interest (RoI). Information regarding the position of the bounding box enclosing the region is embedded into a high-dimensional space and included with each RoI feature. The concatenated visual feature embedding is then projected to match the dimension of the textual features and concatenated with the VL transformer's textual token embeddings. Tokens representing image regions are marked with a special "[IMG]" token, while textual and other special tokens receive an RoI representation of the entire image. Figure 3.1 gives a visual summary of the model's embedding strategy.
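A minimal sketch of a VL-BERT-style region embedding is given below: pooled RoI features from an external detector are combined with an embedding of their bounding-box geometry and projected to the transformer width. The 2048-dimensional RoI features and the particular box encoding are assumptions made for illustration, not the model's actual implementation.

```python
import torch
import torch.nn as nn

class RegionEmbedding(nn.Module):
    """Project detector RoI features plus box geometry into the VL transformer space."""
    def __init__(self, roi_dim=2048, d_model=768):
        super().__init__()
        self.visual_proj = nn.Linear(roi_dim, d_model)
        self.box_proj = nn.Linear(5, d_model)        # (x1, y1, x2, y2, area) encoding
        self.type_embed = nn.Embedding(2, d_model)   # 0 = text token, 1 = image region

    def forward(self, roi_feats, boxes):
        # roi_feats: (num_regions, roi_dim); boxes: (num_regions, 4) normalized to [0, 1]
        area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        geometry = torch.cat([boxes, area.unsqueeze(-1)], dim=-1)
        v = self.visual_proj(roi_feats) + self.box_proj(geometry)
        return v + self.type_embed(torch.ones(len(v), dtype=torch.long))

# toy inputs: 36 detected regions with well-formed boxes
xy1 = torch.rand(36, 2) * 0.5
boxes = torch.cat([xy1, xy1 + torch.rand(36, 2) * 0.5], dim=-1)
regions = RegionEmbedding()(torch.randn(36, 2048), boxes)
print(regions.shape)  # torch.Size([36, 768]); ready to concatenate with text embeddings
```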
The general complexity of region features makes for significant variations in how different models create and use them. UNITER simply concatenates the textual and visual features and separates them with a special "[SEP]" token. This allows it to dispense with the special "[IMG]" token that VL-BERT uses. ViLBERT and LXMERT create RoI features with CNN-based object detectors but keep textual and visual embeddings separate and use a cross-attention mechanism to allow them to interact. There are two notable drawbacks to using region features. Firstly, they can only identify the objects that the object-detection model used to create them was trained on. VL models using them, therefore, can only recognize and describe those same visual categories. Secondly, the object detection networks used to generate region features are computationally expensive, and creating region features represents a serious computational bottleneck for models that use them <cit.>. §.§.§ Grid Features In place of region features, the encoder-only models Pixel-BERT <cit.> and SOHO <cit.> and the encoder-decoder model E2E-VLP learn to align text with visual embeddings extracted from the feature grid output by a CNN. The dual encoders CLIP, ALIGN <cit.> and LiT <cit.> also use grid features, as do the visually grounded transformers GPV-1 <cit.>, mDETR <cit.>, DQ-DETR <cit.>, UniTAB <cit.>, Referring Transformer <cit.> and KD-VLP <cit.>. We consider the formal expression of the process for creating the grid features used by E2E-VLP. A raw image input is fed into a CNN encoder. Given an image I with 3 color channels and height H_0 and width W_0 such that I ∈ R^3× H_0 × W_0, the CNN encoder will output a feature map f ∈ R^C × H × W. The E2E-VLP model uses the same dimensions as DETR <cit.> with embedding dimension C=2048 and spatial dimensions H=H_0/32 and W=W_0/32. To reduce the embedding dimension, a 1 × 1 convolution is applied to reduce C to a smaller dimension d, yielding a lower-resolution map z ∈ R^d × H × W. Finally, in order to produce a sequence of tokens, the feature map is flattened to a sequence of H · W vectors of dimension d. The strategy of models using grid features is essentially the same in most important respects. This approach to visual embedding removes the theoretical ceiling imposed by the object categories of region features. It also provides a dense set of features for fine-grained visual reasoning. However, this approach still relies on a pretrained CNN as its visual encoder, creating a two-step training and inference process. And although the creation of grid features uses less compute than object detection models, image processing still accounts for most of the inference time of a model like Pixel-BERT <cit.>. We close this section with a strategy called patch embedding that directly addresses the compute requirements of image preprocessing and obviates the need for a CNN-based visual encoder. §.§.§ Patch Embeddings The final approach we discuss, patch embedding, was introduced by the Vision Transformer (ViT) <cit.> and was first adapted for use in VL tasks by the ViLT model introduced in <cit.>. Since its introduction in ViLT, it has been a popular choice and is used by the VLMo <cit.>, ALBEF <cit.>, BEiT-3, BLIP-2 <cit.>, CoCa <cit.>, SimVLM <cit.>, OFA <cit.>, METER <cit.>, mPLUG <cit.>, BridgeTower <cit.>, DaVinci <cit.>, Florence <cit.> and FLAVA <cit.> models.
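Before turning to the mechanics of patch embedding, the grid-feature pipeline described above for E2E-VLP can be sketched in a few lines; the CNN feature map here is a random stand-in for an actual backbone output, and the reduced dimension d = 256 is an illustrative choice.

```python
import torch
import torch.nn as nn

# Suppose a CNN backbone has produced a feature map f of shape (C, H, W),
# with C = 2048 and H = H0/32, W = W0/32 for an input image of size H0 x W0.
C, H, W, d = 2048, 20, 20, 256
f = torch.randn(1, C, H, W)               # stand-in for the CNN output

reduce = nn.Conv2d(C, d, kernel_size=1)   # 1x1 convolution: C -> d
z = reduce(f)                             # (1, d, H, W), the lower-resolution map z
tokens = z.flatten(2).transpose(1, 2)     # (1, H*W, d): a sequence of H*W visual tokens
print(tokens.shape)                       # torch.Size([1, 400, 256])
```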
Formally, a given image I ∈𝐑^3 × H × W is sliced into N = HW/P^2 patches and flattened into p ∈𝐑^N × (P^2· C) where (P, P) is the patch resolution. A learned linear projection is then used to project each feature to the embedding dimension. The image embedding is obtained by summing the patch projection with learned position and type embeddings. Finally, the input features are formed by concatenating textual and visual embeddings for input into the transformer. Using a patch size of P=32, the ViLT model uses only a fraction of the compute for image processing compared to the models previously discussed. Figure <ref> shows an illustration of the patch embedding process for ViLT provided by <cit.>. Most of the models use ViT <cit.> to process images into patch features. As a result, they follow a similar approach to patch embedding. Three notable exceptions are OmniVL, OFA and SimVLM, which use a CNN architecture to extract image patches. The authors argue that these patch embeddings are superior to those obtained from the simple linear projection used by ViT. § MODEL ARCHITECTURE Independent of the embedding strategy employed, the model architecture of VL models must allow features associated with the textual and visual modalities to interact in some fashion. In this section we describe the different model designs used by pretrained VL transformers to jointly represent vision and language. In the broadest sense, pretrained VL models can be classified by whether this interaction is achieved through a shallow interaction, such as a dot product, or whether interaction occurs within the deep learning model itself. Among models using a deep interaction, architectures employ either a single-tower encoder, a dual-tower encoder or an encoder-decoder design. Following <cit.>, we refer to models using shallow interaction as dual encoders. These architectures are described in the subsection immediately below with notable examples from available VL models. §.§ Dual Encoders Dual encoders model visual and textual representations separately and the modalities do not interact within the deep learning model. Instead, the outputs of the visual and textual modules interact through a simple mechanism, usually a cosine similarity. Notable examples of dual encoder models are CLIP from OpenAI, ALIGN from Google Research, Florence and LiT. Consider the CLIP model: here, text is encoded using a 12-layer transformer with BPE encoding <cit.>. Images are encoded with either a ResNet50 <cit.> or Vision Transformer <cit.> architecture. The output embeddings of each module are then compared via cosine similarity, which for normalized features reduces to a vector dot product. The shallow interaction scheme allows for a simple pretraining process that scales well to very large datasets. Given a large number of (image, text) pairs, training maximizes the cosine similarity between correct pairs and minimizes it between incorrect pairs, a process called contrastive learning, which is described in detail in Section <ref>. The ALIGN model follows a strategy very similar to CLIP, using an EfficientNet <cit.> module as an image encoder and BERT as a text encoder. Like CLIP, the encoder modules interact via a contrastive loss on their outputs. Crucially, ALIGN massively scales up the data set used for training. Its creators collected 1.8 billion image and alt-text pairs from the internet and performed only minimal post-processing steps in favor of scale.
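The contrastive objective shared by CLIP and ALIGN can be sketched as a symmetric cross-entropy over a batch of cosine similarities. This is a hedged reimplementation of the idea rather than either model's actual training code; the embedding size and temperature are placeholder values.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # l2-normalize so the dot product equals cosine similarity
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(len(img))              # the i-th image matches the i-th text
    # symmetric cross-entropy: image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy batch of N = 8 embedded (image, text) pairs from the two encoders
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```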
The large, and noisy, dataset allows ALIGN to surpass CLIP on several important benchmarks. Though much like CLIP and ALIGN in terms of architecture, LiT takes the novel approach of using a pretrained image encoder whose weights are locked during its contrastive training. The authors show through ablation studies that this approach has a variety of advantages and that it performs better than CLIP and ALIGN on several image retrieval benchmarks. The Florence model extends the dual encoder to a much greater variety of downstream tasks and capabilities. The base model consists of a transformer text encoder, RoBERTa <cit.>, and a hierarchical vision transformer, CSwin <cit.>, as the visual encoder, trained with contrastive learning. Florence, however, can be equipped with a large number of task-specific heads that allow it to perform a wider variety of tasks than the other dual encoder models, including inference on video tasks. Despite their relative simplicity, dual encoder models like CLIP achieve remarkable results on a variety of tasks <cit.>, in particular on zero-shot classification and retrieval tasks. The fast inference time of dual encoder models makes them ideal for image retrieval tasks. However, the creators of CLIP and other researchers <cit.> have noted that CLIP performs poorly on complex VL classification tasks such as NLVR2 <cit.>. The Florence model performs relatively well on the VQA task <cit.>, though well below state-of-the-art, and it was not tested on a wide array of complex classification tasks. In order to attain state-of-the-art results on such tasks, a deeper interaction between modalities appears to be required. In the next two subsections we explore deep interaction mechanisms that fare better on complex VL tasks. §.§ Fusion Encoders §.§.§ Single-Tower Architecture Following <cit.>, we classify fusion encoders into two categories: single-tower and dual-tower architectures. In a single-tower architecture, a single transformer encoder operates on a concatenation of visual and textual input representations. Since both the visual and textual tokens are embedded into a single input, the single transformer stack allows for unconstrained modeling of modality interactions. It also has the distinct advantage of requiring fewer parameters than the more complex two-tower architecture. Of the two encoder types, single-tower architectures have been the most common approach thus far. ViLT, VL-BERT, UNITER, OSCAR, SOHO, UNIMO, PixelBERT, Unicoder-VL and VisualBERT all use a single-tower architecture. We will use Unicoder-VL as an illustrative example. The region features and textual input embeddings described in Section <ref> are fed into a transformer encoder with the same dimensions as the BERT model <cit.>. The architecture is depicted in Figure <ref>. Though single-tower models differ along other dimensions (embedding strategy, pretraining tasks, data used, etc.), the essential architecture is much the same for all of them. Many of them, as their names suggest, are variations on the BERT NLP model and some, such as VisualBERT and VL-BERT, are initialized with the pretrained weights from BERT. A notable feature of the aforementioned ViLT is that it is initialized with the pretrained weights from ViT <cit.> instead of BERT. §.§.§ Two-Tower Architecture Rather than operating on a concatenation of visual and textual inputs, two-tower architectures encode each modality in separate transformer stacks. Interaction is then achieved through a cross-attention mechanism.
ViLBERT, LXMERT, METER and BridgeTower are examples of dual-tower models. In ViLBERT, for example, separate image and text embeddings are passed into parallel sequences of BERT-style transformer blocks and co-attention transformer layers. Co-attention layers take intermediate visual and textual representations H^(i)_V and H^(i)_T and compute query, key and value matrices as in a standard transformer block. The keys and values for each modality are then passed as input into the other modality's multi-headed attention block. The rest of the layer proceeds as in a traditional transformer block, resulting in a multi-modal feature. ViLBERT's architecture is depicted in Figure <ref>. LXMERT's architecture and general approach are quite similar to those of ViLBERT. One notable difference between the models is that while ViLBERT's text module is initialized with the weights from BERT-base, all of LXMERT's parameters are trained from scratch. METER, though broadly similar to LXMERT and ViLBERT, differs in a number of key ways. Firstly, its creators conducted a broad architecture analysis and experimented with a number of different text and image encoders. Their default model consisted of a pretrained RoBERTa text encoder and a pretrained CLIP-ViT-224/32 <cit.> image encoder. These architectural innovations allow METER to outperform the previous models that made use of the patch embeddings mentioned in Section <ref>. The BridgeTower model is similar in most respects to METER; however, it contains novel features that its creators call bridge connections. Here representations from the top 6 layers of each unimodal encoder are connected to each of the layers in the multi-modal encoder prior to the cross-attention mechanism. This is done on the theory that, since different layers of transformers encode different types of information, these bridge connections will enrich the model's multi-modal representations. And indeed, BridgeTower is able to outperform METER on several downstream benchmarks despite being almost identical in other respects. §.§ Combination Encoders Several recently introduced VL models, VLMo, ALBEF, BEiT-3, FLAVA and X^2-VLM, have attempted to leverage the strengths of both dual encoders and fusion encoders in a single model. These models contain separate visual and textual encoders at the base of the model. The outputs of the text encoder and the image encoder are aligned using cosine similarity before being fed into a fusion encoder module of some kind. The VLMo model, for instance, combines the two interaction schemes via a transformer block its creators call the mixture of modality experts. Here the FFN of a transformer encoder block is replaced with a pool of modality-specific expert networks. Specifically, the model can direct input to three different modality experts: a vision expert (V-FFN), a language expert (L-FFN) and a vision language expert (VL-FFN). If the model is given image-only or text-only inputs, it can encode them with the appropriate expert module. When given an image-text pair, it can encode each component with the correct modality expert at lower layers before using the vision language expert module in the top layers. This process is visually summarised in Figure <ref>. The architecture and approach of BEiT-3 are very similar to VLMo; however, it is massively scaled up, leading to increased downstream performance on a variety of tasks. FLAVA uses a slightly less complex architecture but the idea remains the same.
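A schematic sketch of the mixture-of-modality-experts idea is shown below: the self-attention is shared, while the feed-forward sub-layer is chosen according to the type of input being encoded. The routing granularity, expert sizes and activation are simplifying assumptions, not the actual VLMo implementation.

```python
import torch
import torch.nn as nn

class MoMEBlock(nn.Module):
    """Transformer block whose FFN is chosen per input type (V-FFN, L-FFN or VL-FFN)."""
    def __init__(self, d_model=768, heads=12, d_ff=3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.experts = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for name in ("vision", "language", "vision_language")})
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x, modality):
        a, _ = self.attn(x, x, x)                 # shared self-attention over the sequence
        x = self.norm1(x + a)
        # route the sequence to the expert matching its input type
        return self.norm2(x + self.experts[modality](x))

block = MoMEBlock()
image_only = block(torch.randn(2, 36, 768), "vision")
pair = block(torch.randn(2, 56, 768), "vision_language")
print(image_only.shape, pair.shape)
```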
In FLAVA, the output of each modality encoder can be passed to a task head for classification on uni-modal tasks or be passed to a multi-modal encoder, after a contrastive loss, for vision language tasks. The ALBEF model takes a slightly different approach in combining dual and fusion encoders. Here the respective modalities are encoded and then aligned using cosine similarity and a contrastive loss, much like the CLIP or ALIGN models. After this step, the representations are then fed into a 6-layer fusion encoder that uses cross-attention. Though this yields a model less flexible than one like VLMo, the approach offers the possibility of a deeper, more comprehensive vision language interaction. X^2-VLM also consists of three separate modules, a vision encoder, a text encoder and a fusion encoder, with visual and textual features aligned using cosine similarity. Significantly, this model is larger in scale, is trained on more data and can also perform vision language tasks using video as input. §.§ Encoder Decoder Models Following the architecture of the original transformer, some VL models opt for a design consisting of at least one encoder stack and a decoder stack. The VL-T5, OFA, OmniVL, PaLI, E2E-VLP, SimVLM, mPLUG, Flamingo and CoCa models all make use of this architecture. Additionally, the models mDETR, DQ-DETR, UniTAB, KD-VLP and Referring Transformer, which all specialize in visual grounding tasks, use some variation of the encoder-decoder design. This architecture is versatile in general and allows models using it to successfully perform a wide range of functions, including generative tasks such as image captioning. As an illustrative example we consider VL-T5. It consists of a single multi-modal encoder stack that accepts a sequence of word-piece tokens and image region features. The encoder produces an intermediate output h = {h_1^t, ..., h_|t|^t, h_1^v, ..., h_|v|^v} where h_i^t and h_i^v respectively represent the intermediate state vectors for textual and visual representations. The decoder stack then iteratively attends to its own previously generated output via self-attention and to the encoder's intermediate outputs via cross-attention. Using this procedure, the decoder computes the probability of the next text token y_j, formally expressed as P_θ(y_j|y_<j,h). Though the general approach in all of the aforementioned architectures is basically similar, there is a very wide range of design distinctions. mPLUG, for instance, uses a two-tower encoder of the type discussed just above. Notably, its encoder omits cross-attention on some of the layers for its visual component, resulting in a more efficient model. Flamingo uses a novel module called a Perceiver Resampler to convert the output of a pretrained visual encoder to a fixed number of visual features. The visual features are then cross-attended by a text decoder with alternating blocks of a pretrained large language model and novel cross-attention layers. These features allow Flamingo to handle an arbitrary mix of image and text inputs and generate text in an open-ended way. OmniVL makes use of two encoder stacks: a visual encoder that can handle both images and video, and a text transformer based on BERT. A novel feature of this model is that it makes use of two decoder stacks, one decoder for vision language alignment and understanding tasks and a separate decoder for generative tasks.
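The encoder-decoder interface these models share, a multimodal encoder whose hidden states h are cross-attended by an autoregressive decoder computing P_θ(y_j|y_<j, h), can be sketched generically with PyTorch's built-in transformer. The dimensions, vocabulary size and randomly initialized embeddings are purely illustrative, not taken from VL-T5 or any other specific model.

```python
import torch
import torch.nn as nn

d_model, vocab = 512, 32128
model = nn.Transformer(d_model=d_model, nhead=8, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
lm_head = nn.Linear(d_model, vocab)

# encoder input: concatenated text-token and visual-region embeddings
enc_in = torch.cat([torch.randn(1, 20, d_model),    # |t| = 20 text embeddings
                    torch.randn(1, 36, d_model)],   # |v| = 36 region embeddings
                   dim=1)
dec_in = torch.randn(1, 5, d_model)                 # embeddings of tokens generated so far
causal = model.generate_square_subsequent_mask(5)   # decoder sees only previous outputs

h = model(enc_in, dec_in, tgt_mask=causal)          # decoder cross-attends the encoder states
logits = lm_head(h)                                 # scores for P(y_j | y_<j, h) over the vocabulary
print(logits.shape)                                 # torch.Size([1, 5, 32128])
```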
CoCa also uses two decoder networks, though, as opposed to OmniVL, the outputs of its image encoder and its text decoder are then input to a multi-modal decoder for generative tasks. The models mDETR, DQ-DETR, UniTAB and Referring Transformer, designed for visual grounding tasks, also use encoder-decoder architectures. As their names suggest, mDETR and DQ-DETR are derived from a transformer-based object detection model called DETR <cit.>. Both of these models process image input with a CNN backbone and text using a transformer encoder. Their outputs are concatenated and fed to a transformer encoder. The decoder module has a series of learned embeddings called queries. The decoder attends to the encoder output and the query embeddings. Finally, the decoder output is sent to a feed-forward network for classification. Though complex, this process provides the model with the type of fine-grained information about visual objects required for visual grounding tasks. The Referring Transformer also makes use of query embeddings to extract object information; however, it also contains a separate query encoder module to further process visual object information. The last of these models, UniTAB, uses a fairly generic encoder-decoder architecture. Two models, DaVinci and OFA, can perform a truly impressive range of tasks and do so without the significant changes in architecture often required to adapt pretrained models to various tasks. The decoder module in both models can generate both image and textual output, meaning that these models can perform vision language understanding tasks such as VQA <cit.> as well as generative tasks like image-caption generation. Notably, both of these models can also perform text-to-image generation tasks, like models such as DALL-E <cit.>, as well as unimodal tasks such as GLUE <cit.> and image classification on datasets such as ImageNet <cit.>. § PRETRAINING TASKS This section is devoted to discussing the pretraining tasks used by the various VL transformers. Pretraining is a key element of these models' success and we will devote a significant amount of space to describing these methods. Almost all of the fusion and combination encoder models make use of masked language modeling and image-text matching, both of which are extensions of the pretraining objectives used in the BERT NLP model <cit.>. Below, we describe these tasks in detail along with several additional objectives found in the relevant literature. §.§ Masked Language Modeling Some variation of the masked language modeling objective introduced in <cit.> and described in Section 2.2 is used by all of the fusion encoder and combination encoder models that we reviewed. Several of the encoder-decoder models, such as VL-T5, also make use of variations on this task. The key difference between this task and masked language modeling in pure NLP is that in the VL setting, the models have access to corresponding visual tokens in addition to unmasked text representations. We refer to PixelBERT as a representative example. Formally, it minimizes the negative log-likelihood: L_MLM(θ) = -E_(𝐭,𝐯) ∼ D log P_θ(t_m|𝐭_\ m, 𝐯) where t_m is the masked token being predicted, θ are the model's parameters, 𝐭_\ m are the unmasked text tokens, and 𝐯 are the corresponding image features. Each pair (𝐭, 𝐯) is sampled from the training set D. As in BERT, 15% of text tokens are selected at random and replaced with a special "[MASK]" token.
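A hedged sketch of this VL masked language modeling objective follows: roughly 15% of the text positions are masked, and the loss is the negative log-likelihood of the original tokens at those positions, conditioned on the remaining text and the visual tokens. The stand-in fusion encoder, vocabulary size and mask-token id are assumptions made for illustration.

```python
import torch
import torch.nn as nn

vocab, d_model, MASK_ID = 30522, 768, 103
fusion = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)  # stand-in encoder
tok_embed = nn.Embedding(vocab, d_model)
mlm_head = nn.Linear(d_model, vocab)

text_ids = torch.randint(0, vocab, (1, 20))          # toy text-token ids t
img_feats = torch.randn(1, 36, d_model)              # visual tokens v (regions/grid/patches)

mask = torch.rand(text_ids.shape) < 0.15             # select ~15% of the text positions
mask[:, 0] = True                                    # ensure at least one masked position here
inputs = text_ids.masked_fill(mask, MASK_ID)         # replace them with [MASK]

x = torch.cat([tok_embed(inputs), img_feats], dim=1) # joint text + image sequence
out = fusion(x)[:, :20]                              # keep only the text positions
loss = nn.functional.cross_entropy(
    mlm_head(out[mask]),                             # predictions at masked positions only
    text_ids[mask])                                  # original tokens as targets
print(loss.item())
```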
Though all of the fusion encoders make use of MLM, there are some minor differences in approach worth noting. ALBEF <cit.>, for instance, minimizes a cross-entropy loss function rather than a log-likelihood. ViLT <cit.> masks only whole-word units rather than the subword units that most of these models use in tokenization. This is done on the theory that visual cues are more likely to be associated with whole words; e.g. "giraffe" will have more visual meaning than "gi##" or "##raffe". UNIMO masks only contiguous spans of complete words as is done in SpanBERT <cit.>. Most models using this task, such as SOHO, follow <cit.> and mask 15 percent of text tokens. Some models deviate from this practice, however; BEiT-3, for example, masks 40 percent of the text tokens from the image-text pairs in its pretraining dataset. §.§ Masked Image Modeling The masked image modeling task is an extension of the masked language modeling objective described above. In this setting, however, the objects being modeled are image features rather than text tokens. ViLBERT, UNITER, VL-BERT, LXMERT, Unicoder-VL, BEiT-3, UNIMO, SOHO and FLAVA all make use of this objective in one way or another. We consider ViLBERT as an example. In ViLBERT, 15% of both text tokens and image regions are randomly selected and masked. Rather than being replaced by a special "[MASK]" token as textual features are, the masked image features are set to 0. The model then predicts a distribution over the category labels for its masked region features. Training minimizes the KL divergence between this distribution and the output distribution for the region from the same R-CNN model it uses for feature extraction. LXMERT, Unicoder-VL, UNITER, UNIMO and VL-BERT perform the masked image modeling classification task in a similar manner. Models that do not use region features must approach this task differently since they do not have labels naturally associated with each visual feature. SOHO uses grid features, and the index its visual dictionary module assigns to each feature serves as the label for classification. BEiT-3 and FLAVA, which use patch features, use a discrete variational autoencoder <cit.> to assign each image patch a code from a fixed set of discrete image codes, using the process described in <cit.>. The model must then classify each masked patch according to its assigned visual token. There are a couple of other noteworthy distinctions in the way this task is deployed among the various models. UNITER and LXMERT, for example, extend the masked region modeling task to include region-feature regression. Here, the model is tasked with predicting the actual feature values of a masked region using a loss function such as squared error. Another key difference is that models such as UNITER will only mask elements from one modality at a time; i.e., during masked region modeling all text features remain unmasked and available to the model, and vice-versa. ViLBERT and LXMERT, on the other hand, mask a given percentage of all tokens at any time, regardless of their modality. §.§ Image-Text Matching Most of the encoder-only models and some of the encoder-decoder models, including OFA and mPLUG, make use of the binary classification image-text matching task. Here the model, given an image-text pair, must determine whether the text in fact describes the image. We refer to UNITER as an example. Given a text-image pair (𝐭, 𝐯), the model output is fed into a fully connected layer and a sigmoid function to produce an output score S_θ(𝐭, 𝐯) between 0 and 1.
The model then minimizes the loss function 𝐋_𝐈𝐓𝐌(θ) = -E_(𝐭,𝐯) ∼ D[y log S_θ(𝐭, 𝐯) + (1-y) log(1 - S_θ(𝐭, 𝐯))], where y ∈{0,1} is the label indicating whether the text and image are correctly paired. The negative pairs are created by replacing the image or text in a paired sample with one randomly drawn from the training set D. The task is relatively simple and there are few variations in its implementation among models. §.§ Contrastive Learning Contrastive learning is the primary pretraining objective employed by dual encoder models such as CLIP, ALIGN, LiT and Florence and is also used in the combination models ALBEF, VLMo, X2-VLM and FLAVA. Several of the encoder-decoder models also use some type of contrastive loss, including CoCa, mPLUG and Flamingo. We consider the contrastive loss in CLIP as a representative example. Given a batch of N encoded (image, text) pairs, the CLIP model is trained to predict which of the N^2 possible (image, text) pairs actually occurred. This is achieved by jointly training an image encoder and a text encoder to maximize the cosine similarity of the N correct pairs and minimize the cosine similarity of the N^2 - N incorrect pairs. The similarity scores are input to a symmetric cross-entropy loss function to be optimized. For normalized values, the cosine similarity reduces to a simple dot product. This process is visually summarized in Figure <ref>. Florence uses a slightly modified approach called unified image-text contrastive learning that was introduced in <cit.>. The unified approach uses a text hash-table to allow a single text description to be associated with multiple images, a capability not present in the procedure described above. The contrastive loss used in UNIMO is heavily augmented through text rewriting and image/text retrieval to create large volumes of positive and negative examples. Among the minor distinctions between contrastive loss implementations are approaches to normalizing features. Some models, such as CLIP, use l2 normalization, while others, such as VLMo and ALBEF, employ a softmax normalization function. A notable feature of contrastive learning is its relatively simple objective function, which scales well to large datasets. §.§ Visual Question Answering Some VL transformers use variants of the visual question answering task, where the model must choose or generate an answer given an image-question pair, as part of their pretraining regime. Some example image-question pairs are displayed in Figure <ref> as a visual aid. Of the models we reviewed, LXMERT, OFA and VL-T5 use visual question answering. LXMERT treats VQA as a classification task where the model must choose the correct answer from a fixed answer set. VL-T5 and OFA, on the other hand, leverage their decoder modules to simply generate the answer text given an (image, question) pair. §.§ Visual Grounding Visual grounding tasks are those that connect a specific visual object with the words or phrases that describe it. Reference resolution, identifying which object within an image a given phrase refers to, is an example of a visual grounding task. Publicly available datasets such as RefCOCO <cit.> can be used to benchmark reference resolution. Grounded captioning is the inverse task, where the model is given an image region and must generate a caption that accurately describes it. Visual grounding tasks are of particular interest because they provide direct, fine-grained connections between words and specific visual objects.
Three versatile, general-purpose VL models, OFA, X2-VLM and VL-T5, make use of visual grounding tasks. OFA uses both reference resolution and grounded captioning as proxy tasks. For reference resolution, it learns to generate a location that specifies the region position <x1, y1, x2, y2> based on the input of the image along with the instruction "Which region does the text x describe?" For the grounded captioning task, OFA learns to generate a description based on the input image and the instruction "What does the region describe? region: <x1, y1, x2, y2>". VL-T5 performs grounded captioning in essentially the same fashion. It performs reference resolution slightly differently, however. Additionally, the visually grounded transformers all make use of some form of visual grounding task. mDETR, for example, uses the same technique as DETR <cit.> to predict bounding boxes for visual objects within each image. It then assigns each predicted object a probability of being associated with each word in the referring phrase. This process is depicted in Figure <ref>. The other models designed for grounding tasks, such as DQ-DETR, Referring Transformer and UniTAB, use a variety of approaches, and the interested reader is encouraged to consult their source papers for more detail. §.§ Image Captioning Image captioning requires a model to generate a description of a provided image input. As a proxy task it differs from grounded captioning in that the caption must describe the entire image. Three of the models we reviewed, OFA, E2E-VLP and CoCa, use image captioning as a pretraining objective. In principle, the image captioning task is just the causal language modeling task that uses an image as context to condition the prediction of text tokens. CoCa, for instance, formally defines the task as minimizing the negative conditional log-likelihood L_cap = -∑_t=1^T log P_θ(y_t|y_<t, x) where T is the sequence length, y_t is the token being predicted, y_<t are the model's previous predictions and x are the associated image representations. §.§ Prefix Language Modeling Prefix language modeling can be seen as a hybrid between traditional causal language modeling and masked language modeling. Three of the models we reviewed, DaVinci, mPLUG and SimVLM, make use of this training objective. SimVLM, for instance, formulates prefix modeling in the following way: a given sequence of tokens is split at a randomly selected position T_p. The sequence of tokens 𝐱_<T_p is denoted as the prefix and 𝐱_≥ T_p as the suffix. Bidirectional attention is then applied to the prefix sequence and autoregressive language modeling is applied to the suffix sequence. In both the SimVLM and mPLUG models, the prefix sequence is chosen in such a way that it always contains all of the image tokens and the suffix consists entirely of text. The DaVinci model uses a similar prefix language modeling task and also extends the notion to a prefix image modeling task. Here the model is presented with a prefix consisting of a complete sequence of text tokens and a partial sequence of image tokens. The model must then restore the image tokens in the suffix given the prefix and its previous output. The task is depicted in Figure <ref>. §.§ Other Objectives The previous subsections cover most of the pretraining objectives used by the models reviewed for this paper. However, there are a few other noteworthy examples that bear mentioning before we close this section. PixelBERT takes a random sample of its pixel grid features as a form of pretraining regularization.
This technique, of course, can only be employed by models that, like PixelBERT, use grid features. Finally, the VLMo model uses a stage-wise pretraining procedure. The VLMo model is a gated mixture-of-experts network consisting of vision, language and multi-modal sub-networks. It first trains the vision expert using BEIT-style <cit.> masked image modeling while the language and multi-modal weights are frozen. Then it proceeds to train the language expert with text-only masked language modeling while the vision weights are frozen. Finally, the full model is trained with an image-text matching objective. This more or less exhausts the approaches to pretraining used in VL transformers, and we have summarized most of the meaningful differences among the various VL transformers. Several models, such as UNIMO, employ unimodal training tasks that deal exclusively with language or vision, without freezing parts of the model as is done in VLMo. § DOWNSTREAM CAPABILITIES In principle, most of the models that we've discussed can be adapted to almost any given VL task with the proper adjustments to model architecture and a fine-tuning regime. Many models, however, were explicitly designed and tested for certain VL capabilities. Dual encoders, for instance, are especially well suited to alignment tasks such as text-to-image retrieval. The grounded transformers, e.g. mDETR or Referring Transformer, were trained on and extensively evaluated on visual grounding tasks. In this section we briefly cover the range of VL tasks that the creators of the models we reviewed pretrained, zero-shot evaluated, or fine-tuned their models on. In the process, we will take the opportunity to reference some of the major benchmarks used for each type of task. §.§ VL-Alignment We define vision language alignment tasks as those in which, given an input from one modality, a model must correctly relate it to a set of inputs from the other modality. Retrieval tasks are the canonical examples of alignment tasks. For example, in text-to-image retrieval, a model is given a text description and must select and rank a set of matching images. MSCOCO <cit.> and Flickr30K <cit.> are two publicly available datasets that are often used as benchmarks. The vast majority of models we have reviewed have been evaluated on at least one retrieval task. Dual encoder models such as CLIP, ALIGN and LiT, however, are designed for VL-alignment tasks. Because image representations for these encoders can be cached, the simple nature of the models allows for quick scoring of large sets of image-text pairs. This makes dual encoders ideal for alignment tasks such as image search. §.§ VL-Understanding In vision language understanding tasks a model must correctly classify the relationship between a given image-text pair. VL understanding tasks include popular benchmarks such as VQA <cit.>, where a model must choose the correct answer to a question about an image, and NLVR2 <cit.>, where a model must determine whether a statement about a pair of images is true or false. Example image-text pairs from the NLVR2 dataset are shown in Figure <ref>. All of the fusion encoders that we've discussed, both the single-tower and two-tower variations, are fine-tuned and benchmarked on VL-understanding tasks. Encoder-decoder models are also well tested on understanding tasks, though often they recast them as text-generation tasks. OFA, for instance, performs VQA by providing the model with an image and question and allowing it to generate the correct answer.
This approach allows some encoder-decoder models to perform multiple types of VL-understanding tasks without the architectural changes encoder-only models usually require. With the exception of Florence, most dual encoders are not well tested on understanding tasks. §.§ VL-Text Generation Vision language text generation tasks are those such as image captioning, where a model is presented with an image or an image-text pair and must generate an appropriate sequence of text describing the image. MSCOCO Captions <cit.> and nocaps <cit.> are two common image captioning benchmarks. The models best suited to tasks of this nature are those that contain a decoder module. Encoder-decoder models such as CoCa, SimVLM, mPLUG, OFA, DaVinci, OmniVL and GIT are all evaluated on text generation tasks. Despite not having a decoder that is optimized for text generation, the encoder-only models BEiT-3, OSCAR, VinVL, X2-VLM and UNIMO were evaluated and perform well on image captioning tasks. None of the visually grounded transformers we reviewed were explicitly evaluated on VL text generation tasks, though in principle they could be adapted to the task without too much difficulty. §.§ Visual Grounding, Grounded Captioning and Object Detection Visual grounding tasks are those which require a model to match words or phrases to the distinct visual objects they describe. RefCOCO <cit.> is the premier dataset for benchmarking visual grounding tasks. Grounded captioning, accurately describing a specific object within an image, and visual object detection are related tasks. The visually grounded transformers mDETR, DQ-DETR, KD-VLP, Referring Transformer and UniTAB were all designed, trained and evaluated specifically for these types of grounded modeling. The encoder-decoder models OFA and VL-T5 are also pretrained and evaluated on grounding-related tasks. And though not pretrained on any grounding-related tasks, the encoder-only models ALBEF, BEiT-3, LXMERT, UNITER, ViLBERT, VisualBERT and X2-VLM and the encoder-decoder models BLIP-2, E2E-VLP and mPLUG were all evaluated on visual grounding or object detection tasks. §.§ Image Generation In text-to-image generation, such as that performed by DALL-E <cit.>, a model provided with a text description must output an appropriate image. Most of the models discussed in this paper are geared toward classifying or generating text related to images. Two models, however, OFA and DaVinci, are capable of generating images in addition to the other sets of tasks described above. An image generated by OFA with the text that prompted it is displayed in Figure <ref>. §.§ Video Tasks Though video-based models have not been the focus of this paper, it is worth noting that a number of models can perform video retrieval and understanding tasks in addition to the tasks previously described. Models such as CoCa, Florence, GIT and OmniVL were all designed and trained for video tasks and are evaluated on a variety of video benchmarks. §.§ Unimodal Tasks Several models are also trained on and evaluated on vision-only or text-only tasks. Dual encoder models such as CLIP and ALIGN are easily adapted to pure computer vision tasks such as image classification and are all evaluated on such tasks. Many of the flexible encoder-decoder models are also evaluated on image classification. These models include BLIP-2, CoCa, DaVinci, FLAVA, Flamingo, GIT, OFA, OmniVL and PaLI.
Surprisingly, of the fusion encoder models that we reviewed, UNIMO is the only one that was specifically trained and evaluated on unimodal tasks. Far fewer models have been evaluated on pure NLP tasks such as the GLUE benchmark <cit.>. Of those reviewed for this paper, only UNIMO and OFA were explicitly benchmarked on such pure NLP tasks. § PRETRAINING DATA In this section we address the subjects of the source and size of the various pretraining datasets used to train the models we've discussed. Though not intrinsically related to the models, the source and amount of pretraining data have a tremendous impact on the models' downstream performance. The majority of models are pretrained using a small set of publicly available vision language datasets which are briefly described in the subsection immediately below. In the following subsection we provide a similar description of the several models that make use of proprietary datasets. Finally, we end with a subsection devoted to the various sizes of the pretraining datasets used. §.§ Public Data Sources The following public datasets are used, alone or in combination with other datasets, and form the bulk of the pretraining data for the models that we've discussed. Two of them, MSCOCO and Visual Genome, were annotated by human subjects. The remainder were collected and filtered from the web and are in general much larger in scale, but are noisier. §.§.§ Human Annotated Datasets MSCOCO Microsoft's Common Objects in Context (MSCOCO) is among the most widely used datasets for VL tasks. The dataset was introduced by <cit.> and was originally intended for object recognition tasks. Overall it consists of a total of 2.5 million labeled object instances in 328k unique images. Of these, over 164,000 images contain up to 5 independent, user-generated sentences describing the scene. The training split of MSCOCO contains 118k images and their annotations, though most model creators remove some images to avoid data leakage to downstream tasks. ViLT, for instance, uses a total of 113k images with 5 captions each for a total of 567k image-text pairs. The annotations were all produced using workers via Amazon's Mechanical Turk service. MSCOCO is publicly available and is used for pretraining, generally in conjunction with other datasets, by a number of the models we reviewed, including ViLT, METER, mDETR and many others. Visual Genome The Visual Genome dataset <cit.> consists of only 108k unique images. The images are from the intersection of MSCOCO and YFCC100M <cit.>. The set of human-derived annotations, also generated using Amazon Mechanical Turk, is however much richer than that of the MSCOCO dataset. Each image includes a large number of detailed descriptions of various image regions, an extensive set of segmented objects and their labels, as well as a scene graph relating the various objects. Because each image has 50 descriptions of image regions, it can provide more than 5 million image-text pairs for VL pretraining. An example of an image from Visual Genome with some examples of bounding boxes, referring expressions and part of its scene graph is shown in Figure <ref>. Essentially all of the models that we've reviewed that are trained with publicly available data make use of Visual Genome as a pretraining dataset. In traditional computer vision modeling, it is also among the most popular datasets for training object detectors.
§.§.§ Web Sourced Datasets SBU Captions The Stony Brook University (SBU) Captions dataset was introduced in <cit.> and was one of the first web-scale vision language datasets produced. The images and their associated text were first extracted from Flickr. Then the authors use a matching process to find additional images without associated text that match those previously collected. Matched photographs are then paired with appropriate text descriptions from the dataset. The process results in a dataset of around 1 million image-text pairs. The SBU Captions dataset is used by a number of models in pretraining. It is generally used in conjunction with one or both of the Conceptual Captions sets described below. Conceptual Captions The Conceptual Captions dataset was introduced in <cit.>. The data was gathered by crawling a large number of English-language websites and filtering for images with accompanying alt-text. The alt-text is then filtered and cleaned, resulting in a concise caption to describe each image. The original Conceptual Captions dataset contains three million image-text pairs and is often referred to as CC3M. Some example images, alt-texts and captions from the CC3M set are shown in Figure <ref>. The Conceptual 12M dataset was introduced in <cit.>. The authors used a similar collection process as <cit.> but were able to dramatically increase the size of the resulting dataset by relaxing the filtering criteria. This increase in scale, however, comes at the expense of a much noisier dataset. LAION-400M The LAION-400M dataset <cit.> is a massive collection of images and associated alt-text sourced from the web. The images are filtered for explicit content and caption length. Then a similarity score between each image and its associated text is computed using CLIP; if the score falls below a certain threshold, the pair is discarded. In this way, the creators produce a dataset of over 400 million image-text pairs. §.§ Proprietary Datasets ALIGN The creators of ALIGN <cit.> constructed a proprietary dataset consisting of 1.8 billion image-text pairs using the same process that <cit.> used in creating the CC3M dataset. The massive size of the dataset stems from the fact that its creators chose to relax most of the cleaning standards used to create the Conceptual Captions set. Though the resulting data is extremely noisy, the dataset's creators argue that the sheer amount of data tends to overcome the inherent noise. Example image-text pairs from the ALIGN set are shown in Figure <ref>. CLIP The CLIP model was trained on a proprietary dataset consisting of 400 million image-text pairs. The model's creators don't give much detail on their collection process, other than saying "we constructed a new dataset of 400 million (image, text) pairs collected from a variety of publicly available sources on the Internet." Flamingo In addition to using the dataset from the ALIGN model <cit.>, the creators of Flamingo also make use of two datasets of their own creation. The first of these is called LTIP (Long Text and Image Pairs) and it consists of 312 million image-text pairs. The authors also curate a custom video-text dataset with 27 million examples. LiT The creators of LiT created a dataset using essentially the same process as ALIGN; however, they further relaxed the text-filtering standards. The result is a truly massive dataset consisting of 4 billion image-text pairs.
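The similarity-threshold filtering used to build LAION-400M (and, more loosely, the ALIGN and LiT corpora) can be illustrated in a few lines. The sketch below assumes that image and text embeddings have already been produced by some CLIP-style dual encoder; the embedding dimension, the random stand-in data and the 0.3 threshold are illustrative assumptions rather than the exact LAION pipeline.

```python
import numpy as np

def filter_pairs(image_embs, text_embs, threshold=0.3):
    """Keep only the image-text pairs whose cosine similarity clears a threshold.

    image_embs, text_embs: arrays of shape (N, d); row i of each array is one
    candidate pair, embedded by a pretrained dual encoder.
    """
    # L2-normalize so that the row-wise dot product equals cosine similarity.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = np.sum(image_embs * text_embs, axis=1)  # one similarity score per pair
    return sims >= threshold, sims                 # boolean keep-mask and raw scores

# Random stand-ins for embeddings; in practice these come from the encoder.
rng = np.random.default_rng(0)
keep, scores = filter_pairs(rng.normal(size=(5, 512)), rng.normal(size=(5, 512)), threshold=0.0)
print(keep, scores.round(3))
```

Filtering of this kind is cheap relative to pretraining, which is one reason web-scale corpora can be grown or shrunk simply by relaxing or tightening the threshold.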
§.§ Data Sizes In addition to the various sources of pretraining data, we also consider the size of the pretraining dataset used. The sizes of the pretraining datasets range from that of VisualBERT, which uses only the 567k image-text pairs from COCO, to the 4 billion image-text pairs used to train the LiT model. For convenience we categorize pretraining dataset sizes as small, medium and large. Because we do not currently have the type of extensive meta-analysis studies which formally analyze the effects of pretraining data quantity, these categories are somewhat arbitrary. Said categories are, however, useful in describing distinctions in pretraining data quantities and we begin with models using small datasets, which we define as 10 million or fewer image-text pairs. Small: Fewer than 10M I-T Pairs The majority of the models discussed here were trained with 10 million or fewer image-text pairs. The combined number of I-T pairs in the popular datasets MSCOCO, Visual Genome, SBU Captions and CC3M is just under 10 million pairs, and a number of models, including BridgeTower, METER, UNIMO, UNITER, VILLA and ViLT, use exactly this combination. Many models use even fewer images and only make use of COCO and Visual Genome. E2E-VLP, KD-VLP and PixelBERT use only the 6 million pairs found in MSCOCO and Visual Genome. Because of the need for bounding box annotations in object detection and visual grounding, models that specialize in such tasks make use of the smaller human-annotated datasets designed for said purpose. mDETR, DQ-DETR and KD-VLP, for instance, use only 1.3 million image-text pairs from Visual Genome, MSCOCO and the Flickr30K Entities set. Medium: 10-25M I-T Pairs By adding the CC12M dataset to the corpus of MSCOCO, Visual Genome, SBU Captions and CC3M, we arrive at a dataset with over 21M image-text pairs. This set is used by ALBEF, BEiT-3, mPLUG, OFA and OmniVL for pretraining. Large: More than 25M I-T Pairs The dual encoder type models all use relatively large pretraining datasets. CLIP, ALIGN, LiT and Florence all use large web-sourced datasets of 400 million to 4 billion image-text pairs. Several of the large models, such as GIT, PaLI, FLAVA, Flamingo, CoCa and DaVinci, use large proprietary datasets. These last models are all quite large and likely need significant amounts of data to leverage the power of their scale. § ANALYSIS AND FUTURE DIRECTIONS In this final section, we draw some general conclusions about the models we've previously discussed. The first two sections are devoted to the overall strengths and limitations of the available VL transformer models. In the third section, we discuss some of the open questions and possible directions for future research. Finally, we conclude the paper with a concise statement on the state of VL transformers and their likely role in the immediate future of AI research. §.§ Strengths §.§.§ Generalized Representations One of the principal advantages of transfer learning from pretrained models is the incredible adaptability they offer. Prior to the recent advent of pretrained VL transformers, complex, task-specific models were required to reach state-of-the-art performance on popular VL benchmarks. Examples include DGAF <cit.>, for visual question answering, MAttNet <cit.>, designed to perform referring expression related tasks, and R2C <cit.>, for visual common sense reasoning.
Each such model is designed and trained for its end task and cannot be easily adapted to other tasks, whereas all of the fusion encoder and encoder-decoder models we have described can be easily transferred to other tasks with minimal changes in architecture and manageable training requirements. Though the pretraining cost of these models is very high (often excessively so), it is incurred only once, and the models can then be fine-tuned with significantly less compute and limited data. §.§.§ Ease of Understanding and Use In addition to being confined to a small set of use cases, task-specific VL models are often complex and difficult to comprehend. Consider the MAttNet model <cit.>, designed to perform the reference resolution task (i.e., given an expression describing an object in an image, the model must identify the object referred to). The MAttNet model decomposes word embeddings produced by an RNN model into modules describing different aspects of a phrase and uses an attention network to relate them to image regions produced by an R-CNN. Understanding and reproducing a model like this would require a great deal of background study and programming experience. Though transformers are themselves complex models, their ubiquity ensures that most deep learning practitioners have some familiarity with their workings. VL-Transformers also simplify the practical use of vision language modeling. After the costly pretraining phase has been completed, most VL-transformers can be adapted to new tasks by a straightforward fine-tuning process. Fine-tuning generally consists of removing the final layer of the model and replacing it with a randomly initialized linear layer with an activation appropriate to the task at hand, such as a softmax or sigmoid layer for classification problems. The weights are then updated as in a traditional supervised learning task. The LAVIS <cit.> vision language framework further simplifies the process. LAVIS is a Python library for VL modeling that abstracts much of the difficult programming associated with deploying VL models and includes models such as CLIP, BLIP and ALBEF. With efforts such as these, VL transformers offer the possibility of making VL tasks as accessible as NLP and CV tasks currently are with pretrained models from each field. §.§.§ Performance Finally, and likely most importantly, pretrained VL transformers simply perform better on most tasks than task-specific models. As appealing as their adaptability and simplicity are, pretrained transformers would not be so popular if they were routinely outperformed by other models. Just as it did in NLP, the transformer architecture has spurred huge advances in the performance of VL models. Leaderboards for popular VL benchmarks as diverse as VQA and referring expression comprehension with RefCOCO are now dominated by transformer models [https://eval.ai/web/challenges/challenge-page/830/leaderboard/2278, https://paperswithcode.com/sota/referring-expression-comprehension-on-refcoco]. §.§ Limitations and Open Questions §.§.§ Pretraining Data and Compute The tremendous strengths of pretrained transformers come at the cost of huge pretraining data requirements and enormous compute costs. The excessive compute costs arise from the enormous absolute size of VL-Transformers. At the small end are early encoder-only models such as UNITER-Base, which contains 83M parameters. Even these "small" models are quite large and require enormous compute resources to pretrain or fine-tune.
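In contrast to pretraining, the fine-tuning recipe described in the Ease of Understanding and Use subsection above is structurally simple. The following is a minimal, generic PyTorch sketch of that recipe (replace the final layer with a randomly initialized task head and run a standard supervised loop); the encoder interface and hyper-parameters here are stand-in assumptions, not the API of any particular model or library.

```python
import torch
import torch.nn as nn

class VLClassifier(nn.Module):
    """Wrap a pretrained VL encoder with a fresh, randomly initialized task head."""

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                          # pretrained fusion encoder (stand-in)
        self.head = nn.Linear(hidden_dim, num_classes)  # new task-specific layer

    def forward(self, image, text):
        # Assumes the encoder returns a pooled multimodal representation (e.g., its [CLS] state).
        pooled = self.encoder(image, text)
        return self.head(pooled)  # logits; CrossEntropyLoss supplies the softmax for classification

def finetune(model, loader, lr=2e-5, epochs=3):
    """Standard supervised fine-tuning loop over (image, text, label) batches."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, text, label in loader:
            opt.zero_grad()
            loss = loss_fn(model(image, text), label)
            loss.backward()
            opt.step()
```

The expensive part is therefore the pretraining that produces the encoder, not the adaptation step itself.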
This compute problem is compounded during model development and research since multiple models are usually trained to test various configurations and hyper-parameters and to ensure that results are reproducible. As part of a meta-analysis experiment for VL transformers, <cit.> created a controlled setup for analyzing model performance. They estimated that training a single VL transformer 10 times for 4 downstream tasks required a 4-GPU machine on AWS for two months, costing around $6,000 at the time they published the paper in 2021. On the other end of the spectrum are models based on large language models, such as Flamingo, which contains 80B parameters. Models this size require enormous compute simply for the purpose of inference, to say nothing of training. A related problem that transformers face is the amount of pretraining data that they require to achieve strong results. Pretraining data requirements for VL transformers range from that of models like ViLBERT, which uses 3 million image-text pairs from the Conceptual Captions dataset, to GIT-2, which uses a whopping 12.9B image-text pairs. Simply storing and processing this many image-text pairs is a cumbersome process. The need for massive datasets is aggravated by the fact that image-text data is difficult to produce. In order to be useful, the text data must be somehow related to the image it is paired with. Creating datasets of this kind often requires human annotation, though such sets are generally quite small given the expense involved. Web-crawled datasets, such as the 1.8B image-text pairs used to train ALIGN, can be obtained using images and their associated alt-text, though these data are much noisier than human-annotated data. §.§.§ Pretraining Tasks Independent of the amount of data and compute required to pretrain VL models, it is difficult to find training tasks that explicitly align vision and language and to determine how much they contribute to performance. Extending the masked language modeling (or traditional language modeling) task to include visual elements is one strategy pursued by most of the models we have discussed in this paper. Yet standard NLP models such as BERT are already very good at language modeling without the use of visual information. It is difficult, then, to determine how much visual feature representations actually contribute to a given task. This is especially true for models such as ViLBERT, where the language processing stream is initialized with the pretrained weights from the standard BERT NLP model <cit.>. To explicitly model vision and language interaction, many of the models we have discussed use the image-text matching task we described in Section <ref>. Though this task has the advantage of requiring that vision and language interaction be modeled, it does not require the model to make connections between the meaning of words and the objects they actually describe. Furthermore, the image-text matching objective can be viewed as an extension of the next sentence prediction task used in the training of BERT. Subsequent work, such as <cit.>, showed that the next sentence objective can be eliminated without hurting the model's performance. It is worth investigating whether such a straightforward binary classification likewise provides only marginal contributions to the VL pretraining process. A variety of models do make use of more involved pretraining tasks. LXMERT, OFA and VL-T5, for instance, all make use of a visual question answering task.
Visual grounding tasks, matching objects in images to textual descriptions, are used by OFA, VL-T5, mDETR, X2-VLM and several other models. Pretraining tasks of this nature more explicitly align vision and language and could possibly prove superior as proxy tasks. Unfortunately, tasks of this nature often require specialized datasets. Visual grounding tasks, for instance, require the type of annotations found in RefCOCO. Said annotations are expensive to produce, and the resulting dataset is very small. This poses an acute problem for pretraining data-hungry transformers. Furthermore, to date, there is no detailed model development process or meta-analysis showing to what degree pretraining regimes such as these affect downstream performance. Until such data is available in the literature, it will remain an open question what a maximally effective vision language pretraining regime would entail. §.§.§ Visual Embeddings It is still an open question as to which visual embedding strategy is best and under what circumstances. Many early models, ViLBERT and UNITER among them, made use of the region features described in Section <ref>. These features were a popular choice of visual embedding because they can be produced by out-of-the-box R-CNN models and directly fed into VL models, yet they have several drawbacks. Firstly, region features are confined to the object categories on which the object detection model was trained. This places an inherent limit on the types of visual information the model can learn. Secondly, region features are usually denoted by rectangular bounding boxes and don't account well for the shapes of the objects they contain or their relationship to one another. Finally, CNN-based object detection models consume a great deal of compute and represent a serious bottleneck in the modeling process. The creators of ViLT <cit.> performed an analysis of the inference time for several of the models we discussed. Using region features, fusion encoders like the UNITER model spend the overwhelming majority of their inference time creating and operating on visual embeddings. Beyond creating a computational bottleneck, spending more time on data preprocessing than modeling is simply not a desirable design feature. The grid features of PixelBERT <cit.> and SOHO <cit.> have the advantages of reducing visual processing requirements and removing the theoretical ceiling on visual learning imposed by region features. They also offer a dense set of image features for each image input. But a model like PixelBERT still spends more time on visual preprocessing than on VL modeling <cit.>. Furthermore, a separate CNN module is required to produce grid features, adding additional complexity to the model. Finally, models such as ViLT that use patch embeddings spend a negligible percentage of inference time on visual processing. While this appears to solve the issue of excessive compute requirements for processing visual features, there is some concern that patch embeddings provide inferior representations to other approaches. ViLT, the first model in the literature to make use of patch embeddings, performed worse on most VL understanding tasks than similar models using region features. Later models that use patch embeddings, such as METER and BridgeTower, however, perform quite well on these tasks.
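For concreteness, the patch embedding strategy amounts to cutting the image into fixed-size patches and linearly projecting each patch into the transformer's token space, with no detector or CNN backbone in front of the fusion encoder. The sketch below is a minimal PyTorch version; the 224-pixel input and 32-pixel patches echo ViLT's ViT-B/32 configuration, but the exact sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """ViT-style patch embedding: split an image into patches and project each one linearly."""

    def __init__(self, img_size=224, patch_size=32, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch and applying one linear map.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (batch, 3, H, W)
        x = self.proj(x)                     # (batch, embed_dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)  # (batch, num_patches, embed_dim) token sequence

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 49, 768])
```

Because this is a single projection rather than a full detection pipeline, visual preprocessing all but disappears from the inference budget, which is the efficiency advantage discussed above.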
A potential deficit of visual information is of special concern for object identification tasks, where patch embeddings don't have access to the explicit object type information included in region feature representations. Almost all of the models that made use of visual grounding tasks in pretraining, e.g. mDETR, use grid or region features. Without more detailed meta-analysis of VL transformers, it is still difficult to determine the best approach to visual embeddings. §.§ Future Directions §.§.§ Data Generation and Meta Analysis Though not directly tied to the process of model development, the generation of more high-quality vision language datasets is a clear need in the current research environment. There is currently a deficit of publicly available pretraining data as well as of quality datasets for benchmarking VL models. The vast majority of models reviewed for this paper use some combination of MSCOCO, Visual Genome, SBU Captions and one or both of the Conceptual Captions datasets for pretraining. The addition of more high-quality pretraining data has been very beneficial to transformers in other domains <cit.> and would almost certainly improve the performance of most of the models we've described. Similarly, we also see a paucity of data on which to benchmark model performance. The models we've discussed here are all evaluated on a narrow set of downstream tasks such as VQA <cit.>, NLVR2 <cit.> and the COCO cross-modal retrieval task. These VL datasets must have images paired with textual components that accurately describe, question or reference them, and they are resource-intensive to create, usually requiring human annotators. That being said, crowd-sourcing services such as Amazon Mechanical Turk greatly simplify the process of rote tasks such as annotating images. Indeed, several important datasets in the VL domain, such as MSCOCO <cit.> and VCR <cit.>, were created using its services. The process is then reduced to finding images, creating a script for anonymous workers to follow for annotation and compensating the workers. Though the need for human annotation makes the cost of creating such datasets relatively high, we have already noted that creating and testing the models discussed here were resource-intensive tasks. Well-resourced institutions interested in advancing the field would surely benefit as much from more high-quality data with which to evaluate models as from creating additional variations of VL transformers. Another obvious priority is understanding the performance of the various models that have already been created. One of the few published papers attempting a meta-analysis of current VL transformer models is <cit.>. This analysis sheds some light on how some of the models that we've discussed perform, yet it leaves open important questions such as whether dual or single stream fusion encoders offer superior performance, whether or not the patch embedding strategy is a viable solution to the problems posed by region features, and how effective the pretraining objectives currently in use are. Answering questions of this nature would require thorough controlled studies in which potential confounds such as pretraining data size (and type) and training hyper-parameters are held constant, and each model configuration is trained multiple times and evaluated on a broad range of downstream tasks. Studies of this nature would be a significant undertaking with such large models. As with data generation, however, investments in meta-analysis should surely take priority over model development.
Simply put, model development has outpaced the supply of data and what we know about how these models perform. If dual-tower architectures cannot be shown to improve performance, it is hard to justify the additional parameters of a second transformer encoder stack. Similarly, if patch embeddings can perform on par with region features, they represent a better visual embedding strategy based on their huge advantage in efficiency. Along with controlled experimentation, a large and diverse body of data for model pretraining and evaluation is the key to resolving these open questions and guiding future research. §.§.§ Alternative Pretraining Tasks The pretraining regimes that we've covered in this paper are mostly straightforward extensions of the pretraining objectives used in NLP models such as BERT. The choice to extend BERT's training objectives was a natural place to start; however, as we have previously described, there seems to be a great deal of room for exploring new pretraining objectives that create deeper and more explicit interactions between the vision and language modalities. In particular, we believe that VL transformers would benefit from a more complex and detailed set of training objectives such as visual grounding tasks and visual question answering. In the course of developing the METER model, <cit.> conducted an extensive study of various model architectures in search of the most effective design. We propose that a similar approach testing novel pretraining tasks would also be beneficial and could possibly yield state-of-the-art results. In addition to systematically testing the pretraining objectives that we have previously introduced, there are other approaches worth investigating. Position-guided text prompting <cit.> is an example of such a task. It splits an image into patches, identifies various objects in each patch and produces a series of fill-in-the-blank tasks based on said object information. Crucially, the authors designed it to be compatible (at least theoretically so) with any type of VL transformer architecture. A thorough investigation of such pretraining tasks would almost surely move the field of VL modeling forward. §.§.§ Additional Modalities The extraordinary versatility of transformers recommends applying them to additional modalities beyond just language and vision. An evolving body of research is attempting to do just that. Models such as ONE-PEACE <cit.> and VALOR <cit.> model vision and language but also incorporate the audio modality using techniques that are quite similar to the ones discussed in this paper. Remarkably, transformers can also be applied to even more sensory modalities. PaLM-E <cit.> is an embodied vision language model that is capable of performing a variety of robot manipulation tasks. The model can additionally perform the types of vision language tasks referenced throughout this paper as well as language-only tasks. In addition to the obvious practical applications that these more general models might have, they offer a possible path toward resolving problems like the symbol grounding problem described in <cit.>. Language models that incorporate both audio and video modalities are one step closer to the type of language grounding that underlies human speech. §.§ Concluding Remarks Pretrained VL transformer models are relatively new creations, and there is still a great deal of research to be done to determine how they should be designed and trained.
It is not yet clear which of the several approaches we've discussed are superior. This includes basic questions such as how to create visual embeddings, whether to use single-stream or dual-tower architectures, and which pretraining objectives to use and when. Pretrained transformers also come at the cost of huge pretraining data requirements, and this is of particular concern in the VL domain, where suitable data is hard to find and expensive to produce. Pretraining these models would probably also benefit from a new set of training tasks that explicitly align language and vision in deep and meaningful ways. With all of that said, the models discussed here do appear to considerably advance the state-of-the-art on the most common VL benchmarks currently available. Moreover, they do this via generalized models that can be easily adapted to a variety of tasks. VL transformers also offer the possibility of being extended to other sensory modalities. In general, they represent a remarkable step forward from previous VL models. Connecting symbols such as words to their real-world analogs is at the core of intelligence <cit.> and the models we've discussed offer promising paths toward achieving this goal and significantly improving NLP and AI in general. Given these remarkable strengths, pretrained VL transformers will likely play a key role in the near future of modeling tasks where vision and language intersect.
http://arxiv.org/abs/2307.00937v2
20230703112411
A Biomimetic Fingerprint for Robotic Tactile Sensing
[ "Oscar Alberto Juiña Quilachamín", "Nicolás Navarro-Guerrero" ]
cs.RO
[ "cs.RO" ]
A Biomimetic Fingerprint for Robotic Tactile Sensing Oscar Alberto Juiña Quilachamin Faculty of Mechatronic & Micro-Mechatronic Systems Hochschule Karlsruhe (HKA) Karlsruhe, Germany [email protected] Nicolás Navarro-Guerrero Deutsches Forschungszentrum für Künstliche Intelligenz GmbH and, L3S Research Center, Leibniz Universität Hannover Hannover, Germany <https://orcid.org/0000-0003-1164-5579> August 1, 2023 ===================================================================================================================================================================================================================================================================================================================================================================== Tactile sensors have been developed since the early '70s and have greatly improved, but there are still no widely adopted solutions. Various technologies, such as capacitive, piezoelectric, piezoresistive, optical, and magnetic, are used in haptic sensing. However, most sensors are not mechanically robust for many applications and cannot cope well with curved or sizeable surfaces. Aiming to address this problem, we present a 3D-printed fingerprint pattern to enhance the body-borne vibration signal for dynamic tactile feedback. The 3D-printed fingerprint patterns were designed and tested for an RH8D Adult size Robot Hand. The patterns significantly increased the signal's power to over 11 times the baseline. A public haptic dataset including 52 objects of several materials was created using the best fingerprint pattern and material. § INTRODUCTION Tactile perception is a pivotal technology to enable applications such as dexterous robotic grasping, smart prostheses, and surgical robots <cit.>. Tactile sensors are mainly designed to mimic mechanoreceptors. Some of the objectives of tactile sensors are to determine the location, shape and intensity of contacts <cit.>. Instantaneous pressure or force can be used to determine the force and multiple contact points. In contrast, dynamic tactile sensations are better suited to extract information about texture as well as rolling or slipping <cit.>. Several technologies have been studied over the years <cit.>, including capacitive (e.g., <cit.>), piezoelectric (e.g., <cit.>), piezoresistive (e.g., <cit.>), optical (e.g., <cit.>), fiber optics (e.g., <cit.>), and magnetic (e.g., <cit.>). While experimental sensors cover many technologies, commercial solutions are less diverse. The most common commercial sensors are those based on optics/cameras (e.g., the PapillArray by Contactile or DIGIT by GelSight), magnetic (e.g., uSkin sensor by Xela Robotics) or piezoelectric (e.g., FTS Tactile Pressure Sensors by Seed Robotics and BioTac® sensor by SynTouch®). For more information about haptic perception see <cit.>. Inspired by some of the primary mechanoreceptors in the human skin (Fig. <ref>) and existing related work, we suggest exploiting body-borne vibrations for dynamic tactile perception, which could be used along with other tactile sensors. In particular, we propose 3D-printed fingerprints to increase the body-borne vibrations and signal-to-noise ratio generated when interacting with objects and surfaces. In this version, we used a symmetric beam array covering most of the inner side of the hand, as shown in Fig. <ref>. We optimized the beams' geometry to increase the vibration signals' spectral magnitude within the audible range, see Section <ref>. 
The vibrations are measured with contact microphones mounted on either side of the robot's palm and one inside the palm. Our results show that the optimized 3D-printed fingerprint patterns achieve over 11 times higher spectral magnitude than non-optimized fingerprints. However, additional research is needed to design patterns capable of dealing with various object materials and textures. Moreover, research on the effect of wear, angle of incidence, force and velocity of the beam is also needed. In a future iteration, the microphones would be mounted inside the robot and thus protected from the elements, collisions, and other perturbations. Additionally, the 3D-printed fingerprint patterns are easy to manufacture and can cover sizeable and curved surfaces. The paper is organized as follows. Section <ref> presents the related work. Section <ref> outlines the system requirements and constraints. Section <ref> describes the theoretical analysis and fingerprint design. Section <ref> presents the results. Section <ref> discusses the results, and Section <ref> presents conclusions and future work. § RELATED WORK Over three decades ago, vibrotactile sensing for robot perception was suggested (e.g., <cit.>). One of the first sensors consisted of a core enveloped by polyurethane foam and a stiffer textured outer skin of rubber. An accelerometer was mounted on the inner side of the skin to measure the vibration produced when the finger made or broke contact with an object, an object was lifted or placed on a tabletop, or slippage occurred <cit.>. The outer skin was textured because very smooth rubber has a substantial coefficient of friction leading to a stick-slip motion of the finger, which, in turn, made manipulation more complicated and could overload the accelerometer <cit.>. The textured skin consisted of parallel ridges a few hundred microns wide and up to a few hundred microns high. Such a pattern causes a “catch and snap back” behaviour, leading to local accelerations of the skin, which can be easily detected by the accelerometer <cit.>. Another version with a similar construction and sensing principle, based on piezoelectric film strips, which can transduce stress into charge, was later suggested <cit.>. The transduction function between stress and charge changes depending on the direction of the stress but is more significant in the transverse direction of the film <cit.>. Hence, the tested sensor consists of 4 strips oriented along the direction of the expected slip for maximum sensitivity <cit.>. Additionally, the skin was textured with protruding nibs to maximize the stress. More recently, microphones have been suggested as tactile sensors. For instance, Edwards et al. <cit.> presented a setup of a finger with latex skin and a microphone mounted roughly at the position of the fingernail. The untextured latex skin could produce and transfer enough vibrations to the microphone when interacting with artificial symmetrical texture patterns. Similarly, Hughes et al. <cit.> suggested a soft, amorphous texture-sensitive skin based on a sensor network of microphones. The sensors are mounted 15 cm apart on a flexible neoprene rubber mesh and then embedded into silicone rubber textured with a 60-grit (ca. 0.25 mm) aluminium oxide sandpaper. The sensor spacing corresponds to the wavelength of sound at 250 Hz, which is considered the mid-frequency of the Pacinian corpuscle <cit.>. Similarly, the grit size used is comparable to the groove width of human fingerprints.
A silicone rubber with Young's modulus of 125 kPa was used; for reference, the human skin's modulus ranges from 420 to 850 kPa. The skin assembly was systematically tested using a vibration motor. Texture recognition and sound localization show satisfactory results. Yang et al. <cit.> suggested an array of condenser microphones as a large dynamic tactile sensor. The proposed system comprises a porous structured mesh, neoprene, and fabric to form a sensor's skin. The fabric's rough texture generates vibrations that are then measured by the microphones. The neoprene rubber keeps the fabric in place. At the same time, the mesh helps to keep the sensor's shape and transfer the vibrations from the skin to the microphones. Time Difference of Arrival (TDoA) was used to determine the point of contact, while a CNN was used to classify the touch type. The suggested solution can be arbitrarily large and cope well with curvatures and orientations <cit.>. Microphones have also been suggested for texture detection in prosthetics <cit.>. In particular, the vibrations a microphone captures while stroking materials of different textures are translated into electrotactile feedback. Despite using an off-the-shelf microphone without an artificial skin to enhance the vibrations, the setup conveys sufficient information to the human such that participants can discriminate textures with 85% accuracy <cit.>. Fingerprint-like textured skin has also been used with capacitive and piezoelectric sensors to improve static and dynamic tactile sensing, e.g., ridges <cit.> and nibs <cit.>. In particular, in Navaraj et al. <cit.>, the 3D-printed pattern included ridges of 500 μm width, 2 mm length and 500 μm thickness. The ridges' vertical separation was 1500 μm (>3 times larger than typically observed in human fingerprints), and their horizontal separation was 3000 μm <cit.>. The suggested setup reached a maximum accuracy of 99.45% <cit.>. Pestell et al. <cit.> suggested a biomimetic tactile sensor called the TacTip. The sensor aims to mimic human skin's shallow dermal and epidermal layers. The `epidermis' was made from a rubber-like material over a soft inner elastomer gel analogous to the `dermis'. These two materials are interdigitated in a mesh of ridges comprising stiff inner nodular pins that extend from the epidermis into the soft gel. Such a structure mechanically amplifies skin surface deformation into a lateral movement of the pins, which can be optically tracked and classified <cit.>. The same structure can be used to detect the induced vibration in the soft inner gel <cit.>. The fingerprint amplifies the vibrations, aiding texture perception. Moreover, the harmonic structure of induced vibrations seems to be speed-invariant <cit.>. As shown in this section, vibration, textured skins, and microphones for dynamic tactile sensing have been validated in different setups. However, the design decisions for the fingerprint pattern are not clear. In most cases, the aim is to resemble human fingerprints, disregarding the different mechanical properties of the artificial skin or transducers. Thus, we present an approach to optimize 3D-printed fingerprints for dynamic tactile sensing to increase the vibration signal's spectral magnitude and improve the signal-to-noise ratio. We first describe the requirements and constraints and then continue with the theoretical analysis and design. § SYSTEM REQUIREMENTS AND CONSTRAINTS §.§ The Hand An RH8D robotic hand by Seed Robotics (https://www.seedrobotics.com/) was used, see Fig. <ref>.
The customization consists of the microphone holders on either side of the palm and one inside the palm, as well as replaceable palm and finger segments, marked in hues of red in Fig. <ref>. The manufacturer provides no detailed specifications for the force and speed of the hand. We estimated the maximum linear velocity and maximum force generated by the RH8D robot hand to be V̂ = 953.3 mm/s at 134.5 N mm and F̂ = 473.5 N mm at 270.7 mm/s, based on the maximum speed (78 rpm) and stall torque (10 N m) of the Dynamixel MX motor series and the cable pulley transmission of the RH8D hand. The estimated weight of each finger assembly is 10.9 g, and 8.9 g for the thumb. §.§ The 3D Printer We tested two kinds of 3D printer systems, i.e., Fused Deposition Modeling (FDM) and Stereolithography (SLA). We tested PLA (ρ = 1.17 to 1.24 g/cm^3 and E = 2641 MPa) and TPU (ρ = 1.22 g/cm^3 and E = 9 MPa) for the FDM system and the ST 45B resin (ρ = 1.20 g/cm^3 and E = 2000 MPa) for the SLA system. Although 3D printing allows for creating custom intricate shapes, the 3D printing technology and printer can restrict the sizes and shapes that can be created. In particular, we adhere to the following guidelines <cit.>: minimum beam width ≥ 0.4 mm for supported printing or ≥ 0.6 mm for unsupported printing. Holes should have a diameter ≥ 0.75 mm. §.§ The Sensors Harley Benton CM-1000 contact microphones were used to collect the sound propagated within the robot hand. The microphone is inexpensive and widely available, but no public datasheet exists. Thus, we first estimate some of its relevant characteristics. The measured frequency response of the microphones is shown in Fig. <ref> and the attenuation with respect to distance for the frequency of highest sensitivity is shown in Fig. <ref>. A minimum desirable amplitude threshold of -42 dB, corresponding to the mean value of the measured amplitude range, was selected. The target frequency ranges were defined based on the measured frequency response and minimum desirable amplitude threshold. The first range is between [3.2, 26] kHz with a peak at 9 kHz and another between [110, 280] kHz with a peak at 150 kHz. The lower frequency range was selected to lower the computational requirements of data sampling and processing. § THEORETICAL ANALYSIS AND FINGERPRINT DESIGN Inspired by the mechanoreceptors in the human skin and the existing work on vibrotactile sensors and microphones for dynamic tactile sensing, we propose 3D-printed fingerprints to increase the body-borne vibrations and signal-to-noise ratio generated when interacting with objects and surfaces. The vibrations are measured with contact microphones. Only one microphone is needed to differentiate material texture, while three or more microphones would allow for sound-source localization. The microphones can detect vibrations without a fingerprint pattern, which we use as our baseline condition. For the fingerprint pattern, we suggest an array of symmetric, homogeneously spaced beams covering most of the robot's hand, as shown in Fig. <ref>. We optimized the beams' length and cross-section to maximize the vibration signals' spectral magnitude within the audible range. We used the Euler-Bernoulli beam equation for free vibration to determine the dimensions of a "fixed-free" beam (see Figure <ref>) so that the beam's natural frequency is within the range of highest sensitivity of the microphones.
Equation <ref> represents the vibration of a beam <cit.>: EI ∂^4y/∂x^4(x,t) + ρ A ∂^2y/∂t^2(x,t) = 0, where y(x,t) represents the instantaneous position of a point within the beam at time t, E is the Young's modulus of the material in Pa, I is the area moment of inertia of the cross-section in m^4, ρ is the density of the material in kg/m^3, and A is the cross-sectional area in m^2. The natural frequency ω of the beam in Hertz can be computed as ω_0 = (β_n l)^2/(2π) √(EI/(ρ A l^4)), where l is the length of the beam. β_n represents the vibration modes and can be determined from the boundary conditions of the beam. In this study, we used n=1 for a "fixed-free" beam. Thus, β_n l = 1.875104 <cit.>. We analyzed symmetric cross-section shapes for the beams: a square, hexagon, and circle. The square shape produces the lowest natural frequency, followed by the hexagon, while the circle leads to the highest natural frequencies for comparably sized beams, see Fig. <ref> for an example. From Equation <ref>, it can also be inferred that solid beams lead to lower natural frequencies than hollow (lower density) beams. We verified the analysis of the peak resulting frequency on a small sample of 3D-printed square beams printed in ST 45B resin. Figure <ref> shows the result of 10 samples for beams of different widths and lengths. The two smallest cross-sectional areas match the expected natural frequency most closely. §.§ Final Design Based on the simulated behaviour and preliminary tests for PLA, TPU, and ST 45B resin, we determined that the PLA and ST 45B resin behave similarly. Thus, the same design can be used for both materials. However, TPU is a soft material requiring a beam of larger cross-sectional area to keep the length and natural frequency low. A solid square beam seems the most suitable shape for the cross-section because it leads to a low natural frequency for the impulse response without increasing beam length, in contrast to a cylindrical shape. Finally, considering the 3D printing guidelines <cit.> described in Section <ref>, we determine the range for the side and length of a solid square beam, summarized in Table <ref>. An additional constraint to consider for the final design is that the length of the beam must not restrict the motion of the fingers and must still allow the hand to close. The final dimensions of the different segment types used to collect the dataset are summarized in Table <ref>. The final designs are shown in Fig. <ref>. The ST 45B resin design includes a small border for all the segments, which is intended to minimize beams breaking under undesired interactions or extreme flexion. Those design features are more clearly visible in the render shown in Fig. <ref>. The beams are separated by a distance equal to the beam side. § RESULTS We tested the designs on the real robot hand with five objects: a porcelain apple, a glass bottle, a sponge, a wooden cylinder, and a steel beam. The test consisted of sliding the index finger over the object (Lateral Motion EP), see Fig. <ref>. Force control for the hand is performed using a numerical 12-bit coding. Unfortunately, no correspondence to a force unit can be found in the documentation. For our experiments, we used a force setting of 400, which we determined to be the lowest setting for which friction could be overcome for all objects and fingerprint materials tested. The microphones recorded the vibrations with a sampling rate of 500 kHz.
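As a quick numerical illustration of the natural-frequency formula above, the sketch below evaluates ω_0 for a fixed-free beam with a solid square cross-section, using the nominal ST 45B values quoted earlier (E = 2000 MPa, ρ = 1.20 g/cm^3). The 1 mm side and 8 mm length are illustrative assumptions only, not the dimensions reported in Table <ref>.

```python
import math

def natural_frequency(E, rho, side, length, beta_l=1.875104):
    """First natural frequency (Hz) of a fixed-free beam with a solid square cross-section.

    E in Pa, rho in kg/m^3, side and length in metres.
    Implements omega_0 = (beta_1*l)^2 / (2*pi) * sqrt(E*I / (rho*A*l^4)).
    """
    A = side ** 2          # cross-sectional area of the square beam
    I = side ** 4 / 12.0   # area moment of inertia of a square section about its centroid
    return (beta_l ** 2) / (2 * math.pi) * math.sqrt(E * I / (rho * A * length ** 4))

E_resin = 2000e6     # 2000 MPa expressed in Pa
rho_resin = 1200.0   # 1.20 g/cm^3 expressed in kg/m^3
print(f"{natural_frequency(E_resin, rho_resin, side=1e-3, length=8e-3):.0f} Hz")  # roughly 3.3 kHz
```

With these assumed dimensions the first mode lands near 3.3 kHz, i.e., at the lower edge of the [3.2, 26] kHz target band identified for the microphones.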
An example of the mean frequency response for ten repetitions of the Lateral Motion EP on a porcelain apple is shown in Fig. <ref>. The optimized patterns lead to higher amplitude vibrations than the default "robot skin" (baseline) within the first 5 kHz. The spectral amplitude diagram makes the quantification of the differences challenging. Thus, we computed the area under the curve for each test. The mean area for the baseline was used to normalize the areas of the different fingerprint materials such that in Fig. <ref> and Fig. <ref>, the y-axis represents how many times the amplitude increased or decreased with respect to the baseline. Each microphone is normalized separately. When interacting with softer/deformable objects such as a sponge, the proposed fingerprints produce a level of vibrations comparable to the baseline, see Fig. <ref>. Here the fingerprint of the more flexible material (TPU) leads to slightly higher vibrations than the harder fingerprints. In contrast, when interacting with more rigid objects such as a wooden stick, an increase of over 11 times the baseline can be reliably measured, see Figure <ref>. In this case, harder fingerprint materials lead to a considerably higher frequency response than both the baseline and the more flexible fingerprint material. §.§ Dataset Based on our results, we used the ST 45B resin fingerprint patterns to create a vibrotactile dataset consisting of pictures of each object and haptic data. A total of 52 objects were recorded. Five observations per object were collected, and the objects were reoriented each time. The haptic data consist of the vibrations generated by the fingerprint patterns, motor position information and current values. The object explorations include four exploration procedures: rubbing the object (Lateral Motion EP) with the middle and index finger with a force of 400 (12-bit coding), enclosing the object using a force of 300 (12-bit coding) for 2 seconds (Enclosure EP), and squeezing the object with a force of 400, 500, 600, and 700 (Pressure EP). Finally, the current delivered to the wrist is recorded (Unsupported Holding EP). The dataset, CAD files and demonstration videos can be found under doi: 10.6084/m9.figshare.21120982 (https://doi.org/10.6084/m9.figshare.21120982). § DISCUSSION The density and Young's modulus are proportional to the generated frequencies. Similarly, more flexible materials lead to lower frequencies. During our experiments, we noted that rigid fingerprints slid over the objects with hard surfaces, thus generating low vibrations. Using materials with a higher friction coefficient would be recommended in such cases. Alternatively, anisotropic patterns could be used to cope with a more extensive variety of interactions because no fingerprint material performed best across all objects. The mathematical modelling of a single beam can be verified in the 3D-printed counterpart. However, the measured frequencies on the real robot are lower than the calculated ones. The discrepancies between the simulated and actual measurements can partly be attributed to the theoretical analysis performed for a single beam and a single perturbation (impulse response). The hand's kinematics should be considered to improve the design further. Moreover, the fact that the obtained frequencies were lower than expected would allow for a shallower fingerprint pattern, which should be more robust. The dimensions of the beam were, in part, chosen to produce low frequencies. § CONCLUSIONS Vibrotactile sensing is a sound approach to address some of the challenges of tactile sensing.
The strategy has been tested in various applications, including tactile sensing, texture perception for prosthetics and social touch classification, as discussed in Section <ref>. However, no systematic approach for designing such artificial fingerprints has been suggested yet. In this article, we suggested an approach to optimizing 3D-printed fingerprint patterns based on the Euler-Bernoulli beam equation. The 3D-printed fingerprint material and geometrical dimensions were analytically determined to maximize the vibrations generated within the microphone's sensitive range. The optimized fingerprints can produce over 11 times higher amplitude than the baseline. Based on our results, the ST 45B resin achieves a higher frequency response across different object materials. Thus, this fingerprint design and material were used to create a public vibrotactile dataset including 52 objects. Although body-borne vibrations are limited to dynamic interaction and cannot provide force information, they can be complemented by kinesthetic information to estimate force or by other tactile sensing modalities. Despite these disadvantages, our 3D-printed fingerprint approach exploiting body-borne vibrations for tactile sensing offers several benefits. For instance, it is easy to manufacture, scalable, and can be designed for curved surfaces. Additionally, the sensor is robust since the active sensing elements can be mounted inside the robot. Future research would also include studying the effect of fingerprint patterns of different materials, shapes and separations to more effectively deal with a wide range of object materials, such as rigid or deformable objects. In the future, we would also like to study the effects of fingerprint wear and of interaction force, velocity and angle of incidence of the fingerprint pattern, as all these variables could impact the resulting vibration pattern. Another line of future research includes sound-source localization on the robot's body to infer single or multiple contact points. The provided dataset could be used to explore this line of research. § ACKNOWLEDGEMENT This work was supported by the European Commission under the Horizon 2020 framework program for Research and Innovation via the APRIL project (project number: 870142) and the VeryHuman project funded by the Federal Ministry of Education and Research (http://www.bmbf.de/en/) with grant no. 01IW20004.
http://arxiv.org/abs/2307.01353v1
20230703205902
A Diagram-Like Basis for the Multiset Partition Algebra
[ "Alexander Wilson" ]
math.RT
[ "math.RT", "math.CO", "05E10 (Primary) 20C30 (Secondary)" ]
Toward an accurate equation of state and B1-B2 phase boundary for magnesium oxide to TPa pressures and eV temperatures Miguel A. Morales August 1, 2023 ====================================================================================================================== Abstract There is a classical connection between the representation theory of the symmetric group and the general linear group called Schur-Weyl duality. Variations on this principle yield analogous connections between the symmetric group and other objects such as the partition algebra and more recently the multiset partition algebra. The partition algebra has a well-known basis indexed by graph-theoretic diagrams which allows the multiplication in the algebra to be understood visually as combinations of these diagrams. We construct an analogous basis for the multiset partition algebra called the diagram-like basis and use this basis to construct its irreducible representations and give a generating set. We also provide a change-of-basis from the orbit basis of the multiset partition algebra to this diagram-like basis which exhibits similarities to the analogous change of basis for the partition algebra. § INTRODUCTION Let V_n be an n-dimensional vector space, and write V_n^⊗ r for its rth tensor power. The general linear group GL_n acts on the tensor power diagonally, where a matrix M∈ GL_n acts on each tensor factor: M.(v_1⊗…⊗ v_r)=(Mv_1)⊗…⊗ (Mv_r). The symmetric group _r acts on the tensor power by permuting tensor factors: σ.(v_1⊗…⊗ v_r)=v_σ^-1(1)⊗…⊗ v_σ^-1(r). These two actions are mutual centralizers. That is, taking the endomorphisms of V_n^⊗ r which commute with one action recovers the other. This is an example of Schur-Weyl duality, and one consequence of such a duality is the decomposition V_n^⊗ r≅⊕_λW_GL_n^λ⊗ W__r^λ where for a group or algebra A, we write W_A^λ to represent an irreducible representation of A and the sum is over partitions λ of r. This establishes a correspondence between irreducible GL_n-modules and irreducible _r-modules and allows information to be passed between the representation theory of the two groups. Any situation in which two actions mutually centralize each other leads to an analogous decomposition. The study of these centralizer algebras is connected to long-standing questions in the representation theory of the symmetric group and GL_n including the Kronecker problem <cit.>, the restriction problem, and plethysm <cit.>. For another example, let V_n,k=(^n)^k be the space of n× k matrices and let ^r(V_n,k) be the space of polynomial forms on V_n,k. The group GL_n naturally acts on V_n,k by left-multiplication. This action extends to ^r(V_n,k) where M∈ GL_n acts on f(X)∈^r(V_n,k) by (M.f)(X)=f(M^-1X). The group GL_k can act analogously where N∈ GL_k acts by: (N.f)(X)=f(XN). In <cit.>, Roger Howe determined that these actions are mutual centralizers leading to a decomposition ^r(V_n,k)≅⊕_λ W_GL_n^λ⊗ W_GL_k^λ where the sum is over partitions λ of r of length at most min{n,k} (see e.g. Section 5.2.6 of <cit.> for details). In the 1990's, Martin <cit.> introduced the partition algebra P_r(n) as a generalization of the Temperley-Lieb algebra and the Potts model in statistical mechanics. In the case that n≥ 2r, the partition algebra is the algebra whose action centralizers the action of _n⊆ GL_n as the subgroup of permutation matrices acting on V_n^⊗ r <cit.>. From this perspective, there is a natural basis for P_r(n) called the orbit basis, but its product is complicated. 
A second basis called the diagram basis has a much simpler product in terms of graph-theoretic diagrams. The partition algebra, its generators, and its representations have been well-studied, and some major milestones are outlined in the following timeline: in 1994, the partition algebra was introduced with its orbit and diagram bases <cit.>; in 2005, presentations by generators and relations were given <cit.>; in 2018, the dimensions of the irreducible representations were found <cit.>; and in 2020, the irreducible representations were constructed <cit.>. A major motivation for studying the partition algebra is to understand its representations and use them to study objects in the representation theory of the symmetric group such as the Kronecker coefficients <cit.>. In <cit.>, the authors Orellana and Zabrocki restrict the GL_n action of Howe duality to the n× n permutation matrices to obtain the multiset partition algebra _r,k(n) and provide a basis analogous to the orbit basis for P_r(n). In <cit.> Orellana and Zabrocki use symmetric function methods to compute the dimensions of the irreducible _r,k(n)-modules as the number of semistandard multiset partition tableaux. Our aim in this paper is to fill in three gaps in the timeline for _r,k(n) by: (a) providing a basis analogous to the diagram basis for P_r(n), (b) providing generators for _r,k(n), and (c) constructing the irreducible representations of _r,k(n) as actions on the tableaux enumeratively predicted by Orellana and Zabrocki. In <cit.>, the authors investigate the centralizer __n(^_1(V_n)⊗…⊗^_k(V_n)) for a weak composition of r of length k, also dubbing it the multiset partition algebra, written _(n). Orellana and Zabrocki state that _(n) should be isomorphic to a subalgebra of their multiset partition algebra, and in this paper we describe that isomorphic subalgebra. In <Ref>, we define the relevant combinatorial objects and introduce the partition algebra and multiset partition algebra. In <Ref>, we collect some useful facts about orbits of the combinatorial objects under the action of Young subgroups. In <Ref>, we describe the centralizer of an algebra A acting on a direct sum of projections of a semisimple module V by idempotents. We then construct the irreducible modules over this centralizer in terms of irreducible modules of _A(V). In the following two sections, we specialize these constructions to the setting of the multiset partition algebra. In <Ref>, we use a decomposition of ^r(V_n,k) as a GL_n-module to obtain the diagram-like basis and describe _G(^r(V_n,k)) for general subgroups G of GL_n. In <Ref>, we construct the irreducible representations of _r,k(n) with bases indexed by multiset-valued tableaux. In <Ref>, we provide a generating set for _r,k(n), and finally in <Ref> we provide a change-of-basis formula from the orbit basis of Orellana and Zabrocki to our diagram-like basis. Acknowledgement: The author would like to thank his thesis advisor Rosa Orellana for introducing him to the problem, as well as Ben Adenbaum, Franco Saliola, and Mike Zabrocki for many helpful conversations. § PRELIMINARIES AND DEFINITIONS §.§ Set and Multiset Partitions A set partition ρ of a set S is a set of nonempty subsets of S called blocks whose disjoint union is S. We write ℓ(ρ) for the number of blocks in ρ.
We define [r]={1,…,r} the unbarred numbers and [ r]={1,…, r} the barred numbers. We write Π_2r for the set of set partitions of [r]∪[ r]. For a set R and a set partition ρ of S, write ρ|_R for the set partition obtained by restricting the elements in ρ to the set R. A weak composition of an integer r of length k is a sequence of k non-negative integers which sum to r. Write W_r,k for the set of weak compositions of r of length k. For ∈ W_r,k, write _i for the ith number in the sequence. A multiset of size r from a set S is a collection of r unordered elements of S which can be repeated. We will write multisets in , to differentiate them from sets and we will usually denote them by a capital letter with a tilde. We may write multisets using exponential notation =s_1^m_1,…,s_k^m_k where the multiplicity of the element s_i is given by the exponent m_i. We write m_s_i()=m_i for this multiplicity. Given multisets =s_1^m_1,…,s_k^m_k and =s_1^n_1,…,s_k^n_k, write ⊎=s_1^m_1+n_1,…,s_k^m_k+n_k for the union of the two multisets. A multiset partition ρ̃ of a multiset is a multiset of multisets called blocks whose union is . We write ℓ(ρ̃) for the number of blocks. Write Π̃_2r,k for the set of multiset partitions with r elements from [k] and r elements from [ k]. For a multiset partition and R a set, write |_R for the multiset partition obtained by restricting the elements in to the set R. A multiset partition and its restriction to a set: =1,1,1,1,2,2∈Π_2(3),2 |_[2] =1,1,2 Finally, we define partial orders on multisets and multiset partitions, which extend to sets and set partitions as a special case. Let and be multisets from [k]. We say that < if one of the following conditions hold: (i) is empty and is not, (ii) max()<max(), or (iii) max()=max()=m and ∖{m}<∖{m}. We call this the last-letter order on multisets. If ={_1,…,_k} and ={_1,…,_ℓ} are multiset partitions, we say that ≤, or is coarser than , if can be obtained by combining blocks of . Precisely, there exists a set partition {C_1,…,C_k} of [ℓ] so that S̃_i=_j∈ C_iR̃_j for all i. For example, 1,1,3,3,2,3 ≤1,3,1,3,2,3 §.§ Set and Multiset Partition Diagrams For a set partition π∈Π_2r, there is a classical graph-theoretic representation of π on two rows of vertices with the top row being labeled 1 through r and the bottom being labeled 1 through r. Two vertices of this graph are connected (but not necessarily adjacent) if and only if their labels are in the same block of π. The set partition π={{1,1,2,3},{2,3},{4},{4,5,5}} could be represented by either of the following two graphs. [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,1.8) node ı; (ı,.25) coordinate (Bı); (ı,-.3) node ı; [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (T1)–(B3)–(B2)–(B1)–(T1); [black] (T2)–(T3); [black] (T4)–(T5)–(B5)–(T4); 1,2,3,4,5black1,2,3,4,5black ] [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,1.8) node ı; (ı,.25) coordinate (Bı); (ı,-.3) node ı; [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (T1)–(B3)–(B2)–(B1)–(T1); [black] (T1)–(B2); [black] (T2)–(T3); [black] (T5)–(B5)–(T4); 1,2,3,4,5black1,2,3,4,5black ] Note that there could be many such graphs that represent the set partition π, so we consider two graphs equivalent if their connected components give rise to the same set partition. The diagram of π is the equivalence class of graphs with the same connected components. 
We can similarly consider a graph-theoretic representation of any multiset partition ∈Π̃_2r,k. This time we place r vertices on the top labeled by the unbarred elements of the blocks of in weakly increasing order and place r vertices on the bottom labeled by the barred elements in weakly increasing order. We then connect the vertices in any way so that the labeled connected components taken together are . The multiset partition =1,1,1,2,1,1,2,2,2,2 could be represented by any of the following graphs. [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); (1,1.8) node 1; (2,1.8) node 1; (3,1.8) node 1; (4,1.8) node 2; (5,1.8) node 2; (1,-.3) node 1; (2,-.3) node 1; (3,-.3) node 2; (4,-.3) node 2; (5,-.3) node 2; [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (T1)–(B3)–(B2)–(B1)–(T1); [black] (T2)–(T3); [black] (T4)–(T5)–(B5)–(T4); 1,2,3c14,5c21,2c13,4,5c2 ] [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); (1,1.8) node 1; (2,1.8) node 1; (3,1.8) node 1; (4,1.8) node 2; (5,1.8) node 2; (1,-.3) node 1; (2,-.3) node 1; (3,-.3) node 2; (4,-.3) node 2; (5,-.3) node 2; [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (B3)–(B2)–(B1)–(T1); [black] (T2)–(T3); [black] (T4)–(T5)–(B5); 1,2,3c14,5c21,2c13,4,5c2 ] [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); (1,1.8) node 1; (2,1.8) node 1; (3,1.8) node 1; (4,1.8) node 2; (5,1.8) node 2; (1,-.3) node 1; (2,-.3) node 1; (3,-.3) node 2; (4,-.3) node 2; (5,-.3) node 2; [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (T3)–(B3)–(B2)–(B1); [black] (T1)–(T2); [black] (T4)–(T5)–(B4); 1,2,3c14,5c21,2c13,4,5c2 ] Again we may have many graphs which represent the same multiset partition. The diagram of is the equivalence class of graphs whose labeled connected components give . We will often drop the labels on these graphs. A set partition diagram will be distinguished by the black color of its vertices, and it will be understood that the vertices are labeled in increasing order left-to-right. A multiset partition diagram will be distinguished by its colored vertices. Its vertices are understood to be labeled with blue, orange, green, and purple representing 1, 2, 3, and 4 respectively. Because of this graphical representation, for a set S with elements from [r]∪[r] we will sometimes refer to elements at the “top” of S to mean the unbarred elements and elements at the “bottom” of S to mean the barred elements, and likewise with multisets. §.§ Tableaux A partition of n is a weakly-decreasing sequence λ of positive integers called parts which sum to n. We write ℓ(λ) for the number of parts of λ. For example, (3,3,1) is a partition of 7. We will write λ⊢ n to mean that λ is a partition of n and write |λ|=n. Write λ_i for the ith element of the sequence λ, called the ith part of λ, and λ^* for the partition (λ_2,…,λ_ℓ) obtained by removing the first part. Given a partition λ, its Young diagram is an array of left-justified boxes where the ith row from the bottom has λ_i boxes. For example, the Young diagram of (3,3,1) is (3,3,1) When we refer to the ith row of a Young diagram, we mean the ith row from the bottom, which corresponds to the ith part of λ. 
A tableau of shape λ will be a filling of these boxes in λ's Young diagram with mathematical objects—in this paper the objects will be positive integers, sets, or multisets. We will call these integer-valued, set-valued, and multiset-valued tableaux respectively. We take a moment to define some particular classes of tableaux. Let ρ be a set partition of [r] and λ⊢ n such that |λ^*|≤ℓ(ρ). A set-partition tableau of shape λ and content ρ is a filling T of the Young diagram of λ with the blocks of ρ along with at least as many empty boxes in the first row as total boxes in the second row. A standard set-partition tableau is a set-partition tableau whose rows increase left-to-right and columns increase bottom-to-top with respect to the last-letter order. Write _λ,r for the set of set partition tableaux of shape λ with content a set partition of [r] and write _λ,r for the subset of _λ,r consisting of standard set partition tableaux. -1.3em(1.3, <24><9>,<35><68>,<17>)∈_(5,2,1),9 Write Λ^P_r(n) for the set of λ for which _λ,r≠∅ (we choose this notation because Λ^P_r(n) will also be an indexing set for the irreducible P_r(n)-modules). Let ρ̃ be a multiset partition from [k] and λ⊢ n such that |λ^*|≤ℓ(ρ). A multiset-partition tableau of shape λ and content ρ̃ is a filling T̃ of the Young diagram with the blocks of ρ̃ along with as many empty boxes in the first row as total boxes in the second row. A semistandard multiset-partition tableau is a multiset-partition tableau which strictly increases along columns and weakly increases along rows under the last-letter order. Write _λ,r,k for the set of multiset-partition tableaux of shape λ with content a multiset partition from [k] with a total of r numbers. Write _λ,r,k for the subset of _λ,r,k consisting of semistandard multiset-partition tableaux. -2em(1.5, <22><3>,<11><11>,<12>,<112>)∈_(4,2,1,1),12,3 Write Λ^_r,k(n) for the set of partitions λ⊢ n for which _λ,r,k≠∅. §.§ Double-Centralizer Theorem The algebras of interest in this paper arise as the algebras of endomorphisms of _n-modules, commonly called centralizer algebras of _n. The general case of the Schur-Weyl duality and Howe duality discussed in the introduction as well as the duality between _n and its centralizer algebras is summarized in the following theorem. <cit.><cit.>Let A be a semisimple algebra acting faithfully on a module V and set B=_A(V). Then B is semisimple and _B(V)≅ A. Furthermore, there is a set P (a subset of the indexing set of the irreducible representations of A) such that for each x∈ P, W_A^x is an irreducible A-module occurring in the decomposition of V as an A-module. Then if we set W_B^x=(W_A^x, V), then W_B^x is an irreducible B-module and the decomposition of V as an A× B-module is V≅⊕_x∈ PW_A^x⊗ W_B^x. Moreover, the dimension of W_A^x is equal to the multiplicity of W_B^x in V as a B-module and the dimension of W_B^x is equal to the multiplicity of W_A^x in V as an A-module. The decomposition in the above theorem gives a correspondence between irreducible representations of the algebras A and B and allows information like dimensions and multiplicities to be passed between them. In this paper, we are interested in setting A=_n. In this case, the indexing set P for the irreducible representations is a subset of the integer partitions of n. 
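To make the specialization A=_n concrete, the following small computation (a brute-force check of my own, not taken from the paper) counts the dimension of the centralizer __n(V_n^⊗ r) directly. Since _n permutes the basis vectors of V_n, the centralizer has a basis of orbit sums of matrix units, so its dimension is the number of _n-orbits on pairs of r-tuples of basis indices; each orbit is recorded by the set partition (the kernel) that the concatenated 2r-tuple induces on its positions. Once n≥ 2r every set partition of the 2r positions occurs, and the count stabilizes at the Bell number counting Π_2r, which reappears in the next subsection as the dimension of the partition algebra. The function names below are my own.

```python
from itertools import product
from math import comb

def kernel(seq):
    """Canonical record of the set partition a tuple induces on its positions:
    two positions get the same label exactly when their entries are equal."""
    first_seen = {}
    return tuple(first_seen.setdefault(v, len(first_seen)) for v in seq)

def commutant_dim(n, r):
    """dim of End_{S_n}(V_n^{tensor r}): the number of S_n-orbits on pairs of
    r-tuples of basis indices, counted via the kernel of the concatenated 2r-tuple."""
    return len({kernel(t) for t in product(range(n), repeat=2 * r)})

def bell(m):
    """Bell number B(m): the number of set partitions of an m-element set."""
    b = [1]
    for i in range(m):
        b.append(sum(comb(i, j) * b[j] for j in range(i + 1)))
    return b[m]

if __name__ == "__main__":
    r = 2
    for n in (2, 3, 4, 5):
        # the commutant dimension stabilizes at bell(2r) = |Pi_{2r}| once n >= 2r
        print(n, commutant_dim(n, r), bell(2 * r))
```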
§.§ Partition Algebra For r a positive integer and an indeterminate x, the partition algebra P_r(x) is an associative algebra over (x) first introduced as a generalization of the Temperley-Lieb algebra and the Potts model in statistical mechanics by Jones <cit.> and Martin <cit.>. When x is specialized to an integer n≥ 2r, the algebra P_r(n) is isomorphic to the algebra of endomorphisms __n(V_n^⊗ r). The partition algebra has two distinguished bases: the orbit basis {_π:π∈Π_2r} which arises naturally from the structure of P_r(n) as a centralizer algebra and the diagram basis {_π:π∈Π_2r} whose product has a combinatorial interpretation in terms of partition diagrams. The change-of-basis is obtained by summing over coarsenings of a diagram: _π=∑_ν≤π_ν. The product formula for _π can be stated in terms of diagrams as follows. To compute the product of _π and _ν, place a graph representing π above one representing ν and identify the vertices on the bottom of π with the corresponding vertices of ν to create a three-tiered diagram. Let γ be the restriction of this diagram to the very top and very bottom, preserving which vertices are connected and let c(π,ν) be the number of components entirely in the middle of the three-tier diagram. Then _π_ν=n^c(π,ν)_γ. Here we show the product of two diagram basis elements. Notice that two components are entirely in the middle, giving a coefficient of n^2. [ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6,7 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T7) – (B7) – (B1) – (T1); [black] (T1)–(T2)–(B1)–(T1); [black] (T3)–(B2); [black] (T4)–(T5)–(B4)–(T4); [black] (T7)–(B7); [black] (B3) .. controls +(0,+.35) and +(0,+.35) .. (B5); 1,...,7black1,...,7black; [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6,7 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T7) – (B7) – (B1) – (T1); [black] (T1)–(B1)–(B2)–(T1); [black] (B3)–(B5); [black] (T7)–(B7)–(B6)–(T7); [black] (T2) .. controls +(0,-.35) and +(0,-.35) .. (T4); [black] (T3) .. controls +(0,-.35) and +(0,-.35) .. (T5); 1,...,7black1,...,7black ]=n^2[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6,7 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T7) – (B7) – (B1) – (T1); [black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(T5); [black] (B3)–(B5); [black] (T7)–(B7)–(B6)–(T7); 1,...,7black1,...,7black ] In <cit.>, the authors construct the irreducible representations P_r^λ of P_r(n) for n≥ 2r as a combinatorial action of the set partition diagrams on set partition tableaux. For λ∈Λ^P_r(n), the module P_r^λ is {v_T:T∈_λ,r} with action given as follows. For a set partition π∈Π_2r to act on a tableau T, first pull out the content of T, a set partition of [r], into a single row. Then, put π on top of this row and identify the corresponding vertices. Form T' by replacing the content of each box in T with the set of vertices atop π that the box is connected to, and for each block entirely in the top of π, include it as the content of a box in the first row of T'. If two blocks above the first row are combined or the content of a box does not connect to the top of the partition diagram, the result is zero. Here we show the action of two different diagrams on the same tableau. 
[line width=1.25pt, xscale=.7, yscale=.7] (1.2,1) node (1.5, <5>,<12><4>,<3>); (-.1,1.1) coordinate (T21); (.5,1.8) coordinate (T31); (2.45,.5) coordinate (T13); (1.45,1.1) coordinate (T22); (-.75,2.7) coordinate (C1); (.25,2.7) coordinate (C2); (1.25,2.7) coordinate (C3); (2.25,2.7) coordinate (C4); (3.25,2.7) coordinate (C5); (-.75,3.2) coordinate (B1); (.25,3.2) coordinate (B2); (1.25,3.2) coordinate (B3); (2.25,3.2) coordinate (B4); (3.25,3.2) coordinate (B5); (-.75,4.2) coordinate (T1); (.25,4.2) coordinate (T2); (1.25,4.2) coordinate (T3); (2.25,4.2) coordinate (T4); (3.25,4.2) coordinate (T5); [gray] (T21) .. controls +(-.6,0) and +(+.6,0) .. (C1); [gray] (T31) .. controls +(0,+.6) and +(0,-.6) .. (C3); [gray] (T22) .. controls +(0,+.6) and +(0,-.6) .. (C4); [gray] (T13) .. controls +(0,+.8) and +(0,-.8) .. (C5); [black] (C1) – (C2); [black] (T1)–(T2)–(B3); [black] (B1)–(B2)–(T3); [black] (B4)–(B5)–(T5)–(B4); [fill=gray,draw=gray,line width = 1pt] (T21) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T31) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T13) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T22) circle (2pt); ı in C1, C2, C4, B1, B2, B4, T1, T2, T3, B5, C5, C3, B3, T4, T5[fill=black,draw=black,line width = 1pt] (ı) circle (4pt); =-1.2em(1.5, <4>,<3><5>,<12>)=- -1.2em(1.5, <4>,<12><5>,<3>) [line width=1.25pt, xscale=.7, yscale=.7] (1.2,1) node (1.5, <5>,<12><4>,<3>); (-.1,1.1) coordinate (T21); (.5,1.8) coordinate (T31); (2.45,.5) coordinate (T13); (1.45,1.1) coordinate (T22); (-.75,2.7) coordinate (C1); (.25,2.7) coordinate (C2); (1.25,2.7) coordinate (C3); (2.25,2.7) coordinate (C4); (3.25,2.7) coordinate (C5); (-.75,3.2) coordinate (B1); (.25,3.2) coordinate (B2); (1.25,3.2) coordinate (B3); (2.25,3.2) coordinate (B4); (3.25,3.2) coordinate (B5); (-.75,4.2) coordinate (T1); (.25,4.2) coordinate (T2); (1.25,4.2) coordinate (T3); (2.25,4.2) coordinate (T4); (3.25,4.2) coordinate (T5); [gray] (T21) .. controls +(-.6,0) and +(+.6,0) .. (C1); [gray] (T31) .. controls +(0,+.6) and +(0,-.6) .. (C3); [gray] (T22) .. controls +(0,+.6) and +(0,-.6) .. (C4); [gray] (T13) .. controls +(0,+.8) and +(0,-.8) .. (C5); [black] (C1) – (C2); [black] (T1)–(B1); [black] (B2)–(B3)–(T2)–(B2); [black] (T3)–(B4)–(B5); [black] (T4)–(T5); [fill=gray,draw=gray,line width = 1pt] (T21) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T31) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T13) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T22) circle (2pt); ı in C1, C2, C4, B1, B2, B4, T1, T2, T3, B5, C5, C3, B3, T4, T5[fill=black,draw=black,line width = 1pt] (ı) circle (4pt); =-1.2em(1.5, <45>,<12><3>,<>)=0 The result T of the above process may not be a standard set partition tableau, so we need to make sense of what it means to write v_T for T nonstandard. The algorithm for writing v_T as a linear combination of standard tableaux is called the straightening algorithm. The straightening algorithm for P_r^λ is the same as for the Specht modules of _n applied to the rows above the first row of T, a complete treatment of which can be found in <cit.>, but we summarize some key features that will be important for our constructions. Given T∈_λ,r nonstandard, the straightening algorithm writes v_T as a linear combination v_T=∑_S∈_λ,r c_S v_S. 
The relations between the v_T are called Garnir relations and are generally complicated, but one special case will be particularly useful to us: if T' is the result of exchanging two boxes of T above the first row which sit in the same column, then v_T'=-v_T. §.§ Multiset Partition Algebra The multiset partition algebra naturally arises by restricting the action of GL_n on ^r(V_n,k) in Howe duality to the permutation matrices. One can think of elements in ^r(V_n,k) as homogeneous polynomials of degree r in indeterminates x_ij for 1≤ i≤ n and 1≤ j≤ k. The action of GL_n on the space ^r(V_n,k) can be described on monomials as follows. Given a matrix M=(m_ij)∈ GL_n, its inverse acts on an element of U_ as follows: M^-1.x_ij =∑_ℓ=1^n m_iℓx_ℓ j M^-1.x_i_1j_1… x_i_rj_r =(∑_ℓ_1=1^n m_i_1ℓ_1x_ℓ_1 j_1)⋯(∑_ℓ_r=1^n m_i_rℓ_rx_ℓ_r j_r) =∑_ℓ_1,…,ℓ_r=1^n m_i_1ℓ_1⋯ m_i_rℓ_r (x_ℓ_1j_1… x_ℓ_rj_r) In <cit.>, the authors introduce a multiset partition algebra _r,k(x) with bases indxed by elements of Π̃_2r,k. When x is specialized to an integer n≥ 2r, the algebra _r,k(n) is isomorphic to __n(^r(V_n,k)) where _n acts by the restriction of the GL_n action above to the n× n permutation matrices. The authors obtain a basis analogous to the orbit basis for P_r(n) and prove that for n≥ 2r the irreducible representations of _r,k(n) occurring in the decomposition ^r(V_n,k)≅⊕_λ∈Λ^_r,k(n)W__n^λ⊗ W__r,k(n)^λ have dimension (W__r,k(n)^λ)=#_λ,r,k. In <cit.>, the authors generalize the Robinson–Schensted–Knuth algorithm to two-line arrays of multisets. This algorithm establishes a correspondence between multiset partitions in Π̃_2r,k and pairs of elements of _λ,r,k, Π̃_2r,k∼⟷_λ∈Λ^_r,k(n)_λ,r,k×_λ,r,k showing that (_r,k(n))=∑_λ∈Λ((W__r,k(n)^λ))^2. Hence, the Λ^_r,k(n) irreducible representations occurring in <Ref> are pairwise nonisomorphic, and each irreducible representation of _r,k(n) is isomorphic to one representation in the set. § ORBITS UNDER YOUNG SUBGROUPS The symmetric group algebra _r sits naturally inside of P_r(n) as the diagrams whose blocks pair one vertex on top with one on the bottom. For σ∈_r, we will write _σ for the diagram basis element corresponding to the set partition {{σ(1),1},…{σ(r),r}}. This embedding leads to natural actions on set partitions and set partition tableaux. In this section, we collect up some useful facts about orbits of these actions when they are restricted to Young subgroups. For ∈ W_r,k, write _=_{1,…,_1}×…×_{_1+…+_k-1,…,_1+…+_k} for the corresponding Young subgroup of _r. A pair of permutations (σ_1,σ_2)∈_r×_r can act on a set partition π∈Π_2r by taking the product _σ_1_π_σ_2. The resulting set partition σ_1.π.σ_2 can be obtained by replacing each i in π with σ_1(i) and each i in π with σ_2^-1(i). Given ,∈ W_r,k, define the coloring map κ_,:Π_2r→Π̃_2r to be the function given by making the following substitutions. i ↦ 1 i≤_1 2 _1<i≤_1+_2 ⋮ k _1+…+_k-1<i i ↦1 i≤_1 2 _1<i≤_1+_2 ⋮ k _1+…+_k-1<i On diagrams, we can think of κ_, as coloring in the diagram of π with colors whose multiplicities are given by on top and on bottom. 
Two set partitions which map to the same multiset partition under the coloring map κ_(1,2,1),(2,0,2): κ_(1,2,1),(2,0,2)(4[black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(B3); [black] (T4)–(B4); 1,2,3,4black1,2,3,4black) =4[black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(B3); [black] (T4)–(B4); 1c12,3c24c31,2c13,4c3 κ_(1,2,1),(2,0,2)(4[black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(B4); [black] (T4)–(B3); 1,2,3,4black1,2,3,4black) =4[black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(B4); [black] (T4)–(B3); 1c12,3c24c31,2c13,4c3=4[black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T3)–(B3); [black] (T4)–(B4); 1c12,3c24c31,2c13,4c3 Note that if π and π' are in the same _×_-orbit, they differ by a permutation that takes each label to another label colored the same under applying κ_,, so κ_,(π)=κ_,(π'). Hence, κ_, induces a map κ_,:Π_2r/(_×_)→Π̃_2r,k. Conversely, if κ_,(π)=κ_,(π'), then π and π' are in the same orbit. Hence, the map κ_, is injective. Given ∈Π̃_2r,k whose unbarred and barred multiplicities are given by and respectively, we can easily create a set partition π∈Π_2r such that κ_,(π)= by simply taking any graph representing and forgetting the data of the colored vertices. Hence, the maps κ_, taken together as a map _,∈ W_r,kΠ_2r/(_×_)→Π̃_2r,k gives a bijection. This gives us a correspondence between multiset partitions and orbits of set partitions under an action of a pair of Young subgroups. Π̃_2r,k∼⟷_,∈ W_r,kΠ_2r/(_×_) We now obtain a second action from the module structure of P_r^λ. A permutation σ∈_r acts on the set _λ,r by replacing each entry i of a tableau T with σ(i). For example, (1 3 2)(4).-.8em( ,14,<23>)=-.8em( ,34,<12>). Like above, we define a surjective coloring map κ_:_λ,r→_λ,r,k for ∈ W_r,k which replaces the numbers {1,…,_1} with 1, {_1+1,…,_1+_2} with 2, etc.. Two set partition tableaux that are sent to the same multiset partition tableau by the coloring map κ_(3,1): κ_(3,1)(( ,14,<23>) )=κ_(3,1)(( ,34,<12>) )=( ,12,<11>) By an analogous argument to the case of set partitions, κ_(T)=κ_(S) if and only if T and S are in the same _-orbit, so we get a bijection: _λ,r,k∼⟷_∈ W_r,k_λ,r/_ The orbit of a standard set partition tableau T corresponds to a multiset partition tableau whose rows and columns weakly increase. That is, is semistandard except for possible repeats within columns. § THE PAINTED ALGEBRA CONSTRUCTION This section considers an algebra B with distinguished idempotents and M a B-module. We provide constructions of a new algebra B̃ called the corresponding painted algebra and a B̃-module M̃ called the painted module. First, we consider the setting in which this construction will naturally arise in <Ref>. Let A be an algebra and V a semisimple A-module. Let e_1,…,e_m∈_A(V) be idempotents. Then _A(⊕_i=1^me_i V)≅⊕_i,j=1^m e_i_A(V) e_j where the product on the right hand side of e_iφ e_j∈ e_i_A(V)e_j and e_kψ e_ℓ∈ e_k_A(V)e_ℓ is given by (e_iφ e_j)·(e_kψ e_ℓ)=δ_jke_iφ e_jψ e_ℓ∈ e_i_A(V) e_ℓ. where the product to the right of the equal sign is taken in _A(V). First, note that _A(⊕_i=1^me_i V) ≅⊕_i,j=1^m _A(e_j V, e_i V). An element e_iφ e_j∈ e_i_A(V) e_j can be viewed as a map e_j V→ e_i V, giving rise to an injective linear map Φ:e_i_A(V)e_j→_A(e_jV, e_i V). Because V is semisimple, the submodule e_jV has a complementary submodule U so that V=e_j V⊕ U. A map ψ:e_j V→ e_iV⊆ V can be extended to a map ψ:V→ V by setting ψ(u)=0 for all u∈ U. Then for any e_jv∈ e_jV, we have that e_iψ e_j(e_jv)=e_iψ(e_jv)=e_iψ(e_jv). Hence, the map Φ is also surjective, so it is an isomorphism. 
_A(⊕_i=1^me_i V) ≅⊕_i,j=1^m e_i_A(V)e_j Suppose j≠ k. The output of the map e_kψ e_ℓ is in e_k V, so the component of its output in e_j V is zero. Hence, (e_iφ e_j)·(e_kψ e_ℓ)=0. If instead j=k, we have that (e_iφ e_j)·(e_kψ e_ℓ)=e_iφ e_j e_kψ e_ℓ. Hence, (e_iφ e_j)·(e_kψ e_ℓ) =δ_jke_iφ (e_j)^2ψ e_ℓ =δ_jke_iφ e_jψ e_ℓ∈ e_i_A(V) e_ℓ The above lemma motivates the following definition. For a semisimple algebra B with distinguished idempotents {e_1,…,e_m}, the corresponding painted algebra with respect to these idempotents is =⊕_i,j=1^m e_i B e_j with multiplication as in the previous lemma. For a B-module M, the corresponding painted module with respect to these idempotents is M̃=⊕_i=1^m e_i M where e_i b e_j.e_k m=δ_jke_ibe_j.m where the latter action is that of B on M. To conclude this section, we show that the irreducible -modules are precisely the painted irreducible B-modules. Let B be a semisimple algebra with distinguished idempotents e_1,…,e_m. Then * For any simple B-module S, either ={0} or is a simple -module. * For any simple -module T, there is a simple B-module S so that ≅ T. Suppose ≠{0}, let s̃=∑_i=1^m e_is_i∈ be nonzero, and fix any j∈[m] such that e_js_j is nonzero. We show that any such s̃ generates as a -module, and so is simple. Note that e_js̃=e_js_j∈ e_j S⊆ S. Because S is simple, for any e_ks∈ S there exists b∈ B such that be_js̃=s and so e_kb e_js̃=e_k s. Because the elements e_k s span , we see that is generated by any nonzero element and hence is simple. The remainder of the proof generalizes an argument for the case of a single idempotent found in the proof of Theorem 1.10.14 of <cit.>. Suppose T is a simple -module and define a B-module U=(⊕_i=1^m Ae_i)⊗_T where B acts on the direct sum by left-multiplication. Write S=U/M where M is a maximal submodule of U. Then S is a simple B-module. The goal is now to define a nonzero -module map from T to the painted module . Consider the quotient map π:U→ S and suppose e_i⊗ t≠0. We claim that e_i⊗ t generates U as a B-module and hence π(e_i⊗ t)≠0 (else e_i⊗ t∈ M, which would mean that M=U). To show this, we need only demonstrate that for any fixed ℓ∈[m] and t'∈ T, there exists an element b∈ B such that b.(e_i⊗ t)=e_ℓ⊗ t'. Because T is a simple module and e_i.t≠0, there exists an element b̃=∑_j,ke_jb_j,ke_k∈ such that b̃.e_i⊗ t =∑_je_jb_j,ie_i.e_i⊗ t =(e_1+…+e_m)⊗ (∑_je_jb_j,ie_i.t) =(e_1+…+e_m)⊗ e_ℓ t' =e_ℓ⊗ t'. Set b=∑_je_jb_j,ie_i. Then b.e_i⊗ t=e_ℓ⊗ t'. Let t∈ T be nonzero and note that in we have e_1+…+e_m=1, so (e_1+…+e_m).t=t. Hence, some e_i.t≠0. Consider the map e_iπ e_i:e_iU→ e_i S. Because e_i.(e_i⊗ t)=e_i⊗ t ∈ e_i U, we know that e_iπ e_i is nonzero. Define a -module map ⊕_i=1^m e_iU→⊕_i=1^m e_iS by e_jr↦ e_jπ(r). By the above observation, this is a nonzero module map. By Schur's Lemma, it is an isomorphism and hence ≅⊕_i=1^m e_iU ≅(⊕_i,j=1^m e_iBe_j)⊗ T ≅ T § PAINTED DIAGRAM ALGEBRAS AND A DIAGRAM-LIKE BASIS In this section, our goal is to decompose ^r(V_n,k) in a way that allows us to use the results of the previous section. Let U_ be the span of monomials of the form x_i_1j_1⋯ x_i_rj_r where for each 1≤ m≤ k, exactly _m of the values j_1,…,j_r are equal to m. For example, the monomials x_11x_21x_23 and x_21x_21x_23 are both in U_(2,0,1)⊂^3(V_2,3). To apply the lemmas in the previous section, we will need to write the subspace U_ as the projection by some idempotent in P_r(n). We define the following idempotent for ∈ W_r,k: s_=1/_∑_σ∈__σ. 
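Before projecting by these idempotents, it may help to see s_α concretely as a matrix acting on V_n^⊗ r. The sketch below is a small numerical check of my own (the matrix conventions and function names are not from the paper): it builds the permutation matrices of the Young subgroup _α acting on tensor factors, averages them to form s_α, and verifies that s_α is idempotent and that its rank, i.e. the dimension of its image, matches the number of monomials spanning U_α (a product of binomial coefficients).

```python
import numpy as np
from itertools import permutations, product
from math import comb

def perm_matrix_on_tensor(sigma, n, r):
    """Matrix of sigma in S_r on (C^n)^{tensor r}, permuting tensor factors with the
    convention sigma.(v_1 x ... x v_r) = v_{sigma^{-1}(1)} x ... x v_{sigma^{-1}(r)}.
    Here sigma is a 0-indexed one-line tuple (sigma(0), ..., sigma(r-1))."""
    idx = list(product(range(n), repeat=r))
    pos = {t: a for a, t in enumerate(idx)}
    M = np.zeros((n ** r, n ** r))
    for a, t in enumerate(idx):
        image = tuple(t[sigma.index(m)] for m in range(r))  # slot m receives entry sigma^{-1}(m)
        M[pos[image], a] = 1.0
    return M

def young_subgroup(alpha):
    """All elements of S_alpha: permutations of positions 0..r-1 preserving the
    consecutive blocks of sizes alpha_1, ..., alpha_k."""
    r = sum(alpha)
    blocks, start = [], 0
    for a in alpha:
        blocks.append(list(range(start, start + a)))
        start += a
    out = []
    for choice in product(*(permutations(b) for b in blocks)):
        sigma = [None] * r
        for block, image in zip(blocks, choice):
            for src, dst in zip(block, image):
                sigma[src] = dst
        out.append(tuple(sigma))
    return out

def s_alpha_matrix(alpha, n):
    """s_alpha: the average of the permutation matrices over the Young subgroup S_alpha."""
    sigmas = young_subgroup(alpha)
    return sum(perm_matrix_on_tensor(s, n, sum(alpha)) for s in sigmas) / len(sigmas)

if __name__ == "__main__":
    alpha, n = (1, 2), 2
    s = s_alpha_matrix(alpha, n)
    print(np.allclose(s @ s, s))          # True: s_alpha is an idempotent
    print(round(np.trace(s)))             # rank of the projection = dim of its image (here 6)
    print(int(np.prod([comb(n + a - 1, a) for a in alpha])))  # product of monomial counts (6)
```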
From <Ref> we see that the subspace U_ is in fact a GL_n-submodule of ^r(V_n,k). This gives a decomposition ^r(V_n,k)=⊕_∈ W_r,kU_ as a GL_n-module. We now use this decomposition to construct a linear isomorphism Φ:⊕_∈ W_r,ks_V_n^⊗ r→^r(V_n,k). For ∈ W_r,k, define a linear map Φ_:V_n^⊗ r→ U_ by Φ_(e_i_1⊗…⊗ e_i_r)=∏_m=1^_1x_i_m1∏_m=_1+1^_1+_2x_i_m2…∏_m=_1+…+_k-1+1^rx_i_mk Φ_(1,2,2)(e_2⊗ e_2⊗ e_1⊗ e_1⊗ e_2)=x_21x_22x_12x_13x_23. For ease of notation, we will write e_=e__1⊗…⊗ e__r for ∈[n]^r. It's clear that Φ_ is surjective and that Φ_(e_)=Φ_(e_') exactly when e_' can be obtained from e_ by rearranging factors grouped into the same product above. That is, e_'=σ(e_) for some σ∈_. Hence, Φ_ restricts to an isomorphism s_V_n^⊗ r∼⟶ U_, so the map Φ:⊕_∈ W_r,ks_V_n^⊗ r→^r(V_n,k) which sends s_(e_) to Φ_(s_(e_)) is an isomorphism. The linear isomorphism Φ: ⊕_∈ W_r,k s_V_n^⊗ r→^r(V_n,k) above induces an isomorphism of algebras _G(^r(V_n,k))∼⟶_G(⊕_∈ W_r,ks_V_n^⊗ r) for each subgroup G of GL_n. The action of M∈ GL_n on s_(e_i_1⊗…⊗ e_i_r) is given by M.s_(e_i_1⊗…⊗ e_i_r) =s_(Me_i_1⊗…⊗ M e_i_r) =∑_ℓ_1,…,ℓ_r=1^n m_i_1ℓ_r⋯ m_i_rℓ_rs_(e_ℓ_1⊗…⊗ e_ℓ_r). Comparing this computation with <Ref>, we see that the map Φ is nearly a homomorphism of GL_n-modules, but the action of M∈ GL_n on one space is the action of M^-1 on the other. That is, for M∈ GL_n, Φ M = M^-1Φ. Multiplying by Φ^-1 on the left and right of both sides yields an analogous statement for Φ^-1: MΦ^-1 =Φ^-1M^-1. The linear isomorphism Φ induces an algebra isomorphism _(^r(V_n,k)) ∼⟶_(⊕_∈ W_r,ks_V_n^⊗ r) φ ⟼Φ^-1φΦ. Now we make the following observation for G⊆ GL_n a subgroup. φ∈_G(^r(V_n,k)) φ=M^-1φ M ∀ M∈ G Φ^-1φΦ=Φ^-1 M^-1φ M Φ ∀ M∈ G Φ^-1φΦ=MΦ^-1φΦ M^-1 ∀ M∈ G Φ^-1φΦ∈_G(⊕_∈ W_r,ks_V_n^⊗ r) Hence, the map φ↦Φ^-1φΦ restricts to an isomorphism _G(^rV_n,k^⊗ r)∼⟶_G(⊕_∈ W_r,ks_V_n^⊗ r). Let G be a subgroup of GL_n. Because G⊆ GL_n, we have that _n≅_GL_n(V_n^⊗ r)⊆_G(V_n^⊗ r). Hence, the idempotents s_ for ∈ W_r,k are in _G(V_n^⊗ r), allowing us to use <Ref> to make the following computation. _G(^r(V_n,k)) ≅_G(⊕_∈ W_r,ks_V_n^⊗ r) ≅⊕_,∈ W_r,ks__G(V_n^⊗ r)s_ where the product is given by (s_φ s_)·(s_ψ s_)=δ_,s_φ s_ψ s_. If we write A_r(n)=_G(V_n^⊗ r), this algebra is precisely the painted algebra Ã_r,k(n) with respect to the idempotents {s_:∈ W_r,k}. What we have shown in the above analysis is summarized in the following theorem. Let G⊆ GL_n be a subgroup and let A_r(n)=_G(V_n^⊗ r). Then _G(^r(V_n,k))≅Ã_r,k(n) where Ã_r,k(n) is the painted algebra of A_r(n) with respect to the idempotents {s_:∈ W_r,k}. Now, we use this isomorphism to construct a basis for _r,k(n)≅__n(^r(V_n,k)) from the diagram basis of P_r(n). Such a basis can be similarly constructed for other subalgebras of P_r(n) which contain _r. In general, the projection s_ L_π s_ can be computed as follows. s__π s_ =1/_×_∑_(σ,σ')∈_×__σ_π_σ' =1/_×_∑_(σ,σ')∈_×__σ.π.σ' Each diagram basis element _π projects to the sum of the orbit of π under the _×_-action. Because the orbits are disjoint, the set of distinct projections are linearly independent and hence form a basis of s_ P_r(n) s_. Due to the correspondence between orbits of set partitions under a Young subgroup action and multiset partitions given in <Ref>, we know that these basis elements are indexed by multiset partitions obtained by coloring in the elements of Π_2r with colors whose multiplicities are given by on the top and on the bottom. For ∈Π̃_2r,k define _=s__π s_ where π is any set partition in the orbit corresponding to . 
We then see that {_:∈Π̃} is a basis for P̃_r,k(n)≅_r,k(n). Using the formula in <Ref>, the product __ for ∈Π̃_2r,k with multiplicities on top and bottom given by and respectively and ∈Π̃_2r,k with multiplicities on top and bottom given by ' and respectively is the following. __ =(s__π s_)·(s_'_ν s_) =δ_,'s__π s__ν s_ =δ_,'/_∑_σ∈_s__π_σ_ν s_ =δ_,'/_∑_σ∈_s__π_σ.ν s_ To interpret this product combinatorially, it will be helpful to assign a combinatorial object to each term in the sum. For a pair ,∈Π̃_2r,k, a snapshot is a pair (π,ν) where κ_,(π)= and κ_,=. We can represent these visually as a stack of partition diagrams whose vertices are painted from k colors. To differentiate from the multiset partition diagrams (and to emphasize that the identically colored vertices in this situation are not interchangeable and are instead fixed in place), we draw the vertices as open circles. The formula above can then be thought of as beginning with any snapshot (π,ν) and them summing over the snapshots {(π,σ.ν):σ∈_}. In the summand, _π_σ.ν is the product of the two set partitions as elements of P_r(n) and multiplying by the idempotents s_ and s_ projects to the diagram-like basis element corresponding to the multiset partition obtained by filling in the vertices. The product of [ [xscale=.35,yscale=.35,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1)–(T2)–(T1); [black] (B2)–(B3); [black] (T4)–(B4); 1,2,3c14c21,2c13,4c2 ] and [ [xscale=.35,yscale=.35,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1)–(B2)–(T1); [black] (T4)–(B4)–(B3)–(T4); 1,2c13,4c21,2,3,4c2 ] is given by choosing a snapshot, then acting on the top of the second diagram with each permutation in _(2,2). 
1/2!2!( [ ; [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1)–(B2)–(T1); [black] (T4)–(B4)–(B3)–(T4); 1,2c13,4c21,2,3,4c2 ]+[ ; [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T2)–(B1)–(B2)–(T2); [black] (T4)–(B4)–(B3)–(T4); 1,2c13,4c21,2,3,4c2 ]+[ ; [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1)–(B2)–(T1); [black] (T3)–(B4)–(B3)–(T3); 1,2c13,4c21,2,3,4c2 ]+[ ; [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T2)–(B1)–(B2)–(T2); [black] (T3)–(B4)–(B3)–(T3); 1,2c13,4c21,2,3,4c2 ]) =1/4(n[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (T4)–(B4)–(B3)–(T4); 1,2,3c14c21,2,3,4c2 ]+[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(T2); [black] (B1)–(B2); [black] (T4)–(B4)–(B3)–(T4); 1,2,3c14c21,2,3,4c2 ]+[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(T2)–(B2)–(B1)–(T1); [black] (B3)–(B4); 1,2,3c14c21,2,3,4c2 ]+[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(T2); [black] (B1)–(B4); 1,2,3c14c21,2,3,4c2 ]) We call this basis {D_:∈Π̃_2r,k} for _r,k(n)≅__n(^r(V_n,k)) the diagram-like basis. This perspective resolves the question of how the multiset partition algebra _(n) for ∈ W_r,k defined in <cit.> sits inside _r,k(n): _(n) ≅__n(s_V_n^⊗ r) ≅ s_ P_r(n) s_. Note that s_ P_r(n) s_ is the span of diagram-like basis elements in _r,k(n) whose colors on top and bottom have multiplicity given by . § IRREDUCIBLE REPRESENTATIONS OF _R,K(N) In this section, we aim to construct the irreducible representations of _r,k(n). Part (ii) of <Ref> tells us that in order to construct each of these irreducible representations, we need only consider the irreducible P_r(n) representations painted with respect to the idempotents {s_:∈ W_r,k}. For λ∈Λ^P_r(n), define _r,k^λP̃_̃r̃^̃λ̃= ⊕_∈ W_r,ks_ P_r^λ. By <Ref>, each module in {_r,k^λ:λ∈Λ^P_r(n)} is either a simple _r,k(n)-module or the zero module, and each simple _r,k(n) module appears in the set. To investigate the structure of these modules, we note that for T∈_λ,r, the projection s_ v_T is the average over the _-orbit of T: s_ v_T=1/_.T∑_S∈_.Tv_S This orbit corresponds to =κ_(T)∈_λ,r,k, and so we define w_=s_ v_T=1/κ_^-1()∑_T∈κ_^-1() v_T where κ_^-1() is the preimage of under the coloring map. If T∈_λ,r, then either w_=0 or ∈_λ,r,k. As observed in <Ref>, if T∈_λ,r, then has rows and columns weakly increasing. 
If is not semistandard, then it must have two boxes within the same column which have identical contents. For a tableau T, write T' for the tableau obtained by swapping the content of these two boxes. Then w_=w_T̃'̃=1/_.T∑_S∈_.Tv_S'=1/_.T∑_S∈_.T-v_S=-w_. Hence, if ∉_λ,r,k, then w_=0. We can now use <Ref> to describe a straightening algorithm for _r,k^λ. Suppose is not semistandard and w_≠0. Then there exists T in the _-orbit corresponding to which is not standard. Then using the straightening algorithm for P_r^λ, we can write v_T =∑_S∈_λ,rc_Sv_S. Then projecting by s_, we obtain w_ =s_ v_T=∑_S∈_λ,r c_S s_ v_S=∑_S∈_λ,r c_S w_ where each for which w_≠0 is semistandard. The set {_r,k^λ:λ∈Λ^_r,k(n)} forms a complete set of irreducible representations for _r,k(n) and for each λ∈Λ^_r,k(n), the set {w_:∈_λ,r,k} forms a basis of _r,k^λ. Because _r,k^λ is the span of {w_:∈_λ,r,k}, we know that _r,k^λ=0 unless λ∈Λ^_r,k(n). We then have that each of the Λ^_r,k(n) irreducible representations appear in the smaller set {_r,k^λ:λ∈Λ^_r,k(n)}. Because there are only Λ^_r,k(n) representations in this set, it must be a complete set of irreducible representations for _r,k(n). A priori, we don't know that these w_ are linearly independent, so we can only conclude that (_r,k)≤#_λ,r,k. However, we do know that ∑_λ∈Λ^_r,k(n)((_r,k^λ))^2=(_r,k(n))=∑_λ∈Λ^_r,k(n)(#_λ,r,k)^2. Hence (_r,k(n))=#_λ,r,k and so for each λ∈Λ^_r,k(n) the set {w_:∈_λ,r,k} indeed forms a basis of _r,k^λ. We now consider the action of an element _ on w_. Suppose that the multiplicities of colors in are given by and respectively and that the multiplicities of elements in are given by . Then the definition of a painted module gives us the following formula: _.w_ =s_ L_π s_ . s_ v_T =δ_,(s_ L_π s_ v_T) =δ_,∑_σ∈_s_ L_π.σ v_T We can interpret this formula for diagrams as follows. (i) Pull out the content of , a multiset partition with r elements from [k], in a row above and fix the order. (ii) Place on top and permute the vertices of the same color at the bottom in each possible way. (iii) For each permutation, compute the action as for P_r^λ. (iv) Sum the resulting tableaux and divide by the number of permutations. The action of a multiset partition on a multiset partition tableau. [line width=1.25pt, xscale=.7, yscale=.7] (1,1) node (2, <1>,1,<12>); (1,-1.3) node -1.2em(2, 2,<11>,1); (-.1,1.1) coordinate (T21); (-.1,1.8) coordinate (T31); (1.25,.5) coordinate (T12); (-.75,2.7) coordinate (C1); (.25,2.7) coordinate (C2); (1.25,2.7) coordinate (C3); (2.25,2.7) coordinate (C4); (-.75,3.2) coordinate (B1); (.25,3.2) coordinate (B2); (1.25,3.2) coordinate (B3); (2.25,3.2) coordinate (B4); (-.75,4.2) coordinate (T1); (.25,4.2) coordinate (T2); (1.25,4.2) coordinate (T3); (2.25,4.2) coordinate (T4); [gray] (T21) .. controls +(-.6,0) and +(+.6,0) .. (C1); [gray] (T31) .. controls +(0,+.6) and +(0,-.6) .. (C2); [gray] (T12) .. controls +(0,+.8) and +(0,-.8) .. 
(C4); [black] (C2) – (C3); [black] (T2) – (T1) – (B1); [black] (T3) – (B3); [fill=gray,draw=gray,line width = 1pt] (T21) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T31) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T12) circle (2pt); ı in C1, C2, C4, B1, B2, B4, T1, T2, T3[fill=white,draw=c1,line width = 1pt] (ı) circle (4pt);ı in C3, B3, T4[fill=white,draw=c2,line width = 1pt] (ı) circle (4pt); [line width=1.25pt, xscale=.7, yscale=.7] (1,1) node (2, <1>,1,<12>); (1.55,-1.3) node -1.2em(2, 2, ,<111>)=0; (-.1,1.1) coordinate (T21); (-.1,1.8) coordinate (T31); (1.25,.5) coordinate (T12); (-.75,2.7) coordinate (C1); (.25,2.7) coordinate (C2); (1.25,2.7) coordinate (C3); (2.25,2.7) coordinate (C4); (-.75,3.2) coordinate (B1); (.25,3.2) coordinate (B2); (1.25,3.2) coordinate (B3); (2.25,3.2) coordinate (B4); (-.75,4.2) coordinate (T1); (.25,4.2) coordinate (T2); (1.25,4.2) coordinate (T3); (2.25,4.2) coordinate (T4); [gray] (T21) .. controls +(-.6,0) and +(+.6,0) .. (C1); [gray] (T31) .. controls +(0,+.6) and +(0,-.6) .. (C2); [gray] (T12) .. controls +(0,+.8) and +(0,-.8) .. (C4); [black] (C2) – (C3); [black] (T2) – (T1) – (B2); [black] (T3) – (B3); [fill=gray,draw=gray,line width = 1pt] (T21) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T31) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T12) circle (2pt); ı in C1, C2, C4, B1, B2, B4, T1, T2, T3[fill=white,draw=c1,line width = 1pt] (ı) circle (4pt);ı in C3, B3, T4[fill=white,draw=c2,line width = 1pt] (ı) circle (4pt); [line width=1.25pt,xscale=.7,yscale=.7] (1,1) node (2, <1>,1,<12>); (1.55,-1.3) node -1.2em(2, <112>, ,1)=0; (-.1,1.1) coordinate (T21); (-.1,1.8) coordinate (T31); (1.25,.5) coordinate (T12); (-.75,2.7) coordinate (C1); (.25,2.7) coordinate (C2); (1.25,2.7) coordinate (C3); (2.25,2.7) coordinate (C4); (-.75,3.2) coordinate (B1); (.25,3.2) coordinate (B2); (1.25,3.2) coordinate (B3); (2.25,3.2) coordinate (B4); (-.75,4.2) coordinate (T1); (.25,4.2) coordinate (T2); (1.25,4.2) coordinate (T3); (2.25,4.2) coordinate (T4); [gray] (T21) .. controls +(-.6,0) and +(+.6,0) .. (C1); [gray] (T31) .. controls +(0,+.6) and +(0,-.6) .. (C2); [gray] (T12) .. controls +(0,+.8) and +(0,-.8) .. (C4); [black] (C2) – (C3); [black] (T2) – (T1) – (B4); [black] (T3) – (B3); [fill=gray,draw=gray,line width = 1pt] (T21) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T31) circle (2pt); [fill=gray,draw=gray,line width = 1pt] (T12) circle (2pt); ı in C1, C2, C4, B1, B2, B4, T1, T2, T3[fill=white,draw=c1,line width = 1pt] (ı) circle (4pt);ı in C3, B3, T4[fill=white,draw=c2,line width = 1pt] (ı) circle (4pt); . -1.2em(2, <1>,1,<12>)=1/3 -1.2em(2, 2,<11>,1)=-1/3-1.2em(2, 2,1,<11>) § GENERATORS To describe a generating set for _r,k(n), we will give an algorithm for factoring out certain blocks from a diagram. We call a block of the form i, i a vertical bar. A block of a multiset partition which is not a vertical bar or a singleton is called a nonbasic block. We now define a statistic on multiset partitions and prove a lemma about how this statistic interacts with the diagram-like product. Write N() for the multiset of nonbasic blocks of and define the nonbasic weight of to be ()=∑_∈ N(). If _ appears with nonzero coefficient in the product __1__2, then ()≤(_1)+(_2) with equality if and only if N()=N(_1)⊎ N(_2). Consider a snapshot (π_1,π_2) in the product __1__2 and suppose _π_1_π_2=x^c_ν. Suppose _1=κ_,(π_1) and _2=κ_,(π_2). Write =κ_,(ν). 
We call a vertex of π_1,π_2, or ν nonbasic if it is mapped to an element of a nonbasic block under the coloring map. Our goal is now to construct an injective map φ from the set of nonbasic vertices of ν to the nonbasic vertices of π_1 and π_2. Consider a nonbasic vertex v of ν labeled i. There is a corresponding vertex v' of π_1 also labeled i. If v' were the only element in its block, the same would be true for v. Hence, v' must either be a nonbasic vertex or one end of a vertical bar. In the first case, set φ(v)=v'. In the latter case, let j be the label of the other vertex in the vertical bar and let φ(v) be the vertex of π_2 labeled j. If the diagram of π_1 were set atop that of π_2, this would be the vertex on the top of π_2 that the vertical bar lands on. If v is labeled i, the same process is followed swapping which elements are barred (see <Ref> for an illustration of this map). In the first case, it is clear that φ(v) is nonbasic. In the latter case, if φ(v) were basic, it would be either the only element of its block (in which case, v would be a singleton) or part of a vertical bar (in which case, v would be in a vertical bar). Either way, this contradicts the assumption that ν is nonbasic, so we have constructed a map φ from the set of nonbasic vertices of ν to the nonbasic vertices of π_1 and π_2. It's clear that this map is injective, and so the number of nonbasic vertices of ν is less than the total number of nonbasic vertices of π_1 and π_2, giving us the desired inequality. Now to investigate the case of equality we consider how the map φ interacts with the set partition structure. Suppose that φ(v) and φ(w) are in the same block. Without loss of generality, assume they are in the same block of π_1. We then need to consider the following cases: (see <Ref>(i)-(iii) for illustrations of these cases) (i) φ(v) and φ(w) are both on the top of the block. The vertex φ(v) has the same label as v and φ(w) has the same label as w. Because the vertices with these labels are connected, the vertices with the same labels must be connected in the product, so v and w are in the same block. (ii) φ(v) and φ(w) are both on the bottom of the block. The vertices φ(v) and φ(w) each meet a vertical bar whose other end is labeled the same as v and w respectively. Hence, v and w are joined in the product. (iii) Without loss of generality φ(v) is on the top of the block and φ(w) is on the bottom. The vertex φ(v) is labeled the same as v and the vertex on the other end of the vertical bar meeting φ(w) is labeled the same as w. Hence v and w are again joined in the product. Hence, if φ(v) and φ(w) are in the same block, then u and v are in the same block. Suppose that v and w are in the same block but φ(v) and φ(w) are not (see <Ref>(iv)-(v)). Then two nonbasic blocks in the product must have been combined, and any vertex where the two nonbasic blocks meet must not be in the image of φ, meaning φ is not a surjection in this case. When equality holds, the map φ is a bijection and φ(v) in the same block as φ(w) if and only if v and w are in the same block. The map φ then induces a bijection of nonbasic blocks, hence N()=N(_1)⊎ N(_2). We will introduce a sort of factorization of a diagram with a nonbasic block into diagrams with fewer nonbasic blocks, and to that end we define two diagrams / and |_. Informally, the diagram / is the result of removing the block and replacing it with basic blocks, and the diagram |_ is a diagram whose only nonbasic block is . 
Here we show how the diagram can be factored at the nonbasic block . =.75em[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,...,6 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T6) – (B6) – (B1) – (T1); [black] (T1)–(B1); [black] (T2)–(T5)–(B3)–(B2)–(T2); [black] (T6)–(B6)–(B5)–(T6); 1,2c13,4c25,6c31,2,3c14,5,6c2 [ thick, decoration= brace, raise=0.2cm , decorate ] (T2) – (T5) node [pos=0.5,anchor=north,yshift=1cm] ; ] |_ =6[black] (T1)–(B1); [black] (T2)–(T5)–(B3)–(B2)–(T2); [black] (T6)–(B6); 1,2c13,4c25,6c31,2,4,5c13c26c3 / =6[black] (T1)–(B1); [black] (T2)–(B2); [black] (T3)–(B3); [black] (T6)–(B6)–(B5)–(T6); 1,2,4,5c13c26c31,2c13,4,5,6c2 More precisely, let ∈Π̃_2r,k have a nonbasic block . Without loss of generality, assume has more unbarred entries than barred entries. Write / for the multiset partition obtained by replacing with vertical bars i, i for each barred entry i of and a number of singletons 1 making up the difference in the number of barred and unbarred entries in . Write |_ for the multiset partition consisting of , a vertical bar i, i for each vertex labeled i in not in , and enough 1 to make it an element of Π̃_2r,k. We see immediately in <Ref> that although the element _ will appear with nonzero coefficient in _|__/, many other diagrams appear. We now define a partial order on the multiset partition diagrams with respect to which these extra diagrams are smaller than . This will allow us to use the factorization recursively to write _ as a polynomial in simpler diagrams. Write () for the number of vertical bars in . Define a partial order on Π̃_r,k by saying that ≺ if either () <() or ()=() and ()<() Let be a multiset partition and a nonbasic block of which has more unbarred entries than barred entries. Then there is a constant c∈ so that c_|__/-_∈_{_:≺}. Consider a snapshot in the product _|__/ in which each vertex at the bottom of the block in |_ meets a vertical bar in / and each singleton in |_ meets a singleton in /. The resulting diagram from this snapshot is , and so _ appears with nonzero coefficient in the product. Let c be the reciprocal of the coefficient it appears with. By <Ref> any _ appearing in the product must have ()≤(|_)+(/)=() with equality only if N()=N(|_) N(/)=N(). If any vertical bars in came from nonbasic blocks of |_ and / meeting, then would necessarily have a smaller nonbasic weight. Hence, in the case that ()=(), it must either be the case that = or has fewer vertical bars. So, every ≠ that appears in the product is smaller in (Π̃_2r,k,≼). <Ref> can be used recursively to write a diagram as a polynomial in diagrams with a single nonbasic block. At each step, the nonbasic block that the diagram is being factored at is highlighted. Notice that example (ii) ends where example (i) begins. 
(i) [ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T2)–(T3); [black] (T4)–(B4); [black] (B1)–(B3); [draw=none,fill=yellow, opacity=0.5] (1.75,1.45) – (3.25,1.45) – (3.25,1.05) – (1.75,1.05)– cycle; 1,2c13,4c21,2c13c24c3 ] =1/n([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1); [black] (T2)–(T3); [black] (T4)–(B4); 1,2c13,4c21,2,3c14c2 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (B1)–(B3); [draw=none,fill=yellow, opacity=0.5] (3.75,1.45) – (4.25,1.45) – (4.25,.05) – (3.75,.05)– cycle; 1,2,3c14c21,2c13c24c3 ]) =1/n([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1); [black] (T2)–(T3); [black] (T4)–(B4); 1,2c13,4c21,2,3c14c2 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1); [black] (T2)–(B2); [black] (T3)–(B3); [black] (T4)–(B4); 1,2,3c14c21,2,3c14c3 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (B1)–(B3); 1,2,3c14c31,2c13c24c3 ]) (ii)[ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (T2)–(T3); [black] (T1)–(B1)–(B3)–(T1); [draw=none,fill=yellow, opacity=0.5] (.75,1.55) – (3.5,.05) – (.75,.05) – cycle; 1,2c13,4c21,2c13c24c3 ] =3/n^2([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [draw=none,fill=yellow, opacity=0.5] (3.75,1.45) – (4.25,1.45) – (4.25,.05) – (3.75,.05)– cycle; [black] (T1)–(B1); [black] (T4)–(B4); [black] (T2)–(T3); 1,2c13,4c21,2,3c14c3 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (T1)–(B1)–(B3)–(T1); 1,2,3c14c31,2,3c13c24c3 ])-2/n([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (T2)–(T3); [black] (B1)–(B3); 1,2c13,4c21,2c13c24c3 ]) =3/n^4([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T1)–(B1); [black] (T2)–(B2); [black] (T3)–(B3); [black] (T4)–(B4); 1,2c13,4c21,2c13c24c3 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] 
(T1)–(B1); [black] (T4)–(B4); [black] (T2)–(T3); 1,2c13c24c31,2,3c14c3 ])([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [black] (T4)–(B4); [black] (T1)–(B1)–(B3)–(T1); 1,2,3c14c31,2,3c13c24c3 ])-2/n([ [xscale=.4,yscale=.4,line width=1.25pt] ı in 1,...,4 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T4) – (B4) – (B1) – (T1); [draw=none,fill=yellow, opacity=0.5] (1.75,1.45) – (3.25,1.45) – (3.25,1.05) – (1.75,1.05)– cycle; [black] (T4)–(B4); [black] (T2)–(T3); [black] (B1)–(B3); 1,2c13,4c21,2c13c24c3 ]) We now introduce our generators. For i,j∈[k] and ∈ W_r-1,k, write P_i,j, for the diagram-like basis element indexed by the set partition with singleton blocks i and j and vertical bars whose colors have multiplicity given by . Now fix i∈[r], ,∈ W_i,k and ∈ W_r-i,k. Write R_,, for the diagram-like basis element for the set partition with a block whose colors on top and bottom are given by and respectively as well as vertical bars with multiplicities given by . Here we show an example of each type of generator. P_2,1,(2,0,1,2)=[xscale=.5,yscale=.5,line width=1.25pt,baseline=1.8ex] ı in 1,2,3,4,5,6 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T6) – (B6) – (B1) – (T1); ı in 2,...,6[black] (Tı)–(Bı);2,3c11c24c35,6c41,2,3c14c35,6c4∈_6,4(n) R_(2,0,1),(0,2,1),(0,0,2)=[xscale=.5,yscale=.5,line width=1.25pt,baseline=1.8ex] ı in 1,2,3,4,5 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T5) – (B5) – (B1) – (T1); [black] (T1)–(T3)–(B3)–(B1)–(T1); [black] (T4)–(B4); [black] (T5)–(B5); 1,2c13,4,5c31,2c23,4,5c3∈_5,3(n) As a base case, we show how these elements generate the diagrams with no nonbasic blocks. The elements {P_i,j,:i,j∈[k],∈ W_r-1,k} generate each _ where has no nonbasic blocks. Fix ∈ W_r,k. For m≤_1, write Q_m for the diagram-like basis element indexed by the multiset partition with m pairs of singletons 1 and 1 along with vertical bars 1,1^_1-m,2,2^_2,…,k, k^_k. Notice that Q_1=P_1,1,' where ' is with the first entry decremented by one. Now consider the product Q_1Q_m. The singleton at the bottom of Q_1 will meet one of the m singletons at the top of Q_m in m/_1 of the snapshots. In the remaining snapshots, the singleton meets a vertical bar and breaks it into a singleton, resulting in Q_m+1: Q_1Q_m=m/_1nQ_m+_1-m/_1Q_m+1. Hence, the elements Q_m for 1≤ m≤_1 are generated by the elements P_i,j, (see <Ref>(i)). Suppose is a multiset partition with no nonbasic blocks and a singleton i with i≠ 1. Let ' be the result of replacing that i with 1. Then for ∈ W_r-1,k chosen so that P_i,1,_' is nonzero, this product includes _ along with diagrams with fewer vertical bars (see <Ref>(ii)). Via this process and the corresponding process for singletons i, we can write any basic diagram with a non-one singleton as a polynomial in diagrams with fewer non-one singletons or fewer vertical bars. Repeating this process for any diagram in the resulting polynomial with a non-one singleton terminates in a polynomial in diagrams with all basic blocks and singletons of the form 1 or 1. These are just the Q_m above for different choices of , so the {P_i,j,:i,j∈[k],∈ W_r-1,k} generate the diagrams with all basic blocks. 
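For readers who want to experiment with the generators, the elements P_i,j,γ and R_α,β,γ are easy to build in code once a multiset partition diagram is encoded as a multiset of blocks, each block recorded by its multiset of top colors and its multiset of bottom colors. The encoding and function names below are my own choices, not notation from the paper; colors are the integers 1,…,k, weak compositions are tuples, and barred entries are implicit in the "bottom" half of each block.

```python
def canonical(blocks):
    """Canonical form of a multiset of blocks; each block is a pair
    (multiset of top colors, multiset of bottom colors), stored as sorted tuples."""
    return tuple(sorted((tuple(sorted(top)), tuple(sorted(bot))) for top, bot in blocks))

def vertical_bars(gamma):
    """gamma[c] vertical bars of color c+1 (a block {c+1, barred c+1}) for each color c."""
    return [((c + 1,), (c + 1,)) for c, mult in enumerate(gamma) for _ in range(mult)]

def make_P(i, j, gamma):
    """P_{i,j,gamma}: a singleton of color i on top, a singleton of color j on the
    bottom, and vertical bars whose color multiplicities are given by gamma."""
    return canonical([((i,), ()), ((), (j,))] + vertical_bars(gamma))

def make_R(alpha, beta, gamma):
    """R_{alpha,beta,gamma}: one block whose top and bottom color multiplicities are
    alpha and beta, plus vertical bars with multiplicities gamma."""
    top = tuple(c + 1 for c, m in enumerate(alpha) for _ in range(m))
    bot = tuple(c + 1 for c, m in enumerate(beta) for _ in range(m))
    return canonical([(top, bot)] + vertical_bars(gamma))

if __name__ == "__main__":
    # The two example generators above, as multisets of (top, bottom) color pairs.
    print(make_P(2, 1, (2, 0, 1, 2)))               # an element of MP_{6,4}(n)
    print(make_R((2, 0, 1), (0, 2, 1), (0, 0, 2)))  # an element of MP_{5,3}(n)
```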
Finally, we use <Ref> and <Ref> to prove that the elements P_i,j, and R_,, defined above generate the algebra _r,k(n). The algebra _r,k(n) is generated by the set Θ={P_i,j,:i,j∈[k],∈ W_r-1,k}∪{R_,,:,∈ W_i,k,∈ W_r-i,k for some i∈[r]}. If has more than one nonbasic block, then we apply <Ref> to write _ as a polynomial in elements _ where ≺. We can iterate this process on each _ in this polynomial where has more than one nonbasic block (see <Ref>). Because the poset (Π̃_2r,k,≼) is finite, this iteration terminates with _ written as a polynomial in elements _ where each has at most one nonbasic block. Hence, it suffices to show that Θ generates the elements _ where has at most one nonbasic block. By <Ref>, Θ generates the elements _ where has no nonbasic blocks. We now prove that Θ generates the diagrams with a single nonbasic block by induction on the number of vertical bars. If has a single nonbasic block and no vertical bars, it has a straightforward factorization as _=__1__2__3 where _2 is obtained from by connecting all vertices into a single block, _1 has a vertical bar for each vertex at the top of the nonbasic block of and a pair of identically colored singletons for each singleton atop , and _3 is obtained similarly from the bottom of (see <Ref>(i)). Notice that __1,__2,__3∈Θ. For with a single nonbasic block and s vertical bars, one can try a modified version of the above factorization in which a copy of each vertical bar in is put in _1, _2, and _3 (see <Ref>(ii)). The element _ appears in the product __1__2__3 when each singleton in _1 and _3 meets the nonbasic block in _2. When these singletons instead meet vertical bars in _2, the resulting diagram has fewer than s vertical bars. By induction on the number of vertical bars, the set Θ generates the diagrams with at most one nonbasic block and hence the algebra _r,k(x). § CHANGE OF BASIS In this appendix, we give a formula for the change-of-basis from Orellana and Zabrocki's orbit basis to the diagram-like basis. While the earlier sections dealt with the centralizer algebras P_r(n) and _r,k(n), this appendix considers the abstract algebras P_r(x) and _r,k(x) over (x) for x an indeterminate. These results can all be applied to the centralizer algebra case by specializing x to an integer n≥ 2r. In analogy with the construction of the diagram-like basis as a projection of the diagram basis of P_r(n), we can define the orbit-like basis by projecting the orbit basis of P_r(n): _=s__π s_ where π∈Π_2r is any set partition so that κ_,(π)=. For a multiset partition , define m_() to be the multiplicity of the block in and write m()!=∏_∈ distinct m_() where the product is over distinct blocks of . For ∈Π̃_2r,k whose unbarred entries have multiplicity given by ∈ W_r,k, write ω()=m()!/_∏_∈m(|_[ k])!. Then the map φ:P̃_r,k(x) →_r,k(x) _ ↦ω()_ where {_:∈Π̃_2r,k} is the orbit basis of Orellana and Zabrocki is an isomorphism of algebras. Because each ω() is nonzero, it's clear that this map is an isomorphism of vector spaces–the remainder of this appendix is devoted to proving that it respects the multiplication. First, we observe that such an isomorphism gives us the following formula for the change of basis from Orellana and Zabrocki's orbit basis to the diagram-like basis: _ =φ(s_ L_π s_) =φ(∑_ν≤π s__ν s_) =∑_≤ c_,φ(_) =∑_≤ c_,ω()_ Where for a fixed π such that κ_,(π)=, c_, is the number of ν≤π such that κ_,(ν)=. 
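For small diagrams, the coefficients c_S̃,Q̃ can be computed by brute force: fix a set partition π that colors to S̃, enumerate all coarsenings ν≤π, color each one, and count how many give Q̃. The sketch below does exactly this; the data representation (vertices as pairs (label, 'top') or (label, 'bot'), colorings as dictionaries) and the function names are my own choices rather than the paper's notation.

```python
def set_partitions(items):
    """All set partitions of a list, returned as lists of lists."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def coarsenings(pi):
    """All nu <= pi, obtained by merging groups of blocks of pi into single blocks."""
    blocks = [frozenset(b) for b in pi]
    for grouping in set_partitions(blocks):
        yield frozenset(frozenset().union(*group) for group in grouping)

def paint(partition, color_top, color_bot):
    """kappa_{alpha,beta}: replace each vertex (i, 'top') by color_top[i] and each
    (i, 'bot') by color_bot[i]; return a canonical multiset of colored blocks."""
    colored = []
    for block in partition:
        top = tuple(sorted(color_top[i] for i, side in block if side == 'top'))
        bot = tuple(sorted(color_bot[i] for i, side in block if side == 'bot'))
        colored.append((top, bot))
    return tuple(sorted(colored))

def change_of_basis_coefficient(pi, Q, color_top, color_bot):
    """c_{S,Q}: the number of coarsenings nu <= pi with kappa_{alpha,beta}(nu) = Q."""
    return sum(1 for nu in coarsenings(pi) if paint(nu, color_top, color_bot) == Q)

if __name__ == "__main__":
    # pi = {{1, bar 1}, {2, bar 2}} with every vertex colored 1 (so r = 2, k = 1).
    pi = [{(1, 'top'), (1, 'bot')}, {(2, 'top'), (2, 'bot')}]
    colors = {1: 1, 2: 1}
    Q = (((1, 1), (1, 1)),)  # the fully merged colored diagram
    print(change_of_basis_coefficient(pi, Q, colors, colors))  # prints 1
```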
We expand the following diagram-like basis element in the orbit basis of Orellana and Zabrocki: _[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3); =s_(2,2)_[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);s_(2,2) =s_(2,2)( _[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);+_[black] (T1)–(B1)–(B2)–(T2)–(T1);[black] (T3)–(T4)–(B4)–(B3)–(T3);+_[black] (T1)–(B1);[black] (T2)–(T4)–(B4)–(B2)–(T2);+_[black] (T1) .. controls +(0,-.4) and +(0,-.4) .. (T3) – (T4) – (B4) – (B3) .. controls +(0,+.4) and +(0,+.4) .. (B1) – (T1);[black] (T2) – (B2);+_[black] (T1)–(T4)–(B4)–(B1)–(T1);)s_(2,2) _[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3); =_[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);+_[black] (T1)–(B1)–(B2)–(T2)–(T1);[black] (T3)–(T4)–(B4)–(B3)–(T3);+2_[black] (T1)–(B1);[black] (T2)–(T4)–(B4)–(B2)–(T2);+_[black] (T1)–(T4)–(B4)–(B1)–(T1); ↦ω(_1)_[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);+ω(_2)_[black] (T1)–(B1)–(B2)–(T2)–(T1);[black] (T3)–(T4)–(B4)–(B3)–(T3);+2ω(_3)_[black] (T1)–(B1);[black] (T2)–(T4)–(B4)–(B2)–(T2);+ω(_4)_[black] (T1)–(T4)–(B4)–(B1)–(T1); =2!/3!_[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);+2!/3!_[black] (T1)–(B1)–(B2)–(T2)–(T1);[black] (T3)–(T4)–(B4)–(B3)–(T3);+22!/3!_[black] (T1)–(B1);[black] (T2)–(T4)–(B4)–(B2)–(T2);+3!/3!_[black] (T1)–(T4)–(B4)–(B1)–(T1); =1/3_[black] (T1)–(B1);[black] (T2)–(B2);[black] (T3)–(T4)–(B4)–(B3)–(T3);+1/3_[black] (T1)–(B1)–(B2)–(T2)–(T1);[black] (T3)–(T4)–(B4)–(B3)–(T3);+2/3_[black] (T1)–(B1);[black] (T2)–(T4)–(B4)–(B2)–(T2);+_[black] (T1)–(T4)–(B4)–(B1)–(T1); §.§ Preliminary Definitions and Enumerative Results The product formula for the orbit basis {_π:π∈Π_2r} for P_r(n) use set partitions which include unbarred, barred, and double-barred elements. To that end, we make the following definitions. For π,ν∈Π_2r, write Γ^π_ν for the set of set partitions γ of [r]∪[ r]∪[ r] such that γ|_[r]∪[ r]=π and γ|_[ r]∪[ r]=ν where ν is the result of adding a bar to each element in ν. For such a γ, write β_γ={S∈γ:∀ i∈ S, i∈[ r]} for the set of blocks of γ contained entirely in the middle. Write b_γ(x)=(x-ℓ(γ|_[r]∪[ r]))_ℓ(β_γ) where (a)_n=a(a-1)⋯(a-n+1). The product formula for the orbit basis is: _π_ν =∑_γ∈Γ^π_νb_γ(x)_γ where the γ in the subscript is understood to be an element of Π_2r by taking the restriction γ|_[r]∪[ r] and removing a bar from each double-barred entry (see Theorem 4.14 of <cit.> for details). For the orbit basis {_:∈Π̃_2r,k} of _r,k(n) we make similar definitions. For ,∈Π̃_2r,k, write Γ̃^_ for the set of multiset partitions with r elements each from [k], [ k], and [ k] such that |_[k]∪[ k]= and |_[ k]∪[ k]=. For such a , write β_=∈:∀ i∈, i∈[ k]. For any multiset partition , write m_() for the multiplicity of in and m()!=∏_∈ distinctm_()! Finally, write b̃_(x) =(x-ℓ(|_[k]∪[ k]))_ℓ(β_)/m(β_)! a_ =∏_∈|_[k]∪[ k] distinctℓ(_)!/m(_)! where _=∈:|_[k]∪[ k]=. Then, the product for the orbit basis of _r,k(x) is given by: __ =∑_∈Γ̃^_ a_b̃_(x) _ where the in the subscript is understood to be an element of P̃ĩ_2r,k by taking the restriction γ|_[k]∪[ k] and removing a bar from each double-barred entry (see Section 3 of <cit.> for details). To handle these set and multiset partitions on three alphabets combinatorially, we will want to extend the notation of our painting function κ_, to them. 
In particular, κ_,,(γ) will be the result of replacing the unbarred, barred, and double-barred elements according to ,, and respectively. It will be useful later to write Γ_ν^π() for the set of γ∈Γ_ν^π such that κ_,,(γ)=. For π,ν∈Π_2r with π|_[ r]=ν|_ r, let π∗ν∈Γ^π_ν be the set partition obtained by placing the diagram of π atop the diagram of ν and identifying the corresponding vertices in the center. This set partition plays a central role because any γ∈Γ^π_ν only differs from π∗ν by connecting some blocks on the very top and very bottom. That is, if we denote by (γ) the result of splitting each block of γ which does not contain a vertex in the middle into it's restriction to [r] and restriction to [ r], then γ∈Γ^π_ν if and only if (γ)=π∗ν. To analyze the orbit basis of _r,k(n), we want to investigate how π∗ν acts when ν is acted upon by some permutation. This inspires the definition of a number of subgroups of permutations. For ρ a set partition of [r] and ∈ W_r,k, define _^ρ={σ∈_:σ.ρ=ρ} where the action σ.ρ applies σ to each element of each block of ρ. This subgroup factors as a semidirect product S_^ρ=X_^ρ Y_^ρ where the permutations in X_^ρ permute whole blocks and the permutations in Y_^ρ permute only within blocks of ρ. Given π∈Π_2r, consider the subgroup A_,^π of X_^π|_[ r] which only permutes blocks in π|_[ r] if they are part of blocks of π that are painted identically by κ_,. Let B_,^ν be the analogous subgroup for the restrictions to the top. More precisely, A^π_, ={x∈ X_^π|_[ r]:∀ S,T∈π, S|_[ r]=x(T|_[ r])κ_,(S)=κ_,(T)} B^ν_, ={x∈ X_^ν|_[r]:∀ S,T∈ν, S|_[r]=x(T|_[r])κ_,(S)=κ_,(T)} Put another way, these permutations σ of the blocks in the middle of π∗ν do not change the resulting multiset partition. That is, κ_,,(π∗ν)=κ_,,(π∗σ.ν) for σ∈ A_,^π or σ∈ B_,^ν. Finally, we collect up formulas for the sizes of these subgroups. To that end, it will be useful to consider the following multiset partitions obtained by restricting to particular blocks. _+ ={∈:∀ i∈ S,i∈[k]} _- ={∈:∀ i∈ S,i∈[ k]} _± ={∈:∀ i∈ S,i∈[k]∪[ k]} We think of _+ (resp. _-) as the blocks contained entirely in the top (resp. bottom) of and _± as the blocks of which have no vertex in the middle row. Let ,,∈ W_r,k, π,ν∈Π_2r, =κ_,(π),=κ_,(ν) and ∈Γ̃^_. Then, X^ρ_ =m(κ_(ρ))! Y^ρ_ =∏_∈m(|_[ k])! A^π_, =m()!/m(_+)! B^ν_, =m()!/m(_-)! A^π_,∩ B^ν_, =m()!/m(_±)! The first two equalities are clear. The next two follow from the observation that the only blocks of that contribute to A_,^π are the ones which touch the bottom, so we cancel out the contribution of those contained entirely in the top. For the last equality, we can think of A_,^π∩ B^ν_, as the permutations of the middle of π∗ν which only permute blocks which are the restrictions of blocks painted the same in κ_,,(π∗ν). The number of such permutations is m(κ_,,(π∗ν))!/m(κ_,,(π∗ν)_±)!. Because only differs from κ_,,(π∗ν) by blocks which don't touch the center (whose contributions are all canceled) we can make the substitution of for κ_,,(π∗ν). §.§ Proof of the Isomorphism Let ,∈Π̃_2r and let ,,',∈ W_r,k be such that there exist π,ν∈Π_2r so that κ_,(π)= and κ_',(ν)=. Note that when ≠', we have __=0 and __=0, so we need only address the case when =': __ =1/_∑_σ∈_s__π_σ.νs_ =1/_∑_σ∈_∑_γ∈Γ^π_σ.ν b_γ(x) s__γ s_ To simplify notation, we will write =κ_,,(γ). Note that b_γ(x)=b̃_(x)m(β_)!, so we can rewrite the expression as follows. =1/_∑_σ∈_∑_γ∈Γ^π_σ.νb̃_(x)m(β_)! 
_ We then partition the sum over the possible multiset partitions that could arise from γ in this sum, noting that κ_,(σ.ν)=, and then we swap the order of summation =1/_∑_∈Γ̃_^b̃_(x)m(β_)!(∑_σ∈_∑_γ∈Γ^π_σ.ν =1)_ Applying φ to both sides, we see that the coefficient of _ in φ(__) is:φ(__)|__ =ω()/_∑_∈Γ^_ |_[k]∪[ k]=b̃_(x)m(β_)!(∑_σ∈_Γ^π_σ.ν()) We now compare this to the same coefficient in φ(_)φ(_).φ(_)φ(_)|__ =ω()ω()∑_∈Γ̃^_ |_[k]∪[ k]=a_b̃_(x) The goal is now to show that these two rather unpleasant quantities are the same. We will leverage their similarities—namely that they sum over the same objects and each include a factor of b̃_(x) in each summand—to simplify the task. It would suffice to show the following equality: ω()/_m(β_)!(∑_σ∈_Γ^π_σ.ν()) =ω()ω()a_ By using the definition of ω() and noticing that |_[ k]=|_[ k], we are able to rearrange the above equality to the following: ∑_σ∈_Γ^π_σ.ν() =a_m()!m()!/m()!m(β_)!∏_∈m(|_[ k])! Then using the formulas in <Ref> for the sizes of the subgroups, we get: =a_m(_+)!m(_-)!/m()!m(β_)!m()!/m(_±)!A^π_,B^ν_,Y_^π|_[ r]/A_,^π∩ B_,^ν =a_m(_+)!m(_-)!/m()!m(β_)!m()!/m(_±)!A_,^π B_,^ν Y_^π|_[ r] And finally, using the fact that =|_[k]∪[ k], and so for a block ∈|_[k]∪[ k] we have that ℓ(_)=m_(): =(m()!/m(β_)!∏_∈|_[k]∪[ k] distinct1/m(_)!)m(_+)!m(_-)!/m(_±)!A_,^π B_,^ν Y_^π|_[ r] =m_+()m_-()/m_±()A_,^π B_,^ν Y_^π|_[ r] The proof of this equality will be carried out in two lemmas. First, <Ref> will show that the set of σ such that Γ_σ.ν^π()≠0 is given by a translation of the product A_,^π B_,^ν' Y_^π|_[ r] where κ_,(ν')=κ_,(ν) so B_,^ν'≅ B_,^ν. Finally, <Ref> will show that for each such σ, Γ_σ.ν^π()=m_+()m_-()/m_±(). Because this quantity is independent of σ, the value of the sum is simply the product of the two quantities. To set the stage for the final two lemmas, we first need to investigate a particular class of permutations in σ∈_^ρ. Let π, ν∈Π_2r such that π|_[ r]=ν|_[ r]=ρ and fix ,,∈ W_r,k. Then {σ∈_^ρ:κ_,,(π∗σ.ν)=κ_,,(π∗ν)}=A_,^π B_,^νY_^ρ. First, note that σ factors as σ=xy for x∈ X_^ρ and y∈ Y_^ρ. Because y.ν=ν for all ν, we need only determine which x can be factored into a product of an element of A_,^π and an element of B_,^ν. One containment is straightforward. Suppose x=x_1x_2 with x_1∈ A_,^π and x_2∈ B_,^ν. Consider a block in π∗ν. Although the bottom half of this block may be different in π∗ x_2.ν, the condition that x_2∈ B_,^ν guarantees that it is not different in κ_,,(π∗ x_2.ν). Hence, κ_,,(π∗ x_2.ν)=κ_,,(π∗ν) for any π∈Π_2r with π|_[ r]=ν|_[ r]. Analogously, we see that κ_,,(π.x_1^-1∗ν)=κ_,,(π∗ν) for any ν∈Π_2r with π|_[ r]=ν|_[ r]. Hence, κ_,,(π∗ x_1x_2.ν) =κ_,,(π.x_1^-1∗ x_2.ν) =κ_,,(π.x_1^-1∗ν) =κ_,,(π∗ν) For the other containment, suppose that κ_,,(π∗ν)=κ_,,(π∗ x.ν)=. We use the following convention for indexing the blocks of π∗ν. Write ρ={M_1<…< M_ℓ} for the restrictions of the blocks of π∗ν to [ r] in last-letter order, and write S_i for the block of π∗ν with M_i⊆ S_i. An element σ∈ X_^ρ permutes the blocks of ρ and hence the indices [ℓ]. For S_i∈π∗ν, it will be helpful to write σ(S_i) for the block in π∗σ.ν such that S_i|_[r]∪[ r]=σ(S_i)|_[r]∪[ r]. Equivalently, σ(S_i) is obtained by replacing the bottom row S_i|_[ r] with S_σ^-1(i)|_[ r]. For a fixed σ∈ X_^ρ and a multiset from [k]∪[ k]∪[ k], write W_σ,={i:κ_,,(σ(S_i))=} For an example of these sets, see <Ref>. The following two properties can be observed in this example—we show that they hold in general. 1. 
For a fixed ∈κ_,(π) such that ∩ [k]≠∅, _∈ |_[k]∪[k]=W_1,=_∈ |_[k]∪[k]=W_x, This follows just about immediately from the fact that σ(S_i)|_[r]∪[ r]=S_i|_[r]∪[ r] for any σ∈_^ρ. _∈ |_[k]∪[k]=W_x, ={i:κ_,,(x(S_i))|_[k]∪[ k]=} ={i:κ_,,(S_i)|_[k]∪[ k]=} =_∈ |_[k]∪[k]=W_1, Note that the assumption that κ_,,(π∗ν)=κ_,,(π∗ x.ν)= is necessary here so that the unions on either side of the equality are over the same set of . 2. For a fixed ∈ with ∩[k]≠∅, |W_x,|=|W_1,| For σ∈_^ρ and ∈κ_,,(π∗σ.ν) with ∩[ k]≠∅, we have W_σ, ={i:κ_,,(σ(S_i))=} =m_(κ_,,(π∗σ.ν)) The statement then follows from the assumption that κ_,,(π∗ν)=κ_,,(π∗ x.ν)=. These two facts allow us to construct a permutation in the following way. Fixing ∈κ_,(π) such that ∩ [k]≠∅, there exists a permutation h_ of {i:κ_,,(S_i)|_[k]∪[ k]=} such that h_(W_x,)=W_1, for all with |_[k]∪[k]=. Because h_ by definition permutes only blocks which restrict to the same block in κ_,(π), we see h_ is an element of A_,^π. Now define h∈ A_,^π by h:=∏_∈^, ∩[ k]≠∅h_ It remains only to show that hx∈ B_,^ν. Fix i∈[ℓ] and let =κ_,,(hx(S_i)). =κ_,,(hx(S_i)) =κ_,(S_i|_[r]∪[ r])∪κ_(S_(hx)^-1(i)) Then by the fact that h∈ A_,^π, =κ_,(S_h^-1(i)|_[r]∪[ r])∪κ_(S_x^-1(h^-1(i))) =κ_,,(x(S_h^-1(i))) Then h^-1(i)∈ W_x,, so i∈ h(W_x,)=W_1,. Hence, _i==κ_,,(hx(S_i)) for each i and so hx∈ B^π_,, meaning x∈ A_,^π B_,^ν. Thus σ=xy∈ A_,^π B_,^ν Y_^ρ. We label the blocks of ρ in the middle of the diagram π∗ν in last-letter order and let x=(1 2)(3 5 6). We then apply these labels to our diagrams of κ_,,(π∗ν) and κ_,,(π∗ x.ν). κ_,,(π∗ν) =[ [xscale=.75,yscale=.75,line width=1.25pt] ı in 1,2,3,4,5,6,7,8,9,10,11 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Mı); (ı,-.75) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T11) – (B11) – (B1) – (T1); [black] (M1)–(T1)–(T2)–(M3); [black] (M2)–(T3)–(T4)–(M4); [black] (T5)–(M5); [black] (T6)–(M6); [black] (T7)–(M7); [black] (M8)–(T8) .. controls +(0,-0.6) and +(0,-0.6) .. (T10); [black] (M9)–(T9) .. controls +(0,-0.6) and +(0,-0.6) .. (T11); [black] (B1)–(M1) .. controls +(0,-0.4) and +(0,-0.4) .. (M3); [black] (M2)–(B2)–(B3)–(M4); [black] (M5)–(B5)–(B4); [black] (M6)–(B6)–(B7); [black] (M7)–(B8); [black] (M8)–(B9); [black] (B10)–(B11); 1,2,3,4,5,6,7c1; 8,9c2; 10,11c3; 1,2c1; 3,4,5,6,7,8,9c2; 10,11c3; 1,2c1; 3,4,5,6,7c2; 8,9,10,11c3; at (3.25,.5) 1; at (4.25,.5) 2; at (5.25,.5) 3; at (6.25,.5) 4; at (7.25,.5) 5; at (8.25,.5) 6; at (9.25,.5) 7; at (10.25,.5) 8; at (11.25,.5) 9; ] κ_,,(π∗ x.ν) = [ [xscale=.75,yscale=.75,line width=1.25pt] ı in 1,2,3,4,5,6,7,8,9,10,11 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Mı); (ı,-.75) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T11) – (B11) – (B1) – (T1); [black] (M1)–(T1)–(T2)–(M3); [black] (M2)–(T3)–(T4)–(M4); [black] (T5)–(M5); [black] (T6)–(M6); [black] (T7)–(M7); [black] (M8)–(T8) .. controls +(0,-0.6) and +(0,-0.6) .. (T10); [black] (M9)–(T9) .. controls +(0,-0.6) and +(0,-0.6) .. (T11); [black] (B1)–(M2) .. controls +(0,-0.4) and +(0,-0.4) .. (M4); [black] (M1)–(B2)–(B3)–(M3); [black] (M7)–(B5)–(B4); [black] (M6)–(B6)–(B7); [black] (M8)–(B8); [black] (M5)–(B9); [black] (B10)–(B11); 1,2,3,4,5,6,7c1; 8,9c2; 10,11c3; 1,2c1; 3,4,5,6,7,8,9c2; 10,11c3; 1,2c1; 3,4,5,6,7c2; 8,9,10,11c3; at (3.25,.5) 1; at (4.25,.5) 2; at (5.25,.5) 3; at (6.25,.5) 4; at (7.25,.5) 5; at (8.25,.5) 6; at (9.25,.5) 7; at (10.25,.5) 8; at (11.25,.5) 9; ] Now we can read off that x(S_2)={3,4,2,4,1} and κ_,,(x(S_2))=1,1,1,2,1 by simply looking at the blocks labeled 2 in the diagrams. 
Consider =1,2∈κ_,(π). There are two distinct blocks in κ_,,(π∗ν) which restrict to this : =1,2,2,2 and '=1,2,3. W_1,={3,4} W_1,'={5} W_x,={4,5} W_x,'={3} Notice that the sets in each column have the same size and the union across rows is always {3,4,5}. Fix π,ν∈Π_2r such that π|_[ r]=ν_[ r]=ρ and ,,∈ W_r,k. Let ∈Γ̃_κ_,(ν)^κ_,(π). The set of σ∈ S_ for which there exists a γ∈Γ_σ.ν^π() is given by A_,^π B_,^σ_0.ν Y_^ρσ_0 for some σ_0∈ S_. Let π,ν∈Π_2r. If there exists γ∈Γ^π_ν(), then ()=κ_,,(π∗ν). Conversely if ()=κ_,,(π∗ν) we can construct a γ∈Γ_ν^π() as follows. For each block of broken into T̃ in the top and B̃ in the bottom, find blocks T and B in π and ν for which κ_,(T)=T̃ and κ_,(B)=B̃ and connect these blocks in π∗ν. After connecting such a pair for each block broken in μ, we have constructed the desired γ. =[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Mı); (ı,-.75) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T6) – (B6) – (B1) – (T1); [black, dotted] (T1) .. controls +(+0.50,0) and +(+0.50,0) .. (B1); [black] (T1) – (T2); [black] (B1) – (B2) – (B3); [black] (T3) – (M3) – (M2) – (T3); [black] (T4) – (T5); [black, dotted] (T5) .. controls +(+0.50,0) and +(+0.50,0) .. (B5); [black] (B4) – (B5); [black] (M4) – (M5) – (B6); 1,2,3c1; 4c2; 5,6c3; 1,2c1; 3,4c2; 5,6c3; 1,2c1; 3c2; 4,5,6c3; ] π∗ν =[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Mı); (ı,-.75) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T6) – (B6) – (B1) – (T1); [gray] (T2)–(T3); [gray] (B1) – (B3); [gray] (T1) – (M1) .. controls +(0,+0.50) and +(0,+0.50) .. (M3); [gray] (T4) – (T5); [gray] (B5) – (B6); [gray] (B4) – (M4) – (M5); 1,2,3,4,5,6gray; 1,2,3,4,5,6gray; 1,2,3,4,5,6gray; ] γ =[ [xscale=.5,yscale=.5,line width=1.25pt] ı in 1,2,3,4,5,6 (ı,1.25) coordinate (Tı); (ı,.25) coordinate (Mı); (ı,-.75) coordinate (Bı); [fill= black!12,draw=black!12,line width=4pt] (T1) – (T6) – (B6) – (B1) – (T1); [gray] (T2)–(T3); [gray, dotted] (T3) .. controls +(+0.50,0) and +(+0.50,0) .. (B3); [gray] (B1) – (B3); [gray] (T1) – (M1) .. controls +(0,+0.50) and +(0,+0.50) .. (M3); [gray] (T4) – (T5); [gray, dotted] (T5) .. controls +(+0.50,0) and +(+0.50,0) .. (B5); [gray] (B5) – (B6); [gray] (B4) – (M4) – (M5); 1,2,3,4,5,6gray; 1,2,3,4,5,6gray; 1,2,3,4,5,6gray; ] Hence, we are looking for the set of σ∈_ for which κ_,,(π∗σ.ν)=(). Note that π∗σ.ν only makes sense when σ.ν|_[r]=ρ, so we need only consider σ∈_^ρ. Choose σ_0 so that ()=κ_,,(π∗σ_0.ν) and write ν'=σ_0.ν. Then the desired set of σ is precisely the permutations σ∈ S_^ρ such that κ_,,(π∗(σσ_0^-1).ν')=κ_,,(π∗ν') Lemma <ref> tells us that this set is precisely those when σσ_0^-1∈ A_,^π B_,^ν'Y_^ρ as desired. Let μ be a set partition of [m] and ∈ W_m,ℓ such that the blocks of κ_(μ)= are all sets. Then the number of set partitions γ of [m] such that κ_(γ)= is _1!…_ℓ!/m()! First, observe that {γ:κ_(γ)=}=_.μ. Because the blocks of are sets, no permutation of _ swaps elements within a block of μ. Hence, the permutations which fix μ are precisely the ones that swap whole blocks, and the result is obtained by the orbit-stabilizer formula. Fix π,ν∈Π_2r and suppose ∈Γ̃^κ_,()_κ_,(). If Γ_ν^π()≠∅, then Γ^π_ν()=m(_+)!m(_-)!/m(_±)!. Let γ∈Γ_ν^π(). Because γ differs from π∗ν by connecting some number of blocks in the very top and very bottom, we can recover γ uniquely from the partial matching of blocks of (π∗ν)_± induced by γ_±. 
The question then becomes how many set partitions ρ on the blocks of (π∗ν)_± there are such that κ_,(ρ)=_±. Because we are only connecting blocks on top to blocks on bottom, we can apply <Ref> where the ℓ colors are the different multisets which appear in _±. The number is then m(_±|_[k])!m(_±|_[ k])!/m(_±)! =m(_+)!m(ν_-)!/m(_±)! where =κ_,(π) and =κ_,(ν). alpha
http://arxiv.org/abs/2307.00846v1
20230703083944
Global stabilization of sterile insect technique model by feedback laws
[ "Kala Agbo Bidi", "Luis Almeida", "Jean-Michel Coron" ]
math.OC
[ "math.OC" ]
The Sterile Insect Technique or SIT is presently one of the most ecological methods for controlling insect pests responsible for disease transmission or crop destruction worldwide. This technique consists of releasing sterile males into the insect pest population. This approach aims at reducing fertility in the population and, consequently, significantly reducing the native insect population after a few generations. In this work, we study the global stabilization of a pest population at the extinction equilibrium by the SIT method. We construct explicit feedback laws that stabilize the model and carry out numerical simulations to show the efficiency of our feedback laws. The different feedback laws are also compared, taking into account their possible implementation in field interventions.
§ INTRODUCTION Mosquitoes are known to transmit a variety of diseases such as malaria, dengue, yellow fever, Zika virus, and others. These diseases are responsible for a significant number of deaths worldwide. According to the World Health Organization (WHO), malaria alone caused approximately 409,000 deaths in 2019, with the majority of deaths occurring in sub-Saharan Africa. Dengue and Zika virus, also transmitted by mosquitoes, are estimated to cause hundreds of thousands of cases and thousands of deaths each year. The precise number of deaths caused by mosquitoes is difficult to determine because many cases are not reported or not diagnosed. Unfortunately, more than half of the world's population is exposed to mosquito-borne diseases. Although there are many effective vector control measures for malaria and arboviral diseases, some of them can have a negative impact on the environment and may result in ecological damage. For example, insecticide spraying can have unintended effects on non-target organisms, including beneficial insects such as bees and butterflies. In addition, repeated use of insecticides often leads to the development of resistance in mosquito populations. As a possible alternative, the sterile insect technique (SIT) has been proposed as a potential tool for reducing mosquito populations. The technique involves sterilizing male mosquitoes with ionizing radiation and then releasing them into the wild to mate with wild females. In agricultural settings, SIT has been used successfully to control a variety of insect pests, including fruit flies, tsetse flies, and moths. The SIT strategy was first used by R. Bushland and E. Knipling and applied successfully in the early 1950s by nearly eradicating the screw-worm fly in North America. Since then, this technique has been considered for different pests and disease vectors <cit.>,<cit.>. The advantage of using such a technique is that it only targets the desired species and also significantly reduces the degradation of the ecosystem. This is why this technique is increasingly used for the control of insect pests and insect disease vectors.
In order to determine the appropriate releases of sterile males to approach the extinction equilibrium of the population, we use mathematical control theory which provides the necessary tools for constructing such a control. Our work involves starting from the model proposed in <cit.> without the Allee effect to build this feedback law. Our theoretical results are illustrated with numerical simulations. While we were finishing writing this work, we learned that the reduced system (system of two ODE studied in<cit.>) was also recently studied by A. Cristofaro and L. Rossi in <cit.>. In particular, they were able to construct a feedback law leading to global stabilization of the extinction equilibrium in this setting using a backstepping approach. § MATHEMATICAL MODELING OF MOSQUITO POPULATION DYNAMICS §.§ Mathematical modeling of wild mosquito population dynamics The life cycle of mosquitoes has many stages but we will consider a simplified model where we just separate an aquatic and an adult phase. The aquatic phase, which includes egg, larva and pupa stages and then the adult phase. In order to lay their eggs, female mosquitoes need not only to be fertilized by males but also to have a blood meal. Thus, every 4-5 days, they will take a blood meal (that can sometimes involve biting several victimes) and lay 100 to 150 eggs in different places (10 to 15 per place). An adult mosquito usually lives for 2 to 4 weeks. The mathematical model we present takes account the two phases: the aquatic phase that we denote by the state E and the adult phase that we split into two sub-compartments, males, M and females, F. We consider the dynamics presented in <cit.>. Based on this model and neglecting the Allee effect, we obtain the system Ė = β_E F (1-E/K) - ( ν_E + δ_E ) E, Ṁ = (1-ν)ν_E E - δ_M M, Ḟ =νν_E E - δ_F F, where, * E(t)≥ 0 is the mosquito density in aquatic phase at time t; * M(t)≥ 0 is the wild adult male density at time t; * F(t)≥ 0 is the density of adult females at time t; we have supposed that all females are immediately fertilized in this setting and this equation is only here to use when we add the sterile male in which case only a fraction of the females will be fertilized; * β_E>0 is the oviposition rate; * δ_E,δ_M,δ_F >0 are the death rates for eggs, wild adult males and fertilized females respectively; * ν_E>0 is the hatching rate for eggs; * ν∈ (0,1) the probability that a pupa gives rise to a female, and (1-ν) is, therefore, the probability to give rise to a male. And to simplify, we suppose females become fertilized immediately when they emerge from the pupal stage; * K>0 is the environmental capacity for eggs. It can be interpreted as the maximum density of eggs that females can lay in breeding sites. Since here the larval and pupal compartments are not present, it is as if E represents all the aquatic compartments in which case in this term K represents a logistic law's carrying capacity for the aquatic phase that also includes the effects of competition between larvae. We set x=(E, M, F)^T and 𝒟 = ^3_+ = {x∈^3: x≥ 0}. The model (<ref>)-(<ref>) can be written in the form ẋ = f(x), where f:^3→^3 represents the right hand side of (<ref>)-(<ref>). The map f is continuously differentiable on ^3. Note that if ẋ =f(x) and x(0)∈𝒟, then, for every t≥ 0, x(t) is defined and belongs to 𝒟. 
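For readers who want to experiment with the wild-population model above, a minimal numerical sketch is given below. The parameter values are the illustrative ones listed in the simulation tables later in the paper, while the capacity K and the initial state are placeholders chosen for this sketch only.

```python
# Minimal simulation sketch of the wild-population model (E, M, F) above.
from scipy.integrate import solve_ivp

# Illustrative parameter values (from the paper's simulation tables);
# K and the initial state are placeholders for this sketch.
beta_E, nu_E, nu = 10.0, 0.05, 0.49
delta_E, delta_M, delta_F = 0.03, 0.1, 0.04
K = 22200.0

def wild_rhs(t, x):
    E, M, F = x
    dE = beta_E * F * (1.0 - E / K) - (nu_E + delta_E) * E
    dM = (1.0 - nu) * nu_E * E - delta_M * M
    dF = nu * nu_E * E - delta_F * F
    return [dE, dM, dF]

sol = solve_ivp(wild_rhs, (0.0, 1000.0), [1000.0, 100.0, 100.0],
                rtol=1e-8, atol=1e-8)
print("state at t = 1000 days (E, M, F):", sol.y[:, -1])
```

With these parameter values the trajectory should approach the persistence equilibrium discussed in the next paragraphs.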
Setting the right hand side of (<ref>)-(<ref>) to zero we obtain the extinction equilibrium 0 = (0, 0, 0)^T and the non-trivial equilibrium x^*=(E^*,M^*, F^*)^T given by E^* = K(1 -1/ℛ_0), M^* = (1-ν)ν_E/δ_ME^*, F^* = νν_E/δ_FE^*, where ℛ_0 := β_Eνν_E/δ_F(ν_E+δ_E). Note that x^*∈𝒟 if and only if ℛ_0≥ 1. Let us now recall some definitions connected to the stability of an equilibrium. Let x_e∈𝒟 be an equilibrium (of (<ref>)). The equilibrium x_e is stable in 𝒟 if, for every ε>0, there exists a δ>0 such that (x_0∈𝒟 and x(0)-x_e<δ)⟹(x(t)-x_e<ε t>0). The equilibrium x_e is unstable in 𝒟 if it is not stable in 𝒟. It is a global attractor in 𝒟 if, for every initial data in 𝒟, x(t)→ x_e as t→∞. Finally it is globally asymptotically stable in 𝒟 if it is both stable and a global attractor in 𝒟. The Jacobian of system (<ref>)-(<ref>) computed at the extinction equilibrium is J(0)=[ -(ν_E+δ_E) 0 β_E; (1-ν)ν_E -δ_M 0; νν_E 0 -δ_F ]. Its characteristic polynomial is P(λ)=λ^3 + (ν_E+δ_E +δ_M+δ_F)λ^2 + ((ν_E+δ_E)δ_F-β_Eνν_E + δ_M(ν_E+δ_E))λ +δ_M((ν_E+δ_E)δ_F-β_Eνν_E). Its roots are -δ_M and the roots of equation λ^2 + (ν_E+δ_E +δ_F)λ + δ_F(ν_E+δ_E)(1-ℛ_0) = 0 If ℛ_0<1, all eigenvalues of J(0) are either negative or have negative real parts, which implies that 0 is locally asymptotically stable. If ℛ_0=1 the eigenvalues of J(0) are -δ_M, 0, and -(ν_E+δ_E+δ_F)<0. If ℛ_0>1, the eigenvalues of J(0) are all real, one is strictly positive, two are strictly negative. The global stability properties of the extinction equilibrium 0 = (0, 0, 0)^T are described in terms of the basic offspring number ℛ_0 of the population. The essential properties of the model (<ref>)-(<ref>) are summarized in the following theorem similar to <cit.> and <cit.>. The following properties hold. * If ℛ_0≤ 1, then 0∈^3 is a globally asymptotically stable equilibrium in 𝒟 for (<ref>); * If ℛ_0>1, then the system has two equilibria 0 and x^* in 𝒟 where x^* is stable with basin of attraction 𝒟∖{x=(E, M, F)^T∈^3_+ : E=F=0} and 0 is unstable in 𝒟 with the non negative M-axis being a stable manifold. Proof. Let us first prove <ref>. We could proceed as in the proof of <cit.> or <cit.> which are based on properties of monotone operators. We propose a different approach, now based on Lyapunov functions. Let t↦ x(t)=(E(t),M(t),F(t))^T be a solution of (<ref>) defined at time 0 and such that (E(0),M(0),F(0))^T∈𝒟. One has M(t)=e^-δ_MtM(0)+(1-ν)ν_E∫_0^te^-δ_M(t-s)E(s) ds, which implies that M(t)≤ M(0)+ (1-ν)ν_E/δ_Msup{E(s); s≥ 0}, M(t)≤ M(0)e^-δ_Mt+ (1-ν)ν_E/δ_Me^-δ_Mt/2max{E(s); s∈[0, t/2]} + (1-ν)ν_E/δ_Msup{E(s); s≥ t/2}. Inequality (<ref>) shows that 0∈^3 is a stable equilibrium in 𝒟 for (<ref>) if 0∈^2 is a stable equilibrium in [0,+∞)^2 for the subsystem in (E,F)^T∈[0,+∞)^2: Ė = β_E F (1-E/K) - ( ν_E + δ_E ) E, Ḟ =νν_E E - δ_F F. Inequality (<ref>) shows that 0∈^3 is a global attractor in 𝒟 for (<ref>) if 0∈^2 is a global attractor in [0,+∞)^2 for the subsystem (<ref>)-(<ref>) in (E,F)^T∈[0,+∞)^2. Hence, in order to prove <ref>, it suffices to check that 0∈ [0,+∞)^2 is globally asymptotically stable in [0,+∞)^2 for the system (<ref>)-(<ref>). To prove this last statement, let us consider the Lyapunov function V:[0,+∞)^2→, y=(E,F)^T↦ V(y), defined by V(y):=δ_F E+ β_E F. Then V is of class 𝒞^1, V(y)>V((0,0)^T)=0, ∀ y ∈ [0, +∞)^2∖{(0,0)^T}, V(y)→ +∞ when y→ +∞ with y ∈ [0,+∞)^2. The time-derivative of V along the trajectories of (<ref>)-(<ref>) is V̇ = - ( δ_F(ν_E + δ_E)-β_Eνν_E ) E -δ_Fβ_E/KEF. Let us now assume that ℛ_0≤ 1. 
From (<ref>) and (<ref>) one gets V̇≤ -δ_Fβ_E/KEF≤ 0. We are going to conclude by using the LaSalle invariance principle. Let us assume that we have a trajectory t∈↦ y(t)=(E(t),F(t))^T∈[0,+∞)^2 of (<ref>)-(<ref>) such that V̇(y(t))=0 ∀ t∈. Then, using (<ref>), E(t)F(t)=0 ∀ t∈. Let us assume that there exists t_0∈ such that E(t_0)≠0. Then there exists ε>0 such that E(t)≠0 ∀ t ∈ (t_0-ε, t_0+ε), which, together with (<ref>), implies that F(t) =0 ∀ t ∈ (t_0-ε, t_0+ε). Differentiating (<ref>) with respect to time and using (<ref>) we get E(t) =0 ∀ t ∈ (t_0-ε, t_0+ε), in contradiction with (<ref>). Hence E(t) =0 ∀ t ∈. Differentiating (<ref>) with respect to time and using (<ref>) we get that F(t) =0 ∀ t ∈. With the LaSalle invariance principle, this concludes the proof of <ref>. In the case where ℛ_0<1 a simple linear strict Lyapunov function for the full system (<ref>) is given in Remark <ref>. Let us now prove <ref>. We first note that one has the following lemma, whose proof is obvious and is omitted. Let t↦ x(t)=(E(t),M(t),F(t))^T be a solution of (<ref>) defined at time 0 and such that (E(0),M(0),F(0))^T∈𝒟. Then it is defined on [0,+∞). Moreover, if E(0)≥ K, then there exists one and only one time t_0≥ 0 such that E(t_0)=K and one has E(t)<K ∀ t> t_0. Thank to this lemma we are allowed to assume that E<K, which we do from now on. We then follow the proof of <cit.>. To prove the stability and basin of attraction of the non-trivial equilibrium x^* we use <cit.>. This theorem applies to strongly monotone systems. The Jacobian (<ref>) associated with (<ref>) is not irreducible. Let us consider the subsystem for E and F, that is (<ref>)-(<ref>), which defines a dynamical system on _+^2. Its Jacobian j((E,F)^T)=[ -(ν_E+δ_E)-β_E F/K β_E(1-E/K); νν_E -δ_F ] is irreducible. Applying <cit.> to the two dimensional interval {(E,F)^T∈_+^2: 0≤ E≤ E^*,0≤ F≤ F^*}, it follows that every solution starting in this interval, excluding the end points, converge either all to (0,0)^T or all to (E^*,F^*)^T. As the characteristic equation of j((0,0)^T) is λ^2 + (δ_F+ν_E+δ_E)λ + δ_F(ν_E+δ_E)-β_Eνν_E=0, its discriminant is Δ = (ν_E+δ_E-δ_F)^2+4β_Eνν_E≥ 0. Therefore, since ℛ_0>1, j((0,0^T)) has one positive eigenvalue and so (0,0)^T is unstable. Since j((0,0)^T) is a Metzler matrix, it has a strictly positive eigenvector corresponding to the positive eigenvalue. Hence, it is not possible that all solutions converge to (0,0)^T. Therefore, they converge to (E^*,F^*)^T. The implication for the three dimensional system (<ref>)-(<ref>) is that all solutions starting in the interval [0,x^*], excluding the M-axis, converge to x^*. Using the same argument as in <cit.>, any solution starting at a point larger than x^* converges to x^*. Since any point in 𝒟∖{x=(E, M, F)^T∈^3_+ : E=F=0} can be placed between a point below x^*, but not on the M-axis, and a point above x^*, every solution starting in 𝒟∖{x=(E, M, F)^T∈^3_+ : E=F=0} converges to x^*. The monotone convergence of the solutions initiated below and above x^* implies the stability of x^* as well. This concludes the proof of <ref> and of Theorem <ref>. 
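As a quick numerical sanity check of the quantities in this section, the sketch below evaluates ℛ_0, the non-trivial equilibrium x^* and the spectrum of J(0) for the illustrative parameter values of the simulation tables; the value of K used here is an assumption made for this sketch.

```python
# Sanity check of R_0, the persistence equilibrium x*, and the spectrum of J(0).
import numpy as np

# Illustrative parameter values from the paper's tables; K is an assumption.
beta_E, nu_E, nu = 10.0, 0.05, 0.49
delta_E, delta_M, delta_F = 0.03, 0.1, 0.04
K = 22200.0

R0 = beta_E * nu * nu_E / (delta_F * (nu_E + delta_E))

# Non-trivial equilibrium x* (meaningful when R0 >= 1).
E_star = K * (1.0 - 1.0 / R0)
M_star = (1.0 - nu) * nu_E / delta_M * E_star
F_star = nu * nu_E / delta_F * E_star

# Jacobian at the extinction equilibrium.
J0 = np.array([[-(nu_E + delta_E), 0.0, beta_E],
               [(1.0 - nu) * nu_E, -delta_M, 0.0],
               [nu * nu_E, 0.0, -delta_F]])

print("R_0 =", round(R0, 2))                      # about 76.6 > 1
print("x*  =", [round(v) for v in (E_star, M_star, F_star)])
print("eigenvalues of J(0):", np.linalg.eigvals(J0))
```

With these values, x^* comes out close to (21910, 5587, 13419), the persistence equilibrium used as initial condition in the simulation sections, and J(0) has exactly one positive eigenvalue, as expected when ℛ_0 > 1.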
§.§ SIT model in mosquito population dynamics The SIT model obtained neglecting the Allee effect from the one presented in <cit.> is Ė = β_E F (1-E/K) - ( ν_E + δ_E ) E, Ṁ = (1-ν)ν_E E - δ_M M, Ḟ =νν_E E M/M+γ_s M_s - δ_F F, Ṁ_s = u - δ_s M_s, where M_s(t)≥ 0 is the sterilized adult male density, δ_s>0 is the death rate of sterilized adult, u≥ 0 is the control (density of sterile males released) at time t, and 0<γ_s≤1 accounts for the fact that females may have a preference for fertile males. Then, the probability that a female mates with a fertile male is M/(M + γ_sM_s). From now on we assume that δ_s≥δ_M, which is a biologically relevant assumption. When applying a feedback law u:𝒟'→ [0,+∞), the closed-loop system is the system ẋ= G(x,u(x)), where G(x,u) = ( [ β_E F (1-E/K) - ( ν_E + δ_E ) E; (1-ν)ν_E E - δ_M M; νν_E E M/M+γ_s M_s - δ_F F; u -δ_sM_s ]). Concerning the regularity of the feedback law, we always assume that u∈ L^∞_loc(𝒟'). Note that, even if u is of class 𝒞^∞, the map x∈𝒟'↦ G(x,u(x))∈^4 is not continuous and one needs to specify the definition of the solutions for the closed-loop system (<ref>). Carathéodory solutions seem to be natural candidates. Roughly speaking, Carathéodory solutions are absolutely continuous curves that satisfy the integral version of the differential equation. These solutions are indeed useful in other contexts. However, if they can lead to robustness for small errors on the control, as shown in <cit.>, they may not be robust with respect to arbitrary small measurement errors on the state, which is crucial for the application. To have a robustness with respect to arbitrary small measurement errors on the state, as shown in <cit.> (see also <cit.>), the good definition of the solutions for the closed-loop system (<ref>) are the Filippov solutions, i.e. the solution of ẋ∈ε>01.75∩N∈𝒩1.75∩conv[X(((x+ε B)∩𝒟')∖ N)]=:Y(x), where * B is the unit ball of ^4; * for a set A, conv[A] is the smaller closed convex set containing A; * 𝒩 is the set of subsets of ^4 of zero Lebesgue measure. Let us recall that x:I⊂→^4, t∈ I ↦ x(t)∈^4 (where I is an interval of ) is a solution of (<ref>) if x∈ W^1,∞_loc(I) and is such that ẋ(t) ∈ Y(x(t)) for almost every t∈ I. For references about Filippov solutions, let us mention, in particular, <cit.> and <cit.>. For the definition of stability, global attractor and asymptotic stability, we use again Definition <ref> (with 𝒟' instead of 𝒟) and take now into account all the solutions in the Filippov sense in this definition. The motivation for using Filippov solutions is given in <cit.>. The global asymptotic stability in this Filippov sense implies the existence of a Lyapunov function <cit.>; see also <cit.>. This automatically gives some robustness properties with respect to (small) perturbations (including small measurement errors on the state), which is precisely the goal of feedback laws. In fact, for many feedback laws constructed in this article, an explicit Lyapunov function will be given, which allows to quantify this robustness. Let us emphasize that in our case the Filippov solutions of our closed-loop system enjoy the following properties ((E(0),F(0)) =(0,0) )⟹((E(t),F(t))=(0,0) ∀ t≥ 0), ((E(0),F(0))≠(0,0) )⟹(E(t)>0, M(t)>0, F(t)>0 ∀ t> 0). From now on, the solutions of the closed-loop systems considered in this article are always the Filippov solutions. Let us assume that ℛ_0>1. Then the following properties hold. 
* If u=0, we have two equilibria: * the extinction equilibrium E^*=F^*=M^*=M_s^*=0 which is linearly unstable; * the persistence equilibrium E = K(1 -1/ℛ_0), M = (1-ν)ν_E/δ_ME, F = νν_E/δ_FE, M_s=0, which is locally asymptotically stable. * If u≥ 0, then the corresponding solution (E,M,F,M_s) to System (<ref>)-(<ref>) enjoys the following stability property: { E(0)∈(0,E], M(0)∈(0,M], F(0)∈(0,F], M_s(0)≥ 0, .{ E(t)∈(0,E], M(t)∈(0,M], F(t)∈(0,F], M_s(t)≥ 0, . t≥ 0. Let U^* = ℛ_0K(1-ν)ν_Eδ_s/4γ_sδ_M(1-1/ℛ_0)^2. If u(.) denotes a constant control function equal to some U>U^* for all t≥ 0, then the corresponding solution (E(t), M(t), F(t), M_s(t)) converges to (0,0,0,U/δ_s) as t→∞. Concerning the global asymptotic stability of 0 for the system (<ref>)-(<ref>) in 𝒟':=[0,+∞)^4, using a Lyapunov approach, one can get the following theorem. Let u=0. If ℛ_0<1, then 0 is globally asymptotically stable in 𝒟' for the system (<ref>)-(<ref>). Proof. Let x= (E,M,F, M_s)^T. We are going to conclude by applying Lyapunov's second theorem. To do so, a candidate Lyapunov function is V: 𝒟'→_+, x↦ V(x), defined by V(x) :=1+ℛ_0/1-ℛ_0 E +2 β_E/δ_F(1-ℛ_0) F + M + M_s. Note that, since ℛ_0<1, V(x)>V(0)=0, ∀ x∈𝒟'∖{0}, V(x)→ +∞ as |x|→ +∞ with x∈𝒟'. Moreover, along the trajectories of (<ref>)-(<ref>), V̇ (x)=-(νν_E+δ_E)E -β_E/K1+ℛ_0/1-ℛ_0FE -δ_MM -β_E F-δ_sM_s -2β_Eνν_E/δ_F(1-ℛ_0)γ_s M_s/M+γ_s M_s E if M+M_s≠0. From (<ref>) and (<ref>), one gets V̇ (x)≤ -cV if M+M_s≠0, with c_0 = min{(νν_E +δ_E) (1-ℛ_0)/1+ℛ_0) , δ_F (1-ℛ_0)/2 , δ_M , δ_s } Let us point out that, for every solution t↦ x(t)=(E(t),M(t),F(t),M_s(t))^T of the closed-loop system (<ref>)-(<ref>) defined at time 0 and such that x(0)∈𝒟', (M(0)+M_s(0)>0)⟹(M(t)+M_s(t)>0, ∀ t>0), (x(0)=0)⟹(x(t)=0, ∀ t≥ 0). From (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), one has, for every solution t↦ x(t)=(E(t),M(t),F(t),M_s(t))^T of the closed-loop system (<ref>)-(<ref>) defined at time 0 and such that x(0)∈𝒟', V(x(t))≤ V(x(0)) e^-c_0 t ∀ t≥ 0, which, together with (<ref>) and (<ref>), concludes the proof of Theorem <ref> (and even shows the global exponential stability and provides an estimate on the exponential decay rate c_0 given by (<ref>)). Note that Theorem <ref> implies Theorem <ref> in the case ℛ_0<1 and our proof of Theorem <ref> provides, for this case, a (strict) Lyapunov function which is just Ṽ((E,M,F)^T):=1+ℛ_0/1-ℛ_0 E +2 β_E/δ_F(1-ℛ_0) F + M. It would be interesting to provide Lyapunov functions for the two remaining cases ℛ_0=1 and ℛ_0>1. § GLOBAL STABILIZATION BY FEEDBACK OF THE EXTINCTION EQUILIBRIUM §.§ Backstepping feedback For the backstepping method, the control system has the following structure: ẋ_1 = f(x_1,x_2), ẋ_2 = u, where the state is x = (x_1,x_2)∈^p×^m and the control is u∈^m. The key and classical theorem for backstepping is the following one (see, for instance, <cit.>). Assume that f∈𝒞^1(^p×^m,^p) and that the control system ẋ_1 = f(x_1,v), where the state is x_1∈^p and the control v∈^m, can be globally asymptotically stabilized by means of a feedback law x_1∈^p ↦ v(x_1)∈^m of class 𝒞^1. Then the control system (<ref>)-(<ref>) can be globally asymptotically stabilized by means of a continuous feedback law x∈^p×^m↦ u(x)∈^m . Let x:= (E, M, F)^T. One way to rewrite the dynamics (<ref>)-(<ref>) is { ẋ= f(x,M_s), Ṁ_̇ṡ = u-δ_sM_s, . where f(x,M_s) := ( [ β_E F (1-E/K) - ( ν_E + δ_E ) E; (1-ν)ν_E E - δ_M M; νν_E E M/M+γ_s M_s - δ_F F ]). 
As f is not of class 𝒞^1 and the feedback law has to be non-negative, we cannot directly apply the backstepping theorem. However, to build the feedback law we use the classical Lyapunov approach of the proof of Theorem <ref> (see, for example, <cit.>) allowing us to select an appropriate control. Unfortunately, the control that we get with this approach is not positive all the time. To get around this, using the same Lyapunov function, we propose a new feedback law that is non-negative, decreases the Lyapunov function and leads to global asymptotic stability of the extinction equilibrium. First, consider the control system ẋ= f(x,M_s) with the state being x∈𝒟 and the control being M_s∈ [0,+∞). We assume that M_s is of the form M_s = θ M and study the closed-loop system ẋ= f(x,θ M). We have { Ė = β_E F (1-E/K) - ( ν_E + δ_E ) E, Ṁ = (1-ν)ν_E E - δ_M M, Ḟ = νν_E/1+γ_s θE - δ_F F. . It is a smooth dynamical system on 𝒟=[0,+∞)^3 which is also a positively invariant set for this dynamical system. Setting the right hand side of (<ref>) to zero we obtain the equilibrium 0∈ [0,+∞)^3 and the non-trivial equilibrium x^**=(E^**, M^**, F^**) given by E^** = K(1 -1/ℛ(θ)), M^** = (1-ν)ν_E/δ_ME^**, F^** = νν_E/δ_F(1+γ_sθ)E^**, where the offspring number is now ℛ(θ) := β_Eνν_E/δ_F(1+γ_sθ)(ν_E+δ_E) = ℛ_0/1+γ_sθ. Note that if ℛ(θ)≤ 1, 0∈^3 is the only equilibrium point of the system in 𝒟. Our next proposition shows that the feedback law M_s=θ M stabilizes our control system ẋ= f((x^T,M_s)^T) if ℛ(θ)<1. Assume that ℛ(θ)<1 . Then 0 is globally asymptotically stable in 𝒟 for system (<ref>). Proof. We apply Lyapunov's second theorem. To do so, we define V: [0,+∞)^3→_+, x↦ V(x), V(x) := 1+ℛ(θ)/1-ℛ(θ) E + M + 2β_E/δ_F(1-ℛ(θ))F. As (<ref>) holds, V is of class 𝒞^1, V(x)>V((0,0,0)^T)=0, ∀ x ∈ [0, +∞)^3∖{(0,0,0)^T}, V(x)→ +∞ when x→ +∞ with x∈𝒟. We have V̇(x) = ∇ V(x)· f(x, θ M)= [ 1+ℛ(θ)/1-ℛ(θ); 1; 2β_E/δ_F(1-ℛ(θ)) ]·[ β_E F (1-E/K) - a E; c E - δ_M M; νν_E/1+γ_sθ E - δ_F F ]. So V̇(x) = -β_E F-δ_M M -1+ℛ(θ)/1-ℛ(θ)β_E/K FE - (νν_E+δ_E)E. Then, using once more (<ref>), we get the existence of c>0 such that V̇(x) ≤ -c V(x), ∀ x ∈ [0, +∞)^3. This concludes the proof of Proposition <ref>. Let us define ψ := 2β_Eνν_E/δ_F(1-ℛ(θ))(1+γ_sθ), and the map G: 𝒟':=[0,+∞)^4→, (x^T,M_s)^T↦ G((x^T,M_s)^T) by G((x^T,M_s)^T):= γ_sψ E(θ M + M_s)^2/α(M+γ_sM_s)(3θ M + M_s) + ((1-ν)ν_Eθ E -θδ_M M)(θ M +3M_s)/3θ M + M_s +δ_sM_s + 1/α(θ M-M_s) if M+M_s≠0, G((x^T,M_s)^T):=0 if M+M_s=0. Finally, let us define the feedback law u: 𝒟'→ [0,+∞), (x^T,M_s)^T↦ u((x^T,M_s)^T), by u((x^T,M_s)^T):=max(0,G((x^T,M_s)^T)). Note that u, which is Lebesgue measurable, is not continuous in 𝒟'. However there exists C>0 such that |u(y)|≤ C y ∀ y∈𝒟'. Property (<ref>) is important for the applications since it implies that the density u of sterile males released is going to be small when the state is close to 0. For instance, this is essential to reduce the number of mosquitoes necessary for a long term intervention and also to allow using the sterile mosquitoes that are no longer needed in an area where the population is already close to zero, to intervene in other zones. This is in contrast with the constant control in Proposition <ref>. Property (<ref>) also implies that u∈ L^∞_loc(𝒟'), which allows to consider Filippov solutions for the closed-loop system, i.e. the system (<ref>)-(<ref>) with the feedback law (<ref>). The next theorem shows that the feedback law (<ref>) stabilizes the control system (<ref>)-(<ref>). Assume that (<ref>) holds. 
Then 0∈𝒟' is globally asymptotically stable in 𝒟' for system (<ref>)-(<ref>) with the feedback law (<ref>). Proof. Let α>0 and define W:𝒟'→ by W((x^T,M_s)^T) := V(x) + α(θ M-M_s)^2/θ M + M_s if M+M_s≠ 0, W((x^T,M_s)^T) := V(x) if M+M_s = 0. We have W is continuous, W is of class 𝒞^1 on 𝒟'∖{(E,M,F,M_s)^T∈𝒟'; M+M_s=0}, W((x^T,M_s)^T)→ +∞ x +M_s→ +∞, with x∈𝒟 and M_s∈ [0,+∞), W((x^T,M_s)^T)>W(0)=0, ∀ (x^T,M_s)^T∈𝒟'∖{0}. From now on, and until the end of this proof ,we assume that (x^T,M_s)^T is in 𝒟' and until (<ref>) below we further assume that (M,M_s)≠(0,0). One has [ Ẇ((x^T,M_s)^T) = ∇ V(x)· f(x, M_s); +α(θ M-M_s)2(θṀ-Ṁ_s)(θ M + M_s)-(θṀ+Ṁ_̇ṡ)(θ M-M_s)/(θ M + M_s)^2,; = ∇ V(x)· f(x, θ M) + ∇ V(x)·(f(x,M_s)-f(x,θ M)); + α(θ M-M_s)θṀ(θ M + 3M_s)-Ṁ_s(3θ M+M_s)/(θ M + M_s)^2. ] [ ∇ V(x)· (f(x,M_s)-f(x,θ M)) = [ 1+ℛ(θ)/1-ℛ(θ); 1; 2β_E/δ_F(1-ℛ(θ)) ]·[ 0; 0; νν_Eγ_sE( θ M-M_s)/(M +γ_sM_s)(1+γ_sθ) ]; = ψγ_sE(θ M-M_s)/M+γ_sM_s, ] Ẇ((x^T,M_s)^T) = ∇ V(x)· f(x, θ M) + α(θ M-M_s)/(θ M + M_s)^2 [(∇ V(x)·(f((x^T,M_s)^T)-f(x,θ M)))(θ M + M_s)^2/α(θ M-M_s) bbbbbbb +θṀ(θ M +3M_s)-Ṁ_s(3θ M + M_s)] = V̇(x) + α(θ M-M_s)/(θ M + M_s)^2[ ψγ_sE(θ M + M_s)^2/α(M+γ_sM_s) + ((1-ν)ν_Eθ E -θδ_M M)(θ M +3M_s) -u(3θ M + M_s)+δ_sM_s(3θ M + M_s)]. We take u as given by (<ref>). Therefore, in case ψγ_sE(θ M + M_s)^2/α(M+γ_sM_s) + ((1-ν)ν_Eθ E -θδ_M M)(θ M +3M_s) + δ_sM_s(3θ M + M_s) + 1/α(θ M-M_s)(3θ M+M_s)>0, we have u=1/3θ M + M_s[ ψγ_sE(θ M + M_s)^2/α(M+γ_sM_s) + ((1-ν)ν_Eθ E -θδ_M M)(θ M +3M_s) +δ_sM_s(3θ M + M_s) + 1/α(θ M-M_s)(3θ M+M_s)], which, together with (<ref>), leads to Ẇ((x^T,M_s)^T)= V̇(x) -(θ M-M_s)^2(3θ M+M_s)/(θ M + M_s)^2. Otherwise, i.e. if (<ref>) does not hold, ψγ_sE(θ M + M_s)^2/α(M+γ_sM_s) + ((1-ν)ν_Eθ E -θδ_M M)(θ M +3M_s) +δ_sM_s(3θ M +M_s) + 1/α(θ M-M_s)(3θ M+M_s)≤ 0, so, by (<ref>), u=0. We consider two cases: Case 1: θ M > M_s Using (<ref>), (<ref>) and (<ref>) Ẇ((x^T,M_s)^T) ≤V̇(x) -(θ M-M_s)^2(3θ M+M_s)/(θ M + M_s)^2. Case 2: θ M ≤ M_s Using once more (<ref>) and (<ref>) Ẇ((x^T,M_s)^T) =V̇(x) + α(θ M-M_s)/(θ M + M_s)^2[ ψγ_sE(θ M + M_s)^2/α(M+γ_sM_s) + θ ((1-ν)ν_EE-δ_M M)(θ M +3M_s)+δ_sM_s(3θ M + M_s)]. Using (<ref>) -δ_M M(θ M +3M_s)+δ_sM_s(3θ M + M_s)≥δ_M(M_s-θ M)(M_s+θ M), which, together with (<ref>), implies that Ẇ((x^T,M_s)^T) ≤V̇(x) - αδ_M(θ M-M_s)^2/(θ M + M_s). To summarize, using (<ref>), (<ref>), (<ref>) and (<ref>), one gets the existence of c'>0 independent of (x^T,M_s)^T∈𝒟', such that Ẇ((x^T,M_s)^T) ≤ -c'W((x^T,M_s)^T) if M+M_s≠0. Since one still has (<ref>), (<ref>), (<ref>) and (<ref>) (for x=(x^T,M_s^T)^T), this proves Theorem <ref> as in the proof of Theorem <ref> (and, again, even gives the global exponential stability and provides an estimate on the exponential decay rate). It is important to note that the backstepping feedback control (<ref>) does not depend on the environmental capacity K, which is also an interesting feature for the field applications. §.§.§ Numerical simulations The numerical simulations of the dynamics when applying the feedback (<ref>) gives the figures <ref>. The parameters we use are set in the following table. 
Parameter | Name | Value interval | Chosen value | Unit
β_E | Effective fecundity | 7.46-14.85 | 10 | Day^-1
γ_s | Mating competitiveness of sterilized males | 0-1 | 1 | -
ν_E | Hatching parameter | 0.005-0.25 | 0.05 | Day^-1
δ_E | Mosquitoes in aquatic phase death rate | 0.023-0.046 | 0.03 | Day^-1
δ_F | Female death rate | 0.033-0.046 | 0.04 | Day^-1
δ_M | Male death rate | 0.077-0.139 | 0.1 | Day^-1
δ_s | Sterilized male death rate | | 0.12 | Day^-1
ν | Probability of emergence | | 0.49 | -
Value intervals of the parameters for the system (<ref>)-(<ref>) (see <cit.>)
With the parameters given in the table, condition (<ref>) is θ > 75.67. We fix K = 222000 and we consider the persistence equilibrium as initial condition. That gives E^0 = 21910, M^0 = 5587, F^0 = 13419 and M_s^0 = 0. We take θ = 220. In this case, with t_f = 360 days, ∫_0^t_f u(t) dt ≈ 18702985. §.§.§ Robustness test To analyze the robustness of our feedback against variations of the parameters, we apply the feedback with the parameters fixed in the first column of tables <ref> (original parameters). We carry out some variation of the parameters in the second column (new parameters) of table <ref>. The results are summarized in the following tables. We observe that the feedback (<ref>) is robust: it still stabilizes the dynamics at the extinction equilibrium if the changes in the parameters are not too large. To apply the feedback (<ref>) we must estimate the number of male and female mosquitoes and the number of eggs. Some techniques used to measure these quantities are CDC light traps and BG-Sentinel traps. Based on mosquito behavior, such as attraction to pheromones or light, these traps use different attractants, such as light, CO2, or human odor, to capture mosquitoes. To estimate the population size and the ratio of sterile to fertile mosquitoes, a common technique is to carry out mark-release-recapture (MRR) studies. It consists in marking a subset of the released mosquitoes with a unique identifier and releasing them into the wild. By comparing the number of marked and unmarked mosquitoes captured in the traps, an estimate of the total population size and the ratio of sterile to fertile mosquitoes can be obtained. Some oviposition traps may be used to capture and count the number of eggs laid by female mosquitoes. In the next sections <ref> and <ref>, we propose feedback laws depending on fewer variables. §.§ Feedback laws depending on total number of male mosquitoes Some recent adult traps are able to count automatically the number of male mosquitoes that are captured and, even in a more classic setting, there exist traps that use synthetic versions of female insect pheromones to attract and capture male insects. Traps of this kind, placed at different locations in the field, allow us to estimate M+M_s for the target pest population. Our aim in this section is to build a feedback depending linearly on M+M_s. Consider the closed-loop system ż= F(z,u(z)), z=(E,M,F,M_s)^T∈𝒟', where u(z) = α(M+M_s), F(z,u) = ( [ β_E F (1-E/K) - ( ν_E + δ_E ) E; (1-ν)ν_E E - δ_M M; νν_E E M/M+γ_s M_s - δ_F F; u -δ_sM_s ]), and α is a fixed real number. Throughout all this section <ref>, we assume that (<ref>) holds and that α∈ [0,δ_s ). The offspring number related to this system is ℛ_1(α):= (δ_s-α)β_Eνν_E/δ_F (ν_E+δ_E)(δ_s-(1-γ_s)α). §.§.§ Equilibria of the closed-loop system Equilibria of the SIT model (<ref>) are obtained by solving the system { β_E F (1-E/K) - ( ν_E + δ_E ) E = 0, (1-ν)ν_E E - δ_M M =0, νν_E E M/M+γ_s M_s - δ_F F=0, α M -(δ_s-α)M_s =0. .
We get either the extinction equilibrium E =0, M = 0, F = 0, M_s = 0 or E = K(1 -1/ℛ_1(α)), M = (1-ν)ν_E/δ_ME, F = (δ_s-α)νν_E/δ_F((δ_s-α)+γ_sα)E, M_s = (1-ν)ν_Eα/(δ_s-α)δ_ME. Let us assume in the sequel that ℛ_1(α )< 1. Using (<ref>) and (<ref>), one gets E<0 and therefore the equilibrium given by (<ref>) is not relevant. In conclusion the closed-loop system (<ref>) has one and only one equilibrium which is the extinction equilibrium 0. It is therefore tempting to raise the following conjecture (compare with Theorem <ref>). The extinction equilibrium 0 is globally asymptotically stable in 𝒟' for the closed-loop system (<ref>). We have not been able to prove this conjecture. However * In section <ref>, we give a positively invariant set for the closed-loop system (<ref>) in which, as proved in section <ref>, 0 is globally asymptotically stable for (<ref>); * In section <ref>, we provide numerical evidence for this conjecture. §.§.§ Invariant set of the closed-loop system From (<ref>), (<ref>), and (<ref>), one gets β_Eνν_E- (ν_E+δ_E)δ_F/β_Eνν_E-(1-γ_s) (ν_E+δ_E)δ_Fδ_s<α<δ_s. Let us define, with z=(E,M,F,M_s)^T, 𝒯_1 :={z∈𝒟' : β_EF(1-E/K)≤ (ν_E+δ_E)E}, 𝒯_3 := {z∈𝒟': (1-ν)ν_EE≤δ_MM}, and, for κ>0, 𝒯_2(κ) = {z∈𝒟' : M≤κ M_s}. One has the following theorem. Assume that (<ref>) holds and that κ≤γ_sδ_F(ν_E+δ_E)/β_Eνν_E-δ_F(ν_E+δ_E), κ≥δ_s-α/α. Then ℳ(κ):= 𝒯_1∩𝒯_2(κ)∩𝒯_3 is a positively invariant set of the closed-loop system (<ref>). Note that (<ref>) implies that 0<δ_s-α/α<γ_sδ_F(ν_E+δ_E)/β_Eνν_E-δ_F(ν_E+δ_E). Hence there are κ>0 such that both (<ref>) and (<ref>) hold. Proof of Theorem <ref>. Let us first study the case where one starts with E=F=0 : we consider the Filippov solution(s) to the Cauchy problem ż =F(z,u(z)), E(0)=0, M(0)=M_0, F(0)=0, M_s(0)=M_s0, where (M_0,M_s0)^T∈ [0,+∞)^2 is such that M_0≤κ M_s0. From (<ref>), (<ref>), and (<ref>), one gets E(t)=F(t)=0, ∀ t≥ 0, Ṁ=-δ_M M and Ṁ_s =α M -(δ_s-α)M_s. In particular, for every t≥ 0, z(t)∈𝒯_1∩𝒯_3. It remains to check that z(t)∈𝒯_2(κ) ∀ t≥ 0. From (<ref>), one has d/dt(M-κ M_s)=-(δ_M+κα)(M-κ M_s) -κ ((1+κ)α-δ_s+δ_M)M_s. From (<ref>) one has (1+κ)α-δ_s+δ_M≥δ_M. Property (<ref>) readily follows from (<ref>), (<ref>) and (<ref>). Let us now deal with the case where E+F>0. Note that, for z∈ℳ(κ), this implies that E>0 and M>0. Until the end of the proof of Theorem <ref> we assume that z∈𝒟' and is such that (<ref>) holds. Let h_1:𝒟'→ be defined by h_1(z) := β_EF(1-E/K)- (ν_E+δ_E)E. Its time derivative along the solution of the closed-loop system (<ref>) is ḣ_1(z) = β_Eνν_EEM/M+γ_sM_s(1-E/K) -δ_Fβ_EF(1-E/K)-β_E^2F^2/K(1-E/K) + β_E(ν_E+δ_E)EF/K-(ν_E+δ_E)β_EF(1-E/K)+(ν_E+δ_E)^2E. For a set Σ⊂𝒟', let us denote by ∂Σ its boundary in 𝒟'. On ∂𝒯_1, β_EF(1-E/K)=(ν_E+δ_E)E. Hence ḣ_1(z) = β_Eνν_EEM/M+γ_sM_s(1-E/K) -δ_F(ν_E+δ_E)E if z ∈∂𝒯_1. In particular, using (<ref>), ḣ_1(z)≤ -β_Eνν_EM/M+γ_sM_sE^2/K<0 if z ∈∂𝒯_1∩𝒯_2(κ). Let us now turn to the behavior of the closed-loop system on the ∂𝒯_2(κ). Let h_2:𝒟'→ be defined by h_2(z) := M-κ M_s. Its time derivative along the solution of the closed-loop system (<ref>) is ḣ_2(z)=(1-ν)ν_E E - δ_M M-κ(α M -(δ_s-α) M_s), which leads to ḣ_2(z)=(1-ν)ν_E E - ((1+κ)α-δ_s+δ_M)M if z ∈∂𝒯_2(κ). From (<ref>), (<ref>), and (<ref>), one gets that ḣ_2(z)≤ 0 if z ∈𝒯_3 ∩∂𝒯_2(κ). Finally, let us study the behavior of the closed-loop system on the ∂𝒯_3. Let h_3:𝒟'→ be defined by h_3(z) := (1-ν)ν_EE - δ_MM. 
Its time derivative along the solution of the closed-loop system (<ref>) is ḣ_3(z)=β_E F (1-E/K) - ( ν_E + δ_E ) E - δ_M ((1-ν)ν_E-δ_M M), which leads to ḣ_3(z)=β_E F (1-E/K) - ( ν_E + δ_E ) E if z ∈∂𝒯_3. In particular, ḣ_3(z)≤ -β_E EF/K≤ 0 if z ∈𝒯_2(κ)∩∂𝒯_3. This concludes the proof of Theorem <ref>. §.§.§ Global asymptotic stability result Let κ: =γ_sδ_F(ν_E+δ_E)/β_Eνν_E-δ_F(ν_E+δ_E), ℳ:=ℳ(κ). Let us recall that, by (<ref>), κ, which clearly satisfies (<ref>), satisfies also (<ref>). In particular, by Theorem <ref>, ℳ is positively invariant for the closed-loop system (<ref>). The main result of this section is the following theorem. Assume that (<ref>) holds. Then 0 is globally asymptotically stable for the closed-loop system (<ref>) in ℳ. Proof. The first step of the proof is the following lemma which shows that Theorem <ref> holds with ℳ replaced by ℳ(κ) provided that (<ref>) is a strict inequality and that (<ref>) holds. Let us assume that (<ref>) holds and that κ < γ_sδ_F(ν_E+δ_E)/β_Eνν_E-δ_F(ν_E+δ_E). Then 0 is globally asymptotically stable for system (<ref>) in ℳ(κ). To prove this lemma we use a Lyapunov approach. Our Lyapunov function is U: 𝒟'→_+, z↦ U(z), U(z) = δ_F E +ε M +β_E(1+ε)F+ε^2M_s, where ε∈ (0,1] is a constant which will be chosen later on. One has U is of class 𝒞^1, U(z)>U(0) =0, ∀ z ∈𝒟'∖{0}, U(z)→ +∞ as |z|→ +∞ with z∈𝒟'. Let us assume for the moment being that M+M_s≠0. Then, the time derivative of U along the solution of the closed-loop system (<ref>) is U̇ (z)= δ_F (β_E F (1-E/K) - ( ν_E + δ_E ) E) +ε((1-ν)ν_E E - δ_M M) +β_E(1+ε)(νν_E E M/M+γ_s M_s - δ_F F) +ε^2(α M -(δ_s-α)M_s). In particular, U̇ (z)≤ -εδ_Fβ_E F - (( ν_E + δ_E )-ε (1-ν)ν_E-β_E(1+ε)νν_E κ/κ +γ_s)E - ε(δ_M-εα)M -ε^2(δ_s-α)M_s if z∈ℳ(κ). Let us now point out that (<ref>) implies that β_Eνν_E κ/κ +γ_s <( ν_E + δ_E ). From (<ref>) and (<ref>) one gets that for ε >0 small enough there exists c(ε)>0 independent of z∈ℳ(κ) such that U̇ (z)≤ -c(ε) U(z) if z∈ℳ(κ). It remains to remove assumption (<ref>). Let t↦ z(t)=(E(t),M(t),F(t),M_s(t))^T be a Filippov solution of the closed loop for the initial condition z(0)=(E_0,M_0,F_0,M_s0)^T∈ℳ(κ). We observe that if (E_0,F_0)=(0,0), then z(0)∈ℳ(κ) implies that M_0>0, from which one gets that M(t)>0 for every t≥ 0. Hence (<ref>) holds for every t≥ 0. While, if (E_0,F_0)≠ (0,0), then M(t)>0 for every t>0. In particular, one still has (<ref>) and therefore (<ref>) for every t> 0. Hence, U(z(t))≤ e^-c(ε)tU(z(0)), ∀ t≥ 0, which, together with (<ref>) and (<ref>), concludes the proof of Lemma <ref>. Let us now deduce from Lemma <ref> that 0 is a global attractor for the closed-loop system (<ref>) in ℳ. Let z(t)=(E(t),M(t),F(t),M_s(t))^T be a Filippov solution of the closed-loop system (<ref>) for the initial condition z(0)=(E_0,M_0,F_0,M_s0)^T∈ℳ(κ). If (E_0,F_0)=(0,0) then one has (<ref>) and (<ref>) which leads to z(t)→0 as t→ +∞ (note that, by (<ref>), δ_s-α>0). Let h_2:𝒟'→ be defined by h_2(z) := M-κM_s. Note that, if for some t_0≥ 0, h_2(z(t))<0, then there exists κ>0 satisfying (<ref>) and (<ref>) such that z(t_0)∈ℳ(κ). By Lemma <ref> one then has z(t)→0 as t→ +∞. If there is no such t_0 , then h_2(z(t))=0 for every t≥ 0. From (<ref>) with κ =κ, (<ref>), (<ref>), and (<ref>), one gets that h_3(z(t))=0 for every t≥ 0, which together with (<ref>) implies that E(t)F(t)=0 for every t≥ 0. Since z(t)∈𝒯_1, (<ref>) and (<ref>) imply that F(t)=0 for every t≥ 0. 
Then, if for some t_0≥ 0, E(t_0)=0, one has (E(t_0),F(t_0))=(0,0), which, as already pointed out above, implies that z(t)→0 as t→ +∞. It remains to handle the case where E(t)>0 for every t≥ 0. In particular, since z(t)∈𝒯_3, one has, using (<ref>), M(t)>0 for every t≥ 0. Then, differentiating (<ref>) with respect to time and using (<ref>) and (<ref>), one gets E(t)=0 for every t≥ 0, which leads to a contradiction with (<ref>). This concludes the proof of (<ref>). In order to end up the proof of Theorem <ref> it just remains to check that 0 is stable for the closed-loop system (<ref>) in ℳ. For that, let U: 𝒟'→_+, z↦U(z), be defined by U(z) = δ_F E +β_EF, which corresponds to the definition of U given in (<ref>) with ε=0. Let z(t)=(E(t),M(t),F(t),M_s(t))^T be a Filippov solution of the closed loop for the initial condition z(0)=(E_0,M_0,F_0,M_s0)^T∈ℳ. As above, we maysrestrict our attention to the case where E(t)>0 for every t>0. Let us recall that since z(t)∈ℳ⊂𝒯_3, (<ref>), and (<ref>) imply that M(t)>0 for every t≥ 0. Then, U(z(t)) can be differentiated with respect to time and one has, by (<ref>) with ε =0 and κ =κ, and (<ref>), U̇(z(t))≤ 0, which shows that E(t)+F(t)≤max{δ_E,δ_F}/min{δ_E,δ_F}(E(0)+F(0)), for every t≥ 0. It remains to estimate M(t) and M_s(t). Using z(t)∈𝒯_2(κ) and (<ref>), one already has M(t)≤κM_s(t) for every t≥ 0. Using (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), one has Ṁ_s(t)≤(ακ+α -δ_s)M_s(t) ≤ 0 for every t≥ 0. In particular, using also (<ref>), M_s(t)≤ M_s(0) and M(t)≤κM_s(0) for every t≥ 0. This concludes the proof of (<ref>) and, therefore, of Theorem <ref>. §.§.§ Numerical simulations In this section, we will show numerical simulations of the dynamics when we apply feedback (<ref>). We fix z_0 = (21910,5587, 13419,0)∉ℳ. We now compute condition (<ref>) according to the parameter set in the table <ref>. This gives 0.11843<α<0.12. We take α=0.11843. The following figures show the evolution of the states when condition (<ref>) holds. We observe that the convergence time of the states E,M and F is longer than when we applied the backstepping feedback control (<ref>). In this case, with t_f = 700 days, ∫_0^t_f u(t) dt ≈ 17916614. We take several initial conditions randomly and plot the resulting dynamics in figure <ref>, §.§.§ Robustness test To analyze the robustness of our feedback against variations of the parameters, we carry out some variation of the parameters (new values) in table <ref>. The results are summarized in table <ref>. We observe that very small perturbations of the parameters destabilize the origin. §.§ Feedback laws depending on wild male mosquitoes In the application of the technique it might also be possible to estimate only fertile males. For instance, in MRR experiments, sterile mosquitoes are identified by the presence of a marker, such as a dye or a fluorescent protein, which has been applied before their release (although, at present, it is not always easy to do this for all the mosquitoes released in field interventions). Nevertheless, since the technology is evolving very fast, it is possible that in can become standard practice in the near future (for instance, we recall that PCR analysis of the captured mosquitoes is already currently used thanks to genetic bar-coding). Thus, it is interesting to set up the mathematical techniques to deal with this situation. Therefore, we consider in this section the case where the feedback depends only on the state M. 
Consider the closed-loop system ż= F(z,u(z)), z=(E,M,F,M_s)^T∈𝒟', where u(z) = λ M and F(z,u(z)) = ( [ β_E F (1-E/K) - ( ν_E + δ_E ) E; (1-ν)ν_E E - δ_M M; νν_E E M/M+γ_s M_s - δ_F F; λ M -δ_sM_s ]), The offspring number related to this system is ℛ_2( λ):= δ_sβ_Eνν_E/δ_F (ν_E+δ_E)(δ_s+γ_s λ). We assume that ℛ_2( λ)<1. Note that this inequality is equivalent to λ>(β_Eνν_E- (ν_E+δ_E)δ_F)δ_s/γ_s (ν_E+δ_E)δ_F. Let us point out that the closed-loop system (<ref>) is exactly the closed-loop system (<ref>) if one performs the following change of variables (with natural notations): α^(<ref>)=λ^(<ref>) and δ_s^(<ref>)=δ_s^(<ref>)+λ^(<ref>). Hence Theorem <ref> and Theorem <ref> lead to the following theorem. Assume that (<ref>) and (<ref>) hold. Then ℳ is positively invariant for the closed-loop system (<ref>) and 0 is globally asymptotically stable for the closed-loop system (<ref>) in ℳ. §.§.§ Numerical simulations In this section, we show the numerical evolution of the states when we apply feedback (<ref>). We fix as initial condition z_0 = (21910,5587, 13419,0)∉ℳ and K = 22200. We now compute condition (<ref>) according to the parameters set in table <ref>. This gives λ>9.06. We take for the simulation λ = 22. Notice that with t_f = 400 days, ∫_0^t_f u(t) dt ≈ 17289041. In figure <ref> we take several initial conditions randomly for λ = 22 . §.§.§ Robustness test To analyze the robustness of our feedback against the variations of the parameters, we carry out some variation of the parameters (new parameters of the following table <ref>) in the dynamics. The results are summarized in table <ref>. Robustness test We observe that feedback (<ref>) is robust with respect to changes of parameters: for rather large perturbations on the parameters it stills globally stabilizes the dynamics at the extinction equilibrium. § CONCLUSION We have built feedback laws that stabilize the SIT dynamical model and have studied their robustness with respect to changes of parameters. We study three types of feedback laws: 1) a backstepping one in section <ref>. 2) one depending linearly on the total number of male mosquitoes, M+M_s in section <ref>. 3) one depending linearly on the number of wild male mosquitoes M in section <ref>. For the first one we were able to prove the global asymptotic stability. However, it depends on three variables (E,M and M_s) which may be difficult to measure in the field. For the second one, we proved the global asymptotic stability only in a certain invariant set ℳ. We conjecture that this feedback gives global stability and we show numerical evidence for this conjecture (see figure <ref>). The advantage of this feedback law is that it depends only on the total number of male mosquitoes M+M_s which is a natural quantity to measure in the field. However, this feedback law has an important drawback due to the narrow interval allowed for the gain α of the feedback in (<ref>). This might pose a problem for the robustness of this method relative to the variations of the biological parameters. For the third one, we proved the global asymptotic stability only in a certain invariant set ℳ. We also conjecture that this feedback gives global stability and we show numerical evidence for this conjecture (see figure <ref>). The main difference w.r.t. the previous feedback law is that now the method is robust w.r.t. variations of the biological parameters. However, the drawback in this case is that M should be harder to measure in the field. 
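The offspring-number condition for this third feedback law, u(z) = λ M, is also easy to check numerically from the formulas above. The short sketch below uses a placeholder parameter dictionary (illustrative values, not those of table <ref>), so it will not reproduce the reported threshold of 9.06; it only shows how ℛ_2(λ) < 1 and the equivalent lower bound on λ are computed.

# Placeholder parameter values, NOT those of table <ref>.
p0 = dict(betaE=8.0, nuE=0.05, deltaE=0.03, nu=0.49,
          deltaF=0.04, deltaS=0.12, gammas=1.0)

def R2(lam, p):
    """Offspring number of the closed loop with u = lam * M."""
    return (p["deltaS"] * p["betaE"] * p["nu"] * p["nuE"]) / (
        p["deltaF"] * (p["nuE"] + p["deltaE"]) * (p["deltaS"] + p["gammas"] * lam))

def lambda_threshold(p):
    """Smallest gain guaranteeing R2(lam) < 1."""
    return ((p["betaE"] * p["nu"] * p["nuE"] - (p["nuE"] + p["deltaE"]) * p["deltaF"])
            * p["deltaS"]) / (p["gammas"] * (p["nuE"] + p["deltaE"]) * p["deltaF"])

lam = 22.0
print("threshold:", lambda_threshold(p0), " R2(22) < 1:", R2(lam, p0) < 1.0)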
Also in our work, we did not consider the pest population's spatial distribution. This, too, has an impact in practical terms. In future work, we will construct an observer for these dynamics that can estimate the state from easily measurable variables, and we will also integrate the spatial aspect into this dynamical model. As stated in the introduction, although the paper is mostly written for the specific case of mosquitoes, our results can be extended to the case of other pests for which the Sterile Insect Technique is pertinent. § ACKNOWLEDGEMENTS The authors would like to thank Hervé Bossin and René Gato for the very interesting discussions that helped them identify feedback laws that can be useful for field applications and to be aware of their limitations. We hope that our future collaborations will allow us to develop and apply the ideas put forward in this work in field interventions and learn from the results to be able to improve our strategies.
http://arxiv.org/abs/2307.02728v1
20230706022705
Hierarchical Empowerment: Towards Tractable Empowerment-Based Skill-Learning
[ "Andrew Levy", "Sreehari Rammohan", "Alessandro Allievi", "Scott Niekum", "George Konidaris" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.RO" ]
General purpose agents will require large repertoires of skills. Empowerment—the maximum mutual information between skills and the states—provides a pathway for learning large collections of distinct skills, but mutual information is difficult to optimize. We introduce a new framework, Hierarchical Empowerment, that makes computing empowerment more tractable by integrating concepts from Goal-Conditioned Hierarchical Reinforcement Learning. Our framework makes two specific contributions. First, we introduce a new variational lower bound on mutual information that can be used to compute empowerment over short horizons. Second, we introduce a hierarchical architecture for computing empowerment over exponentially longer time scales. We verify the contributions of the framework in a series of simulated robotics tasks. In a popular ant navigation domain, our four level agents are able to learn skills that cover a surface area over two orders of magnitude larger than prior work. § INTRODUCTION General purpose robots that can execute a wide array of skills have the potential to greatly improve human productivity. Yet the question of how to enable agents to learn large sets of skills remains a central challenge in artificial intelligence research. Empowerment <cit.> provides a compelling pathway for implementing such general purpose robots. As the mutual information between skills and the states to which they lead, empowerment enables agents to learn the largest space of skills such that each skill can reliably reach a distinct area of the state space. A major problem with using empowerment for skill-learning is that it is difficult to compute. Recent work <cit.> has made some progress using Reinforcement Learning (RL) <cit.> to optimize a variational lower bound on empowerment, but these methods are limited to learning small spaces of skills. There are two problems with the current approach of using RL to compute empowerment. The first issue is the reward function employed by existing methods does not incentivize skills to target particular regions of the state space when there is significant overlap among the skills. For instance, at the start of training, when most of the skill space generally visits the same states near the starting state, the reward function used by many empowerment-based skill-learning methods will not encourage the skills to differentiate. The second problem is that it will be difficult to use RL to compute long-term empowerment as RL has consistently struggled at learning long horizon tasks <cit.>. Central to our work is the insight from <cit.> that goal-conditioned RL can be formulated as a variational lower bound on the mutual information between skills and states.
Goal-Conditioned RL offers a promising lower bound on mutual information because it uses a reward function that explicitly encourages skills to specialize and target particular states. However, <cit.> do not provide a way to learn the distribution of goal states (i.e., skills) used in goal-conditioned RL and instead require a hand-crafted space of goal states. Further, the authors do not offer a practical approach for computing long-term empowerment. This approach to skill-learning requires domain expertise. In addition, if the hand-crafted distribution of goal states is either significantly smaller or larger than the space of reachable states, then the approach of <cit.> only computes a loose lower bound on empowerment as the agents either learns too few skills or too many redundant skills. We introduce a framework, Hierarchical Empowerment, that takes a step toward tractable long-term empowerment computation. The framework makes two contributions. The first contribution, Goal-Conditioned Empowerment, is a new variational lower bound objective on empowerment that extends <cit.> by learning the space of achievable goal states. We show that after applying the reparameterization trick <cit.>, our mutual information objective with respect to the learned goal space is a maximum entropy <cit.> bandit problem, in which the goal space is rewarded for being large and containing achievable goals. Optimizing the full Goal-Conditioned Empowerment objective takes the form of an automated curriculum of goal-conditioned RL problems. Figure <ref>(Left) illustrates this idea. The second contribution of our framework is a hierarchical architecture for scaling Goal-Conditioned Empowerment to longer time horizons. Under this architecture, Goal-Conditioned Empowerment is implemented at multiple levels of hierarchy. What distinguishes each level is the action space for the goal-conditioned policy, which is set to the learned goal space at the level below. This architecture enables each level up the hierarchy to compute empowerment at an exponentially increasing time scale. At the same time, each policy need only learn a short sequence of decisions, making optimizing empowerment with RL more tractable. Figure <ref>(Right) illustrates this architecture. We evaluate the two proposed contributions of our framework in a series of simulated robotic navigation tasks. The experimental results support both contributions. Goal-Conditioned Empowerment outperforms the baseline at learning the short-term empowerment of states. In addition, our results indicate that hierarchy is needed to compute long-term empowerment as agents with more levels of hierarchy outperformed those with fewer. A video presentation of our results is available at the following url: <https://www.youtube.com/watch?v=kwLUYMsMRNI>. § BACKGROUND §.§ Goal-Conditioned Markov Decision Processes (MDPs) Central to our work is the concept of Goal-Conditioned MDPs <cit.>, which represent a decision making setting in which an agent needs to learn a distribution of tasks. We define Goal-Conditioned MDPs by the tuple {𝒮,p(s_0),𝒢,p(g),𝒜,T(s_t+1|s_t,a_t),ϵ,r(s_t,g,a_t)}. 𝒮 is the space of states. p(s_0) is the distribution of starting states. 𝒢 is the space of goals and is assumed to be a subset of 𝒮. p(g) is the distribution of goals for the agent to achieve. 𝒜 is the action space. T(s_t+1|s_t,a_t) is the transition dynamics of the environment. We assume episodes terminate in an absorbing state when the agent is within an ϵ-ball of the goal state. 
That is, for some distance function d(·), if d(s_t,g) < ε, then p(s_t+1|s_t,a_t,g) = δ, in which δ is the impulse function, when s_t+1=s_t. Otherwise, if d(s_t,g) ≥ ε, p(s_t+1|s_t,a_t,g) = T(s_t+1|s_t,a_t). r(s_t,g,a_t) is the reward function for a (state,goal,action) tuple. A policy π_θ in a Goal-Conditioned MDP is a mapping from states and goals to a distribution over the action space. The objective in a goal-conditioned MDP is to learn the parameters θ of the policy that maximize the sum of discounted rewards averaged over all goals and goal-conditioned trajectories: max_θ 𝔼_g ∼ p(g), τ∼ p(τ|g)[∑_t=0^∞γ^t r(s_t,g,a_t)], in which τ is a trajectory of states and actions (s_0,a_0,s_1,a_1,…). §.§ Noisy Channels, Channel Capacity, and Mutual Information Another important concept in our work is the noisy channel from Information Theory <cit.>. A noisy channel is defined by the tuple {𝒳,p(y|x),𝒴}. 𝒳 represents the set of inputs to the channel (e.g., the dot, dash, and space used for sending messages over a telegraph). p(y|x) is the conditional distribution representing the probability of the channel outputting y, from the set of outputs 𝒴, given x was input into the channel. A key result from Information Theory is a formula for the capacity of a channel, which represents the number of inputs (in bits) that can be sent over a noisy channel reliably, meaning the input to the channel can be determined from the output with arbitrarily small error. <cit.> discovered that the channel capacity 𝒞 = max_p(x) I(X;Y), in which p(x) is the input distribution and I(X;Y) is the mutual information of the channel.
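To make the capacity formula 𝒞 = max_p(x) I(X;Y) concrete, here is a toy computation for a small discrete channel, using the mutual information defined in the next paragraph; for the skill channel introduced below, x plays the role of the skill z, y the role of the terminal state s_n, and the capacity is the empowerment of s_0. The transition matrix is invented purely for illustration, and the maximization over input distributions is done by crude random search rather than the standard Blahut-Arimoto iteration.

import numpy as np

# Toy channel: 3 inputs, 4 outputs; row i is p(y | x = i). Values are made up.
P = np.array([[0.8, 0.1, 0.1, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.1, 0.8]])

def mutual_information(p_x, P):
    joint = p_x[:, None] * P                       # p(x, y)
    p_y = joint.sum(axis=0)                        # marginal p(y)
    mask = joint > 0
    return float(np.sum(joint[mask] *
                        np.log2(joint[mask] / (p_x[:, None] * p_y[None, :])[mask])))

print("I(X;Y) for uniform inputs:", mutual_information(np.ones(3) / 3, P))

rng = np.random.default_rng(0)
capacity = max(mutual_information(w, P) for w in rng.dirichlet(np.ones(3), size=5000))
print("approximate channel capacity:", capacity, "bits")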
The mutual information is defined as I(X;Y) = H(Y) - H(Y|X) = H(X) - H(X|Y), in which H(·) refers to the entropy of a distribution. Per the second definition of mutual information, more inputs can be reliably sent over a noisy channel when (i) the alphabet of inputs 𝒳 larger and/or p(x) is more uniform (i.e., H(X) is higher) and (ii) the inputs target nonoverlapping regions of the output space 𝒴 (i.e., H(X|Y) is smaller). §.§ The Skill Channel and Empowerment The noisy channel that this work focuses on is the skill channel. The skill channel is defined by the tuple {𝒵,p(s_n|s_0,z),𝒮}, in which 𝒵 is the set of skills that can be executed from a start state s_0, 𝒮 is the set of states, n is the number of actions contained within the skill, and p(s_n|s_0,z) is the probability of a skill z that started in state s_0 terminating in state s_n. The distribution p(s_n|s_0,z) depends on the skill-conditioned policy π(a_t|s_t,z) and the transition dynamics of the environment. Given that general purpose agents will need to have a wide array of skills in which each skill targets distinct regions of the state space, a promising objective for implementing general purpose agents is to find the channel capacity of the skill channel for a variety of skill start states. Computing the channel capacity, or equivalently maximizing the mutual information, of the skill channel would produce the largest set of skills that each reliably target unique regions of the state space. The channel capacity of the skill channel is known as Empowerment <cit.>, which is thus defined as ℰ(s_0) = max_p(z),πI(Z;S_n|s_0). The mutual information of the skill channel is accordingly defined as I(Z;S_n|s_0) = H(Z|s_0) - H(Z|s_0,S_n). Note that in the literature empowerment is often defined with respect to more simple skills, such as single primitive actions or open loop action sequences <cit.>, but we use the more general definition developed by <cit.> that permits learnable closed loop skills. §.§ Challenges with Computing Empowerment One major difficulty with calculating empowerment is that computing the posterior distribution p(z|s_0,s_n), located within the conditional entropy term H(Z|s_0,S_n), is often intractable. Analytically calculating the posterior distribution would require integrating over the intermediate actions and states a_1,s_1,a_2…,s_n-1,a_n-1. Inspired by <cit.>, many empowerment-based skill-learning approaches <cit.> attempt to overcome the issue by optimizing a variational lower bound on the mutual information of the skill channel. In this lower bound, a learned variational distribution q_ψ(z|s_0,s_n) parameterized by ψ is used in place of the problematic posterior p(z|s_0,s_n). (See <cit.> for original proof of the lower bound.) The variational lower bound is accordingly defined I^VB(Z;S_n|s_0) = H(Z|s_0) + _z ∼ p(z|s_0), s_n ∼ p_θ(s_n|s_0,z)[log q_ψ(z|s_0,s_n)], in which θ represents the parameters of the skill-conditioned policy π_θ(a_t|s_t,z). The common approach in empowerment-based skill-learning is to train the learned variational distribution q_ψ with maximum likelihood learning so that the variational distribution approaches the true posterior. The skill distribution p(z) is typically held fixed so H(Z|s_0) is constant and can be ignored. 
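The standard recipe just described, training q_ψ by maximum likelihood on (s_0, z, s_n) tuples and then using log q_ψ as a reward for the final action (as discussed next), can be sketched in a few lines. The parameterization below, a fixed-variance Gaussian q_ψ whose mean is a linear function of the displacement s_n - s_0 with weight matrix W, is a simplifying assumption chosen for brevity; actual implementations typically use neural networks.

import numpy as np

rng = np.random.default_rng(0)
d_z, d_s, sigma = 2, 2, 1.0
W = np.zeros((d_z, d_s))     # parameters psi of q_psi(z | s0, sn) = N(z; W (sn - s0), sigma^2 I)

def log_q(z, s0, sn):
    mu = W @ (sn - s0)
    return float(-0.5 * np.sum((z - mu) ** 2) / sigma**2
                 - 0.5 * d_z * np.log(2.0 * np.pi * sigma**2))

def max_likelihood_update(batch, lr=1e-2):
    """One gradient ascent step on the average log-likelihood of (s0, z, sn) tuples."""
    global W
    grad = np.zeros_like(W)
    for s0, z, sn in batch:
        x = sn - s0
        grad += np.outer((z - W @ x) / sigma**2, x)     # d log q / d W for one tuple
    W += lr * grad / len(batch)

def skill_reward(z, s0, sn):
    """Reward granted to the final action of the skill; earlier actions receive 0."""
    return log_q(z, s0, sn)

batch = [(rng.normal(size=d_s), rng.normal(size=d_z), rng.normal(size=d_s)) for _ in range(64)]
max_likelihood_update(batch)
s0, z, sn = batch[0]
print(skill_reward(z, s0, sn))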
The last term in the variational lower bound, as observed by <cit.>, is a Reinforcement Learning problem in which the reward for a (state,skill,action) tuple is r(s_t,z,a_t) = 0 for the first n-1 actions and then r(s_n-1,z,a_n-1) = _s_n ∼ p(s_n|s_n-1,a_n)[log q_ψ(z|s_0,s_n))] for the final action. The maximum likelihood objective in this case is _z ∼ p(z|s_0),s_n ∼ p_θ(s_n|s_0,z)[log q_ψ(z|s_0,s_n)]. The objective is optimized by (i) sampling a skill from the skill distribution p(z|s_0), (ii) sampling a terminal state from the distribution p_θ(s_n|s_0,z), and then (iii) updating q_ϕ(z|s_0,s_n) so that the tuple (s_0,z,s_n) becomes more probable. Note that to obtain unbiased estimates of the gradient of this objective for a variety of start state s_0 and skill z combinations, a model is needed to allow p_θ(s_n|s_0,z) to be sampled using the current skill-conditioned policy π_θ. Unfortunately, this reward function does not encourage skills to differentiate and target distinct regions of the state space (i.e., increase mutual information) when the skills are currently overlapping. For instance, at the start of training when skills tend to visit the same states in the immediate vicinity of the start state, the reward q_ψ(z|s_0,s_n) will typically be small for most (s_0,z,s_n) tuples. The undifferentiated reward will in turn produce little change in the skill-conditioned policy, which then triggers little change in the learned variational distribution and reward q_ψ, making it difficult to maximize mutual information. §.§ Goal-Conditioned RL as a Mutual Information Variational Lower Bound <cit.> observed that the goal-conditioned RL objective below is also a variational lower bound on mutual information. _g ∼ p(g|s_0)[_s_1,…,s_n ∼ p_θ(s_1,…,s_n|s_0,g)[log q(g|s_0,s_n)]], q(g|s_0,s_n) = 𝒩(g;s_n,σ_0) The expression in <ref> is a goal-conditioned RL objective as there is an outer expectation over a fixed distribution of goal states p(g|s_0) and an inner expectation with respect to the distribution of trajectories produced by a goal state. The reward function is 0 for all time steps except for the last when it is the log probability of the skill g sampled from a Gaussian distribution centered on the skill-terminating state s_n with fixed standard deviation σ_0 selected by the designer. Like any goal-conditioned RL reward function, this reward encourages goal-conditioned policy π_θ(s_t,g) to target the conditioned goal as the reward is higher when the goal g is near the skill-terminating state s_n. The goal-conditioned objective in Equation <ref>, which reduces to _g ∼ p(g|s_0),s_n ∼ p(s_n|s_0,g)[log q(g|s_0,s_n)], is also a variational lower bound on the mutual information of the skill channel. The distribution of skills is represented by a distribution of goal states, and a variational distribution—in this case, a fixed variance Gaussian distribution—is used in place of the true posterior p(g|s_0,s_n). This variational lower bound is appealing because it explicitly encourages skills to target specific states. Even when there is significant overlap among the skills in regards to the states visited, goal-conditioned RL will still encourage skills to specialize and achieve distinct states. One issue with using goal-conditioned RL as a variational lower bound on mutual information is the hand-crafted distribution of skills p(g|s_0). 
If this is significantly smaller or larger than the region of states that can be reached in n actions by the goal-conditioned policy, then maximizing the goal-conditioned RL objective would only serve as a loose lower bound on empowerment because the objective would either learn fewer skills than it is capable of or too many redundant skills. Another issue with the objective is that it is not a practical objective for learning lengthy goal-conditioned policies given that RL has consistently struggled at performing credit assignment over long horizons using temporal difference learning <cit.>. Thus, goal-conditioned RL is not an appealing objective for computing long-term empowerment. § HIERARCHICAL EMPOWERMENT We introduce a framework, Hierarchical Empowerment, that makes two improvements to using goal-conditioned RL as a variational lower bound on mutual information and empowerment. The first improvement we make is a new variational lower bound, Goal-Conditioned Empowerment, that uses the same fixed variational distribution as <cit.>, but also learns the distribution of skills. The second improvement is to integrate a hierarchical architecture that makes computing the empowerment over long time horizons more tractable. §.§ Goal-Conditioned Empowerment The Goal-Conditioned Empowerment objective is defined ℰ^GCE(s_0,θ,ϕ) = max_π_θ,p_ϕ I^GCE(Z;S_n|s_0). s_0 is the starting state for which empowerment is to be computed. I^GCE(Z;S_n|s_0) is a variational lower bound on the mutual information of the skill channel. π_θ(s_t,z) is the goal-conditioned policy parameterized by θ. We will assume π_θ is deterministic. p_ϕ(z|s_0) is the learned distribution of continuous goal states (i.e., skills) parameterized by ϕ. The mutual information variational lower bound within Goal-Conditioned Empowerment is defined I^GCE_θ,ϕ(Z;S_n|s_0) = H_ϕ(Z|s_0) + _z ∼ p_ϕ(z|s_0), s_n ∼ p_θ(s_n|s_0,z)[log q(z|s_0,s_n)]. Next, we will analyze the variational lower bound objective in equation <ref> with respect to the goal-space policy and goal-conditioned policy separately. An immediate challenge to optimizing Equation <ref> with respect to the goal space parameters ϕ is that ϕ occurs in the expectation distribution. The reparameterization trick <cit.> provides a solution to this issue by transforming the original expectation into an expectation with respect to exogenous noise. In order to use the reparameterization trick, the distribution of goal states needs to be a location-scale probability distribution (e.g., the uniform or the normal distributions). For the remainder of the paper, we will assume p_ϕ(z|s_0) is a uniform distribution. The reparameterization trick will then work as follows. Given that the uniform distribution is a member of the location-scale class of probability distributions, any expectation with respect to p_ϕ(z|s_0) can be transformed into an expectation with respect to an exogenous noise random variable. Specifically, the expectation _z ∼ p_ϕ(z|s_0)[f(z)] for some function f(·) can be transformed into the expectation _ϵ∼𝒰^d(0,1)[f(z)], in which z is reparameterized to be z = g(ϵ,μ_ϕ(s_0)). d is the dimensions of the goal space. ϵ is the exogenous random variable in which each of the d dimensions of ϵ is sampled from the unit uniform random variable. μ_ϕ is a function that provides the location-scale parameters of the uniform goal-space distribution for state s_0: μ_ϕ: 𝒮→ (Location, Scale). 
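A minimal version of this reparameterization, with a stand-in mu_phi in place of the learned goal-space policy and the planar box convention elaborated in the next paragraph, might look like the following sketch; the function names and the constant box are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def mu_phi(s0, d=2):
    """Stand-in for the goal-space policy: center and half width per goal dimension."""
    return np.zeros(d), 5.0 * np.ones(d)

def sample_goal(s0, d=2):
    center, half_width = mu_phi(s0, d)
    eps = rng.uniform(0.0, 1.0, size=d)            # exogenous noise, eps ~ U(0,1)^d
    z = center + (2.0 * eps - 1.0) * half_width    # z = g(eps, mu_phi(s0)); eps = (1,...,1) -> top corner
    return z, eps                                  # gradients flow through center/half_width, not eps

print(sample_goal(np.zeros(2)))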
For each dimension of the goal space, there will be a location parameter designating the location of the center of the uniform distribution and a scale parameter that indicates the half width of the distribution. For instance, if the goal space is the space of reachable (x,y) coordinates, μ_ϕ(s_0) will output a four-dimensional vector. Two of the dimensions will indicate the center of the uniform distribution along the (x,y) dimensions (i.e., the center of the rectangle that represents the learned goal space). The other two dimensions will indicate the half widths of the uniform distributions (i.e., the half widths of the goal space rectangle). Given an ϵ and the location-scale parameters from μ_ϕ(s_0), the function g(·) outputs the sample from the goal space that corresponds to ϵ. For instance, in the two-dimensional goal space example, if ϵ = (1,1), z will be the top right point of the goal space rectangle. Applying the reparameterization trick to the expectation term in Equation <ref> and inserting the fixed variance Gaussian variational distribution used by <cit.>, the variational lower bound objective becomes I^GCE_θ,ϕ(Z;S_n|s_0) = H_ϕ(Z|s_0) + _ϵ∼𝒰^d[_s_n ∼ p(s_n|s_0,z)[log𝒩(s_0 + z; s_n,σ_0)]], in which d is the dimensionality of the goal space and z = g(ϵ,μ_ϕ(s_0)) is reparameterized to be a function of the exogenous unit uniform random variable ϵ and μ_ϕ(s_0), which outputs a vector containing the location-scale parameters of the uniform goal-space distribution. Specifically, for each dimension of the goal space, μ_ϕ(s_0) will output (i) the location of the center of the uniform distribution and (ii) the length of the half width of the uniform distribution. Note also that in our implementation, instead of sampling the skill z from the fixed variational distribution (i.e., 𝒩(z;s_n,σ_0)), we sample the sum z + s_0 (i.e., 𝒩(s_0 + z;s_n,σ_0)). This small change will encourage the goal z to reflect the desired change from the starting state s_0. The mutual information objective in Equation <ref> can be further simplified to I^GCE_θ,ϕ(Z;S_n|s_0) = H_ϕ(Z|s_0) + _ϵ∼𝒰^d[R(s_0,ϵ,μ_ϕ(s_0))], R(s_0,ϵ,μ_ϕ(s_0)) = _s_n ∼ p(s_n|s_0,ϵ,μ_ϕ(s_0))[log𝒩(s_0+z; s_n,σ_0)]. Equation <ref> has the same form as a maximum entropy bandit problem. The entropy term H_ϕ(Z|s_0) encourages the goal space policy μ_ϕ to output larger goal spaces. The reward function _ϵ∼𝒰^d[R(s_0,ϵ,μ_ϕ(s_0))] in the maximum entropy bandit problem encourages the goal space to contain achievable goals. This is true because R(s_0,ϵ,μ_ϕ(s_0)) measures how effective the goal-conditioned policy is at achieving the particular goal z, which per the reparameterization trick is g(ϵ,μ_ϕ(s_0)). The outer expectation _ϵ∼𝒰^d[·] then averages R(s_0,ϵ,μ_ϕ(s_0)) with respect to all goals in the learned goal space. Thus, goal spaces that contain more achievable goals will produce higher rewards. The goal space policy μ_ϕ can be optimized with Soft Actor-Critic <cit.>, in which a critic R_ω(s_0,ϵ,a) is used to approximate R(s_0,ϵ,a) for which a represents actions by the goal space policy. In our implementation, we fold in the entropy term into the reward, and estimate the expanded reward with a critic. We then use Stochastic Value Gradient <cit.> to optimize. Under this optimization strategy, the objective for training the critic is J(ω) = _(s_0,ϵ,a) ∼β[(R_ω(s_0,ϵ,a) - Target)^2], Target = _s_n ∼ p(s_n|s_0,z)[log𝒩(s_0 + z; s_n, σ_0) - log p(z|s_0)]. 
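Written out for the uniform box goal space (the surrounding terms are explained just below), the target combines the goal-reaching log-density with the folded-in entropy bonus, which for a box is simply its log volume. A direct, sample-based transcription could be the helper below; the argument names are assumptions for illustration.

import numpy as np

def critic_target(s0, z, s_n, half_widths, sigma0):
    d = z.shape[0]
    log_gauss = (-0.5 * np.sum((s0 + z - s_n) ** 2) / sigma0**2
                 - 0.5 * d * np.log(2.0 * np.pi * sigma0**2))   # log N(s0 + z; s_n, sigma0^2 I)
    entropy_bonus = float(np.sum(np.log(2.0 * half_widths)))    # -log p(z|s0) for the uniform box
    return log_gauss + entropy_bonus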
In Equation <ref>, the learned critic is trained to be closer to a target value using (s_0,ϵ,a) tuples from a buffer β. The target is obtained by first executing the skill z=g(ϵ,a), in which a is a goal space action (i.e., the vector of location-scale parameters for the uniform distribution goal space), and then sampling the terminal state s_n. The target is then the sum of (i) the log𝒩(s_0 + z; s_n, σ_0), which measures how effective the goal-conditioned policy is at achieving the goal state s_0 + z and (ii) -log p(z|s_0), which comes from the entropy reward that was folded in. The gradient for the goal-space policy μ_ϕ can then be determined by passing the gradient through the critic using the chain rule: ∇_ϕ I^GCE(Z;S_n|s_0) = ∇_ϕμ_ϕ(s_0) _ϵ∼𝒰^d[∇_a R_ω(s_0,ϵ,a)|_a = μ_ϕ(s_0)]. With respect to the goal-conditioned policy π_θ, the mutual information objective reduces to _z ∼ p_ϕ(z|s_0), s_n ∼ p_θ(s_n|s_0,z)[log q(z|s_0,s_n)], because the entropy term can be ignored. This is same objective as the goal-conditioned RL variational lower bound introduced by <cit.> except the outer expectation is with respect to a learned goal space. We optimize this objective with respect to the deterministic goal-conditioned policy with the Deterministic Policy Gradient <cit.>. In our implementation, we train the Goal-Conditioned Empowerment mutual information objective by iterating between updates to the goal-conditioned policy and the goal-space policy—similar to an automated curriculum of goal-conditioned RL problems. In each step of the curriculum, the goal-conditioned policy is trained to reach goals within and nearby the current goal space. The goal space is then updated to reflect the current reach of the goal-conditioned policy. Detailed algorithms for how the goal-conditioned actor-critic and the goal-space actor-critic are updated are provided in section <ref> of the Appendix. One change is that instead of having the goal space policy μ_ϕ output the half widths of the uniform distribution, we have the policy output the log of the half widths. The reason for this change is that the entropy part of the reward -log p(z|s_0) only provides a weak signal for the goal space to grow when the goal space reaches a certain size. Given that the goal space is a uniform distribution, -log p(z|s_0) reduces to ∑_i=0^d logμ^w_i(s_0) + constant, in which μ^w_i(s_0) represents the half width of the i-th dimension of the goal space. Because log is a concave function, the entropy reward ∑_i=0^d logμ^w_i(s_0) produces a diminishing incentive for the goal space to grow. In contrast, by having the goal space policy output the log of the half widths, the entropy portion of the reward becomes ∑_i=0^d μ^w_i(s_0) + constant, which does not diminish as the goal space half widths μ^w_i(s_0) grow larger. Another difference is that we optimize a regularized version of the goal-conditioned policy objective in Equation <ref>. Specifically, we optimize the objective _z ∼ p_ϕ(z|s_0), s ∼ρ_θ(s|s_0,z)[log𝒩(s_0 + z; s,σ_0)], in which ρ_θ(s|s_0,z) is the (improper state visitation distribution): ρ_θ(s|s_0,z) = ∑_t=0^n-1 p(s_0 → s,t,z), in which is the p(s_0 → s,t,z) is the probability of moving from the initial state s_0 to state s in t actions when the goal-conditioned policy is pursuing goal z. The main difference between the regularized version of the objective is that the reward r(s_t,z,a_t) = _s_t+1∼ p(s_t+1|s_t,a_t)[log𝒩(s_0 + z|s_t+1,σ_0)] occurs at each of the n steps instead of just the last step. 
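Both implementation changes just described amount to very small pieces of code. The snippet below (same box parameterization and placeholder names as in the earlier sketches) shows the densified per-step reward of the regularized objective and the log-half-width output convention, whose effect on the entropy term is discussed again in the appendix.

import numpy as np

def per_step_reward(s0, z, s_next, sigma0):
    """Regularized objective: the goal-reaching reward is paid at every step, not only at s_n."""
    d = z.shape[0]
    return float(-0.5 * np.sum((s0 + z - s_next) ** 2) / sigma0**2
                 - 0.5 * d * np.log(2.0 * np.pi * sigma0**2))

def box_from_policy_output(center, log_half_widths):
    """The goal-space policy outputs log half widths; exponentiate to recover the box.
    With this convention the entropy term is linear in the policy outputs, so the
    incentive to keep enlarging the box does not vanish as the box grows."""
    return center, np.exp(log_half_widths)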
We found this to produce more stable goal-conditioned policies. In the Supplementary Materials, we provide a derivation showing this is a regularized version of the original goal-conditioned policy objective. §.§ Hierarchical Architecture By learning skills that target particular regions of the reachable state space over long horizons, agents could learn skills that require varying amounts of time, from skills that just require a handful of primitive actions to potentially skills that require days to execute. General purpose agents will need to have both short and long time horizon skills. One way to learn both short and temporally extended skills is to be able to compute the long-term empowerment of a state (i.e., learn skills that target the state space that is reachable over long horizons). A key problem with Goal-Conditioned Empowerment is that it is not a practical solution for computing long-term empowerment because it is difficult to optimize temporally extended policies with RL. However, Goal-Conditioned Empowerment can be effective at learning skills containing short sequences of actions. We introduce a solutions that takes advantage of this capability. To scale Goal-Conditioned Empowerment to longer horizons we borrow ideas from Goal-Conditioned Hierarchical Reinforcement Learning (GCHRL), sometimes referred to as Feudal RL <cit.>. GCHRL approaches have shown that temporally extended policies can be learned by nesting multiple short sequence policies. In the Hierarchical Empowerment framework, this approach is implemented as follows. First, the designer implements k levels of Goal-Conditioned Empowerment, in which k is a hyperparameter set by the designer. That is, the agent will learn k goal-conditioned and goal space policies (π_θ_0,μ_ϕ_0,…,π_θ_k-1,μ_ϕ_k-1). Second, for all levels above the bottom level, the action space is set to the learned goal space from the level below. (The action space for the bottom level remains the primitive action space.) For instance, for an agent with k=2 levels of skills, the action space for the top level goal-conditioned policy π_θ_1 at state s will be set to the goal space output by μ_ϕ_0(s). The top level goal-space μ_ϕ_1 and goal-conditioned policies π_θ_1 will thus seek to learn the largest space of states reachable in n subgoals, in which each subgoal can contain at most n primitive actions. With this nested structure inspired by GCHRL, the time horizon of the computed empowerment can grow exponentially with k, and no goal-conditioned policy is required to learn a long sequence of actions, making optimizing Goal-Conditioned Empowerment with RL more tractable. Figure <ref> visualizes the proposed hierarchical architecture in one of our domains. §.§ Limitations The Hierarchical Empowerment framework has two significant limitations as currently constructed. The first is that it can only be applied to domains in which the state space has large contiguous regions of reachable states that can be covered by d-dimensional goal space boxes. In the framework, the uniform goal space distribution will only expand to include achievable states. If regions adjacent to the goal space are not reachable, the goal space will not expand. Consequently, if the state space contains large regions of unreachable states (e.g., a pixel space), the framework will not be able to learn a meaningful space of skills. The second limitation is that a model of the transition dynamics is required. 
The most important need for the model is to simulate the temporally extended subgoal actions so that the higher level goal-conditioned policies can be trained with RL. We also take advantage of the model to run episodes in parallel to make training more efficient. However, the existing empowerment-based skill-learning approaches that learn the variational distribution q_ψ(z|s_0,s_n) also require a model in large domains to obtain unbiased gradients of the maximum likelihood objective. The maximum likelihood objective requires sampling large batches of skills from large sets of skill starting states and then executing the current skill-conditioned policy. In large domains, a model of the transition dynamics is required for this to be feasible. § EXPERIMENTS The purpose of our experiments is to evaluate the two claims of our framework. The first claim is that Goal-Conditioned Empowerment, which learns skills using an automated curriculum of goal-conditioned RL problems, can more effectively compute empowerment (i.e., learn the largest space of distinct skills) than existing empowerment-based skill learning approaches that rely on a learned variational distribution. For this evaluation, we compare Goal-Conditioned Empowerment to Diversity Is All You Need (DIAYN) <cit.>. The skill-learning objective used by DIAYN is very similar to the typical empowerment-based skill-learning objective described in section <ref>. DIAYN employs a fixed space of skills p(z) and uses a learned variational distribution q_ψ(z|s_t) trained with maximum likelihood. One difference from the prototypical empowerment objective is that DIAYN optimizes a regularized version of that objective in which the reward log q_ψ(z|s_t) occurs at each time step. DIAYN also includes the entropy term H(π_θ(a_t|s_t,z)) to help skill-conditioned policy explore. We make some changes to DIAYN to make the comparison more fair. One change is we implement DIAYN with a continuous skill space in which p(z) is sampled from a uniform distribution. Also, as in Goal-Conditioned Empowerment, we condition the skills on a start state (i.e., p(z) → p(z|s_0) and q_ϕ(z|s_t) → q_ϕ(z|s_0,s_t)) and limit the number of primitive actions included in the skill. With these changes, the implementation of DIAYN is similar to HIDIO <cit.>, except HIDIO conditions the learned variational distribution on entire trajectories and not just individual states. The second claim of our framework that our experiments seek to evaluate is that hierarchy is required to compute long-term empowerment. To assess this claim, we compare Hierarchical Empowerment agents that learn one, two, and three levels of skills. §.§ Domains Although there are some emerging benchmarks for unsupervised skill-learning <cit.>, we cannot evaluate Hierarchical Empowerment in these benchmarks due to the limitations of the framework. The benchmarks either (i) do not provide access to a model of the transition dynamics, (ii) use downstream tasks that require achieving goals states with too many dimensions, or (iii) do not provide downstream goal-reaching tasks. Instead, we implement our own domains in the popular Ant and Half Cheetah environments. We implement the typical open field Ant and Half Cheetah domains involving no barriers. We also implement environments with barriers. In our Ant Cage and Half Cheetah Cage domains, the agent must press a button to remove a barrier. 
The purpose of implementing domains with barriers was to show that the learned goal spaces do adjust to the shape of the reachable state space. All our domains were implemented using Brax <cit.>, which provides access to a model of the transition dynamics. Each experiment involves two phases of training. In the first phase, there is no external reward, and the agent simply optimizes its mutual information objective to learn skills. In the second phase, the agent learns a policy over the skills to solve a downstream task. In our experiments, the downstream task is to achieve a goal state uniformly sampled from a hand-designed continuous space of goals. Additional details on how the first and second phases are implemented are provided in section <ref> of the Appendix. §.§ Results We present our results in a few ways. Figure <ref> provides charts for the phase two performances for most of the experiments. Results of the three versus four level Ant experiment is provided in section <ref> of the Appendix. We also provide a video presentation showing sample episodes of the trained agents at the following url: <https://www.youtube.com/watch?v=kwLUYMsMRNI>. In addition, we provide before and after images of the learned goal spaces in section <ref> of the Appendix. The experimental results support both contributions of the framework. Goal-Conditioned Empowerment and DIAYN were compared in the Ant Field – Small and Half Cheetah – Small domains. Per the phase two results in <ref>, Goal-Conditioned Empowerment was able to complete both tasks while DIAYN was not able to solve either task. As we describe section <ref> of the Appendix, in our implementation of DIAYN we tracked the scale of the learned variational distribution q_ϕ(z|s_0,s_t). As we hypothesized, the standard deviation of the variational distribution did not decrease over time, indicating that the the skills were not specializing and targeting specific states. On the other hand, per the performance charts and video, Goal-Conditioned Empowerment was able to learn skills to achieve goals in the phase two goal space. The experimental results also support the need for hierarchy to compute the long-term empowerment of states. In both the two versus three level agents (i.e., one skill level vs. two skill level agents) and in the three versus four level comparison, the agents with the additional level outperformed, often significantly, the agent with fewer levels. In addition, the surface area of the skills learned by the four level Hierarchical Empowerment agents is significantly larger than has been reported in prior empowerment-based skill-learning papers. In our largest Ant domain solved by the four level agent, the phase 2 goal space with size 800x800 is over four orders of magnitude larger than the surface area reported in the DIAYN Ant task (4x4), and over 2 orders of magnitude larger than the area (30x30) partially achieved by Dynamics-Aware Unsupervised Discovery of Skills (DADS) <cit.>. Section <ref> of the Appendix provides a visual of the differences in surface area coverage among Hierarchical Empowerment, DADS, and DIAYN. In terms of wall clock time, the longest goals in the 800x800 domain required over 13 minutes of movement by a reasonably efficient policy. § RELATED WORK There is an extensive history of skill-learning approaches <cit.>. Here we review the categories of skill-learning most closely related to our proposed framework. Empowerment. 
There are several works that take a different approach to maximizing the mutual information of the skill channel. DADS <cit.> optimizes the symmetric version of the mutual information of the skill channel: H(S_n|s_0) - H(S_n|s_0,Z). Similar to the empowerment-based skill-learning methods mentioned previously, DADS optimizes a variational lower bound on mutual information using reinforcement learning. In this case, DADS replaces the skill channel p(s_n|s_0,z) distribution with the learned variational distribution q_ϕ(s_n|s_0,z), which is trained with maximum likelihood and used as part of the reward function when training the skill-conditioned policy. However, this approach will still face the same issue as those that use the learned variational distribution q_ϕ(z|s_0,s_n). During times when the skills are not differentiated, the reward function will not provide a strong incentive for the skills to differentiate, in turn limiting the space of skills that is learned. Additional approaches in which empowerment is used to learn skills include SNN4HRL <cit.>, which augments a task reward with a mutual information reward that encourages more diverse skills. LSD <cit.> optimizes a modified version of the skill channel mutual information to force the agent to learn more dynamic and long horizon skills. However, general purpose agents need to learn skills the that are both static and dynamic as well as short and long horizon. The empowerment objective is capable of learning all of these skill types so it may offer a more promising objective. Automated Curriculum Learning Similar to our framework are several methods that implement automated curricula of goal-conditioned RL problems. Maximum entropy-based curriculum learning methods <cit.> separately optimize the two entropy terms that make up the mutual information of the skill channel. That is, they try to learn a goal space with large entropy H(Z|s_0) while also learning a goal-conditioned policy to reduce the conditional entropy H(Z|s_0,S_n). In Skew-Fit <cit.>, the distribution of goals tested at each phase of the curriculum is determined by a generative model, which is trained to model the distribution of states visited by the agent's current set of skills. The generative model is skewed to be closer to a uniform distribution so that states that are new to the agent are tested more frequently. In MEGA <cit.>, the set of goal states tested is determined using a learned density model of the states that have been visited previously. MEGA selects states with low probabilities per the density function so the states along the frontier of the visited state space can be tested more often. In both Skew-Fit and MEGA, a goal-conditioned policy is simultaneously but separately trained to minimize H(Z|s_0,s_n). Similarly, EDL <cit.> uses a three stage process to separately optimize the entropies H(S_n|s_0) and H(S_n|s_0,Z). In the exploration stage, an exploration algorithm <cit.> is used to try to learn a uniform distribution over the state space. In the discover phase, the state space is encoded into a latent skill space in order to try to produce a high entropy skill space. In the learning phase, a goal-conditioned policy is trained to reach goal latent states. There are two concerns with this approach of separately optimizing the goal space and goal-conditioned policy. One issue is that it is unlikely to scale to more realistic settings involving some randomness where many visited states are not consistently achievable. 
When applying the separate optimization approach to this more realistic setting setting, the skill space will still grow, producing higher entropy H(Z|s_0,s_n). However, because many of the goal states are not achievable, the conditional entropy H(Z|s_0,s_n) will also grow, resulting in a lower empowerment skill space. A more promising approach is to jointly train the two entropy terms, as in our approach, so that the goal space only expands to include achievable goals. However, changes will need to be made to our framework to handle settings in which states are not achievable. A second issue with these methods is that is unclear how hierarchy can be integrated to better handle long horizon tasks. Another class of automated curriculum learning methods implements curricula of goal-conditioned RL problems based on learning progress. Goal GAN <cit.> trains a GAN <cit.> to output goals in which the agent has made an intermediate level of progress towards achieving. CURIOUS <cit.> and SAGG-RIAC <cit.> output goals in which the agent has shown improvement. Bonus-based Exploration. Empowerment is related to another class of intrinsic motivation algorithms, Bonus-based Exploration. These approaches incentivize agents to explore by augmenting the task reward with a bonus based on how novel a state is. The bonuses these methods employ include count-based bonuses <cit.>, which estimate the number of times a state has been visited and provide a reward inversely proportional to this number. Another popular type of bonus is model-prediction error <cit.> which reward states in which there are errors in the forward transition or state encoding models. One issue with bonus-based exploration methods is that it is unclear how to convert the agent's exploration into a large space of reusable skills. A second issue, particularly with the model-prediction error bonuses, is that these methods can be difficult to implement as it is unclear how frequently to update the model that determines the bonus relative to the exploration policy. § CONCLUSION We make two improvements to computing empowerment with Reinforcement Learning. The first contribution is a new variational lower bound on empowerment that combines the practical variational distribution from goal-conditioned RL with a learnable skill space — a combination that can make it easier to calculate short-term empowerment. The second improvement is a hierarchical architecture that makes computing long-term empowerment more tractable. We hope future work is able to overcome the framework's limitations and generalize the approach to larger classes of domains. § THREE VS. FOUR LEVEL AGENT RESULTS Figure <ref> shows the phase 2 results for the 3 vs. 4 level agents (i.e., 2 vs. 3 skill levels) in the 800x800 Ant domain. 800x800 represents the dimensions of the phase 2 goal space, from which goals are uniformly sampled at the start of each phase 2 episode. As in the results showed earlier in Figure <ref>, the y-axis shows the average minimum distance to goal given a specific period of phase 2 training. The average is over (i) four phase 1 agents that were trained and (ii) 400 goals sampled from the phase 2 goal space for each phase 1 agent. A video of a trained four level agent in the 800x800 domain is available at the following url: <https://www.youtube.com/watch?v=2SW8mX-FXnc>. 
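For completeness, the phase-2 metric described above (the minimum distance to the goal reached during an episode, averaged over trained phase-1 agents and over sampled goals) is straightforward to compute from logged distance traces; the array name and the dummy data below are assumptions used only to show the reduction.

import numpy as np

# dists[a, g, t]: distance to the goal at step t of the phase-2 episode for agent a and goal g,
# e.g., 4 phase-1 agents x 400 sampled goals x episode length (dummy data here).
dists = np.random.default_rng(0).uniform(0.0, 10.0, size=(4, 400, 50))

avg_min_distance = dists.min(axis=2).mean()   # min over the episode, mean over agents and goals
print(avg_min_distance)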
§ COMPARISON OF SURFACE AREA COVERAGE Figure <ref> compares the surface area coverage of the skills learned by Hierarchical Empowerment in the 800x800 domain to the surface area coverage in the published results of DIAYN <cit.> and DADS <cit.>. The 800x800 domain is 40,000x and ≈ 700x larger than the surface area covered by the skills of DIAYN (4x4) and DADS (30x30), respectively. Note that the 30x30 surface area in the DADS results was only partially covered. § GROWTH OF GOAL SPACES Figures <ref>-<ref> provide visuals of the learned goal spaces before and after phase 1 for the agents that learn two and three levels of skills. The level 0, level 1, and level 2 (if applicable) goal spaces are shown by the blue, green, and red goal spaces, respectively. In all experiments, the goal spaces grow significantly from phase 1 of training. In the cage domains, one can also see how the learned goal spaces adjust to the barriers. In the ant cage domain, given that the button is in the northeast direction of the cage, the learned goal space expands more in the northeast direction. Also, in the half cheetah domain, the learned goal space is largely held within the cage. § ALGORITHM AND IMPLEMENTATION DETAILS In this section, we provide a detailed algorithm for how Hierarchical Empowerment is implemented. Before describing the algorithm, we note two small changes to the approach described so far that had a meaningful impact on our results. One change is that instead of having the goal space policy μ_ϕ output the half widths of the uniform distribution, we have the policy output the log of the half widths. The reason for this change is that the entropy part of the reward -log p(z|s_0) only provides a weak signal for the goal space to grow when the goal space reaches a certain size. This problem occurs because -log p(z|s_0) reduces to ∑_i=0^d logμ^w_i(s_0) + constant, in which μ^w_i(s_0) represents the half width of the i-th dimension of the goal space. Because log is a concave function, the entropy reward ∑_i=0^d logμ^w_i(s_0) produces a diminishing incentive for the goal space to grow. In contrast, by having the goal space policy output the log of the half widths, the entropy portion of the reward becomes ∑_i=0^d μ^w_i(s_0) + constant, which does not diminish as the goal space half widths μ^w_i(s_0) grow larger. Another difference is that we optimize a regularized version of the goal-conditioned policy objective in Equation <ref>. Specifically, we optimize the objective _z ∼ p_ϕ(z|s_0), s ∼ρ_θ(s|s_0,z)[log𝒩(s_0 + z; s,σ_0)], in which ρ_θ(s|s_0,z) is the (improper) state visitation distribution: ρ_θ(s|s_0,z) = ∑_t=0^n-1 p(s_0 → s,t,z), in which is the p(s_0 → s,t,z) is the probability of moving from the initial state s_0 to state s in t actions when the goal-conditioned policy is pursuing goal z. The main difference between the regularized version of the objective is that the reward r(s_t,z,a_t) = _s_t+1∼ p(s_t+1|s_t,a_t)[log𝒩(s_0 + z|s_t+1,σ_0)] occurs at each of the n steps instead of just the last step. We found this to produce more stable goal-conditioned policies. Algorithm <ref> provides pseudocode for Hierarchical Empowerment. The approach is implemented by repeatedly iterating through the k levels of the agent and updating the goal-conditioned actor-critic and goal space actor-critic. Algorithm <ref> provides the goal-conditioned actor-critic update function. 
The purpose of this function is to train the goal-conditioned policy to be better at achieving goals within and nearby the current goal space. We assume a deterministic goal-conditioned policy, so the update function uses Deterministic Policy Gradient (DPG) <cit.> to update the actor. Algorithm <ref> provides the goal space actor-critic update function. The purpose of this algorithm is to enable the goal space policy μ_ϕ to find the largest space of achievable goals. Given the deterministic goal space policy, this function also uses a DPG-style gradient. Hierarchical Empowerment can be used with any skill start state distribution p^level(s_0). We employ a process that requires no additional domain expertise. Beginning from a state sampled from the task's initial state distribution, our strategy repeats the following two step process for N iterations. First, we sample a skill from top level goal space based on the agent's current state. Second, we greedily execute the goal-conditioned policy at each level until top level skill is complete. Prior to execution of the first action by the goal-conditioned policy at each level i, the current state is be saved into level i's initial state buffer. The start state distribution for each level can then uniformly sample the start state buffer at that level. With this strategy, as the top level goal space grows, the start state distributions at all levels grows. § KEY HYPERPARAMETERS Tables <ref> and <ref> show the key hyperparameters for phase 1 and phase 2 training, respectively. In Table <ref>, k refers to the number of skills levels in each agent (note than this does not include the phase 2 policy). n represents the maximum number of actions the goal-conditioned policy at each level has to reach its goal. For instance, for a k=2 agent with n=[20,10], this means the level 0 (i.e., the bottom level) goal-conditioned policy has at most 20 actions to reach its goal, while level 1 has at most 10 actions. σ_0^gc and σ_0^gs are the standard deviations of the fixed variance Gaussian variational distributions used for the reward functions for the goal-conditioned and goal space policies, respectively. The larger standard deviation for training the goal space policy provides some additional flexibility, helping goal space growth. ε is the goal threshold. The goal-conditioned policy terminates when the agent is within ε of the goal. The last column in <ref> lists the number of epochs of training in phase 1. For the goal-conditioned actor-critics, each epoch of training in phase 1 consists of 10 iterations of Algorithm <ref>, in which the number of gradient steps per iteration S=50. For the goal space actor-critics, each epoch of training consists of a single iteration of Algorithm <ref>, in which the number of gradient steps per iteration S=10. Table <ref> lists key hyperparameters for phase 2 of training. n provides the maximum number of attempts the phase 2 goal-conditioned policy has to achieve its goal. ε provides the phase 2 goal threshold. The last column provides the length of each dimension of the goal space. In the Ant domains, the goal space is two-dimensional, while in the Half Cheetah domains, the goal space is one-dimensional. Also, given that it is easier to move forward than backward in Half Cheetah, we do slightly shift the goal space forward in the half cheetah domains. In all domains, at the start of each episode in phase 2, a goal is uniformly sampled from the phase 2 goal space. 
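The two-step collection strategy just described is also a convenient place to show how the nested levels are executed: each level greedily proposes subgoals for the level below until the bottom level emits primitive actions, and the state observed before the first action at each level is stored in that level's buffer, so the horizon grows like n^k primitive actions. The sketch below is schematic; the environment interface, the per-level deterministic policies and the top-level goal sampler are placeholders rather than the actual implementation.

import numpy as np

def collect_start_states(env, policies, sample_top_goal, k, N, n=10, eps=0.5):
    """Fill per-level buffers with skill start states by executing N top-level skills greedily."""
    buffers = [[] for _ in range(k)]
    s = env.reset()
    for _ in range(N):
        goal = sample_top_goal(s)                     # skill drawn from the top-level goal space
        s = run_level(k - 1, s, goal, env, policies, buffers, n, eps)
    return buffers

def run_level(level, s, goal, env, policies, buffers, n, eps):
    buffers[level].append(s)                          # state before the first action at this level
    for _ in range(n):                                # at most n (sub)goals or primitive actions
        if np.linalg.norm(s - goal) < eps:
            break
        if level == 0:
            s = env.step(policies[0](s, goal))        # primitive action
        else:
            subgoal = policies[level](s, goal)        # lies in the goal space of the level below
            s = run_level(level - 1, s, subgoal, env, policies, buffers, n, eps)
    return s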
§ DIAYN VISUALIZATION After phase 1 of training in our DIAYN implementation there was still significant overlap among the skills in the skill space. We can infer this outcome by visualizing the learned variational distribution. Figure <ref> provides a visualization of what the learned variational distribution typically looked like after phase 1 of training. The green rectangle shows the fixed skills space in our implementation DIAYN, which is the continuous space [-1,1]^2. The blue rectangular prism, is the current skill that is being executed in this image. The blue rectangle shows one standard deviation of the learned variational distribution which is a diagonal Gaussian distribution. This blue rectangle thus shows the probability distribution over skills given the tuple (skill start state, current state). Per the size of the single standard deviation, the learned variational distribution covers most of the skill space, meaning a large space of skills could have led to the current state (i.e., the skills are not differentiated).
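The qualitative picture above can be turned into a simple numeric diagnostic: the fraction of the fixed skill space [-1,1]^2 covered by one standard deviation of the learned diagonal-Gaussian variational distribution. The snippet is illustrative only; the standard deviations below are made-up numbers, not measured values.

import numpy as np

skill_box_half_width = 1.0                 # fixed skill space [-1, 1]^2
sigma = np.array([0.8, 0.9])               # made-up per-dimension std of q_phi(z | s0, s_t)

one_sigma_area = np.prod(np.minimum(2.0 * sigma, 2.0 * skill_box_half_width))
coverage = one_sigma_area / (2.0 * skill_box_half_width) ** 2
print(f"fraction of the skill space inside one std: {coverage:.2f}")   # near 1 => undifferentiated skills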
http://arxiv.org/abs/2307.02096v1
20230705081636
Adaptive multi-stage integration schemes for Hamiltonian Monte Carlo
[ "Lorenzo Nagar", "Mario Fernández-Pendás", "Jesús María Sanz-Serna", "Elena Akhmatskaya" ]
stat.CO
[ "stat.CO", "cs.NA", "math.NA", "stat.ME" ]
1]Lorenzo Nagar cor1 [email protected] [cor1]Corresponding author: 2]Mario Fernández-Pendás 4]Jesús María Sanz-Serna 1,5]Elena Akhmatskaya [1]BCAM - Basque Center for Applied Mathematics, Alameda de Mazarredo 14, 48009 Bilbao, Spain [2]DIPC, Donostia International Physics Center, Manuel Lardizabal Ibilbidea 4, 20018 Donostia, Spain [4]Departamento de Matemáticas, Universidad Carlos III de Madrid, Avenida Universidad 30, 28911 Leganés, Spain [5]Ikerbasque - Basque Foundation for Science, Euskadi Plaza 5, 48009 Bilbao, Spain Hamiltonian Monte Carlo (HMC) is a powerful tool for Bayesian statistical inference due to its potential to rapidly explore high dimensional state space, avoiding the random walk behavior typical of many Markov Chain Monte Carlo samplers. The proper choice of the integrator of the Hamiltonian dynamics is key to the efficiency of HMC. It is becoming increasingly clear that multi-stage splitting integrators are a good alternative to the Verlet method, traditionally used in HMC. Here we propose a principled way of finding optimal, problem-specific integration schemes (in terms of the best conservation of energy for harmonic forces/Gaussian targets) within the families of 2- and 3-stage splitting integrators. The method, which we call Adaptive Integration Approach for statistics, or s-AIA, uses a multivariate Gaussian model and simulation data obtained at the HMC burn-in stage to identify a system-specific dimensional stability interval and assigns the most appropriate 2-/3-stage integrator for any user-chosen simulation step size within that interval. s-AIA has been implemented in the in-house software package without introducing computational overheads in the simulations. The efficiency of the s-AIA integrators and their impact on the HMC accuracy, sampling performance and convergence are discussed in comparison with known fixed-parameter multi-stage splitting integrators (including Verlet). Numerical experiments on well-known statistical models show that the adaptive schemes reach the best possible performance within the family of 2-, 3-stage splitting schemes. Hamiltonian Monte Carlo Multi-stage integrators Adaptive integration Bayesian inference Stability limit Velocity Verlet § INTRODUCTION First introduced for lattice field theory simulations <cit.>, Hamiltonian Monte Carlo (HMC) is nowadays recognized as a popular and efficient tool for applications in Bayesian statistical inference <cit.>. Using gradient information on the posterior distribution, HMC reduces random walk behavior typical of many conventional Markov Chain Monte Carlo (MCMC) samplers and makes it possible to sample high dimensional and complex distributions more efficiently than simpler MCMC algorithms. The use of Hamiltonian dynamics makes HMC able to perform large moves while keeping high acceptance rates, thus lowering the correlation between samples, provided that an accurate symplectic integrator is in use <cit.>. On the other hand, known drawbacks of HMC are the computational cost deriving from the evaluation of gradients and the strong dependence of the performance on the choice of the parameters in the algorithm. Many variants of HMC have been proposed in the literature during the last decades (see <cit.> for an advanced list of HMC methods in computational statistics and physical sciences). Numerical integration of the Hamiltonian equations of motion is crucial for HMC, since its accuracy and efficiency strongly affect the overall performance of the method. 
Velocity Verlet <cit.> is currently the method of choice owing to its simplicity, optimal stability properties and computational efficiency. Recently proposed multi-stage splitting integrators have shown promising performance in HMC for statistical and molecular simulation applications <cit.>. Such integrators are as easy to implement as Verlet schemes due to their kick-drift structure. However, they possess shorter stability intervals than corresponding multi-stage Verlet algorithms <cit.>. The Adaptive Integration Approach (AIA) <cit.> for HMC and its extensions MAIA and e-MAIA for Modified HMC (MHMC) methods <cit.> offer an intelligent (system- and step size-specific) choice of the most appropriate 2-stage integrator in terms of the best conservation of energy for harmonic forces. They have been formulated and implemented for molecular simulation applications and demonstrated an improvement in accuracy, stability and sampling efficiency compared with the fixed-parameter 1-, 2-stage numerical integrators (including the standard Verlet) when used in simulations of complex physical systems <cit.>. In this paper, we propose an Adaptive Integration Approach for statistics, that we call s-AIA, which extends the ideas of the original AIA to Bayesian statistical inference applications. The method employs a theoretical analysis of the multivariate Gaussian model and simulation data obtained at the HMC burn-in stage to identify a system-specific dimensional stability interval and assigns the most appropriate 2-, 3-stage integrator at any user-chosen simulation step size within that interval. To construct s-AIA, we address the difficulties encountered by the extension to the computational statistics scenario of the assumptions typical of molecular simulation applications made in AIA — such as dominating harmonic forces, known angular frequencies and resonance conditions, nonrandomized integration step size. The proposed algorithm does not add computational overheads during a simulation. We have implemented s-AIA in the in-house software (Hamiltonians in Computational Statistics) <cit.> and tested its efficiency and impact on the HMC accuracy, sampling performance and convergence in comparison with known fixed-parameter multi-stage splitting integrators for HMC-based methods (including Velocity Verlet). The numerical experiments have been performed on representative benchmarks and datasets of popular statistical models. The paper is structured as follows. We briefly review HMC in Section <ref> and multi-stage integrators in Section <ref>. The s-AIA algorithm and its implementation are presented in Section <ref>. Validation and testing of the new algorithm are described and discussed in Section <ref>. Our conclusions are summarized in Section <ref>. § HAMILTONIAN MONTE CARLO Hamiltonian Monte Carlo (HMC) is a Markov Chain Monte Carlo (MCMC) method for obtaining correlated samples θ_i ∼π(θ) from a target probability distribution π () in ℝ^D by generating a Markov chain in the joint phase space ℝ^D ×ℝ^D with invariant distribution π(θ, p) = π(θ) p(p) ∝exp (- H(θ, p)). Here H(, ) = K()+U() = 1/2^T M^-1 + U() is the Hamiltonian function, where the potential energy U() is related to the target π(θ) by means of U() = - logπ() + const , and the kinetic energy K() is explicited through an auxiliary momentum variable drawn from the normal distribution 𝒩(0, M), with M being a symmetric positive definite matrix (the mass matrix). 
HMC alternates momentum update steps, where a sample of is drawn from the distribution 𝒩 (0, M), with steps where both position and momenta are updated through the numerical integration of the Hamiltonian dynamics d θ/dt = M^-1p, d p/dt = - ∇_θ U(θ). The latter is performed using a symplectic and reversible integrator. If Ψ_h is the map in phase space that advances the numerical solution over a step size of length h, symplecticness means <cit.> Ψ'_h (z)^T J^-1Ψ'_h (z) = J^-1, ∀z∈Ω, ∀ h >0, where Ψ'_h is the Jacobian matrix of Ψ_h and J = [ 0 I; -I 0 ], with I the D× D unit matrix. Reversibility demands Ψ_h ∘ℱ = ( Ψ_h ∘ℱ)^-1, where ℱ (, ) = (, - ) is the momentum flip map. Given the state of the Markov chain (_i, _i) at the beginning of the i-th iteration, a proposal (', ') is obtained by integrating the Hamiltonian equations of motion for L steps using Ψ_h, i.e. (', ') = Ψ_h ∘ ... ∘Ψ_h_L times (_i, _i). Due to numerical integration errors, the Hamiltonian energy and thus the target density (<ref>) are not exactly preserved. The invariance of the target density is ensured through a Metropolis test with acceptance probability α = min{1, exp (- Δ H) }, where Δ H = H (', ') - H (_i, _i) is the energy error resulting from the numerical integration. In case of acceptance, ' is the starting point for the following iteration, i.e. _i+1 = ', whereas in case of rejection, the initial proposal _i is kept for the following iteration, i.e. _i+1 = _i. In both cases, the momentum is discarded and a new momentum _i+1 is drawn from its Gaussian distribution. §.§ Splitting The integration of the Hamiltonian dynamics in HMC is always performed by resorting to the idea of splitting. The split systems (A) d /dt = ∇_p K() = M^-1, d /dt = - ∇_θ K() = 0, (B) d /dt = ∇_p U() = 0, d /dt = - ∇_θ U(), have solution flows φ^A_t and φ^B_t explicitly given by φ^A_t (, ) = ( + t M^-1, ), φ^B_t (, ) = (, - t ∇_θ U()); these flows are often called a position drift and a momentum kick respectively. The integration of the target dynamics (<ref>) is carried out by combining drifts and kicks. The best known algorithm is the Velocity Verlet integrator <cit.> ← - h/2∇_θ U(), ← + h M^-1, ← - h/2∇_θ U(). With the notation in (<ref>), the algorithm may be written as Ψ_h^VV = φ^B_h/2∘φ^A_h ∘φ^B_h/2. As before, h is the length of an integration step, i.e. step size. By switching the roles of A and B in (<ref>) one obtains the Position Verlet algorithm <cit.>, whose performance is often worse than that of the velocity scheme <cit.>. More general splitting integration schemes <cit.> that alternate position drifts and momentum kicks will be reviewed in Section <ref>. §.§ Advantages and limitations of HMC By suitably choosing the time span Lh of the numerical integration (cf. (<ref>)), HMC offers the possibility of generating proposals that are sufficiently far from the current state of the Markov chain. At the same time, for fixed Lh, one may always reduce h and increase L to achieve a more accurate numerical integration and therefore an arbitrarily high acceptance rate. Thus HMC is in principle able to generate samples with low correlation and to explore rapidly the state space, even if the dimensionality is high, avoiding in this way the random walk behaviour of simpler MCMC algorithms. Unfortunately, it is well known that in practice the performance of HMC very much depends on the choice of the parameters h and L. 
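As an illustration of the scheme described above (momentum refresh, Velocity Verlet integration of the Hamiltonian dynamics, Metropolis test), a minimal HMC transition can be sketched as follows. The sketch assumes M = I and user-supplied U and ∇U; it is didactic code, not the implementation used later in the paper.

```python
import numpy as np

def velocity_verlet(theta, p, grad_U, h, L):
    """L steps of the kick-drift-kick scheme phi^B_{h/2} phi^A_h phi^B_{h/2} (M = I)."""
    p = p - 0.5 * h * grad_U(theta)
    for i in range(L):
        theta = theta + h * p
        p = p - (h if i < L - 1 else 0.5 * h) * grad_U(theta)
    return theta, p

def hmc_step(theta, U, grad_U, h, L, rng):
    """One HMC transition: propose with Verlet, accept with probability min(1, exp(-dH))."""
    p = rng.standard_normal(theta.shape)              # momentum refresh, p ~ N(0, I)
    H_old = U(theta) + 0.5 * p @ p
    theta_prop, p_prop = velocity_verlet(theta, p, grad_U, h, L)
    H_new = U(theta_prop) + 0.5 * p_prop @ p_prop
    if rng.random() < np.exp(min(0.0, H_old - H_new)):
        return theta_prop, True                       # the momentum is discarded either way
    return theta, False
```

A sampling loop would simply call, e.g., theta, accepted = hmc_step(theta, U, grad_U, h, L, np.random.default_rng(0)) at each iteration and record theta.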
Since most of the computational effort in HMC goes in the (often extremely costly) evaluations of the gradient ∇ U() required by the integrator, and the acceptance rate depends on the numerical integration error, the choice of the integration method is key to the efficiency of the HMC algorithm. § MULTI-STAGE INTEGRATORS AND ADAPTIVE APPROACH In this Section, we review multi-stage palindromic splitting integrators, which have demonstrated promising performance in HMC for both statistical and molecular simulation applications <cit.>. §.§ k-stage palindromic splitting integrators The family of palindromic k-stage splitting integrators with k-1 free parameters is defined as <cit.> Ψ_h = φ^B_b_1 h∘φ^A_a_1 h∘…∘φ^A_a_k' h∘φ^B_b_k'+1 h∘φ^A_a_k' h∘…∘φ^A_a_1 h∘φ^B_b_1 h, b_i, a_j ∈ℝ^+, if k = 2 k' is even, and Ψ_h = φ^B_b_1 h∘φ^A_a_1 h∘…∘φ^B_b_k' h∘φ^A_a_k' h∘φ^B_b_k' h∘…φ^A_a_1 h∘φ^B_b_1 h, b_i, a_j ∈ℝ^+, if k = 2k'-1 is odd. The coefficients b_i, a_j in (<ref>)-(<ref>) have to satisfy the conditions 2 ∑_i=1^k' b_i + b_k'+1 = 2 ∑_j=1^k' a_j = 1, and 2 ∑_i=1^k' b_i = 2 ∑_j=1^k'-1 a_j + a_k'= 1, respectively. The integrators (<ref>) and (<ref>) are symplectic as compositions of flows of Hamiltonian systems, and reversible, due to their palindromic structure. The number of stages k is the number of times the algorithm performs an evaluation of gradients ∇_θ U() per step size. Though φ^B appears k+1 times in (<ref>) and (<ref>), the number of gradient evaluations performed is still k since the (last) one in the leftmost φ^B_b_1 h at the current step is reused in the rightmost φ^B_b_1 h at the following step. Multi-stage splitting integrators alternate position drifts and momentum kicks of different lengths, which makes all of them, including the most common and popular 1-stage Verlet (<ref>), easy to implement. As pointed out above, most of the computational effort in HMC is due to evaluations of gradients. Splitting integrators with different numbers of stages do not perform the same number of gradient evaluations per integration step and therefore using those integrators with a common value of L and h does not result in fair comparisons (in terms of computational cost). If L̂ is a number of gradient evaluations/time steps suitable for the 1-stage Verlet algorithm with step size h, k-stage integrators will here be used by taking L=L̂/k steps of length k h. In this way all algorithms integrate the Hamiltonian dynamics over a time interval of the same length L̂h and use the same number of gradient evaluations. §.§ Examples of 2- and 3-stage integrators We plan to derive adaptive 2- and 3-stage integrators and we first review the examples in the literature of 2- and 3-stage integrators. The one-parameter family of 2-stage integrators is described as (see (<ref>)): Ψ^2stage_h = φ^B_b h∘φ^A_a h∘φ^B_b_1 h∘φ^A_a h∘φ^B_b h, with a = 1/2 and b_1 = 1-2b. Thus the integrators can be written as Ψ^2stage_h = φ^B_b h∘φ^A_h/2∘φ^B_( 1 - 2b ) h∘φ^A_h/2∘φ^B_b h, with b ∈ (0, 0.5) if we wish b>0 and b_1>0. Similarly, (<ref>) with k' = 2, 2a + a_1 = 1 and 2b + 2b_1 = 1 yields the two-parameter family of 3-stage integrators Ψ^3stage_h = φ^B_b h∘φ^A_a h∘φ^B_( 1/2 - b ) h∘φ^A_( 1 - 2a ) h∘φ^B_( 1/2 - b ) h∘φ^A_a h∘φ^B_b h, with a, b ∈ (0, 0.5). Several 2- and 3-stage integrators with suitably chosen parameters for achieving high performance in HMC have been proposed in the literature <cit.>. Some of them are presented below and summarized in Table <ref>. 
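The palindromic compositions above can be coded generically from their coefficient sequences. The sketch below (again with M = I and a user-supplied ∇U) is meant only to illustrate the kick–drift structure and the bookkeeping of the coefficients; it is not an excerpt of any package.

```python
import numpy as np

def palindromic_step(theta, p, grad_U, h, b_coeffs, a_coeffs):
    """One step phi^B_{b_1 h} phi^A_{a_1 h} ... phi^A_{a_1 h} phi^B_{b_1 h}.

    The full (already mirrored) sequences are passed in, e.g.
      2-stage: b_coeffs = [b, 1 - 2*b, b],            a_coeffs = [0.5, 0.5]
      3-stage: b_coeffs = [b, 0.5 - b, 0.5 - b, b],   a_coeffs = [a, 1 - 2*a, a]
    so that len(b_coeffs) = len(a_coeffs) + 1 and each list sums to 1.
    """
    p = p - b_coeffs[0] * h * grad_U(theta)           # first momentum kick
    for a_i, b_i in zip(a_coeffs, b_coeffs[1:]):
        theta = theta + a_i * h * p                   # position drift
        p = p - b_i * h * grad_U(theta)               # momentum kick
    return theta, p
```

To keep comparisons between integrators fair, a k-stage scheme would be run for L̂/k such steps of length k h, matching the gradient budget of L̂ Verlet steps of length h; note that this plain sketch evaluates the gradient k+1 times per step, whereas a cached implementation reuses the last kick of one step as the first kick of the next, giving k evaluations per step as described above.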
In the cited literature, two alternative types of analysis have been carried out in order to choose the integration parameters a and/or b in the context of HMC. In <cit.> or <cit.>, the integration coefficients are determined by minimizing the coefficients in the Taylor expansion of the Hamiltonian truncation error <cit.> Δ H = H(, ) - H(Ψ_h (, )). On the other hand, the paper <cit.> does not look at the behaviour of the Hamiltonian truncation error as h→ 0, as typically integrators are not operated with small values of h. Their analysis is rather based on a (tight) bound 𝔼[Δ H] ≤ρ(h, ), for the expected energy error for given h, that may be rigorously proved for Gaussian targets (and has been experimentally shown to be useful for all targets). Here ρ is a function associated with the integrator and represents the coefficients that identify the integrator within a family. For 2-stage palindromic splitting schemes <cit.> ρ_2 (h, b) = h^4 ( 2b^2 ( 1/2-b ) h^2 + 4b^2 - 6b + 1 )^2/8 (2 - b h^2 ) (2- ( 1/2 - b ) h^2 ) (1 - b ( 1/2 - b ) h^2 ). For 3-stage integrators the attention may be restricted to pairs (b, a) that satisfy <cit.> 6 a b - 2 a - b + 1/2 = 0; when this condition is not fulfilled the integrator has poor stability properties. Under this restriction (see <ref>) ρ_3 (h, b) = 0.95h^4 ( -3 b^4 + 8 b^3 -19/4 b^2 + b + b^2 h^2 ( b^3 - 5/4 b^2 + b/2 - 1/16 ) - 1/16 )^2/2 ( 3 b - b h^2 ( b - 1/4 ) - 1 ) (1 - 3 b - b h^2 (b - 1/2 )^2 ) ( -9 b^2 + 6 b - h^2 ( b^3 - 5/4 b^2 + b/2 - 1/16 ) - 1 ). The following schemes have been considered in the literature. * 2-stage Velocity Verlet (VV2). This is the integrator with the longest stability interval (0, 4) among 2-stage splitting schemes and corresponds to b = 1/4 in (<ref>). To perform one step of length h with this algorithm, one just performs two steps of length h/2 of standard Velocity Verlet. It is important to emphasize that it follows that when below we compare experimentally VV2 with alternative integrators, one is really comparing the standard Verlet algorithm with such alternative integrators, simply adjusting the step lengths and number of steps per integration leg so as to have a fair comparison. * 2-stage BCSS (BCSS2). This scheme was derived in <cit.> to minimize the maximum of ρ_2 (h, b) in (<ref>) as h ranges over the interval 0<h<2 (VV2 is often operated with h close to 2), i.e. b = _b ∈(0, 0.5 ) max_0 < h < 2ρ_2 (h, b) = 0.211781. It achieves its best performance when h is near the center of the stability interval <cit.>. * 2-stage Minimum Error (ME2). The coefficient of this integrator (b = 0.193183) was obtained by McLachlan in <cit.> through the minimization of the Hamiltonian truncation error (<ref>). For quadratic problems, see also <cit.>. * 3-stage Velocity Verlet (VV3). Similarly to VV2, the 3-stage Velocity Verlet is a 3-stage integrator with the longest stability interval (0, 6) among 3-stage splitting integrators. One step of this algorithm of length h is just the concatenation of three steps of length h/3 of the standard Velocity Verlet integrator. As we did for VV2, we emphasize that when comparing below VV3 and alternative integrators, one is really comparing the standard VV algorithms. * 3-stage BCSS (BCSS3). The parameter values are found by imposing the relation (<ref>) and b = _b ∈(0, 0.5 ) max_0 < h < 3ρ_3 (h, b), with ρ_3 in (<ref>). * 3-stage Minimum Error (ME3). ME3 was derived in <cit.> by requiring (<ref>) and a Hamiltonian truncation error of size 𝒪 (h^6). 
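To illustrate how the BCSS2 coefficient arises, the sketch below evaluates ρ_2(h, b) of the expression above and approximates the minimizer of max_{0<h<2} ρ_2(h, b) with a plain grid search (a formal optimizer could be substituted). The quoted value is b = 0.211781, and the grid search recovers it to roughly three decimals; the expression is only evaluated on 0 < h ≤ 2 and b ∈ [0.15, 0.25], where its denominator is positive.

```python
import numpy as np

def rho2(h, b):
    """Bound rho_2(h, b) on the expected energy error for 2-stage schemes."""
    num = h**4 * (2 * b**2 * (0.5 - b) * h**2 + 4 * b**2 - 6 * b + 1) ** 2
    den = 8 * (2 - b * h**2) * (2 - (0.5 - b) * h**2) * (1 - b * (0.5 - b) * h**2)
    return num / den

h_grid = np.linspace(1e-3, 2.0, 2000)
b_grid = np.linspace(0.15, 0.25, 2001)
worst_case = np.array([rho2(h_grid, b).max() for b in b_grid])
b_bcss2 = b_grid[worst_case.argmin()]
print(b_bcss2)   # approximately 0.2118, matching the BCSS2 coefficient quoted above
```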
The performance of the different integrators within HMC very much depends on the simulation parameters, in particular on the choice of step size. Minimum Error schemes achieve their best performance for small step size, since they are obtained by studying the limit of vanishing step size. However, they have small stability limits and may perform badly for bigger integration step sizes. Velocity Verlet schemes preserve stability for values of the step size larger than those that may be used in other integrators, but may not be competitive in situations where the step size is not chosen on grounds of stability (for instance in problems of large dimensionality where accuracy demands that the step size be small to ensure non-negligible acceptance rates). BCSS integrators were designed for optimizing performance for values of the step size not close to 0 and not close to the maximum stability allowed for Verlet. §.§ Adaptive Integration Approach (AIA) Adaptive 2-stage integration schemes were proposed by Fernández-Pendás et al. in <cit.> for molecular simulation applications. Its extensions, called MAIA and e-MAIA, for Modified HMC (MHMC) methods, such as Generalized Shadow HMC (GSHMC) methods <cit.>, were introduced by Akhmatskaya et al. in <cit.>. Given a simulation problem, in AIA, the user chooses, according to their computational budget, the value of h to be used (i.e. h is chosen to be smaller if more time and resources are available for the simulation). After that, the AIA algorithm itself finds the most appropriate integration scheme within the family of 2-stage integrators (<ref>). If the time-step is very small for the problem at hand, AIA will automatically pick up a parameter value close to Minimum Error; if the time-step is very large, AIA will automatically choose an integrator close to the 2-stage Velocity Verlet. For intermediate values of h, AIA will choose an intermediate parameter value (near the BCSS integrator). We emphasize that in AIA, the parameter value used changes with h and with the problem being tackled. Given a simulation problem, the Adaptive Integration Approach (AIA) offers, for any integration step size chosen within an appropriate stability interval, an intelligent choice of the most appropriate integration scheme (in terms of the best conservation of energy for harmonic forces) within a family of 2-stage integrators. The original AIA algorithm is summarized in Algorithm <ref>. Our objective in this paper is to employ the ideas behind the 2-stage AIA approach for deriving multi-stage adaptive integration schemes specifically addressed to Bayesian inference applications. Taking into account the recent indications of the superiority of 3-stage integrators over 2-stage schemes in statistical applications <cit.>, we plan to develop not only 2-stage adaptive approaches as in AIA but also 3-stage adaptive algorithms. Extending AIA to computational statistics is not straightforward. The potential challenges are discussed in the next Section. § S-AIA §.§ Extension of AIA to computational statistics AIA makes use of specific properties and assumptions that hold for molecular simulation problems, e.g. the strongest forces in the target distribution are approximately harmonic (Gaussian) with known angular frequencies, there are well determined safety factors to avoid resonances, and the step size does not vary from one integration leg to the next. 
Unfortunately, those conditions are not usually met in Bayesian inference applications and therefore, when formulating s-AIA, the statistics version of AIA, the following issues have to be dealt with. * Harmonic forces. In contrast to molecular systems, they are not typically dominating in the Bayesian scenario. * Computation of frequencies. Even if the integrator could be chosen by examining only harmonic forces, the corresponding angular frequencies would not be known a priori in a Bayesian simulation. * Resonance conditions. Restrictions on the integration step size imposed by nonlinear stability are not known in the Bayesian case. * Choice of a step size. In statistics, the step size is usually randomized at the beginning of each integration leg and this would involve having to adjust at each step of the Markov chain the parameter values within the chosen family of integrators (see Step 5 in Algorithm <ref>). We address these issues separately. §.§.§ Pre-tabulation of the map h→ b_opt For each family of methods (2- or 3-stage), we tabulate once and for all the optimal integration coefficients b_opt^k, k = 2, 3, at small increments of h. In this way, the extra computational effort due to Step 5 in Algorithm <ref> can be avoided. We produced tables for k-stage s-AIA, k = 2, 3, using grids {h_i }_k, i = 1, ... , N_grid of the dimensionless stability interval (0, 2 k) (N_grid controls the accuracy of the estimated b_opt^k for a given h). Similarly to Algorithm <ref>, { b_opt_i^k }, i = 1, ... , N_grid, k = 2, 3, are found as b_opt_i^k = _b ∈( b_MEk, b_VVk) max_0 < h < h_iρ_k (h, b), h_i∈{h_i }_k, i = 1, ... , N_grid, k = 2, 3, where b_ME k (the optimal parameter for the k-stage integrator as h → 0) and b_VV k (the longest stability limit for the k-stage family) are the boundaries for b, and ρ_2 (h, b), ρ_3 (h,b) are given by (<ref>) and (<ref>) respectively. For 3-stage s-AIA, the second parameter a in (<ref>) is calculated according to (<ref>). Similarly to what happens in AIA, in s-AIA, one expects b_opt^k to be close to the MEk integrator coefficients for smaller values of h; to be close to b_BCSSk near h = k, and to increase up to b_VVk as h approaches 2k. Figure <ref> shows the ρ_2 (h, b) and ρ_3 (h, b) functions for the range of adaptive and fixed-parameter multi-stage integrators discussed in this work, whereas Figure <ref> depicts b^2_opt and b^3_opt as functions of dimensionless step size. §.§.§ Computation of frequencies The frequencies ω_j, j = 1, ... , D, of the system are calculated during the burn-in stage (a mandatory initial stage of an HMC simulation to reach its stationary regime) as ω_j = √(λ_j), j = 1, ... , D, where λ_j are the eigenvalues of the Hessian matrix of the potential function H_i, j = ∂^2 U()/∂θ_i ∂θ_j, i, j = 1, ... , D. §.§.§ Calculation of fitting factors Explicit integrators, such as the ones discussed in this study, may become unstable, and thus suffer from serious step size limitations when applied to nonlinear Hamiltonian systems <cit.>. To quantify the step size limitations imposed by nonlinear stability in the Verlet integrator, Schlick et al. <cit.> introduced so-called safety factors <cit.> for up to the 6th order resonances. This seemed to cover the worst scenarios in molecular simulations. We have already mentioned that AIA <cit.> makes use of a safety factor √(2) (cf. Algorithm <ref>), which avoids resonances up to 4-th order, while the MAIA algorithm for Modified HMC <cit.> utilizes √(3), that covers resonances up to 5-th order. 
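A direct (if naive) way to realize the pre-tabulation of the map h → b_opt for the 2-stage family is sketched below. Values of ρ_2 beyond a coefficient's stability range are treated as infinite, so that such coefficients are discarded by the min–max; the grid sizes are illustrative, and the boundaries (b_ME2, b_VV2) = (0.193183, 0.25) are those quoted earlier.

```python
import numpy as np

B_ME2, B_VV2 = 0.193183, 0.25   # admissible boundaries for the 2-stage coefficient

def rho2_guarded(h, b):
    """rho_2(h, b); returns +inf wherever the denominator is not positive (instability)."""
    h = np.asarray(h, dtype=float)
    num = h**4 * (2 * b**2 * (0.5 - b) * h**2 + 4 * b**2 - 6 * b + 1) ** 2
    den = 8 * (2 - b * h**2) * (2 - (0.5 - b) * h**2) * (1 - b * (0.5 - b) * h**2)
    out = np.full_like(h, np.inf)
    stable = den > 0
    out[stable] = num[stable] / den[stable]
    return out

def tabulate_b_opt_2stage(n_grid=100, n_b=200):
    """b_opt_i minimizing max_{0 < h < h_i} rho_2(h, b), on an interior grid of (0, 4)."""
    h_points = np.linspace(0.05, 3.95, n_grid)        # interior grid of (0, 2k), k = 2
    b_candidates = np.linspace(B_ME2, B_VV2, n_b)
    table = np.empty(n_grid)
    for i, h_i in enumerate(h_points):
        hs = np.linspace(1e-3, h_i, 400)
        worst = [rho2_guarded(hs, b).max() for b in b_candidates]
        table[i] = b_candidates[int(np.argmin(worst))]
    return h_points, table

# At run time, the coefficient for a dimensionless step size h is read off the
# table, e.g.  b_opt = table[np.searchsorted(h_points, h)]  (index clipped to the grid).
```

The 3-stage table is built analogously from ρ_3(h, b) on (0, 6), with the second coefficient a recovered from the relation between a and b stated above.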
In Bayesian inference applications, the number of multiple time scales and the level of non-linearity are in general hardly predictable, and should be treated for each problem separately. For our purposes, instead of a safety factor, we introduce what we call a fitting factor S_f, which not only plays the role of the safety factor but also results from fitting the proposed multivariate Gaussian model to the data generated during the burn-in stage. As in the case of a safety factor in <cit.>, we use a fitting factor for nondimensionalization of the step size. Thus, for a chosen step size Δ t, its nondimensional counterpart is found as h = S_f ω̃ Δ t. Here, S_f is the fitting factor determined below and ω̃ is the highest frequency of the system, obtained from the burn-in simulation. Our objective now is to express S_f in terms of the known properties of the simulated system. We choose to run a burn-in simulation using a Velocity Verlet algorithm and setting L = 1 and Δ t = Δ t_VV. The reason for that is the availability of a simple closed-form expression for the expected energy error 𝔼 [Δ H] of a univariate Gaussian target with such a choice of an integrator and L (see <ref> for details): 𝔼^1_VV [Δ H] = h_VV^6/32, with h_VV being a dimensionless counterpart of Δ t_VV, i.e. from (<ref>) h_VV = S_f ω Δ t_VV . For a D-dimensional multivariate Gaussian target, one can consider D dimensionless counterparts h_VV_j = S_f ω_j Δ t_VV, j = 1, ... , D , and find the expected energy error for a multivariate Gaussian model with the help of (<ref>) as 𝔼^D_VV [Δ H] = ∑_j=1^D h^6_VV_j/32. Combining (<ref>) and (<ref>), we find the fitting factor S_f = 1/Δ t_VV√(32 𝔼^D_VV [Δ H]/∑_j=1^D ω_j^6). Alternatively, the calculation of the frequencies may be avoided (and computational resources saved), if the multivariate Gaussian model is replaced with a univariate Gaussian model (as in <cit.>), which leads to S_f = 1/ω̃Δ t_VV√(32 𝔼^D_VV [Δ H]/D). Notice that, though ω̃ appears in (<ref>), one can compute S_f ω̃ = 1/Δ t_VV√(32 𝔼^D_VV [Δ H]/D), without needing frequencies and use it in (<ref>). From now on, in order to distinguish between the two approaches, we will denote the one in (<ref>) — which requires frequency calculation — by S_ω and the second one in (<ref>) — which does not — by S, i.e. S_ω = 1/Δ t_VV√(32 𝔼^D_VV [Δ H]/∑_j=1^D ω_j^6), S = 1/ω̃Δ t_VV√(32 𝔼^D_VV [Δ H]/D). As pointed out above, safety factors are meant to impose limitations on a system-specific stability interval (cf. (<ref>)). Thus, they should not be less than 1 and, as a consequence, we actually use 1S_ω = max(1, 1/Δ t_VV√(32 𝔼^D_VV [Δ H]/∑_j=1^D ω_j^6)), S = max(1, 1/ω̃Δ t_VV√(32 𝔼^D_VV [Δ H]/D)). The only unknown quantity in (<ref>) is 𝔼^D_VV [Δ H], which can be found by making use of the data collected during the burn-in stage. In fact, following the high-dimensional asymptotic formula for expected acceptance rate 𝔼 [α] <cit.> proven for Gaussian distributions in a general scenario <cit.>, i.e. 𝔼 [α] = 1 - 1/2√(π)√(𝔼^D [Δ H]), 𝔼^D [Δ H] → 0, D →∞, we get an expression for 𝔼^D [Δ H] 𝔼^D [Δ H] = 4 π( 1 - 𝔼 [α] )^2 . An estimation of 𝔼 [α] in a simulation is given by the acceptance rate AR, i.e. the ratio between the accepted N_acc and the total N number of proposals AR = N_acc/N. 
Combining (<ref>) with 𝔼 [α] = AR calculated during the burn-in stage, we compute 𝔼^D_VV [Δ H] as 𝔼^D_VV [Δ H] = 4 π( 1 - AR)^2, which gives an explicit expression for the fitting factors in (<ref>) 1S_ω = max(1, 2/Δ t_VV√(2 π (1 - AR)^2/∑_j=1^D ω_j^6)), S = max(1, 2/ω̃Δ t_VV√(2 π (1 - AR)^2/D)). Once the fitting factor is computed using (<ref>), a dimensionless counterpart of a given step size Δ t can be calculated either as h = 2 ω̃Δ t/Δ t_VV√(2 π (1 - AR)^2/∑_j=1^D ω_j^6) , or h = 2 Δ t/Δ t_VV√(2 π (1 - AR)^2/D). We remark that for systems with disperse distributions of frequencies, i.e. when the standard deviation of frequencies, σ, is big, it might be useful to apply a nondimensionalization of Δ t smoother than the proposed in (<ref>), namely h =2( ω̃ - σ) Δ t/Δ t_VV√(2 π (1 - AR)^2/∑_j=1^D ω_j^6). Otherwise, if σ < 1, (<ref>) is a better choice. In Section <ref> we will analyze different choices of scaling and provide practical recommendations. With (<ref>)-(<ref>) one has everything in place for finding the optimal integrator parameter b_opt^k (<ref>). To conclude this section, it is worth mentioning yet another useful output of the analysis. Let us recall that the dimensionless maximum stability limit of k-stage integrators is equal to 2 k, k = 1, 2, 3, ... <cit.>. Then, the stability interval can be expressed in terms of the chosen fitting factor S_f (S or S_ω in (<ref>)) as ( 0, 2 k/(S_f ω̃ )), k = 1, 2, 3, ..., or 0 < Δ t < SL = 2 k/S_f ω̃, k = 1, 2, 3, ... . Here SL is the stability limit. We remark that, with the nondimensionalization (<ref>), the estimation of the stability interval differs from (<ref>) and reads as 0 < Δ t < SL = 2 k/S_ω ( ω̃ - σ), k = 1, 2, 3, ... . In summary, we have proposed an approach for the prediction of a stability interval and an optimal multi-stage integrator for a given system. The step size can be freely chosen within the estimated stability interval. §.§ s-AIA algorithm Since the nondimensionalization method forms a key part of the s-AIA algorithm, it is important to give some insight into the options offered by (<ref>)-(<ref>). Obviously, the method (<ref>) is cheaper in terms of computational effort as it does not require the calculation of frequencies. In addition, (<ref>) is not affected by potential inaccuracies of the computed frequencies due, e.g., to insufficient sampling during the burn-in stage. On the other hand, taking into account the different frequencies (hence, the different time scales) of the system provides a more accurate estimation of the system-specific stability interval. Moreover, in the case of dominating anharmonic forces, the analysis based on the univariate harmonic oscillator model may lead to poor estimation of the fitting factor S and, as a result, of the dimensionless step size in (<ref>). Therefore, we expect S_ω in (<ref>) to provide a better approximation of the stability interval, and thus to lead to a better behavior of s-AIA. However, with the upper bound of the safety factor for the 1-stage Velocity Verlet suggested in <cit.>, it is possible to identify those computational models for which the less computationally demanding fitting factor S ensures a reliable stability limit estimation. In particular, S > 2 implies an anharmonic behavior of the underlying dynamics of the simulated model, and thus the need for a more accurate S_ω, together with (<ref>) or (<ref>) (depending on the distribution of ω_j), for a proper estimation of the stability limit. 
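Putting the burn-in quantities together, the fitting factors, the dimensionless step size and the estimated stability limit can be computed as in the sketch below, which implements the final expressions above literally (with the max(1, ·) guard). The frequencies are taken as ω_j = √(λ_j) from the eigenvalues of the Hessian of U; where exactly the Hessian is evaluated (e.g. at a representative burn-in sample) is an assumption of this sketch, and all names are illustrative.

```python
import numpy as np

def frequencies_from_hessian(hess):
    """omega_j = sqrt(lambda_j) from the Hessian of the potential U."""
    lam = np.linalg.eigvalsh(hess)
    return np.sqrt(np.clip(lam, 0.0, None))

def fitting_factors(AR, dt_vv, omegas):
    """S_omega (frequency-based) and S (univariate-model) fitting factors."""
    D = len(omegas)
    common = 2.0 * np.sqrt(2.0 * np.pi) * (1.0 - AR)          # 2 sqrt(2 pi (1 - AR)^2)
    S_omega = max(1.0, common / (dt_vv * np.sqrt(np.sum(omegas**6))))
    S_plain = max(1.0, common / (omegas.max() * dt_vv * np.sqrt(D)))
    return S_omega, S_plain

def dimensionless_step(dt, S_f, omega_max, sigma=0.0):
    """h = S_f (omega_max - sigma) dt; sigma > 0 gives the smoother scaling."""
    return S_f * (omega_max - sigma) * dt

def stability_limit(k, S_f, omega_max, sigma=0.0):
    """SL = 2k / (S_f (omega_max - sigma)) for a k-stage integrator."""
    return 2.0 * k / (S_f * (omega_max - sigma))
```

For instance, with AR and Δt_VV recorded at the burn-in stage and omegas obtained from the Hessian, S_omega, S = fitting_factors(AR, dt_vv, omegas) followed by stability_limit(3, S_omega, omegas.max()) gives an estimate of the 3-stage dimensional stability limit used below to build the step-size grid.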
On the contrary, if S ≤ 2, one expects S and (<ref>) to be able to provide a reliable approximation of the stability limit. Though, in contrast to (<ref>), the calculation of S in (<ref>) requires the knowledge of the highest frequency ω̃, it is still less computationally demanding than the S_ω approach since ω̃ can be computed avoiding calculations of Hessians <cit.>, which is the bulk of computational cost for the frequencies calculations. We remark that the option to avoid calculating frequencies and use (<ref>) straightaway is present in the s-AIA algorithm. The s-AIA algorithm is summarised in Figure <ref>. Given a model; a dataset; HMC parameters and settings for Tuning, Burn-in and Production stages; I_ω (see Figure <ref>) and an order k of s-AIA (k = 2 or 3), s-AIA algorithm works as shown in Figure <ref>. §.§ Implementation s-AIA has been implemented in the BCAM in-house software package (Hamiltonians in Computational Statistics) for statistical sampling of high dimensional and complex distributions and parameter estimation in Bayesian models using MCMC and HMC based methods. The package is written in and and is targeted to computers running UNIX certified operating systems. Specifically, the code for the computational simulation is written in , while the performance analysis and MCMC diagnostics are carried out in by means of scripts and tools compatible with the popular <cit.> toolkit. A detailed presentation and description of the package can be found in <cit.>. Thanks to the implementation of the novel and efficient algorithms for statistical sampling, ensures competitive performance with respect to already existing HMC software packages. Moreover, due to its structure, it allows the user to have flexibility in methodology development and testing as well as to control the code performance and optimization. The current version incorporates several sampling techniques: Random Walk Metropolis algorithm, Hamiltonian Monte Carlo (HMC), Generalized HMC (GHMC) <cit.>, Metropolis-Adjusted Langevin Algorithm (MALA) <cit.>, second-order Langevin Monte Carlo (L2MC), Generalized Shadow HMC (GSHMC) <cit.> and Mix & Match HMC (MMHMC) <cit.>. In addition, different models for Bayesian inference applications are available: Gaussian Distribution (GD) <cit.>, Bayesian Logistic Regression (BLR) <cit.>, Stochastic Volatility (SV) <cit.>, inverse magnetotelluric (MT) model <cit.>, SIR/SEIR-like models <cit.>. The package comprises the state-of-the-art and most popular numerical integration schemes for HMC based methods: s-AIA (2- and 3-stage), AIA, Velocity Verlet (1-, 2- and 3-stage), BCSS (2-, 3- and 4-stage), ME (2- and 3-stage). Moreover, it includes an efficient implementation of modified Hamiltonians, jointly with the corresponding fixed-parameter multi-stage integrators (m-BCSS2, m-BCSS3, m-BCSS4, m-ME2, m-ME3, m-ME4 —full description and derivation in <cit.>), and adaptive ones (MAIA, e-MAIA <cit.>). Finally, various randomization schemes for the HMC parameters can be found in the package. § NUMERICAL RESULTS AND DISCUSSION In order to evaluate the efficiency of the proposed s-AIA algorithms, we compared them in accuracy and performance with the integrators previously introduced for HMC-based sampling methods (Table <ref>). We examined 2- and 3-stage s-AIA on four benchmark models presented. 
§.§ Benchmarks * Gaussian 1, Gaussian 2: two D-dimensional multivariate Gaussian models 𝒩 (0, Σ), D = 1000, with precision matrix Σ^-1 generated from a Wishart distribution with D degrees of freedom and the D-dimensional identity scale matrix <cit.> (Gaussian 1) and with diagonal precision matrix Σ^-1 made by D_1 = 990 elements taken from 𝒩 (1000, 100) and D_2 = 10 from 𝒩 (4000, 1600) (Gaussian 2). * German, Musk: two real datasets for a Bayesian Logistic Regression model <cit.> available from the University of California Irvine Machine Learning Repository <cit.>, with dimensions D = 25 (German), 167 (Musk) and K = 1000 (German), 476 (Musk) observations. The frequency distributions of the selected benchmarks are plotted in Figure <ref>. §.§ Metrics For HMC performance evaluation we monitored the following properties: * Acceptance rate. The acceptance rate (AR) is the ratio between the accepted and the total N number of proposals as in (<ref>). * Effective Sample Size. The Effective Sample Size (ESS) is the number of effectively uncorrelated samples out of N collected samples of a Markov chain. We calculated it, as proposed in <cit.>, through the effectiveSize function of the package of <cit.>. * Monte Carlo Standard Error. The Monte Carlo Standard Error (MCSE) quantifies the estimation noise caused by Monte Carlo sampling methods. It indicates the estimated Standard Error of the sample mean μ̂ = 1/N∑_i=1^N _i in a Markov chain <cit.>, and is calculated by substituting the sample size N in the Standard Error formula SE = √(σ̂^2/N), with the ESS, i.e. MCSE = √(σ̂^2/ESS). In (<ref>) and (<ref>), σ̂^2 is the sample variance. * Potential Scale Reduction Factor. The Potential Scale Reduction Factor (PSRF) monitors the convergence of a Markov chain by comparing it with other randomly initialized chains <cit.>. We calculated it as explained in <cit.> (Sections 1.2-1.3). We took minESS and min(MCSE)^-1 normalized with respect to the theoretical average number of gradient evaluations, that is k L̅ (L̅ is the theoretical average of number of integration steps, k is the number of stages of an integrator in use). Evaluation of gradients constitutes the bulk of the computational effort in HMC simulations and the chosen normalization provides leads to fair comparison between integrators with different number of stages. Of course, larger values of minESS and min(MCSE)^-1 imply better sampling performance. Finally, we monitored maxPSRF to examine the convergence of tests and used a very conservative threshold, PSRF < 1.01, as suggested in <cit.>, for all benchmarks but Musk, for which the threshold was relaxed to 1.1 <cit.>. §.§ Simulation setup The proposed k-stage s-AIA algorithms, k = 2, 3, were tested for a range of step sizes { k Δ t_i } within the system-specific dimensional stability interval ( 0, k Δ t_l ). Such an interval is found through the dimensionalization of the theoretically predicted nondimensional stability limit for the k-stage Velocity Verlet using the fitting factor (<ref>) and a method chosen among (<ref>), (<ref>), adjusted to a heuristic randomization method and aiming to minimize the effect of inaccuracies in the prediction due to the approximated nature of the proposed analysis. The grids of step sizes were obtained by dividing the stability interval into 20 equidistant parts k Δ t_i, i = 1, ... , 20, with k Δ t_1 = k Δ t_l/20, k Δ t_2 = k Δ t_1 + k Δ t_l/20, ... , k Δ t_20 = k Δ t_l. 
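The performance metrics listed above can be computed, for instance, as in the following sketch. The ESS estimator shown here is a simple autocorrelation-based stand-in (initial positive sequence) for the effectiveSize routine actually used in the paper, and the normalization by k L̅ gradient evaluations follows the convention described above.

```python
import numpy as np

def ess(chain):
    """Effective sample size of a 1-D chain via the initial positive sequence."""
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    tau = 1.0
    for t in range(1, n):
        if rho[t] <= 0:
            break
        tau += 2.0 * rho[t]
    return n / tau

def normalized_metrics(chains, k_stages, L_bar):
    """chains: array (n_samples, D). Returns minESS and min(MCSE)^-1 per gradient evaluation."""
    ess_d = np.array([ess(chains[:, j]) for j in range(chains.shape[1])])
    mcse_d = np.sqrt(chains.var(axis=0, ddof=1) / ess_d)      # MCSE = sqrt(var / ESS)
    grad_evals = k_stages * L_bar
    return ess_d.min() / grad_evals, (1.0 / mcse_d.max()) / grad_evals
```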
For each iteration of the HMC simulation, a step size was drawn uniformly randomly from (k Δ t_i-1, k Δ t_i], i = 1, ... , 20 (k Δ t_0 = 0). The number of integration steps per iteration, L, was drawn randomly uniformly at each iteration from {1, ... , 2 L̅ - 1 }, with L̅ such that L̅ h = τ D, where D is the problem dimension and τ is a benchmark-specific constant, found empirically to maximize performance near the center of the stability interval h = k. Such a setting provides a fair comparison between various multi-stage integrators by fixing the average number of gradients evaluations performed within each tested integrator. We remark that optimal choices of HMC simulation parameters, such as step sizes, numbers of integration steps and randomization intervals are beyond the scope of this study and will be discussed in detail elsewhere. Each simulation was repeated 10 times and the results reported in the paper were obtained by averaging over those multiple runs to reduce statistical errors. The simulation settings are detailed in Table <ref>. §.§ Results and discussion First, we tested 2- and 3-stage s-AIA integrators using the fitting factor approach S_ω (<ref>) and its corresponding nondimensionalization methods (<ref>) or (<ref>), selected according to the distribution of ω_j (Table <ref>). Figures <ref>–<ref> show the metrics collected for the Gaussian 1 and the German BLR benchmarks. One can appreciate the superiority of 2- and 3-stage s-AIA in terms of acceptance rate and sampling performance when compared with fixed-parameter multi-stage schemes of the same number of stages. Recall that, as explained before, the standard Verlet typically used in HMC is included in the family of multi-stage schemes. In particular, s-AIA integrators reach the best possible performance in their groups, i.e. 2- and 3-stage groups respectively, almost for each step size in the stability interval. This means that the adaptation of the integrator coefficient b^k_opt with respect to the randomized step size did enhance the accuracy and sampling of HMC. Specifically, the highest performance was reached around the center of the stability interval, in good agreement with the recommendations in <cit.>. As expected, HMC combined with 3-stage s-AIA outperformed HMC with 2-stage s-AIA in sampling efficiency. Moreover, the maxPSRF plot demonstrates that 3-stage s-AIA was the last integrator to lose convergence. In particular, for German BLR (Figure <ref>), s-AIA ensured convergence over the entire range of step sizes, which suggests that the stability limit had been estimated accurately, i.e. the chosen fitting factor approach worked properly. Similar trends, though less pronounced, can be observed for the Gaussian 2 benchmark in Figure <ref>. Again, 2- and 3-stage s-AIA exhibited the best possible performance (with the clear superiority of 3-stage s-AIA) for most step sizes and turned out to be the last integrators to lose convergence in their groups. In contrast, the same fitting factor approach S_ω applied to the Musk BLR benchmark did not show the level of accuracy observed for other benchmarks. In Figure <ref>, one can admit the poor performance achieved for almost all integrators in the second half of the stability interval, i.e the stability limit was overestimated. However, 3-stage s-AIA reached the best values in terms of minESS and min(MCSE)^-1, again around the center of the stability interval. 
Further analysis of the simulated frequencies and forces of the benchmarks revealed (see Figure <ref>) the anharmonic behavior of the Musk system, which, along with the fitting factor S ≈ 2.93 > 2 (Table <ref>), explains why the harmonic analysis presented in Section <ref> misestimates the stability limit in this case. Next, we tested 2- and 3-stage s-AIA integrators using the fitting factor approach S (<ref>) and its corresponding nondimensionalization method (<ref>) (see Figures <ref>–<ref>). As expected, the more accurate S_ω fitting factor and its nondimensionalization methods (<ref>), (<ref>) led to overall better performance than s-AIA with S and (<ref>). However, for models with S < 2 (cf. Figures <ref>, <ref>, <ref>), both fitting approaches exhibited similar trends. On the other hand, for the Musk BLR benchmark, i.e. when S > 2, 2- and 3-stage s-AIA benefited from the more accurate S_ω fitting factor approach, reaching a clearly better estimation of the stability limit (cf. Figure <ref>). Finally, we review the behavior of the other tested multi-stage integrators. First, we note the superiority of 3-stage integrators over their 2-stage counterparts. For any benchmark and fitting factor approach, the 3-stage integrators performed on average better at the same computational cost, as previously suggested in <cit.>. In addition, we highlight that the other integration schemes tested showed a strong dependence on the model in use. In particular, VV performed poorly for the Gaussian benchmarks (Figures <ref>, <ref>) but demonstrated solid performance for the BLR models, especially for larger step sizes (Figures <ref>, <ref>). Similarly to VV, AIA turned out to be one of the worst integrators for the Gaussian benchmarks (Figures <ref>, <ref>), but achieved performance similar to 2-stage s-AIA for the BLR models (Figures <ref>, <ref>). By contrast, the BCSS and ME integrators performed similarly to s-AIA for Gaussian 2 and Musk (Figures <ref>, <ref>), whereas they lost performance for Gaussian 1 and BLR German (Figures <ref>, <ref>). In conclusion, we observed that the s-AIA algorithms enhanced the performance of HMC whenever the stability interval length was estimated accurately. When that is the case, s-AIA demonstrates the best performance around the center of the stability interval, which, together with (<ref>)-(<ref>), gives a helpful suggestion for the choice of step size in HMC simulations. Moreover, the more accurate fitting factor approach S_ω (<ref>) with (<ref>) or (<ref>) provided a better approximation of the stability limit, which resulted in higher accuracy and greater performance of the adaptive integrators, mostly when applied to systems with prevailing anharmonic forces, i.e. when S > 2. § CONCLUSION We have presented a novel adaptive multi-stage integration approach for enhancing the accuracy and sampling efficiency of HMC-based methods in Bayesian inference applications. The proposed methodology, which we call s-AIA, provides, for any choice of step size within the stability interval, a system-specific palindromic 2- or 3-stage splitting integrator which ensures the best energy conservation for harmonic forces within its family. Moreover, we offered a solution for detecting a system-specific dimensional stability interval using the simulation data generated at the HMC burn-in stage.
In particular, we introduced three optional scaling/nondimensionalization approaches for estimating the stability limit with different level of accuracy and computational effort. s-AIA was implemented (without introducing computational overheads in simulations) in the in-house software package (Hamiltonians in Computational Statistics) <cit.> and tested against the popular numerical integrators (Verlet <cit.>, BCSS <cit.> and Minimum Energy <cit.>) on the range of benchmark models. We found that the adaptivity helped to reach the best possible performance within the families of 2-, 3-stage splitting integration schemes. We emphasize that standard Velocity Verlet, the HMC integrator of choice, is a member of those families. If the stability limit was estimated accurately, s-AIA integrators reached the best performance in their groups, i.e. 2- and 3-stage groups, almost for each step size in the stability interval. Also, using more stages enhanced the sampling performance, stability and conservation of the energy of the harmonic forces with the same computational effort. We have demonstrated that the more accurate fitting factor approach S_ω (<ref>) led to an overall better performance in HMC simulations than its less computationally expensive counterpart S. However, the latter was able to reach comparable results when lying below the upper threshold S < 2 <cit.>. In that way, computational time and resources may be saved by avoiding the computation of angular frequencies. On the other hand, for more complex distributions, e.g. with dominating low-frequencies (like the Musk BLR benchmark model <cit.>), we found that a proper analysis of the underlying dynamics of the simulated system might assist in the choice of a suitable system-specific fitting factor, the randomization interval and the number of HMC iterations required for a chain to converge. We remark that even in the case of a rough estimation of the stability limit (like in Musk BLR), HMC with multi-stage adaptive splitting schemes achieves top performance in comparison with the fixed-parameter schemes, though the exact location of the optimal step size is harder to predict in this case. In an upcoming study, we will show how the proposed methodology can be adjusted for refining optimal parameters of HMC-based simulations. § DERIVATION OF IN (<REF>) Consider the harmonic oscillator with Hamiltonian H = 1/2 (p^2 + θ^2), θ,p ∈ℝ, and equations of motions d θ/dt = p, d p/dt = - θ. Given a k-stage palindromic splitting integrator Ψ_h (h is the integration step size), it acts on a configuration (θ_i, p_i) at the i-th iteration as Ψ_h ( [ q_i; p_i ]) = ( [ q_i+1; p_i+1 ]) = ( A^_h B^_h C^_h D^_h ) ( [ q_i; p_i ]), for suitable method-dependent coefficients A^_h, B^_h, C^_h, D^_h (= {b_i, a_j } is the set of k-1 integration coefficients). In <cit.>, a formula for ρ (h, ) is provided: ρ (h, ) = (B^_h + C^_h)^2/2(1- A^_h^2) . For a 3-stage palindromic splitting integrator (<ref>), the integrator coefficients are (= {b, a }) A^_h = 1 - h^2/2 + a (1/2 - b) (1/2 - a + b) h^4 - 2 a^2 b (1/2 - a) (1/2 - b)^2 h^6, B^_h = h - 2 a (1 - a) (1/2 - b) h ^3 + 2 a^2 (1/2 - a) (1/2 - b)^2 h^5, C^_h = - h + (2 a b (1 - b) - a/2 + 1/4) h^3 + + 2 a b (1/2 - b) (a (1 - b) - 1/2) h^5 + 2 a^2 b^2 (1/2 - a) (1/2 - b)^2 h^7. 
Finally, for a, b in (<ref>) and A^_h, B^_h and C^_h in (<ref>)-(<ref>)-(<ref>), ρ (h, ) in (<ref>) becomes ρ_3 (h, b) = 1h^4 (-3 b^4 + 8 b^3 -19/4 b^2 + b + b^2 h^2 (b^3 - 5/4 b^2 + b/2 - 1/16 ) - 1/16 )^2/2 (3 b - b h^2 (b - 1/4 ) - 1 ) (1 - 3 b - b h^2 (b - 1/2 )^2 ) ( -9 b^2 + 6 b - h^2 ( b^3 - 5/4 b^2 + b/2 - 1/16 ) - 1 ). § DERIVATION OF IN (<REF>) According to <cit.>, for the harmonic oscillator with the Hamiltonian (<ref>) and the equations of motion (<ref>), the expected energy error produced by a k-stage palindromic splitting integrator Ψ_h applied for L integration steps is given by 𝔼[Δ H] = sin^2 ( L Θ^_h ) ρ (h, ), where Θ^_h = arccos A^_h, and A^_h is defined in (<ref>). For L = 1 and ρ (h, ) defined in (<ref>), (<ref>) yields 𝔼[Δ H] = ( B^_h + C^_h )^2/2. For the 1-stage Velocity Verlet integrator (<ref>), one has Ψ_h^VV([ θ_i; p_i ]) = ([ ( 1 - h^2/2) θ_i + h p_i; ( - h + h^3/4) θ_i + (1 - h^2/2) p_i ]), that is B^_h = h, C^_h = - h + h^3/4, which, combined with (<ref>), provides 𝔼^1_VV [Δ H] = h_VV^6/32. § DERIVATION OF FOR S-AIA TUNING. For the burn-in stage, we choose the 1-stage Velocity Verlet integrator with L = 1 and step size Δ t_VV, which should be ideally chosen to be close to the center of the stability interval to achieve the best accuracy and sampling efficiency of an HMC simulation. In order to identify such a step size, we estimate the expected acceptance probability 𝔼 [α] following <cit.> (Sec. 5.2, Th. 1), i.e. 𝔼 [α] = 1 - 2/πarctan√(𝔼 [Δ H]/2) , which holds for standard univariate Gaussian distribution, i.e. the harmonic oscillator with the Hamiltonian (<ref>), regardless of the integrator being used, the step size and L. For the burn-in stage simulation setting, the expected energy error 𝔼 [Δ H] is defined in (<ref>) and, evaluated at the middle of the stability interval, h = 1, it is equal to 𝔼 [Δ H] = 1/32. Combining (<ref>) and (<ref>), one obtains 𝔼 [α] ≈ 0.92 = α_target. § ACKNOWLEDGMENTS We thank Tijana Radivojević, Jorge Pérez Heredia and Felix Müller for their valuable contributions at the early stage of the study. We acknowledge the financial support by the Ministerio de Ciencia y Innovación (MICINN, AEI) of the Spanish Government through BCAM Severo Ochoa accreditation CEX2021-001142-S (LN, EA) and projects PID2019-104927GB-C22, PID2019-104927GB-C21, MCIN/AEI/10.13039/501100011033, ERDF (“A way of making Europe”) (all). This work was supported by the BERC 2022-2025 Program (LN, EA), by Convenio IKUR 21-HPC-IA, by ELKARTEK Programme, grants KK-2022/00006 (EA), KK-2021/00022 (EA, LN) and KK-2021/00064 (EA) - all funded by the Basque Government, and by La Caixa - INPhINIT 2020 Fellowship, grant LCF/BQ/DI20/11780022 (LN), funded by the Fundación "la Caixa". This work has been possible thanks to the support of the computing infrastructure of the i2BASQUE academic network, Barcelona Supercomputing Center (RES), DIPC Computer Center, BCAM in-house cluster Hipatia and the technical and human support provided by IZO-SGI SGIker of UPV/EHU. § REFERENCES elsarticle-num
http://arxiv.org/abs/2307.02842v1
20230706081454
Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation
[ "Yu Chen", "Yihan Du", "Pihe Hu", "Siwei Wang", "Desheng Wu", "Longbo Huang" ]
cs.LG
[ "cs.LG" ]
Local Modules in Braided Monoidal 2-Categories Thibault D. Décoppet and Hao Xu July 2023 ============================================== Risk-sensitive reinforcement learning (RL) aims to optimize policies that balance the expected reward and risk. In this paper, we investigate a novel risk-sensitive RL formulation with an Iterated Conditional Value-at-Risk (CVaR) objective under linear and general function approximations. This new formulation, named ICVaR-RL with function approximation, provides a principled way to guarantee safety at each decision step. For ICVaR-RL with linear function approximation, we propose a computationally efficient algorithm ICVaR-L, which achieves an O(√(α^-(H+1)(d^2H^4+dH^6)K)) regret, where α is the risk level, d is the dimension of state-action features, H is the length of each episode, and K is the number of episodes. We also establish a matching lower bound Ω(√(α^-(H-1)d^2K)) to validate the optimality of ICVaR-L with respect to d and K. For ICVaR-RL with general function approximation, we propose algorithm ICVaR-G, which achieves an O(√(α^-(H+1)DH^4K)) regret, where D is a dimensional parameter that depends on the eluder dimension and covering number. Furthermore, our analysis provides several novel techniques for risk-sensitive RL, including an efficient approximation of the CVaR operator, a new ridge regression with CVaR-adapted features, and a refined elliptical potential lemma. § INTRODUCTION Reinforcement learning (RL) <cit.> is a general sequential decision-making framework for creating intelligent agents that interact with and learn from an unknown environment. RL has made ground-breaking achievements in many important application areas, e.g., games <cit.>, finance <cit.> and autonomous driving <cit.>. Despite the practical success, existing RL formulation focuses mostly on maximizing the expected cumulative reward in a Markov Decision Process (MDP) under unknown transition kernels. This risk-neutral criterion, however, is not suitable for real-world tasks that require tight risk control, such as automatic carrier control <cit.>, financial investment <cit.> and clinical treatment planning <cit.>. To address this limitation, risk-sensitive RL has emerged as a promising research area, which aims to incorporate risk considerations into the RL framework. A rich body of works have considered various risk measures into episodic MDPs with unknown transition kernels to tackle risk-sensitive tasks. Among the different risk measures, the Conditional Value-at-Risk (CVaR) has received an increasing attention in RL, e.g., <cit.>. CVaR is a popular coherent risk measure <cit.>, which can be viewed as the expectation of the worst α-percent of a random variable for a given risk level α∈ (0, 1]. It plays an important role in financial risk controlling <cit.>, safety-critical motion planning <cit.>, and robust decision making <cit.>. However, existing CVaR-based RL works <cit.> focus on the tabular MDP, where the state and action spaces are finite, and the complexity bounds scale polynomially in the sizes of state and action spaces. As a result, such tabular MDPs can tackle only few problems since in realistic applications, the state and action spaces are often large or even infinite. To extend the risk-sensitive RL theory and handle large state space, in this paper, we study Iterated CVaR RL with both linear and general function approximations in episodic MDPs (ICVaR-RL with linear and general function approximations). 
One key distinction of our work from existing function approximation results <cit.> is the Iterated CVaR objective. Iterated CVaR <cit.> is an important variant of CVaR, which focuses on optimizing the worst α-percent performance at each step, and allows the agent to tightly control the risk throughout the decision process. m The Iterated CVaR objective imposes significant technical challenges under the function approximation setting: the Iterated CVaR measure destroys the linearity of the risk-neutral Bellman equation, which makes existing risk-neutral RL algorithms for function approximation fail. For ICVaR-RL with function approximation, we develop two novel algorithms ICVaR-L for linear function approximation and ICVaR-G for general function approximation. We also develop new analytical techniques and establish tight regret upper and lower bounds for the algorithms. Our contributions are summarized as follows: * We develop a provably efficient (both computationally and statistically) algorithm ICVaR-L for ICVaR-RL with linear function approximation, based on a delicate approximation of the CVaR operator and a novel least square ridge regression for transition parameter estimation. ICVaR-L is a sample efficient algorithm with regret bound O(√(α^-(H+1)(d^2H^4+dH^6)K)), where α is the risk level, d is the dimension of state-action features, H is the length of each episode, and K is the number of episodes. * We construct a hard-to-learn instance for ICVaR-RL with linear function approximation, where any algorithm must suffer an Ω(√(α^-(H-1) d^2 K )) regret. This shows that algorithm ICVaR-L acheives a nearly minimax optimal dependency on d and K, and the factor √(α^-H) in our regret bound is unavoidable in general. * For ICVaR-RL with general function approximation, we propose algorithm ICVaR-G. We prove that ICVaR-G achieves a regret bound of O(√(α^-(H+1)DH^4K)) based on a new elliptical potential lemma. Here D is a dimensional parameter that depends on the eluder dimension and covering number (see Section <ref> for the details). Notation. For a positive integer n, [n] := {1, 2, ⋯, n}. For a non-zero real number r ∈∖{0}, the sign operator sgn(r) := r/|r|. For a d-dimension vector x ∈^d and a positive definite matrix Λ∈^d × d, x_Λ := √(x^⊤Λ x) be the norm of vectors in ^d under a positive matrix Λ. The operator (x)^+ := max{x, 0}. § RELATED WORKS Risk-sensitive RL with CVaR measure. There are two types of CVaR measures, i.e., the static and dynamic (iterated) CVaR measures. <cit.> study the static CVaR measure, which considers the CVaR of cumulative reward in tabular MDPs with known transition kernels. <cit.> investigate the static CVaR RL with unknown transition kernels, and <cit.> proposes a nearly minimax optimal algorithm. On the other hand, <cit.> proposes Iterated CVaR RL (ICVaR-RL), an episodic risk-sensitive RL formulation with unknown transition kernels and the Iterated CVaR measure, and studies both regret minimization and best policy identification in tabular MDPs. In addition, <cit.> investigates a general iterated risk measure (including Iterated CVaR) in tabular MDPs. In contrast, we study Iterated CVaR RL with linear and general function approximations. RL with function approximation. For risk-neutral RL, <cit.> study linear function approximation in two types, i.e., linear MDPs and linear mixture MDPs. <cit.> and <cit.> present nearly minimax optimal algorithms for Linear MDP and linear mixture MDP, respectively. 
<cit.> study risk-neutral RL with general function approximation, which assumes that transition probabilities belong to a given function class. They establish sublinear regret bounds dependent on the eluder dimension of the given function class. <cit.> considers the first risk-sensitive RL with function approximation under the entropic risk measure, and <cit.> studies RL with the iterated coherent risk measure with non-linear function approximation under a simulator assumption. Compared to <cit.>, we investigate the function approximation for RL with Iterated CVaR measure without the simulator assumption. § PRELIMINARIES §.§ Episodic MDP with Function Approximation We consider an episodic MDP parameterized by a tuple ℳ=(𝒮, 𝒜, K, H, {ℙ_h}_h=1^H, {r_h}_h=1^H), where 𝒮 and 𝒜 represent the state space and action space respectively, K is the number of episodes, and H is the length of each episode. For step h , _h : 𝒮×𝒜→Δ(𝒮) is the transition kernel which is unknown to the agent, and r_h : 𝒮×𝒜→ [0, 1] is the reward function which is deterministic and known to the agent.[This assumption is commonly considered in previous works <cit.>. ] At the beginning of episode k, an initial state s_k,1 is chosen by the environment. At each step h ∈ [H], the agent observes the state s_k,h, and chooses an action a_k,h := π^k_h(s_k,h), where π^k_h : 𝒮→𝒜 is a mapping from the state space to action space. The agent then receives a reward r_h(s_k,h, a_k,h). Then, the MDP transitions to a next state s_k,h+1 that is drawn from the transition kernel _h(·| s_k,h, a_k,h). This episode will terminate at step H+1, and the agent will advance to the next episode. This process is repeated K episodes. The objective of the agent is to determine an optimal policy π^k so as to maximize its performance (specified below). §.§.§ Linear Function Approximation [Linear function approximation <cit.>] In the given episodic MDP ℳ, the transition kernel is a linear mixture of a feature basis ϕ: 𝒮×𝒮×𝒜→^d, i.e., for any step h∈[H], there exists a vector θ_h ∈^d with θ_h_2 ≤√(d) such that _h(s' | s, a) = ⟨θ_h, ϕ(s', s, a)⟩ holds for any (s', s, a) ∈𝒮×𝒮×𝒜. Moreover, the agent has access to the feature basis ϕ. In this paper, we assume that the given feature basis ϕ satisfying ψ_f(s,a) _2 ≤ 1 where ψ_f(s,a) :=∑_s'∈𝒮ϕ(s',s,a)f(s') for any bounded function f : 𝒮→ [0, 1] and (s, a) ∈𝒮×𝒜. This assumption is also considered in <cit.>. Since ϕ is given to the agent, the transition kernel _h is parameterized by the d-dimension vector θ_h. This form of linear function approximation is also studied in risk-neutral RL <cit.> and risk-sensitive RL <cit.>. A episodic MDP with this type of linear function approximation is also called linear mixture MDP <cit.>. §.§.§ General Function Approximation In addition to the above linear mixture model, we also consider a general function approximation scenario, which is proposed by <cit.> and also considered in <cit.>. [General function approximation] In the given episodic MDP ℳ, the transition kernels {_h}_h=1^H ⊂𝒫 where 𝒫 is a function class of transition kernels with the form : 𝒮×𝒜→Δ(𝒮). In addition, the agent has access to such function class 𝒫. Denote the bounded function set ℬ(𝒮, [0, H]) with form f : 𝒮→ [0, H]. With the given candidate set 𝒫, we define a function class 𝒵 𝒵 := {z_(s,a,V) = ∑_s'∈𝒮(s'| s,a)V(s') : ∈𝒫}, where z_ is a function with domain 𝒮×𝒜×ℬ(𝒮, [0, H]). For simplicity, we denote [ V](s,a) := ∑_s' ∈𝒮(s' | s, a) V(s') for function V: 𝒮→. 
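To make the two parameterizations above concrete, here is a minimal numerical sketch (all sizes, names, and the toy feature basis are hypothetical, not from the paper) of a linear-mixture transition kernel _h(s'|s,a)=⟨θ_h,ϕ(s',s,a)⟩ on a small finite state space, together with the induced feature ψ_f(s,a)=∑_s'ϕ(s',s,a)f(s'). The final assertion checks the identity [_h f](s,a)=⟨θ_h,ψ_f(s,a)⟩, which is exactly why the linear mixture model is a special case of the general function class 𝒵. (The toy normalization below does not enforce the norm bounds ‖θ_h‖_2≤√(d) and ‖ψ_f‖_2≤ 1 of Assumption <ref>.)

```python
import numpy as np

# Toy linear-mixture transition kernel P_h(s'|s,a) = <theta_h, phi(s',s,a)>
# and the induced feature psi_f(s,a) = sum_{s'} phi(s',s,a) f(s').
S, A, d = 5, 3, 4                      # |S|, |A|, feature dimension (hypothetical toy values)
rng = np.random.default_rng(0)

phi = rng.random((S, S, A, d))          # phi[s', s, a] in R^d (assumed given basis)
theta_h = rng.random(d)
# Normalize so each P_h(.|s,a) is a probability distribution (toy normalization only).
Z = np.einsum('psad,d->sa', phi, theta_h)
phi = phi / Z[None, :, :, None]
P_h = np.einsum('psad,d->psa', phi, theta_h)    # P_h[s', s, a]
assert np.allclose(P_h.sum(axis=0), 1.0)

def psi(f):
    """psi_f(s,a) = sum_{s'} phi(s',s,a) * f(s')  ->  array of shape (S, A, d)."""
    return np.einsum('psad,p->sad', phi, f)

f = rng.random(S)                                # any bounded function f : S -> [0, 1]
lhs = np.einsum('sad,d->sa', psi(f), theta_h)    # <theta_h, psi_f(s,a)>
rhs = np.einsum('psa,p->sa', P_h, f)             # [P_h f](s,a) = z_P(s, a, f)
assert np.allclose(lhs, rhs)
```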
We measure the efficiency of RL algorithms under Assumption <ref> using the eluder dimension of 𝒵 and covering number of 𝒫 (similar to previous works <cit.>). To introduce the eluder dimension, we first define the concept of ε-independence. For ε > 0 and function class 𝒵 whose elements are with domain 𝒳, an element x ∈𝒳 is ε-dependent on the set 𝒳_n := {x_1, x_2, ⋯, x_n}⊂𝒳 with respect to 𝒵, if any pair of functions z, z' ∈𝒵 with √(∑_i = 1^n ( z(x_i) - z'(x_i) )^2)≤ε satisfies z(x) - z'(x) ≤ε. Otherwise, x is ε-independent on 𝒳_n if it does not satisfy the condition. For any ε > 0, and a function class 𝒵 whose elements are in domain 𝒳, the Eluder dimension _E(𝒵, ε) is defined as the length of the longest possible sequence of elements in 𝒳 such that for some ε' ≥ε, every element is ε'-independent of its predecessors. Intuitively, the eluder dimension is the length of longest possible sequence x_1, x_2 ⋯, x_d such that for any function z ∈𝒵 and for any i ∈ [d-1], knowing z(x_1),⋯,z(x_i) will not reveal z(x_i+1). Eluder dimension is a widely used metric to measure the complexity of a function class in RL, and we refer readers to <cit.> for further details. In fact, the linear mixture setting with feature dimension d is a special case with eluder dimension d of the function set 𝒵, which is detailed in Section <ref>. §.§ Iterated CVaR RL In this paper, we apply Iterated CVaR as the risk-sensitive criterion (similar to <cit.>). First, we define the CVaR operator <cit.>. For a random variable X with probability measure and given risk level α∈ (0, 1]: CVaR^α_(X) := sup_x∈{ x - 1/α𝔼[(x - X)^+] }, which can be viewed as the expectation of the α-worst-percent of the random variable X. Then, we consider the value function V_h^π : 𝒮→ and Q-value function Q_h^π : 𝒮×𝒜→ under the Iterated CVaR measure for a policy π = {π_h : 𝒮→𝒜} as the cumulative reward obtained when transitioning to the worst α-portion states (i.e., with the lowest α-portion values) at step h, h+1, ⋯, H. Q_h^π and V_h^π are recursively defined by CVaR-based Bellman equation: { Q_h^π(s,a) = r_h(s,a) + CVaR^α_s'∼_h(·| s, a)(V_h+1^π(s')) V_h^π(s) = Q_h^π(s, π_h(s)) V_H+1^π(s) = 0, ∀ s ∈𝒮. For simplicity, we use to denote the CVaR operator: [_^α(V)](s,a) := CVaR^α_s' ∼(· | s, a)(V(s')) = sup_x ∈{ x - 1/α[(x-V)^+](s,a) }, where [ (x-V)^+](s,a) = ∑_s'∈𝒮(s'| s,a)(x-V)^+(s'). Let π^* be the optimal policy which gives the optimal value function V_h^*(s) = max_π V_h^π(s) for any s ∈𝒮. Prior work <cit.> shows that π^* always exists , and the optimal value function (optimality Bellman equation) is given as Q_h^*(s,a) = r_h(s,a)+[^α__h V_h+1^*](s,a), V_h^*(s) = max_a∈𝒜Q_h^*(s,a). The objective of the agent is to minimize the cumulative regret for all K episodes, which is defined as Regret(K) := ∑_k=1^K ( V_1^*(s_k,1) - V_1^π^k(s_k,1) ), where π^k is the policy taken by the agent in episode k, and V^*_1(s_k,1) - V^π^k_1(s_k,1) represents the sub-optimality of π^k. Remark. In classic risk-neutral RL, the Q-value function is defined as Q_h^π(s,a) = r_h(s,a) + [_h V_h+1^π](s,a). The distinction of ICVaR-RL is to replace the Bellman operator [_h V] by the CVaR-based risk-sensitive Bellman operator [^α__h V]. Notice that when α = 1, the CVaR operator becomes the expectation operator, and Iterated CVaR RL degenerates to classic risk-neutral RL. 
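To illustrate the CVaR operator used throughout, the following is a small self-contained sketch (hypothetical helper names, not part of the paper) that evaluates CVaR^α_(X) for a discrete random variable both via the variational form sup_x{x-𝔼[(x-X)^+]/α} and via its tail-average interpretation. For a discrete X the variational objective is piecewise linear and concave with kinks at the support points, so maximizing over the support is exact, and α=1 recovers the plain expectation.

```python
import numpy as np

def cvar_variational(xs, ps, alpha):
    """CVaR^alpha(X) = sup_x { x - E[(x - X)^+] / alpha }; for discrete X the
    supremum is attained at a support point, so scanning the support is exact."""
    xs, ps = np.asarray(xs, float), np.asarray(ps, float)
    return max(x - np.sum(ps * np.maximum(x - xs, 0.0)) / alpha for x in xs)

def cvar_tail_average(xs, ps, alpha):
    """Expectation of the worst (lowest-value) alpha-fraction of X."""
    order = np.argsort(xs)
    xs, ps = np.asarray(xs, float)[order], np.asarray(ps, float)[order]
    cdf = np.cumsum(ps)
    i = np.searchsorted(cdf, alpha)          # index of VaR^alpha(X)
    below = ps[:i] @ xs[:i]                  # probability mass strictly below VaR
    mass_below = cdf[i - 1] if i > 0 else 0.0
    return (below + (alpha - mass_below) * xs[i]) / alpha

xs, ps, alpha = [1.0, 2.0, 3.0, 4.0], [0.25, 0.25, 0.25, 0.25], 0.5
assert np.isclose(cvar_variational(xs, ps, alpha), cvar_tail_average(xs, ps, alpha))  # both equal 1.5
assert np.isclose(cvar_variational(xs, ps, 1.0), np.dot(xs, ps))  # alpha = 1 recovers the mean
```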
§ ICVAR-RL WITH LINEAR FUNCTION APPROXIMATION §.§ Algorithm for ICVaR-RL with Linear Function Approximation: ICVaR-L In this section, we propose ICVaR-L (Algorithm <ref>), an upper-confidence value-iteration algorithm designed for ICVaR-RL with linear function approximation. ICVaR-L is inspired by the algorithm ICVaR-RM proposed in <cit.> for tabular MDPs, and incorporates two novel techniques: an ε-approximation of the CVaR operator and a new ridge regression with CVaR-adapted features for estimating the transition parameter θ_h. Algorithm <ref> presents the pseudo-code of ICVaR-L. Overall, ICVaR-L performs optimistic value iteration in Lines <ref>-<ref>, where the key component is calculating the optimistic Q-value function Q_k,h in Line <ref> with approximated CVaR operator and the bonus term. Then the policy π^k is executed in Line <ref> which is greedy determined by the optimistic Q-value function. After that, we calculate the transition parameter estimator θ_k+1,h in Lines <ref>-<ref> by a new ridge regression. The following paragraphs delve into the novel techniques involved: (i) we approximate the CVaR operator by taking the supremum on a discrete set instead of a continuous interval; (ii) we perform ridge regression for estimating θ with customized regression features. Prior results, e.g., <cit.>, also study RL with linear function approximation. However, they focus only on the risk-neutral setting and cannot provide any safety guarantees. Approximation of the CVaR operator. Under Assumption <ref>, we have that for any function V : 𝒮→ [0, H], [^α__hV](s,a) = sup_x∈{x - 1/α[_h (x - V)^+](s,a)} = sup_x∈[0, H]{x - 1/α⟨θ_h, ψ_(x-V)^+(s,a) ⟩}, where the second equality holds by V(s) ∈ [0, H] for any s ∈𝒮. For simplicity, we denote [^α_θ V](s,a) := sup_x∈[0, H]{x - 1/α⟨θ, ψ_(x-V)^+(s,a) ⟩}. Notice that directly calculating the supremum operator is computationally inefficient. To address this issue, we define a approximated CVaR operator ^α, _θ with accuracy ε > 0, [_θ^α, (V)](s, a) := sup_x∈{ x - 1/α⟨θ, ψ_(x-V)^+(s,a)⟩}, where is a discrete ε-net of [0, H], i.e., := { nε: n ∈{1, 2, ⋯, ⌊ H/ε⌋}}. In other words, ^α, _θ takes a supremum over a discrete finite set instead of a continuous interval [0, H]. This approach provides a computationally efficient method to approximate the CVaR operator. Notably, the approximation of the CVaR operator guarantees that the maximum difference between the approximate and true operators is at most 2ε (shown in Lemma <ref> in Appendix <ref>). This efficient approximation technique enables us to effectively handle risk-sensitive RL problems with CVaR-type measures while maintaining computational tractability. Ridge regression for the estimation of θ_h. Since we consider ICVaR-RL with linear function approximation, it is important to estimate the parameter θ_h of the transition kernel _h for each step h ∈ [H]. Inspired by previous works for linear function approximation <cit.>, we apply ridge regression to update the estimated parameter θ_k+1,h as follow. θ_k+1,h←θ' ∈^dminλθ'_2^2 + ∑_i=1^k( (x_i,h - V_i,h+1)^+(s_i,h+1) - ⟨θ', ψ_(x_i,h-V_i,h+1)^+(s_i,h,a_i,h) ⟩)^2. Note that we consider {ψ_(x_i,h-V_i,h+1)^+}_i=1^k as the regression features, which are different to {ψ_V_i,h+1}_i=1^k used in previous works of linear mixture MDP for risk-neutral RL <cit.>. This choice is motivated by the explicit expression of the CVaR operator, which involves the function (x-V_k,h+1)^+ for some x ∈ [0, H] and optimistic value function V_k,h+1. 
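The two ingredients just described, the ε-net approximation of the CVaR operator and the ridge regression with CVaR-adapted features (with the choice of x_k,h discussed next), can be sketched as follows. This is a schematic illustration with hypothetical names and toy constants, not the paper's implementation.

```python
import numpy as np

# Sketch of (i) the eps-net approximation of the CVaR operator in the linear-mixture
# parameterization and (ii) the ridge regression with CVaR-adapted features.
d, H, alpha, lam, eps = 4, 5, 0.3, 25.0, 0.5          # hypothetical toy constants
D_eps = eps * np.arange(1, int(H / eps) + 1)          # the eps-net {eps, 2*eps, ..., floor(H/eps)*eps}

def approx_cvar(theta, psi_of, alpha, D_eps):
    """max_{x in D} { x - <theta, psi_{(x-V)^+}(s,a)> / alpha };
    psi_of(x) returns the feature psi_{(x-V)^+}(s,a) in R^d."""
    return max(x - np.dot(theta, psi_of(x)) / alpha for x in D_eps)

def ridge_update(psis, ys, lam):
    """theta_hat = argmin_theta  lam*||theta||^2 + sum_i (y_i - <theta, psi_i>)^2,
    with psi_i = psi_{(x_i - V_i)^+}(s_i, a_i) and y_i = (x_i - V_i)^+(s_{i+1})."""
    Lambda = lam * np.eye(d) + sum(np.outer(p, p) for p in psis)
    theta_hat = np.linalg.solve(Lambda, sum(y * p for p, y in zip(psis, ys)))
    return theta_hat, Lambda

def bonus(psi_of, Lambda, beta, alpha, D_eps):
    """Exploration bonus (beta/alpha) * max_{x in D} ||psi_{(x-V)^+}(s,a)||_{Lambda^{-1}}."""
    Lam_inv = np.linalg.inv(Lambda)
    return beta / alpha * max(np.sqrt(psi_of(x) @ Lam_inv @ psi_of(x)) for x in D_eps)

# Smoke test with dummy data (purely illustrative).
rng = np.random.default_rng(1)
theta_hat, Lambda = ridge_update([rng.random(d) for _ in range(10)], rng.random(10), lam)
demo_psi = lambda x: np.full(d, min(x, H) / (d * H))   # dummy feature map
print(approx_cvar(theta_hat, demo_psi, alpha, D_eps), bonus(demo_psi, Lambda, 1.0, alpha, D_eps))
```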
The specific value of x_k,h is determined in Line <ref>. Intuitively, the agent will explore the direction of the maximum norm of ψ_(x-V_k,h+1)^+(s_k,h,a_k,h) for x ∈ such that every possible direction is eventually well explored. Remark (Computational Tractability). The space and computation complexities for ICVaR-L are O(d^2H + |||𝒜|HK) and O(d^2|||𝒜|H^2K^2), respectively. Please refer to Appendix <ref> for details. §.§ Regret Upper Bound for ICVaR-L (Algorithm <ref>) We summarize the regret performance of Algorithm <ref> as follows. Suppose Assumption <ref> holds and for some δ∈ (0, 1], set λ = H^2, ε = dH√(α^H-3/K), and the bonus multiplier β as β = H√(dlog( H + KH^3/δ)) + √(λ). Then, with probability at least 1 - 2δ, the regret of ICVaR-L (Algorithm <ref>) satisfies Regret(K) ≤ 4dH^2√(K)/√(α^H+1) + 2β√(KH/α^H+1)√(8dHlog(K) + 4H^3log4log_2 K + 8/δ). Theorem <ref> states that ICVaR-L enjoys an O(√(α^-(H+1)(d^2H^4+dH^6)K)) regret upper bound for any Iterated CVaR RL problem with linear function approximation. In comparison to the O(√(α^-(H+1)S^2AH^3K)) regret upper bound for tabular MDPs <cit.>, our result exhibits the same order of dependence on α and K as the tabular setting, and removes the S term, which matters since in our setting S can be extremely large or even infinite. Additionally, our upper bound matches the lower bound in terms of d and K (see the lower bound in Section <ref>). The detailed proof of the theorem is given in Appendix <ref>. Here we highlight our novel techniques in the analysis. Efficient approximation of the CVaR operator. In Algorithm <ref>, we take the supremum over a finite set instead of the interval [0, H] in Eq. (<ref>). For this step, we present a novel lemma that shows the error of approximating [^α_θ(V)](s,a) by [^α, _θ(V)](s,a) is at most 2ε (Lemma <ref> in Appendix <ref>), i.e., for any (s, a) ∈𝒮×𝒜, probability kernel parameter θ, and bounded function V : 𝒮→ [0, H], | [^α, _θ(V)](s, a) - [^α_θ(V)](s, a) | ≤ 2ε. By this lemma, we have a computationally tractable method to calculate an ε-accurate approximation of the CVaR operator, which contributes to the computational efficiency of Algorithm <ref>. Novel transition estimation and concentration. In Line <ref> of Algorithm <ref>, we calculate θ_k,h to estimate the transition parameter θ_h by solving a new least square problem. In the proof of Theorem <ref>, we establish a novel concentration argument in Lemma <ref> in Appendix <ref>, which shows that the transition parameter θ_h lies in an ellipsoid centered at the estimator θ_k,h. Then, we can bound the deviation term between the transition parameter θ_h and the estimator θ_k,h for the CVaR operator by |[^α, _θ_k,h(V_k,h+1)] (s, a) - [^α, _θ_h(V_k,h+1)] (s, a) | ≤β/αsup_x∈ψ_(x-V_k,h+1)^+(s, a)_Λ_k,h^-1= B_k,h(s,a). This result is formally presented in Lemma <ref> in Appendix <ref>. Recall that in our least square problem, we choose the regression features {ψ_(x_i,h-V_i,h+1)^+}_i=1^k for the specific values x_i,h determined in Line <ref>, such that the agent updates the maximum norm of ψ_(x-V_k,h+1)^+(s, a) over x∈ in the covariance matrix Λ_i+1,h. Hence, ∑_k B_k,h(s_k,h,a_k,h) can be upper bounded by the elliptical potential lemma (Lemma <ref> in Appendix <ref>), and is sublinear with respect to K. §.§ Regret Lower Bound for ICVaR-RL with Linear Function Approximation Here we present a regret lower bound for Iterated CVaR RL with linear function approximation. Let H ≥ 2, d ≥ 2, and an integer n ∈ [H - 1]. 
Then, for any algorithm, there exists an instance of Iterated CVaR RL under Assumption <ref>, such that the expected regret is lower bounded as follows: 𝔼[Regret(K)] ≥Ω( d(H - n)√(K/α^n)). Here we briefly explain the key idea of constructing the hard instance, and defer the complete proof to Appendix <ref>. Consider the action space 𝒜 = {-1, 1}^d-1 and a parameter set 𝒰 = {-Δ, Δ}^d-1, where Δ is a small constant. The instance contains n+3 states with n regular states s_1, ⋯, s_n and three absorbing states x_1, x_2, x_3. Moreover, we uniformly choose a vector μ from 𝒰. Set θ_h = (1, μ^⊤)^⊤ for any h ∈ [H]. Then, we can generate the transition probabilities and reward function shown in Figure <ref> by properly defining the feature mapping. Intuitively, the instance in Figure <ref> combines a chain of regular states s_1→ s_2→⋯→ s_n with a hard-to-learn bandit at state s_n →{x_2, x_3} (inspired by the construction for tabular MDPs in <cit.>). With probability α, the agent can move from s_i to s_i+1 for i ∈ [n - 1]. Since we consider the worst-α-portion case under the Iterated CVaR criterion, the CVaR-type value function of s_i only depends on the state s_i+1 for i ∈ [n - 1]. At state s_n, there is a linear-type hard-to-learn bandit (inspired by the construction for the lower bound instance of linear bandits <cit.>). By construction, the absorbing state x_2 is better than x_3. Hence, the best policy at s_n is a_n^* = sgn(μ) := (sgn(μ_1), ⋯, sgn(μ_d-1))^⊤. As a result, the agent needs to learn the sign of every element of μ by reaching s_n and pulling the bandit. Optimality of ICVaR-L. By choosing n = H-1 in Theorem <ref>, we can see that ICVaR-L achieves a nearly minimax optimal dependency on the factors d and K, and that the factor √(α^-H) in the regret upper bound of ICVaR-L is unavoidable in general. § ICVAR-RL WITH GENERAL FUNCTION APPROXIMATION §.§ Algorithm for ICVaR-RL with General Function Approximation: ICVaR-G In this section, we introduce ICVaR-G (as shown in Algorithm <ref>) for Iterated CVaR RL with general function approximation defined in Section <ref>. ICVaR-G introduces a new distance function Dist_k,h for transition distributions in 𝒫, and solves a novel least square problem to calculate the estimator _k,h and construct the confidence set 𝒫_k,h. Overall, in each episode, the algorithm first calculates _k,h to estimate the transition kernel _h by a least square problem in Line <ref> and selects a confidence set 𝒫_k,h in Line <ref>, such that _h belongs to 𝒫_k,h with high probability (as detailed in Lemma <ref> in Appendix <ref>). Subsequently, the algorithm calculates the optimistic value functions in Lines <ref>, <ref> based on the selected set 𝒫_k,h and chooses the exploration policy π^k using a greedy approach in Line <ref>. We now introduce our selection of the desired estimator _k,h and confidence set 𝒫_k,h with the application of a distance function Dist_k,h : 𝒫×𝒫→_≥ 0 for transition kernels in 𝒫. Recall the definition of the function class 𝒵 = {z_ : ∈𝒫} in Eq. (<ref>). Let 𝒳 := 𝒮×𝒜×ℬ(𝒮, [0, H]) be the domain of z_. We use the functions in 𝒵 to measure the difference between two probability kernels in 𝒫. Specifically, for all (s,a) ∈𝒮×𝒜, set x_k,h(s,a) := argmax_x∈[0,H]{sup_'∈𝒫_k,hz_'(s,a,(x-V_k,h+1)^+) - inf_'∈𝒫_k,hz_'(s,a,(x-V_k,h+1)^+) }, i.e., x_k,h(s,a) maximizes the diameter of 𝒫_k,h measured by the function z_(s,a,(x-V_k,h+1)^+). Here we denote X_k,h := (s_k,h, a_k,h, (x_k,h(s_k,h,a_k,h) - V_k,h+1)^+) ∈𝒳. 
We can define the distance function Dist_k,h : 𝒫×𝒫→_≥ 0 based on the state-action pair (s_k,h, a_k,h) and optimistic value-function V_k,h+1: Dist_k,h(, ') = ( z_(X_k,h) - z_'(X_k,h) )^2 for any , ' ∈𝒫. Equipped with this distance function, we can estimate _h by _k,h:=min_'∈𝒫∑_i=1^k-1Dist_i,h(', δ_k,h), where δ_k,h denotes the delta function centered on s_k,h+1 for any (s, a) ∈𝒮×𝒜, i.e., δ(s' | s, a) = 1 for s' = s_k,h+1 and δ(s'| s, a) = 0 for other cases, which can be seen as a sample from _h. That is, _k,h is the one with the lowest gap to the sequence {δ_i,h}_i=1^k-1 which contains the information of history trajectories. As for 𝒫_k,h, it is the set of probability kernels centered at _k,h, and our analysis shows that the true transition _h ∈𝒫_k,h with high probability (detailed in Lemma <ref> in Appendix <ref>). §.§ Regret Upper Bound for ICVaR-G (Algorithm <ref>) In this section , we present the main result for ICVaR-G (Algorithm <ref>). Suppose Assumption <ref> holds and for some positive constant δ∈ (0,1], we set the estimation radius γ as γ =4 H^2(2log(2H· N(𝒫, ·_∞, 1, 1/K)/δ) + 1 + √(log(5K^2/δ))). Then, with probability at least 1 -2δ, the regret of Algorithm <ref> satisfies Regret(K) ≤√(4KH/α^H+1)√(2H + 2d_EH^3 + 8γd_EHlog(K) + H^3log4log_2K + 8/δ), where d_E := _E(𝒵, 1/√(K)) is the eluder dimension of 𝒵, and N(𝒫, ·_∞, 1, 1/K) is the 1/K-covering number of function class 𝒫 under the norm ·_∞, 1.[For any , ' ∈𝒫, - '_∞, 1 := sup_(s,a) ∈𝒮×𝒜∑_s'∈𝒮|(s' | s, a) - '(s' | s, a)|. This norm is also considered in <cit.>] By setting the dimensional parameter D = d_Elog(N(𝒫, ·_∞, 1/K)), we have Regret(K) ≤O(√(α^-(H+1)DH^4K)). Remark. We defer the complete proof to Appendix <ref>. The dominating term of the regret upper bound in Theorem <ref> is O(√(α^-(H+1)DH^4K)), which enjoys the same order of α, H, and K as the result of ICVaR-L in Theorem <ref>. Moreover, in the case where Assumption <ref> holds (i.e., the linear mixture case), d_E = O(d), and log (N(𝒫, ·_∞, 1, 1/K)) = O(d). This means that we can recover the Regret(K) ≤O(√(α^-(H-1)d^2H^4K)) bound in Theorem <ref> by Theorem <ref>. The technical highlight in proving Theorem <ref> lies in the introduction of a novel elliptical potential lemma for a more precise analysis of the regret summation. We begin with bounding the deviation term: sup_' ∈𝒫_k,h [^α_'V_k,h+1](s,a) - [^α__hV_k,h+1](s,a) ≤1/αg_k,h(s,a), where g_k,h(s,a) is defined as g_k,h(s,a) = sup_'∈𝒫_k,hz_'(s,a,(x_k,h(s,a)-V_k,h+1)^+) - inf_'∈𝒫_k,hz_'(s,a,(x_k,h(s,a)-V_k,h+1)^+) Intuitively, g_k,h(s,a) can be interpreted as the diameter of 𝒫_k,h. Then our newly introduced elliptical potential lemma (Lemma <ref> in Appendix <ref>) provides a more refined result by showing that ∑_k ∑_h g^2_k,h(s_k,h,a_k,h) = O(log(K)) in terms of K. This result is sharper compared to the existing result ∑_k ∑_h g_k,h(s_k,h,a_k,h) = O(√(K)) in previous works <cit.>. With the refined elliptical potential lemma, we can then perform a precise analysis of regret summation similar to the proof of Theorem <ref>. The detailed proof of Theorem <ref> is deferred to Appendix <ref> § CONCLUSION In this paper, we investigate the risk-sensitive RL with an ICVaR objective, i.e. ICVaR-RL, with linear and general function approximations. We propose two algorithms, ICVaR-L and ICVaR-G, and establish regret bounds, by developing novel techniques including an efficient approximation of the CVaR operator, a new ridge regression with CVaR-adapted regression features, and a refined elliptical potential lemma. 
We also provide a hard-to-learn instance for Iterated CVaR RL with linear function approximation, demonstrating that ICVaR-L achieves a nearly minimax optimal dependency on the dimension of the feature mapping d and the episode number K. There are several interesting directions for future work on risk-sensitive RL with function approximation, e.g., further closing the gap between the upper and lower regret bounds for ICVaR-RL with function approximation in terms of α and H, and extending the function approximation setting to more risk measures. § NOTATIONS We summarize the key notations used for ICVaR-RL with linear function approximation in Table <ref> and those for ICVaR-RL with general function approximation in Table <ref>. Measurable space and σ-algebra. To discuss the performance of the algorithm on any MDP instance, we should establish a formal definition of the probability space considered in the problem. Since the stochasticity in the MDP is due to the transitions, we define the sample space as Ω = (𝒮×𝒜)^KH and the probability measure as the one induced by the transition probabilities together with the policies obtained from the algorithm. Thus, we work on the probability space (Ω, ℱ, ℙ), where ℱ is the product σ-algebra generated by the discrete σ-algebras underlying 𝒮 and 𝒜. To analyze the random variables at step h in episode k, we inductively define ℱ_k,h as follows. First let ℱ_1,h := σ(s_1,1, a_1,1, ⋯, s_1,h, a_1, h) for any h∈[H]. Then set ℱ_k,h := σ(ℱ_k-1,H, s_k,1,a_k,1, ⋯, s_k,h,a_k,h) for any k∈[K] and h∈[H]. § DEFINITION OF THE CVAR OPERATOR In this section, we provide a full introduction of the CVaR operator. First, we introduce the definition of CVaR. For a bounded random variable X with distribution function F_X(x) = [X ≤ x] on a probability space (Ω, ℱ, ), where ℱ is the σ-algebra of the probability space, and a given risk level α∈ (0, 1], the Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are defined as <cit.> VaR^α_(X) = min{x : F_X(x) ≥α}, CVaR^α_(X) = sup_x∈{ x - 1/α𝔼[(x - X)^+] }. Intuitively, VaR is the α-quantile of X and CVaR is a distorted expectation of X conditioned on its α-portion tail. In the definition of CVaR in Eq. (<ref>), the maximizer x of the supremum is exactly VaR^α_(X), i.e., CVaR^α_(X) = VaR^α_(X) - 1/α𝔼[(VaR^α_(X) - X)^+]. This property is established in <cit.>, and Eq. (<ref>) shows that the supremum in Eq. (<ref>) is attained at VaR^α_(X). As stated in previous works <cit.>, when the risk level α = 1, the CVaR operator is exactly the expectation of X, and when α→ 0^+, CVaR tends to the minimum of X. § PROOF OF THEOREM <REF>: REGRET UPPER BOUND FOR ALGORITHM <REF> In this section, we present the complete proof of Theorem <ref>. First, we give an overview of the proof. In Appendix <ref>, we bound the approximation error of the CVaR operator incurred by taking the supremum over a finite set instead of the interval [0, H] in Eq. (<ref>). We propose Lemma <ref>, which bounds the error of approximating [^α_(V)](s,a) by [^α, _(V)](s,a). In Appendix <ref>, we establish the concentration argument with respect to our estimated parameter θ_k,h and the true parameter θ_h for each step h. Lemma <ref> shows that θ_h - θ_k,h_Λ_k,h≤β with high probability, and Lemma <ref> upper bounds the deviation term based on the concentration of θ_k,h. In Appendix <ref>, Lemma <ref> shows that our calculation of the functions Q_k,h and V_k,h is optimistic. Finally, we apply the regret decomposition method and bound the regret of Algorithm <ref> in Appendix <ref>. 
§.§ Error of CVaR Approximation Below we show that the error of approximating the CVaR operator by the technique of taking supremum on the discrete set is small. Assume the transition kernel is parameterized by transition parameter θ, i.e. (s' | s, a) = ⟨θ, ϕ(s', s, a) ⟩ for any (s', s, a) ∈𝒮×𝒮×𝒜. For a given constant ε > 0 and fixed a value function V: 𝒮→ [0, H], we have | [^α, _θ(V)](s, a) - [^α_θ(V)](s, a) | ≤ 2ε. First, we denote [^α, x_θ(V)](s,a) := x - 1/α[(x-V)^+](s,a). Let x^* := VaR_^α(V) ∈ [0, H]. Then, we have [^α_θ(V)](s, a) = [^α, x^*_θ(V)](s, a) given by Eq. (<ref>). If x^* ∈, we have [^α, _θ(V)](s, a) = [^α_θ(V)](s, a). It suffices to consider x^* ∉. Suppose x^* ∈ (mε, (m+1)ε) for some positive integer m ∈ [⌊ H/ε⌋]. By the property of CVaR operator, we have [^α, _θ(V)](s, a) = max{ [^α, mε_θ(V)](s, a), [^α, (m+1)ε_θ(V)](s, a) } Then, we assume 𝒮_0 := {s' ∈𝒮 : V(s') ≤ mε}, 𝒮_1 := {s' ∈𝒮 : mε < V(s') < x^*}. Denote s^* as V(s^*) = x^*. Noticing that x^*=VaR_^α(V), we have: ∑_s'∈𝒮_0∪𝒮_1(s'| s,a) < α, ∑_s'∈𝒮_0∪𝒮_1∪{s^*}(s'| s,a) ≥α. We give Figure <ref> where we sort the successor states s' ∈𝒮 by V(s') in ascending order, and the red virtual line denotes the α-quantile line. The black virtual line denotes the value of mε, x^*, (m+1)ε, and the sets of states 𝒮_0, 𝒮_1 are marked on the figure. By the Figure <ref>, we can write the exact form of [_θ^α, mε](s,a) and [_θ^α, x^*](s,a) as [_θ^α, mε](s,a) = mε - 1/α∑_s'∈𝒮_0(s'|s,a)(mε - V(s'))^+, [_θ^α, x^*](s,a) = x^* - 1/α∑_s'∈𝒮_0+𝒮_1(s'|s,a)(x^* - V(s'))^+, respectively. Then, we have [_θ^α, x^*](s,a) - [_θ^α, mε](s,a) ≤ x^* - mε + 1/α(∑_s'∈𝒮_0(s'| s, a)(x^* - mε) + ∑_s'∈𝒮_1(s'| s, a)(x^* - V(s'))) ≤ ε + 1/α∑_s'∈𝒮_0∪𝒮_1(s' | s, a)(x^* - mε) ≤ 2ε, where the first inequality holds by triangle inequality, and the second inequality holds by the definition of 𝒮_1, and the last one holds by the definition of α. Thus | [^α, _θ(V)](s, a) - [^α_θ(V)](s, a) | = [^α, x^*_θ(V)](s, a) - max{ [^α, mε_θ(V)](s, a), [^α, (m+1)ε_θ(V)](s, a) } ≤ | [_θ^α, x^*](s,a) - [_θ^α, mε](s,a)| ≤ 2ε, where equality holds since [^α_θ(V)](s,a) ≥ [^α, _θ(V)](s,a) by definition. §.§ Concetration Argument We show that our estimated parameter θ_k,h is a proper estimation of the true parameter θ_h for all episodes k and steps h. In fact, we can prove that θ_k,h falls in an ellipsoid centered at θ_h with high probability. In order to define the bonus term, we define a function X_k,h(·,·) that chooses the ideal x based on given state-action pair (s,a) by X_k,h(s,a) := max_x ∈ψ_(x-V_k,h+1)^+(s,a)_Λ_k,h^-1. Then, we denote ψ_k,h(s,a) as the maximum norm of ψ_(x-V_k,h+1)^+(s,a)_Λ_k,h^-1 for x ∈ with a given state-action pair (s,a) ∈𝒮×𝒜: ψ_k,h(s,a) := ψ_(X_k,h(s,a)-V_k,h+1)^+(s, a). For δ∈ (0, 1), we have that with probability at least 1 - δ/H, θ_h - θ_k,h_Λ_k,h≤β = H√(dlog( H + KH^3/δ)) + √(λ) holds for any k∈[K] and h ∈ [H]. First, we fixed an h ∈ [H]. Let A_k = ψ_k,h(s_k,h, a_k,h) and η_k := ⟨θ_h, ψ_k,h(s_k,h,a_k,h)⟩ - (x_k,h - V_k,h+1)^+(s_k,h+1). We have A_k is ℱ_k,h measurable, η_k is ℱ_k,h+1 measurable. And {η_k}_k is a martingale difference sequence and H-sub-Gaussian. We have θ_h -θ_k,h= Λ_k,h^-1( ∑_i=1^k-1ψ_i,h(s_i,h,a_i,h)(⟨θ_h, ψ_i,h(s_i,h,a_i,h)⟩ - (x_i,h-V_i,h+1)^+(s_i,h+1) ) + λθ_h ) = Λ_k,h^-1∑_i=1^k-1 A_iη_i + λΛ_k,h^-1θ_h. Then, we can write θ_h - θ_k,h_Λ_k,h≤∑_i=1^k-1 A_iη_i_Λ_k,h^-1 + λθ_h_Λ_k,h^-1≤∑_i=1^k-1 A_iη_i_Λ_k,h^-1 + √(λ), where first inequality is due to Eq. (<ref>) and triangle inequality, and the second one comes from Λ_k,h≽λ I. 
By Lemma <ref> (Theorem 2 in <cit.>), we have that with probability at least 1 - δ/H^2, θ_h - θ_k,h_Λ_k,h≤ H√(dlog( H^2 + KH^4/δ)) + √(λ) Thus, by uniform bound, we have the above inequality holds for any h∈[H] with probability at least 1 - δ/H. Combined with the concentration argument above, we can bound the deviation term of θ_h and θ_k,h with respect to the CVaR operator. For δ∈ (0, 1), any k ∈[K] and any h ∈ [H], we have that with probability at least 1 - δ/H, the following holds: |[^α, _θ_k,h(V_k,h)] (s, a) - [^α, _θ_h(V_k,h)] (s, a) | ≤β/αψ_k,h(s, a)_Λ_k,h^-1. Apply the same definition of [^α, x_θ(V)](s,a) := x - 1/α[(x-V)^+](s,a), we can write [^α, _θ(V)](s,a) = sup_x∈ [^α, x_θ(V)](s,a). We have |[^α, _θ_h(V_k,h+1)](s, a) - [^α, _θ_k,h(V_k,h+1)](s, a)| = |sup_y∈[^α, y_θ_h(V_k,h+1)](s, a) -sup_x∈[^α, x_θ_k,h(V_k,h+1)](s, a)| ≤ sup_y∈|[^α, y_θ_h(V_k,h+1)](s, a)-[^α, y_θ_k,h(V_k,h+1)](s, a) | = sup_y∈|y - 1/α⟨θ_h, ψ_(y-V_k,h+1)^+(s,a) ⟩ - y + 1/α⟨θ_k, h, ψ_(y-V_k,h+1)^+(s,a) ⟩| ≤ 1/αθ_k,h - θ_h_Λ_k,hsup_y∈ψ_(y - V_k,h+1)^)(s, a)_Λ_k,h^-1, where the first inequality holds by the property of supremum, and the second inequality holds by triangle inequality. Recall the definition of X_k,h(s,a) and ψ_k,h in Eq. (<ref>) and (<ref>). By Lemma <ref>, we have that with probability at least 1 - δ/H, |[^α, _θ_h(V_k,h+1)](s,a) - [^α, _θ_k,h(V_k,h+1)](s,a)| ≤1/αβψ_k,h(s,a)_Λ_k,h^-1. §.§ Optimism We use upper confidence bound-based value iteration as in <cit.> to calculate the optimistic value and Q-value functions V_k,h, Q_k,h, and construct the policy π^k in a greedy manner. Then, we prove the optimism of V_k,h below. For δ∈ (0, 1], s∈𝒮, and any k∈[K], h∈[H], with probability at least 1 - δ, we have V_k,h(s) ≥ V^*_h(s). We prove this argument by induction in h. For h = H+1, we have V_k,H+1(s) = V^*_H+1(s) = 0 for any k ∈ [K] and s ∈𝒮. For h ∈ [H], assume that with probability at least 1 - (H - h)δ / H, V_k,h+1(s) ≥ V_h+1^*(s) for any k ∈ [K] and s∈𝒮. Consider the case of h. For any k ∈ [K] and (s,a) ∈𝒮×𝒜, we have that with probability at least 1 - δ/H, Q_k,h(s,a) - Q^*_h(s,a) = [^α, _θ_k,h(V_k,h+1)] (s,a) + 2ε + β/αsup_x∈ψ_(x-V_k,h+1)^+(s,a)_Λ_k,h^-1 -[^α_θ_h(V^*_h+1)](s,a) = [^α, _θ_k,h(V_k,h+1)](s,a) -[^α, _θ_h(V_k,h+1)](s,a) + 2ε + β/αψ_k,h(s,a)_Λ_k,h^-1 +[^α, _θ_h(V_k,h+1)](s,a)-[^α_θ_h(V_k,h+1)](s,a)+[^α_θ_h(V_k,h+1)](s,a)-[^α_θ_h(V^*_h+1)](s,a) ≥ [^α_θ_h(V_k,h+1)](s,a)-[^α_θ_h(V^*_h+1)](s,a) where the inequality comes from Lemma <ref> and Lemma <ref> which show that [^α, _θ_h(V_k,h+1)](s,a)-[^α_θ_h(V_k,h+1)](s,a) ≥ -2ε and [^α, _θ_k,h(V_k,h+1)](s,a) -[^α, _θ_h(V_k,h+1)](s,a) ≥ -β/αψ_k,h(s,a)_Λ_k,h^-1. Since V_k,h+1(s') ≥ V_h+1^*(s') for any s'∈𝒮 and k ∈ [K] with probability at least 1 - (H - h)δ / H. Then by union bound, we have Q_k,h(s,a) ≥ Q^*_h(s,a) holds for any k∈[K] and (s,a)∈𝒮×𝒜 with probability at least 1 - (H+1-h)δ/H. Take the supremum on the left and right side for a ∈𝒜, we have V_k,h(s)≥ V_h^*(s) for any k ∈ [K] and s ∈𝒮 with high probability. This implies the case of h. By induction, we finish the proof. §.§ Regret Summation In this section, we provide the proof of the main theorem. Here we follow the definitions in <cit.>. For a fixed risk level α∈ (0, 1], value function V : 𝒮→, and a transition distribution (· : s, a) ∈Δ(𝒮), we denote the conditional probability of transitioning to s' from (s, a) conditioning on transitioning to the α-portion tail states s' as ℚ_^α, V(s' | s, a). 
ℚ_^α, V(s' | s, a) is a distorted version of the transition distribution, based on the lowest α-portion values of V(s'), i.e., CVaR^α_s' ∼(·| s, a)(V(s')) = ∑_s' ∈𝒮ℚ_^α, V(s' | s, a)V(s'). Moreover, let [ℚ_^α, Vf](s, a) := ∑_s'∈𝒮ℚ_^α, V(s' | s, a) f(s') for a real-valued function f : 𝒮→. Then, we consider the visitation probabilities of the trajectories. Let {π^k}_k=1^K be the policies produced by ICVaR-L in the K episodes. Let w_k,h(s,a) denote the probability of visiting (s, a) at step h of episode k, i.e., the probability of visiting (s, a) under the transition probabilities of the MDP _i(·|·, ·) with policy π^k_i at steps i = 1, 2, ⋯, h - 1, starting from the initial state s_k,1. Similarly, we use w^CVaR, α, V^π^k_k,h to denote the conditional probability of visiting (s, a) at step h of episode k, conditioning on the distorted transition probability ℚ^α, V^π^k_i+1__i(·|·, ·) and policy π^k_i at steps i = 1, 2, …, h - 1. Equipped with these notations, we now present our proof of the main theorem for ICVaR-RL with linear function approximation. First we perform the regret decomposition. The following holds with probability at least 1 - δ: V_k,1(s_k,1) - V^π^k_1(s_k,1) = [^α, _θ_k,1(V_k,2)](s_k,1,a_k,1) + 2ε + B_k,1(s_k,1,a_k,1) - [^α_θ_1(V_2^π^k)](s_k,1,a_k,1) = [^α, _θ_k,1(V_k,2)](s_k,1,a_k,1) - [^α, _θ_1(V_k,2)](s_k,1,a_k,1)_I_1 + [^α, _θ_1(V_k,2)](s_k,1,a_k,1) - [^α_θ_1(V_k,2)](s_k,1,a_k,1)_I_2 + [^α_θ_1(V_k,2)](s_k,1,a_k,1) - [^α_θ_1(V_2^π^k)](s_k,1,a_k,1)_I_3 + 2ε + B_k,1(s_k,1,a_k,1) ≤ 2(2ε + B_k,1(s_k,1,a_k,1)) + [ℚ^α, V_2^π^k__1(V_k,2-V_2^π^k)](s_k,1,a_k,1) where the inequality holds by applying Lemmas <ref>, <ref>, and <ref> to bound I_1, I_2, and I_3, respectively. By recursively applying the same method as in Eq. (<ref>) to V_k,h-V_h^π^k for h = 2, 3, ⋯, H, we have that with probability at least 1 - δ, V_k,1(s_k,1) - V^π^k_1(s_k,1) ≤ 2(2ε + B_k,1(s_k,1,a_k,1) ) + ∑_s_2∈𝒮ℚ^α, V_2^π^k__1(s_2|s_k,1,a_k,1) (V_k,2(s_2) - V_2^π^k(s_2)) ≤ 2∑_h=1^H ∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)(2ε+B_k,h(s,a)) ≤ 4Hε/α^H-1 + 2β/α∑_h=1^H ∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)b_k,h(s,a) where we denote b_k,h(s,a):=ψ_k,h(s,a)_Λ_k,h^-1 = α B_k,h(s,a) / β, so that b^2_k,h(s,a) ≤ H. The first inequality is exactly Eq. (<ref>), the second inequality holds by recursively applying the same method as in Eq. (<ref>) to V_k,h-V_h^π^k for h = 2, 3, ⋯, H, and the last inequality holds by w_k,h^CVaR, α, V(s,a) ≤α^-(H-1) from Lemma <ref>. Then, we have that with probability at least 1 - δ, Regret(K) = ∑_k=1^K V_1^*(s_k,1)-V_1^π^k(s_k,1) ≤ 4HKε/α^H-1 + 2β/α∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)b_k,h(s,a)_I We can bound term I by a similar approach to <cit.>. By the Cauchy inequality, we have I ≤ √(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)b_k,h^2(s,a))√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)) = √(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)b_k,h^2(s,a))√(KH) where the equality holds due to ∑_(s,a)w_k,h^CVaR, α, V^π^k(s,a) = 1 by definition. By Lemma <ref>, we have √(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V_h^π^k(s,a)b_k,h^2(s,a)) ≤ √(1/α^H-1∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h(s,a)b_k,h^2(s,a)) = √(1/α^H-1∑_k=1^K𝔼_(s_h,a_h)∼ d_s_k,1^π^k[∑_h=1^H b_k,h^2(s_h,a_h)]), where d_s_k,1^π^k denotes the distribution of the (s, a) pairs when playing the MDP with initial state s_k,1 and policy π^k. Let 𝒢_k := ℱ_k,H, where ℱ_k,H is defined in Appendix <ref>. Then π^k is 𝒢_k-1 measurable. Setting T_k := √(∑_h=1^H b_k,h^2(s_k,h,a_k,h)), we have |T_k|^2 ≤ H^3, and T_k is 𝒢_k measurable. According to Lemma <ref>, the following holds with probability at least 1 - δ. 
∑_k=1^K𝔼_(s_h,a_h)∼ d_s_k,1^π^k[∑_h=1^H b_k,h^2(s_h,a_h)] ≤ 8∑_k=1^K∑_h=1^H b_k,h^2(s_k,h,a_k,h) + 4H^3log4log_2 K + 8/δ Notice that we can apply the elliptical potential lemma (Lemma <ref>) to the first term on the right hand side. Thus we can bound term I in Eq. (<ref>) with high probability. Combine the arguments above, we have that with probability at least 1 - 2δ, Regret(K) ≤ 4HKε/α^H-1 + 2β/α√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V_h^π^k(s,a)b_k,h^2(s,a))√(KH) ≤ 4HKε/α^H-1 + 2β/√(α^H+1)√(∑_k=1^K𝔼_(s_h,a_h)∼ d_s_k,1^π^k[∑_h=1^H b_k,h^2(s_h,a_h)])√(KH) ≤ 4HKε/α^H-1 + 2β/√(α^H+1)√(8∑_k=1^K∑_h=1^H b_k,h^2(s_k,h,a_k,h) + 4H^3log4log_2 K + 8/δ)√(KH) ≤ 4dH√(K)/√(α^H+1) + 2β/√(α^H+1)√(8dHlog(K) + 4H^3log4log_2 K + 8/δ)√(KH) where the first inequality is due to Eq. (<ref>) and Eq. (<ref>), the second inequality is due to Eq. (<ref>), the third inequality holds by Eq. (<ref>), and the last inequality holds by ε = dH√(α^H-3/K) and elliptical potential lemma (Lemma <ref>). § SPACE AND COMPUTATION COMPLEXITIES OF ALGORITHM <REF> In this section, we discuss the space and computation complexities of Algorithm <ref>. We consider the setting of ICVaR-RL with linear function approximation, where the size of 𝒮 can be extremely large and even infinite. We will show that the space and computation complexities of Algorithm <ref> are only polynomial in d, H, K and |𝒜|. Noticing that ε = dH√(α^H-3/K) is given by Theorem <ref>, we have || = ⌊ H/ε⌋≤√(K/(α^H-3d^2)) + 1 is also polynomial in d, H, K. We will include the size of into the complexities of Algorithm <ref>. §.§ Space Complexity Though in episode k ∈ [K], we calculate the optimistic Q-value function Q_k,h(s,a) for every (s,a)-pair in Line <ref> of Algorithm <ref>, we only need to calculate the Q-value and value functions for the observed states {s_k,h}_h=1^H to produce the exploration policies {π^k_h}_h=1^H in episode k, and calculate the estimator θ_k+1,h for any h ∈ [H] in episode k. Thus we need to store the covariance matrix Λ_k,h, regression features ψ_(x-V_k,h+1)^+(s_k,h,a) for any x∈, a∈𝒜 and value (x_k,h-V_k,h+1)^+(s_k,h+1). The total space complexity is O(d^2H + |||𝒜|HK). §.§ Computation Complexity By the above argument, we only need to calculate the optimistic value and Q-value functions for the observed states {s_k,h}_h=1^H in episode k. We show that the total complexity is O(d^2|||𝒜|H^2K^2) by analyzing the specific steps of Algorithm <ref> in two parts. §.§.§ Calculation of the Optimistic Value and Q-Value Functions We discuss the complexities of calculating optimistic value iteration steps (Lines <ref>-<ref>) in this section. First, we need to calculate Q_k,h(s_k,h,a) for every action a∈𝒜 to produce the exploration policy π^k_h(s_k,h) at step h in episode k. In Line <ref>, calculating the approximated CVaR operator [_θ_k,h^α, V_k,h+1](s,a) = sup_x∈{x-1/α⟨θ_k,h, ψ_(x-V_k,h+1)^+(s,a)} costs O(d||) operations. Calculating ψ_(x-V_k,h+1)^+(s,a) costs O(KH) operations since the number of non-zero elements of V_k,h+1(·) is at most KH. Computing the bonus term B_k,h(s,a) needs O(d^2||) operations. Thus, calculating Q_k,h(s_k,h,a) for any h ∈ [H] needs O(d^2|||𝒜|H^2K) operations. Since we have Q_k,h(s_k,h,a), we can calculate V_k,h(s_k,h) by O(|𝒜|) operations in Line <ref>, and π^k_h(s_k,h) by O(|𝒜|) operations in Line <ref>. In all, computing the optimistic functions will cost O(d^2|||𝒜|H^2K^2) operations. 
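As an illustration of where the O(d^2)-type costs quoted in this appendix come from, here is a sketch of one standard way to realize them (this is an implementation assumption, not something spelled out in the paper): maintain Λ_k,h^-1 through a rank-one Sherman–Morrison update, so that each covariance/estimator update and each bonus-norm evaluation ‖ψ‖_Λ^-1 costs O(d^2) rather than O(d^3).

```python
import numpy as np

# Keep Lambda^{-1} up to date with a rank-one Sherman--Morrison update instead of
# re-solving a d x d linear system at every step (hypothetical toy constants).
d, lam = 4, 25.0
Lam_inv = np.eye(d) / lam                    # (lam * I)^{-1}
b = np.zeros(d)                              # prefix sum  sum_i y_i * psi_i

def rank_one_update(Lam_inv, b, psi, y):
    """Process one regression pair (psi, y) in O(d^2) time."""
    v = Lam_inv @ psi
    Lam_inv = Lam_inv - np.outer(v, v) / (1.0 + psi @ v)   # Sherman--Morrison
    b = b + y * psi
    theta_hat = Lam_inv @ b                  # ridge estimator, also O(d^2)
    return Lam_inv, b, theta_hat

rng = np.random.default_rng(0)
for _ in range(50):
    Lam_inv, b, theta_hat = rank_one_update(Lam_inv, b, rng.random(d), rng.random())

psi = rng.random(d)
bonus_norm = np.sqrt(psi @ Lam_inv @ psi)    # ||psi||_{Lambda^{-1}}: O(d^2) per candidate x
```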
§.§.§ Calculation of the Parameter Estimators At step h of episode k, we choose the specific value x_k,h in Line <ref>, which needs O(d^2||) operations. Then, Line <ref> takes O(d^2) operations to calculate the covariance matrix Λ_k+1,h. In Line <ref>, we can store the prefix sum ∑_i=1^kψ_(x_i,h-V_i,h+1)^+(s_i,h,a_i,h)(x_i,h-V_i,h+1)^+(s_i,h+1) and calculate θ_k+1,h with O(d^2) operations. Thus the total complexity for calculating the parameter estimators is O(d^2||HK). § PROOF OF THEOREM <REF>: REGRET LOWER BOUND FOR ITERATED CVAR WITH LINEAR FUNCTION APPROXIMATION In this section, we present the complete proof of Theorem <ref> by constructing a hard-to-learn instance for ICVaR-RL with linear function approximation. First, we define the hard-to-learn instance (shown in Figure <ref>), which is inspired by the lower-bound instances constructed in <cit.>. For given integers d, H, K, n ∈ [H-1] and risk level α∈ (0, 1], consider the action space as 𝒜 = {-1, 1}^d-1 and a parameter set 𝒰 = {-Δ, Δ}^d-1, where Δ is a constant to be determined. The instance contains n+3 states with n regular states s_1, ⋯, s_n and three absorbing states x_1, x_2, x_3. Moreover, we uniformly choose a μ from 𝒰. Then, we introduce the reward function of this instance. For any step h ∈ [H], the reward function r_h(s_i, a) = 0 for any regular state s_i with i ∈ [n] and action a ∈𝒜. The reward functions of absorbing states are r_h(x_1, a) = 1, r_h(x_2, a) = 0.8, and r_h(x_3, a) = 0.2 for any step h ∈ [H] and action a ∈𝒜. For the transition kernels, set θ_h = (1, μ^⊤)^⊤ for any h ∈ [H]. For any i ∈ [n - 1] and action a ∈𝒜, let ϕ(s_i+1, s_i, a) = (α, 0, ⋯, 0)^⊤ and ϕ(x_1, s_i, a) = (1 - α, 0, ⋯, 0)^⊤. Then the transition probabilities at regular state s_i are _i(s_i+1| s_i, a) = α and _i(x_1 | s_i, a) = 1- α since we will only reach s_i at step h = i. For any action a_n ∈𝒜, let ϕ(x_2, s_n, a_n) = (1 - α + (d-1)Δ, a_n^⊤)^⊤ and ϕ(x_3, s_n, a_n) = (α - (d-1)Δ, a_n^⊤)^⊤. Then, we have _h(x_2 | s_n, a_n) = 1 - α + (d-1)Δ + ⟨μ, a_n ⟩ and _h(x_3 | s_n, a_n) = α - (d-1)Δ - ⟨μ, a_n ⟩ for any h ∈ [H]. For the absorbing states x_i with i ∈{1, 2, 3}, let ϕ(x_i, x_i, a) = (1, 0, ⋯, 0)^⊤ and ϕ(s, x_i ,a) = 0 for s ≠ x_i. Thus _h(x_i | x_i, a) = 1 for i ∈{1, 2, 3} and any a ∈𝒜, h ∈ [H]. In this instance, we have V_1^*(s_1) = H - n/α( 0.2(α - 2(d-1)Δ) + 0.8(2(d-1)Δ) ) V_1^π(s_1) = H - n/α( 0.2(α - (d-1)Δ + ⟨μ, π_n(s_n)⟩ + 0.8 ((d-1)Δ - ⟨μ, π_n(s_n)⟩) ) Thus we have V_1^*(s_1) - V_1^π(s_1) = 1.2(H - n)Δ/α∑_i = 1^d - 1(1 - I(μ, π_n(s_n), i)), where I(μ, π_n(s_n), i) = 1(sgn(μ_i)=sgn(π_n(s_n)_i)). Then if Algorithm produces policy π = (π^k)_k ∈ [K] in K episodes, we have Regret(K) = 1.2(H-n)Δ/α∑_i = 1^d-1(∑_k = 1^K 1 - I(μ, π^k_n(s_n), i)) Since we uniformly choose μ from 𝒰, we have 𝔼[Regret(K)] = 1.2(H-n)Δ/α∑_i = 1^d-11/|𝒰|∑_μ∈𝒰𝔼_μ[(∑_k = 1^K 1 - I(μ, π^k_n(s_n), i))]. Denote 𝔼_μ be the conditional expectation on the fixed μ∈𝒰. For fixed i ∈ [d-1], we denote μ(i) := (μ_1, ⋯, μ_i-1, -μ_i, μ_i+1, ⋯, μ_d-1) which differs from μ at its i-th coordinate. Assume N(μ, π, i) := ∑_k=1^K (1 - I(μ, π_n^k(s_n), i)). By Pinsker's inequality (Exercise 14.4 and Eq.(14), (12) in <cit.>), we have the following lemma. For fixed i ∈ [d - 1], we have 𝔼_μ[N(μ, π, i)] - 𝔼_μ(i)[N(μ, π, i)] ≥ -K/√(2)√(KL(_μ || _μ(i))), where _μ denotes the joint distribution over all possible reward sequences of length K under the MDP parameterized by μ. Denote μ(i) := (μ_1, ⋯, μ_i-1, -μ_i, μ_i+1, ⋯, μ_d-1) which differs from μ at its i-th coordinate. 
Let w(s_n) be the probability to reach s_n in each episode. By construction, we have w(s_n) = α^n-1. Denote Ber(p) as the Bernoulli distribution with parameter P. Let Ber_μ := Ber(α - (d-1)Δ - ⟨μ, π^k_n(s_n)⟩). By definition of KL divergence, we have KL(Ber(a) || Ber(b)) ≤ 2(a-b)^2/a 𝔼_μ[KL(Ber_μ || Ber_μ(i))] ≤𝔼_μ[2⟨μ - μ(i), π^k_n(s_n) ⟩^2/⟨μ, π^k_n(s_n)⟩ + α - (d-1)Δ] ≤8Δ^2/α - 2(d-1)Δ Let Δ = c√(1/α^n-2K) where c is a small constant such that 2(d-1)Δ < α / 2. Then, we have KL(_μ || _μ(i)) = ∑_k = 1^K w(s_n)𝔼_μ[KL(Ber_μ || Ber_μ(i))] ≤16α^n-2KΔ^2. Combined with above equations, we can bound the expectation of the regret as: 𝔼[Regret(K)] = 1.2(H-n)Δ/α1/2^d-1∑_μ∈𝒰∑_i = 1^d-1𝔼_μ[N(μ, π, i)] = 1.2(H-n)Δ/α1/2^d∑_μ∈𝒰∑_i = 1^d-1𝔼_μ[N(μ, π, i)] + 𝔼_μ(i)[N(μ(i), π, i)] = 1.2(H-n)Δ/α1/2^d∑_μ∈𝒰∑_i = 1^d-1K + 𝔼_μ[N(μ, π, i)] - 𝔼_μ(i)[N(μ, π, i)] ≥ 1.2(H-n)Δ/α1/2^d∑_μ∈𝒰∑_i = 1^d-1 K - 2√(2)KΔ√(α^n-2K) = 0.6(H-n)Δ/α(d-1)(K - 2√(2)KΔ√(α^n-2K)), where the inequality holds by Lemma <ref> and Eq. (<ref>). Since Δ = c√(1/α^n-2K), we have 𝔼[Regret(K)] ≥Ω(d(H-n)√(K/α^n)). § PROOF OF THEOREM <REF>: REGRET UPPER BOUND FOR ALGORITHM <REF> In this section, we present the full proof of Theorem <ref> for ICVaR-RL with general function approximation under Assumption <ref>. The proof consists of two parts. In Appendix <ref>, we establish the concentration argument which shows _h ∈𝒫_k,h with high probability in Lemma <ref>. With the concentration argument, we can prove the optimism of Q_k,h and V_k,h in Lemma <ref>, and further bound the deviation term for general setting in Lemma <ref>. In Appendix <ref>, we present our novel elliptical potential lemma in Lemma <ref>, and prove Theorem <ref> by regret decomposition and regret summation. §.§ Concentration Argument In this section, we apply the techniques firstly proposed by <cit.> and also used in <cit.> to establish the concentration argument, which shows that _h is belong to our confidence set 𝒫_k,h with high probability. We have that for δ∈ (0, 1], with probability at least 1 - δ, _h ∈𝒫_k,h holds for any k ∈ [K] and h ∈ [H]. Firstly we fix h ∈ [H]. By definition of D_i,h(·, ·) and the delta distribution δ_k,h, we have _k,h = ' ∈𝒫min∑_i=1^k-1((x_i,h-V_i,h+1)^+(s_i,h+1) - ['(x_i,h-V_i,h+1)^+](s_i,h, a_i,h))^2, and 𝒫_k,h = {' ∈𝒫 : ∑_i=1^k-1 D_i,h(', _k,h) ≤γ^2 }. Let X_k,h = (s_k,h, a_k,h, (x_k,h - V_k,h+1)^+) and Y_k,h = (x_i,h-V_i,h+1)^+(s_i,h+1). Then, we have that X_k,h is ℱ_k,h measurable and Y_k,h is ℱ_k,h+1 measurable. Note that { Y_k,h - z__h(X_k) }_k is H-sub-gaussian conditioning on {ℱ_k,h}_k, and 𝔼[Y_k,h - z__h(X_k,h) |ℱ_k,h] = 0. Moreover, by definition of _k,h and function class 𝒵, we have z__k,h = z_'∈𝒵min∑_i=1^k-1(Y_i,h - z_'(X_i,h))^2. Let 𝒵_k,h(γ) = {z_'∈𝒵 : ∑_i=1^k-1( z_'(X_i,h) - z__k,h(X_i,h) )^2 ≤γ^2 }. By Lemma <ref>, for any α > 0, with probability at least 1 - δ/H, for all k ∈ [K], we have z__h∈𝒵_k,h(γ_k). Here γ_k = β_k(δ/H, H/K) = 8H^2log(2H · N(𝒵, ·_∞,H/K) / δ) + 4k/K(H^2 + H^2√(log(4k(k+1)/δ))), where β_k is defined by Eq. (<ref>) in Lemma <ref>, and N(𝒵, ·_∞, H/K) is the covering number of 𝒵 with norm ·_∞ and covering radius H/K. Since z__h∈𝒵_k,h(γ_k), we have _h ∈{' ∈𝒫 : ∑_i=1^k-1( z_'(X_i) - z__k,h(X_i) )^2 ≤γ_k^2}. 
Moreover, we have z_ - z_'_∞ = sup_(s, a, V) ∈𝒮×𝒜×ℬ| ∑_s'∈𝒮(s'| s, a)V(s') - ∑_s'∈𝒮'(s'| s, a)V(s')| ≤ H sup_(s, a, V) ∈𝒮×𝒜×ℬ| ∑_s'∈𝒮(s'| s, a) - ∑_s'∈𝒮'(s'| s, a)| ≤ H sup_(s, a, V) ∈𝒮×𝒜×ℬ∑_s'∈𝒮|(s'| s, a) - '(s'| s, a)| = H - '_∞, 1, where the first inequality holds by V(s') ∈ [0, H] for any s' ∈𝒮, the second inequality holds by the triangle inequality, and the third equality is due to the definition of nor ·_∞, 1. Thus we have N(𝒵, ·_∞, H/K) ≤ N(𝒫, ·_∞, 1, 1/K). Since γ =4 H^2(2log(2H· N(𝒫, ·_∞, 1, 1/K)/δ) + 1 + √(log(5K^2/δ))) ≥γ_k, we have _h ∈𝒫_k,h for any k∈[K] with probability at least 1 - δ/H. Finally, by union bound, we have _h ∈𝒫_k,h holds for any (k,h) ∈ [K] × [H] with probability at least 1 - δ. With the concentration property in Lemma <ref>, we can easily show the construction of V_k,h and Q_k,h is optimistic in Algorithm <ref>. If the event in Lemma <ref> happens, we have V_k,h(s) ≥ V^*_h(s), ∀ s ∈𝒮. Since the event in Lemma <ref> happens, we have _h ∈𝒫_k,h holds for any k and h. Thus by the definition of Q_k,h in Algorithm <ref>, Q_k,h(s,a) = r_h(s,a) + sup_∈𝒫_k,h [^α_V_k,h+1](s,a) ≥ r_h(s,a) + [^α__hV_k,h+1](s,a). By similar argument of induction in Lemma <ref>, we can easily get the result. The following lemma upper bounds the deviation term by g_k,h(s,a)/α. If the event in Lemma <ref> happens, 0 ≤sup_' ∈𝒫_k,h [^α_'V_k,h+1](s,a) - [^α__hV_k,h+1](s,a) ≤1/αg_k,h(s,a) The left side holds trivially by the result of Lemma <ref>. We only need to prove the right side. sup_' ∈𝒫_k,h [^α_'V_k,h+1](s,a) - [^α__hV_k,h+1](s,a) = sup_x∈[0, H]{ x - 1/αinf_' ∈𝒫_k,h['(x - V_k,h+1)^+](s,a) } - sup_x∈[0, H]{ x - 1/α[_h(x - V_k,h+1)^+](s,a) } ≤ 1/αsup_x∈[0, H]{ -inf_' ∈𝒫_k,h['(x - V_k,h+1)^+](s,a) + [_h(x - V_k,h+1)^+](s,a) } ≤ 1/αsup_x∈[0, H]{sup_∈𝒫_k,h[(x - V_k,h+1)^+](s,a)-inf_∈𝒫_k,h[(x - V_k,h+1)^+](s,a) } = 1/αsup_' ∈𝒫_k,h['(x_k,h(s,a) - V_k,h+1)^+](s,a)-inf_' ∈𝒫_k,h['(x_k,h(s,a) - V_k,h+1)^+](s,a) = 1/αg_k,h(s,a), where the first inequality holds by the property of supremum, and the second inequality holds by holds by _h ∈𝒫_k,h under the event happens in Lemma <ref>, and the rest equalities are due to the definition of x_k,h(s,a) in Eq. (<ref>) and g_k,h(s,a) in Eq. (<ref>). §.§ Regret Summation In this section, we firstly propose a refined elliptical potential lemma for ICVaR-RL with general function approximation. Then, we apply the similar methods in the proof of linear setting to get the regret upper bound. Noticing that <cit.> presents a similar elliptical potential lemma (Lemma 5 in <cit.>) used in <cit.> which shows that ∑_k=1^K ∑_h=1^H g_k,h(s_k,h,a_k,h) = O(√(K)) with respect to the term of K. Inspired by this version of elliptical potential lemma, our Lemma <ref> is a refined version which gives a sharper result. We provide the elliptical potential lemma for general function approximation. We have ∑_k=1^K ∑_h=1^H g^2_k,h(s_k,h,a_k,h) ≤ H + _E(𝒵, 1/√(K))H^3 + 4γ_E(𝒵, 1/√(K))H(log(K)+1) Our proof is inspired by the proof framework of Lemma 5 in <cit.>. First we recall the definition of X_k,h∈𝒳 in the proof of Lemma <ref>, i.e, X_k,h := (s_k,h, a_k,h, (x_k,h(s_k,h, a_k,h) - V_k,h+1)^+). For simplicity, let G_k,h:=g_k,h(s_k,h,a_k,h) = sup_' ∈𝒫_k,hz_'(X_k,h) - inf_' ∈𝒫_k,hz_'(X_k,h). Then for fixed h ∈ [H], we know g_k,h(s_k,h, a_k,h) ≤ H since 0 ≤ z_(X_k,h) ≤ H for any probability kernel ∈𝒫. Then, we can reorder the sequence (G_1,h, ⋯, G_K,h) → (G_j_1, h, ⋯, G_j_K, h) such that G_j_1,h≥ G_j_2,h≥⋯ G_j_K,h. 
Then, we have ∑_k=1^K G_k,h^2 = ∑_k=1^K G_j_k,h^2 = ∑_k=1^m G_j_k,h^2·1{G_j_k,h≥ K^-1/2} + ∑_k=m+1^K G_j_k,h^2·1{G_j_k,h < K^-1/2} for some m ∈ [K]. Since the second term is less than 1 trivially, we only consider the first term. Then, we fix t ∈ [m] and let s = G_j_t, h and we have ∑_i=1^K 1(G_i,h≥ s) ≥ t By Lemma <ref>, we have t ≤∑_i=1^K 1(G_i,h≥ s) ≤_E(𝒵, s)(4γ/s^2 + 1 ). For simplicity, we denote d_E := _E(𝒵, K^-1/2). Since t ∈ [m], we have G_j_t,h = s ≥ K^-1/2, which implies _E(𝒵, s) ≤ d_E. By Eq. (<ref>), we have s = G_j_t,h≤√((4γd_E)/(t-d_E)). Notice that this property holds for every fixed t ∈ [m]. Combined with G_k,h≤ H, we have ∑_k=1^K G_k,h^2 ≤ 1 + ∑_k=1^m G_i_k,h^2·1{G_j_k,h≥ K^-1/2} ≤ 1 + d_EH^2 + ∑_k=d_E+1^K 4γd_E/k - d_E ≤ 1 + d_EH^2 + 4γd_E(log(K)+1), where the first inequality is due to Eq. (<ref>), the second inequality holds by G_j_t,h≤√((4γd_E)/(t-d_E)) for any t ∈ [m] and G_k,h≤ H, and the last inequality is due to the property of harmonic series. Sum over Eq. (<ref>) for h∈[H], we get the result. Combined by this refined elliptical potential lemma, we can prove the main theorem of ICVaR-RL with general function approximation. This proof is similar to the proof of Theorem <ref> with tiny adaption. Firstly, by standard regret decomposition method, we have that with probability at least 1 - δ, the event in Lemma <ref> happens and V_k,1(s_k,1) - V^π^k_1(s_k,1) = sup_' ∈𝒫_k,h[_'^α(V_k,2)](s_k,1,a_k,1) - [__1^α(V^π^k_2)](s_k,1,a_k,1) = sup_' ∈𝒫_k,h[_'^α(V_k,2)](s_k,1,a_k,1) - [__1^α(V_k,2)](s_k,1,a_k,1) + [__1^α(V_k,2)](s_k,1,a_k,1) - [__1^α(V^π^k_2)](s_k,1,a_k,1) ≤ 1/αg_k,1(s_k,1,a_k,1) + [ℚ__1^α, V^π^k_2(s_k,1,a_k,1)(V_k,2 - V^π^k_2)](s_k,1,a_k,1), where the inequality holds by Lemma <ref> and Lemma <ref>. Here ℚ_^α, V is defined above in Eq.(<ref>). Next we use the techniques of the proof in Section <ref> to bound the regret. Specifically, we have V_k,1(s_k,1) - V^π^k_1(s_k,1) ≤1/α∑_h=1^H ∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)g_k,h(s,a). This implies that the regret of the algorithm satisfies Regret(K) = ∑_k=1^K V^*_1(s_k,1) - V^π^k_1(s_k,1) ≤∑_k=1^K V_k,1(s_k,1) - V^π^k_1(s_k,1) ≤1/α∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)g_k,h(s,a) with probability at least 1 - 2δ. Here w_k,h^CVaR, α, V^π^k is defined in Appendix <ref>. By Cauchy inequality, we have Regret(K) ≤ 1/α√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)g_k,h^2(s,a))√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)) = 1/α√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V^π^k(s,a)g_k,h^2(s,a))√(KH), where the equality holds due to ∑_(s,a)w_k,h^CVaR, α, V^π^k(s,a) = 1 by definition. By Lemma <ref>, we have Regret(K) ≤ √(KH)/α√(∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h^CVaR, α, V_h^π^k(s,a)g_k,h^2(s,a)) ≤ √(KH)/α√(1/α^H-1∑_k=1^K∑_h=1^H∑_(s,a)∈𝒮×𝒜w_k,h(s,a)g_k,h^2(s,a)) = √(KH)/α√(1/α^H-1∑_k=1^K𝔼_(s_h,a_h)∼ d_s_k,1^π^k[∑_h=1^H g_k,h^2(s_h,a_h)]), where d_s_k,1^π^k denotes the distribution of (s, a) pair playing the MDP with initial state s_k,1 and policy π^k. Since √(∑_h=1^H g_k,h^2(s_k,h,a_k,h))≤√(H^3), by Lemma <ref>, we have ∑_k=1^K𝔼_(s_h,a_h)∼ d_s_k,1^π^k[∑_h=1^H g_k,h^2(s_h,a_h)] ≤ 8∑_k=1^K∑_h=1^H g_k,h^2(s_k,h,a_k,h) + 4H^3log4log_2 K + 8/δ. Apply Lemma <ref> to ∑_k=1^K∑_h=1^H g_k,h^2(s_k,h,a_k,h), we can bound the regret with probability at least 1 - 2δ Regert(K) ≤√(KH/α^H+1)√(8∑_k=1^K∑_h=1^H g_k,h^2(s_k,h,a_k,h) + 4H^3log4log_2K + 8/δ) ≤√(4KH/α^H+1)√(2H + 2d_EH^3 + 8γd_EH(log(K)+1) + H^3log4log_2K + 8/δ), where d_E = d_E(𝒵, 1/√(K)), the first inequality holds by Eq. 
(<ref>), (<ref>), and the second inequality holds by Lemma <ref>. § AUXILIARY LEMMAS In this section, we present several auxiliary lemmas used in this paper. Let {ℱ_t}_t=0^∞ be a filtration. Let {η_t}_t=1^∞ be a real-valued stochastic process such that η_t is ℱ_t-measurable and η_t is conditionally R-sub-Gaussian for some R ≥ 0. Let {X_t}_t=1^∞ be a ^d-valued stochastic process such that X_t is ℱ_t-1-measurable. Assume that V is a d × d positive definite matrix. For any t ≥ 0, define V_t=V+∑_s=1^t X_sX_s^⊤, S_t = ∑_s=1^t η_sX_s. Then for any δ > 0, with probability at least 1 - δ, for all t ≥ 0, S_t^2_V_t^-1≤ 2R^2log( (V_t)^1/2(V)^1/2/δ). For λ > 0, sequence {X_t}_t=1^∞⊂^d, and V_t = λ I + ∑_s=1^t X_s X_s^⊤, assume X_t_2 ≤ L for all t. If λ≥max(1, L^2), we have that ∑_t = 1^n X_t^2_V_t-1^-1≤ 2 log(V_n)/λ^d≤ 2dlogdλ + TL^2/dλ. Let {ℱ_i}_i≥ 0 be a filtration. Let {X_i}_i=1^n be a sequence of random variables such that |X_i|≤ 1 almost surely, that X_i is ℱ_i measurable. For every δ∈ (0, 1), we have [ ∑_i=1^n 𝔼[X_i^2 | ℱ_i-1] ≤∑_i=1^n 8X_i^2 + 4log4/δ] ≤ ([log_2 n] + 2)δ. For any (s, a) ∈𝒮×𝒜, distribution p(·| s, a) ∈Δ_𝒮, and functions V, V:𝒮→ [0, H] such that V(s')≥ V(s') for any s' ∈𝒮. CVaR^α_s' ∼ p(·| s, a)(V(s')) - CVaR^α_s' ∼ p(·| s, a)(V(s')) ≤β^α, V(·| s, a)^⊤ (V - V) For any functions V_1, ⋯, V_H ∈𝒮→, k > 0, h ∈ [H] and (s, a) ∈𝒮×𝒜 such that w_k,h(s, a) > 0. w_k,h^CVaR, α, V(s,a)/w_k,h(s, a)≤1/α^h-1, where w_k,h^CVaR, α, V(s,a) denotes the conditional probability of visiting (s, a) at step h of episode k, conditioning on transitioning to the worst α-portion successor states s' (i.e. with the lowest α-portion values V_h'+1(s') at each step h' = 1, ⋯, h - 1. Let (X_p,Y_p)_p=1,2,⋯ be a sequence of random elements, X_p ∈𝒳 for some measurable set 𝒳 and Y_p ∈. Let ℱ be a set of real-valued measurable function with domain 𝒳. Let 𝔽 = (𝔽_p)_p=0,1,⋯ be a filtration such that for all p ≥ 1, (X_1, Y_1, ⋯, X_p-1,Y_p-1,X_p) is 𝔽_p-1 measurable and such that there exists some function f_* ∈ℱ such that 𝔼[Y_p|𝔽_p-1] = f_*(X_p) holds for all p ≥ 1. Let f_t = min_f∈ℱ∑_p=1^t (f(X_p)-Y_p)^2. Let N_α be the ·_∞-covering number of ℱ at scale α. For β > 0, define ℱ_t(β) = {f∈ℱ : ∑_p=1^t(f(X_p)-f_t(X_p))^2 ≤β}. If the functions in ℱ are bounded by the positive constant C > 0. Assume that for each s ≥ 1, (Y_p - f_*(X_p))_p is conditionally σ-sub-gaussian given 𝔽_p-1. Then for any α > 0, with probability 1 - δ, for all t ≥ 1, f_* ∈ℱ_t(β_t(δ, α)), where β_t(δ, α) = 8σ^2log(2N_α/δ) + 4tα(C+√(σ^2 log(4t(t+1)/δ))). Consider the function class 𝒵, 𝒫, and 𝒫_k,h defined in Section <ref>. For fixed h ∈ [H], let ω_k(X_k) := sup_∈𝒫_k,h z_(X_k) - inf_∈𝒫_k,h z_(X_k), then ∑_k=1^K 1(ω_k(A_k)≥ϵ) ≤(4γ/ϵ^2+1)_E(𝒵, ·_∞, ϵ), for all k ∈ [K] and ϵ > 0.
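The elliptical potential lemma above admits a quick numerical sanity check; the sketch below (toy dimensions, hypothetical setup) draws arbitrary feature vectors with ‖X_t‖_2 ≤ L, uses λ = max(1, L^2), and confirms both inequalities of the lemma.

```python
import numpy as np

# Numerical sanity check of the elliptical potential bound: with ||X_t|| <= L and
# lam >= max(1, L^2),
#   sum_t ||X_t||^2_{V_{t-1}^{-1}}  <=  2 log(det V_n / lam^d)  <=  2 d log((d*lam + n*L^2)/(d*lam)).
d, n, L = 5, 2000, 2.0
lam = max(1.0, L ** 2)
rng = np.random.default_rng(0)

V = lam * np.eye(d)
lhs = 0.0
for _ in range(n):
    x = rng.normal(size=d)
    x *= L / np.linalg.norm(x)              # enforce ||x||_2 <= L (here exactly L)
    lhs += x @ np.linalg.solve(V, x)        # ||x||^2 in the norm of the *previous* V^{-1}
    V += np.outer(x, x)

middle = 2 * (np.linalg.slogdet(V)[1] - d * np.log(lam))
rhs = 2 * d * np.log((d * lam + n * L ** 2) / (d * lam))
assert lhs <= middle + 1e-8
assert middle <= rhs + 1e-8
```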
http://arxiv.org/abs/2307.01763v1
20230704150528
On evolution kernels of twist-two operators
[ "Yao Ji", "Alexander Manashov", "Sven-Olaf Moch" ]
hep-ph
[ "hep-ph", "hep-th" ]
[email protected] Physik Department T31, Technische Universität München, D-85748 Garching, Germany [email protected] Max-Planck-Institut für Physik, Werner-Heisenberg-Institut, D-80805 München, Germany [email protected] II. Institut für Theoretische Physik, Universität Hamburg D-22761 Hamburg, Germany The evolution kernels that govern the scale dependence of the generalized parton distributions are invariant under transformations of the SL(2,R) collinear subgroup of the conformal group. Beyond one loop the symmetry generators, due to quantum effects, differ from the canonical ones. We construct the transformation which brings the full symmetry generators back to their canonical form and show that the eigenvalues (anomalous dimensions) of the new, canonically invariant, evolution kernel coincide with the so-called parity respecting anomalous dimensions. We develop an efficient method that allows one to restore an invariant kernel from the corresponding anomalous dimensions. As an example, the explicit expressions for NNLO invariant kernels for the twist two flavor-nonsinglet operators in QCD and for the planar part of the universal anomalous dimension in N=4 SYM are presented. On evolution kernels of twist-two operators Sven-Olaf Moch August 1, 2023 ============================================= § INTRODUCTION The study of deeply-virtual Compton scattering (DVCS) gives one access to the generalized parton distributions <cit.> (GPDs) that encode the information on the transverse position of quarks and gluons in the proton in dependence on their longitudinal momentum. In order to extract the GPDs from experimental data one has to know, among other things, their scale dependence. The latter is governed by the renormalization group equations (RGEs) or, equivalently, evolution equations for the corresponding twist two operators. Essentially the same equations govern the scale dependence of the ordinary parton distribution functions (PDFs) in the Deep Inelastic Scattering (DIS) process. In DIS one is interested in the scale dependence of forward matrix elements of the local twist-2 operators and therefore can neglect the operator mixing problem between local operators from the operator product expansion (OPE). In the nonsinglet sector, there is only one operator for a given spin/dimension. The anomalous dimensions of such operators are known currently with the three-loop accuracy <cit.> and first results at four loops are becoming available <cit.>. In contrast, the DVCS process corresponds to non-zero momentum transfer from the initial to the final state and, as a consequence, the total derivatives of the local twist-two operators have to be taken into consideration. All these operators mix under renormalization and the RGE has a matrix form. The DIS anomalous dimensions appear as the diagonal entries of the anomalous dimensions matrix which, in general, has a triangular form for the latter. It was shown by Dieter Müller <cit.> that the off-diagonal part of the anomalous dimension matrix is completely determined by a special object, the so-called conformal anomaly. Moreover, in order to determine the off-diagonal part of the anomalous dimension matrix with ℓ-loop accuracy it is enough to calculate the conformal anomaly at one loop less. This technique was used to reconstruct all relevant evolution kernels/anomalous dimension matrices in QCD at two loops <cit.>. A similar approach, but based on the analysis of QCD at the critical point in non-integer dimensions, was developed in refs. <cit.>. 
It was shown that the evolution kernels in d=4 in the MS-like renormalization scheme inherit the symmetries of the critical theory in d=4-2ϵ dimensions. As expected, the symmetry generators deviate from their canonical form. Corrections to the generators have a rather simple form if they are written in terms of the evolution kernel and the conformal anomaly. It was shown in ref. <cit.> that by changing a renormalization scheme one can get rid of the conformal anomaly term in the generators bringing them into the so-called “minimal" form. Beyond computing the evolution kernels, the conformal approach has also been employed to calculate the NNLO coefficient (hard) functions of vector and axial-vector contributions in DVCS <cit.>, the latter in agreement with a direct Feynman diagram calculation <cit.>. Moreover, the conformal technique is also applicable to computing kinematic higher-power corrections in two-photon processes as was recently shown in refs. <cit.>. In this paper we construct a similarity transformation that brings the full quantum generators back to the canonical form. Correspondingly, the transformed evolution kernel is invariant under the canonical SL(2,R) transformation. Moreover, we will show that the eigenvalues of this kernel are given by the so-called parity respecting anomalous dimension, f(N) <cit.> which is related to the PDF anomalous dimension spectrum γ(N) as γ(N)=f(N+β̅(a) +1/2γ(N)), where β̅(a)=β(a)/a with β(a) being the QCD beta function. The strong coupling α_s is normalized as a=α_s/(4π). We develop an effective approach to restore the canonically invariant kernel from its eigenvalues γ(N). As an example, we present explicit expressions for three-loop invariant kernels in QCD and N=4 supersymmetric Yang-Mills (SYM) theory. The answers are given by linear combinations of harmonic polylogarithms <cit.>, up to weight four in QCD and up to weight three in N=4 SYM. We also compare our exact result with the approximate expression for the three-loop kernels in QCD given in ref. <cit.>. The paper is organized as follows: in section <ref> we describe the general structure of the evolution kernels of twist-two operators. In section <ref> we explain how to effectively recover the evolution kernel from the known anomalous dimensions and present our results for the invariant kernels in QCD and N=4 SYM. Sect. <ref> contains the concluding remarks. Some technical details are given in the Appendices. § KERNELS & SYMMETRIES We are interested in the scale dependence of the twist-two light-ray flavor nonsinglet operator <cit.> 𝒪(z_1,z_2) = [q̅(z_1n)γ_+ [z_1n,z_2n] q(z_2n)]_MS, where n^μ is an auxiliary light-like vector, n^2 = 0, z_1,2 are real numbers, γ_+ = n^μγ_μ and [z_1n,z_2n] stands for the Wilson line ensuring gauge invariance, and the subscript MS denotes the renormalization scheme. This operator can be viewed as the generating function for local operators, 𝒪^μ_1…μ_N that are symmetric and traceless in all Lorentz indices μ_1…μ_N. The renormalized light-ray operator (<ref>) satisfies the RGE (μ∂_μ +β(a)∂_a+ ℍ(a)) 𝒪(z_1,z_2)=0, where β(a) is d-dimensional beta function β(a)= -2a(ϵ+β_0 a +β_1 a^2+O(a^3)), β_0=11/3 N_c-2/3n_f, etc., and ℍ(a)=a ℍ_1 + a^2 ℍ_2+… is an integral operators in z_1,z_2. 
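The relation quoted in the introduction, γ(N)=f(N+β̅(a)+1/2γ(N)), is implicit but numerically benign, since f varies only logarithmically with N. The following minimal Python sketch illustrates how γ(N) can be recovered from a given reciprocity-respecting function f by fixed-point iteration; the toy f and the numerical inputs below are purely illustrative placeholders (not an actual QCD anomalous dimension), and the function names are ours.

```python
import numpy as np

def gamma_from_f(N, f, beta_bar, tol=1e-12, max_iter=100):
    """Solve gamma(N) = f(N + beta_bar + gamma(N)/2) by fixed-point iteration.
    Convergence is fast because f varies only logarithmically with N."""
    gamma = f(N)                                # leading-order seed
    for _ in range(max_iter):
        new = f(N + beta_bar + 0.5 * gamma)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

# toy inputs: coupling a = alpha_s/(4 pi) and a made-up f(N) with ln N growth
a = 0.02
beta0 = 11.0 - 2.0 * 5.0 / 3.0                  # 11/3 N_c - 2/3 n_f for N_c=3, n_f=5
f = lambda x: a * (8.0 * np.log(x + 1.0) + 3.0)
beta_bar = beta0 * a                            # one-loop truncation of beta-bar(a)

for N in (2.0, 4.0, 8.0):
    print(N, gamma_from_f(N, f, beta_bar))
```

Read in reverse, the same loop extracts f from a known spectrum γ(N); expanding the shifted argument order by order in a reproduces the perturbative matching between the two sets of expansion coefficients.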
It follows from the invariance of the classical QCD Lagrangian under conformal transformations that the one-loop kernel ℍ_1 commutes with the canonical generators of the collinear conformal subgroup, S_0, S_±, S_- =-∂_z_1-∂_z_2 , S_0 =z_1∂_z_1 + z_2∂_z_2 + 2 , S_+ =z_1^2∂_z_1 + z_2^2∂_z_2 +2z_1 + 2z_2 . This symmetry is preserved beyond one loop albeit two of the generators, S_0, S_+ receive quantum corrections, S_α↦S_α(a)=S_α+Δ S_α(a). The explicit form of these corrections can be found in ref. <cit.>. It is quite useful to bring the generators to the following form using the similarity transformation <cit.>, ℍ(a)=e^-X(a)H(a) e^X(a) , S_α(a) =e^-X(a)S_α(a) e^X(a) , where X(a)=a X_1 +a^2 X_2 +… is an integral operator known up to terms of O(a^3) <cit.>. This transformation can be thought of as a change in a renormalization scheme. The shift operator S_- is not modified and hence identical to S_- in Eq. (<ref>), and the quantum corrections to S_0 and S_+ come only through the evolution kernel S_0(a) =S_0+β̅(a) +1/2H(a) , S_+(a) =S_+ + (z_1 + z_2)(β̅(a) +1/2H(a)) , where β̅(a)=β_0 a+β_1 a^2+⋯ is the beta function in four dimensions, cf. Eq. (<ref>). The form of the generator S_0(a) is completely fixed by the scale invariance of the theory, while Eq. (<ref>) is the “minimal" ansatz consistent with the commutation relation [S_+,S_-]=2S_0. Since the operator H(a) commutes with the generators, [H(a),S_α(a)]=0 its form is completely determined by its spectrum (anomalous dimensions). However, since the generators do not have the simple form as in Eq. (<ref>), it is yet necessary to find a way to recover the operator from its spectrum. To this end we construct a transformation which brings the generators S_α(a) to the canonical form S_α, Eq. (<ref>). Let us define an operator T(H): T(H)=∑_n=0^∞1/n!L^n (β̅(a)+1/2H(a) )^n , where L= ln z_12, z_12≡ z_1-z_2. Recall that z_1,z_2 are real variables, so for z_12<0 it is necessary to choose a specific branch of the logarithm function. Although this choice is irrelevant for further analysis we chose the +i0 recipe for concreteness, i.e., L=ln (z_12+i0). It can be shown that the operator T(H) intertwines the symmetry generators S_α(a) and the canonical generators, S_α. Namely, T(H) S_α(a) = S_α T(H), see Appendix <ref> for details. Let us also define a new kernel H as T(H) H(a) = H(a) T(H). It follows from Eqs. (<ref>), (<ref>) that the operator H commutes with the canonical generators in Eq. (<ref>) [S_α,H(a)]=0. The problem of restoring a canonically invariant operator H(a) from its spectrum is much easier than that for the operator H(a) and will be discussed in the next section. It can be shown that the inverse of T(H) takes the form T^-1(H) = ∑_n=0^∞(-1)^n/n!L^n(β̅(a)+1/2H(a) )^n , see Appendix <ref>. Further, it follows from Eq. (<ref>) that H(a) =T^-1(H) H(a) T(H) =H(a) +∑_n=1^∞1/n!T_n(a)(β̅(a)+1/2H(a) )^n . The operators T^(n) are defined by recursion T_n(a)=[T_n-1(a),L] with the boundary condition T_0(a) =H(a). The n-th term in the sum in Eq. (<ref>) is of order 𝒪(a^n+1) so that one can easily work out an approximation for H(a) with arbitrary precision, e.g., H(a) =H(a) +T_1(a) (1 +1/2T_1(a)) ( β̅(a)+1/2H(a) ) +1/2T_2(a)( β̅(a)+1/2H(a) )^2 + 𝒪(a^4) . It can be checked that this expression coincides with that obtained in ref. <cit.> [The notations adopted here and in ref. <cit.> differ slightly. To facilitate a comparison we note that the operators T_n defined here satisfy the equation [S_+, T_n] =n [T_n-1,z_1+z_2].]. 
The evolution kernel H(a) can be realized as an integral operator. It acts on a function of two real variables as follows H(a) f(z_1,z_2) = A f(z_1,z_2)+∫_+ h(τ) f(z_12^α,z_21^β), where A is a constant, z_12^α≡ z_1α̅+z_2 α, α̅≡ 1-α, and ∫_+ ≡∫_0^1dα∫_0^α̅ dβ. The variable τ=αβ/(α̅β̅) is called the conformal ratio. The weight function h(τ) in Eq. (<ref>) only depends on this particular combination of the variables α, β as a consequence of the invariance properties of H, Eq. (<ref>). It is easy to find that the operators T_n take the form T_n(a)f(z_1,z_2) = ∫_+ ln^n(1-α-β) h(τ) f(z_12^α,z_21^β) , which again agrees with the results of ref. <cit.>. Note that this expression does not depend on the choice of the branch of the logarithm defining the function L=ln z_12 in Eq. (<ref>), see Appendix <ref> for more discussion. § ANOMALOUS DIMENSIONS VS KERNELS First of all let us establish a connection between the eigenvalues of the operators H and H. Since both of them are integral operators of the functional form in Eqs. (<ref>), (<ref>), both operators are diagonalized by functions of the form ψ_N(z_1,z_2)=(z_1-z_2)^{N-1}, where N is an arbitrary complex number. One may worry that the continuation of the function ψ_N for negative z_12 is not unique and requires special care. But it does not matter for our analysis. Indeed, z_12^α-z_21^β= (1-α-β)z_12 with α+β<1, therefore the operators do not mix the regions z_12≷0. For definiteness let us suppose that ψ_N(z_1,z_2)=θ(z_12) z_12^{N-1}. Let γ(N), γ(N) be the eigenvalues (anomalous dimensions) of the operators H, H corresponding to the function ψ_N, respectively, H(a)ψ_N =γ(N)ψ_N, H(a)ψ_N =γ(N)ψ_N. The anomalous dimensions γ(N), γ(N) are analytic functions of N in the right complex half-plane, Re(N) >0. For integer even (odd) N, γ(N) gives the anomalous dimensions of the local (axial-)vector operators [As usual one has to consider the operators of definite parity, 𝒪_±(z_1,z_2)=𝒪(z_1,z_2) ∓𝒪(z_2,z_1); then the functions γ_±(N) give the anomalous dimensions of local operators, for even and odd N respectively.]. Now let us note that the operator T(H) acts on ψ_N as follows T(H)ψ_N(z_1,z_2) =∑_n=0^∞L^n/n! (β̅(a)+1/2 γ(N))^nψ_N(z_1,z_2) = z_12^β̅(a)+1/2 γ(N)ψ_N(z_1,z_2) =ψ_N+β̅+1/2γ(N)(z_1,z_2). Thus, it follows from Eq. (<ref>) that the anomalous dimensions γ(N) and γ(N) satisfy the relation (cf. also Eq. (<ref>)) γ(N)=γ(N+β̅(a)+1/2γ(N)). This relation appeared first in refs. <cit.> as a generalization of the Gribov-Lipatov reciprocity relation <cit.>. It was shown that the asymptotic expansion of the function γ(N) for large N is invariant under the reflection N→ -N-1, see e.g., refs. <cit.>. This property strongly restricts the harmonic sums which can appear in the perturbative expansion of the anomalous dimension γ(N) <cit.>. Explicit expressions for γ(N) are known at four loops in QCD <cit.> and at seven loops in N=4 SYM, see refs. <cit.>. §.§ Kernels from anomalous dimensions For large N the anomalous dimension γ(N) grows as ln N. This term enters with a coefficient 2 Γ_cusp(a), where Γ_cusp(a) is the so-called cusp anomalous dimension <cit.>, known to the four-loop order in QCD <cit.> and in N=4 SYM <cit.>. Thus, we write γ(N) in the following form γ(N)=2Γ_cusp(a) S_1(N) + A(a) + Δγ(N) , where S_1(N)=ψ(N+1)-ψ(1) is the harmonic sum responsible for the ln N behavior at large N, and A(a) is a constant term. The remaining term, Δγ(N), vanishes at least as O(1/N(N+1)) at large N. The constant A(a) is exactly the same as the one that appears in Eq. (<ref>). The first term in Eq.
(<ref>) comes from a special SL(2,ℝ) invariant kernel ℋf =∫_0^1 dα/α{2 f(z_1,z_2)-α̅(f(z_12^α,z_2)+ f(z_1,z_21^α))}, which in momentum space gives rise to the so-called plus-distribution. The eigenvalues of this kernel are 2 S_1(N) (ℋ z_12^N-1=2 S_1(N)z_12^N-1). It corresponds to a singular contribution of the form -δ_+(τ) to the invariant kernel h(τ), see ref. <cit.> for detail. Thus the evolution kernel can be generally written as H =Γ_cusp (a)ℋ+ A(a) +ΔH . Here ΔH is an integral operator, ΔHf(z_1,z_2) =∫_+ h(τ) f(z_12^α,z_21^β) , where the weight function h(τ) is a regular function of τ∈ (0,1). The eigenvalues of ΔH are equal to Δγ(N) and are given by the following integral Δγ(N) =∫_+ h(τ) (1-α-β)^N-1 . The inverse transformation takes the form <cit.> h(τ) =∫_C dN/2π i (2N+1) Δγ(N)P_N(1+τ/1-τ), where P_N are the Legendre polynomials. The integration path C goes along the line parallel to the imaginary axis, Re (N)>0, such that all poles of Δγ(N) lie to the left of this line. Some details of the derivation can be found in Appendix <ref>. One can hardly hope to evaluate the integral (<ref>) in a closed form for an arbitrary function Δγ(N). However, as was mentioned before, the anomalous dimensions Δγ(N) in quantum field theory are rather special functions. Most of the terms in the perturbative expansion of Δγ(N) have the following form η^k(N) Ω_m⃗(N), η^k(N) Ω_1^p(N) where η(N)=1/(N(N+1)), and the functions Ω_m⃗(N)=Ω_m_1,…,m_p(N) are the parity respecting harmonic sums <cit.>, (Ω_m⃗(N) ∼Ω_m⃗(-N-1) for N→∞). We will assume that the sums Ω_m⃗(N) are “subtracted", i.e. Ω_m⃗(N)→ 0 at N→∞. The second structure occurs only for k>0, since Ω_1(N)=S_1(N) grows as ln N for large N. Since all SL(2,ℝ) invariant operators share the same eigenfunctions, the product of two invariant operators H_1 and H_2, H_1 H_2 ( =H_2H_1) with eigenvalues H_1(N) and H_2(N) respectively, has eigenvalues H_1(N)H_2(N). One can use this property to reconstruct an operator with the eigenvalue (<ref>). First, we remark that the operator with the eigenvalues η(N), (we denote it as ℋ_+), has (as follows from Eq. (<ref>)) a very simple weight function, h_+(τ)=1. This can also be derived from Eq. (<ref>). Since P_N(x)=P_-N-1(x) the integral in Eq. (<ref>) vanishes for the integration path Re (N)=-1/2 due to antisymmetry of the integrand. Therefore, the integral (<ref>) can be evaluated by the residue theorem [This trick allows one to calculate the integral (<ref>) for any function Δγ(N) with exact symmetry under N→ -1-N reflection.] h_+(τ)=2N+1/N+1P_N(1+τ/1-τ)|_N=0=1. 2mm Let us consider the product H_2 = H_+ H_1 (=H_1 H_+), where H_1 is an integral operator with the weight function h_1(τ). Then the weight function h_2(τ) of the operator H_2 is given by the following integral h_2(τ)= ∫_0^τds/s̅^2ln(τ/s) h_1(s), see Appendix <ref> for details. Thus the contribution to the anomalous dimension of type (<ref>) can be evaluated with the help of this formula if the weight function corresponding to the harmonic sums Ω_m⃗ is known. We also give an expression for another product of the operators: H_2= ℋ H_1, h_2(τ) = -lnτ h_1(τ) +2τ̅∫_0^τds/s̅h_1(τ)-h_1(s)/(τ-s) , which appears to be useful in the calculations as well. §.§ Recurrence procedure Let us consider the integral (<ref>) with Δγ=Ω_m⃗, h_m⃗(τ) =∫_C dN/2π i (2N+1)Ω_m⃗(N)P_N(z), where z=(1+τ)/(1-τ). 
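Before turning to the recurrence construction, the statements above are easy to verify numerically: the eigenvalue integral Δγ(N) =∫_+ h(τ) (1-α-β)^N-1 with h≡1 must return η(N)=1/(N(N+1)), and, by the product rule, the weight function obtained from the convolution formula with h_1≡1 must have eigenvalues η(N)^2. A minimal sketch, assuming scipy is available (function names are ours):

```python
import numpy as np
from scipy.integrate import dblquad, quad

def delta_gamma(N, h):
    """Eigenvalue integral: ∫_0^1 dα ∫_0^{1-α} dβ h(τ) (1-α-β)^(N-1),
    with the conformal ratio τ = αβ/((1-α)(1-β))."""
    def integrand(beta, alpha):
        tau = alpha * beta / ((1.0 - alpha) * (1.0 - beta))
        return h(tau) * (1.0 - alpha - beta) ** (N - 1)
    val, _ = dblquad(integrand, 0.0, 1.0, lambda a_: 0.0, lambda a_: 1.0 - a_,
                     epsabs=1e-10, epsrel=1e-8)
    return val

# h_+ = 1 must reproduce eta(N) = 1/(N(N+1))
for N in (1, 2, 3, 6):
    print(N, delta_gamma(N, lambda tau: 1.0), 1.0 / (N * (N + 1)))

# product rule spot check with h_1 = 1: the convolution
# h_2(tau) = int_0^tau ds ln(tau/s)/(1-s)^2 h_1(s) must have eigenvalues eta(N)^2
def h2(tau):
    val, _ = quad(lambda s: np.log(tau / s) / (1.0 - s) ** 2, 0.0, tau, limit=200)
    return val

N = 2
print(delta_gamma(N, h2), (1.0 / (N * (N + 1))) ** 2)   # slow-ish: nested quadratures
```

With these checks in hand, we return to the recurrence procedure for less trivial harmonic sums.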
Using a recurrence relation for the Legendre functions (2N+1) P_N(z) =d/dz( P_N+1(z) - P_N-1(z)) we obtain h_m⃗(τ) =- d/dz∫_C dN/2π i P_N(z) F_m⃗(N), where F_m⃗(N) =( Ω_m⃗(N+1)-Ω_m⃗(N-1)). It is easy to see that the function F_m⃗(N) has the negative parity under N→ -N-1 transformation and can be represented in the form F_m_1,…, m_p(N) =∑_k=2^p r_k(N) Ω_m_k,…,m_p(N) + r(N), where r_k(N) are rational functions of N. The harmonic sums Ω_m_k,…,m_p(N) in Eq. (<ref>) can be either of positive or negative parity. Therefore the coefficient r_k(N) accompanying the positive parity function Ω_m_k,…,m_p(N) has the form r_k(N)=(2N+1) P_k(η), where P_k is some polynomial, while r_k=P_k(η) for the harmonic sums of negative parity. The free term has the form r(N)=(2N+1) P(η). Together, they make F_m_1,…, m_p(N) with negative parity. For example, for the harmonic sum Ω_1,3 (see appendix <ref> for a definition), one gets F_1,3(N) =(2N+1)η(Ω_3+ζ_3 -η^2-1/2η^3), while for the harmonic sum Ω_2,2 F_2,2(N) =(2N+1)1/2η^3(3+η) +η(2+η)Ω_2 . Note the reappearance of the common factor (2N+1) in the first case, (<ref>). This implies that, up to the derivative d/dz, the integral (<ref>) has the form (<ref>). Hence, if the kernel corresponding to the underlined terms in Eq. (<ref>) is known, the kernel corresponding to Ω_1,3 can be easily obtained. Thus the problem of finding the invariant kernel with the eigenvalues Ω_1,3(N) is reduced to the problem of finding the kernel with the eigenvalues Ω_3(N) ( Ω_1,3↦Ω_3 ). However, as it seen from our second example, not all parity preserving harmonic sums share this property. Indeed, the underlined term on the right hand side (rhs) of Eq. (<ref>) does not have the factor (2N+1). Hence, all these transformations do not help to solve the problem for Ω_2,2. It is easy to see that the above recurrence procedure works only if all the harmonic sums Ω_m_k,…,m_p appearing in Eq. (<ref>) are of positive parity. It was proven in ref. <cit.> that any harmonic sum, Ω_m⃗, with all indices m⃗ positive odd or negative even has positive parity (see Appendix <ref> for explicit examples of the harmonic sums satisfying these conditions). Therefore, the rhs of Eq. (<ref>) only contains harmonic sums of the same type. Thus the invariant kernels corresponding to the harmonic sums of positive parity can always be calculated recursively, using Eqs. (<ref>), (<ref>) and (<ref>), (<ref>). Crucially, only such harmonic sums appear in the anomalous dimensions γ(N) in QCD and N=4 SYM. All convolution integrals (<ref>) and (<ref>) can in turn be systematically calculated with the packages HyperInt <cit.> or PolyLogTools <cit.>. The explicit expressions for the kernels corresponding to the lowest harmonic sums are given in Appendix <ref> for references. §.§ Invariant kernels: QCD Below we give an explicit expression for the invariant kernel of twist-two flavor nonsinglet operator in QCD. We will not split the operator 𝒪(z_1,z_2) into positive (negative) parity operators. The evolution operator still takes the form (<ref>), with ΔH given by the following integral ΔHf(z_1,z_2) =∫_+ ( h(τ) + h̅ (τ)P_12 ) f(z_12^α,z_21^β) , where P_12 is a permutation operator, P_12 f(z_1, z_2)= f(z_2,z_1)[ In order to avoid possible misunderstandings we write down it explicitly, P_12 f(z_12^α,z_21^β) =f(z_21^α,z_12^β).]. For (anti)symmetric functions f(z_1,z_2) the operator (<ref>) takes a simpler form (<ref>) with the kernel h±h̅. Our expression for the constant term A(a) agrees with the constant term χ given in ref. 
<cit.>, A =χ -2Γ_cusp. For completeness, we provide explicit expressions for the constant A = a A_1+a^2 A_2 + a^3 A_3 +⋯, A_1 = -6 C_F , A_2 = C_F[n_f(16/3ζ_2+2/3)-N_c(52/3ζ_2+43/6) +1/N_c(24ζ_3-12ζ_2+3/2)] , A_3 = C_F[n_f^2(32/9ζ_3-160/27ζ_2+34/9) +n_fN_c(-256/15ζ_2^2+8/9ζ_3+2492/27ζ_2-17) + n_f/N_c(232/15ζ_2^2-136/3ζ_3+20/3ζ_2-23) + N_c^2(-80ζ_5+616/15ζ_2^2+266/9ζ_3 -5545/27ζ_2+847/18) +(-120ζ_5-16ζ_2ζ_3 -124/15ζ_2^2+1048/3ζ_3-356/3ζ_2+209/4) +1/N_c^2(120ζ_5+16ζ_2ζ_3 -144/5ζ_2^2-34ζ_3-9ζ_2-29/4)] , where C_F=(N_c^2-1)/(2N_c) is the quadratic Casimir in the fundamental representation of SU(N_c) and we take T_F=1/2. Note that we are adopting a different color basis compared to ref. <cit.>. The explicit expressions for the cusp anomalous dimensions Γ_ cusp(a) = a Γ_ cusp^(1) + a^2 Γ_ cusp^(2) + a^3 Γ_ cusp^(3) up to three loops are provided in Eq. (<ref>). Finally we give answers for the kernels h(h̅)(a)=∑_k a^k h_k (h̅_k). Explicit one- and two-loop expressions are known <cit.> but for completeness we give them here h_1=-4C_F , h̅_1=0 , and h_2 =C_F{ n_f 88/9 + N_c(-2 H_1 + 8ζ_2 -604/9) +1/N_c( - 8(H_11 +H_2) + 2 (1-4/τ) H_1)} , h̅_2 =-8 C_F/N_c(H_11 + τ H_1) , where H_m⃗=H_m⃗(τ) are the harmonic polylogarithms (HPLs) <cit.>. The three-loop expression[A file with our main results can be obtained from the preprint server http://arXiv.org http://arXiv.org by downloading the source. Furthermore, they are available from the authors upon request.] is more involved h_3 = C_F{ -64/9 n_f^2 +n_f N_c8/3[ H_3 -H_110 -H_20 +H_12 +1/τH_2 - 1/τH_10 -19/12H_1 + 8 ζ_3 -32/3ζ_2 +5695/72] +n_f/N_c16/3[3ζ_3 -75/16+H_3 + H_21+ H_12 +H_111 +(16/3+1/τ) ( H_2+H_11) + (31/24+10/3τ)H_1] +N_c^2 4[ H_13 + H_112 - H_120 - H_1110 + 2 H_4 -2 H_30 -2 H_210 +2 H_22 + (8/3-2/τ) (H_20-H_3 +H_110 -H_12) - 5/4(H_10 +H_11) +2/3τ(H_10 -H_2 ) -5/2H_0 +(115/72+ζ_2+1/τ)H_1 -44/5ζ_2^2 -22/3ζ_3 +436/9ζ_2 -4783/27] +16 [H_4 -H_30+H_13 +H_121 -3/2H_120 +3/2H_22 +3/2H_112 +2H_31 +2H_1111 +3H_211 -1/2H_1110 - (1/τ+ 1)H_20 -(11/6 - 1/τ)H_3 - H_110 +(-37/12 + 3/2τ) H_12 -(7/3 - 2/τ) H_21 +( -43/12 + 3/τ)H_111 +(13/8+ 1/2ζ_2)H_10 -(1/2ζ_2 + 127/9 + 11/6τ) H_2 -(899/72 + 1/3τ) H_11 + (ζ_2-1) H_0 +(7/4ζ_2-143/36 -1/τ(1/2ζ_2+67/9))H_1 +5/2ζ_2 -47/24] +8/N_c^2[ H_4 -H_30 -H_210 +H_112 -H_1111 -2 H_120 +2 H_13 +2H_31 -2H_1110 -2H_211 +3H_121 -(1/2+1/τ)( H_20-H_3+H_110) +2(1 + 1/τ) H_21 +(3/2 - 2/τ) H_111 +(7/8 + 3/2 τ) H_10 -(ζ_2-1/2+3/2 τ) H_2 +(11/8 - ζ_2) H_11 -11/4 H_0 +( ζ_2 -107/16 -ζ_2/τ-1/ 2τ) H_1 + 7/2] } andh̅_3 = -8C_F{ -2n_f/3N_c[ H_111 +H_110+τ H_10 +(16/3 +τ)H_11 +(1/2+10/3τ)H_1 +1/2] +H_120 + H_22 -H_1110 -H_112 -2H_121 +2H_211 -4H_1111 + τH_20 +(13/6- τ) H_110 +(1/2 - 2 τ) H_12 + (5/2-2 τ)H_21 +(43/6- 6 τ) H_111 -(ζ_2-13/6τ)H_10 -(3+ζ_2 + 3/2τ)H_2 +(236/9+2/3τ)H_11 -ζ_2 τH_0 +(53/6 + ζ_2 + 3 ζ_3 + 134/9τ + ζ_2 τ) H_1 +11/6 +3(ζ_2+ζ_3-1/2) τ +1/N_c^2[ H_1111 - H_22 - H_211 +3 H_120 + 3 H_112 -3 H_1110 +4 H_121 +3 τ H_20 +3 (1/2-τ) H_110 -(7/2 - 4 τ) H_12 + (1/2 + 4 τ) H_21 +(-3/2 + 2τ) H_111 -3 (ζ_2 - 1/2τ) H_10 +(ζ_2 -2- 3/2τ) H_2 +2(ζ_2 -1) H_11 -3ζ_2 τ H_0 +(5 +2ζ_2 +3ζ_3+ ζ_2 τ) H_1 +3τ( ζ_3- 1/2) ] }. The kernels are smooth functions of τ except for the endpoints τ=0 and τ=1. For τ→ 1 the three-loop kernel functions behave as ∑_0≤ k≤ 4∑_m>0 r_kmτ̅^m ln^kτ̅. For small τ – which determines the large N asymptotic of the anomalous dimensions – the kernels (for each color structure) have the form ∑_k≥ 0 (a_k + b_klnτ) τ^k. 
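For quick numerical spot checks of the kernels above (e.g., for plotting them or comparing with approximate parametrisations), low-weight HPLs can be evaluated directly from their iterated-integral definition; the compressed indices used in the text expand in the standard way, m≥2 → (0,…,0,1) with m-1 zeros. The sketch below is a simple helper of our own, adequate for 0<τ<1 and low weights only; dedicated packages should be used for anything beyond spot checks.

```python
from math import log, factorial
from scipy.integrate import quad

FACTORS = {1: lambda t: 1.0 / (1.0 - t),
           0: lambda t: 1.0 / t,
          -1: lambda t: 1.0 / (1.0 + t)}

def hpl(word, x):
    """H_word(x) for a tuple of letters in {1, 0, -1} and 0 < x < 1, evaluated
    from the iterated-integral definition (nested quadratures; slow but simple)."""
    if len(word) == 0:
        return 1.0
    if all(a == 0 for a in word):
        return log(x) ** len(word) / factorial(len(word))
    if word == (1,):
        return -log(1.0 - x)
    if word == (-1,):
        return log(1.0 + x)
    a, rest = word[0], word[1:]
    val, _ = quad(lambda t: FACTORS[a](t) * hpl(rest, t), 0.0, x, limit=200)
    return val

# examples: H_2(1/2) = Li_2(1/2) ≈ 0.58224, and the two-loop piece
# hbar_2(tau) = -8 C_F/N_c (H_11 + tau H_1) evaluated for N_c = 3, C_F = 4/3
print(hpl((0, 1), 0.5))
CF, NC, tau = 4.0 / 3.0, 3.0, 0.3
print(-8.0 * CF / NC * (hpl((1, 1), tau) + tau * hpl((1,), tau)))
```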
We note here that the reciprocity property of the anomalous dimension is equivalent to the statement that the small τ expansion of the kernels does not involve non-integer powers of τ, namely h(τ)∼∑_m,k≥0 a_mkτ^m ln^kτ. Below we compare our exact three-loop results with the approximate expressions constructed in ref. <cit.>. The approximate expressions reproduce the asymptotic behaviors of the exact kernels at both τ→ 0,1. We therefore subtract the logarithmically divergent pieces (see Eqs. (<ref>) and (<ref>) for explicit expressions) from both the exact and the approximated expressions to highlight their (small) deviations as shown in Figs. <ref> and <ref>. For illustrative purposes, we plot the planar contribution (C_FN_c^2 and C_F in h_3 and h̅_3 respectively) and the subsubplanar contribution (C_F/N_c^2). The former is numerically dominant and generates the leading contribution in the large-N_c limit whereas the latter shows the worst-case scenario for the previous approximation using a simple HPL function ansatz. The error of other color structures all fall between the planar and subsubplanar cases, hence are numerically small. §.§ Invariant kernels: N=4 SYM In this section we present the invariant kernels for the universal anomalous dimensions of the planar N=4 SYM, see e.g., refs. <cit.> for expressions up to NNLO. They are rather short so that we quote them here. We use the parametrization (<ref>), where Γ_cusp(a) can be found in ref. <cit.> and the constant term A(a) is A(a)=-24 a^2 ζ_3 + 32 a^3 (ζ_2ζ_3 +5ζ_5) +O(a^4), where a= N_c g^2_SYM/16π^2, and Δγ(N) = - a^2 16(Ω_3 - 2 Ω_-2,1 + 2 Ω_1 Ω_-2) +a^3 64( Ω_5 +2 Ω_3,-2 -8 Ω_1,1,-2,1+2Ω_1,-4 + Ω_1(Ω_-4 +Ω_-2^2 +ζ_2Ω_-2 ) -2ζ_2 Ω_-2,1). For the kernels we find h_1=h̅_1=0, h_2 =8 τ̅/τH_1 , h̅_2 =- 8 τ̅H_1 and h_3 = - 16τ̅/τ( 4 H_1 1 1 + H_21 + H_1 2 + H_1 1 0) , h̅_3 = 16 τ̅( 4H_1 1 1 + 3( H_21 + H_1 2) - H_1 1 0 + H_2 0 - ζ_2 H_0). These expressions are extremely simple in comparison with the expressions in QCD of the same order. Let us notice that the two-loop kernels contain only HPLs of weight one with the three-loop kernels involving HPLs of weight three, while in QCD the corresponding kernels require HPLs of weight two and four, respectively. Note also that the kernel h is proportional to the factor τ̅/τ and the kernel h̅ to the factor τ̅. It would be interesting to see if these properties persist in higher loops. § SUMMARY We have constructed a transformation that brings the evolution kernels of twist-two operators to the canonically conformal invariant form. The eigenvalues of these kernels are given by the parity respecting anomalous dimensions. We have developed a recurrence procedure that allows one to restore the weight functions of the corresponding kernels. It is applicable to a subset of the harmonic sums (with positive odd and negative even indices). It is interesting to note that exactly only such harmonic sums appear in the expressions for the reciprocity respecting anomalous dimensions. We have calculated the three-loop invariant kernels in QCD and in N=4 SYM (in the planar limit). In QCD it was the last missing piece to obtain the three-loop evolution kernels for the flavor-nonsinglet twist-two operators in a fully analytic form, see ref. <cit.>. In the case of N=4 SYM the lowest order expressions for the kernels are rather simple and exhibit some regularities, h∼τ̅/τ, h̅∼τ̅. It would be interesting to check if these properties survive at higher loops. 
We expect that at ℓ-loops the kernels h^(ℓ)(τ) will be given by linear combinations (up to common prefactors) of HPLs of weight 2ℓ -3 with positive indices. Therefore going over to the invariant kernel can lead to a more compact representation of the anomalous dimensions than representing the anomalous dimension spectrum γ(N) in terms of harmonic sums. The much smaller function basis in terms of HPLs (τ/τ̅H_m⃗ and τ̅H_m⃗) opens the possibility of extracting the analytical expressions of the higher-order evolution kernels from minimal numerical input through the PSLQ algorithm. § ACKNOWLEDGMENTS We are grateful to Vladimir M. Braun and Gregory P. Korchemsky for illuminating discussions and comments on the manuscript. This work is supported by Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center TRR110/2, grant 409651613 (Y.J.), and the Research Unit FOR 2926, project number 40824754 (S.M.). § APPENDICES § In this appendix, we describe in detail the derivations of some of the equations presented in section <ref>. Let us start with Eq. (<ref>). For the generator S_-(a)=S_- the statement is trivial. Next, making use of Eq. (<ref>) for the operator T(H) and, taking into account that H(a) commutes with the generators S_α(a), one can write the left hand side (lhs) of Eq. (<ref>) in the form ∑_n=0^∞1/n!L^n S_α(a) X^n , where X=β̅(a)+1/2H(a). Using the representation (<ref>) for the generators and taking into account that [S_0,L]=1 and [S_+,L]=z_1+z_2 (we recall that L=ln z_12) one obtains L^n S_0 = S_0 L^n - n L^n-1 + L^n X, L^n S_+ = S_+ L^n + (z_1+z_2)(-n L^n-1 + L^n X). Substituting these expressions back into Eq. (<ref>) one finds that the contributions of the last two terms on the rhs of Eq. (<ref>) cancel each other. Hence Eq. (<ref>) takes the form S_α∑_n=0^∞1/n!L^n X^n = S_αT(H), that finally results in Eq. (<ref>). 3mm Let us now show that the inverse to T(H) has the form (<ref>). The product ℐ=T^-1(H)T(H) can be written as ℐ = ∑_n=0^∞(-1)^n/n!L^n(β̅(a)+1/2H(a) )^n T(H). Moving T(H) to the left with help of the relation (<ref>) and then using Eq. (<ref>) for T(H) one gets (X=β̅(a)+1/2H(a)) ℐ=∑_n=0^∞(-1)^n/n!L^n T(H) X^n =∑_n,k=0^∞(-1)^n/n!k!L^n+kX^n+k =1 . Finally, we consider the product of operators T with a differently defined function L. Namely, let us take T_±(H) ≡T(L_±,H), where L_± =ln(z_12± i0) so that L_+-L_-=2πθ(z_2-z_1). In order to calculate the product U=T_+(H) T_-(H) one proceeds as before: use expansion (<ref>) for T_+(H), move T_-(H) to the left and then expand it into a power series. It yields U =∑_n,k=0^∞(-1)^n/n!k!L_+^nL_-^k X^n+k , where X=β̅(a)+1/2H(a). Let L_+=L_-+2π iθ(z_2-z_1) one can get for the sum in Eq. (<ref>) U =∑_m=0^∞(2π i θ)^m/m!X^m(a) =1 - θ(1- e^2π i ( β̅+1/2H)), where θ≡θ(z_2-z_1). Since S_0,+θ(z_21)∼ z_21δ(z_21)=0 one concludes that U commutes with the canonical generators S_α and hence UH=HU. § Let us check that the kernel h(τ) given by Eq. (<ref>) has the eigenvalues Δγ(N). First, after some algebra, the integral in Eq. (<ref>) can be brought to the following form Δγ(N)=∫_1^∞ dt h(t-1/t+1) Q_N(t) , where Q_N(t) is the Legendre function of the second kind <cit.>. Inserting h in the form of Eq. (<ref>) into Eq. (<ref>) one gets ∫_CdN'/2π i(2N'+1)Δγ(N')∫_1^∞ dt P_N'(t) Q_N(t) . The t-integral of the product of the two Legendre functions gives <cit.> ((N-N')(N+N'+1))^-1. Then closing the integration contour in the right half-plane one evaluates the N' integral with the residue theorem at N'=N yielding the desired lhs of Eq. 
(<ref>). Finally, in order to verify Eq. (<ref>) one can check that the integral (<ref>) with the kernel h_2, Δγ_2(N), is equal to Δγ_1(N)/N/(N+1). The simplest way to do it is to substitute the Legendre function in the form Q_N(t)=-∂_t(1-t^2)∂_t Q_N(t)/N/(N+1), and perform integration by parts. § In this appendix, we collect the harmonic sums and the corresponding kernels which we have used. We split them into two parts: the first one includes the harmonic sums Ω_m_1,…,m_k such that ∏_i^k sign(m_i)=1. Ω_3 = S_3-ζ_3, Ω_3,1 =S_3,1-1/2 S_4 -3/10ζ_2^2 Ω_-2,-2 = S_-2,-2 -1/2 S_4+1/2ζ_2S_-2+1/8ζ_2^2, Ω_1,3,1 = S_1,3,1-1/2 S_1,4-1/2 S_4,1 + 1/4 S_5 -3/10ζ_2^2 S_1 + 3/4ζ_5 , Ω_-2,-2,1 = S_-2,-2,1-1/2 S_4,1-1/2 S_-2,-3 + 1/4ζ_3 S_-2 +5/16ζ_5 , Ω_5 = S_5-ζ_5. Here S_m⃗ are the harmonic sums with argument N. We define the sums of negative signature, ∏_i^k sign(m_i)=-1, with an additional sign factor: Ω_-2 =(-1)^N [S_-2+ζ_2/2], 0 Ω_-2,1 =(-1)^N [ S_-2,1 - 1/2 S_-3 + 1/4ζ_3], Ω_1,-2,1 = (-1)^N[S_1,-2,1 - 1/2S_1,-3 - 1/2 S_-3,1 + 1/4 S_-4. . + 1/4ζ_3 S_1-1/80ζ_2^2 ], Ω_-4,1 =(-1)^N[S_-4,1-1/2 S_-5 + 11/8ζ_5 -1/2ζ_2ζ_3], Ω_3,-2 = (-1)^N[ S_3,-2-1/2 S_-5 +1/2ζ_2 S_3 + 9/8ζ_5-3/4ζ_2ζ_3], Ω_1,1,-2,1 = (-1)^N [S_1,1,-2,1 -1/2 S_1,1,-3-1/2 S_1,-3,1 -1/2 S_2,-2,1 +1/4 S_2,-3 +1/4 S_-4,1 +1/4 S_1,-4 -1/8 S_-5 +1/4ζ_3 S_1,1-1/80ζ_2^2 S_1 -1/8ζ_3 S_2 + 1/8ζ_5 -1/16ζ_2ζ_3 ], Ω_1,-4=(-1)^N[S_1,-4-1/2S_-5+7/20ζ_2^2S_1-11/8ζ_5+1/2ζ_2ζ_3]. These combinations of harmonic sums are generated by the following kernels, ℋ_3 = -1/2τ̅/τH_1, ℋ_3,1 = 1/4τ̅/τ(H_11+H_10) ℋ_-2,-2 = 1/4τ̅/τH_11, ℋ_1,3,1 = -1/8τ̅/τ(H_20+H_110+H_21 +H_111), ℋ_-2,-2,1 = 1/8τ̅/τ(H_12-H_110), ℋ_5 = -1/2τ̅/τ(H_111+H_12) and ℋ_-2 = 1/2τ̅, ℋ_-2,1 = -1/4τ̅(H_1+H_0), ℋ_1,-2,1 = 1/8τ̅( H_10 + H_11), ℋ_-4,1 = -1/4τ̅(H_21+H_20+H_111+H_110), ℋ_3,-2 = -1/4τ̅(H_21 +H_111), ℋ_1,1,-2,1 =-1/16τ̅ (H_111+H_110), ℋ_1,-4 =-1/4τ̅(H_12+H_111), where all HPLs have argument τ. These functions serve as a basis and more complicated structures can be generated as products of Ω_m⃗. § Here we give the small (τ→ 0) and large (τ→ 1) expansions of the invariant kernels h_3,h̅_3. By h_3^(A) (h̅_3^(A)) we denote the function which appears in the expression for h_3 (h̅_3^(A)) with the color factor C_F× A. We will keep the logarithmically enhanced and constant terms in both limits. The former is subtracted from both the exact and approximated three-loop kernel to obtain the two figures in Eqs. <ref> and <ref>. 
At τ→ 0 one gets h_3^(n_f N_c) = 5839/27 - 256/9ζ_2 + 64/3ζ_3 -8/3lnτ , h_3^(n_f/N_c) = -17/9 + 16ζ_3 , h̅_3^(n_f/N_c) = 8/3 , h_3^(N_c^2) = -18520/27 - 88/3ζ_3 - 176/5ζ_2^2 + 1744/9ζ_2 -46/3lnτ h_3^(N_c^0) = - 1186/9 + 32ζ_2 + (-32+16ζ_2)lnτ , h_3^(N_c^-2) = 24-8ζ_2 -18 lnτ , h̅_3^(N_c^0) = -44/3 , h̅_3^(N_c^-2) = -48τ(ζ_2 + ζ_3+1/4 - ζ_2 lnτ) , and for τ→ 1 one obtains h_3^(n_f N_c) = 5695/27 - 208/9ζ_2 + 64/3ζ_3 + (-16/3ζ_2+38/9) lnτ̅ , h_3^(n_f/N_c) = 304/9ζ_2 + 16ζ_3-25 -(16/3ζ_2 +74/3) lnτ̅ + 152/9ln^2τ̅-8/9ln^3τ̅ , h̅_3^(n_f/N_c) = 16/3(1/2-ζ_2 + ζ_3) + (16/3ζ_2-184/9)lnτ̅ +152/9ln^2τ̅-8/9ln^3τ̅ , h_3^(N_c^2) =-72/5ζ_2^2 + 1741/9ζ_2 - 88/3ζ_3 - 19132/27 +(4/3ζ_2 -187/18) lnτ̅+( -5/2 + 4ζ_2)ln^2τ̅ , h_3^(N_c^0) = 136/5ζ_2^2 - 2170/9ζ_2 + 80ζ_3 - 94/3 +( -32/3ζ_2- 24ζ_3 + 548/3)lnτ̅ +( 16ζ_2-923/9) ln^2τ̅+14/9ln^3τ̅+4/3ln^4τ̅ , h_3^(N_c^-2) =-28ζ_2^2 - 27ζ_2 + 56ζ_3 + 28 +(115/2 -12ζ_2-40ζ_3)lnτ̅ +(8ζ_2+11/2)ln^2τ̅+2/3ln^3τ̅-1/3ln^4τ̅ , h̅_3^(N_c^0) = -136/5ζ_2^2 + 88/3ζ_2 - 136/3ζ_3-8/3 +( -16/3ζ_2 +1708/9) lnτ̅ -968/9ln^2τ̅+14/9ln^3τ̅+4/3ln^4τ̅ , h̅_3^(N_c^-2) = -216/5ζ_2^2 + 40ζ_2 + 8ζ_3 + 12 + (40ζ_2 - 64ζ_3+40)lnτ̅ -8(4ζ_2-1)ln^2τ̅+ 2/3ln^3τ̅- 1/3ln^4τ̅ . Here we quote the cusp anomalous dimensions up to three loops for reference <cit.>, Γ_ cusp^(1) = 4C_F , Γ_ cusp^(2) = C_F[N_c(536/9-16ζ_2)-40/9n_f ] , Γ_ cusp^(3) = C_F[N_c^2(176/5ζ_2^2+88/3ζ_3-1072/9ζ_2+490/3) + N_c n_f(-64/3ζ_3+160/9ζ_2-1331/27) +n_f/N_c(-16ζ_3+55/3) -16/27n_f^2 ] .
http://arxiv.org/abs/2307.00669v1
20230702211545
Spitzer thermal phase curve of WASP-121 b
[ "Giuseppe Morello", "Quentin Changeat", "Achrène Dyrek", "Pierre-Olivier Lagage", "Jonathan C. Tan" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.IM" ]
#1,for:=#1<ref> Department of Space, Earth and Environment, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden Instituto de Astrofísica de Canarias (IAC), 38205 La Laguna, Tenerife, Spain European Space Agency (ESA), ESA Office, Space Telescope Science Institute (STScI), Baltimore MD 21218, USA. Department of Physics and Astronomy, University College London, Gower Street,WC1E 6BT London, United Kingdom AIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, F-91191 Gif-sur-Yvette, France Dept. of Astronomy, University of Virginia, Charlottesville, VA 22904, USA We analyse unpublished Spitzer observations of the thermal phase-curve of WASP-121 b, a benchmark ultra-hot Jupiter. We adopted the wavelet pixel-independent component analysis technique to remove challenging instrumental systematic effects in these datasets and we fit them simultaneously with parametric light-curve models. We also performed phase-curve retrievals to better understand the horizontal and vertical thermal structure of the planetary atmosphere. We measured planetary brightness temperatures of ∼2700 K (dayside) and ∼700–1100 K (nightside), along with modest peak offsets of 5.9^∘±1.6 (3.6 μm) and 5.0^∘_-3.1^+3.4 (4.5 μm) after mid-eclipse. These results suggest inefficient heat redistribution in the atmosphere of WASP-121 b. The inferred atmospheric Bond albedo and circulation efficiency align well with observed trends for hot giant exoplanets. Interestingly, the measured peak offsets correspond to a westward hot spot, which has rarely been observed. We also report consistent transit depths at 3.6 and 4.5 μm, along with updated geometric and orbital parameters. Finally, we compared our Spitzer results with previous measurements, including recent JWST observations. We extracted new information on the thermal properties and dynamics of an exoplanet atmosphere from an especially problematic dataset. This study probes the reliability of exoplanet phase-curve parameters obtained from Spitzer observations when state-of-the-art pipelines are adopted to remove the instrumental systematic effects. It demonstrates that Spitzer phase-curve observations provide a useful baseline for comparison with JWST observations, and shows the increase in parameters precision achieved with the newer telescope. Spitzer thermal phase curve of WASP-121 b G. Morellochalmers,iac Q. Changeatesa,stsci A. Dyrekcea P.-O. Lagagecea J. C. Tanchalmers,uva August 1, 2023 ================================================================================================== § INTRODUCTION WASP-121 b is an ultra-hot Jupiter (UHJ) orbiting around an F6 V star in ∼1.27 d. Table <ref> reports the stellar and planetary parameters taken from its discovery paper <cit.>. WASP-121 b has been targeted by many follow-up studies, based on its nature as an exoplanet amenable to characterisations with various observing techniques. It is especially well suited for atmospheric characterisation by both transmission and emission spectroscopy, owing to its high equilibrium temperature and large size. Some researchers have proposed WASP-121 b as a suitable target to further investigate its interior structure and/or shape deformations <cit.>. Shortly after the WASP-121 b discovery, <cit.> detected the 1.4 μm water absorption band from a transit observed with the Hubble Space Telescope (HST) using the Wide Field Camera 3 (WFC3) with the G141 grism, covering 1.1-1.7 μm. 
<cit.> also detected H_2O in emission, along with evidence of a stratosphere, using the same instrument setup to observe the planetary eclipse. Earlier atmospheric models of UHJs predicted temperature inversions to occur due to absorption by metal oxides, such as TiO and VO, in their upper atmospheric layers <cit.>. Small features occurring at the blue edge of the HST/WFC3 spectra of WASP-121 b have been tentatively attributed to TiO and VO <cit.>. Based on subsequent transit observations obtained with the HST/Space Telescope Imaging Spectrograph (STIS), covering 0.3-1.0 μm, <cit.> confirmed the presence of VO, but not TiO, at the terminator of WASP-121 b atmosphere. <cit.> found evidence of H^- in the emission spectrum of WASP-121 b taken with HST/WFC3 using the G102 grism (0.8-1.1 μm), but retracted the previous claim of VO in emission. <cit.> refined H_2O detection and VO non-detection in the planet dayside by stacking multiple eclipse observations taken with HST/WFC3 G141. <cit.> detected an excess of near-UV absorption (0.20-0.27 μm) during three transits of WASP-121 b observed with the Ultraviolet/Optical Telescope (UVOT) onboard the Neil Gehrels Swift Observatory. <cit.> resolved exospheric Mgii and Feii lines in the near-UV transmission spectrum observed with HST/STIS. <cit.> and <cit.> reported two independent analyses of long-term visible photometry of WASP-121 from the Transiting Exoplanet Survey Satellite (TESS, ) showing strong phase-curve modulations. Both studies measured a strong day-night temperature contrast and a small offset between the maximum emission and substellar points, suggesting low reflectivity and inefficient heat redistribution in the planetary atmosphere. They also found evidence for a temperature inversion, partly caused by H^-. <cit.> analysed two spectroscopic phase curves observed with HST/WFC3 G141, revealing variations in the H_2O feature that correspond to a thermal profile warming (cooling) with altitude in the dayside (nightside). These data are consistent with models predicting thermal dissociation of H_2O on the dayside and recombination on the nightside <cit.>. The James Webb Space Telescope (JWST) has recently observed a full phase curve of WASP-121 b using the Near-InfraRed Spectrograph (NIRSpec) with the G395H grism (2.70-5.15 μm). <cit.> published their first-look analysis of the JWST/NIRSpec data, the results of which align well with those in the previous literature. WASP-121 b has also been the subject of numerous ground-based observing campaigns. <cit.> detected a deep planetary eclipse in the 2MASS K band with A Novel Dual Imaging CAMera (ANDICAM) attached to the 1.3-m telescope of the SMARTS Consortium. Multiple studies based on the high-resolution Doppler spectroscopy technique placed severe upper limits on the possible presence of TiO and VO in gaseous form (e.g. ). However, the high-resolution spectra revealed a rich inventory of metals and ions in the WASP-121 b atmosphere, including Hα, Nai, Fei, Feii, Cri, Vi, Mgi, Nii, Cai, Caii, Ki, Lii, Scii, Baii, Coi, and Srii <cit.>. In this paper, we present the first analysis of two phase curves of WASP-121 b observed with the Spitzer/InfraRed Array Camera (IRAC) channels 1 and 2 <cit.>, which operate in photometric passbands centred at 3.6 and 4.5 μm, respectively. These data were acquired by the end of January 2018 for the Spitzer program ID 13242 (PI: Tom Evans). 
Using the wavelet pixel-independent component analysis (ICA) pipeline <cit.>, we overcome the issues of strong instrumental systematic effects that may have prevented their publication so far. We validate the robustness of our results, comparing them with those from recent JWST observations in similar passbands. Section <ref> presents the Spitzer/IRAC observations of WASP-121 b phase curves. Section <ref> describes the procedure adopted in this work to analyse the data. Section <ref> reports our results, including the transit and phase-curve parameters, and derived atmospheric properties. Section <ref> discusses the atmospheric properties of WASP-121 b with more details, including the results of phase-curve retrievals. Section <ref> compares our results with those from previous observations to obtain a more complete picture of the atmosphere of WASP-121 b, and puts them in the context with other UHJs. Section <ref> summarizes the conclusions of our study. § OBSERVATIONS We analysed two phase curves of WASP-121 b observed with Spitzer/IRAC for the program ID 13242 (PI: Tom Evans). Each visit consists of four consecutive astronomical observation requests (AORs) spanning approximately 39 hr, including one transit and two eclipse events. The observations were taken using IRAC sub-array readout mode with 2 s frame time <cit.>. In this mode, 64 frames are taken consecutively, with a delay of 1.27 s after reset. In total, 69,184 frames were acquired per visit, split unequally across the four AORs, but analogously for the two visits. The first visit made use of IRAC channel 2, that is, photometric filter with ∼4.0-5.0 μm passband and effective wavelength of 4.5 μm. The second visit made use of IRAC channel 1, that is, photometric filter with ∼3.2-3.9 μm passband and effective wavelength of 3.6 μm. Table <ref> summarises the main details of the observations. § DATA ANALYSIS §.§ Raw photometry extraction We downloaded the basic calibrated data (BCD, files extension `_bcd.fits') from the Spitzer Heritage Archive <cit.>. The BCD are flat-fielded and flux-calibrated frames <cit.>. We extracted the pixel time series from 5×5 arrays where the central pixel records the highest flux in most frames within an AOR. The raw light curves were computed as the sum of pixel time series from the 5x5 arrays. We note that the selected array could vary between AORs within the same visit. We also attempted to use a single array for each entire visit, but this choice increased the photometric offsets between AORs and degraded the performance of our data detrending method. We flagged and corrected outliers in the raw light curves through the following procedure. First, we computed the smoothed reference light curves as the sliding window medians of binned raw light curves. We adopted a bin size of 64, corresponding to the original data cube size, and a sliding window size of five. Second, we computed the reference noise level for the unbinned raw light curves as the median of the moving standard deviation with a sliding window of five. Third, we identified outliers as those points that are more than 5σ away from the smoothed references. Fourth, we replaced the sets of consecutive outliers with the vector means of the adjacent sets, or, equivalently, via a linear interpolation in the case of isolated outliers. The replacements were applied to all pixel light curves, and not just to the raw light curves. We iterated the third and fourth steps until there were no outliers left in the raw light curves. 
Finally, we binned all the light curves by a factor of four, corresponding to an integration time of 8 s, to speed up the following data analysis. The chosen bin size is a conservative one, being much smaller than the occultation timescales <cit.>. We could have adopted a larger bin size for the parts outside the occultations, based on the phase curve timescales, but our choice also minimizes the impact of correlated noise <cit.>. Figure <ref> shows the binned raw light curves analysed in this work. For illustrative purposes only, we calculated the coordinates of the stellar centroid using the centre-of-light method, as implemented by <cit.>. Figure <ref> shows the x and y coordinates obtained for both visits. It appears by eye that the centroids describe different loci in the x-y plane for each AOR. Pointing is stable mostly within 1-2 tenths of the pixel side during an AOR, then jumps abruptly by up to more than half a pixel when starting a new AOR. The larger-than-usual discontinuities likely make these datasets especially challenging to analyse compared to other Spitzer/IRAC phase-curve observations (e.g. ). §.§ Data detrending We applied the wavelet pixel-ICA technique, which is one of the most efficient for detrending Spitzer/IRAC time series <cit.>. ICA is a blind source separation technique with a wide range of applications, including many astrophysical fields (e.g. ). It performs a linear transformation of input mixed signals into maximally independent components <cit.>. The wavelet pixel-ICA technique uses wavelet-transformed pixel light curves as input for the ICA <cit.>. It is an improvement on the pixel-ICA technique, which instead used pixel light curves in the time domain <cit.>. As in previous papers, here we applied a single-level discrete wavelet transform (DWT) to the pixel light curves, but adopting the Haar wavelet <cit.> instead of the more complex Daubechies-4 one <cit.>. We checked, however, that the choice of wavelet function does not noticeably affect the ICA transform. The adopted ICA algorithm is MULTICOMBI <cit.>, as always. This time we wrapped the original MATLAB source code for use in a Python script. We initially tried to concatenate the pixel light curves from multiple AORs to form a single set of input signals to be transformed with ICA for each visit. <cit.> successfully adopted this approach to detrend Spitzer/IRAC phase curves of WASP-43 b. The same approach failed on the WASP-121 b data presented here, most likely due to larger pointing jumps between consecutive AORs. Therefore, we decided to perform individual ICA transforms for each AOR. We excluded the second AORs from each visit. In fact, these AORs do not contain an astrophysical signal with a well recognizable shape, such as a transit or an eclipse. The lack of morphology makes it difficult to separate the astrophysical component from the instrumental ones. For all other AORs, we identified the first ICA component as the astrophysical one, containing a clear transit or eclipse signal. Following the usual procedure, these astrophysical components were discarded from the light-curve fits, being replaced by an astrophysical light-curve model. The other 24 components of each AOR were attributed to instrumental systematic signals. §.§ Light-curve models Our phase-curve model approximates the planetary flux with a double sinusoid, F_p = c_0 + c_1 cos [2 π ( ϕ' - ϕ'_1 ) ] + c_2 cos [4 π ( ϕ' - ϕ'_2 ) ], where ϕ' = ϕ - Δϕ_ltd is the orbital phase corrected for the light travel delay. 
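Before defining the orbital phase and the light-travel correction entering ϕ', we illustrate the detrending step of Sect. <ref> with a minimal sketch. It uses PyWavelets for the single-level Haar DWT and scikit-learn's FastICA as an accessible stand-in for MULTICOMBI (a MATLAB code), so the separated components will differ in detail from those used in our analysis; the array shapes and names below are ours.

```python
import numpy as np
import pywt                                   # PyWavelets: single-level Haar DWT
from sklearn.decomposition import FastICA     # stand-in for MULTICOMBI

def wavelet_pixel_ica(pixel_lcs, random_state=0):
    """pixel_lcs: (n_pixels, n_frames) array, e.g. the 25 light curves of the
    5x5 array for one AOR (even n_frames assumed). Returns the independent
    components transformed back to the time domain, shape (n_pixels, n_frames)."""
    # single-level Haar DWT of each pixel light curve; the ICA acts on the
    # concatenated (approximation, detail) coefficients
    W = np.array([np.concatenate(pywt.dwt(lc, 'haar')) for lc in pixel_lcs])
    ica = FastICA(n_components=W.shape[0], whiten='unit-variance',
                  max_iter=2000, random_state=random_state)
    S_w = ica.fit_transform(W.T).T            # wavelet-domain components
    half = S_w.shape[1] // 2
    return np.array([pywt.idwt(s[:half], s[half:], 'haar') for s in S_w])
```

The component with a clear transit or eclipse signal is identified as the astrophysical one, as described above. We now return to the quantities entering ϕ'.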
The orbital phase is ϕ = (t - T_0)/P - n, where T_0 is the epoch of transit, P is the orbital period, and n is an integer, usually chosen such that -1 ≤ϕ≤ 1. The light travel delay accounts for the displacements of the planet along the line of sight with respect to inferior conjunction. For a circular orbit, Δϕ_ltd = a sin i [ 1 - cos( 2 πϕ ) ]/(c P), where a and i are the orbital semi-major axis and inclination, and c is the speed of light. We adopted pylightcurve [<https://github.com/ucl-exoplanets/pylightcurve>] <cit.> to model the occultations, which is based on the formalism from <cit.>. We computed the stellar limb-darkening coefficients through ExoTETHyS [<https://github.com/ucl-exoplanets/ExoTETHyS>] <cit.>, using spectral model grids from the library <cit.> and the so-called claret-4 parametrisation <cit.>. §.§ Data fitting We performed similar independent fits on both visits, taken separately. For each visit, we simultaneously fitted the light-curve model and instrumental systematic effects to the raw light curve, discarding the second AOR. For each AOR segment considered, we fit a linear combination of the light-curve model and 24 ICA components attributed to instrumental signals. The scaling factors for the light-curve model and the ICA components of different AORs were independent parameters. The astrophysical parameters were planet-to-star radius ratio (p), orbital period (P), epoch of transit (T_0), impact parameter (b), total transit duration (T_14), orbital semi-major axis (a), and five phase-curve parameters (as in Equation <ref>). These parameters were shared among the AORs of the same visit. Table <ref> reports the Bayesian priors assigned to the 86 free parameters listed above. We set large uniform priors for almost all parameters. The orbital period and semi-major axis are wavelength-independent parameters that are very well known from previous observations, for which we adopted normal priors based on the results from <cit.>. We performed a preliminary optimisation with the Nelder-Mead method <cit.>. The root mean square (rms) of the corresponding residuals was assigned as the error bar to each photometric point; it is typically larger than the nominal error bars. Then we ran emcee <cit.> with 300 walkers and 200,000 iterations. Each walker was initialised with a random value close to the preliminary parameter estimate. The first 50,000 iterations were discarded as burn-in. §.§ Alternative fits We tested fitting a phase-curve model with a single sinusoid, namely, fixing c_2=0 in Equation <ref>. The corresponding results were not preferred, as explained in Section <ref>. We also tried fixing the geometric and orbital parameters to better constrain the difference in transit depth between IRAC passbands, as discussed in Section <ref>. These tests were not used to calculate the final parameters reported in Table <ref>. § RESULTS Figure <ref> shows the best-fit models to the raw light curves and the corresponding residuals. The rms amplitudes of the normalised residuals are 2.45×10^-3 for the 3.6 μm visit and 2.81×10^-3 for the 4.5 μm visit. We estimated them to be ∼28.7% and 6.8% above the photon noise limit. Figure <ref> shows the rms amplitudes of the binned residuals versus the bin size. The 4.5 μm residuals show no significant deviations from the theoretical behaviour of white noise. The 3.6 μm residuals present a modest amount of correlated noise, as is often the case for observations with this IRAC channel (e.g. <cit.>).
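For reference, the sketch below spells out the planetary-flux model of Eqs. (<ref>)-(<ref>) and the emcee configuration described in Sect. <ref>; the occultation model (pylightcurve) and the full 86-parameter bookkeeping are not reproduced here, and the function names and the toy log-probability are ours.

```python
import numpy as np
import emcee

C_LIGHT = 299792458.0          # m/s
DAY = 86400.0                  # s

def planet_flux(t, T0, P, a_m, inc, c0, c1, c2, phi1, phi2):
    """Double-sinusoid phase-curve model with the light-travel delay correction
    for a circular orbit. t, T0, P in days; a_m in metres; inc in radians."""
    phi = (t - T0) / P
    phi -= np.round(phi)                      # one common choice: phase in [-0.5, 0.5]
    dphi = a_m * np.sin(inc) * (1.0 - np.cos(2.0 * np.pi * phi)) / (C_LIGHT * P * DAY)
    phip = phi - dphi
    return (c0 + c1 * np.cos(2.0 * np.pi * (phip - phi1))
               + c2 * np.cos(4.0 * np.pi * (phip - phi2)))

def log_prob(theta, t, y, yerr):
    """Toy Gaussian log-likelihood with implicit flat priors; a real fit also
    includes the occultation model and the per-AOR ICA regressors."""
    model = planet_flux(t, *theta)
    if not np.all(np.isfinite(model)):
        return -np.inf
    return -0.5 * np.sum(((y - model) / yerr) ** 2)

# emcee setup mirroring the text: 300 walkers, 200,000 steps, 50,000 burn-in
# ndim = len(p0_guess); p0 = p0_guess + 1e-4 * np.random.randn(300, ndim)
# sampler = emcee.EnsembleSampler(300, ndim, log_prob, args=(t, y, yerr))
# sampler.run_mcmc(p0, 200000, progress=True)
# chain = sampler.get_chain(discard=50000, flat=True)
```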
§.§ Model selection We compared the phase-curve models with a single or double sinusoid, as described in Sections <ref>–<ref>. We considered the Bayesian information criterion (BIC, ) and the Akaike information criterion (AIC, ) to guide our model selection. For the 3.6 μm observation, the double sinusoid is statistically preferred according to both criteria with ΔBIC=-52 and ΔAIC=-67. We note that | ΔBIC | > 10 (or | ΔAIC | > 10) indicates a very strong evidence in favour of either model, based on the scale by <cit.>. For the 4.5 μm observation, the analogous differences have opposite sign, ΔBIC=18 and ΔAIC=2.7, thus favouring the single sinusoid model. We note that | ΔAIC | ∼ 2 indicates a weak statistical preference for the model with the lowest AIC. Finally, we selected the results obtained with the double sinusoid model for both observations. The choice to adopt the same parametrisation for both observations was taken to ensure homogeneity in their analyses. In fact, we expect the same physical phenomena to be present in observations of the same system at multiple wavelengths, albeit with different relative amplitudes and/or signal-to-noise ratios (S/Ns). Additionally, the double sinusoid parametrisation is more flexible and includes the single sinusoid as a special subcase. Even if the second sinusoid were superfluous to reproduce the 4.5 μm data, the fit should find a null amplitude (c_2 ∼ 0) without biasing the other parameters. Indeed, the fits with single and double sinusoid led to 1σ consistent results for the 4.5 μm observation, and slightly more conservative error bars by up to ∼10% when using the more complete parametrisation. As expected, the differences between the two sets of results for the 3.6 μm observation are more significant, sometimes exceeding the 3σ level. The single sinusoid led to unphysical results for the 3.6 μm phase curve, such as a negative nightside flux within ∼2σ. This issue is overcome by adopting the double sinusoid model. §.§ Transit and phase-curve parameters Table <ref> reports the best-fit parameters and others derived from those. The corresponding corner plots are presented in Appendix <ref>. The transit geometric and orbital parameters estimated independently from the two observations are consistent within 1σ. There is no evidence of different transit depths (p^2) at 3.6 and 4.5 μm within their error bars of 120-130 ppm. We also attempted to fit both light curves with fixed geometric and orbital parameters (b, P and a) to reduce their degeneracies with transit depth. When fixing the above parameters, the differential transit depth slightly increased from 60 to 100 ppm, which is still below the 1σ error bars. From the posterior distributions of the phase-curve coefficients, we numerically calculated the dayside maximum and nightside minimum fluxes (F_day^MAX and F_night^MIN) and their phase offsets from mid-transit and mid-eclipse time (Δϕ_day^MAX and Δϕ_night^MIN). We used the subpackage <cit.> to determine the brightness temperatures corresponding to the measured planetary fluxes. We obtained F_day^MAX = (4.23 ± 0.08 ) × 10^-3 and (5.09 ± 0.09 ) × 10^-3 at 3.6 and 4.5 μm, corresponding to similar brightness temperatures of T_day^MAX = 2670_-40^+55 and 2700_-50^+70 K, respectively. The two phase-curve maxima occur with slight offsets after mid-eclipse, Δϕ_day^MAX = 5^∘.9 ± 1^∘.6 and 5^∘.0_-3.4^+3.1 at 3.6 and 4.5 μm, respectively. 
We could only place an upper limit of F_night^MIN = ( 0.05 ± 0.24 ) × 10^-3 on the 3.6 μm nightside minimum flux, corresponding to T_night^MIN = 710_-710^+270 K. We also measured F_night^MIN = ( 0.71 ± 0.27 ) × 10^-3 at 4.5 μm, corresponding to T_night^MIN = 1130_-160^+130 K. Following the formulation by <cit.>, we estimated the Bond albedo (A_b) and circulation efficiency (ε) from the brightness temperatures. We obtained A_b = 0.37_-0.09^+0.07 and ε = 0.013_-0.013^+0.034 (3.6 μm), and A_b = 0.32_-0.10^+0.08 and ε = 0.077_-0.034^+0.040 (4.5 μm). § DISCUSSION §.§ WASP-121 b atmosphere overview The dayside emission spectrum of WASP-121 b, limited to the Spitzer/IRAC 3.6 and 4.5 μm passbands, is consistent with that of a blackbody at 2680_-45^+60 K (weighted average). The nightside emission is also consistent with that from a blackbody with 1100_-220^+165 K. From this data, there is no evidence of molecular species either in emission (blackbody spectra) or in transmission (constant transit depths). The strong day-night contrast and small peak offsets point towards inefficient heat redistribution in the WASP-121 b atmosphere. Based on the weighted average blackbody temperatures, we report ε = 0.07_-0.04^+0.05. An interesting feature from both phase curves is the indication of a westward hot-spot offset, in the opposite direction to that predicted by most global circulation models (GCM) of hot Jupiter atmospheres (e.g. ). Although they are rare, westward hot-spot offsets have been previously reported for a few hot Jupiters <cit.> and predicted theoretically <cit.>. We performed phase-curve retrievals with the phase-curve plugin <cit.> of 3.1 <cit.>, the latest version of the software <cit.>. For the atmosphere, we assumed three homogeneous regions: hot spot, dayside, and nightside. We computed the emitted flux at given phases using a quadrature integration scheme and fit all the phases for both channels in a single run. Each region is described by a plane-parallel atmosphere composed of 100 layers with pressures ranging from 10 to 10^-6 bar in log scale. In principle, multiwavelength phase-curve observations may constrain the chemistry of exoplanet atmospheres (e.g. ), and thus potentially inform us about their formation and evolution pathways (e.g. ). However, the information content in regard to the chemistry is relatively low and is degenerate in Spitzer data, especially if the thermal structure is also unknown, so we coupled the chemistry between the three regions of the planet and assumed chemical equilibrium. In the retrievals, the values for the metallicity (Z_p) and the carbon-to-oxygen (( C/O)_p) ratio were left fixed. We tested runs with Z_p = 1-10 Z_⊙ and with ( C/O)_p=0.1-1.0. The thermal profiles were described using a two-point profile with two freely moving nodes. In this model, the hot-spot region is defined by its location and size. We initially attempted to recover both parameters from the data but this led to nonphysical solutions. This issue may occur because with Spitzer data only, the hot-spot size is degenerate with the thermal structure. A similar behaviour was found and explored in more detail in <cit.>, even when the HST data are combined with Spitzer data. We therefore fixed the hot-spot size to 40^∘, but left the hot-spot offset as a free parameter. For the nightside region, we modeled clouds using an opaque grey cloud model and fitted for the cloud pressure top deck. 
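As an aside, the conversion from the measured brightness temperatures to the Bond albedo and circulation efficiency quoted above is a simple energy-balance inversion. The sketch below uses the relations T_day^4 = T_0^4 (1-A_b)(2/3 - 5ε/12) and T_night^4 = T_0^4 (1-A_b) ε/4, with the irradiation temperature T_0 = T_eff (R_*/a)^1/2, which is our transcription of the formulation referenced in Sect. <ref> and should be checked against the cited work; the stellar values are indicative placeholders to be taken from Table <ref>.

```python
import numpy as np

T_EFF = 6460.0        # K, stellar effective temperature (indicative, cf. Table 1)
A_OVER_RS = 3.75      # orbital semi-major axis in stellar radii (indicative)

def bond_albedo_and_epsilon(T_day, T_night, T_eff=T_EFF, a_over_rs=A_OVER_RS):
    """Invert (T_day, T_night) -> (A_b, epsilon) in the parametrisation above."""
    T0 = T_eff / np.sqrt(a_over_rs)            # irradiation temperature
    x = (T_day / T0) ** 4
    y = (T_night / T0) ** 4
    eps = 8.0 * y / (3.0 * x + 5.0 * y)
    A_b = 1.0 - x / (2.0 / 3.0 - 5.0 * eps / 12.0)
    return A_b, eps

print(bond_albedo_and_epsilon(2700.0, 1130.0))   # ≈ (0.32, 0.08), cf. the 4.5 μm values
print(bond_albedo_and_epsilon(2670.0, 710.0))    # ≈ the 3.6 μm values
```

Uncertainties follow by propagating the posterior samples of the temperatures through the same relations. We now return to the exploration of the retrieval parameter space.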
The parameter space of this phase-curve model was explored using the MultiNest algorithm <cit.> with 512 live points and an evidence tolerance of 0.5. The priors were chosen to be uninformative, i.e., uniform priors with large bounds. More specifically, the hot-spot offset was allowed to vary between -50^∘ and 50^∘, the temperature of the T-p nodes between 300 K and 6000 K, and the pressures of the T-p nodes as well as the top of the cloud deck were explored on the full extent of the atmosphere. We show in Figure <ref> the Spitzer observations, calculated from the posterior distributions of our parametric fit based on Equation <ref>, and two recovered best-fit atmospheric models. The corresponding retrieved thermal structures are shown in Figure <ref>. While both runs indicate the likely presence of a thermal inversion, the altitude of the inversion cannot be inferred from this data as it is degenerate with the chemistry. Analysing the posterior distributions (see Figure <ref>) of our atmospheric retrievals, we find that clouds are not required to explain the WASP-121 b Spitzer data. The hot-spot offset is consistent between the two Z_p = 1 Z_⊙ and Z_p = 10 Z_⊙ retrievals, around 9^∘ westward. Looking at the residuals in Figure <ref>, we note some discrepancies between our phase-curve models and Spitzer data. The anti-correlated behaviour of 3.6 μm and the 4.5 μm residuals suggest that the hot-spot offset, size, and temperature (shared in our retrievals) might be different between the two observations. This potential difference in the hot-spot parameters could be a consequence of atmospheric temporal variability or other effects that are not accounted for by our analysis, such as the observations probing different pressure regions, or remaining systematic biases from our data reduction. We defer further modelling efforts to future work, given the high level of complexity required to reproduce both observations and difficulty in constraining many atmospheric parameters using just two photometric observations. §.§ Comparison with other observations of the same planet §.§.§ JWST/NIRSpec The WASP-121 b phase curve was recently observed by JWST/NIRSpec using the G395H grating as part of program GO-1729 (P.I. Mikal-Evans, co-P.I. Kataria). This observing mode makes use of the NRS1 and NRS2 detectors, covering the 2.70-3.72 μm and 3.82-5.15 μm wavelength ranges. <cit.> presented the results of their first look analysis of the broadband light curves, integrated over each detector passband. We note that the NRS1 and NRS2 passbands largely overlap with those of Spitzer/IRAC channels 1 and 2, respectively. Hence, it makes sense to compare the results obtained from Spitzer and JWST observations. Figure <ref> shows the comparison between physical parameters from our Spitzer/IRAC data analysis and those based on the JWST/NIRSpec observations reported by <cit.>. Figure <ref> compares the corresponding phase-curve profiles. There is a good agreement between the two sets of parameters, albeit with larger error bars for the Spitzer/IRAC ones. In particular, the two-points dayside emission spectra of WASP-121 b inferred from Spitzer/IRAC or JWST/NIRSpec are both consistent with that of blackbodies. There is an apparent offset of ∼80 K between the two sets of brightness temperatures, which is not statistically significant. The nightside temperatures reported for the Spitzer/IRAC and JWST/NIRSpec passbands have similar trends, the bluer temperatures being ∼200-400 K lower than the redder. 
While the difference for the JWST/NIRSpec passbands is significant at the 12σ level, the statistical significance of the difference is decreased by the order-of-magnitude larger error bars for the Spitzer/IRAC measurements. The phase-curve maxima present different offsets from mid-eclipse, ranging from modest eastward to westward hot-spot positions for JWST/NIRSpec and Spitzer/IRAC measurements, respectively. The reported JWST/NIRSpec offsets are 3.36^∘±0.11^∘ (NRS1) and 2.66^∘±0.12^∘ (NRS2) prior to mid-eclipse. These differences may reveal the variable weather of WASP-121 b <cit.>, or could be caused by instrumental systematic effects <cit.>. The planet-to-star radii ratios obtained in the Spitzer/IRAC passbands are consistent with the JWST/NIRSpec measurements within 1σ. The redder passbands have larger radii ratios at the 12σ level for JWST/NIRSpec. The Spitzer/IRAC measurements present a similar, but smaller, trend, that is not significant due to much larger error bars. §.§.§ Spitzer/IRAC Two other eclipses of WASP-121 b were observed with each of the Spitzer/IRAC channels in 2017, as part of the program ID 13044 (PI: Drake Deming). <cit.> reported lower dayside temperatures of 2490±77 K (3.6 μm) and 2562±66 K (4.5 μm) for WASP-121 b, based on those eclipse observations. The corresponding planet-to-star flux ratios reported by <cit.> are (3.685±0.114)×10^-5 and (4.684±0.121)×10^-5, which are smaller than our measurements by 545 and 406 ppm, respectively. These differences could be caused by the phase-blend effect <cit.>, that was likely neglected by <cit.>. We estimated this effect for the 4.5 μm eclipse using , assuming the dayside and nightside temperatures reported in Table <ref> and the 8.5-hour duration of the Spitzer AORs. Indeed, the resulting phase-blend bias was -391 ppm, which is very similar to the discrepancy between the flux ratio reported by <cit.> and our value (-406 ppm). §.§.§ HST/WFC3 <cit.> analysed two phase curves of WASP-121 b observed with HST/WFC3 using G141 grism, which are spectrally resolved over 1.1-1.7 μm. They reported dayside and nightside spectra with significant deviations from blackbody spectra, which they attributed to emission and absorption of H^- and H_2O. Nonetheless, they adopted the brightness temperatures derived from the blackbody fits for the dayside and nightside hemispheres to estimate the Bond albedo and circulation efficiency of WASP-121 b atmosphere, finding A_b = 0.14 ± 0.08 and ε = 0.29 ± 0.02. We calculated the corresponding brightness temperatures to be T_day=2760 ± 100 K and T_night=1665 ± 65 K (not reported by ). These HST/WFC3 observations suggest significantly lower Bond albedo, higher circulation efficiency and nightside temperatures than those that we obtained from Spitzer/IRAC observations. However, given the different wavelength ranges probed by HST and Spitzer observations, these apparent discrepancies do not necessarily reveal physical odds, but rather the limits of an oversimplified model behind these calculations. <cit.> pointed out that HST/WFC3 brightness temperatures can be overestimated due to neglecting the reflected star light component, namely, interpreting the observed flux from the planet dayside as pure emission. We calculated the reflected light component integrated over the HST/WFC3 G141 passband to be ∼92 ppm in eclipse, assuming a geometric albedo of 0.32 (equal to the Bond albedo from Spitzer/IRAC channel 2). 
Neglecting this component could bias the inferred dayside temperatures by about +50 K, but it should not affect the nightside temperature estimates. <cit.> performed joint retrievals on a suite of emission and transmission spectra of WASP-121 b, based on Spitzer/IRAC and HST/WFC3 observations. They retrieved atmospheric temperatures, weighted by the contribution function, of 2602±53 K for the dayside and 1386_-366^+340 K for the terminator. We note that the terminator temperature is not informed by the nightside spectrum and should be intermediate between the dayside and nightside temperatures. Concerning the phase-curve maxima, <cit.> found modest offsets ahead of mid-eclipse for HST/WFC3 broadband and spectroscopic light curves. Their posterior median for the broadband light-curve fit is Δϕ_day^MAX∼ -6^∘, in the opposite direction of our Spitzer/IRAC measurements. §.§.§ TESS The optical phase curve of WASP-121 b, as obtained from TESS data, has an amplitude of ∼400-500 ppm <cit.>. The former study found two solutions consistent with purely reflected starlight, leading to geometric albedo of ∼0.37, or pure thermal emission with T_day=2870 ± 50 K and T_night < 2200 K (3σ). The latter assumed pure thermal emission with a different stellar template from <cit.>, leading to T_day=3012_-42^+40 K and T_night=2022_-602^+44 K. From the second set of results <cit.>, we calculated A_b=0.05_-0.22^+0.18 and ε=0.40_-0.28^+0.17. We note that infrared observations provide tighter constraints on the atmospheric thermal properties, thanks to their larger phase-curve amplitudes and less reflection. Even neglecting reflection, our Spitzer/IRAC error bars on A_b and ε are 2-8 times smaller than TESS ones. §.§ Comparison with other planets WASP-121 b belongs the class of UHJs, i.e., gas giants with dayside temperature ≳2200 K <cit.>. This temperature is above the condensation threshold of most species, except highly refractory ones such as Al and Ti <cit.>. For this reason, we may expect cloud-free dayside in UHJs <cit.>. The lack of a reflecting cloud layer should also imply low geometric and Bond albedo (≲0.2), as confirmed by optical to near-infrared eclipse measurements <cit.>. We estimated a higher Bond albedo of ≳0.3 from the Spitzer/IRAC phase curves of WASP-121 b at 3.6 and 4.5 μm, although their posteriors are consistent with A_b=0.2 within 2σ. <cit.> highlighted a common trend of measuring systematically higher Bond albedos from thermal phase curves of gas giants (A_b∼ 0.35) compared to the geometric albedos inferred from visible eclipses (A_g∼ 0.1). This trend holds, with a few exceptions, for more recent observations of UHJs (see Table <ref>). The thermal phase curves of UHJs typically have large amplitudes, corresponding to strong day-night contrasts, and small hot-spot offsets. These properties indicate inefficient heat redistribution. We derived ε≲ 0.1 for most UHJs, the lowest values were obtained for WASP-121 b (see Table <ref>). There are no evident trends between the irradiation temperatures of UHJs and the observed thermal phase-curve parameters. § CONCLUSIONS We analysed, for the first time, two thermal phase curves of WASP-121 b taken with Spitzer. Despite these datasets being affected by stronger than usual instrumental systematic effects, we obtained meaningful information on the exoplanet atmosphere. The measured brightness temperatures and transit depths are consistent within 1σ with those obtained from much more precise JWST observations in similar passbands. 
We estimated the Bond albedo and circulation efficiency of the WASP-121 b atmosphere, which are similar to those of other UHJs. However, we measured unusual westward hot-spot offsets, which are significantly different from the JWST measurements. These discrepancies may hint at atmospheric variability or instrumental systematic effects. We further explored the possible thermal profiles using phase-curve retrievals, which are coupled with chemistry. Our analysis confirms the validity of Spitzer phase-curves to infer exoplanet atmospheric properties. More precise, spectrally resolved observations, such as those obtained with JWST, will enable us to better understand their complex behaviour. This work is based on archival data obtained with the Spitzer Space Telescope, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. G. M. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 895525, and from the Ariel Postdoctoral Fellowship program of the Swedish National Space Agency (SNSA). Q. C. is funded by the European Space Agency under the 2022 ESA Research Fellowship Program. We acknowledge the availability and support from the High Performance Computing platforms (HPC) from the Simons Foundation (Flatiron), DIRAC and OzSTAR, which provided the computing resources necessary to perform this work. This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. Additionally, this work utilised the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. § CORNER PLOTS Figures <ref> and <ref> show the corner plots with the posterior distributions of astrophysical parameters obtained from the 3.6 μm and 4.5 μm observations, respectively. Figure <ref> shows the corner plots for the atmospheric parameters retrieved from both observations assuming Z_p = 1 Z_⊙ and Z_p = 10 Z_⊙. aa
http://arxiv.org/abs/2307.01013v1
20230703134436
SynthCal: A Synthetic Benchmarking Pipeline to Compare Camera Calibration Algorithms
[ "Lala Shakti Swarup Ray", "Bo Zhou", "Lars Krupp", "Sungho Suh", "Paul Lukowicz" ]
cs.CV
[ "cs.CV", "cs.GR" ]
SynthCal: A Synthetic Benchmarking Pipeline to Compare Camera Calibration Algorithms Lala Shakti Swarup Ray, Bo Zhou, Lars Krupp, Sungho Suh, Paul Lukowicz Accurate camera calibration is crucial for various computer vision applications. However, measuring camera parameters in the real world is challenging and arduous, and a dataset with ground truth is needed to evaluate the accuracy of calibration algorithms. In this paper, we present SynthCal, a synthetic camera calibration benchmarking pipeline that generates images of calibration patterns to measure and enable accurate quantification of calibration algorithm performance in camera parameter estimation. We present a SynthCal-generated calibration dataset with four common patterns, two camera types, and two environments with varying view, distortion, lighting, and noise levels. The dataset evaluates single-view calibration algorithms by measuring reprojection and root-mean-square errors for identical patterns and camera settings. Additionally, we analyze the significance of different patterns using Zhang's method, which estimates intrinsic and extrinsic camera parameters with known correspondences between 3D points and their 2D projections in different configurations and environments. The experimental results demonstrate the effectiveness of SynthCal in evaluating various calibration algorithms and patterns. camera calibration, benchmarking, synthetic dataset, pattern recognition § INTRODUCTION When we capture an image using a camera, the captured digital image can differ from the real-world scene in terms of perspective, distortion, color, resolution, and other visual properties. This is because real-world scenes are three-dimensional and continuous, while digital images captured by a camera are two-dimensional and discrete, and contain distortion and other imperfections. To minimize these differences and improve the accuracy of image-based computer vision tasks, camera calibration is essential. Camera calibration involves estimating the camera parameters that describe its intrinsic and extrinsic characteristics and that accurately map points in the 3D world to their corresponding 2D image coordinates. Once the camera is calibrated, it can accurately measure distances, angles, and sizes of objects in the 3D world and perform other image-based computer vision tasks such as object tracking <cit.>, 3D reconstruction <cit.>, augmented reality <cit.>, medical imaging <cit.>, and autonomous driving <cit.>. Geometric camera calibration <cit.> is one of the most widely used calibration methods. It involves using a calibration target with known geometric features, such as a calibration grid, to estimate the camera parameters. The target is captured from different angles, and the resulting images are used to estimate the camera parameters that minimize the difference between the predicted and observed image points. However, creating real camera calibration data with ground truth for calibration algorithms is challenging because it is difficult to measure the camera position and rotation accurately, and the camera's intrinsic parameters can change with the zoom level, focus distance, or temperature. Additionally, aging of the camera or misalignment of its components can also affect the intrinsic parameters.
Moreover, cameras can have different intrinsic parameters, even if they are of the same make and model, because of manufacturing tolerances, assembly errors, and differences in lens quality. Observing the calibration pattern in the image along with the previous knowledge of the pattern, we can determine the intrinsic and extrinsic parameters using various calibration algorithms, such as Zhang's calibration method <cit.>, the Tsai's method <cit.>, or Bouguet method <cit.>. Previous works have tried to compare different camera calibration algorithms <cit.>. However, there is a need for a benchmarking procedure that can provide a quantitative comparison of calibration algorithms due to the unknown ground truth of the calibration dataset. To overcome these problems, we introduce the SynthCal pipeline, which generates a synthetic camera calibration dataset with user-defined intrinsic camera parameters while precisely measuring the extrinsic camera parameters. It enables the selection of the optimal camera calibration algorithm for specific configurations by considering all intrinsic, extrinsic, and distortion parameters. It also ensures that lighting conditions and noise are identical for the different captured datasets for accurate comparison. The idea of generating synthetic calibration data has been previously applied in other works, such as sports-based synthetic calibration <cit.> and evaluating closed-form solutions of principal line calibration <cit.>, but not necessarily for comparing calibration algorithms. Our main contributions can be summarized as follows: * We present a pipeline to generate a camera calibration dataset with ground truth parameters and select the optimal camera calibration algorithm for the specific configurations, as depicted in <ref>. * We evaluate the proposed pipeline on three different camera calibration algorithms and four different calibration patterns using a SynthCal-generated dataset with 1016 images, two distinct camera configurations, and two different lighting and noise conditions. § PROPOSED METHOD We created a modular web-based interface with OpenCV <cit.> and Blender API <cit.> in the back-end to generate a synthetic camera calibration dataset with ground truth which has functionalities to create different camera calibration patterns, simulate a camera inside Blender using the light-field analysis add-on <cit.>, render the camera calibration pattern from various positions and orientations, add radial distortions while establishing the camera's intrinsic, extrinsic and distortion parameters to formulate the ground truth. We used an OpenCV script to generate geometric patterns that take input pattern type, and pattern attributes to generate a PNG image. Our script allows us to create checkerboard patterns (Ch), symmetric circular patterns (Sc), asymmetric circular patterns (Ac), and Charuco<cit.> patterns (Cu) of different configurations as shown in <ref>. Let K be the intrinsic matrix of the camera, which includes the parameters that describe the internal configuration of the camera, such as the focal length (f_x, f_y) and principal point (c_x, c_y): K = [ f_x 0 c_x; 0 f_y c_y; 0 0 1 ] We used a Blender python API and a light-field add-on to create a synthetic camera that takes camera attributes (f_x, f_y) and (c_x, c_y) to create a camera configuration file for simulating the camera inside Blender. To capture the calibration pattern for dataset creation, we moved the pattern in a path resembling the shape of a conical spring, as depicted in <ref>. 
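For illustration, a checkerboard image and an intrinsic matrix K of the form above can be produced with a few lines of NumPy/OpenCV; the sketch below is not the actual SynthCal script, and the pattern size and camera values are example settings taken from the configurations described in the dataset section.
import numpy as np
import cv2
def make_checkerboard(rows=9, cols=12, square_px=60):
    # Alternating 0/1 tiles expanded to square_px x square_px blocks.
    tiles = (np.indices((rows, cols)).sum(axis=0) % 2).astype(np.uint8)
    return np.kron(tiles, np.ones((square_px, square_px), dtype=np.uint8)) * 255
def intrinsic_matrix(fx, fy, cx, cy):
    # Pinhole intrinsic matrix K as defined above.
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
cv2.imwrite("checkerboard_9x12.png", make_checkerboard())
K = intrinsic_matrix(3000.0, 3000.0, 2048.0, 1536.0)   # high-resolution rectilinear example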
The center of the calibration pattern is always in the camera's direction, so the planar pattern can be captured in different angles, sizes, and orientations and have consistency without going out of the camera frame. Let R be the rotation matrix that describes the orientation of the camera in the global coordinate system, and let t be the translation vector that describes the position of the camera in the world coordinate system: P = [ R t; 0 1 ] The extrinsic matrix P combines the rotation matrix and the translation vector. R is, and t are evaluated by extracting the global position and orientation of the camera and calibration pattern at each frame. The camera parameters can also be described using the distortion parameters, which describe the deviations from the ideal imaging system. The distortion parameters can be represented as a vector d = [k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6] where k_1, k_2, k_3, k_4, k_5, k_6 are radial distortion coefficients and p_1, p_2 are tangential distortion coefficients. The distortions are added later using Blender undistorted node by setting up a tracking scene in Blender and defining K and d. The final equation for mapping X a 3D point in the global coordinate system to x a 2D point in the image plane, including the distortion parameters, can be written as: x = K [R | t] X + d(x_d/f_x, y_d/f_y) Where x_d and y_d are the distorted image coordinates, the distortion model d() maps the distorted image coordinates to the corrected image coordinates. The captures are saved in PNG formats, while camera parameters are saved as NumPy arrays. § RESULTS AND EVALUATION §.§ Dataset We created a dataset of four widely used distinct pattern types that are a 9×12 checkerboard pattern with a checker width of 15 mm, one 10×10 symmetric circle pattern with 7 mm circle diameter, and 15 mm circle spacing, one 9×10 asymmetric circle pattern with 9 mm diameter, and 22 mm diagonal spacing and 9×12 Charuco pattern checker width of 15 mm and ArUco dictionary <cit.> of 7×7. Two distinct camera configurations representing a high-resolution rectilinear lens with focal length (3000, 3000), principal point (2048, 1536) with distortion parameters [ 0.05, 0.02, 0.001, 0, 0, 0, 0, 0] and a low resolution wide, angle lens with focal length (600, 450), principal point (320, 240) with distortion parameters [ 0.5, 0.1, 0.03, 0, 0, 0, 0, 0] are simulated for capturing the patterns. Skew and tangential distortion are kept at zero for both camera configurations. The extrinsic parameters R a 3×3 identity matrix and t a 3×1 zero vector are calculated using vector calculation with the relative position and orientation of the camera and target pattern to establish the ground truth. Two different external lighting conditions are used while rendering, one with uniform light across the scene without noise and another with Directional lights with additive Gaussian noise in the camera captures. We created eight data configurations with 127 captures with camera intrinsic and extrinsic matrix for each configuration as specified in the <ref>. §.§ Evaluation & Analysis We used RMS Reprojection Error (RPE_RMS) as a metric to compare the algorithms and calibration patterns which can be defined as: RPE_RMS = √(1/N∑_i=1^N𝐱_i - 𝐱̂_i ^2) where N is the number of points, 𝐱_i is the observed image point in the captured image, and 𝐱̂_i is the corresponding projected image point using the estimated intrinsic and extrinsic parameters from the camera calibration. 
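A minimal sketch of how RPE_RMS defined above can be evaluated with OpenCV, given the detected image points and the parameters estimated by a calibration routine such as cv2.calibrateCamera (variable names are illustrative):
import numpy as np
import cv2
def rms_reprojection_error(obj_pts, img_pts, rvecs, tvecs, K, dist):
    # RPE_RMS = sqrt( (1/N) * sum_i || x_i - x_hat_i ||^2 ) accumulated over all views.
    sq_err, n_pts = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = imgp.reshape(-1, 2) - proj.reshape(-1, 2)
        sq_err += np.sum(diff ** 2)
        n_pts += len(diff)
    return np.sqrt(sq_err / n_pts)
# obj_pts / img_pts hold the per-view 3D corner coordinates and detected 2D corners
# (e.g. from cv2.findChessboardCorners); K, dist, rvecs, tvecs come from
# cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None).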
We also calculated accuracy by comparing the estimated intrinsic and extrinsic parameters of the camera to the ground truth values using the Root Mean Square Error (RMSE), defined as: RMSE = √(1/L∑_i=1^L( X_i - X̂_i )^2) where L is the number of parameters being estimated, X_i is the ground truth value for the i-th parameter, and X̂_i is the estimated value for the i-th parameter. We evaluated three different single-view camera calibration algorithms for both rectilinear and wide-angle camera configurations with the dataset created using the 9×12 checkerboard pattern; the results are listed in <ref>. To compare different camera calibration patterns, we calculated both RPE_RMS and RMSE for all eight available configurations using Zhang's method, as listed in <ref>. Based on these results, we observed that RPE_RMS and RMSE are low for the high-resolution rectilinear camera with uniform lighting and no additive noise, unlike the low-resolution wide-angle camera with directional light, Gaussian noise, and high distortion factors. Both scores follow a similar trend and verify that our work aligns with the pre-established effectiveness of different calibration methods in different circumstances <cit.>. We also observed that center-based patterns are more efficient than edge-based patterns in austere environments. However, increased complexity makes their performance worse than that of the edge-based patterns. The Charuco pattern, which obtains the best score, proves its robustness to noise compared to the other patterns. § CONCLUSION In this paper, we presented SynthCal, a comprehensive pipeline for generating customized camera-specific calibration datasets that can be used to benchmark different camera calibration algorithms and patterns. By introducing ground truth parameters, the proposed pipeline enables the comparison of different calibration algorithms using not only the pixel-wise reprojection error but also a new RMSE of the intrinsic and extrinsic parameters. This allows us to select the optimal calibration strategy for any given camera configuration. We generated calibration datasets with four distinct calibration patterns and evaluated three calibration methods for two camera configurations. The quantitative results are consistent with previous works, but the proposed approach has a broader scope. In future work, the proposed SynthCal pipeline can be easily modified for multi-view and non-planar calibration algorithms for any camera type. § ACKNOWLEDGMENTS The research reported in this paper was supported by the BMBF (German Federal Ministry of Education and Research) in the project VidGenSense (01IW21003).
http://arxiv.org/abs/2307.00906v1
20230703100956
A data-driven kinetic model for opinion dynamics with social network contacts
[ "Giacomo Albi", "Elisa Calzola", "Giacomo Dimarco" ]
physics.soc-ph
[ "physics.soc-ph", "cs.NA", "math.NA" ]
http://arxiv.org/abs/2307.01570v1
20230704084801
Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction
[ "Vu-Duc Ngo", "Tuan-Cuong Vuong", "Thien Van Luong", "Hung Tran" ]
cs.CR
[ "cs.CR", "cs.AI" ]
Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction Vu-Duc Ngo, Tuan-Cuong Vuong, Thien Van Luong, and Hung Tran Vu-Duc Ngo is with Research and Development Center, Corporation, Hanoi 11312, and also with the School of Electronics and Electrical Engineering, Hanoi University of Science and Technology, Hanoi 11657, Vietnam. (email: [email protected]). Thien Van Luong, Tuan-Cuong Vuong, and Hung Tran are with the Faculty of Computer Science, Phenikaa University, Hanoi 12116, Vietnam (e-mail: [email protected], {thien.luongvan, hung.tran}@phenikaa-uni.edu.vn) Corresponding author: Hung Tran August 1, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Internet of things (IoT) has been playing an important role in many sectors, such as smart cities, smart agriculture, smart healthcare, and smart manufacturing. However, IoT devices are highly vulnerable to cyber-attacks, which may result in security breaches and data leakages. To effectively prevent these attacks, a variety of machine learning-based network intrusion detection methods for IoT networks have been developed, which often rely on either feature extraction or feature selection techniques for reducing the dimension of input data before being fed into machine learning models. This aims to make the detection complexity low enough for real-time operations, which is particularly vital in any intrusion detection systems. This paper provides a comprehensive comparison between these two feature reduction methods of intrusion detection in terms of various performance metrics, namely, precision rate, recall rate, detection accuracy, as well as runtime complexity, in the presence of the modern UNSW-NB15 dataset as well as both binary and multiclass classification. For example, in general, the feature selection method not only provides better detection performance but also lower training and inference time compared to its feature extraction counterpart, especially when the number of reduced features K increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when K is very small, such as K=4. Additionally, feature extraction is less sensitive to changing the number of reduced features K than feature selection, and this holds true for both binary and multiclass classifications. Based on this comparison, we provide a useful guideline for selecting a suitable intrusion detection type for each specific scenario, as detailed in Tab. <ref> at the end of Section <ref>. Note that such the comparison between feature selection and feature extraction over UNSW-NB15 as well as theoretical guideline have been overlooked in the literature. Intrusion detection, UNSW-NB15, feature selection, feature extraction, PCA, machine learning, internet of things, runtime, binary/multiclass classification, NIDS, IoT. 
§ INTRODUCTION Internet of Things (IoT) has recently witnessed an explosive expansion in a broad range of daily life and industrial applications <cit.>, such as healthcare, smart homes, smart cities, smart energy, smart agriculture, and intelligent transportation. The IoT networks aim to provide internet connections for transferring data among massive IoT devices, such as interconnected sensors, drones, actuators, smart vehicles and smart home appliances <cit.>, using either wired or wireless communications. However, most of these IoT devices are low-cost, low-power and limited-resource, making them highly vulnerable to cyber attacks as well as intrusive activities. Therefore, it is vital to develop network intrusion detection systems (NIDS) that can promptly and reliably identify and prevent malicious attacks to IoT networks. For this, a wide range of machine learning-based intrusion detection techniques have been designed for IoT, along with a number of public network traffic datasets <cit.>. These datasets often contain a large number of features, in which many are irrelevant or redundant, which adversely affect both the complexity and accuracy of machine learning algorithms. Thus, many feature reduction methods have been developed for NIDS, in which feature selection and feature extraction are two of the most popular ones <cit.>, as discussed next.[Note that several recent works that apply deep learning and blockchain to secure IoT networks can be found in <cit.>, in the fields of healthcare system, unmanned aerial vehicle and Android malware. ] In NIDS, feature selection has been widely used for reducing the dimensionality of original traffic data. For example, in <cit.>, a mutual information (MI)-based feature selection algorithm was proposed in combination with a classifier called least square support vector machine, which achieves higher accuracy and lower runtime complexity than the existing schemes, over three datasets, namely, KDD99 <cit.>, NSLKDD <cit.> and Kyoto 2006+ <cit.>. Before that a MI-based scheme was also proposed for NIDS in <cit.>, which however suffers from higher computational complexity than the approach in <cit.>. Additionally, several approaches that rely on genetic algorithm (GA) as a search strategy to select the best subset of features can be found in <cit.>. These methods provide lower false alarm rates than the baselines, where UNSW-NB15 <cit.> and KDD99 <cit.> datasets are used. In <cit.>, a hybrid feature selection approach, which relies on the association rule mining and the central points of attribute values, was developed, showing that UNSW-NB15 dataset achieves a better evaluation than NSLKDD. In <cit.>, another hybrid feature selection method that comprises particle swarm optimization (PSO), ant colony algorithm, and GA was proposed, learning to better detection performance than the baselines such as GA <cit.>, in the presence of both NSLKDD and UNSW-NB15 datasets. In <cit.>, a Pigeon inspired optimizer was used for selecting features of NIDS, which achieves higher accuracy than the PSO <cit.> and hybrid association rules methods <cit.>. Note that the aforementioned feature selection schemes often suffer from high computational cost, especially for those relying on GA, PSO or machine learning-based classifiers. For this, a correlation-based feature selection method that offers low computational cost was investigated for NIDS over KDD99 and UNSW-NB15 datasets in <cit.>, taking the correlation level among features into account. 
Recently, this correlation-based method was combined with ensemble-based machine learning classifiers to significantly improve the accuracy of NIDS <cit.>, at the cost of higher complexity. Hence, aiming at real-time and low-latency attack detection solutions, this work focuses more on the correlation-based feature selection method.[Note that several matrix factorization-based dimensionality reduction methods were developed for gene expression analysis in <cit.>.] Unlike feature selection, which retains a subset of the original features in NIDS, feature extraction attempts to compress a large number of original features into a low-dimensional vector so that most of the information is retained. There are a number of feature extraction techniques that have been applied for reducing data dimension in NIDS, such as principal component analysis (PCA), linear discriminant analysis (LDA), and neural network-based autoencoders (AE). For instance, in <cit.>, PCA was applied to significantly reduce the dimension of the KDD99 dataset, improving both the accuracy and speed of NIDS, where a support vector machine was used for attack classification. Then, several variants of PCA were adapted to intrusion detection, such as hierarchical PCA neural networks <cit.> and kernel PCA with GA <cit.>, which can enhance the detection precision for low-frequency attacks. Some applications of PCA to recent network traffic datasets such as UNSW-NB15 and CICIDS2017 <cit.> can be found in <cit.>. In addition to PCA, LDA was also employed as a feature reduction method for NIDS in <cit.>, which remarkably reduces the computational complexity of NIDS. Then, in <cit.>, both PCA and LDA were combined into a two-layer dimension reduction, which is capable of reliably detecting low-frequency malicious activities, such as User to Root and Remote to Local, over the NSLKDD dataset. To further improve the efficiency of feature extraction in NIDS, AE-based neural networks were used in a range of research works <cit.>. In particular, a stacked sparse AE approach was developed in <cit.> to conduct a non-linear mapping between high-dimensional data and low-dimensional data over the NSLKDD dataset. In <cit.>, a deep stacked AE was used to noticeably reduce the number of features to 5 and 10 for binary and multiclass classification, respectively, leading to better accuracy than the previous methods. Additionally, a number of AE architectures based on long short-term memory (LSTM) were developed for dimensionality reduction of NIDS, such as variational LSTM <cit.> and bidirectional LSTM <cit.>, which can efficiently address imbalance and high-dimensionality problems. Note that these AE-based methods suffer from a high computational cost compared to PCA and LDA, in both the training and testing phases. To address this issue, a network pruning algorithm has recently been proposed in <cit.> to considerably lower the complexity of AE structures in extracting features for NIDS. In <cit.>, a network architecture using an autoencoder based on convolutional and recurrent neural networks was proposed to extract spatial and temporal features without human feature engineering. It is worth noting that most of the aforementioned papers have focused on either improving the detection accuracy or reducing the computational complexity of NIDS, by using machine learning-based classifiers in combination with feature engineering methods such as feature selection and feature extraction for reducing data dimensionality.
However, a comprehensive comparison between these two feature reduction methods has been overlooked in the literature. Our paper appears to address this gap. In particular, we first provide an overview of NIDS, with a focus on the phase of feature reduction, where feature extraction with PCA and feature selection with correlation matrix are the two promising candidates for realistic low-latency operations of NIDS. Then, using the modern UNSW-NB15 dataset, we thoroughly compare the detection performance (precision, recall, F1-score) as well as runtime complexity (training time and inference time) of these two methods, taking into account both binary and multiclass classifications as well as the same number of selected/extracted features denoted as K. Based on our extensive experiments, we found that feature selection generally achieves higher detection accuracy and requires less training and inference times when the number of reduced features K is large enough, while feature extraction outperforms feature selection when K gets smaller, such as K=4 or less. Furthermore, in order to gain a deeper insight into detection behaviors of both methods, we investigate and compare their accuracy for each attack class when varying K, based on their best machine learning classifiers, which revealed that feature extraction is not only less sensitive to varying the number of reduced features but also capable of detecting more diverse attack types than feature selection. Additionally, both tend to be able to detect more attacks, i.e., Abnormal classes, when having more features selected or extracted. Relying on such comprehensive observations, we provide a theoretical guideline for selecting an appropriate intrusion detection type for each specific scenario, as detailed in Tab. <ref> at the end of Section <ref>, which is, to the best of our knowledge, not available in the literature. The rest of this paper is organized as follows. Section II discusses machine learning-based network intrusion detection methods for IoT networks. The overview of UNSW-NB15 dataset and data pre-processing are explained in Section III. Section IV provides the experimental results and discussion. Finally, Section V concludes this paper. § MACHINE LEARNING-BASED NETWORK INTRUSION DETECTION METHODS In this section, we describe an overview of a network intrusion detection system (NIDS) based on machine learning, followed by details on the two major feature reduction methods, namely, feature selection and feature extraction. §.§ Overview of NIDS A NIDS consists of three major components, namely, data pre-processing, feature reduction, and attack classification, as illustrated in Fig. <ref>. In particular, in the first phase, the raw data is denoted as the dataframe 𝐙, whose features may include unexpected or non-numeric values, such as null or nominal. 𝐙 is pre-processed in order to either replace these unexpected values with valid ones or transform them to the numeric format using one-hot encoding. Several features that do not affect detection performance, such as the source IP address and the source port number, are dropped out. Furthermore, depending on the classifier we use for identifying attacks, we may use the normalization technique, for example, to constrain the values of all features, i.e., the elements of the output vector of the first phase 𝐗 in Fig. <ref>, to range from 0 to 1. We will discuss this in detail in Section <ref> when presenting UNSW-NB15 dataset. 
As such, after the first phase, the pre-processed data 𝐗∈ℝ^D× N is likely to have much more features than the original data 𝐙, particularly due to the use of one-hot encoding, where D is the number of dimensions, or equivalently, the number of features of 𝐗, and N is the number of data samples. For example, when UNSW-NB15 dataset is used, the dimension of data increases from 45 to nearly 200, which is too large for classification techniques to quickly recognize the attack type. In order to address this fundamental issue, in the second phase, we need to reduce the number of features that will be used for the attack classification phase (the last phase in Fig. <ref>). For this, two feature reduction methods called feature selection and feature extraction are widely used to either select or extract a small number of most important features from pre-processed traffic data. This procedure also helps to remove a large amount of unnecessary features, which not also increase the complexity of NIDS, but also degrade its detection performance, as will be illustrated in experimental results in Section <ref>. Herein, the output data of the feature reduction block is denoted as vector 𝐔∈ℝ^K× N in Fig. <ref>, which is expected to have a much lower dimension than 𝐗, i.e., K≪ D, while retaining its most important information. Finally, in the third phase of NIDS, a number of binary and multiclass classification approaches based on machine learning, such as decision tree, random forest and multilayer perception neural networks, are employed to detect the attack type. Relying on attack detection results, the system administrators can promptly make a decision to prevent malicious activities, ensuring the security of IoT networks. Here, note that the detection performance and latency of a NIDS strongly depend on which classifier and which feature reduction method it employs. Therefore, in this contribution, we comprehensively investigate detection performance (in terms of recall, precision, F1-score) and latency (in terms of training time and inference time) of different detection methods in presence of both feature selection and feature extraction as well as different machine learning classifiers. We also focus more on the comparison between these two feature reduction methods, which will be described in detail in the following subsections. §.§ Feature selection There are a number of feature selection techniques used in intrusion detection, namely, information gain (IG) <cit.> and feature correlation <cit.>. In this work, we focus on using feature correlation for selecting important features, since this method has been shown to achieve competitive detection accuracy and complexity compared to other selection counterparts. Using this correlation-based method, we aim to select features that are most correlated to other features based on the correlation matrix calculated from the training dataset. More specifically, the correlation coefficient between feature Ω_1 and feature Ω_2 is calculated based on the numeric pre-processed training dataset 𝐗 as follows <cit.>: 𝒞_Ω_1,Ω_2=∑_i=1^N(α_i-E_Ω_1)(β_i-E_Ω_2)/√(∑_i=1^N(α_i-E_Ω_1)^2).√(∑_i=1^N(β_i-E_Ω_2)^2), where α_i and β_i are the values of these two features, E_Ω_1=∑_i=1^Nα_i/N and E_Ω_2=∑_i=1^Nβ_i/N are their means over N training data samples. Note that after preprocessing the raw data 𝐙 to obtain 𝐗, all features of 𝐗 are now numeric, i.e., α_i and β_i are numeric, making (<ref>) applicable to process. 
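Since the coefficient defined above is the standard Pearson correlation computed over the N training samples, the full D×D matrix can be obtained in one call; a minimal sketch, assuming the pre-processed training data is held in a pandas DataFrame with features as columns:
import numpy as np
import pandas as pd
def correlation_matrix(X_train_df: pd.DataFrame) -> pd.DataFrame:
    # D x D matrix with entries c_ij = corr(feature_i, feature_j).
    return X_train_df.corr(method="pearson")
# Equivalent in NumPy for the D x N layout used above (features as rows):
# C = np.corrcoef(X_train)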
By doing this, we obtain a D× D correlation matrix 𝐂, whose elements are given by c_ij=𝒞_Ω_i,Ω_j for i,j=1,2,...,D. The average correlation of feature Ω_i to other features is computed as follows: C_i=∑_j=1^Dc_ij/D, where c_ii=1 for j=i and c_ij∈ [-1;1] for j≠ i. Note that the self-correlation coefficient c_ii does not affect selection results, since it contributes the same amount to all C_i for i=1,2,...,D. Then, using a suitable threshold, as will be detailed in Section <ref>, we are able to select K most important features corresponding to K largest elements C_i. It is worth noting that we only need to calculate such feature correlation in the training phase, while in the testing phase, we simply pick up K features from the high-dimensional data 𝐗 to form the reduced-dimensional data 𝐔 in Fig. <ref>. This does not require much computational resource when compared with the feature extraction method, which is presented next. §.§ Feature extraction Principal component analysis (PCA) <cit.> and autoencoder (AE) <cit.> are the two major feature extraction methods used in the NIDS. Different from feature selection, whose selected features are identical to those appearing in the original data, these feature extraction techniques compress the high-dimensional data 𝐗 into the low-dimensional data 𝐔 using either a projection matrix or an AE-based neural network learned from training dataset. Note that the AE approach usually suffers from high computational complexity of a deep neural network (DNN), leading to higher latency than the PCA. Thus, in this work, we concentrate on the PCA-based feature extraction approach in order to fulfill a strict requirement on the latency of the NIDS for promptly preventing severe cyber attacks. In what follows, we introduce the procedure of producing the D× K projection matrix 𝐖 in the training phase, and how to utilize this matrix in the testing phase. In particular, based on the pre-processed training data 𝐗 of N samples, we normalize it by subtracting all samples of 𝐗 by its mean over all training samples, i.e., the normalized data is given as follows: 𝐗̂=𝐗-𝐗̅, where 𝐗̅ is the mean vector. Then, we compute the D× D covariance matrix of training data as follows: 𝐑=1/N𝐗̂𝐗̂^T. Based on this, we determine its eigenvalues and eigenvectors, from which, we select K eigenvectors corresponding to K largest eigenvalues for constructing the D× K projection matrix 𝐖. Herein, these K eigenvectors are regarded as the principal components that create a subspace, which is expected to be significantly close to the normalized high-dimensional data 𝐗̂. Finally, the compressed data is determined by 𝐔=𝐖^T𝐗̂, which now has the size of K× N instead of D× N of the original data. In the testing phase, for each new data point 𝐱_i∈ℝ^D, its dimension is reduced using PCA according to 𝐮_i=𝐖^T(𝐱_i-𝐗̅). This indicates that the output of the training phase of PCA includes both the projection matrix 𝐖 and the mean vector of all training samples 𝐗̅. It should be noted that such projection matrix calculation would be computationally expensive, particularly when D and K are large. § OVERVIEW OF UNSW-NB15 DATASET We now present some key information about UNSW-NB15 dataset, which will be used in our experiments in Section <ref> to compare between feature selection and feature extraction. Then, the data pre-processing for this dataset is also discussed. 
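Both reduction steps can be mirrored directly in NumPy, as in the sketch below (an illustration rather than the exact implementation used in the experiments): the selection step keeps the K features with the largest average correlation C_i, and the PCA step builds the projection matrix W from the top-K eigenvectors of the training covariance.
import numpy as np
def select_top_k_features(C, k):
    # Indices of the k features with the largest average correlation C_i.
    avg_corr = C.mean(axis=1)             # C_i = (1/D) * sum_j c_ij
    return np.argsort(avg_corr)[::-1][:k]
def pca_fit(X_train, k):
    # X_train is D x N pre-processed data; returns W (D x k) and the mean vector.
    mean = X_train.mean(axis=1, keepdims=True)
    Xc = X_train - mean                   # normalized data X_hat
    R = Xc @ Xc.T / X_train.shape[1]      # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]           # eigenvectors of the k largest eigenvalues
    return W, mean
def pca_transform(X, W, mean):
    # Reduce D x M data to K x M: U = W^T (X - mean).
    return W.T @ (X - mean)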
§.§ Key information of UNSW-NB15 dataset UNSW-NB15 dataset was first introduced in <cit.>, which offers better real modern normal and abnormal synthetical network traffic compared with the previous NIDS datasets such as KDD99 <cit.> and NSLKDD <cit.>. A total of 2.5 million records of data are included in the UNSW-NB15 dataset, in which there are one normal class and nine attack classes: Analysis, Backdoor, DoS, Exploits, Fuzzers, Generic, Reconnaissance, Shellcode, and Worms. Flow features, basic features, content features, time features, additional generated features, and labeled features are six feature groups, which consist of a total of 49 features in the original data <cit.>. However, in this work, we use a 10% cleaned dataset of UNSW-NB15, which includes a training set of 175,341 records and a test set of 82,332 records. There are a few minority classes with proportions of less than 2%, including Analysis, Backdoor, Shellcode, and Worms (see Fig. <ref> and Fig. <ref>). In the 10% dataset, some unrelevant features were removed, such as scrip (source IP address), sport (source port number), dstip (destination IP address), and dsport (destination port number). Therefore, the number of features was reduced to 45, including 41 numerical features and 4 nominal features. §.§ Pre-processing dataset As mentioned above, the 10% dataset of UNSW-NB15 has 45 features, including 41 numerical features and 4 nominal features. We remove the id feature in numerical features, since it does not affect the detection performance. The attack_cat nominal feature that contains the names of attack categories is also removed. Thus, there are 3 remaining helpful nominal features, namely, proto, service, state. In addition, null values appearing in the service feature are treated as 'other' type of service. One-hot encoding is used for transforming nominal features, i.e., proto, service, state, to numerical values. For example, assume that the proto feature has a total of 3 different values, namely, A, B, C, then its one-encoding will result in 3 numerical features, namely, proto_A, proto_C, proto_C, whose values are 0 or 1, as illustrated in Tab. <ref>. As a result, after pre-processing data, the number of features will increase from 45 features in Z to approximately 200 features in U (see Fig. <ref>), where many of them are not really helpful in classifying attacks. Therefore, it is necessary to reduce such a large number of features to a few of the most important features, which allows to reduce the complexity of machine learning models in the classification phase. Finally, we note that when feature extraction is used, we normalize the input feature with the min-max normalization method <cit.> to improve the classification accuracy, while we do not use that data normalization for feature selection, since it does not improve the performance. § EXPERIMENTAL RESULTS AND DISCUSSION We now present extensive experimental results for investigating the performance of the NIDS using both feature selection and feature extraction methods described in Section <ref>, in combination with a range of machine learning-based classification models. More particularly, the performance metrics used for comparison include recall (R), precision (P), F1-score, training time and inference time, which will be explained in detail in Subsection <ref>. Both binary and multiclass classifications are considered. We also investigate the accuracy for each attack class to provide an insight into the behaviors of different detection methods. 
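A minimal pandas/scikit-learn sketch of the pre-processing described above is given below; the column names follow the UNSW-NB15 CSV files, the handling of the '-' placeholder in the service column is our assumption about how missing values are encoded, and the min-max scaling is only applied on the feature-extraction branch.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
def preprocess(train_df, test_df, normalize):
    # Drop unused columns, one-hot encode nominal features, optionally scale to [0, 1].
    frames = []
    for df in (train_df, test_df):
        df = df.drop(columns=["id", "attack_cat"], errors="ignore")
        df["service"] = df["service"].replace("-", "other").fillna("other")
        frames.append(df)
    combined = pd.concat(frames, keys=["train", "test"])
    combined = pd.get_dummies(combined, columns=["proto", "service", "state"])
    X_train = combined.xs("train").drop(columns=["label"])
    X_test = combined.xs("test").drop(columns=["label"])
    y_train, y_test = combined.xs("train")["label"], combined.xs("test")["label"]
    if normalize:   # used for the feature-extraction (PCA) branch only
        scaler = MinMaxScaler().fit(X_train)
        X_train = pd.DataFrame(scaler.transform(X_train), columns=X_train.columns)
        X_test = pd.DataFrame(scaler.transform(X_test), columns=X_test.columns)
    return X_train, X_test, y_train, y_test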
Last but not least, based on our extensive comparison between feature selection and feature extraction, we provide a helpful guideline on how to choose an appropriate detection technique for each specific scenario. §.§ Implementation setting §.§.§ Computer configuration The configuration of our computer, its operating system as well as the range of Python packages used for implementing the intrusion detection algorithms in this work are detailed in Tab. <ref>. §.§.§ Evaluation Metrics We consider the following performance metrics: precision, recall, F1-score, as well as training time and inference time. In particular, the F1-score is calculated based on precision and recall as follows: F1-score = (2×precision×recall)/(precision+recall), which is the harmonic mean of precision and recall. As shown in Fig. <ref>, the two feature reduction methods considered in this work go through the same data pre-processing step, so we do not take the time required for this step into account when estimating their training and inference time. Particularly, the training time consists of the training time of the classification models and the time consumed by feature reduction in training (FR_train), as follows: training time = time_train + time_FR_train. Meanwhile, the inference time consists of the prediction time of the machine learning classifiers and the time required for feature reduction in the testing phase, given by inference time = time_predict + time_FR_test. §.§.§ Classification models We use five machine learning models to perform both binary and multiclass classification tasks, which are available in the Python Scikit-learn library, namely, Decision Tree, Random Forest (max_depth = 5), K-nearest Neighbors (n_neighbors = 5), Multi-layer Perceptron (MLP) (max_iter = 100, hidden_layer_sizes = 200), and Bernoulli Naive Bayes. Additionally, for better insight into feature selection, we provide lists of 4, 8 and 16 selected features in Tab. <ref>, as well as the corresponding thresholds of the average correlation used to achieve those numbers of selected features. §.§ Binary classification We first investigate the detection performance and runtime of the feature selection and feature extraction methods when using binary classification in Tabs. <ref>, <ref> and <ref> for 4, 8, 16 selected/extracted features, respectively. In these tables, the best values (i.e. the maximum values of precision, recall, and F1-score, and the minimum values of training and inference times at each column of the tables) are highlighted in bold, and the best values across both feature selection and feature extraction are highlighted in bold and red color. The training time is measured in seconds (s), while the inference time for each data sample is measured in microseconds (μs). In terms of detection performance, it is shown from Tabs. <ref>, <ref> and <ref> that when the number of reduced features (i.e. extracted or selected) K increases, the detection performance of feature extraction generally improves, while that of feature selection does not improve when we increase K from 8 to 16. In fact, the precision, recall and F1-score of feature selection even slightly degrade from Tab. <ref> to Tab. <ref>. This phenomenon is understandable due to the fact that when the number of selected features gets larger, more noisy or unimportant features are likely to appear among the selected ones, which deteriorates the detection performance.
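For concreteness, the classifier configurations and the metric and timing definitions listed earlier in this section can be set up as in the sketch below; the variable names are illustrative, and the weighted averaging of the multiclass metrics is our assumption, since the averaging mode is not stated explicitly.
import time
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import precision_recall_fscore_support
CLASSIFIERS = {
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(max_depth=5),
    "KNeighbors": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(max_iter=100, hidden_layer_sizes=(200,)),
    "Naive Bayes": BernoulliNB(),
}
def evaluate(model, U_train, y_train, U_test, y_test, t_fr_train, t_fr_test):
    # Fit one classifier on the reduced features U and report metrics and runtimes.
    t0 = time.perf_counter()
    model.fit(U_train, y_train)
    train_time = (time.perf_counter() - t0) + t_fr_train             # time_train + time_FR_train
    t0 = time.perf_counter()
    y_pred = model.predict(U_test)
    infer_time = ((time.perf_counter() - t0) + t_fr_test) / len(y_test)   # per-sample inference time
    p, r, f1, _ = precision_recall_fscore_support(y_test, y_pred, average="weighted")
    return p, r, f1, train_time, infer_time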
Moreover, comparing the two feature reduction methods, we find that when the number of reduced features is small, i.e., K=4, the detection performance of feature extraction is much better than that of feature selection. For instance, in Tab. <ref>, the highest F1-score of feature extraction is 85.42% when the KNeighbors classifier is used, while that of feature selection is lower with 81.94% when the Decision Tree classifier is used. However, for larger K such as 8 and 16 in Tab. <ref> and Tab. <ref>, the feature selection method achieves better accuracy than its extraction counterpart, especially when using Decision Tree for classification. For example, when Decision Tree is employed in Tab. <ref> to achieve the lowest inference time, the F1-score of feature selection is 87.47%, which is higher than that of feature extraction with 85.69%. It is also shown from Tab. <ref>, <ref> and <ref> that when using feature selection, the Decision Tree classification method always provides the best precision, recall as well as F1-score. By contrast, the feature extraction method would enjoy the KNeighbors classifier when K are small, i.e., 4 or 8, while Decision Tree is only its best classifier when K becomes larger, such as K=16. In terms of the runtime performance, Tabs. <ref>, <ref> and <ref> demonstrate that both the training time and the inference time of feature selection is lower than that of feature extraction. This is because of the fact that the feature extraction method requires additional computational resources when compressing the high-dimensional data into low-dimensional data, as explained in Section <ref>, while the feature selection almost do not require any computing resources when just picking up K out of D features. More particularly, in Tab. <ref>, the best inference time of feature selection is 0.11 μs, which is 36 times lower than that of feature extraction being 3.95 μs, where the Decision Tree classifier is the best choice for both feature reduction methods for minimizing the inference time. Again, Decision Tree is one of the best classifiers for minimizing both training and inference times, in addition to the Naive Bayes classifier, which however does not achieve a good accuracy. Finally, in order to better understand the attack detection performance of feature selection and feature extraction, in Tab. <ref>, we provide the accuracy comparison for each class in binary classification, namely, Normal and Abnormal. Similar to Tabs. <ref>, <ref> and <ref>, we consider the number of reduced features K being 4, 8 and 16. Besides, based on the results obtained from these three tables, we only include the classifiers that offer the highest F1-scores for accuracy comparison for each class in Tab. <ref>, namely, MLP and KNeighbors for feature extraction and Decision Tree for feature selection. Herein, the highest accuracy for each class with respect to K is highlighted in bold, while the highest values in both feature selection and feature extraction are highlighted in bold and red color. It is worth noting from this table that in both feature reduction methods, while the accuracy of detecting Normal class steadily improves when increasing K, that of detecting Abnormal class gradually degrades. This interestingly indicates that in order to detect more attacks, we should select small K rather than large K. In addition, Tab. <ref> shows that for both feature reduction methods, the accuracy of Abnormal class is much higher than that of Normal class. 
Observe the average accuracy from this table, we find that the accuracy of feature extraction is less sensitive to varying K that that of feature selection, which varies significantly with respect to K. §.§ Multiclass classification We compare both the detection performance and runtime of feature selection and feature extraction in Tabs. <ref>, <ref>, and <ref> for 4, 8, and 16 selected/extracted features, respectively, when multiclass classification is considered. Here, we still employ five machine learning models as in binary classification. As shown via these three tables, similar to the binary case, the precision, recall and F1-score of both methods generally improve when increasing the number of reduced features K. For example, the highest F1-scores of feature extraction are 74.11%, 75.39%, and 75.52%, while that of feature selection are 65.43%, 78.36% and 77.64%, when K = 4, 8, and 16 reduced features, respectively. As such, feature extraction outperforms its counterpart when K is small such as K=4, however, this is no longer true when K gets larger such as K=8 and 16, where feature selection performs much better than feature extraction. Again, akin to the binary classification, it is shown from Tabs. <ref> and <ref> that the detection performance of feature selection degrades when K increases from 8 to 16, mostly due to the impact of noisy or irrelevant features when having more features selected. Besides, unlike the binary case, where KNeighbors is the best classifier for feature extraction when K is small such as 4 and 8, with multiclass classification, MLP now provides the best detection performance of feature extraction for any values of K, as shown via Tabs. <ref>, <ref>, and <ref>. Meanwhile, feature selection still enjoys the Decision Tree classifier to achieve the highest detection performance, similar to the binary classification analyzed in the previous subsection, while MLP does not offer a good detection performance for feature selection. Additionally, the Naive Bayes classifier achieves the worst accuracy for both feature reduction methods. With regard to the runtime comparison, again, Tabs. <ref>, <ref>, and <ref> demonstrate that the training and inference times of feature selection are significantly lower than that of feature extraction. For example, using the same Decision Tree model for achieving the lowest runtime, in Tab. <ref> when K=8, the inference time of feature selection is 0.19 μs, which is 26 times lower than that of feature extraction with 5.04 μs. Similarly, it is shown from this table that the training time of feature selection is also 2 times lower than that of its extraction counterpart. In addition, the Decision Tree model provides the lowest inference time for both feature reduction methods, while the neural network-based MLP classifier exhibits both the highest inference and training times for them. Finally, we compare the accuracy for detecting each attack type (including 9 attack classes and 1 normal class, as described in Section <ref>) between feature selection and feature extraction in Tab. <ref>, where the values of K are 4, 8, and 16 reduced features. Herein, we employ MLP and Decision Tree classifiers for feature extraction and feature selection, respectively, in order to achieve the best detection performance, as analyzed in the previous discussions. It is observed from Tab. <ref> that feature selection performs better than feature extraction in most of classes, except for Exploits and Fuzzers classes. 
This table also shows that both methods are capable of achieving higher accuracy for the Exploits, Generic, Normal, and Reconnaissance classes than for the remaining ones. Additionally, similar to the binary classification discussed in Tab. <ref>, the multiclass classification accuracy of feature extraction is less sensitive to the number of reduced features K than that of its selection counterpart. More importantly, feature extraction with MLP is unable to correctly detect any samples of Analysis and Backdoor, for all three values of K. By contrast, feature selection with the Decision Tree classifier is capable of correctly detecting samples from all classes. We found that this is mainly due to the machine learning classifier rather than to the feature reduction method we choose. In order to clarify this issue, we compare the accuracy for each class between the two feature reduction methods using the same Decision Tree and MLP classifiers in Tab. <ref> and Tab. <ref>, respectively. Tab. <ref> shows that, with the same MLP classifier, feature selection is, like feature extraction, unable to correctly detect any samples of Analysis and Backdoor. Comparing these two tables, we find that if the same classifier is employed, feature extraction tends to be able to detect more diverse attack types than feature selection. This is because feature extraction compresses key information from all available features instead of relying solely on a subset of selected features, as the feature selection approach does. In other words, feature selection tends to detect only those attack types that are highly correlated with the features it selects. In summary, considering both binary and multiclass classification for the NIDS, the feature selection method not only provides better detection performance but also requires lower training and inference times than its feature extraction counterpart, especially when the number of reduced features K increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when K is very small, such as K=4. Additionally, among the five considered classifiers, Decision Tree is the best classifier for improving the accuracy of feature selection, while the neural network-based MLP is the best one for feature extraction. Last but not least, feature extraction is less sensitive to changing the number of reduced features K than feature selection, and this holds true for both binary and multiclass classification. For more details, we provide a comprehensive comparison between feature selection and feature extraction in intrusion detection systems in Tab. <ref>. § CONCLUSIONS We have compared two typical machine learning-based intrusion detection methods, namely feature selection and feature extraction, on the modern UNSW-NB15 dataset, where both binary and multiclass classification were considered. Our extensive comparison showed that when the number of reduced features is large enough, such as 8 or 16, feature selection not only achieves higher detection accuracy but also requires less training and inference time than feature extraction. However, when the number of reduced features is very small, such as 4 or less, feature extraction notably outperforms its selection counterpart. Besides, the detection performance of feature selection tends to degrade when the number of selected features becomes too large, while that of feature extraction steadily improves.
We also found that, while MLP is the best classifier for feature extraction, Decision Tree is the best one for feature selection in terms of achieving the highest attack detection accuracy. Moreover, our accuracy analysis for each attack class demonstrated that feature extraction is not only less sensitive to varying the number of reduced features but also capable of detecting more diverse attack types than feature selection. Both methods tend to detect more attacks, i.e., more Abnormal samples, when fewer features are selected or extracted. We believe that such insightful observations about the performance comparison between the two feature reduction methods provide a helpful guideline for choosing a suitable intrusion detection method for each specific scenario. Finally, note that our study evaluated the effectiveness of feature reduction methods only on the UNSW-NB15 dataset. In the future, we intend to explore whether our observations with UNSW-NB15 are applicable to other intrusion detection datasets, such as NSL-KDD, KDD99, CICIDS2017, and DARPA1998. We also plan to thoroughly investigate the performance of various deep learning classification models for NIDS and to compare them with existing machine learning models. Declarations Author Contribution Vu-Duc Ngo and Tuan-Cuong Vuong wrote the main manuscript. Thien Van Luong and Hung Tran reviewed and corrected the manuscript. Ethical Approval Not applicable. Acknowledgement This work was supported in part by the SSF Framework Grant Serendipity and an R&D project of Brighter Gates AB, Sweden. Data Availability The paper does not include any supporting data. Conflicts of interest All authors declare that they do not have any conflict of interest. Funding Not applicable. Vu-Duc Ngo received the Ph.D. degree from the Korea Advanced Institute of Science and Technology in 2011. From 2007 to 2009, he was a Co-Founder and the CTO of Wichip Technologies Inc., USA. Since 2009, he has been a Co-Founder and the Director of uVision Jsc, Vietnam. Since 2012, he has been serving as a BoM Member of the National Program on Research, Training, and Construction, High-Tech Engineering Infrastructure of Vietnam. He is currently a Researcher with the Research and Development Center, MobiFone Corporation, and also a Lecturer with the School of Electrical and Electronics Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam. His research interests are in the fields of SoC, NoC design and verification, VLSI design for multimedia codecs, and wireless communications PHY layer. He was a recipient of the IEEE 2006 ICCES, the IEEE 2012 ATC, and the NICS 2021 Best Paper Awards. Tuan-Cuong Vuong is currently a second-year Bachelor student working at the AIoT Lab, Faculty of Computer Science, Phenikaa University, Hanoi, Vietnam. His research interests include applied machine learning and deep learning in cyber security and computer vision. Thien Van Luong is currently a Lecturer with the Faculty of Computer Science and a Leader of the AIoT Lab (https://aiot.phenikaa-uni.edu.vn/), Phenikaa University, Vietnam. He was a Research Fellow with the University of Southampton, U.K. Prior to that, he obtained the Ph.D. degree at Queen's University Belfast, U.K., and the B.S. degree at Hanoi University of Science and Technology, Vietnam. His research interests include applied machine learning in signal processing and wireless communications. Hung Tran received the B.S. and M.S.
degrees in information technology from Vietnam National University, Hanoi, Vietnam, in 2002 and 2006, respectively, and the Ph.D. degree from the Blekinge Institute of Technology, Sweden, in March 2013. In 2014, he was with the Electrical Engineering Department, ETS, Montreal, Canada. From 2015 to 2020, he was a Researcher at Malardalen University, Sweden. He is currently working as a Lecturer at the Computer Science Department, Phenikaa University, Vietnam. Besides doing research in the areas of wireless communication, he is also interested in topics of natural language processing and artificial intelligence, which have been applied to develop core engines for the academic gates platform (https://www.academicgates.com).
http://arxiv.org/abs/2307.03335v1
20230707003237
Randomized subspace gradient method for constrained optimization
[ "Ryota Nozawa", "Pierre-Louis Poirion", "Akiko Takeda" ]
math.OC
[ "math.OC" ]
Randomized subspace gradient method for constrained optimization. Ryota Nozawa^1 ([email protected]), Pierre-Louis Poirion^2 ([email protected]), Akiko Takeda^1,2 ([email protected]). [1] Department of Mathematical Informatics, The University of Tokyo, Hongo, Bunkyo-ku, 113-8656, Tokyo, Japan. [2] Center for Advanced Intelligence Project, RIKEN, Nihonbashi, Chuo-ku, 103-0027, Tokyo, Japan. We propose randomized subspace gradient methods for high-dimensional constrained optimization. While there have been similarly purposed studies on unconstrained optimization problems, there have been few on constrained optimization problems, owing to the difficulty of handling constraints. Our algorithms project gradient vectors onto a subspace that is a random projection of the subspace spanned by the gradients of the active constraints. We determine the worst-case iteration complexity under linear and nonlinear settings and theoretically confirm that our algorithms can take a larger step size than their deterministic versions. Owing to the advantages of taking longer steps and of using randomized subspace gradients, we show that our algorithms are especially efficient in terms of time complexity when gradients cannot be obtained easily. Numerical experiments show that they tend to find better solutions because of the randomness of their subspace selection. Furthermore, they perform well in cases where gradients cannot be obtained directly and are instead obtained from directional derivatives. § INTRODUCTION In this paper, we consider the following constrained optimization problem: min_x∈ℝ^n f(x) s.t. x ∈𝒞 := { x  |  g_i(x)≤ 0, i = 1,...,m }, where f is L-smooth and the g_i are L_g-smooth, neither of which need be convex as long as an optimal solution exists. There is a growing demand for solving large-scale constrained optimization problems in many applications such as machine learning, statistics, and signal processing <cit.>; for example, sparse optimization problems are often formulated as min_x L(x) s.t. ℛ(x) ≤ s, where L(x) is a loss function, ℛ is some sparsity-inducing norm such as l_1, and s is some fixed positive value. Recently, machine-learning models (<ref>) including a fairness constraint <cit.> have attracted researchers' attention, but difficulties have emerged in solving such large-scale problems. For solving large-scale unconstrained problems, i.e., (<ref>) with 𝒞=ℝ^n, <cit.> have proposed subspace methods using random projections. These methods update the iterate as follows: x_k+1 = x_k + M_k d_k, where M_k is an n× d (d<n) random matrix. One of the difficulties with high-dimensional problems is calculating the gradient ∇ f. Although there are methods that calculate gradients, such as automatic differentiation, which is popular in machine learning, when the objective function has a complicated structure, backward-mode automatic differentiation leads to an explosion in memory usage <cit.>. Kozak et al. <cit.> proposed a stochastic subspace descent method combining gradient descent with random projections to overcome the difficulties of calculating gradients. The method uses the direction d_k = -M_k^⊤∇ f(x_k), and it was shown that d_k can be computed by using finite differences or forward-mode automatic differentiation of directional derivatives. To obtain the full gradient by finite differences or forward-mode automatic differentiation, we need to evaluate the function value n times; a minimal sketch of a subspace step built from such directional derivatives is given below.
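As an illustration of this point, the following sketch performs one unconstrained random-subspace step in the spirit of the stochastic subspace descent method: the reduced direction d_k = -M_k^⊤∇ f(x_k) is estimated from d forward differences of directional derivatives, so a step costs d+1 function evaluations instead of the n+1 needed for a full finite-difference gradient. The step size and difference increment are illustrative choices, not values prescribed here.

import numpy as np

def subspace_step(f, x, d, h=1e-6, alpha=1e-2, rng=None):
    """One random-subspace step x_{k+1} = x_k + M_k d_k with
    d_k = -M_k^T grad f(x_k) estimated from directional derivatives only."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    M = rng.standard_normal((n, d)) / n        # M_k = (1/n) P_k^T, P_k Gaussian d x n
    fx = f(x)
    # forward differences along the d columns of M_k: d + 1 evaluations of f
    proj_grad = np.array([(f(x + h * M[:, j]) - fx) / h for j in range(d)])
    return x + alpha * (M @ (-proj_grad))

Only f itself is evaluated; no full gradient is ever formed.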
On the other hand, the projected gradient can be computed by evaluating the function values only d times. Compared with unconstrained problems, subspace optimization algorithms using random projections have made little progress in solving constrained problems. To the best of our knowledge, apart from <cit.>, there is no paper on subspace optimization algorithms using (<ref>) for constrained optimization (<ref>). Here, Cartis et al. <cit.> proposed a general framework to investigate a general random embedding framework for bound-constrained global optimization of f with low effective dimensionality. The framework projects the original problem onto a random subspace and solves the reduced subproblem in each iteration: min_d f(x_k + M_k d)     x_k + M_k d ∈𝒞. These subproblems need to be solved to some required accuracy by using a deterministic global optimization algorithm. In this study, we propose gradient-based subspace methods for constrained optimization. The descent direction d_k is obtained without solving any subproblems, by projecting the gradient vector ∇ f onto a subspace that is a random projection of the subspace spanned by the gradients of the active constraints. A novel property of our algorithms is that when the dimension n is large enough, they are almost unaffected by the constraint boundary; because of this, they can take a longer step than their deterministic versions. In standard constrained optimization methods, the iterates become difficult to move when they get close to the constraint boundary. Our methods randomly update the subspace where the iterate is reconstructed in each iteration, which makes it possible for them to take a large step. We present two randomized subspace gradient algorithms: one is specialized for linear constrained optimization problems and the other is able to handle nonlinear constraints. On linearly constrained problems, these algorithms work very similarly to the gradient projection method <cit.> if random projections are not used. The gradient projection method (GPM) <cit.> is one of the active set methods. Under linear constraints, GPM projects the gradients ∇ f onto a subspace spanned by the gradients of the active constraints. The projected gradient does not change the function values of active constraints. Hence, the advantage of GPM is that all of the updated points are feasible. Rosen <cit.> augmented GPM to make it able to handle nonlinear constraints and proved global convergence. Also for nonlinear constraints, a generalized gradient projection method (GGPM) <cit.> was derived from GPM; it does not require an initial feasible point. Our algorithms need O(n/dε^-2) iterations to reach an approximate Karush-Kuhn-Tucker (KKT), while other standard gradient-algorithms for solving unconstrained problems need O(ε^-2) iterations. While the theoretical worst-case iteration complexity increases because of the subspace projection, the gradient computation at each iteration costs less; when gradients cannot be obtained easily, our algorithms have the same advantage as stochastic subspace gradients <cit.> by calculating the gradients with d directional derivatives. Due to their advantages of taking longer step and reducing the computational complexity of calculating the gradients, our algorithms are especially efficient in view of time complexity. 
This is because when gradients are obtained by directional derivatives, ours need O(d) × O(n/dε^-2) function evaluations to reach an approximate KKT, which is the same as the number of evaluations in standard gradient-algorithms for solving unconstrained problems (O(n) × O(ε^-2)). Furthermore, our algorithms can reduce the worst time complexity to reach an approximate KKT from O(n) × O(max(ε^-2,(u_g^ε)^-1ε^-2)) to O(d) × O(n/dε^-2) compared to their deterministic version when gradients are obtained by directional derivatives. u_g^ε represents a particular value that is dependent on the given constraints and the parameter ε. In the case where all constraints are linear functions, u_g^ε simplifies to ε. In numerical experiments, we show that our algorithms work well when the gradient of the objective functions cannot be calculated. The numerical results indicate that our algorithms tend to find better solutions than deterministic algorithms, because they can search for solutions randomly in a wide space without being affected by the luck involved in the initial solution selection. § PRELIMINARIES §.§ Notations In this paper, we denote the optimum of (<ref>) by x^*. The vector norm · is assumed to be the l_2 norm and the matrix norm is the operator norm. ·_F denotes the Frobenius norm. λ_min(A) and λ_max(A) are the minimum eigenvalue and maximum eigenvalue of a matrix A, respectively. For a vector a, we define [a]_+ be a vector whose i-th entry is given by [a_i]_+ := max(0,a_i). 1 denotes an all-ones vector and χ_S(x) denotes a step function for all x ∈ℝ; χ_S(x) = {[ 1 (x∈ S),; 0 (x∉ S).; ]. χ_- and χ_+ are step functions with S = {x | x≤ 0} and S = {x| x > 0}, respectively. For a vector a, we define χ_+(a) (or χ_-(a)) to be a vector whose i-th entry is χ_+(a_i) (or χ_-(a_i), respectively). §.§ Key lemma The following lemma implies that a random projection defined by a random matrix P nearly preserves the norm of any given vector x with arbitrarily high probability. It is a variant of the Johnson-Linderstrauss lemma <cit.>.  <cit.> Let P∈ℝ^d× n be a random matrix whose entries are independently drawn from . Then for any x∈ℝ^n and ε∈ (0,1), we have Prob[(1-ε)x^2≤1/dPx^2≤ (1+ε)x^2]≥ 1-2exp(-C_0ε^2 d), whose C_0 is an absolute constant. § ALGORITHM FOR LINEAR INEQUALITY CONSTRAINTS In this section, we describe our randomized subspace gradient method for solving linear constrained problems (RSG-LC), which is listed as Algorithm <ref> below. §.§ Outline of our algorithm: RSG-LC Let denote the set of Gaussian matrices of size d× n whose entries are independently sampled from , and define M_k = 1/nP_k^⊤, where P_k is a Gaussian random matrix sampled from . Definition of 𝒜_k Let 𝒜_k :=𝒜(x_k) be the index set of active constraints such that the inequality constraints are almost satisfied by equality at the iterate x_k. The active set usually refers to the set of indices whose constraints satisfy g_i(x)=0. We use loose criteria for the active set of our randomized subspace algorithm: 𝒜(x) = {i ∈{1,2,⋯, m}| [-g_i(x)]_+ ≤ε_0 ∇ g_i(x)} using a parameter ε_0 (>0). Section <ref> shows that the definition of 𝒜_k makes the iterates {x_k} all feasible with high probability, and that the step size α_k is larger than when the random matrices are not used. Notice that we can replace ∇ g_i in 𝒜_k with the projected gradients M_k^⊤∇ g_i to reduce the computation costs when calculating ∇ g_i is difficult. 
This is because, from Lemma <ref>, [-g_i(x_k)]_+/M_k^⊤∇ g_i(x_k)≈n/√(d)[-g_i(x_k)]_+/∇ g_i(x_k) holds with high probability. Therefore, the active set 𝒜_k using projected gradients can be almost the same as the set using the full gradients by adjusting ε_0. For the sake of simplicity, we define [-g_i(x_k)]_+/∇ g_i(x_k)≤ε_0 to be the active set 𝒜_k. Update of Iterates Algorithm <ref> calculates the sequence {x_k} by using the step size α_k and the Lagrange multiplier λ^(k)∈ℝ^|𝒜_k | of (<ref>) corresponding to constraints in 𝒜_k as x_k+1 = x_k- α_kM_kM_k^⊤( + G_kλ^(k)) = x_k -α_kM_k( I - M_k^⊤ G_k(G_k^⊤ M_kM_k^⊤ G_k)^-1G_k^⊤ M_k)M_k^⊤ or x_k+1 = x_k - d/nα_kM_k M_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1[-λ^(k)]_+, where G_k = (∇ g_i(x_k))_i∈𝒜_k∈ℝ^n×|𝒜_k |. This corresponds to taking d_k as (<ref>) or (<ref>). The search directions x_k+1- x_k in (<ref>) and (<ref>) are descent directions with high probability as shown in Propositions <ref> and <ref> in Section <ref>. Propositions <ref> and <ref> in Section <ref> ensure that if the α_k are chosen to be less than some threshold, all points in {x_k} are feasible. Hence, the while-loop reducing the value of α_k stops after a fixed number of iterations. Theoretically, (<ref>) reduces the objective function more than (<ref>) does at each iteration. Therefore, the algorithm first computes (<ref>) and if the search direction is small enough and the Lagrange multiplier λ^(k) is not nonnegative, (<ref>) is used for the next iterate x_k+1. The search direction in (<ref>) is regarded as the projected gradient vector onto a subspace that itself is a random projection of the subspace spanned by the gradients of active constraints. Note that if we ignore the random matrix M_k (i.e., by setting M_k=I), the update rule of (<ref>) becomes x_k+1 = x_k - α_k (I- G_k (G_k^⊤ G_k)^-1G_k^⊤) and (I- G_k (G_k^⊤ G_k)^-1G_k^⊤) is the projected gradient onto the subspace spanned by the gradients of the active constraints in 𝒜_k. If 𝒜_k is an active set in the usual sense defined by the valid constraints, the projected gradient is the same as the one used in the gradient projection method (GPM) <cit.>. By the definition of d_k of (<ref>), if d_k≈ 0, then + G_k λ^(k)≈ 0 holds by Lemma <ref>. If λ^(k)≥ 0 holds, the resulting point is an approximate Karush-Kuhn-Tucker (KKT) point. As shown later in Theorem <ref>, given inputs d, ε_0,ε_2 and δ_1 = √(d(1-ε)/n^2)ε_1, Algorithm <ref> generates for the original problem (<ref>) with large n an output x_k that is with high probability an (ε_1,ε_2,O(ε_0))-KKT point within O(n/d min(ε_1^2,ε_2^2)) iterations. §.§ Decrease in objective value We will make the following assumptions. * f is L-smooth, i.e., ∇ f(x)-∇ f(y)≤ Lx-y for any x,y ∈ℝ^n. * The level set {x∈𝒞| f(x)≤ f(x_0)} is compact. <ref> and <ref> together imply that is bounded, i.e., ≤ U_f for some value U_f>0. * The vectors ∇ g_i(x), ∀ i ∈𝒜_k, of the active constraints at x_k ∈𝒞 are linearly independent. * The reduced dimension d is larger than the number of active constraints |𝒜_k|. Notice that Assumption <ref> implies that M_k^⊤ G_k has full row rank with probability 1, which ensures the existence of (G_k^⊤ M_kM_k^⊤ G_k)^-1. Here, we define λ^*_min = min_𝒜⊂{1,2,...,m},|𝒜| <n, x∈𝒞, ∇ g_i(x) (i∈𝒜) are linear independentλ_min(G_𝒜(x)^⊤ G_𝒜(x)), where G_𝒜(x) = (∇ g_i(x))_i ∈𝒜. Obviously, λ^*_min > 0 and Assumption <ref><ref> ensure that λ^*_min≤λ_min(G_k^⊤ G_k). 
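For readers who prefer code to formulas, the sketch below assembles one RSG-LC search direction from the quantities introduced above: the loose active set 𝒜_k, the multipliers λ^(k) of (<ref>), direction (<ref>), and the fallback direction (<ref>). It omits the backtracking on α_k that enforces feasibility, treats ∇ f and the ∇ g_i as available (in Algorithm <ref> they may themselves be replaced by projected versions), and uses illustrative tolerances; it is not a substitute for Algorithm <ref>.

import numpy as np

def rsglc_direction(grad_f, g_vals, grads_g, d, eps0, eps2, delta1, rng):
    """Sketch of one RSG-LC direction; returns (full-space direction, lambda, active set)."""
    n = grad_f.size
    P = rng.standard_normal((d, n))
    M = P.T / n                                   # M_k = (1/n) P_k^T, shape (n, d)
    Mg = M.T @ grad_f                             # M_k^T grad f(x_k)
    # loose active set: [-g_i(x_k)]_+ <= eps0 * ||grad g_i(x_k)||
    act = [i for i, gi in enumerate(g_vals)
           if max(-gi, 0.0) <= eps0 * np.linalg.norm(grads_g[i])]
    if not act:                                   # no nearly active constraint
        return M @ (-Mg), None, act
    MG = M.T @ np.column_stack([grads_g[i] for i in act])   # M_k^T G_k
    lam = -np.linalg.solve(MG.T @ MG, MG.T @ Mg)  # multipliers lambda^(k)
    d1 = -(Mg + MG @ lam)                         # direction of (<ref>)
    if np.linalg.norm(d1) > delta1:
        return M @ d1, lam, act
    if lam.min() >= -eps2:
        return None, lam, act                     # approximate KKT point: stop
    # direction of (<ref>): move off the constraints with negative multipliers
    d2 = -(d / n) * (MG @ np.linalg.solve(MG.T @ MG, np.maximum(-lam, 0.0)))
    return M @ d2, lam, act

In Algorithm <ref>, the returned direction is then scaled by a step size α_k = hβ^j, with j increased until the feasibility conditions of Propositions <ref> and <ref> hold.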
By defining Z_k:= I-M_k^⊤ G_k (G_k^⊤ M_kM_k^⊤ G_k)^-1 G_k^⊤ M_k, we can rewrite the next iterate (<ref>) as x_k+1 = x_k - α_k M_k Z_k M_k^⊤. Under Assumptions <ref><ref> and <ref>, when d_k = - M_k^⊤(+G_kλ^(k)) of (<ref>) in Algorithm <ref>, then f(x_k+1)≤ f(x_k)-(α_k - Lα_k^2(1+ε)/2n)Z_kM_k^⊤^2 holds with probability at least 1-2exp(-C_0 ε^2 n). Since f is L-smooth from Assumption <ref><ref>, we have f(x_k+1)-f(x_k) ≤⟨, x_k+1-x_k⟩ +L/2x_k+1-x_k^2. We also have x_k+1-x_k=- α_k M_k Z_k M_k^⊤ from (<ref>). Using this equality in (<ref>), we obtain f(x_k+1)-f(x_k) ≤ -α_k ^⊤ M_kZ_kM_k^⊤ +α_k^2L/2M_kZ_kM_k^⊤^2. By definition, Z_k is an orthogonal projection matrix; hence, Z_k^2 = Z_k. Therefore, ^⊤ M_kZ_kM_k^⊤=Z_kM_k^⊤^2. Furthermore, by noticing that M_k = 1/nP_k^⊤ and using Lemma <ref>, we obtain that M_kZ_kM_k^⊤^2 = 1/n^2P_k^⊤ Z_kM_k^⊤^2≤1+ε/nZ_kM_k^⊤^2. With these relations, we find that f(x_k+1)-f(x_k) ≤ -(α_k -α_k^2L(1+ε)/2n) Z_kM_k^⊤^2 with probability at least 1-2exp(-C_0ε^2n). Because Z_kM_k^⊤=M_k^⊤ ( + G_kλ^(k)), which is derived from (<ref>) and (<ref>), Proposition <ref> implies that the function value f(x_k) strictly decreases unless d_k = 0. Under Assumptions <ref><ref> and <ref>, when d_k = - d/nM_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+ of (<ref>) in Algorithm <ref>, then f(x_k+1)≤ f(x_k) - d/n(α_k - Lα_k^2(1+ε)/2(1-ε)λ_min(G_k^⊤ G_k))[-λ^(k)]_+^2 with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). We can confirm that M_k d_k is a descent direction by using the definition (<ref>) of λ^(k) for the second equality: ^⊤ M_k d_k = -d/n^⊤ M_k M_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+ =d/nλ^(k)⊤ [-λ^(k)]_+ = -d/n[-λ^(k)]_+^2 <0. Now let us evaluate f(x_k+1) - f(x_k) using (<ref>) with x_k+1-x_k=α_k M_kd_k as f(x_k+1) ≤ f(x_k) + α_k^⊤ M_k d_k +Lα_k^2/2 M_k d_k^2. Notice first that M_k^⊤ M_k = 1/n^2P_kP_k^⊤ and Lemma <ref> give M_k d_k ^2 ≤1+ε/nd_k^2 with probability at least 1-2exp(-C_0ε^2n). Moreover, d_k^2 = d^2/n^2 ([-λ^(k)]_+)^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+ ≤d^2/n^2λ_min(G_k^⊤ M_k M_k^⊤ G_k)[-λ^(k)]_+^2 holds from the definition of d_k. Let w be the eigenvector corresponding to the minimum eigenvalue of G_k^⊤ M_k M_k^⊤ G_k. From Lemma <ref>, we have λ_min(G_k^⊤ M_k M_k^⊤ G_k) = w^⊤ (G_k^⊤ M_k M_k^⊤ G_k) w ≥d(1-ε)/n^2 w^⊤ (G_k^⊤ G_k) w ≥d(1-ε)/n^2λ_min(G_k^⊤ G_k) with probability at least 1-2exp(-C_0ε^2d). Combining (<ref>) and (<ref>), we find that d_k^2 ≤d/(1-ε)λ_min(G_k^⊤ G_k)[-λ^(k)]_+^2. From Assumption  <ref><ref>, λ_min(G_k^⊤ G_k) has a positive lower bound. (<ref>) together with (<ref>) leads to f(x_k+1) ≤f(x_k) - α_k d/n[-λ^(k)]_+^2+Lα_k^2/2 M_k d_k^2 ≤f(x_k) - α_k d/n[-λ^(k)]_+^2 + Lα_k^2(1+ε)/2nd_k^2 ≤f(x_k) - α_k d/n[-λ^(k)]_+^2 + dLα_k^2(1+ε)/2n(1-ε)λ_min(G_k^⊤G_k)[-λ^(k)]_+^2, with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). The second inequality follows from (<ref>) and the last inequality follows from (<ref>). Proposition <ref> shows that if 0 < α_k <2(1-ε)λ_min(G_k^⊤ G_k)/(1+ε)L and min_iλ_i^(k) < 0, then f(x_k+1) < f(x_k). §.§ Feasibility In this section, we derive conditions under which that the sequence {x_k} generated by Algorithm <ref> is feasible. Assuming that the initial point is feasible, we update the iterate while preserving feasibility. We prove the following lemma by utilizing the properties of linear constraints. Suppose that Assumptions <ref> and <ref> hold and that all constraints g_i are linear and x_k is feasible. 
Then, if the step size α_k satisfies 0≤α_k ≤ε_0/U_fn^2/(1+ε)d when d_k = - M_k^⊤ (∇ f(x_k)+ G_k λ^(k)) of (<ref>) in Algorithm <ref>, x_k+1 is feasible with probability at least 1-2(m-|𝒜_k|+1)exp(-C_0ε_1^2d). Note that λ^(k) is the solution of G_k^⊤ M_kM_k^⊤ G_k λ^(k) = -G_k^⊤ M_kM_k^⊤, which is equal to G_k^⊤ M_k d_k =0 when d_k is defined as (<ref>). Since the constraints are linear, the direction M_kd_k preserves feasibility for the active constraints: g_i(x_k +α_k M_kd_k) = g_i(x_k) +α_k ∇ g_i(x_k)^⊤ M_k d_k = g_i(x_k), ∀ i ∈𝒜_k. As for i∉𝒜_k, notice that x_k+1 is feasible if g_i(x_k+1) = g_i(x_k) - α_k∇ g_i(x_k)^⊤ M_k Z_k M_k^⊤ ≤ g_i(x_k) + α_kM_k^⊤∇ g_i(x_k)Z_k M_k^⊤≤ 0 is satisfied. By Lemma <ref>, we have M_k^⊤∇ g_i(x_k)≤√(d(1+ε)/n^2)∇ g_i(x_k) (i∉𝒜_k), with probability at least 1 - 2(m-|𝒜_k|) exp(-C_0ε^2d). Since Z_k is an orthogonal projection, we further have Z_kM_k^⊤∇ f(x_k)≤M_k^⊤∇ f(x_k)≤√(d(1+ε)/n^2)∇ f(x_k) with probability at least 1-2exp(-C_0ε^2d). From Assumption <ref><ref>, we have g_i(x_k) + α_kM_k^⊤∇g_i(x_k) Z_k M_k^⊤ ≤g_i(x_k) + α_kd(1+ε)/n^2∇g_i(x_k) ≤g_i(x_k) + α_k d(1+ε)/n^2∇g_i(x_k) U_f. Because of the assumption on α_k and ε_0 < -g_i(x_k)/∇ g_i(x_k) for i∉𝒜_k, we have 0≤α_k ≤ε_0/U_fn^2/(1+ε)d < 1/U_fn^2/(1+ε)d-g_i(x_k)/∇ g_i(x_k), which ensures that the right-hand side of (<ref>) is upper-bounded by 0. Hence, x_k+1 is feasible. If we do not use random matrices, the condition becomes α_k ≤ε_0/U_f and the original dimension n does not appear. Also, since M_k^⊤ (∇ f(x_k)+ G_k λ^(k))≈√(d)/n∇ f(x_k)+ G_k λ^(k), we see that, using a random subspace allows us to use a larger step size n/√(d). Notice, that when the dimension n is large enough, the random matrices allow us to ignore the step-size condition because the step size is at least proportional to √(n) ( < n/√(d) ), which comes from d < n. Suppose that Assumptions <ref> and <ref> hold and that all constraints g_i are linear and x_k is feasible. Then, if the step size α_k satisfies 0≤α_k ≤n/d(1-ε)λ_min^*/(1+ε)U_fε_0 when d_k = - d/nM_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+ of (<ref>) in Algorithm <ref>, x_k+1 is feasible with probability at least 1-2(m-|𝒜_k|+2)exp(-C_0ε^2d). Regarding the active constraints, G_k^⊤ M_kd_k = - d/nG_k^⊤ M_kM_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+ = -d/n[-λ^(k)]_+ ≤ 0 holds. Therefore, g_i(x_k +α_k M_kd_k) = g_i(x_k) +α_k ∇ g_i(x_k)^⊤ M_k d_k ≤ g_i(x_k), ∀ i ∈𝒜_k. Hence, g_i(x_k+1)≤ 0 is satisfied for the active constraints. As for the nonactive constraints (i∉𝒜_k), it is enough to prove the inequality, g_i(x_k) + α_k M_k^⊤∇ g_i (x_k)d_k≤ 0, which leads to g_i(x_k+1) = g_i(x_k) + α_k∇ g_i^⊤ (x_k) M_kd_k ≤ g_i(x_k) + α_k M_k^⊤∇ g_i (x_k)d_k≤ 0. From Lemma <ref>, we find that g_i(x_k) + α_k M_k^⊤∇g_i (x_k)d_k ≤g_i(x_k) + α_k √(d(1+ε)/n^2)∇g_i (x_k)d_k ≤g_i(x_k) + α_k d/n√((1+ε)/(1-ε)λ_min(G_k^⊤G_k)) ∇g_i(x_k)[-λ^(k)]_+ ≤g_i(x_k) + α_k d/n√((1+ε)/(1-ε)λ_min^*) ∇g_i(x_k)[-λ^(k)]_+ (i∉𝒜_k) holds with probability at least 1-2(m-|𝒜_k|+1)exp(-C_0ε_1^2d). The second inequality follows from (<ref>). Now, we will show that the step size satisfies 0≤α_k ≤n/d(1-ε)λ_min^*/(1+ε)U_fε_0 ≤n/d[-λ^(k)]_+ √((1-ε)λ_min^*/(1+ε))ε_0, which proves (<ref>) from ε_0 ≤-g_i(x_k)/∇ g_i (x_k) together with (<ref>). Let us evaluate the upper bound on λ^(k). From (<ref>), we have λ^(k) = (G_k^⊤ M_k M_k^⊤ G_k)^-1G_k^⊤ M_k M_k^⊤ ≤(G_k^⊤ M_k M_k^⊤ G_k)^-1G_k^⊤ M_k M_k^⊤, where the first norm · on the right-hand side is the operator norm for a matrix. 
We obtain (G_k^⊤ M_k M_k^⊤ G_k)^-1G_k^⊤ M_k^2 = λ_max((G_k^⊤ M_k M_k^⊤ G_k)^-1) = 1/λ_min(G_k^⊤ M_k M_k^⊤ G_k). Then, from (<ref>) and Lemma <ref>, we find that λ^(k) ≤M_k^⊤/√(λ_min(G_k^⊤M_k M_k^⊤G_k)) ≤√(n^2/d(1-ε))M_k^⊤/√(λ_min(G_k^⊤G_k)) ≤√((1+ε)/(1-ε)) /√(λ_min(G_k^⊤G_k)) ≤√((1+ε)/(1-ε)) U_f/√(λ_min(G_k^⊤G_k)) ≤√((1+ε)/(1-ε)) U_f/√(λ_min^*) holds with probability at least 1-4exp(-C_0ε^2d). The second inequality follows from (<ref>) and the third inequality follows from Lemma <ref>. Hence, using (<ref>) with [-λ^(k)]_+≤λ^(k), the second inequality in (<ref>) holds and it is confirmed that x_k+1 satisfies the nonactive inequality constraints. Thus, x_k+1 is feasible with probability at least 1-2(m-|𝒜_k|+2)exp(-C_0ε^2d); the probability can be derived by applying Lemma <ref> to M_k^⊤, M_k^⊤∇ g_i(x_k) (i ∉𝒜_k) and (<ref>). Similarly to Proposition <ref>, we can ignore this condition when the original dimension n is large enough. §.§ Global convergence We say that (x,η) ∈ℝ^n ×ℝ^m is an (ε_1,ε_2,ε_3)-KKT pair of Problem (<ref>) if the following conditions hold: ∇ f(x) + ∑_i = 1^mη_i ∇ g_i(x)≤ε_1, g_i(x)≤ 0, η_i ≥ -ε_2, |η_i g_i(x) |≤ε_3. We can construct η^(k)∈ℝ^m from the output λ^(k)∈ℝ^|𝒜_k | of Algorithm <ref> as follows: copy the values of λ^(k) to the subvector of η^(k) corresponding to the index set 𝒜_k, filling in the other elements of η^(k) with 0. We will regard (x_k, η^(k)) as the output of Algorithm <ref>. Now, let us prove that the output (x_k,η^(k)) is an (ε_1,ε_2,ε_3)-KKT pair for some ε_1,ε_2,ε_3. Suppose that Assumptions <ref> and <ref> hold. Let the constraints g_i be linear functions and U_g = max_i ∇ g_i(x). Moreover, let the optimal value of (<ref>) be f^*(> -∞), and let δ(ε,ε_0,ε_1,ε_2) := min(min(n/L(1+ε), ε_0/U_fn^2/(1+ε)d)d(1-ε)/2n^2ε_1^2,.                     .min((1-ε)λ_min^*/(1+ε)L, n/d(1-ε)λ_min^*/(1+ε)U_fε_0)d/2nε_2^2). Then Algorithm <ref> generates an (ε_1,ε_2,O(ε_0))-KKT pair from the inputs d,ε_0,ε_2, δ_1 = √(d(1-ε)/n^2)ε_1 within K := ⌈f(x_0)-f^*/δ(ε,ε_0,ε_1,ε_2)⌉ iterations, with probability at least 1 - 2Kexp(-C_0ε^2n)-2K(m+5)exp(-C_0ε^2d). The points {x_k} are feasible when the step size α_k satisfies the conditions of Propositions <ref> and <ref>. Hence, (<ref>) is satisfied. Next, we prove (<ref>). In terms of i∈𝒜_k, [-g_i(x)]_+/∇ g_i(x)≤ε_0 and Lemma <ref> imply that ∑_i∈𝒜_k (η_i^(k))^2([-g_i(x_k)]_+)^2 ≤∑_i∈𝒜_k ε_0^2 (η_i^(k))^2∇g_i(x_k)^2 ≤ε_0^2max_j ∇g_j(x_k)^2∑_i∈𝒜_k (η_i^(k))^2 =ε_0^2max_j ∇g_j(x_k)^2 λ^(k)^2 ≤ε_0^2max_j ∇g_j(x_k)^2/λ_min(G_k^⊤G_k)λ^(k)⊤ G_k^⊤G_k λ^(k) ≤ε_0^2 n^2/d(1-ε) max_j ∇g_j(x_k)^2/λ_min(G_k^⊤G_k) λ^(k)⊤ G_k^⊤M_kM_k^⊤G_k λ^(k). The first inequality follows from [- g_i(x_k)]_+ ≤ε_0 ∇ g_i(x_k) and the last inequality follows from Lemma <ref>. From (<ref>) we have λ^(k)⊤ G_k^⊤ M_kM_k^⊤ G_k λ^(k) = ^⊤ M_k M_k^⊤ G_k(G_k^⊤ M_kM_k^⊤ G_k)^-1G_k^⊤ M_k M_k^⊤ ≤M_k^⊤^2, where we have used the fact that M_k^⊤ G_k(G_k^⊤ M_kM_k^⊤ G_k)^-1G_k^⊤ M_k is an orthogonal projection matrix and hence its maximum eigenvalue is equal to 1. From these inequalities, we obtain ∑_i∈𝒜_k (η_i^(k))^2([-g_i(x_k)]_+)^2 ≤ε_0^2 n^2/d(1-ε) max_j ∇g_j(x_k)^2/λ_min(G_k^⊤G_k) M_k^⊤^2 ≤ε_0^2 (1 + ε)/(1-ε) max_j ∇g_j(x_k)^2^2/λ_min(G_k^⊤G_k) ≤ε_0^2 (1+ε)/(1-ε)U_g^2 U_f^2/λ_min^*. The second inequality follows from Lemma <ref>. Hence, |η_i^(k)g_i(x_k) |=|η_i^(k)[-g_i(x_k)]_+ | = O(ε_0), satisfying (<ref>). Next, we derive (<ref>). 
When Algorithm <ref> terminates at iteration k̅, M_k̅^⊤∇ f(x_k̅) + M_k̅^⊤ G_k̅λ^(k̅)≤δ_1 holds, implying from Lemma <ref> that √(d(1-ε)/n^2)∇ f(x_k̅) + G_k̅λ^(k̅)≤δ_1 = √(d(1-ε)/n^2)ε_1 with probability at least 1-2exp(-C_0ε^2d). By definition of η^(k̅), we have ∇ f(x_k̅) + G_k̅λ^(k̅)=∇ f(x) + ∑_i = 1^mη_i ∇ g_i(x). Furthermore, because min_i λ_i^(k̅) > -ε_2 also holds, (x_k̅,η^(k̅)) is an (ε_1,ε_2,O(ε_0))-KKT pair. Now we prove that the iteration number k̅ for finding an (ε_1,ε_2,O(ε_0))-KKT pair is at most K, i.e., k̅≤ K. We will show that the function value strictly and monotonically decreases in the two directions (denoted as Case 1 and Case 2, here) at each iteration k (≤k̅-1) of Algorithm <ref>. * Case 1: When Z_k M_k^⊤=M_k^⊤ + M_k^⊤ G_k λ^(k)≥δ_1 and 0 ≤α_k ≤n/L(1+ε), we have α_k - Lα_k^2(1+ε)/2n≥1/2α_k. These relations and Proposition <ref> yield the following: f(x_k+1)≤ f(x_k) -1/2α_kd(1-ε)/n^2ε_1^2. If the step size α_k satisfies the condition of Proposition <ref>, f(x_k+1) - f(x_k) ≤ -1/2min(n/L(1+ε), ε_0/U_fn^2/(1+ε)d)d(1-ε)/n^2ε_1^2 holds and x_k+1 is feasible. * Case 2: Next, when M_k^⊤ + M_k^⊤ G_k λ^(k)≤δ_1, we update the point by d_k = -d/nM_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1 [-λ^(k)]_+. Since Algorithm <ref> does not terminate at iteration k, we have min_i λ_i^(k)<-ε_2, and this inequality leads to [-λ^(k)]_+^2≥ε_2^2. When then step size α_k satisfies 0≤α_k ≤(1-ε)λ_min(G_k^⊤ G_k)/(1+ε)L, from Proposition <ref> and α_k - Lα_k^2(1+ε)/2(1-ε)λ_min(G_k^⊤ G_k)≥1/2α_k, we have f(x_k+1)- f(x_k) ≤ - α_kd/2nε_2^2. If the step size α_k satisfies (<ref>) in Proposition <ref>, f(x_k+1)- f(x_k) ≤ - min((1-ε)λ_min^*/(1+ε)L, n/d(1-ε)λ_min^*/(1+ε)U_fε_0)d/2nε_2^2 holds and x_k+1 is feasible. Combining (<ref>) and (<ref>) and then summing over k and using the relation f^* ≤ f(x_k̅), we find that f^* - f(x_0) ≤ f(x_k̅) - f(x_0) ≤ -k̅δ(ε,ε_0,ε_1,ε_2) holds with probability as least 1 - 2k̅exp(-C_0ε^2n)-2k̅(m+5)exp(-C_0ε^2d). Accordingly, (<ref>) implies k̅≤ K, which completes the proof. If the original dimension n is large enough, δ(ε,ε_0,ε_1,ε_2) becomes δ(ε,ε_0,ε_1,ε_2) = min((1-ε)/2L(1+ε)d/nε_1^2, (1-ε)λ_min^*/2L(1+ε)d/nε_2^2) without the parameter ε_0 that is used in the definition of the active constraints. This means that the random projection allow us to ignore the boundary of the constraints, while the convergence rate becomes O(n/dmax(ε_1^-2,ε_2^-2)). Our methods become more efficient when calculating the gradient ∇ f(x_k) is difficult and require the use of finite difference or forward-mode automatic differentiation. We can obtain M_k^⊤ with O(d) function evaluations. On the other hand, calculating the full gradients requires O(n) function evaluations and this calculation is time-consuming when n is large. Hence, if the computational complexity per iteration is dominated by the gradient calculation, the total complexity is reduced compared with that of the deterministic algorithm. We can prove convergence of the deterministic version of our algorithm (i.e., M_k = I) by the same argument in Section <ref>. However, the iteration complexity becomes O(max (max(ε_1^-2, ε_2^-2), ε_0^-1max(ε_1^-2, ε_2^-2) ) ) and we cannot ignore the parameter ε_0. When calculating the gradient ∇ f(x_k) is difficult, the time complexity of the deterministic version to reach an approximate KKT point becomes O(n) × O(max (max(ε_1^-2, ε_2^-2), ε_0^-1max(ε_1^-2, ε_2^-2) ) ), which is worse than ours with randomness (O(d) × O(n/dmax(ε_1^-2,ε_2^-2))). 
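The complexity comparison above, like the enlarged step sizes in Propositions <ref> and <ref>, ultimately rests on the norm preservation of Lemma <ref>: for a Gaussian P_k∈ℝ^d× n, ||P_k v||^2/d concentrates around ||v||^2, so ||M_k^⊤ v|| = ||P_k v||/n is close to (√(d)/n)||v||, which is exactly the factor in δ_1 = √(d(1-ε)/n^2)ε_1. A few lines of NumPy make this concentration visible; the dimensions below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, d, trials = 10_000, 100, 200
v = rng.standard_normal(n)
ratios = []
for _ in range(trials):
    P = rng.standard_normal((d, n))            # the random matrix of Lemma <ref>
    ratios.append(np.linalg.norm(P @ v) ** 2 / (d * np.linalg.norm(v) ** 2))
print(f"min {min(ratios):.3f}, max {max(ratios):.3f}")   # both close to 1
# hence ||M^T v|| = ||P v|| / n is about (sqrt(d)/n) * ||v||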
Here, we evaluate the computational complexity per iteration of our algorithm. Let T_value, T_grad be the computational complexities of evaluating the function value and the gradient of f,g_i. Our method requires O(dn|𝒜_k| + m(T_value+T_grad) + mT_value|log(d/ε_0 n)|) complexity. The first term O(dn|𝒜_k|) comes from calculating M_k^⊤ G_k. Using automatic differentiation, we can reduce the complexity to O(|𝒜_k| T_grad). The second term O(m(T_grad+T_value)) comes from calculating the active sets 𝒜_k. The last term comes from the while-loop in the proposed method. From Propositions <ref> and <ref>, if hβ^k ≤ O(min(n^2ε_0/d,nε_0/d)) holds, the while-loop will terminate. Then, we check the feasibility at most O(|log(d/ε_0 n)|) times. The total computational complexity of our algorithm is estimated by multiplying the iteration complexity in Theorem <ref> and the above complexity per iteration. § ALGORITHM FOR NONLINEAR INEQUALITY CONSTRAINTS In this section, we extend the application of the randomized subspace gradient method to nonlinear constrained problems. The algorithm is summarized in Algorithm <ref>. §.§ Outline of our algorithm: RSG-NC Update of Iterates Algorithm <ref> calculates the sequence {x_k} by using the step size α_k and the Lagrange multiplier λ̅^(k)∈ℝ^|𝒜_k | corresponding to the constraints in 𝒜_k as x_k+1 = x_k - α_k M_kM_k^⊤ ( + G_kλ̅^(k)) = x_k - α_k M_k (I - M_k^⊤ G_k ((G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_kM_k^⊤ G_k )^-1. .(G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_k)M_k^⊤ or x_k+1 = x_k - ε_2 d/nα_k M_k M_k^⊤ G_k (G_k^⊤ M_k M_k^⊤ G_k)^-1d̅^(k). The search directions x_k+1 - x_k in (<ref>) and (<ref>) are descent directions with high probability as shown in Propositions <ref> and <ref> later in Section <ref>. Propositions <ref> and <ref> in Section <ref> ensure that if α_k are chosen to be less than some threshold, all points in {x_k} are feasible with high probability. Hence, the while-loop reducing the value of α_k stops after a fixed number of iterations. RSG-NC first computes (<ref>) and if the search direction is small enough and the Lagrange multiplier λ^(k), defined by (<ref>), is not nonnegative, it uses (<ref>) for the next iterate x_k+1. The direction d_k of (<ref>) is identical to the one (<ref>) in Algorithm <ref> when μ_k = 0. Here, we define the following matrices, R'_k := M_k^⊤ G_k((G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_kM_k^⊤ G_k )^-1 (G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_k, R_k:= I -R'_k, and y_k := ε_2 d/n(G_k^⊤ M_k M_k^⊤ G_k)^-1d̅^(k). We can verify that R_k and R'_k are projection matrices; hence, R_k^2=R_k and R'_k^2=R'_k. We can therefore rewrite the next iterate (<ref>) as x_k+1=x_k - α_k M_kR_k M_k^⊤ and (<ref>) as x_k+1=x_k - α_k M_k M_k^⊤ G_k y_k. When R_kM_k^⊤ = M_k^⊤ ( + G_k λ̅^(k)) is small enough, Lemma <ref> shows that, by choosing a specific value of {μ_k}, Z_k M_k^⊤ = M_k^⊤ ( + G_k λ^(k)) is also small. We recall here that Z_k is defined in (<ref>). Therefore, we will check whether x_k is a KKT point or not with the Lagrange multiplier λ^(k). As will be shown later in Theorem <ref>, when given inputs d,ε_0,ε_2,δ_1 = √(d(1-ε)/n^2)ε_1 and μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1s_k), Algorithm <ref> generates an output x_k that is guaranteed to be an (ε_1,ε_2,O(ε_0))-KKT point. §.§ Decrease in objective value We will make the following assumptions. * All constraints, g_i for ∀ i, are L_g-smooth on the feasible set. * There exists an interior point x, i.e., g_i(x)<0 for each i. 
Assumption <ref><ref> is satisfied when Assumption <ref><ref> is satisfied and g_i are twice continuously differentiable functions. Assumptions <ref><ref> and <ref><ref> imply that ∇ g_i are continuous and bounded on the feasible set (∇ g_i(x)≤ U_g). The following problems for all i: min_x∇ g_i(x) s.t. f(x)≤ f(x_0), [-g_i(x)]_+ ≤ε_0 ∇ g_i(x) ,  x∈𝒞 and problems for all i: max_x g_i(x) s.t. f(x)≤ f(x_0), ε_0 ∇ g_i(x) ≤ [-g_i(x)]_+ ,  x∈𝒞 have nonzero optimal values. This assumption means that ∇ g_i(x) is not zero when x is close to the boundary g_i(x) = 0 and g_i(x) is not zero when x is far from the boundary. If the problems (<ref>) and (<ref>) have no solutions, we set the optimal values to ∞ and -∞, respectively. We set l_g^ε_0 (>0) for the minimum of all optimal values of (<ref>) over all i and u_g^ε_0 (<0) for the maximum of all optimal values of (<ref>) over all i. Let g_* (<0) denote the minimum of all optimal values of the following problems over i: min_x g_i(x) s.t. f(x)≤ f(x_0),  x∈𝒞. g_* is bounded from Assumptions <ref><ref> and <ref><ref>. The following lemma implies that if the constraints are all convex, Assumption <ref> is not necessary because it is proved to hold from more general standard assumptions as follows. Suppose that Assumptions <ref><ref> and <ref> hold and that g_i are all convex. Then, Assumption <ref> holds. If the feasible set is not empty, from Assumptions <ref><ref> and <ref><ref>, there exist optimal solutions for (<ref>) and (<ref>), respectively. Suppose that there exists an optimal solution x̂^* such that ∇ g_i(x̂^*)=0 for (<ref>). Since [-g_i(x̂^*)]_+ ≤ε_0∇ g_i(x̂^*) holds, g_i(x̂^*)=0. Similarly under the assumption that there exists x̃^* such that g_i(x̃^*)=0 for (<ref>), we see that ε_0 ∇ g_i(x̃^*)≤ [- g_i(x̃^*)]_+ holds and obtain ∇ g_i(x̃^*)=0. By the convexity of g_i, for all x and x^* ∈{x̂^*, x̃^*} g_i(x)≥ g_i(x^*)+⟨∇ g_i(x^*),x-x^*⟩ = 0 holds for any i; hence, g_i(x)≥ 0. This contradicts Assumption <ref><ref>. Next, we prove that the update direction in Algorithm <ref> is a descent direction for a specific value of {μ_k}_k. Let us consider the orthogonal projection of M_k^⊤∇ f(x_k) into the image of M_k^⊤ G_k: M_k^⊤∇ f(x_k)= -M_k^⊤ G_kλ^(k) + Z_k M_k^⊤, where λ^(k) is defined by (<ref>). Suppose that Assumption <ref> holds. Let μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1 s_k). Then, * ((G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_kM_k^⊤ G_k )^-1 exists, * the following inequalities hold: 2/3Z_k M_k^⊤^2 ≤^⊤ M_k R_k M_k^⊤≤ 2 Z_k M_k^⊤^2, Z_k M_k^⊤^2≤R_k M_k^⊤^2 ≤ 2 Z_k M_k^⊤^2, and * d_k = - M_kR_kM_k^⊤ is a descent direction. Let ν = μ_ks_k/M_k^⊤; we will show that (G_k-ν^⊤)^⊤ M_kM_k^⊤ G_k is non-singular. Here, (G_k-ν^⊤)^⊤M_kM_k^⊤G_k = G_k^⊤M_kM_k^⊤G_k -ν^⊤M_kM_k^⊤G_k = G_k^⊤M_kM_k^⊤G_k + νλ^(k)⊤ G_k^⊤M_kM_k^⊤G_k = (I + νλ^(k)⊤)(G_k^⊤M_kM_k^⊤G_k), where the second equality follows from (<ref>). We will confirm that I + νλ^(k)⊤ is invertible: notice that ν is an eigenvector of I + νλ^(k)⊤ whose corresponding eigenvalue is 1+ν^⊤λ^(k). All the other eigenvectors are orthogonal to λ^(k) and their corresponding eigenvalues are equal to one. From the definition of ν,μ_k,λ^(k), |ν^⊤λ^(k)| = μ_k/M_k^⊤|s_k^⊤(G_k^⊤M_k M_k^⊤G_k)^-1G_k^⊤M_k M_k| ≤μ_k/M_k^⊤ M_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1s_kM_k^⊤ = μ_k √(s_k^⊤(G_k^⊤M_k M_k^⊤G_k)^-1s_k) = 1/2 holds, as M_k^⊤ G_k (G_k^⊤ M_k M_k^⊤ G_k)^-1s_k^2=s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1s_k. Hence, we obtain 1/2 ≤ν^⊤λ^(k) +1≤ 3/2 which implies that (I + νλ^(k)⊤) is invertible as all of its eigenvalues are non-zero. 
From Assumption <ref>, G_k^⊤ M_k M_k^⊤ G_k is also invertible; thus, from (<ref>), ((G_k-ν^⊤)^⊤ M_kM_k^⊤ G_k)^-1 exists. Next, we calculate ^⊤ M_k R_k M_k^⊤ and R_k M_k^⊤^2. From (<ref>) and the definition of the orthogonal projection, we obtain M_k^⊤^2 = M_k ^⊤ G_k λ^(k)^2 + Z_k M_k^⊤^2. In order to project M_k^⊤ by R_k, first of all, we need to rewrite R'_k in (<ref>) as R_k' = M_k^⊤G_k((G_k-ν^⊤)^⊤M_kM_k^⊤G_k )^-1(G_k-ν^⊤)^⊤M_k = M_k^⊤G_k(G_k^⊤M_kM_k^⊤G_k)^-1(I + νλ^(k)⊤)^-1(G_k-ν^⊤)^⊤M_k using (<ref>). We project M_k^⊤ by R_k' and obtain R_k' M_k^⊤ = M_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1(I + νλ^(k)⊤ )^-1 (G_k^⊤M _k M_k^⊤- M_k^⊤^2 ν) = M_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1(I + νλ^(k)⊤ )^-1 (-G_k^⊤M _k M_k^⊤G_kλ^(k) - M_k^⊤^2 ν) = -M_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1(I + νλ^(k)⊤ )^-1 ((I+ νλ^(k)⊤)G_k^⊤M _k M_k^⊤G_kλ^(k) + Z_kM_k^⊤^2 ν) = -M_k^⊤G_k λ^(k) - Z_kM_k^⊤^2 M_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1(I + νλ^(k)⊤ )^-1 ν. The second equality follows from (<ref>) and the third equality comes from (<ref>). Note that ν is a eigenvector of (I + νλ^(k)⊤)^-1, and (I + νλ^(k)⊤)^-1ν = ν/1+λ^(k)⊤ν holds. These relations leads us to R_k' M_k^⊤ = -M_k^⊤ G_k λ^(k) - Z_kM_k^⊤^2/1 + λ^(k)⊤ν M_k^⊤ G_k (G_k^⊤ M_k M_k^⊤ G_k)^-1ν. From this equation, we obtain ^⊤M_k R'_k M_k^⊤ = -^⊤M_kM_k^⊤G_k λ^(k) - Z_kM_k^⊤^2/1 + λ^(k)⊤ ν ^⊤M_kM_k^⊤G_k (G_k^⊤M_k M_k^⊤G_k)^-1ν = M_k^⊤G_k λ^(k) ^2 + ν^⊤λ^(k) / ν^⊤λ^(k)+1 Z_kM_k^⊤^2, where the last equality follows from (<ref>). We also obtain R'_kM_k^⊤^2 = M_k^⊤ G_k λ^(k)^2 + 2ν^⊤λ^(k)/ν^⊤λ^(k)+1Z_kM_k^⊤^2 +Z_kM_k^⊤^4/(ν^⊤λ^(k)+1)^2ν^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1ν. Recalling that R_k = I - R_k', we have ^⊤M_k R_k M_k^⊤ = M_k^⊤^2 - ^⊤M_k R'_k M_k^⊤ = M_k ^⊤G_k λ^(k)^2 + Z_kM_k^⊤^2 - M_k^⊤G_k λ^(k) ^2 - ν^⊤λ^(k) / ν^⊤λ^(k)+1 Z_kM_k^⊤^2 = 1/ ν^⊤λ^(k)+1 Z_kM_k^⊤^2. The second equality follows from (<ref>) and (<ref>). Using (<ref>) and 1/2 ≤ν^⊤λ^(k) +1≤ 3/2, we find that 2/3Z_kM_k^⊤^2 ≤^⊤ M_k R_k M_k^⊤≤ 2 Z_kM_k^⊤^2. Using R_k = I - R_k', we also have R_k M_k^⊤^2 = M_k^⊤^2 - 2 ^⊤M_k R'_k M_k^⊤+ R'_kM_k^⊤^2 = Z_kM_k^⊤^2 + Z_kM_k^⊤^4/(ν^⊤λ^(k)+1)^2ν^⊤(G_k^⊤M_k M_k^⊤G_k)^-1 ν. The last equality follows from (<ref>),(<ref>) and (<ref>). We can now evaluate the last term of (<ref>) as Z_kM_k^⊤^2ν^⊤(G_k^⊤M_k M_k^⊤G_k)^-1 ν = μ_k^2Z_kM_k^⊤^2/M_k^⊤^2 s_k^⊤(G_k^⊤M_k M_k^⊤G_k)^-1 s_k = 1/4Z_kM_k^⊤^2/M_k^⊤^2 ≤1/4. The first equality follows from definition of ν, the second equality follows from definition of μ_k, and the last inequality follows from (<ref>). Finally, by using (<ref>) and the above upper bound, we obtain Z_kM_k^⊤^2≤R_k M_k^⊤^2 ≤ 2 Z_kM_k^⊤^2. Lastly, we can easily confirm from (<ref>) that d_k = - M_kR_kM_k^⊤ is a descent direction. Let μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1s_k). Under Assumptions <ref><ref> and <ref>, when d_k = - M_k^⊤ ( + G_k λ̅^(k)) of (<ref>) in Algorithm <ref>, we have f(x_k+1)≤ f(x_k) - (2/3α_k -α_k^2 L(1+ε)/n)Z_kM_k^⊤^2 with probability at least 1-2exp(-C_0 ε^2 n). Using the same argument as in Proposition <ref>, from the L-smoothness (<ref>) of the objective function f, Lemma <ref>, and x_k+1-x_k = -α_k M_k R_k M_k^⊤ from (<ref>), we obtain f(x_k+1) - f(x_k) ≤- α_k^⊤M_kR_k M_k^⊤+α_k^2L/2 M_kR_k M_k^⊤^2 ≤- α_k^⊤M_kR_k M_k^⊤+α_k^2L(1+ε)/2n R_k M_k^⊤^2. The last inequality follows form Lemma <ref>. Combining (<ref>) and (<ref>), we find that f(x_k+1) - f(x_k) ≤ -(2/3α_k -α_k^2 L(1+ε)/n)Z_kM_k^⊤^2 holds with probability at least 1-2exp(-C_0 ε^2 n). 
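As a companion to the proposition above, the sketch below forms μ_k, λ̅^(k), and the first RSG-NC direction d_k = -M_k^⊤(∇ f(x_k)+G_kλ̅^(k)). One caveat: s_k is taken here to be the vector of norms (||M_k^⊤∇ g_i(x_k)||)_{i∈𝒜_k}, which is consistent with the bounds used in the proofs but is our assumption, since the listing of Algorithm <ref> is not reproduced in this section.

import numpy as np

def rsgnc_first_direction(grad_f, G, M):
    """Sketch of the RSG-NC direction d_k = -M_k^T (grad f + G_k lambda_bar^(k))."""
    Mg = M.T @ grad_f                         # M_k^T grad f(x_k)
    MG = M.T @ G                              # M_k^T G_k
    s = np.linalg.norm(MG, axis=0)            # assumed: s_k = (||M_k^T grad g_i||)_i
    A = MG.T @ MG                             # G_k^T M_k M_k^T G_k
    mu = 0.5 / np.sqrt(s @ np.linalg.solve(A, s))
    # rows of (G_k - mu_k * grad f(x_k) * s_k^T / ||M_k^T grad f(x_k)||)^T M_k
    tilt = MG.T - (mu / np.linalg.norm(Mg)) * np.outer(s, Mg)
    lam_bar = -np.linalg.solve(tilt @ MG, tilt @ Mg)   # the system defining lambda_bar^(k)
    dk = -(Mg + MG @ lam_bar)                 # reduced direction; the step is x + alpha * M @ dk
    return dk, lam_bar, mu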
Suppose that Assumptions <ref><ref> and <ref> hold and let d_k= -ε_2d/n M_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1d̅^(k), i.e., (<ref>) with d̅^(k) defined by (<ref>) or (<ref>) in Algorithm <ref> and min_iλ^(k)_i < - ε_2. Then, f(x_k+1)- f(x_k) ≤ -α_k(1 - (1+ε)Lα_k|𝒜_k|/(1-ε)λ_min(G_k^⊤ G_k))ε_2^2d/2n holds with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). First, we show that M_kd_k is a descent direction for both d̅^(k) defined by (<ref>) and by (<ref>). We have ^⊤ M_kd_k = -ε_2 d/n^⊤ M_kM_k^⊤ G_k(G_k^⊤ M_kM_k^⊤ G_k)^-1d̅^(k) = ε_2 d/nλ^(k)⊤d̅^(k) using the definition (<ref>) of λ^(k). When -1^⊤λ^(k)≥ε_2/2 and d̅^(k) = 1, i.e., (<ref>), (<ref>) gives ^⊤ M_kd_k ≤ -ε_2^2 d/2n. Moreover, when d̅_i^(k) = χ_-(λ_i^(k)) + ∑_λ_j^(k)≤0 -λ_j^(k)/2∑_λ_j^(k)>0λ_j^(k)χ_+(λ_i^(k) ), i.e., (<ref>), we have ^⊤M_kd_k = ε_2 d/n (∑_λ_j^(k) ≤0 λ_j^(k) + ∑_λ_j^(k) > 0 λ_j^(k)∑_λ_i^(k) ≤0 -λ_i^(k)/2∑_λ_i^(k)>0 λ_i^(k)) = ε_2 d/n (∑_λ_j^(k) ≤0 λ_j^(k) - 1/2∑_λ_j^(k) ≤0 λ_j^(k) ) = ε_2 d/2n∑_λ_j^(k) ≤0 λ_j^(k) < -ε^2_2 d/2n. The first equality follows from (<ref>) and definition of d̅^(k). The last inequality follows from min_iλ_i^(k)<-ε_2. Thus, in both cases, M_kd_k is a descent direction. Next, we will evaluate the decrease f(x_k+1)- f(x_k). By using (<ref>) from Assumption <ref><ref> and x_k+1 - x_k = α_k M_kd_k =-α_k M_k M_k ^⊤ G_k y_k from (<ref>), we have f(x_k+1)≤ f(x_k) + α_k ^⊤ M_k d_k + L/2α_k^2 M_kM_k^⊤ G_k y_k^2. We apply Lemma <ref> to the last term and obtain M_kM_k^⊤G_ky_k^2 ≤1+ε/nM_k^⊤G_ky_k^2 = ε_2^2d^2(1+ε)/n^3 d̅^(k)⊤ (G_k^⊤M_k M_k^⊤G_k)^-1 d̅^(k) ≤ε_2^2d^2(1+ε)/n^3λ_min(G_k^⊤M_k M_k^⊤G_k)d̅^(k)^2. The first inequality follows from Lemma <ref> with M_k = 1/nP_k^⊤ and the first equality follows from definition of y_k in (<ref>). Furthermore, we evaluated λ_min(G_k^⊤ M_k M_k^⊤ G_k) in (<ref>) with probability at least 1-2exp(-C_0ε^2d). Then, we have M_kM_k^⊤ G_ky_k^2 ≤ε_2^2d(1+ε)/n(1-ε)λ_min(G_k^⊤ G_k)d̅^(k)^2. We can compute an upper bound for d̅^(k), where d̅^(k) is defined by (<ref>) or (<ref>). When -1^⊤λ^(k)≥ε_2/2 and d̅^k = 1 from (<ref>), it is clear that d̅^(k) = 1= √(|𝒜_k|)<√(d) (by Assumption <ref><ref>). When -1^⊤λ^(k) < ε_2/2 and d̅^(k) is defined by (<ref>), we have -∑_λ^(k)_i≤ 0λ^(k)_i < ∑_λ^(k)_i> 0λ^(k)_i + ε_2/2, and since min_iλ_i^(k)< - ε_2<0, we have ε_2 < -∑_λ^(k)_i≤0λ^(k)_i. From (<ref>) and (<ref>), we find that ε_2/2 < ∑_λ^(k)_i> 0λ^(k)_i. These relations leads us to ∑_λ^(k)_j ≤0 -λ^(k)_j/2∑_λ^(k)_j>0 λ^(k)_j < ε_2/4∑_λ^(k)_j>0 λ_j^(k) + 1/2 < 1/2 + 1/2 = 1. The first inequality follows from (<ref>) and the last inequality follows from (<ref>). Accordingly, we have d̅^(k) = √(∑_λ^(k)_j ≤ 0 1 + ∑_λ^(k)_j >0(∑_λ^(k)_j <0 -λ^(k)_j/2∑_λ^(k)_j>0λ^(k)_j)^2)≤√(|𝒜_k|). Thus, d̅^(k)≤√(|𝒜_k |) holds when d̅^(k) is defined by (<ref>) and by (<ref>). Combining (<ref>), (<ref>), (<ref>), (<ref>) and d̅^(k)≤√(|𝒜_k|), we obtain the following lower bound of the step size, f(x_k+1)- f(x_k) ≤- α_k ε_2^2 d/2n + L/2α_k^2 M_kM_k^⊤G_ky_k^2 ≤- α_k ε_2^2 d/2n + L/2α_k^2 (1+ε)ε_2^2d/n(1-ε)λ_min(G_k^⊤G_k)d̅^(k)^2 ≤-α_k(1 - (1+ε)Lα_k|𝒜_k|/(1-ε)λ_min(G_k^⊤G_k))ε_2^2d/2n. The first inequality follows from (<ref>) and, (<ref>) or (<ref>). The second inequality follows from (<ref>) and the last inequality from d̅^(k)≤√(|𝒜_k|). §.§ Feasibility Here, by utilizing the L_g-smoothness of the constraints from Assumption <ref><ref>, we derive conditions on the step size so that the sequence {x_k} generated by Algorithm <ref> is feasible. 
Let μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1 s_k) and assume that x_k is feasible. Furthermore, suppose that Assumptions <ref><ref>, <ref>, <ref>, and <ref> hold. Then, if the step size α_k satisfies 0 ≤α_k ≤min( √(λ_min^*) l_g^ε_0(1-ε)/3U_fU_gL_g(1+ε)^2n/√(d), 1/U_f√(n^3/2d(1+ε)^2)-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*)), when d_k =- M_kR_kM_k^⊤ of (<ref>) in Algorithm <ref>, x_k+1 is feasible with probability at least 1-2exp(-C_0ε^2n)-(2|𝒜_k|+6)exp(-C_0ε^2d). First, let us consider the active constraints g_i (i∈𝒜_k). Since g_i are L_g-smooth from Assumption <ref><ref>, we have g_i(x_k+1) ≤ g_i(x_k) + ∇ g_i(x_k)^⊤ (x_k+1 - x_k) + L_g/2x_k+1- x_k^2. From (<ref>) and x_k+1 - x_k = -α_k M_k R_k M_k^⊤ of (<ref>), we have g_i(x_k+1)≤g_i(x_k) - α_k ∇g_i(x_k)^⊤M_kR_k M_k^⊤+α_k^2L_g/2 M_kR_k M_k^⊤^2. Note that λ̅^(k) of (<ref>) is the solution of (G_k - μ_k s_k^⊤/M_k^⊤)^⊤ M_k (M_k^⊤ + M_k^⊤ G_kλ̅^(k)) = 0. Recalling that R'_k := M_k^⊤ G_k((G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_kM_k^⊤ G_k )^-1 (G_k-μ_k/M_k^⊤s_k^⊤)^⊤ M_k, R_k:= I -R'_k, we deduce that R_k M_k^⊤ = M_k^⊤ + M_k^⊤ G_k λ̅^(k); thus, G_k^⊤ M_k R_k M_k^⊤ = μ_k s_k/M_k^⊤^⊤ M_k R_k M_k^⊤, which is equivalent to ∇ g_i(x_k)^⊤ M_k R_k M_k^⊤ = μ_k M_k^⊤∇ g_i(x_k)/ M_k^⊤^⊤ M_k R_k M_k^⊤, for all i ∈𝒜_k. From this equation and (<ref>), we obtain that for all i ∈𝒜_k, g_i(x_k+1) ≤g_i(x_k)- μ_kα_k M_k^⊤∇g_i(x_k)/M_k^⊤∇f(x_k)^⊤M_k R_k M_k^⊤+α_k^2L_g(1+ε)/2n R_k M_k^⊤^2 ≤g_i(x_k)- 2/3μ_kα_k M_k^⊤∇g_i(x_k)/M_k^⊤∇f(x_k)Z_k M_k^⊤^2 +α_k^2L_g(1+ε)/nZ_k M_k^⊤^2. The first inequality follows from (<ref>) and Lemma <ref>. The last inequality follows from (<ref>) and (<ref>). Hence, if the step size α_k satisfies 0≤α_k ≤μ_kM_k^⊤∇ g_i(x_k)/M_k^⊤∇ f(x_k)2n/3L_g(1+ε), g_i(x_k+1)≤ 0 holds for all i ∈𝒜_k. We now show that a nonzero lower bound of α_k exists by computing a lower bound for M_k^⊤∇ g_i(x_k)/M_k^⊤∇ f(x_k). We apply Lemma <ref> to and ∇ g_i(x_k) (i∈𝒜_k); from Assumptions <ref> and <ref>, it follows that M_k^⊤∇ g_i(x_k)/M_k^⊤≥√((1-ε)/(1+ε))∇ g_i(x_k)/≥√((1-ε)/(1+ε))l_g^ε_0/U_f (i ∈𝒜_k) holds with probability at least 1-2(|𝒜_k|+1)exp(-C_0ε^2d). This inequality shows that M_k^⊤∇ g_i(x_k)/M_k^⊤ has a nonzero lower bound. Next we find a lower bound of μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1 s_k). First, we compute an upper bound for s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1 s_k. We apply Lemma <ref> to ∇ g_i(x_k) (i∈𝒜_k) with M_k = 1/n P_k^⊤; from (<ref>), we find that s_k^⊤(G_k^⊤M_k M_k^⊤G_k)^-1 s_k ≤s_k^2/λ_min(G_k^⊤M_k M_k^⊤G_k) ≤n^2/d(1-ε)λ_min(G_k^⊤G_k) ∑_i ∈𝒜_kM_k^⊤∇g_i(x_k)^2 ≤(1+ε)/(1-ε)λ_min(G_k^⊤G_k) ∑_i ∈𝒜_k∇g_i(x_k)^2 < |𝒜_k|U_g^2(1+ε)/(1-ε)λ_min(G_k^⊤G_k) ≤dU_g^2(1+ε)/(1-ε)λ_min(G_k^⊤G_k) ≤dU_g^2(1+ε)/(1-ε)λ_min^* holds. The second inequality follows from (<ref>) and the third inequality follows from Lemma <ref>. The 4th and 5th inequalities come from Assumptions <ref><ref> and <ref>. Inequality (<ref>) implies that μ_k has a lower bound. Hence, upon combining (<ref>), (<ref>) and (<ref>), we see that if the step size α_k satisfies 0≤α_k ≤√(λ_min^*) l_g^ε_0(1-ε)/3U_fU_gL_g(1+ε)^2n/√(d), then g_i(x_k)≤ 0 (i∈𝒜_k) holds. As for the nonactive constraints (i.e., i∉𝒜_k), from the L_g-smoothness of constraint functions (<ref>), we have that for all i ∉𝒜_k, g_i(x_k+1)≤ g_i(x_k) + α_k ∇ g_i(x_k) M_kR_kM_k^⊤∇ f(x_k) +α_k^2L_g/2M_kR_kM_k^⊤∇ f(x_k)^2. 
In solving the quadratic inequality, g_i(x_k) + ∇ g_i(x_k)z +L_g/2z^2 ≤ 0 with z = α_k M_kR_kM_k^⊤∇ f(x_k), we find that if the step size α_k satisfies 0≤α_k ≤1/L_gM_kR_kM_k^⊤∇ f(x_k)-2L_g g_i(x_k)/∇ g_i(x_k) + √(∇ g_i(x_k)^2 - 2L_g g_i(x_k)), then g_i(x_k+1)≤ 0 (i ∉𝒜_k). Assumptions <ref><ref>, <ref>, and <ref> yield ∇ g_i≤ U_g,-∞ < g_*≤ g_i≤ u_g^ε_0<0. From these relations, we find that x_k+1 is feasible if the step size α_k satisfies 0≤α_k ≤1/M_kR_kM_k^⊤∇ f(x_k)-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*). From Lemma <ref> and (<ref>), M_k R_k M_k^⊤^2 ≤1+ε/nR_kM_k^⊤^2 ≤2(1+ε)/nZ_k M_k^⊤^2 ≤2(1+ε)/nM_k^⊤^2 ≤2d(1+ε)^2/n^3U_f^2 hold with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). The first inequality follows from Lemma <ref> with M_k = 1/nP_k^⊤ and the second inequality follows from (<ref>). The third inequality follows from (<ref>) and the last inequality follows from Lemma <ref> with M_k^⊤ = 1/nP_k and Assumption <ref><ref>. (<ref>) and (<ref>) together yield 0≤α_k ≤1/U_f√(n^3/2d(1+ε)^2)-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*). The upper bound of the step size α_k in Proposition <ref> consists of two terms, an O(n/d^1/2) term and an O(n^3/2/d^1/2) term. When the original dimension n is large enough, the O(n^3/2/d^1/2) term becomes larger than the O(n/d^1/2) term. Accordingly, the step size condition becomes 0≤α_k ≤√(λ_min^*) l_g^ε_0(1-ε)/3U_fU_gL_g(1+ε)^2n/√(d). Next, we prove that there exists a non-zero lower bound for the step size in the second direction d_k = -M_k^⊤ G_ky_k. Suppose that Assumptions <ref><ref>,<ref>, <ref>, and <ref> hold and that x_k is feasible and min_i λ_i^(k) < -ε_2. Then, if the step size α_k satisfies 0≤α_k ≤min(2(1-ε)λ_min^*/ε_2(1+ε)L_gd, 1/U_fL_g√((λ_min^*)^3(1-ε)^3/d^3(1+ε)^3),. . √((1-ε)λ_min^*/1+ε)√(n)/dε_2-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*)), when d_k = -ε_2d/n M_k^⊤ G_k(G_k^⊤ M_k M_k^⊤ G_k)^-1d̅^(k) of (<ref>) with d̅^(k) defined by (<ref>) or (<ref>) in Algorithm <ref>, x_k+1 is feasible with probability at least 1-2exp(-C_0ε^2n)-6exp(-C_0ε^2d). Regarding the active constraints, from the L_g-smoothness of g_i to (<ref>) and x_k+1- x_k = -α_k M_k M_k^⊤ G_k y_k of (<ref>), we have g_i(x_k+1) ≤ g_i(x_k) -α_k ∇ g_i(x_k)^⊤ M_kM_k^⊤ G_ky_k +L_g/2α_k^2 M_kM_k^⊤ G_k y_k ^2, which leads to g_i(x_k+1) ≤g_i(x_k) -α_k ∇g_i(x_k)^⊤M_kM_k^⊤G_ky_k +L_g/2α_k^2 ε_2^2 d(1+ε)/n(1-ε)λ_min(G_k^⊤G_k)d̅^(k)^2 = g_i(x_k) - α_k ε_2d/nd̅_i^(k) +L_g/2α_k^2 ε_2^2 d(1+ε)/n(1-ε)λ_min(G_k^⊤G_k)d̅^(k)^2 = g_i(x_k) - α_k ε_2d/nd̅_i^(k) +L_g/2α_k^2 ε_2^2 d(1+ε)/n(1-ε)λ_min^*d̅^(k)^2, with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). The first inequality follows from (<ref>) and the first equality follows from definition (<ref>) of y_k. Hence, if the step size satisfies -α_k d̅^(k)_i +L_g/2α_k^2 ε_2(1+ε)/(1-ε)λ_min^*d̅^(k)^2≤ 0, this direction preserves feasibility. Therefore, we have 0≤α_k ≤2(1-ε)λ_min^*/ε_2(1+ε)L_gd̅_i^(k)/d̅^(k)^2. Next, we evaluate the lower bound of d̅_i^(k)/d̅^(k)^2 with d̅^(k) defined by (<ref>) and by (<ref>). From the proof of Proposition <ref>, d̅^(k)^2 ≤|𝒜_k | holds for both (<ref>) and (<ref>). In the case of d̅^(k) = 1, i.e., (<ref>) in Algorithm <ref>, we have that d̅_i^(k) = 1 and d̅^(k)_i/d̅^(k)^2 ≥ 1/|𝒜_k| > 1/d from Assumption <ref><ref>. In the case of d̅^(k)_i = χ_-(λ_i^(k)) + ∑_λ_j^(k)≤ 0 -λ_j^(k)/2∑_λ_j^(k)>0λ_j^(k)χ_+(λ_i^(k)), i.e., (<ref>) in Algorithm <ref>, when λ_i^(k) is non-positive, d̅_i^(k) = 1 and d̅^(k)_i/d̅^(k)^2 > 1/d. 
When λ_i^(k)> 0, we have d̅^(k)_i/d̅^(k)^2 ≥∑_λ^(k)_j ≤0 (-λ^(k)_j)/2|𝒜_k|∑_λ^(k)_j > 0λ^(k)_j > ε_2/2|𝒜_k|λ^(k)_1 > ε_2/2dλ^(k)_1 > 0. The first inequality follows from the definition of d̅_i^(k) of (<ref>) and d̅^(k)≤√(|𝒜_k|). The second inequality follows from min_iλ^(k)_i < -ε_2 and the last inequality follows from Assumption <ref><ref>. Furthermore, from (<ref>), we have λ^(k)_1 ≤√(|𝒜_k|)λ^(k)≤√(d(1+ε)/(1-ε))U_f/√(λ_min(G_k^⊤ G_k))≤√(d(1+ε)/(1-ε))U_f/√(λ_min^*) with probability at least 1-4exp(-C_0ε^2d). Combining (<ref>) and (<ref>), we have d̅^(k)_i/d̅^(k)^2≥1/2U_f√(λ_min^*(1-ε)/d^3(1+ε))ε_2. From (<ref>) and, (<ref>) or d̅^(k)_i/d̅^(k)^2 > 1/d, if the step size α_k satisfies 0≤α_k ≤min(2(1-ε)λ_min^*/ε_2(1+ε)L_gd, 1/U_fL_g√((λ_min^*)^3(1-ε)^3/d^3(1+ε)^3)), g_i(x_k+1)≤ 0 is satisfied for the active constraints. If i∉𝒜_k, we can apply the same argument as in (<ref>) of Proposition <ref> by replacing M_k^⊤ R_k M_k^⊤ with M_kM_k^⊤ G_k y_k. Thus, we have 0≤α_k ≤1/M_kM_k^⊤ G_k y_k -2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*). From (<ref>), (<ref>), and d̅^(k)≤√(|𝒜_k|)≤√(d), if the step size α_k satisfies 0≤α_k ≤√((1-ε)λ_min^*/1+ε)√(n)/dε_2-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*), g_i(x_k+1) ≤ 0 for the non-active constraints with probability at least 1-2exp(-C_0ε^2n)-2exp(-C_0ε^2d). Following a similar argument to Proposition <ref>, the upper bound of the step size α_k in Proposition <ref> consists of three terms, an O(1/d) term, an O(1/d^3/2) term, and an O(n^1/2/d) term. When the original dimension n is large enough, the O(n^1/2/d) term becomes larger than other terms and the step size conditions can be written as 0≤α_k ≤min(2(1-ε)λ_min^*/ε_2(1+ε)L_gd, 1/U_fL_g√((λ_min^*)^3(1-ε)^3/d^3(1+ε)^3)). §.§ Global convergence We will construct η^(k)∈ℝ^m from λ^(k) of Algorithm <ref> in the same way as described in Section <ref>. Suppose that Assumptions <ref>,<ref>,<ref> and <ref> hold. Let the optimal value of (<ref>) be f^*(> -∞), and let δ(ε,ε_0,ε_1,ε_2) = min( min(O(n),O(n/√(d)),O(√(n^3/d)| u_g^ε_0|))d/n^2ε_1^2,. .min(O(d^-1),O(ε_2^-1d^-1),O(d^-3/2),O(√(n)/dε_2| u_g^ε_0|)) d/nε_2^2 ). Then, Algorithm <ref> generates an (ε_1,ε_2,O(ε_0))-KKT pair from inputs d, ε_0, ε_2, δ_1 = √(d(1-ε)/n^2)ε_1, μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)^-1s_k) within K:= ⌈f(x_0)- f^*/δ(ε,ε_0,ε_1,ε_2)⌉ iterations with probability at least 1 - 4Kexp(-C_0ε^2n)-2K(d+6)exp(-C_0ε^2d). The points {x_k} are feasible because the conditions of Propositions <ref> and <ref> are satisfied. Hence, (<ref>) is satisfied. Furthermore, we can prove (<ref>) in a similar way to (<ref>) in Theorem <ref>. Next, we prove (<ref>) and (<ref>). If Algorithm <ref> stops, we have d_k≤δ_1 and min_iλ_i^(k)≥ -ε_2. The second inequality is identical to (<ref>). From (<ref>) and (<ref>), we obtain δ_1 = √(d(1-ε)/n^2)ε_1≥M_k^⊤(+ G_k λ̅^(k)) = R_k M_k^⊤ ≥Z_k M_k^⊤ = M_k^⊤(+ G_k λ^(k)) ≥√(d(1-ε)/n^2)+ G_k λ^(k). The second inequality follows from (<ref>) and the last inequality follows from Lemma <ref>. Then, ε_1 ≥ + G_k λ^(k) = + ∑_i = 1^m η_i^(k)∇ g_i(x_k) holds with probability at least 1-2exp(-C_0ε^2d), and we have confirmed that (x_k,η^(k)) is an (ε_1,ε_2, O(ε_0))-KKT pair. Now let us prove that Algorithm <ref> terminates at the k̅th iteration with k̅≤ K, by using the same argument as in Theorem <ref>. Assuming an arbitrary iteration k ≤k̅-1, we will show that the function value strictly and monotonically decreases in the two directions. 
* Case 1: When M_k^⊤( + G_kλ̅^(k))> δ_1 and 0 ≤α_k≤n/3L(1+ε) hold, we have δ_1^2 = d(1-ε)/n^2ε_1^2 <M_k^⊤( + G_kλ̅^(k))^2 =R_kM_k^⊤^2 < 2Z_k M_k^⊤^2 from (<ref>) and α_k - 3α_k^2 L(1+ε)/2n≥1/2α_k. These relations together with Proposition <ref> lead us to f(x_k+1) - f(x_k) ≤ - 1/6α_k d(1-ε)/n^2ε_1^2, when the step size α_k satisfies 0 ≤α_k ≤n/3L(1+ε). For the first direction d_k = - R_kM_k^⊤ of (<ref>), Proposition <ref> allows us to set the step size α_k as α_k = min( n/3L(1+ε), √(λ_min^*) l_g^ε_0(1-ε)/3U_fU_gL_g(1+ε)^2n/√(d), 1/U_f√(n^3/2d(1+ε)^2)-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*) ) = min(O(n),O(n/√(d)),O(√(n^3/d)|u_g^ε_0|)). Then, f(x_k+1) - f(x_k) ≤ - min(O(n),O(n/√(d)),O(√(n^3/d)| u_g^ε_0|))d/n^2ε_1^2. * Case 2: When M_k^⊤ + M_k^⊤ G_k λ̅^(k)≤δ_1, we update the point by d_k = M_k^⊤ G_k y_k. Since Algorithm <ref> does not terminate at iteration k, we have min_i λ_i^(k) < -ε_2. When the step size α_k satisfies 0 ≤α_k ≤(1-ε)λ_min(G_k^⊤ G_k)/2(1+ε)|𝒜_k| L, from Proposition <ref> and the following inequality, α_k - (1+ε)L α_k^2 |𝒜_k |/(1-ε)λ_min(G_k^⊤ G_k)≥1/2α_k, we have f(x_k+1) - f(x_k) ≤ -α_k ε_2^2 d/4n. By Proposition <ref>, we can set the step size α_k to α_k = min( (1-ε) λ_min^*/2(1+ε) Ld, 2(1-ε)λ_min^*/ε_2(1+ε)L_gd, 1/U_fL_g√((λ_min^*)^3(1-ε)^3/d^3(1+ε)^3),. .√((1-ε)λ_min^*/1+ε)√(n)/dε_2-2u_g^ε_0/U_g+√(U_g^2 - 2L_gg_*)) = min(O(d^-1),O(ε_2^-1d^-1),O(d^-3/2),O(√(n)/dε_2| u_g^ε_0|)). Accordingly, we have f(x_k+1) - f(x_k)≤ -min(O(d^-1),O(ε_2^-1d^-1),O(d^-3/2),O(√(n)/dε_2| u_g^ε_0|)) d/nε_2^2. From the relations (<ref>) and (<ref>), Algorithm <ref> decreases the objective function value by δ(ε,ε_0,ε_1,ε_2): f(x_k+1) - f(x_k)≤- δ(ε,ε_0,ε_1,ε_2) <0. Summing over k, we find that f^* - f(x_0) ≤ f(x_k̅) - f(x_0) ≤ -k̅δ(ε,ε_0,ε_1,ε_2), which implies k̅≤ K, with probability at least 1 - 4k̅exp(-C_0ε^2n)-2k̅(d+6)exp(-C_0ε^2d). If the original dimension n is large enough, the denominator of the iteration number K, δ, becomes δ(ε,ε_0,ε_1,ε_2) = min( min(O(n),O(n/√(d)),)d/n^2ε_1^2,. .min(O(d^-1),O(ε_2^-1d^-1),O(d^-3/2) ) d/nε_2^2 ), and we can ignore the terms of | u_g^ε_0|. We can prove convergence of the deterministic of our algorithm (i.e., M_k = I) by the same argument in Section <ref>. However, the iteration complexity becomes O(max (max(ε_1^-2, ε_2^-2), (| u_g^ε_0|)^-1max(ε_1^-2, ε_2^-2) ) ) and we cannot ignore u_g^ε_0 terms. When calculating the gradient ∇ f(x_k) is difficult, the time complexity of the deterministic version to reach an approximate KKT point becomes O(n) × O(max (max(ε_1^-2, ε_2^-2), (| u_g^ε_0|)^-1max(ε_1^-2, ε_2^-2) ) ), which is worse than ours with randomness (O(d) × O(max(n/√(d)ε_1^-2,n√(d)ε_2^-2))). The computational complexity per iteration of the proposed method is O(dn|𝒜_k| + m(T_grad + T_value) + mT_valueN_while). N_while denotes the number of executions of the while-loop to satisfy feasibility and N_while is O(|log√(d)/n|) or O(logd). The first and second terms come from calculating M_k^⊤ G_k and the active set, respectively. The last term comes from the while-loop. From Propositions <ref> and <ref>, if hβ^k ≤min(O(n/√(d)),O(√(n^3/d)| u_g^ε_0|)) with the direction of (<ref>) or hβ^k ≤min( O(ε_0^-1d^-1), O(d^-3), O(√(n)| u_g^ε_0|/dε_2)) with the direction of (<ref>) is satisfied, the while-loop will terminate. Hence, we can find a feasible solution within at least O(|log√(d)/n|) or O(logd) steps. § NUMERICAL EXPERIMENTS In this section, we provide results for test problems and machine-learning problems using synthetic and real-world data. 
For comparison, we selected projected gradient descent (PGD) and the gradient projection method (GPM) <cit.>. Furthermore, we compared our method with a deterministic version constructed by setting M_k to an identity matrix, which is the same as in GPM when setting ε_0=0 of 𝒜_k for all k. All programs were coded in python 3.8 and run on a machine with Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz and Nvidia (R) Tesla (R) V100 SXM2 16GB. §.§ Linear constrained problems §.§.§ Nonconvex quadratic objective function We applied Algorithm <ref> to a nonconvex quadratic function under box constraints: min_x1/2x^⊤ Q x + b^⊤ x, s.t. -1≤ x ≤ 1. We used Q∈ℝ^1000× 1000,b∈ℝ^1000, whose entries were sampled from , and thus, Q was not a positive semi-definite matrix. We set the parameters ε_0, δ_1, ε_2, β, and the reduced dimension d as follows: ε_0 = 10^-6, δ_1 = 10^-4, ε_2 = 10^-6, β = 0.8, d= 1000. For M_k = 1/nP_k^⊤, we set the step size as h ∈{10^2n/L,10^1n/L,n/L,10^-1/L}. For M_k = I, we set the step sizes as h ∈{10^2/L,10^1/L,1/L,10^-1/L}. L denotes the maximum eigenvalue of Q. As shown in Table <ref>, our method with random projections worked better than the deterministic version. This is because our method could move randomly when there are many stationary points. This result indicates that our randomized subspace algorithm tends to explore a wider space than the deterministic version. PGD converges to a stationary point faster than our method does, while our method obtains better solutions achieving smaller function values than those of the PGD in most cases for some choices of random matrices. §.§.§ Non-negative matrix completion We applied Algorithm <ref> to optimization problems with realistic data. We used the MovieLens 100k dataset and solved the non-negative matrix completion problem: min_U,V𝒫_Ω(X) - 𝒫_Ω(UV^⊤) ^2, s.t. U≥ 0, V≥ 0. Here, X∈ R^943× 1682 is a data matrix, and U ∈ℝ^943× 5 and V∈ℝ^1682× 5 are decision variables. We set Ω⊂{(i,j) | i=1,…,943, j=1…,1682} and defined 𝒫_Ω as (𝒫_Ω(X))_ij = {[ X_ij ((i,j)∈Ω),; 0 (otherwise). ]. We set the parameters ε_0, δ_1, ε_2, β, and reduced dimension d as follows: ε_0 = 10^-4, δ_1 = 10^-5, ε_2 = 10^-5, β = 0.8, d = 600. For M_k = 1/nP_k^⊤, we set the step size as h ∈{10^2n,10^1n,n,10^-1n,10^-2n}. For M_k = I, we set the step size as h ∈{10^2,10^1,1,10^-1,10^-2}. As shown in Table <ref>, our method with randomness obtained the best result. It converged to a different point from the point found by the other methods; that made its computation time longer. §.§ Nonlinear constrained problems §.§.§ Neural network with constraints We applied Algorithm <ref> to a high-dimensional optimization having nonlinear constraint(s) as a regularizer ℛ of (<ref>). We used three different three-layer neural networks with cross entropy loss functions ℒ (x) := ∑_i=1^48000ℓ_i(x) for the MNIST dataset, in which the dimension of x is 669706; * (a) neural network with l_1-regularizer x_1 ≤ 12000 and sigmoid activation function, * (b) neural network with the same l_1-regularizer with (a) and ReLu activation function, and * (c) neural network with fused lasso <cit.> x_1 ≤ 12000, ∑_i=2^669706| x_i-x_i-1|≤ 14600 and sigmoid activation function. We set the parameters ε_0, δ_1, ε_2, β, μ_k as follows: ε_0 = 10^-1, δ_1 = 10^-8, ε_2 = 10^-5, β = 0.8, μ_k = r/√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)s_k) (r ∈{0.5, 0.1, 0.05}). For M_k = 1/nP_k^⊤, we set the step size as h ∈{10^4n,10^3n,10^2n,10^1n,n,10^-1n}. For M_k = I, we set the step size as h ∈{10^4.10^3,10^2,10,1,10^-1}. 
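To make the constraint handling in settings (a)-(c) concrete, the sketch below evaluates the l_1 and fused-lasso constraints in the form g_i(x) ≤ 0 used throughout the analysis; the function names, the NumPy implementation, and the random stand-in for the network weights are illustrative choices and not part of our experimental code.

import numpy as np

# l_1-ball constraint ||x||_1 <= 12000 used in settings (a)-(c), written as g(x) <= 0.
def g_l1(x, radius=12000.0):
    return np.sum(np.abs(x)) - radius

# Additional fused-lasso budget sum_i |x_i - x_{i-1}| <= 14600 used in setting (c).
def g_fused(x, budget=14600.0):
    return np.sum(np.abs(np.diff(x))) - budget

# Feasibility check for a candidate iterate; the random vector is only a
# stand-in for the 669706-dimensional network weights.
x = 0.01 * np.random.randn(669706)
feasible = (g_l1(x) <= 0.0) and (g_fused(x) <= 0.0)
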
We also used the dynamic barrier method <cit.> for the l_1 regularizer and fused lasso problems, and PGD for the l_1 regularizer problems. PGD performed well when the projection onto the constraints could be calculated easily, while the dynamic barrier methods worked well when the number of constraints was equal to one. Therefore, problem settings (a) and (b) are good for these methods. Figure <ref>(a,b) shows that our method with randomness worked as well as the compared methods under l_1-regularization. Furthermore, it performed better than the deterministic versions, although we did not prove convergence in the non-smooth-constraints setting due to the l_1-norm. As shown in Figure <ref>(c) for the non-simple projection setting, our method with randomness outperformed the compared methods. The step size of RSG-NC with M_k = I came close to 0 in order to satisfy feasibility under l_1 regularization. On the other hand, our method with M_k = 1/nP_k^⊤ performed well and the step size did not come close to 0. §.§.§ CNN with constraints Since machine-learning problems often have highly nonconvex complicated objective functions consisting of numerous terms, we can not obtain the full gradient because of limitations on memory. In such a situation, we can use finite difference to calculate gradients. Here, we applied our method to this optimization problem that needs only the projected gradients M_k^⊤∇ f ∈ℝ^d. If the reduced dimension d is much smaller than original dimension n, it would save on time complexity. We optimized the CNN with the cross entropy loss under l_2 regularization, x_2^2≤ 50, on the MNIST dataset. We set the parameters ε_0, δ_1, ε_2, β, μ_k as follows: ε_0 = 10^-6, δ_1 = 10^-8, ε_2 = 10^-4, β = 0.8, μ_k = 1/2√(s_k^⊤ (G_k^⊤ M_k M_k^⊤ G_k)s_k). For M_k = 1/nP_k^⊤, we set the step size as h ∈{10^2n,10^1n,n}. For M_k = I, we set the step size as h ∈{100,10,1}. Figure <ref> shows that our method with random projection performs better than the deterministic version. § CONCLUSION We proposed new methods combining the random projection and gradient projection method. We proved that they globally converge under linear constraints and nonlinear constraints. When the original dimension is large enough, they converge in O(ε^-2). The numerical experiments showed some advantages of random projections as follows. First, our methods with randomness have the potential to obtain better solutions than those of their deterministic versions. Second, under non-smooth constraints, they did not become trapped at the boundary, whereas their deterministic versions did become trapped. Last, our methods performed well when the gradients could not be obtained directly. In the future, we would like to investigate the convergence rate of our algorithms in a non-smooth constraints setting. § COMPLIANCE WITH ETHICAL STANDARDS This work was partially supported by JSPS KAKENHI (23H03351) and JST ERATO (JPMJER1903). There is no conflict of interest in writing the paper.
http://arxiv.org/abs/2307.01400v1
20230703233643
Spatio-Temporal Surrogates for Interaction of a Jet with High Explosives: Part II -- Clustering Extremely High-Dimensional Grid-Based Data
[ "Chandrika Kamath", "Juliette S. Franzman" ]
cs.LG
[ "cs.LG", "cs.NA", "math.NA" ]
Spatio-Temporal Surrogates for Interaction of a Jet with High Explosives: Part II - Clustering Extremely High-Dimensional Grid-Based Data Chandrika Kamath and Juliette S. Franzman Lawrence Livermore National Laboratory 7000 East Avenue, Livermore, CA 94551, USA <kamath2, [email protected]> 9 June 2023 =========================================================================================================================================================================== Building an accurate surrogate model for the spatio-temporal outputs of a computer simulation is a challenging task. A simple approach to improve the accuracy of the surrogate is to cluster the outputs based on similarity and build a separate surrogate model for each cluster. This clustering is relatively straightforward when the output at each time step is of moderate size. However, when the spatial domain is represented by a large number of grid points, numbering in the millions, the clustering of the data becomes more challenging. In this report, we consider output data from simulations of a jet interacting with high explosives. These data are available on spatial domains of different sizes, at grid points that vary in their spatial coordinates, and in a format that distributes the output across multiple files at each time step of the simulation. We first describe how we bring these data into a consistent format prior to clustering. Borrowing the idea of random projections from data mining, we reduce the dimension of our data by a factor of thousand, making it possible to use the iterative k-means method for clustering. We show how we can use the randomness of both the random projections, and the choice of initial centroids in k-means clustering, to determine the number of clusters in our data set. Our approach makes clustering of extremely high dimensional data tractable, generating meaningful cluster assignments for our problem, despite the approximation introduced in the random projections. 0.7 § INTRODUCTION A common task in building surrogates for the spatio-temporal outputs from computer simulations is to transform these outputs into a lower-dimensional space using a linear method such as the principal component analysis (PCA) <cit.>. However, when the data do not lie on a linear manifold, the dimension of this lower-dimensional space can be large, prompting the consideration of non-linear dimension-reduction methods. A simple approach to introducing non-linearity is to cluster the simulation outputs by similarity and then use a linear transform on each cluster separately <cit.>, creating a locally-linear surrogate. However, clustering the outputs becomes challenging when the spatial domains vary across simulations or the spatial data generated at each time step of the simulation is extremely large, composed of values at over a million grid points in the spatial domain. This report describes our experiences with addressing these challenges. We start this report by describing the problem for which we want to build a spatio-temporal surrogate, namely, the interaction of a jet with high explosives, and the two-dimensional outputs that are generated during the simulations of this problem (Section <ref>). These outputs form the data set that we want to cluster. We then discuss the issues that make it challenging to cluster these outputs (Section <ref>) and place our contributions in the context of related work (Section <ref>). 
Our solution approach (Section <ref>) describes how ideas from other domains make it possible to cluster the high-dimensional data. We show how we can select the parameters used in our algorithms to generate meaningful clustering results for our data set (Section <ref>). We conclude this report with a summary of our experiences and the lessons learned (Section <ref>). This report is the second of two reports summarizing our work on building spatio-temporal surrogates for the problem of a jet interacting with high explosives. In the first report <cit.>, we discuss the applications aspect of our work and describe how we can build an accurate surrogate despite the small number of simulations in our data set. One of the approaches to improving the accuracy of the surrogate is by building locally-linear surrogates. This requires clustering of the data, which is the focus of this report. § DESCRIPTION OF THE DATA We illustrate our ideas on clustering extremely high-dimensional data using simulation output from a problem describing the interaction of a jet with high explosives (HE). The domain of the problem is a right cylinder with its axis oriented horizontally as shown on the left in Figure <ref>. There is a steel plate, 1cm thick near the right end of the cylinder, with the LX14 high explosives to the left of the plate. Both the plate and the HE have a fixed radius of 10cm. A copper jet, aligned along the center line of the cylinder, enters the HE from the left. The simulation models what happens as the jet moves through the HE and the plate. The jet is modeled initially as uniform cylinder. It is 10 cm in length with a varying radius. The jet tip velocity is specified as an input parameter; a linearly varying velocity profile is applied to the remainder of the cylinder that represents the jet to approximate a stretching metal jet. As the problem is radially symmetric about the axis of the cylinder, only the two-dimensional region shown by a dotted rectangle on the left, and schematically on the right, is simulated. At each time step, the simulation outputs variables of interest, such as mass and momentum, at different points on a grid in the two-dimensional region. There are three input parameters for the simulation: the radius of the jet, the length of the high explosives to be traversed by the jet, and the tip velocity of the jet in the positive x direction. By running the simulations at select values of these input parameters, and collecting the output at different time steps, we can create a data set that could be used to build a surrogate model for predicting the output at a new set of input parameters and a given time step. We are interested in determining, for example, whether the plate breaks; what is the final position of the plate; and, if the plate breaks, what is the velocity of the tip of the jet as it comes out on the other side of plate. To answer these questions, we need to build an accurate spatio-temporal surrogate, a topic we discuss in the companion report <cit.>, which focuses on the accuracy of the surrogates created using only a small number of simulations. This report focuses on the tasks of processing and clustering of extremely high-dimensional data that are crucial to building this accurate surrogate. To illustrate the instances in our data set, we use four simulations whose parameters are listed in Table <ref>. Figure <ref> shows the output variable, mass, at the first and last time steps for these four example simulations. 
As explained earlier, we have simplified the three-dimensional problem by assuming radial symmetry around the axis of the cylinder, so the output from the simulation is shown as two-dimensional images, with the axis of the cylinder shown at the bottom, that is, at y = 0. The domain extent in x (along the length of the cylinder) varies as the length of the HE varies across simulations; however, the domain in y ranges from 0 to 11cm for all simulations. In Figure <ref> the vertical plate, shown in red, is stationary at time t = 0. To the left of the plate is the HE shown in light blue. The jet is shown in red at the bottom of the domain to the left of the HE; it is quite thin relative to the radius of the cylinder, and is barely visible in the images. As the simulation evolves, the jet moves to the right, through the HE, which expands, pushing the plate to the right. At late time, depending on the simulation input parameters, the plate could * break, with the jet going through the plate and coming out clearly on the other side; * almost break, with the jet either going completely through the plate but barely coming out the other side, or the jet going almost all the way through the plate, leaving it barely connected at the bottom; * not break, with the plate remaining attached, either partially or completely, at the bottom. The plate could have moved from its original position at time t = 0. We used the last two time steps in each simulation to assign one of these three class labels to the simulation. This label was not used in building the surrogate; it was used only to ensure we had a good coverage of the design space. We selected the four example simulations in Figure <ref> to illustrate these three cases. The output at each time step of a simulation consists of the values of variables of interest that are generated at grid points in the two-dimensional rectangular domain. These grid points are on a regular grid, with Δ x = Δ y = 0.0125cm. There are three output variables: mass, x-momentum, and y-momentum; the latter two are shown later in Appendix <ref> and <ref>, respectively. The values of these variables are defined at the center of the square cell formed by four nearby grid points. Thus the data appear as an image, with regularly spaced pixels. However, in general, in a simulation, the grid points need not be on a regular grid; they could form an unstructured grid, as in a finite element mesh, or a locally structured grid, as in an Adaptive Mesh Refinement (AMR) mesh, where the mesh evolves with the simulation. As a result, unlike an image, most output from simulations also includes the (x,y) coordinates of the grid points. In our work, we retain this association of the coordinates with the grid points as they enable us to extract sub-domains of the larger domain for processing. Each simulation is run for a fixed number of time steps, determined as ( ⌊ HE-length/jet-tip-velocity ⌋ + 23 ), with the output generated at each time step. As both HE-length and jet tip velocity vary with the simulation, the number of time steps also varies across simulations. At early time, as the jet starts to move through the HE, there is little of interest in the simulation output. Once the jet is partway through the HE, as indicated by the first term in the equation above, it starts to influence the location of the plate; 23 μsec later, it is expected that we should know the final status of the plate.
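For example, an HE length of 5.0cm and a jet tip velocity of 0.950cm/μs give ⌊ 5.0/0.950 ⌋ + 23 = 5 + 23 = 28 time steps, which, counting the output at the initial time t = 0 as well, is consistent with the 29 snapshots of the baseline simulation discussed below.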
In our work, we consider all the time steps in the analysis; an alternative would be to consider only the later 23 time steps. The data processed in this report was obtained by running 45 simulations at select values of the input parameters. These values, or sample points in the three input dimensions, were generated incrementally using a modified version of the best candidate algorithm <cit.>, which selects samples randomly, but far apart from each other. We started with a small number of sample points and an initial guess at the range of each of the three inputs. We then restricted the ranges to focus on the break cases, and added new sample points until we had a total 45 sample points <cit.>. Our data set, shown in Figure <ref>, indicates that at high jet tip velocity, but low HE-length, the plate breaks, while at low jet tip velocity and high HE-length, the jet is unable to penetrate through the plate. It is clear that our data set is unbalanced as we have 9 samples where the plate does not break, 31 where the plate breaks, and 5 that are almost break. Generating an appropriate data set for a problem like ours is challenging as time constraints limit the number of simulations we can run. In addition, we do not know a priori the range of inputs that will give us sample points with the desired outcome, and the boundary between the classes is poorly defined, making it difficult to generate a balanced data set. We erred on the side of having more break cases as these were of greater interest; the no-break cases also tended to have output that appeared very similar, and we expected that a small number of such cases would suffice. Admittedly, our choice of sample points will affect the clustering results. The output data for a variable at a time step in a simulation is referred to as a snapshot, so named as it is a snapshot of the evolution of the simulation at a particular point in time. These 45 simulations generate a total of 1604 snapshots. The simulation at the extreme corner of the input space, with length, jet tip velocity, and jet radius equal to 5.0cm, 0.950cm/μs, and 0.125cm, respectively, is referred to as the baseline simulation. It has the smallest number of time steps, with 29 snapshots. § CHALLENGES TO THE ANALYSIS There are two main challenges to building a high-quality spatio-temporal surrogate for our problem: * The first is how do we build a surrogate that is accurate when we can only run a small number of simulations? We discuss several options in our companion report <cit.>, one of which is to cluster the snapshots and build a separate surrogate for each cluster. * This leads to our second challenge - how do we cluster the snapshots in our data set? Table <ref> and Figure <ref> indicate that the snapshots are high-dimensional, each with more than two million grid points; the sizes of the snapshots vary across simulations; and the snapshots are not aligned in any way; and the data at each time step are available in multiple files. All these factors would make clustering the snapshots challenging. In addition, popular clustering algorithms, such as k-means, are iterative, which can lead to computational inefficiencies in processing extremely high-dimensional data. The first of these two challenges is the focus of the companion report <cit.>; this report focuses on the second challenge, namely, generating a meaningful clustering of the high-dimensional snapshots from our problem of jet-HE interaction. 
To accomplish this goal, we need to address the two issues discussed next. §.§ The unsuitability of the raw data for clustering The output data generated for each of the 45 simulations, regardless of the size of the domain, are available in 360 files in HDF5 format <cit.> for each time step. Each HDF5 file includes five variables — x and y coordinates, mass, x-momentum, and y-momentum. Within each file, the variables are in natural ordering, that is, ordered by increasing values of the y-coordinate, and for a fixed y-coordinate, ordered by increasing values of the x-coordinate. All simulations are on a regular grid with Δ x = Δ y = 0.0125cm. To apply our analysis algorithms to these data, we first need to re-arrange the data so that we have three snapshot matrices, one for each of the three variables of interest. Each snapshot matrix should have the grid points as the rows, listed in natural order of the (x,y) coordinates, and, as columns, the variable values at each time step of each simulation. A snapshot matrix, X∈^D × N, for any one of the three variables, can then be written as X = [ x_1, x_2, …, x_N ] , where x_i ∈^D, N is the number of snapshots, and D is the number of grid points in a snapshot. Since we have a total of 1604 time steps in the 45 simulations, there are 1604 columns in the snapshot matrix. However, identifying the rows of the snapshot matrix is more challenging. To cluster the snapshots, we first need to bring the data in each simulation to a common grid, which is not straightforward for several reasons: * The domain over which the data are generated is different for each simulation. The y values are in the range [0:11.0] cm for all simulations, but the range of x values varies as the HE-length varies, resulting in snapshots that vary in length across simulations. * Figure <ref> shows that the origin of the domain in all simulations is at the left edge of the HE, with 12 cm of air to the left of the origin. So, if we align the snapshots at the left edge at x = -12.0, the plate locations will not be aligned, even at time step 0, as the HE length varies across simulations. Since the plate forms an important structure in the output data, the lack of alignment of the plate across simulations would not give us the clustering results we expect. * For a simulation, the distribution of the data in the 360 files is the same across all time steps. However, the files within a simulation have four different sizes, as shown in Figure <ref>. But, as the domain size varies across simulations, the sizes of these HDF5 files also vary across simulations. As each row in the snapshot matrix corresponds to the same (x,y) coordinates, it becomes non-trivial to map a point in one of the 360 files at a time step to its location in the snapshot matrix efficiently. * All the simulations have the coordinate (-12,0) as the lower left corner. But, due to the way in which the (x, y) coordinates are generated for domains with different ranges in x, the values of Δ x and Δ y are not exactly 0.0125cm across simulations. While the coordinates in y, which has a fixed range of values, are identical across simulations, this is not the case for the x coordinates. * Finally, the total number of grid points across the 360 HDF5 files for the data at any time step is nearly three million (see Table <ref>). 
A snapshot matrix with such a large number of rows and 1604 columns is too large to be read into memory for processing and we need to consider alternative ways to store the snapshot matrix; this would also affect the analysis algorithms that read in the data. §.§ The iterative nature of k-means clustering algorithm In addition to the challenges posed by the size of the raw data and the storage format, we also need to consider the clustering algorithm used to group the snapshots. We intuitively expect that the snapshots can be clustered because the early time steps, when the plate has barely moved, are distinctly different from the later time steps, where, depending on the input parameters of the simulation, the plate moves but does not break, or the plate moves and breaks. However, as neighboring time steps are often quite similar, any clustering will place some consecutive time steps from a simulation into different clusters, suggesting that any clusters identified are not very well separated. In the absence of any information on the shape of possible clusters or the density of points in the very high-dimensional space, it is not obvious which clustering algorithm is the most suitable one for our data. We chose to start with the simplest clustering method, k-means clustering <cit.>, described in Algorithm <ref>, with the expectation that it might provide insight to guide the choice of a more appropriate algorithm. [htb] k-means clustering We observe that at each iteration, this algorithm requires the calculation of the distance of each snapshot to the centroid of each cluster. As the snapshot matrix is too large to fit into memory, reading in the matrix, piece-meal, on each iteration, will be time consuming. We therefore need alternate ways to make the clustering of the snapshots computationally tractable. § RELATED WORK We address two tasks in this report - converting the data from the HDF5 files into a snapshot matrix and clustering the columns of the high-dimensional snapshot matrix. The first task is specific to each data set and is based on the characteristics of the data. This section therefore focuses on the second task. In the field of spatio-temporal modeling, the problems solved typically have snapshots with grid points numbering in the thousands or tens of thousands, and occasionally hundreds of thousands <cit.>. Clustering these relatively low-dimensional snapshots then becomes a straightforward application of a clustering algorithm, such as k-means or k-mediods, with a suitably defined distance metric, such as Euclidean distance, reconstruction distance, or Grassmann distance <cit.>. To cluster extremely high-dimensional data, where each snapshot has over two million grid points, we borrowed ideas from the field of data mining, where such data sets are common place. A typical solution is to first reduce the dimension and then cluster the data. Ritter and Kohonen in 1989 suggested using random projections <cit.> for dimension reduction, followed by clustering using self-organizing maps, which is another iterative technique like k-means. Using the same combination of methods, Kaski in 1998 <cit.> showed that random projections was faster than PCA for dimension reduction; it required a slightly larger number of reduced dimensions, but the results were equally good, and similar to the results with the original data. 
In later work, Fern and Brodley <cit.> found that the randomness of random projections could result in different cluster assignments and proposed using ensemble clustering <cit.> to generate a stable cluster assignment. They also found that random projections performed better than PCA, and that there was no universal single best way to combine the results of the ensemble <cit.>. More recently, Anderlucci et al. <cit.> have further investigated different ways to combine the different cluster assignments resulting from random projections. Based on this prior work, we selected random projections as the approach for reducing the dimension of the three snapshot matrices in our data set. Our contributions in this work are as follows: * We show how we can generate the snapshots matrices when the data available are on spatial domains with different sizes, at grid points that vary in their (x,y) locations across simulations, and in a format that distributes the output across multiple files at each time step of a simulation. * We discuss how we can apply random projections to extremely high-dimensional data, determine the reduced dimension, and evaluate the results. * We indicate how we can exploit the randomness resulting both from the random projections, and the initial choice of centroids in k-means clustering, to obtain the number of clusters in the data. § SOLUTION APPROACH Our approach to clustering the very-high dimensional data set resulting from the simulations of a jet interacting with high explosives (HE) requires us first to convert the raw data into a format that we can cluster and then to identify a computationally feasible approach to clustering these high-dimensional snapshots. We next describe how we address these tasks. §.§ Pre-processing the data As described in Section <ref>, we have data for 1604 time steps across the 45 simulations. For each simulation, at each time step, the three variables, along with the corresponding (x,y) coordinates of the grid points, are spread across 360 files, each covering a small part of the domain. We need to convert this to a snapshot matrix for each variable, where the matrix has all the grid points, in natural ordering, along the rows, and each column contains the values of the variable at a different snapshot. At first glance, this appears to be just a re-arrangement of data from one set of files to another. But as we described in Section <ref>, we also need to account for the different ranges of x values, the lack of alignment of the plate even at early time, and the inconsistent values for the (x,y) coordinates across the simulations. This will require computation on the data values, in addition to the re-arrangement of values in files. We first decided that instead of storing the final data for each variable in a single snapshot matrix file as in Equation <ref>, we would split the matrix into blocks, with each block stored in a separate file, as follows: X = [ X_b1; X_b2; …; X_bk; ] This would make the processing of the very large number of grid points tractable as we could read the matrix a block at a time. However, all processing would have to be modified to work with the matrix spread across several files. We chose each block to contain all grid points in a specific non-overlapping range of y values. As the grid points are stored in natural order within a block, concatenating the blocks in the order of their y values creates a single snapshot-matrix file with all the grid points stored in natural order. 
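To illustrate this layout, the sketch below splits a snapshot matrix, whose rows are grid points in natural order, into row blocks defined by non-overlapping ranges of the y coordinate; the use of NumPy, the function name, and the equal-width choice of y ranges are illustrative and not a description of our implementation.

import numpy as np

def split_into_y_blocks(X, y_coords, n_blocks):
    # Equal-width, non-overlapping y ranges; each grid point (row of X) is
    # assigned to the block containing its y coordinate.
    edges = np.linspace(y_coords.min(), y_coords.max(), n_blocks + 1)
    block_id = np.clip(np.digitize(y_coords, edges) - 1, 0, n_blocks - 1)
    # Row order is preserved within each block, so stacking the blocks in
    # order of increasing y recovers the matrix in natural ordering.
    return [X[block_id == b, :] for b in range(n_blocks)]

# Each block would be written to its own file; np.vstack(blocks) reassembles X.
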
There are many ways in which we can transform the HDF5 data from 360 files per time step, per simulation, into the three snapshot matrices, one for each variable that represents all the data for the variable across the 45 simulations. The solution we describe next is one we found expedient to implement. We used computationally efficient implementations, exploiting parallelism where possible. As we wanted our codes to remain flexible for processing other data formats, we kept the steps in this multi-step process distinct, though for higher computational efficiency, we could have merged some steps, or executed them in a different order. §.§.§ Creating a data file for each time step in a simulation We started by first processing the raw data for each simulation. For each time step in the simulation, the data are available in 360 files, one for each sub-domain. Each file contains the (x,y) coordinates and all variable values for that sub-domain. In this first step, the 360 files were combined into a single file for each time step in a simulation. This essentially involved reading each file in turn, extracting the coordinates and variable values, and writing out this information, a row at a time, to the single data file. The rows in this file are the grid points in the same order as they appear in the HDF5 file, starting at sub-domain 0 and ending at sub-domain 359. The columns are the (x,y) coordinates and the three variable values. For each time step, we also generated a summary file with statistics on each sub-domain, including the range of x and y values, and the starting location of each sub-domain in the single data file. This file is used to improve the computational efficiency of subsequent steps in the processing of the data. This step is just a re-arrangement of the data and involves reading a set of files, extracting the relevant information, and writing it out. The time steps across the simulations can all be processed in parallel. §.§.§ Aligning and cropping coordinates to a common domain After the first step, we have a large file for all variables at each time step in a simulation, along with an associated summary file. When we consider these files across the 45 simulations, the number of grid points is different as the range of x values is different for each simulation. In addition, as the origin in each simulation is at the start of the HE, the plate is not aligned across simulations at time t=0. To fix this, we first changed the origin of the coordinates in each file to be at the lower right corner of each domain. This shift in x values is the same for all time steps in a simulation, but differs across simulations. This step automatically aligns the plate at time t = 0 across all simulations because the left edge of the 1cm wide plate is 15 cm from the right edge of the domain. Such alignment of the data is common in tasks such as face recognition using the eigenface approach <cit.>, where each image is pre-processed so the face covers the full image, is upright, and centered in the image. In our problem, unlike the face, which is stationary, the plate and the HE move as the simulation progresses. By aligning the plate at initial time we reduce the amount of misalignment of the plate across the snapshots. Next, we cropped the data in each file so that grid points with the new shifted x coordinate outside the range [-32, 0.0] are excluded. This range was selected as it corresponds to the smallest HE length of 5.0cm. 
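A minimal sketch of this align-and-crop step is given below; it assumes the coordinates and variable values for one time step have already been gathered into arrays, and the variable names are ours.

import numpy as np

def align_and_crop(x, y, values, x_right_edge, x_keep=(-32.0, 0.0)):
    # Shift the origin so that the lower right corner of the domain is at
    # x = 0; the shift is the same for all time steps of a simulation.
    x_shifted = x - x_right_edge
    # Keep only grid points whose shifted x coordinate lies in [-32, 0].
    keep = (x_shifted >= x_keep[0]) & (x_shifted <= x_keep[1])
    return x_shifted[keep], y[keep], values[keep]
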
After this step, the data files for all simulations have the same range of x values and the same plate location at initial time. However, the values of the variables are at different (x,y) coordinate locations and the grid points are still in the same order as they were within each sub-domain, with the sub-domains stored in order. These changes make it possible to generate a meaningful clustering of the snapshots; they can be applied in parallel for the time steps across the simulations. §.§.§ Remapping simulations to a common grid Next, we used a simple interpolation scheme to remap the data values for each simulation to a common grid. This remapping consists of two steps. We first defined the common grid by choosing an x-range of [-31.5, -0.5], which is slightly smaller than the range for each simulation to ensure that all remapped values were being interpolated, not extrapolated. The y-range was selected as [0.0063, 10.99]. We kept the grid resolution the same at Δ x = Δ y = 0.0125, resulting in a total of D = 2,180,799 grid points in the common grid. Then, given the fine resolution of the common grid, we used a simple 1-nearest-neighbor algorithm for interpolation, though more complex algorithms are also an option. We created the common grid in a block form, as in Equation <ref>, with each block written to a separate file. A block spans the full range of x values, but a smaller range of y values. We chose bk = 22 blocks, with each block representing approximately 0.5cm of the y-range. This gave approximately 100K grid points in the first 21 blocks, and a smaller number in the last block. Within a block, the grid points were listed in natural ordering so that any data remapped to this common grid would have the rows in the form required by the final snapshot matrix. We generated the remapped data a block at a time. For each time step in a simulation, we extracted a block of data that had a range of (x,y) coordinates slightly larger than the block of the common grid to which we were remapping this data. This ensured that we were interpolating, not extrapolating, to the common grid. For computational efficiency, we used the summary file associated with the time step to identify and process only those sub-domains that had coordinates in the required range. We also observed that for our data, all time steps of a simulation could be remapped together because the data extracted for a block for the different time steps had values at the same (x,y) coordinates in the same order. So, for each block, we first concatenated the columns at different time steps to create a file for each simulation with (2+3*#time steps) columns to store the (x,y) coordinates and the three outputs across all the time steps. Merging these columns is possible as we had maintained the ordering of the data in the sub-domains and a fixed grid was used for all time steps. This re-mapping for the 45 simulations can be performed in parallel. Once we have remapped the data from each simulation to each block of the common grid, we have all data for the 45 simulations in the same order as in the common grid, which simplifies the next steps discussed in Section <ref> This step of remapping has several benefits. It introduces some flexibility in the analysis as we can reduce the data size by mapping to a coarser grid or remapping only the data in a smaller sub-region of the full domain when the rest of the domain is of lesser interest. 
By remapping to the common grid that is in the block form of the final snapshot matrix, we can generate the remapped data for each block in parallel, and with the smaller file sizes, process the data in memory as well. However, we observe that the computational efficiencies in the remapping are the result of the same grid being used across time steps in a simulation. Generating the common grid will be more challenging for AMR grids; the remapping will also be more time consuming as it has to be done separately for each time step. §.§.§ Converting remapped data to snapshot matrices After the remapping step, we have 22 files for each simulation. Each file corresponds to a block, with columns containing all time steps for the three variables, along with the (x,y) coordinates. The remapping ensures that these coordinates are the same across the simulations. To create the final snapshot matrices, we split each block for a simulation into three files, one for each variable, and then merge the columns of these files across simulations. For each variable, this gives the values at 2,180,799 grid points, split across 22 files, with each file having 1604 columns, which is the total number of time steps across the simulations. These snapshot matrices do not include the (x,y) coordinates. Figure <ref> shows the mass variable for the first and last snapshot, for each of our four example simulations, after the raw output data have been aligned, cropped, and remapped. The corresponding images for x-momentum and y-momentum are shown in Figures <ref> and <ref> in Appendix <ref> and <ref>, respectively. §.§ Clustering the snapshots Once the three snapshot matrices, corresponding to the three output variables, are created in block form, we can cluster the columns in the matrices. In Section <ref>, we described how the high dimensionality of our data can be an issue for iterative clustering algorithms as the full snapshot matrix is too large to fit into memory. There are two obvious solutions to this problem of high dimensionality that we discuss next. §.§.§ Using an iterative method with dimension reduction Our first solution is to use the iterative k-means clustering algorithm, but with modifications to account for the high dimensionality of the data. One way to achieve this is by using a distributed-memory version of the algorithm, which operates on data divided into blocks (as in Equation <ref>). We can assign each data block, and the corresponding block of the k-means centroids, to a processor and calculate, in parallel, the partial distance-squared of each snapshot to each centroid. Then a communication step would add the partial distances and identify the closest centroid to each snapshot, followed by an update of each block of the centroids. This approach has limited parallelism equal to the number of blocks and would require implementing a distributed-memory version of the k-means algorithm. Alternately, we could reduce the dimension of the data so that the transformed matrix fits into the memory of a single processor. 
To accomplish this, we used random projections <cit.> which projects the D-dimensional matrix X onto a d-dimensional subspace, with d ≪ D using a random matrix R ∈^d × D X^RP_d × N = R_d × D X_D × N We can justify the use of random projections as the Johnson-Lindenstrauss lemma <cit.> states that for any distortion ϵ, with 0 < ϵ < 1, and any integer N, if d is a positive integer such that d ≥ 4 ( ϵ^2/2 - ϵ^3/3)^-1ln N , then, for any set X of N points in ^D, there is a map, f : ^D→^d such that for all u, v ∈ X, (1-ϵ) u -v ^2 ≤ f(u) - f(v) ^2 ≤ (1+ϵ) u -v ^2 . In other words, points in a sufficiently high-dimensional space can be projected onto a suitable lower dimensional space, while approximately preserving the distances between the points. As the k-means algorithm is based on Euclidean distances, this means we can cluster the randomly-projected snapshot matrix X^RP instead of the original snapshot matrix X and expect to get approximately similar clustering results. To apply the map f in Equation <ref>, in the form of a random projection of our snapshot matrix, X, we need to define the random matrix, R, and determine the new reduced dimension, d. As our snapshot matrix is extremely high dimensional, we prefer a sparse matrix, such as the one proposed by Achlioptas <cit.> with i.i.d. entries as: r_i,j = √(s) 1 with probability 1/2s 0 with probability 1-1/s -1 with probability 1/2s where s = 1 or s = 3, with the former resulting in a dense projection matrix. Li, Hastie, and Church <cit.>, proposed using an even higher value of s = √(D). For our problem, with D = 2,180,799, this choice results in a very sparse matrix, whose sparsity can be exploited for computational efficiency. As the random projection changes the distances between two snapshots by (1 ±ϵ), we can obtain the new dimension d by first choosing a value of ϵ, that is, the distortion we can tolerate, and use N = 1604 in Equation <ref>. For ϵ = 0.1, we get d > 6325, while ϵ = 0.05 gives d ≥ 24431 and ϵ = 0.01 gives d ≥ 594383. It has been observed that values of d less than the ones suggested by Equation <ref> work well in practice <cit.>. We chose d based on both the distortion in distances we can tolerate and the space available to store the randomly-projected snapshot matrix X^RP in memory. We generated X^RP incrementally by reading in each block of the snapshot matrix X a row at a time, generating a column of the sparse random matrix R, and calculating the outer product of the column and row and adding it to X^RP. By not storing either R or a block of X, we can choose a larger value for d as we require storage only for X^RP. Once we have obtained the randomly projected snapshot matrix X^RP of size d × 1604, we can apply the k-means algorithm to cluster the randomly-projected snapshots. The clustering results so obtained could vary based on the randomness of the projection and the random initial choice of the cluster centroids. We discuss these issues, and our choice of parameters for random projections and k-means, further in Section <ref>. §.§.§ Using an iterative method with a reduced representation An alternative that is similar in spirit to the use of random projections, but specific to our approach to building spatio-temporal surrogates, is to exploit an intermediate step in the building of these surrogates. As explained in the companion report <cit.>, we perform a singular value decomposition (SVD) on the snapshot matrix X from Equation <ref> to obtain X = UΣ V^T where X∈^D × N , U∈^D × D , Σ∈^D × N , and V∈^N × N . 
Here, Σ is a diagonal matrix whose non-zero diagonal elements, σ_ii, are the singular values of the matrix X and the columns of the orthonormal matrices U and V are the left and right singular vectors of X. The rows and the columns of the U and V matrices are ordered such that the singular values are in descending order in Σ. Since D >> N, only the top N rows of Σ will have non-zero diagonal elements, assuming rank(X) = N. If the rank, r, is less than N, then only the first r diagonal elements are non-zero. The columns of the matrix U = [ u_1, u_2, …, u_N ] where u_i ∈^D form a basis in ^D for the data, so each snapshot can be written as a linear combination of the u_i: x_i = ∑_k=1^N w_ki u_k where the coefficient w_ki of the k-th basis for the i-th snapshot is w_ki = u_k^T x_i for k = 1, …, N . In building the spatio-temporal surrogate, we consider only the more important u_i corresponding to the larger singular values, and, by truncating the summation in Equation <ref>, we create a reduced representation. However, for the purpose of clustering the snapshots, we can view the reduced representation of the i-th snapshot as the vector of the N weights w_i = [ w_1i, w_2i, …, w_Ni] , and then obtain a clustering of the snapshots by clustering the vectors of weights that define each snapshot. If only the few initial weights are calculated, they can be used as approximate representations of a snapshot. In efforts where we want to compare the quality of a global spatio-temporal surrogate, built using all snapshots, with a set of local surrogates created after clustering the snapshots, we can generate the cluster assignment directly using the weights obtained from the SVD of the full data set (Equations <ref> and <ref>), avoiding any additional computation. However, if we are creating only the local surrogates, this approach would be computationally more expensive than using random projections with sparse matrices (Section <ref>). In this report, we considered the option of clustering the snapshot using the weights obtained from the SVD as it allows us to compare the cluster assignment obtained from an exact representation of the data with that obtained from an approximate representation generated by random projections. §.§.§ Using a non-iterative method An alternative to using an iterative clustering algorithm, which is made tractable using distributed memory processing or a dimension reduction technique, is a clustering algorithm that does not require iterating over the snapshot matrix. One such method is hierarchical clustering <cit.>, a greedy algorithm which, in its agglomerative form, begins with every data point in its own cluster. At each step a linkage criterion is used to measure the similarity between all the clusters, and the two most similar clusters are merged together. This process of merging clusters continues until the desired number of clusters are obtained or until the clusters no longer satisfy a desired level of similarity <cit.>. Hierarchical clustering is usually accompanied by a dendrogram, so this stopping condition can be visualized as making a cut across the dendrogram at the specified similarity threshold to create a set of clusters. Hierarchical clustering requires the following parameters: a distance metric, often Euclidean; a linkage criterion; and the number of clusters. 
[htb] Hierarchical clustering One of the benefits of hierarchical clustering in processing extremely high-dimensional data is that it is not an iterative algorithm, requiring only the pairwise distance matrix between the data points. This allowed us to generate results using the original data set, without the need to create an approximation using random projections. While calculating the pairwise distance matrix is computationally expensive, it can be computed once and then used to test the algorithm with several different parameters. § RESULTS AND DISCUSSION We next present the results of clustering the snapshots for the three variables using three methods: i) random projections followed by k-means, ii) k-means using the weight vectors from SVD, and iii) hierarchical clustering on the original snapshot matrix. We also discuss how we set various parameters in the algorithms to generate a cluster assignment for each snapshot. All codes, unless otherwise stated, were written in C++ and Python. We exploited parallelism where possible through the use of the sub-process capability in Python, but did not explicitly optimize the conversion of the HDF5 files to the snapshot matrices, which was the more time consuming part of our solution. We begin by first taking a closer look at the data to be clustered, now focusing on multiple time steps in two example simulations, key r01_i017 and key r02_i028, after the original output has been pre-processed as discussed in Section <ref>. Figure <ref> shows how the mass variable evolves with time in these two simulations. We again observe that the first and last time steps in the example simulations are very different, suggesting there are at least two clusters in the data. In addition, there is similarity in intermediate time steps, even though one simulation is a break case and the other a no-break case. This suggests that there is inherent clustering in the data, though we expect the clusters will not be well separated as any clustering would assign some neighboring time steps, which are very similar, to different clusters. The corresponding images for x-momentum and y-momentum are shown in Figures <ref> and <ref> in Appendix <ref> and <ref>, respectively. Unlike the mass variable, the intermediate time steps are less similar between the break and no-break case. §.§ Results with random projections and k-means clustering Our approach to clustering using random projections followed by k-means clustering, introduces randomness in the projections and in the choice of initial cluster centers, both of which can influence the assignment of snapshots to clusters. If this influence is large, it can result in uncertainty in cluster assignment. §.§.§ Understanding the effect of the randomness of random projections We first empirically evaluated the effect of the randomness in the projection by repeating the projection of the snapshot matrix twice for a reduced dimension d = 2200. This new dimension gives nearly a 1000-fold reduction from the original dimension of D = 2,180,799. We calculated the original and reduced-dimensional distances between all snapshots and the 29 snapshots of the baseline simulation with HE length, jet tip velocity, and jet radius equal to 5.0cm, 0.950cm/μs, and 0.125cm, respectively. This simulation is at an extreme corner of the input space. The results for the mass variable are shown in Figure <ref>; the corresponding figures for the x- and y-momentum are shown in Figures <ref> and <ref> in Appendix <ref> and <ref>, respectively. 
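The comparison above can be reproduced in outline with the sparse random matrix of Li, Hastie, and Church with s = √D. The sketch below uses small synthetic data and a dense NumPy matrix for readability, whereas our implementation streams over the blocks of the snapshot matrix and never stores R; a 1/√d normalization is included here so that the reduced-dimensional distances are directly comparable to the original ones.

import numpy as np

rng = np.random.default_rng(0)

def sparse_rp_matrix(d, D, s):
    # Entries are sqrt(s) * {+1, 0, -1} with probabilities 1/(2s), 1 - 1/s, 1/(2s).
    p = [1.0 / (2.0 * s), 1.0 - 1.0 / s, 1.0 / (2.0 * s)]
    return np.sqrt(s) * rng.choice([1.0, 0.0, -1.0], size=(d, D), p=p)

# Small synthetic stand-in for the snapshot matrix (D grid points, N snapshots).
D, N, d = 10000, 200, 400
X = rng.standard_normal((D, N))
R = sparse_rp_matrix(d, D, s=np.sqrt(D))
X_rp = (R @ X) / np.sqrt(d)   # normalization so that distances are comparable

# Distances of every snapshot to a reference snapshot, before and after projection.
ref = 0
orig_dist = np.linalg.norm(X - X[:, [ref]], axis=0)
rp_dist = np.linalg.norm(X_rp - X_rp[:, [ref]], axis=0)
distortion = np.abs(rp_dist[1:] / orig_dist[1:] - 1.0)

For the actual snapshot matrix, the projection is accumulated a row of X and a column of R at a time, as described earlier, so that only the d × N result is held in memory.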
These figures indicate that for our data set, the randomness of the random projections has little effect on the distances between snapshots, unlike the results of Fern and Brodley <cit.>, who found random projections to be unstable for clustering. Though we present the results only for two repetitions, we observed similar behavior in multiple repetitions. In addition, as has been observed by others <cit.>, we can use a smaller reduced dimension than the one suggested by the Johnson-Lindenstrauss lemma (Equation <ref>) for a specific value of the distortion ϵ. Our choice of d = 2200, which is much smaller than the dimension of 6325 suggested by the lemma for ϵ = 0.1, results in a distortion less than 0.05 for most snapshots. §.§.§ Exploiting the randomness of k-means to determine number of clusters Next we applied the k-means algorithm to the randomly-projected snapshot matrix. As the results depend on the random choice of the initial centroids, we used ensemble clustering, where we repeat the algorithm to generate results for 10 different initial cluster centroids. In our implementation of k-means (Algorithm <ref>), we set niter, the maximum number of iterations to 100 and thresh, the maximum distance moved by any centroid between iterations to be 0.0. For our data, the latter condition is satisfied before 100 iterations. Another parameter we need to select is the number of clusters, nc. A typical approach used in building spatio-temporal surrogates is to generate results for different values of nc, and select the best based on some metric. This metric can be the quality of reconstruction of test snapshots <cit.>, which is computationally expensive for a large data set like ours, or silhouette analysis <cit.>, which is useful when the clusters are compact and clearly separated <cit.>, which is not the case for our data. In practice, the number of clusters is a trade-off between larger values that give a better approximation of the non-linear manifold, and smaller values that make the class assignment of the snapshots more stable. To determine the number of clusters, we first tried the iterative consensus clustering (ICC) method <cit.>, which had its own parameters that were difficult to set. It required generating results with a range of values for nc and combining them into a consensus matrix, which identifies the number of times two snapshots are clustered together. We found the results to be sensitive to the range of values of nc; they also did not give a clear indication of the number of clusters inherent in the data. In addition, once the number of clusters had been identified, the method did not provide a way to obtain a cluster assignment for the snapshots. However, we realized that a consensus matrix could be exploited to understand the sensitivity of the clustering results to both the number of clusters and the randomness of the random projections, and, in addition, to identify the number of clusters. For our data set with 1604 snapshots, we were interested in moderate- to large-sized clusters where possible. We ran the k-means ensemble clustering for small values of nc = 3, 4, 5, and 6, with 10 repetitions each, changing the initial centroids each time. Unlike the ICC method that generated a single consensus matrix combining all these results, we generated a separate consensus matrix for each nc and for each of the two repetitions of the random projections (referred to as a and b). 
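A minimal sketch of how one of these consensus matrices can be assembled is given below; for brevity it uses scikit-learn's KMeans with random initial centroids in place of our own k-means implementation, and the rows of the input are the (randomly projected) snapshots.

import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(data, nc, n_runs=10):
    # data has one row per snapshot, e.g., the transpose of the randomly
    # projected snapshot matrix; entry (i, j) of the result is the fraction
    # of runs in which snapshots i and j fall in the same cluster.
    n = data.shape[0]
    C = np.zeros((n, n))
    for run in range(n_runs):
        labels = KMeans(n_clusters=nc, init="random", n_init=1,
                        random_state=run).fit_predict(data)
        C += (labels[:, None] == labels[None, :])
    return C / n_runs

Snapshots whose pairwise entries are always 1.0 can then be grouped by the transitive sweep over the rows described next.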
Our consensus matrices represented the fraction of times two snapshots were in the same cluster; with matrix entries between 0.0 and 1.0, it became easier to set parameters for subsequent processing. Using the mass variable as an example, we next describe how we analyze the consensus matrices to identify the number of clusters and the cluster assignment for the snapshots. Table <ref> shows the statistics on the eight consensus matrices. For nc = 3 and 4, most of the values are either 0.0 (snapshots never in same cluster) or 1.0 (snapshots always in same cluster). However, as the number of clusters increases to nc = 5 and 6, there is a range of values, indicating that some snapshots are clustered together only occasionally. This suggests that for larger nc, the results are sensitive to the initial cluster centroids. Further, for a given value of nc, there is less difference between the results of the two random projections when nc = 3 and 4, than when nc = 5 and 6, that is, the clusters are more stable for lower nc. This last observation does not necessarily hold for the x- and y-momentum variables as shown in Tables <ref> and <ref> in Appendix <ref> and <ref>, respectively. Figure <ref> shows a small 108 × 108 subset of the consensus matrices for (nc=3, a) and (nc=5, a) that further highlight the differences between the small and large values of nc. The subsets show the first three simulations with 29, 38, and 41 snapshots, respectively, in order of the time steps, starting with the first time step of the first simulation, and ending at the 41-st time step of the third simulation. As described in the caption, we can visually identify which snapshots in which simulations are in the same cluster. This observation gives us a simple approach to cluster assignment for the snapshots. We start with the first row of a consensus matrix, and put all snapshots with a value 1.0 in this row into one cluster. We repeat with the rows corresponding to each of these snapshots, and so on, until we have identified all snapshots in the first cluster. Then, we identify a snapshot not yet assigned to a cluster, and repeat to create the second cluster. And so on. This approach gives us the class assignment for the snapshots. However, we found that if we only combine snapshots that are always grouped together (that is, matrix entries with value 1.0), we will identify more clusters than nc because a snapshot that occurs together with another only nine times out of ten (with a value of 0.9 in the matrix) would be in a different cluster. We observed this behavior for both (nc=3, a) and (nc=5, a) as shown in Figure <ref> using the full consensus matrix with the snapshots reordered by the cluster number. For (nc=3, a), we have five clusters, the three clearly seen in the figure and two small ones with 4 and 2 snapshots that appear at the end. In contrast, (nc=5, a) has nine clusters instead of 5, as indicated by the diagonal blocks with values equal to 1.0. If we identified the clusters using matrix entries greater than a threshold, say 0.7, the results were sensitive to the threshold. We also had to address the issue of entries with values equal to 0.5, which occurs when neighboring time steps are at a cluster boundary and assigned to one cluster or the other half the time. To understand how the time steps for the 45 simulations are distributed across these 5 and 9 clusters, Figure <ref> shows the cluster assignment for each snapshot as a function of HE length. 
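The grouping procedure described above, where snapshots that are always clustered together (consensus entries equal to 1.0) are placed in one cluster, amounts to finding connected components of a graph whose edges are the matrix entries at or above a threshold. A minimal sketch, assuming the consensus matrix C is a square NumPy array, follows; the threshold argument is included only to illustrate the thresholded variant discussed above.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def assign_from_consensus(C, threshold=1.0):
    """Group snapshots whose pairwise consensus value meets the threshold.
    Returns the number of groups and one label per snapshot; the number of
    groups may exceed nc, as observed for (nc=3, a) and (nc=5, a)."""
    adjacency = csr_matrix(C >= threshold)
    n_groups, labels = connected_components(adjacency, directed=False)
    return n_groups, labels
```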
For (nc=3, a), there are two small clusters, with 4 and 2 snapshots, that are at the time step boundary between two clusters and could be assigned to either. For (nc=5, a), there are several small clusters that are connected to other clusters either weakly or strongly, with matrix values 0.3 and 0.7, respectively, based on Table <ref>. An option in this specific case is to use the reordered consensus matrix to merge a small cluster with another if they are strongly connected. However, when the consensus matrix entries take values closer to 0.5, it becomes less clear which clusters should be merged and why. In addition, when a small group of consecutive (in time) snapshots in a simulation spans 4 or 5 clusters, as shown in Figure <ref>, for (nc=5, a) at large values of HE length, it is not clear whether these snapshots should remain with their original large cluster, or be merged into the cluster of neighboring (in time) snapshots. We suspect that this poor cluster assignment is the result of too large a value of nc and/or the clustering of snapshots being more sensitive to the initial choice of centroids for larger nc, resulting in values in the consensus matrix other than 0.0, 0.5, or 1.0. This analysis suggests that we should select the number of clusters that gives more repeatable results in the ensemble as it reflects a more stable clustering. Using this criterion, we chose the cluster assignment identified by (nc = 3, b) for the mass variable as it has only 0.0 and 1.0 in the consensus matrix; the result is shown in Figure <ref>. The results for the x- and y-momentum variables, using random projections and k-means clustering, are shown in Appendix <ref> and <ref>, respectively, and include the distribution of values in the consensus matrix (Tables <ref> and <ref>), along with the cluster assignment (Figures <ref> and <ref>). These results indicate 4 clusters for the x- and y-momentum variables. We discuss these results further in Section <ref>. §.§ Results with k-means clustering using weights from SVD Next, we applied k-means to the weights that formed the reduced representation of each snapshot obtained using the SVD of the full snapshot matrix as described in Section <ref>. We used the number of clusters that were identified using random projections with k-means (Section <ref>) for the three variables. As a sanity check, we repeated the clustering ten times for each variable and confirmed that the clustering results presented in Figures <ref>, <ref>, and <ref> for the mass, x- and y- momentum, respectively, were stable. These results are discussed further in Section <ref>. §.§ Results with hierarchical clustering We used the implementation of hierarchical clustering available in the SciPy Python library <cit.>, version 1.10.1. This method requires three parameters: the number of clusters, a linkage criterion, and a distance metric between two snapshots. We considered both three and four clusters for each of the three variables as these were the most likely values for nc identified using the consensus matrix analysis (Section <ref>). We wanted to evaluate whether hierarchical clustering on the original matrix X would give different results from the k-means method. To allow experimentation with different parameters, we precomputed the pairwise distance matrix using Euclidean distances. We used a serial implementation for the distance calculation, though the block form of the snapshot matrix (Equation <ref>) could be exploited for parallel computation.
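The SciPy workflow just outlined can be summarized in a short sketch; the function and parameter names below follow SciPy, but the values are illustrative rather than the exact settings used in our runs.

```python
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def hierarchical_labels(X, nc=3, method="ward"):
    """Cluster snapshots (rows of X) using a precomputed pairwise distance matrix."""
    d = pdist(X, metric="euclidean")   # condensed distance matrix, computed once and reused
    Z = linkage(d, method=method)      # method: "single", "complete", "average", or "ward"
    return fcluster(Z, t=nc, criterion="maxclust")
```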
As there is no randomness in hierarchical clustering, we ran the method once. SciPy provides several linkage criteria that define the similarity (or dissimilarity) between clusters; at each step, the two clusters that are the most similar, or least dissimilar, are merged. For the single, complete, and average linkage criteria, the dissimilarity between two clusters is defined as the minimum, maximum, and average Euclidean distance, respectively, between two snapshots, one in each of the two clusters. The Ward linkage defines the dissimilarity between two clusters as the increase in the error sum of squares (ESS) when the two clusters are merged. If x_i is a snapshot in a cluster with n snapshots, whose mean is x̅, then ESS of a cluster is defined as: ESS = Σ_i=1^n ( x_i - x̅)^2 . Note the similarity between the definition of Ward linkage and the k-means algorithm, where each snapshot is assigned to the closest centroid. We evaluated the different linkage criteria using two quality metrics. First, we wanted the clusters to be of moderate size, especially the cluster at late time that was of interest in our problem. Second, as neighboring time steps in a simulation are similar, we wanted all the snapshots in each simulation that are assigned to a cluster to be at contiguous time steps. We observed the following: * As neighboring time steps in a simulation are similar, we expected the single linkage criterion would place all or most snapshots within a single cluster. We found this to be true for all three variables. * The complete linkage criterion on the mass variable created a small cluster of late time snapshots, but only for the simulations with moderate to large HE length, excluding similar snapshots at late time steps for small HE length. For the x-momentum variable, complete linkage split some clusters across non-contiguous time steps in a simulation, while results for the y-momentum variable were acceptable. * Average linkage for the mass variable gave results similar to those with complete linkage, but performed poorly for the other two variables. For the x-momentum variable, with nc = 3, it created a single cluster composed of the late time snapshots in two simulations at small HE length; for nc = 4, this cluster was split into two, one for each simulation. For the y-momentum variable, with nc = 3, some simulations had all snapshots assigned to one cluster, while for nc = 4, the early and late-time snapshots were merged into a single cluster, while the mid-time snapshots were in different clusters. * Ward linkage generated the most meaningful cluster assignments. These are shown in Figures <ref>, <ref>, and <ref> for the mass, x- and y- momentum variables, respectively, and discussed further in Section <ref>. §.§ Discussion We next discuss the cluster assignment results from the three clustering methods shown in Figures <ref> and <ref> for the mass variable, Figures <ref> and <ref> in Appendix <ref> for the x-momentum variable, and Figures <ref> and <ref> in Appendix <ref> for the y-momentum variable. These figures also include the mean snapshots for each cluster generated using the cluster assignment from the k-means algorithm with random projections. These results, and our experiences, indicate the following: * We found that the theory behind random projections and the Johnson-Lindenstrauss lemma works in practice. 
As observed by others <cit.>, it is possible to obtain low distortion in the projected data even though the reduced dimension is smaller than the value recommended for a specific distortion. For our data set, we obtained a thousand-fold reduction in dimension with ≈5% distortion of the data. * We can optimize the calculation of the randomly-projected data by using a very sparse random projection matrix <cit.> and reordering the computation such that the projected matrix is generated incrementally by reading in the data one row at a time and generating the random matrix on the fly, a column at a time. This initial step required for the k-means method takes roughly the same time as the calculation of the distance matrix for the hierarchical clustering, making the two methods competitive. * The results obtained by clustering using k-means and the weights from the SVD of the snapshot matrix are close to those obtained using random projections and k-means. This indicates that the approximation resulting from random projections has little effect on the clustering results. * Selecting a value for nc, the number of clusters, and generating a class assignment for the snapshots, remains challenging. For k-means, we used the randomness in choice of initial centroids and random projections, as well as the structure in the consensus matrix, to identify a stable clustering. For our data, we found smaller values of nc gave more consistent results, but the similarity between snapshots at consecutive time steps resulted in uncertainty in cluster assignments for some snapshots. We could exploit domain information to assign such snapshots in very small clusters to other larger clusters, but this may not be an option if there are too many small clusters and it is not clear which clusters should be merged. For hierarchical clustering, an analysis of the dendrograms did not shed any light on the appropriate number of clusters; we therefore used the number of clusters obtained from random projections and k-means clustering. * Hierarchical clustering is sensitive to the choice of the linkage criterion, which is expected. As has long been observed <cit.>, the Ward linkage produces the best results, generating a meaningful cluster assignment for the time steps in a simulation. This might be expected, given the similarity between the Ward linkage and the k-means algorithm. * The plots of the cluster assignment for the snapshots and the HE length helped to evaluate whether the results are meaningful. As expected, each cluster captures the behavior over contiguous (in time) snapshots, with the early time snapshots in one cluster, the late time in another, and the mid-time snapshots in one or more clusters. This structure in the plots, where each cluster, assigned a different color, appears as a band, is clearer for the mass and y-momentum variables, but less so for the x-momentum, especially with hierarchical clustering, where the banded structure is less well defined (Figure <ref>). * Comparing the clustering results for the three variables, we see that the assignment of snapshots to clusters, as well as the sizes of the clusters, are very different. For the mass variable, the early time cluster is the largest. For the two methods based on k-means, the mid- and late-time clusters are of the same size, while hierarchical clustering gives a smaller mid-time cluster and larger early- and late-time clusters. Each simulation has snapshots in all three clusters. 
This is not the case for the x-momentum variable, where the simulations with small HE length have time steps assigned to just three of the four clusters. This is because the cluster to which the very early time steps are assigned depends on the value of HE length. At early time, the x-momentum variable is dominated by the jet that enters from the left of the domain, moving to the right. Since we cropped the simulations with larger HE length on the left during pre-processing, the jet is not seen in these simulations in the very early time steps, which are therefore assigned to a different cluster. For the y-momentum variable, we observe two unusual aspects of the cluster assignment. First, all clustering methods create a relatively small cluster just after early time; this cluster is wedge shaped, thin for small HE length and growing wider as the HE length increases (Figures <ref> and <ref>). This cluster captures what happens as the wave that emanates from the tip of the jet (Figure <ref>) reaches the top of the domain. Second, when we use the k-means method, a small number of snapshots at low values of HE length, which appear to be part of the wedge-shaped cluster, are in fact assigned to the late time cluster, which is now split across time steps. This result was consistent across multiple runs of the k-means ensemble, for clustering using both the weights from the SVD and the randomly-projected snapshot matrix with runs a and b, though the apparently mis-assigned snapshots varied slightly. We suspect that these snapshots may indeed be closer to the centroid of the late-time cluster, if only by a tiny amount. Changing the cluster assignment of these 13 snapshots to avoid splitting the late-time cluster across time steps resulted in minor changes in the mean snapshots of the corresponding clusters (Figure <ref>); we selected this as the cluster assignment for the y-momentum variable. Note that the hierarchical clustering does not split the late time cluster. * Comparing the different clustering methods, we find that clustering using k-means, with either the random-projected snapshots or the weights from SVD, works quite well. It is a simple method, and though it was proposed several decades ago, it is still an admissible method <cit.>. As it is iterative, issues such as a poor initial choice of centroids, or a snapshot assigned to the wrong cluster, are fixed in later iterations, unlike hierarchical clustering. For our data set, with the high-dimensional snapshot matrix in block form, k-means requires the reduction in dimension through random projection, while hierarchical clustering requires calculation of the distance matrix; both take roughly the same compute time. We were able to use the randomness of the initial centroids in k-means to select the number of clusters, nc, but there was no such option we could use for hierarchical clustering with Ward linkage. § CONCLUSIONS In this report, we used output from simulations of a jet interacting with high explosives to address two challenges in building spatio-temporal surrogates for high-dimensional data. First, the data were available on spatial domains of different sizes, at grid points that varied in their spatial coordinates, and in a format that distributed the output across multiple files at each time step of the simulation. We described how we reorganized these large data sets into a consistent format efficiently, exploiting parallelism where possible. 
Second, to improve the accuracy of the surrogates, we wanted to cluster the data by similarity and build a separate surrogate for each cluster. However, as the outputs are high-dimensional, with the spatial domain represented by more than two million grid points, traditional iterative clustering methods, such as k-means, could not be applied directly. We showed how we could use random projections to make the clustering of these outputs tractable. Our experiences indicated that the approximation introduced by the random projections had little effect on the clustering results. Moreover, we could use the randomness of both the random projections and the initial choice of cluster centroids in k-means to identify the number of clusters. The effectiveness of our approach is discussed further in the companion report <cit.>, where we show how clustering the data by similarity improves the accuracy of the spatio-temporal surrogates created from a small set of simulations. § ACKNOWLEDGMENT We would like to thank the Defense Threat Reduction Agency (DTRA) for funding this work. The simulations of the interaction of the jet with high explosives were performed using the ARES code developed at Lawrence Livermore National Laboratory. LLNL-TR-850159 This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. § APPENDIX: DATA AND RESULTS FOR X-MOMENTUM § APPENDIX: DATA AND RESULTS FOR Y-MOMENTUM
http://arxiv.org/abs/2307.02462v2
20230705173458
Expert-Agnostic Ultrasound Image Quality Assessment using Deep Variational Clustering
[ "Deepak Raina", "Dimitrios Ntentia", "SH Chandrashekhara", "Richard Voyles", "Subir Kumar Saha" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Expert-Agnostic Ultrasound Image Quality Assessment using Deep Variational Clustering
Deepak Raina^12*, Dimitrios Ntentia^23, SH Chandrashekhara^4, Richard Voyles^2, Subir Kumar Saha^1
This work was supported in part by SERB (India) - OVDF Award No. SB/S9/Z-03/2017-VIII; PMRF - IIT Delhi under Ref. F.No.35-5/2017-TS.I:PMRF; Daniel C. Lewis Professorship; Berea College, Kentucky, USA and PU-IUPUI Collaborative Seed Grant.
^1Indian Institute of Technology (IIT), Delhi, India ({deepak.raina, saha}@mech.iitd.ac.in); ^2Purdue University (PU), Indiana, USA ({draina, ntentiad, rvoyles}@purdue.edu); ^3Berea College, Kentucky, USA ([email protected]); ^4All India Institute of Medical Sciences (AIIMS), Delhi, India ([email protected]). ^*Corresponding author is Deepak Raina.
August 1, 2023
Ultrasound imaging is a commonly used modality for several diagnostic and therapeutic procedures. However, the diagnosis by ultrasound relies heavily on the quality of images assessed manually by sonographers, which diminishes the objectivity of the diagnosis and makes it operator-dependent. The supervised learning-based methods for automated quality assessment require manually annotated datasets, which are highly labour-intensive to acquire. These ultrasound images are low in quality and suffer from noisy annotations caused by inter-observer perceptual variations, which hampers learning efficiency. We propose an UnSupervised UltraSound image Quality assessment Network, US2QNet, that eliminates the burden and uncertainty of manual annotations. US2QNet uses the variational autoencoder embedded with the three modules, pre-processing, clustering and post-processing, to jointly enhance, extract, cluster and visualize the quality feature representation of ultrasound images. The pre-processing module uses filtering of images to point the network's attention towards salient quality features, rather than getting distracted by noise. Post-processing is proposed for visualizing the clusters of feature representations in 2D space. We validated the proposed framework for quality assessment of the urinary bladder ultrasound images. The proposed framework achieved 78% accuracy and superior performance to state-of-the-art clustering methods. The project page with source codes is available at https://sites.google.com/view/us2qnet. § INTRODUCTION UltraSound (US) is the most frequently employed medical imaging modality in clinical practice due to its real-time feedback, low-cost, portability, and non-ionizing nature. It is useful for diagnosis, pre- and post-operative assessment, and surgical interventions. However, the diagnosis by ultrasound depends a lot on the image quality, which is determined manually by sonographers during image acquisition <cit.>.
Ultrasound images are quite difficult to interpret due to noise, shadows, poor contrast, and other sensors and motion artifacts <cit.>. The blurred boundaries and presence of multiple anatomical structures further increase the difficulty in ensuring quality during acquisition. Thus, the requirement of minimal manual effort and ensuring consistent quality among novice sonographers during image acquisition has prompted the research community to use automated methods for US Image Quality Assessment (US-IQA). The supervised learning-based methods using a deep Convolutional Neural Network (CNN) have shown promising results in medical imaging analysis, including segmentation and classification using curated and annotated large-scale datasets <cit.>. However, the manual data annotation process is quite difficult for US image quality, as the US images with the same quality have significant differences and images with significant differences in quality appear similar. Moreover, the image quality annotation is noisy due to inter- and intra-observer perceptual variations <cit.>. Thus, the annotation process is often labour-intensive, time-consuming and requires the participation of more than one expert radiologists, thereby making supervised learning quite challenging for automated US-IQA. In this work, we propose an UnSupervised learning-based UltraSound image Quality assessment Network, termed as US2QNet, as shown in Fig. <ref>. The objective of the work is to address the complexity and uncertainty in manual annotation of US image quality by expert radiologists for supervised learning. The key contributions of framework are as follows: * We proposed the first deep clustering framework for US-IQA, which uses a variational autoencoder in conjunction with pre-processing, clustering and post-processing modules. The framework will jointly enhance, extract, cluster and visualize the quality feature representation of US images. * We introduced US imaging-specific pre- and post-processing modules in deep clustering. Pre-processing uses fuzzy filtering to direct the network's attention towards the critical quality features. The post-processing module generates the visualization of high-dimensional quality feature representations in 2D space and provides an effective way to do hyperparameter tuning. * The proposed framework was validated and compared to the state-of-the-art (SOTA) methods for urinary bladder US images. The results revealed that the proposed method is effective for unsupervised US-IQA and outperformed the SOTA methods. The unsupervised deep clustering approach strategy has previously been investigated mostly for natural images <cit.>, however, to the best of the author's knowledge, this is the first attempt to use this approach for the challenging task of classifying the US images based on quality. §.§ Related work Supervised learning for US-IQA: The availability of annotated medical images has motivated the researchers to use supervised training of CNNs for automated analysis of X-ray, Magnetic Resonance Imaging (MRI), and US imaging <cit.>. For US-IQA, Wu et al. <cit.> proposed the two deep CNNs for jointly finding the region of interest and assessing the fetal ultrasound image quality. Lin et al. <cit.> used a multi-task faster regional CNN architecture to first localize the six key anatomical structures of the fetal head and then assigned the quality score based upon their clear visibility. 
However, they employed prior clinical knowledge to determine the relative position of anatomical structures. In addition, the fetal US datasets in <cit.> require substantial clinical expertise for labeling the region of interest and anatomical structures with comparable appearances such as stomach bubble, umbilical vein, choroid plexus, and others. Moreover, the performance of supervised IQA models is impacted by annotation noise due to inter- and intra-observer perceptual variations <cit.>. Unsupervised learning for medical imaging: In order to overcome the challenges of labour-intensive annotation in supervised learning, different techniques of unsupervised learning have been explored for medical imaging analysis <cit.>. These techniques used traditional clustering methods <cit.>, modern methods like Autoencoder (AE), generative networks <cit.>, and Deep Clustering <cit.>. AE is one of the most effective algorithms for unsupervised feature extraction and compression to low-dimensional latent space while minimizing the reconstruction error between the encoded input and decoded output. They have been mostly used for denoising and anomaly detection in medical images <cit.>. In addition, Generative networks like Variational AE (VAE) have also shown great potential for medical image analysis <cit.>. They use variational loss during training and enforce a predefined distribution in the latent space. A work by Nesovic et al. <cit.> attempted to address the US-IQA using AE combined with Random Forest Classifier (RFC). However, RFC required noise-specific hand-crafted features for quality classification, which are challenging to design for noisy US images <cit.>. Deep clustering for image classification: Clustering is another powerful unsupervised method to detect patterns in non-labelled dataset. There exist several classical clustering algorithms like k-means, Principal Component Analysis (PCA), Density-Based Spatial Clustering of Applications with Noise (DBSCAN) <cit.>, Spectral clustering <cit.>, t-distributed Stochastic Neighbor Embedding (t-SNE) <cit.>, Uniform Manifold Approximation and Projection (UMAP) <cit.>. However, they can't handle high-dimensional data. In recent years, deep clustering has gained the researcher's attention for unsupervised image analysis <cit.>. Deep clustering uses classical clustering methods in conjunction with AEs and VAEs to learn the clusterable representation in latent space. This has been used for the categorization of handwritten digits and face images <cit.>, natural image datasets <cit.>, Magnetic Resonance Imaging <cit.> and Computed Tomography images <cit.>. Yu et al. <cit.> proposed deep clustering for classifying thyroid cancer in US images. Notably, the potential of deep clustering has not yet been uncovered for US-IQA, which is quite challenging to analyze due to the noise, artifacts, varying viewpoints, low inter-class and huge intra-class variability <cit.>. § METHODOLOGY Fig. <ref> depicts the overview of the proposed framework. The framework is executed in three stages, which include: (1) Training the VAE on the pre-processed dataset using the reconstruction function as the loss function; (2) Fine-tuning the VAE by jointly optimizing the reconstruction loss and clustering loss for achieving clusterable quality feature representation of US images; and (3) Dimensionality reduction and visualization of learned feature representation using UMAP. Later, we utilized HDBSCAN for assigning labels to the clusters corresponding to the US image quality. 
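The third stage can be summarized with a short sketch. It assumes the umap-learn and hdbscan packages and a matrix Z of latent features from the fine-tuned encoder; the parameter values shown are placeholders for illustration, not the settings used in the experiments.

```python
import umap
import hdbscan

def postprocess(Z, n_components=2, min_cluster_size=25):
    """Reduce latent features Z to 2D with UMAP, then assign quality labels
    with HDBSCAN (a label of -1 marks points treated as noise)."""
    embedding = umap.UMAP(n_components=n_components, random_state=0).fit_transform(Z)
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(embedding)
    return embedding, labels
```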
§.§ Variational Autoencoder VAE is the generative variant of AE, as it enforces the encoder of AE to generate features representing a normal distribution 𝒩(μ, σ) in the latent space. VAEs have shown success in multi-class clustering due to their generative representation capability. The VAE introduces and approximates a latent variable z that follows a prior distribution p(z). The model then infers a posterior distribution q_ϕ(z|x) and an output distribution p_θ(x|z), with an encoder parameterized by ϕ and a decoder parameterized by θ. Therefore, the objective of variational inference is to precisely compute p_θ(x|z) by inferring the latent distribution from the observed dataset. During training, a latent vector z is sampled from the approximate posterior q_ϕ(z|x), and the decoder produces the parameters of p_θ(x|z), from which an output vector x can be reconstructed. The following equation formulates the loss function of the VAE as: L(ϕ, θ; x) = -𝔼_z∼ q_ϕ(z|x) [log p_θ(x|z)] + D_KL(q_ϕ(z|x)||p_θ(z)) where the first term is the reconstruction error between the observed data and the data decoded from the latent vector, and the second term is the Kullback-Leibler (KL) divergence between the approximate posterior q_ϕ(z|x) and the prior p_θ(z). The prior p_θ(z) is often modeled by a single multivariate Gaussian <cit.>. Lin et al. <cit.> proposed using a mixture of Gaussians as prior for unsupervised classification of the CIFAR Dataset and achieved remarkable results. Inspired by their work, we have used a mixture of Gaussians as prior p_θ(z) for the latent space distribution representation to accurately describe the structure of each cluster using a specific Gaussian. Pretraining of VAE: During pre-training, we used L(ϕ, θ; x) as the loss function (denoted as L_V) to optimize the parameters of the VAE as (ϕ^*, θ^*) = arg min_ϕ, θL(ϕ, θ; x) §.§ Deep variational clustering Several algorithms have been proposed for deep clustering as described in Section <ref>. Among them, Deep Embedded Clustering (DEC) <cit.> is one of the most representative methods of deep clustering, which proposed joint optimization of feature representation and clustering. In our framework, we used a VAE, in contrast to the AE in <cit.>, and embedded the DEC module between the encoder and decoder. We use the acronym DEC-VAE in this paper for this clustering approach. The joint loss function is then given by the sum of the VAE loss (L_V) and the clustering loss and is formulated as follows: L=L_V+γL_C where L_C is the clustering loss and γ is a coefficient used to control the degree of distortion in the latent space. Clustering loss: The clustering module works like a validation set, where in each epoch, we used an auxiliary target distribution Y^t to iteratively correct the distribution of predicted label assignments Y^p. To define the clustering loss, we used the KL divergence to decrease the difference between the Y^t and Y^p distributions as: L_C=D_KL(Y^t||Y^p)= ∑_i^N∑_k^K y^t_ik log(y^t_ik/y^p_ik) where y^p_ik denotes the probability of assigning cluster μ_k to latent feature z_i and y^t_ik is the auxiliary target distribution. The y^p_ik is calculated using Student's t-distribution as: y^p_ik=(1+||z_i-μ_k||^2)^-1/∑_k'(1+||z_i-μ_k'||^2)^-1 where the μ_k are the initial cluster centroids, calculated by performing standard k-means clustering. The target distribution y^t_ik is defined by the following equation: y^t_ik=((y^p_ik)^2/f_k)/∑_k'((y^p_ik')^2/f_k'), where f_k=∑_i y^p_ik is the soft cluster frequency. The clustering loss in eq.
(<ref>) is used to correct the weights, representing cluster centers μ_k^K, and pivot the image latent feature z_i from a generative image representation to a representation focused on discriminative quality features in the latent space distribution. We have used three clusters (K=3) in our dataset representing three levels of US image quality. §.§ Post-processing The post-processing module is used to visualize the feature representation in 2D space, which provides an effective way to analyze the model's accuracy. The visualization of clusters also played an important role in the selection and iterative correction of the network hyper-parameters. After fine-tuning the network weights, the latent space feature vector is sampled from learned feature distribution as [z_1 ∼𝒩_1,⋯,z_N ∼𝒩_N]. Then, the latent space feature vector of dimension N is transformed to lower-dimensional space Ñ as [z̃_1,⋯,z̃_Ñ] for efficient clustering of features. Several algorithms exist for dimensionality reduction and clustering analysis like PCA, k-means, t-SNE, UMAP, and HDBSCAN, as discussed in Section <ref>. Our exhaustive study on different clustering approaches revealed that UMAP for dimensionality reduction followed by HDBSCAN for label assignment outperformed all other combinations. The UMAP is chosen to ensure the efficient learning of manifold structure from a high-dimensional latent feature representation. In addition, UMAP returns reproducible results which is not the case in other dimensionality reduction methods like PCA. The capability of HDBSCAN to preserve both global and local density clusters while effectively separating overlapping data makes it a desirable option for label assignment <cit.>. § DATASET Collection: The dataset consists of US images collected during the feasibility study of our in-house developed Telerobotic Ultrasound (TR-US) system <cit.> at All India Institute of Medical Sciences (AIIMS), New Delhi (a public hospital and medical research university in India). The AIIMS Ethics Committee approved the study (Ref. No. IEC-855/04.09.2020,RP-16/2020). The study was conducted between October 2021 and December 2021. Before the scanning operation, informed written consent was obtained from the volunteers, and their privacy was protected by anonymizing their identifiers in the image. The dataset consists of male urinary bladder US images, which is used for discriminating the bladder shape, and identifying diverticula, stones, malignant tumours, and free fluid (blood) during trauma. The dataset is challenging to analyze, having spurious textures and inter- and intra-subject variations. Pre-processing: The deep learning models are quite sensitive to noise in images, therefore ultrasound images, which are often noisy, need to be filtered without affecting the key diagnostic details and edges of anatomical structures. In the pre-processing module, we employed two US imaging-specific techniques for enhancing their quality <cit.>. First, we used fuzzy modeling to distinguish noise from edges and other diagnostic details in the image. We used the fuzzy filter proposed in <cit.>, which effectively balances the degree of noise removal and edge preservation. It used the fuzzy logic-based uncertainty modeling, which acquires the statistical parameters (mean and variance) of the local window 𝕎_l in the image to find its similarity with non-local window 𝕎_nl. 
Then, the degree of similarity of the similar local and non-local regions is computed using the Euclidean distance as: d_s = ||𝕎_l - 𝕎_nl|| Finally, the noise-free pixel of image p̃_i is estimated using the fuzzy centroid technique and is given as: p̃_i = (1/∑_j=1^n w_j)∑_j=1^n(p_j × w_j) where n is the number of similar local and non-local regions, p_j is the value of the central pixel of window 𝕎_j and w_j is the weight assigned to the most similar pixel, calculated using a Gaussian-shaped fuzzy function with zero mean (μ=0) and noise variance (σ^2) as: w_j(d_s; σ, μ) = exp(-(d_s-μ)^2/(2σ^2)) The fuzzy filtering will enable the network to extract essential quality-related features rather than being distracted by the noise in the US images. Second, we used the sharpening filter, which emphasizes the boundaries of anatomical structures by increasing the contrast between the bright and dark regions of the US image <cit.>, thereby enhancing the overall appearance of the image. We used a 4-neighbor Laplacian filter: [[0,-1,0], [-1,5,-1], [0,-1,0]], to sharpen the US images. Finally, horizontal flipping is also employed to augment the dataset and prevent the over-fitting of the model on a small US dataset. This will mirror the anatomical structures in the US images, as they can appear on both sides of the transducer cone during scanning. Testing dataset: The dataset consisted of a total of 1833 US images. We randomly split the dataset into training and testing sets with a 90:10 ratio. We used the labelled testing dataset, as shown in Fig. <ref>, to validate the proposed framework. The labelling criteria are adopted from the internationally prescribed 5-level quality assessment scale <cit.>. Three radiologists with more than 15 years of experience in abdominal radiology labeled the dataset. The score assesses the image based on the diagnostic quality, which depends on structural shape/size/placement and acceptability of artifacts. We measured inter-rater reliability with Intra-Class Correlation (ICC) <cit.> and obtained a high value of 0.965, indicating excellent agreement. However, the supervised US-IQA model in <cit.> achieved only 75.3%, 77.2%, and 76.3% accuracy with respect to three experts, while accuracy for the average rating of three experts is 87.3%. These results reveal that inter- and intra-observer perceptual variations pose challenges for supervised US-IQA models. Further, we used Silhouette Coefficient (SC) analysis <cit.> on the dataset to choose an optimal value for the number of clusters. For 3 clusters, the SC value was 0.7628, while for 4 and 5 clusters, it was 0.6381 and 0.5829, respectively. Therefore, we used three quality labels for the testing set as Below Average (A-), Average (A) and Above Average (A+), where A is collectively used for images rated as 2 and 3; and A+ for images rated as 4 and 5 by experts. § RESULTS AND ANALYSIS §.§ Implementation details The proposed model was implemented using Python 3.8 and Pytorch 1.11. The pretraining, training and testing have been conducted using a Dell Workstation with an Nvidia GeForce RTX 3090 GPU having 24 GB of memory. We used the greyscale images and resized them to 224 × 224, and the training set included our pre-processed dataset. The size of window 𝕎 used for the fuzzy filter is 5 × 5. The weights of the VAE have been learned in the pretraining phase. The latent space dimension N is kept to 80 to extract rich features in the latent space for efficient clustering.
The choice of latent space dimension has been decided iteratively based on the clustering accuracy scores. The batch size used is 128 with a learning rate of 0.01. For the fine-tuning phase, the VAE was trained by embedding the clustering module, as described in Section <ref>. The value of γ used in eq. (<ref>) is 0.1. The batch size used is 64 and the learning rate is 0.005. The model was trained for 200 epochs with an early stopping criterion based on the stagnation of the loss function values during training for 10 epochs. §.§ Evaluation metrics In order to validate the clustering quality of our model, we used four evaluation metrics, namely Adjusted Rand Index (ARI), Adjusted Mutual Information (AMI), Unsupervised Clustering Accuracy (ACC), and Normalized Mutual Information (NMI) <cit.>. The value of these metrics varies from 0 to 1, denoting the accuracy of clustering. §.§ Pre-trained VAE analysis We quantitatively analyzed the reconstruction of US images in the test set by the trained VAE using the Structural Similarity Index Measure (SSIM) and Mean Square Error (MSE), as shown in Table <ref>. In order to analyze the effect of pre-processing on extracting rich features from ultrasound images, we also analyzed the reconstructed images without the pre-processed dataset. The images with quality labels A and A+ showed significant improvement in the scores due to the enhancement of anatomical appearance features, thereby enabling the network to focus on key quality features. The images of quality A- showed only slight improvements in scores due to the absence of anatomical structures in the views. §.§ Clustering performance analysis Results with US Image dataset: We trained the model on the entire US dataset and visualized the clusters for the 183 test images using UMAP for dimensionality reduction followed by HDBSCAN for label assignment. Fig. <ref> demonstrates that the clusters generated after pretraining the VAE have a lot of overlapping points; however, the model progressively gets better after fine-tuning the weights with the clustering module, with the separation of clusters being more evident along with the reduction in overlapping data points. This change becomes quantitatively evident, with a 32.8% increase in ARI, 25.2% in AMI, 32% in NMI and 26.2% in ACC scores from the pretraining stage to training with the clustering module, as shown in Fig. <ref>(a). Results with MNIST dataset: We also demonstrated the effectiveness of the proposed model on the MNIST digit dataset <cit.>. The values of the evaluation metrics are shown in Fig. <ref>(b), which demonstrate similar improvements in clustering as observed for the US dataset. Note that ultrasound-specific pre-processing has not been used for MNIST. §.§ Comparison with State-of-the-art methods In this experiment, we compared our method with five classical clustering methods, including k-means, t-SNE, UMAP, HDBSCAN and PCA with k-means (PCA+k-means), as well as state-of-the-art DEC methods, such as DEC-CAE with k-means, t-SNE and UMAP for dimensionality reduction in post-processing. To compare the DEC-VAE-based method with the AE versions, we used similar versions referred to as DEC-VAE+k-means, DEC-VAE+t-SNE and DEC-VAE+UMAP (US2QNet). The results are shown in Table <ref>. It is noted that our proposed method, US2QNet, obtained the highest performance on both the US and MNIST datasets.
It outperforms classical clustering methods including k-means, t-SNE, UMAP, HDBSCAN and PCA+k-means for the US dataset on the NMI score by 65.1%, 54.9%, 56.9%, 59.9% and 64.0%, respectively. The DEC-VAE with k-means, t-SNE and UMAP reports an increase in NMI score in contrast to its corresponding versions of DEC-AE by 47.9%, 28.4% and 44.9%, respectively. Similar improvements have also been noticed in ARI, AMI, ACC and NMI on MNIST and US dataset, indicating that the proposed method is effective in unsupervised US-IQA. The performance of different clustering methods for the US image dataset is visualized in Fig. <ref>. The results show that the clusters generated by the proposed method (Fig. <ref>(d)) are more evenly distributed and individually closely packed than the other classical and DEC-based approaches. Further, we calculated the ACC values for 4 and 5 clusters, resulting in 0.543 and 0.542 respectively. From this, it can be asserted that the proposed framework is appropriate for 3 clusters. §.§ Ablation study To analyze the contribution of different components of the proposed method in enhancing the clustering of US image quality, we performed the ablation study of our model. The results are reported in Table <ref>. We compared the proposed method with Pre-Processing (PP) only and with Pre-Training (PT) only. The metric used in the plot is ACC. It is evident from the figure that PP and PT modules have contributed to enhancing the performance of the proposed method. Without PT and PP modules, the ACC value is 0.544. The model with only pre-processed dataset without pre-training has an ACC score of 0.569, while using the pre-trained VAE and unprocessed dataset, the model reported an ACC score of 0.643. However, with the introduction of PP along with PT, the score impressively increased to 0.783. §.§ Real-time implementation We have integrated the US2QNet with the ultrasound machine in order to evaluate its real-time efficacy. The US video is captured by the Sonosite M-TURBO ultrasound machine and is transferred to the GPU laptop using Epiphan DVI2USB 3.0 (Epiphan Video, Canada). The manual scanning of the pelvic region is conducted, as shown in Fig. <ref>. It has been found that US2QNet efficiently evaluates the quality of US images per frame in 0.0552 seconds. The sonographer found it to be a useful tool for assisting the novice operator in conducting the procedure, however, they would like the range of quality scoring to be increased as per clinical protocols for precisely guiding the probe maneuvers. § CONCLUSION AND FUTURE WORK We introduced the first-of-its-kind UltraSound Image Quality Assessment (US-IQA) using unsupervised learning. This will alleviate the burden and uncertainty associated with manual annotation of US image quality for supervised learning. The proposed model, US2QNet, leverages the variational autoencoder in conjunction with three modules, pre-processing, clustering and post-processing for effectively enhancing, classifying and visualizing the quality features representations of US images. The validation on urinary bladder US images demonstrated that the proposed framework could generate clusters with 78% accuracy and performs better than state-of-the-art methods. We hope our preliminary results will generate the community's attention towards this efficient but previously ignored topic of unsupervised US-IQA. In future, we would aim for an end-to-end DNN framework without pre-processing. 
Further, we would incorporate an automatic selection of quality clusters rather than fixing them beforehand. Additionally, our future work will demonstrate the applicability of the proposed framework for assisting autonomous robotic ultrasound <cit.>.
http://arxiv.org/abs/2307.00928v1
20230703110240
Learning Differentiable Logic Programs for Abstract Visual Reasoning
[ "Hikaru Shindo", "Viktor Pfanschilling", "Devendra Singh Dhami", "Kristian Kersting" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Learning Differentiable Logic Programs for Abstract Visual Reasoning
Hikaru Shindo^1 ([email protected]), Viktor Pfanschilling^1,4, Devendra Singh Dhami^1,2, Kristian Kersting^1,2,3,4
^1TU Darmstadt, Darmstadt, Germany; ^2Hessian Center for AI (hessian.AI), Darmstadt, Germany; ^3Centre for Cognitive Science, TU Darmstadt, Germany; ^4German Center for Artificial Intelligence (DFKI), Germany
August 1, 2023
Visual reasoning is essential for building intelligent agents that understand the world and perform problem-solving beyond perception. Differentiable forward reasoning has been developed to integrate reasoning with gradient-based machine learning paradigms. However, due to the memory intensity, most existing approaches do not bring the best of the expressivity of first-order logic, excluding a crucial ability to solve abstract visual reasoning, where agents need to perform reasoning by using analogies on abstract concepts in different scenarios. To overcome this problem, we propose NEUro-symbolic Message-pAssiNg reasoNer (NEUMANN), which is a graph-based differentiable forward reasoner, passing messages in a memory-efficient manner and handling structured programs with functors. Moreover, we propose a computationally-efficient structure learning algorithm to perform explanatory program induction on complex visual scenes. To evaluate, in addition to conventional visual reasoning tasks, we propose a new task, visual reasoning behind-the-scenes, where agents need to learn abstract programs and then answer queries by imagining scenes that are not observed. We empirically demonstrate that NEUMANN solves visual reasoning tasks efficiently, outperforming neural, symbolic, and neuro-symbolic baselines. § INTRODUCTION Deep Neural Networks (DNNs) are attracting considerable interest due to their significant performance in crucial tasks in Artificial Intelligence <cit.> such as image recognition <cit.>, game playing <cit.>, protein-structure prediction <cit.>, and language modeling <cit.>, to name a few. DNNs are essentially data-driven: they perform pattern recognition statistically given data and perform prediction on new examples. However, a critical gap exists between human intelligence and the current data-driven machine-learning paradigm. Humans can explain and understand what they see, imagine things they could see but have not yet, and perform planning to solve problems <cit.>. Moreover, humans can learn from a small number of experiences <cit.>, but DNNs such as transformers <cit.> require a large dataset to achieve good performance on a specific task <cit.>. These essential intelligent aspects of humans, called model building <cit.>, are vital for human-level intelligence. Logic has been a fundamental element of AI for providing knowledge representations and reasoning capabilities <cit.>. Inductive Logic Programming (ILP) <cit.> is a framework to learn logic programs given examples. In stark contrast to DNNs, ILP gains some crucial advantages: it can learn from small data, and it can learn explicit programs, which are interpretable by humans. Recently, Differentiable ILP (∂ILP) has been proposed <cit.>, where gradient-based learning of logic programs is performed. In ∂ILP, forward reasoning, which derives all possible consequences given logic programs, is implemented using only differentiable operations by encoding logic programs into tensors.
Thus it can be easily combined with DNNs for the perception and perform ILP on visual inputs. However, tensor-based differentiable forward reasoning is memory-intensive. Thus it assumes that logic programs to be handled are simple, each predicate takes at most two arguments, each clause has at most two body atoms, and no functors are allowed. ∂ILP-ST <cit.> has been developed to deal with structured logic programs with functors in ∂ILP, leading to αILP <cit.>, which can learn classification rules on complex visual scenes. They address the memory-consumption problem by performing a beam-search over clauses instead of generating all possible clauses by templates. However, performing a beam search is computationally expensive because every candidate of clauses needs to be evaluated in each step. Thus it takes longer to complete when handling complex programs and does not scale for more challenging tasks where agents play multiple roles, understanding visual scenes, learning abstract operations and solving queries by abstract reasoning. To mitigate this issue, we develop a memory-efficient differentiable forward reasoner and a computationally efficient learning strategy. We propose NEUro-symbolic Message-pAssiNg reasoNer (NEUMANN), a graph-based approach for differentiable forward reasoning, sending messages in a memory-efficient manner. We first introduce a new graph-based representation of logic programs in first-order logic and then perform differentiable reasoning via message passing. The graph structure efficiently encodes the reasoning process by connecting logical atoms. Then, we propose a computationally-efficient learning algorithm for NEUMANN by combining gradient-based scoring and differentiable sampling. Instead of scoring each clause exactly to perform a beam search, NEUMANN computes gradients over candidate clauses for a classification loss and uses them as approximated scores to generate new clauses. By doing so, NEUMANN avoids nested scoring loops over clauses, which has been a computational bottleneck of the beam-search approach. The memory-efficient reasoning and computationally-efficient learning enable NEUMANN to solve abstract visual reasoning, where the agent needs to perform reasoning by using analogies on abstract concepts in different scenarios. To evaluate this, we propose a new task, Visual Reasoning Behind the Scenes, where the agent needs to perform complex visual reasoning imagining scenes that are not observed. Fig. <ref> illustrates a Behind-the-Scenes task whose goal is to compute the answer of a query, “What is the color of the second left-most object after deleting a gray object?" given a visual scene. In turn, it consists of two sub-tasks. The first is to induce abstract programs from visual scenes, deletion of objects, as shown in the left of Fig. <ref>. The second is to solve the queries where the answers are derived by reasoning about non-observational scenes. To solve, the agent needs to learn abstract operations from visual input and perform efficient reasoning. The task assesses the following four essential model-building capacities: (1) learning from a small number of examples, (2) understanding complex visual scenes deeply, (3) learning explanatory programs to transfer to new tasks, and (4) imagining situations that have not been observed directly. Behind-the-Scenes is the first benchmark to cover all of these four aspects. We highlight on Tab. <ref> the difference from previous visual reasoning tasks in these aspects. 
Behind-the-Scenes serves as a legitimate task and dataset for the model-building abilities, which is beneficial to foster the machine-learning paradigm to perform problem-solving beyond pattern recognition. To summarize, we make the following important contributions: * We propose NEUMANN[Code is available: <https://github.com/ml-research/neumann>], a memory-efficient differentiable forward reasoner using message-passing. We theoretically and empirically show that NEUMANN requires less memory than conventional tensor-based differentiable forward reasoners <cit.>. Given G ground atoms and C^* ground clauses, conventional differentiable forward reasoners consume memory quadratically 𝒪(G × C^*), but NEUMANN consumes linearly 𝒪(G + C^*). * We propose a computationally-efficient learning algorithm for NEUMANN to learn complex programs from visual scenes. NEUMANN performs gradient-based scoring and differentiable sampling, avoiding nested loops for scoring candidate clauses. * We propose a new challenging task and a dataset, Visual Reasoning Behind the Scenes, where the agents need to perform abstract visual learning and reasoning on complex visual scenes. The task requires the agents to learn abstract operations from small data on visual scenes and reason about non-observational scenes to answer queries. The task evaluates machine-learning models on the different essential model-building properties of intelligence beyond perception, which are not covered by the previously addressed visual reasoning benchmarks. * We empirically show that NEUMANN solves visual reasoning tasks such as Kandinsky patterns <cit.> and CLEVR-Hans <cit.> using less memory than conventional differentiable forward reasoners, outperforming neural baselines. More importantly, we show that NEUMANN efficiently solves the proposed Behind-the-Scenes task, outperforming conventional differentiable forward reasoners. To this end, we show that NEUMANN gains the advantages of scalable and explainable visual reasoning and learning against symbolic and neuro-symbolic baselines. § FIRST-ORDER LOGIC, DIFFERENTIABLE REASONING, AND GRAPH NEURAL NETWORKS Before introducing NEUMANN, we revisit the basic concepts of first-order logic and graph neural networks. First-Order Logic (FOL). A Language ℒ is a tuple (𝒫, 𝒜, ℱ, 𝒱), where 𝒫 is a set of predicates, 𝒜 is a set of constants, ℱ is a set of function symbols (functors), and 𝒱 is a set of variables. A term is a constant, a variable, or an expression formed by applying a functor to terms. A ground term is a term with no variables. We denote an n-ary predicate p by p/n. An atom is a formula p(t_1, …, t_n), where p is an n-ary predicate symbol and t_1, …, t_n are terms. A ground atom or simply a fact is an atom with no variables. A literal is an atom or its negation. A positive literal is just an atom. A negative literal is the negation of an atom. A clause is a finite disjunction (∨) of literals. A ground clause is a clause with no variables. A definite clause is a clause with exactly one positive literal. If A, B_1, …, B_n are atoms, then A ∨ ¬B_1 ∨ … ∨ ¬B_n is a definite clause. We write definite clauses in the form of A ← B_1,…,B_n. Atom A is called the head, and the set of atoms {B_1, …, B_n} is called the body. We call definite clauses simply clauses in this paper. We denote 𝑡𝑟𝑢𝑒 as ⊤ and 𝑓𝑎𝑙𝑠𝑒 as ⊥. Substitution θ = { X_1 = t_1, ..., X_n = t_n} is an assignment of term t_i to variable X_i. An application of substitution θ to atom A is written as Aθ. An atom is an atomic formula.
For formulas F and G, ¬F, F ∧ G, and F ∨ G are also formulas. An interpretation of language ℒ is a tuple (𝒟, ℐ_𝒜, ℐ_ℱ, ℐ_𝒫), where 𝒟 is the domain, ℐ_𝒜 assigns an element of 𝒟 to each constant a∈𝒜, ℐ_ℱ assigns a function from 𝒟^n to 𝒟 to each n-ary function symbol f∈ℱ, and ℐ_𝒫 assigns a function from 𝒟^n to {⊤, ⊥} to each n-ary predicate p∈𝒫. For language ℒ and formula X, an interpretation ℐ is a model if the truth value of X w.r.t. ℐ is true. Formula X is a logical consequence or logical entailment of a set of formulas ℋ, denoted ℋ ⊨ X, if every interpretation ℐ of ℒ that is a model for ℋ is also a model for X.
(Differentiable) Forward Reasoning is a data-driven approach to reasoning in FOL <cit.>. Forward reasoning is performed by applying a function called the T_𝒞 operator, which deduces new ground atoms from given clauses and ground atoms. For a set of clauses 𝒞, the T_𝒞 operator <cit.> is a function that applies the clauses in 𝒞 to given ground atoms 𝒢: T_𝒞(𝒢) = 𝒢 ∪ { A  |  A :- B_1, …, B_n ∈ 𝒞^* ∧ ({ B_1, …, B_n } ⊆ 𝒢) }, where 𝒞^* is the set of all ground clauses that can be produced from 𝒞. Note that the union with 𝒢 retains the ground atoms deduced in previous steps. The forward reasoning function can then be defined as the function that repeatedly applies the T_𝒞 operator to the given ground atoms. Differentiable forward reasoning <cit.> uses only simple tensor operations to compute forward reasoning. Given G ground atoms and C clauses, the reasoner computes the grounding of the clauses, removing variables and producing C^* ground clauses; it then builds an index tensor 𝐈∈ℕ^G × C^*, which holds the indices of ground atoms for each ground clause. The differentiable forward reasoner computes logical entailment by referring to the index tensor repeatedly.
Graph Neural Networks. A Graph Neural Network (GNN) <cit.> is a type of neural network that processes graphs as inputs. An input is represented as (𝖦, 𝐱_𝑛𝑜𝑑𝑒, 𝐱_𝑒𝑑𝑔𝑒), where 𝖦 is a directed or undirected graph, 𝐱_𝑛𝑜𝑑𝑒 represents node features, and 𝐱_𝑒𝑑𝑔𝑒 represents edge features. Given an input, a GNN computes the node representations by performing message-passing: x_i^(t+1) = f_𝑢𝑝𝑑𝑎𝑡𝑒(x_i^(t), ⊕_j ∈𝒩(i) c_ji· x_j^(t)), where t ∈ℕ is a time step, x_i^(t) is the feature of node 𝗑_𝗂 at time step t, 𝒩(i) is the set of indices of neighbors of node 𝗑_𝗂, c_ji is an edge feature, and ⊕ is an aggregation function that aggregates messages from neighbors.
§ NEUMANN
NEUMANN computes logical entailment in a differentiable manner given visual input and weighted clauses. Fig. <ref> illustrates the overview of NEUMANN's reasoning pipeline. In contrast to conventional differentiable forward reasoners <cit.>, NEUMANN performs message-passing on graphs in the following steps: (Step 1) A visual input is fed into a neural network to perceive objects in the scene. The output of the neural network is encoded into a set of probabilistic atoms 𝐱_𝑎𝑡𝑜𝑚^(0). (Step 2) Given the input probabilistic atoms 𝐱_𝑎𝑡𝑜𝑚^(0), NEUMANN performs T bi-directional message-passing steps. The graph represents a set of weighted clauses, and the output node features 𝐱_𝑎𝑡𝑜𝑚^(T) represent the probabilistic values of logical entailment given 𝐱_𝑎𝑡𝑜𝑚^(0) and the weighted clauses. We describe each step in detail.
§.§ Forward Reasoning Graph
We represent a set of weighted clauses as a directed bipartite graph. Fig. <ref> shows an example of a set of weighted clauses and a corresponding forward reasoning graph.
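Before turning to the graph itself, the symbolic T_𝒞 operator defined in the background section can be made concrete with a small sketch. The string encoding of ground atoms and the (head, body) encoding of ground clauses are illustrative choices of ours; this is plain symbolic forward chaining, not NEUMANN's differentiable reasoner.

```python
# A minimal sketch of symbolic forward chaining with the T_C operator.
# Ground atoms are strings; a ground clause is a (head, body-atom-list) pair.

def t_c(ground_clauses, atoms):
    """One application of T_C: keep the old atoms, add every head whose body holds."""
    new_atoms = set(atoms)
    for head, body in ground_clauses:
        if all(b in atoms for b in body):
            new_atoms.add(head)
    return new_atoms

def forward_chain(ground_clauses, atoms):
    """Apply T_C repeatedly until no new ground atom is deduced (a fixpoint)."""
    atoms = set(atoms)
    while True:
        next_atoms = t_c(ground_clauses, atoms)
        if next_atoms == atoms:
            return atoms
        atoms = next_atoms

# Illustrative ground program: path facts deduced from edge facts.
clauses = [
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]
print(forward_chain(clauses, {"edge(a,b)", "edge(b,c)"}))
# -> edge(a,b), edge(b,c), path(a,b), path(b,c), path(a,c)
```

The same (head, body) view of ground clauses is what the forward reasoning graph described next encodes explicitly, by connecting body atoms to a conjunction node and the conjunction node to the head atom.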
Intuitively, the graph has two groups of nodes representing nodes of ground atoms and nodes of conjunctions. Edges represent how the ground clauses connect the ground atoms and conjunctions with their weights. A Forward Reasoning Graph is a bipartite directed graph (𝒱_𝒢, 𝒱_, ℰ_𝒢→, ℰ_→𝒢), where 𝒱_𝒢 is a set of nodes representing ground atoms (atom nodes), 𝒱_ is set of nodes representing conjunctions (conjunction nodes), ℰ_𝒢→ is set of edges from atom to conjunction nodes and ℰ_→𝒢 is a set of edges from conjunction to atom nodes. Given a set of clauses and ground atoms, Algorithm <ref> shows the construction of a corresponding forward reasoning graph. (Line 1) First, the graph is initialized by adding the atom nodes for ground atoms 𝒢 and a special node ⊤, which represents true. It is used to represent clauses that have no body atoms, 𝚛(𝚇). (Line 2) The function takes a set of clauses and a language as input. In general, an infinite number of ground terms can be considered with functors in FOL. Thus we consider a subset of the ground terms by limiting the number of nested functors. A set of ground clauses 𝒞^* is obtained by substituting ground terms for variables. (Line 3–8) For each ground clause, C^*_i ∈𝒞^*, corresponding node and edges are added to the reasoning graph, edges from body atoms to a conjunction, and from a conjunction to a head atom. 𝖷 denotes the corresponding node in the graph for a logical formula X. Each ground clause corresponds to a conjunction node in the reasoning graph. §.§ Message Passing for Forward Chaining NEUMANN performs forward-chaining reasoning by passing messages on the reasoning graph. Essentially, forward reasoning consists of two steps: (1) computing conjunctions of body atoms for each clause and (2) computing disjunctions for head atoms deduced by different clauses. These two steps can be efficiently computed on bi-directional message-passing on the forward reasoning graph. We now describe each step in detail. (Direction →) From Atom to Conjunction. First, messages are passed to the conjunction nodes from atom nodes. For conjunction node 𝗏_𝗂∈𝒱_, the node features are updated: v_i^(t+1) = ⋁( v_i^(t), ⋀_j ∈𝒩(i) v_j^(t)), where ⋀ is a soft implementation of conjunction, and ⋁ is a soft implementation of disjunction. Intuitively, probabilistic truth values for bodies of all ground clauses are computed softly by Eq. <ref>. (Direction ←) From Conjunction to Atom. Following the first message passing, the atom nodes are then updated using the messages from conjunction nodes. For atom node 𝗏_𝗂∈𝒱_𝒢, the node features are updated: v_i^(t+1) = ⋁( v_i^(t), ⋁_j ∈𝒩(i) w_ji· v_j^(t)), where w_ji is a weight of edge e_j → i. We assume that each clause C_k ∈𝒞 has its weight θ_k, and w_ji = θ_k if edge e_j → i on the reasoning graph is produced by clause C_k. Intuitively, in Eq. <ref>, new atoms are deduced by gathering values from different ground clauses and from the previous step. Performing message-passing by Eq. <ref>-<ref> corresponds to deducing new atoms by Eq. <ref> in FOL using probabilistic inputs and weighted clauses. We used product for conjunction, and log-sum-exp function <cit.> for disjunction: 𝑠𝑜𝑓𝑡𝑜𝑟^γ(x_1, …, x_n) = γlog∑_1≤ i ≤ nexp(x_i / γ), where γ > 0 is a smooth parameter. Eq. <ref> approximates the maximum value given input x_1, …, x_n in a differentiable manner. §.§ Prediction The probabilistic logical entailment is computed by the bi-directional message-passing. 
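For reference, the two soft logic operations used in these updates — product for conjunction and the log-sum-exp softor for disjunction — take only a few lines of PyTorch. This is a sketch with function names of ours, not NEUMANN's API.

```python
import torch

def soft_and(xs):
    """Soft conjunction of probabilistic truth values: their product."""
    out = torch.ones(())
    for x in xs:
        out = out * x
    return out

def soft_or(xs, gamma=0.01):
    """softor^gamma(x_1, ..., x_n) = gamma * log(sum_i exp(x_i / gamma)),
    a smooth, differentiable approximation of the maximum."""
    return gamma * torch.logsumexp(torch.stack(xs) / gamma, dim=0)

vals = [torch.tensor(0.9), torch.tensor(0.2)]
print(soft_and(vals), soft_or(vals))  # approximately 0.18 and 0.90
```

Smaller values of γ make softor a tighter approximation of the maximum, at the cost of sharper gradients; note that the result can slightly exceed the largest input.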
Let 𝐱_𝑎𝑡𝑜𝑚𝑠^(0)∈ [0,1]^|𝒢| be input node features, which map a ground atom to a scalar value, 𝖱𝖦 be the reasoning graph, 𝐰 be the clause weights, ℬ be background knowledge, and T ∈ℕ be the infer step. For ground atom G_i ∈𝒢, NEUMANN computes the probability as follows: p(G_i  | 𝐱_𝑎𝑡𝑜𝑚𝑠^(0), 𝖱𝖦, 𝐰, ℬ, T) = 𝐱^(T)_𝑎𝑡𝑜𝑚𝑠[i], where 𝐱_𝑎𝑡𝑜𝑚𝑠^(T)∈ [0,1]^|𝒢| is the node features of atom nodes after T-steps of the bi-directional message-passing. Algorithm <ref> summarizes the reasoning steps on NEUMANN. (Line 1) Input scene s is converted to probabilistic atoms 𝐱_𝑎𝑡𝑜𝑚𝑠^(0) by the perception function f_𝑝𝑒𝑟𝑐𝑒𝑖𝑣𝑒. We used the perception module of αILP <cit.>, which performs visual perception to produce object-centric representations, and converts them to probabilistic atoms. Given background knowledge is also incorporated to produce 𝐱^(0)_𝑎𝑡𝑜𝑚𝑠. (Line 2-4) For each reasoning time step, the messages are propagated from the atom nodes to the conjunction nodes by Eq. <ref>. (Line 5-6) The messages are propagated from the conjunction nodes to the atom nodes by Eq. <ref>. (Line 8-9) The value for the target atom G_i is extracted by Eq. <ref> and returned. §.§ NEUMANN Memory Consumption We now compare NEUMANN to conventional differentiable forward reasoners <cit.>. NEUMANN achieves memory-efficient reasoning by message-passing. Let 𝒢 be a set of ground atoms and 𝒞 be a set of clauses, which produce a set of ground clauses 𝒞^* with a language ℒ. The memory consumption of the reasoning graph is 𝒪( |𝒢| + |𝒞^*| ), while that of the conventional differentiable forward-chaining tensors is 𝒪( |𝒢| × |𝒞^*| ). Proof. The number of atom nodes is |𝒢|, and the number of the conjunction nodes is |𝒞^*|. Thus, the memory consumption by the nodes is 𝒪(|𝒢| + |𝒞^*|). For each ground clause C^* = A  :-  B_1, …, B_n ∈𝒞^*, each body atom B_i is connected to a conjunction node, n edges, and another edge from the conjunction node to a head atom A. Thus, the memory consumption of the edges is 𝒪(|𝒞^*| × (n+1)). To this end, the total memory consumption of the sum of those of the nodes and edges, 𝒪(|𝒢| + |𝒞^*| + |𝒞^*|(n+1)) ≈𝒪(|𝒢| + |𝒞^*|). The tensor-based reasoners build a tensor 𝐈∈ℕ^|𝒢| × |𝒞^*|, which holds the indices of ground atoms for each ground clause. Thus the overall memory consumption is 𝒪(|𝒢| × |𝒞^*|). §.§ Learning Logic Programs by NEUMANN Now we describe how NEUMANN searches logic programs given a visual ILP problem. Problem Statement. Let 𝒬 = (ℰ^+, ℰ^-, ℬ, ℒ, 𝒵) be a visual ILP problem, where ℰ^+ is a set of positive examples, ℰ^- is a set of negative examples, ℬ is background knowledge, ℒ is a language, and 𝒵 is a language bias. Each example is given as a visual scene. The task is to find a logic program that can perform classification correctly based on the attributes and relations of objects in the scenes. Fig. <ref> shows an overview of learning of NEUMANN. It learns logic programs in two steps: (1) NEUMANN generates promising clauses by iterating scoring and sampling of clauses. Candidate clauses are evaluated by computing their gradients for a classification loss, and promising clauses are sampled via differentiable sampling using the Gumbel-max trick. To this end, new candidate clauses are generated by refining the sampled clauses, and a new reasoning graph is produced. (2) After the iteration of clause generation steps, NEUMANN assigns randomly-initialized clause weights and optimizes them to minimize the classification loss. It uses stochastic gradient descent for the optimization. 
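Before detailing these two learning steps, the prediction pass of the preceding subsections can be summarized in a small self-contained sketch: T bi-directional message-passing steps over an explicitly encoded bipartite graph. The list-based encoding and all names are illustrative and ignore batching; the actual NEUMANN implementation is built on a GNN library (see the discussion section).

```python
import torch

def soft_or(xs, gamma=0.01):
    """softor: smooth, differentiable approximation of max via log-sum-exp."""
    return gamma * torch.logsumexp(torch.stack(xs) / gamma, dim=0)

def predict(x_atoms0, conj_bodies, conj_heads, conj_weights, T=5, gamma=0.01):
    """T steps of bi-directional message passing on a forward reasoning graph.
    x_atoms0     : (num_atoms,) tensor of input valuations x^(0)_atoms
    conj_bodies  : conj_bodies[j] = atom indices in the body of ground clause j
    conj_heads   : conj_heads[j]  = atom index of the head of ground clause j
    conj_weights : conj_weights[j] = weight of the clause producing ground clause j
    Returns x^(T)_atoms, whose i-th entry approximates p(G_i | x^(0), graph, w).
    """
    x_atoms = x_atoms0.clone()
    x_conj = torch.zeros(len(conj_bodies))
    for _ in range(T):
        # atoms -> conjunctions: soft conjunction (product) of the body atoms
        x_conj = torch.stack([
            soft_or([x_conj[j], torch.prod(x_atoms[body])], gamma)
            for j, body in enumerate(conj_bodies)
        ])
        # conjunctions -> atoms: weighted soft disjunction over incoming clauses
        x_atoms = torch.stack([
            soft_or([x_atoms[i]] + [w * x_conj[j]
                                    for j, (h, w) in enumerate(zip(conj_heads, conj_weights))
                                    if h == i], gamma)
            for i in range(len(x_atoms))
        ])
    return x_atoms

# Toy program. Atom order: [edge(a,b), edge(b,c), path(a,b), path(b,c), path(a,c)]
# Ground clauses: path(a,b) :- edge(a,b).  path(b,c) :- edge(b,c).
#                 path(a,c) :- path(a,b), path(b,c).   (clause weight 0.9)
bodies, heads, weights = [[0], [1], [2, 3]], [2, 3, 4], [1.0, 1.0, 0.9]
x0 = torch.tensor([0.95, 0.9, 0.0, 0.0, 0.0])
print(predict(x0, bodies, heads, weights))
```

Running the sketch shows the multi-step behavior described above: the value of path(a,c) only rises once path(a,b) and path(b,c) have been deduced in an earlier step, scaled by the 0.9 clause weight.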
We first describe the clause-generation and weight-optimization steps in detail, and then explain the whole learning algorithm for NEUMANN, highlighting the differences from existing differentiable ILP solvers.
§.§.§ Clause Generation
NEUMANN generates candidate clauses by iteratively (1) scoring clauses using gradients and (2) performing differentiable sampling on the scores and refining the sampled clauses. We extend the beam-search approach used in αILP <cit.> to achieve more efficient clause generation using gradients, avoiding nested loops.
Clause Scoring by Gradients. NEUMANN generates candidate clauses 𝒞 by repeatedly refining given initial clauses 𝒞_0. All clauses are evaluated by computing gradients at once: by using the end-to-end reasoning architecture, NEUMANN scores each clause efficiently using automatic differentiation. Given a visual ILP problem 𝒬, a reasoning graph 𝖱𝖦, clause weights 𝐰, and background knowledge ℬ, NEUMANN computes the binary cross-entropy loss: L(𝒬, 𝖱𝖦, 𝐰) = -𝔼_(e, y) ∼𝒬 [ y log p(y  |  e, 𝖱𝖦, 𝐰, ℬ, T)  + (1-y) log (1 - p(y  |  e, 𝖱𝖦, 𝐰, ℬ, T))], where (e, y) is a tuple of a visual scene e and its label y, with y = 1 if e is a positive example and y = 0 otherwise. The conditional probability of the label p(y  |  e, 𝖱𝖦, 𝐰, ℬ, T) is computed using Eq. <ref>. Using this loss, NEUMANN scores the candidate clauses 𝒞 by computing gradients w.r.t. the clause weights, producing clause scores 𝐬∈ℝ^|𝒞|: 𝐬 = - ∑_𝒳∼𝒬∂ L(𝒳, 𝖱𝖦, 𝐰)/∂𝐰 , where 𝒳 is a sampled batch of labeled examples, 𝖱𝖦 is a reasoning graph constructed using clauses 𝒞, and 𝐰∈ℝ^|𝒞| is a clause-weight vector. Intuitively, clauses that are useful for classifying the given visual scenes receive large negative gradients, since increasing their weights decreases the classification loss. We therefore take the negative raw gradients as evaluation scores: promising clauses that contribute to classifying examples correctly receive high scores. For scoring, all clauses are assigned a uniform weight value to exclude the influence of differences in weight values. Note that we do not update the clause weights 𝐰 in this step; we only compute gradients to score clauses.
Example. Suppose we want to solve a simple classification of visual scenes with the pattern: “If there is a red cube, the scene is positive.”; the scene in Fig. <ref> is a positive example. The task is to learn a classification rule in FOL: 𝚙𝚘𝚜𝚒𝚝𝚒𝚟𝚎(𝚇):-𝚒𝚗(𝙾1,𝚇),𝚌𝚘𝚕𝚘𝚛(𝙾1,𝚛𝚎𝚍),𝚜𝚑𝚊𝚙𝚎(𝙾1,𝚌𝚞𝚋𝚎). We start from a general clause 𝚙𝚘𝚜𝚒𝚝𝚒𝚟𝚎(𝚇):-𝚒𝚗(𝙾1,𝚇)., and by refining it, we get: 𝚙𝚘𝚜𝚒𝚝𝚒𝚟𝚎(𝚇):-𝚒𝚗(𝙾1,𝚇),𝚌𝚘𝚕𝚘𝚛(𝙾1,𝚛𝚎𝚍). 𝚙𝚘𝚜𝚒𝚝𝚒𝚟𝚎(𝚇):-𝚒𝚗(𝙾1,𝚇),𝚌𝚘𝚕𝚘𝚛(𝙾1,𝚋𝚕𝚞𝚎). 𝚙𝚘𝚜𝚒𝚝𝚒𝚟𝚎(𝚇):-𝚒𝚗(𝙾1,𝚇),𝚌𝚘𝚕𝚘𝚛(𝙾1,𝚢𝚎𝚕𝚕𝚘𝚠). We compose a reasoning graph using these three refined clauses and give a uniform weight to all of them. Using the reasoning graph, we compute the scores by Eq. <ref>. The first clause contributes the most to correct classifications and is thus scored higher than the other clauses. NEUMANN performs inference over the given visual scenes only once to score all clauses, rather than iterating over each individual clause. Fig. <ref> illustrates the difference from the conventional clause-scoring strategy. The task is to score candidate clauses 𝒞 given a visual ILP problem 𝒬 in order to perform the clause search. In ∂ILP-ST <cit.> and αILP <cit.>, each clause C_i ∈𝒞 needs to be evaluated individually, and thus the computational cost grows with the product of the number of training examples and the number of clauses to be evaluated. In contrast, NEUMANN evaluates all clauses by calling the backward function once.
Generation by Differentiable Sampling.
Given clause scores 𝐬, we generate new candidate clauses by performing differentiable sampling based on the Gumbel-max trick <cit.> and refining the sampled clauses. The Gumbel-max trick efficiently simulates sampling procedures given scores in a differentiable manner. For the clause scores 𝐬, a noise term is computed as 𝐠 = -log( - log(𝐮)) where 𝐮∼𝚄𝚗𝚒𝚏𝚘𝚛𝚖(0,1). We then add the noise to the original scores, 𝐤 = 𝐬 + 𝐠, so 𝐤 represents the scores perturbed with Gumbel noise. Then clause C_i ∈𝒞 is sampled with i = 𝑎𝑟𝑔𝑚𝑎𝑥 (k_1, …, k_|𝒞|). The sampled clauses are refined using the downward refinement operator <cit.>, which specializes a given clause, i.e., generates clauses that are more specific than the given clause in the sense that they entail fewer atoms. Given a clause C (e.g., 𝚙(𝚇,𝚈).), the refinement operator consists of the following four specializations: (i) add an atom to the body of C (𝚙(𝚇,𝚈):-𝚚(𝚇,𝚈).), (ii) substitute a constant for a variable in C (𝚙(𝚇,𝚊).), (iii) remove a variable by substituting another variable in C (𝚙(𝚇,𝚇).), and (iv) apply a functor (𝚙(𝚇,𝚏(𝚈,𝚉)).). The downward refinement operator ensures completeness: any clause built from a finite set of symbols can be generated by applying the operator a finite number of times to the most general clause <cit.>. NEUMANN uses mode declarations <cit.> (App. <ref>) to restrict the search space, and clauses that do not satisfy the declarations are discarded. The newly generated clauses are added to the set of clauses 𝒞 and are evaluated by Eq. <ref> in the next step.
§.§.§ Weight Optimization
After the clause generation, NEUMANN minimizes the classification loss with respect to the clause weights. So far, we have assumed a single clause-weight vector: by using softmax, NEUMANN can learn to select one clause out of the generated clauses. In practice, however, we need to learn logic programs consisting of multiple clauses. NEUMANN composes differentiable logic programs that consist of multiple clauses as follows: (1) We fix the target program size as M, i.e., we try to find a logic program with M clauses out of the generated clauses 𝒞. (2) We introduce M randomly-initialized |𝒞|-dimensional weight vectors 𝐖 = [ 𝐰^(1), …, 𝐰^(M) ]   (𝐰^(j)∈ℝ^|𝒞|), i.e., each clause gets M individual weights. (3) We take the softmax of each weight vector 𝐰^(j)∈𝐖 to softly choose M clauses out of the |𝒞| clauses, 𝐰̂^(j) = 𝑠𝑜𝑓𝑡𝑚𝑎𝑥(w_1^(j), …, w_|𝒞|^(j)). (4) We compose a clause weight vector 𝐰∈ [0,1]^|𝒞| as: w_i = 𝑠𝑜𝑓𝑡𝑜𝑟^γ(ŵ_i^(1), …, ŵ_i^(M)) = γlog∑_1≤ j ≤ Mexp(ŵ_i^(j) / γ), where γ>0 is a smooth parameter, approximating the maximum of the M weights for each clause in a differentiable manner. For example, suppose 3 clauses are generated by NEUMANN and we want to compose a logic program that consists of 2 clauses, i.e., |𝒞| = 3 and M = 2. By initializing 2 weight vectors and applying softmax to each, we get 𝐰̂^(1) = [0.1, 0.7, 0.2]^⊤ and 𝐰̂^(2) = [0.8, 0.1, 0.1]^⊤. Using Eq. <ref>, we get 𝐰≈ [0.8, 0.7, 0.2]^⊤, where the first 2 clauses get large weights. The weights for the clauses are trained to minimize the loss function. By using the end-to-end reasoning architecture, NEUMANN finds a logic program that explains the complex visual scenes by gradient descent, i.e., it solves 𝐰^* = argmin_𝐰 L(𝒬, 𝖱𝖦, 𝐰), where L is the cross-entropy loss (Eq. <ref>). NEUMANN minimizes the loss by stochastic gradient descent. After performing sufficiently many weight-update steps, the generated clauses and their trained weights are returned. Algorithm <ref> shows the entire learning process of NEUMANN.
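Before walking through the algorithm line by line, the two ingredients of clause generation just described — gradient-based scoring and Gumbel-max sampling — can be sketched as follows. The loss here is a stand-in scalar function; in NEUMANN it would be the classification loss of the differentiable reasoner, and all names are illustrative rather than actual NEUMANN code.

```python
import torch

def score_clauses(clause_weights, loss_fn, batches):
    """Gradient-based clause scoring: s = - sum over batches of dL/dw.
    clause_weights: (num_clauses,) tensor with uniform values, requires_grad=True.
    loss_fn(batch, w): a differentiable classification loss for one batch.
    """
    scores = torch.zeros_like(clause_weights)
    for batch in batches:
        loss = loss_fn(batch, clause_weights)
        (grad,) = torch.autograd.grad(loss, clause_weights)
        scores -= grad  # useful clauses receive large negative gradients -> high scores
    return scores

def gumbel_max_sample(scores, n_sample):
    """Differentiable-sampling step: perturb scores with Gumbel noise, take argmax."""
    picked = []
    for _ in range(n_sample):
        u = torch.rand_like(scores).clamp_min(1e-9)
        g = -torch.log(-torch.log(u))        # Gumbel noise
        picked.append(int(torch.argmax(scores + g)))
    return picked

# Toy usage with a made-up differentiable "loss" in which clause 1 is most useful.
w = torch.full((3,), 0.5, requires_grad=True)
toy_loss = lambda batch, w: (batch - w[1]).pow(2).sum() + 0.1 * w[0] + 0.2 * w[2]
batches = [torch.tensor([1.0, 1.0]), torch.tensor([0.9])]
s = score_clauses(w, toy_loss, batches)
print(s, gumbel_max_sample(s, n_sample=2))  # clause 1 gets the largest score
```

In the toy loss, increasing w[1] reduces the loss, so its gradient is negative and its score positive; the Gumbel perturbation then lets clauses with lower scores still be sampled occasionally, which keeps the search from collapsing onto a single refinement path.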
(Line 1-3) An initial reasoning graph is built. (Line 5-10) Clauses 𝒞 are scored by computing gradients. Useful clauses in 𝒞 get negatively large gradients, and thus they are scored high at line 10. (Line 13-21) Sample clauses to be refined to generate new clauses according to the scores using the Gumbel-max trick. (Line 22-25) The sampled clauses are refined to generate clauses to be scored in the next iteration. (Line 27-32) NEUMANN performs weight optimization using the generated clauses 𝒞_𝑠𝑎𝑚𝑝𝑙𝑒𝑑 with randomly initialized clause weights 𝐰. We highlight the difference between NEUMANN and other differentiable ILP approaches in terms of the memory cost and clause-search cost in Tab. <ref>. As shown in Prop. <ref>, NEUMANN consumes less memory than other approaches, NEUMANN consumes memory linearly with the number of ground atoms and clauses, but others consume quadratically. ∂ILP generates clauses by templates without any symbolic search. Thus it requires no cost for searching but needs to exclude functors to manage the number of clauses to be generated. ∂ILP-ST and αILP perform beam search using exact scoring of clauses. As illustrated in Fig. <ref>, the time complexity of exact scoring is 𝒪(N_𝑑𝑎𝑡𝑎× |𝒞| × R), where N_𝑑𝑎𝑡𝑎 is the number of data, 𝒞 is the set of clauses to be scored, and R is the time complexity of the reasoning function. Although they require nested loops for data and clauses, they can handle functors because beam search can prune redundant clauses. In contrast, NEUMANN computes forward and backward pass for each data to evaluate clauses 𝒞, and thus the time complexity of the scoring is 𝒪(N_𝑑𝑎𝑡𝑎× (R + R)) ≈𝒪(N_𝑑𝑎𝑡𝑎× R) because both forward and backward pass have the time complexity of R. The scoring of clauses needs to be conducted at every step of the search, and thus it is crucial to have an efficient scoring strategy. § EXPERIMENTS We empirically show that NEUMANN is a memory-efficient differentiable forward reasoner equipped with a computationally-efficient learning algorithm by solving visual reasoning tasks. Moreover, we show that NEUMANN solves the proposed Behind-the-Scenes task, where different model-building abilities are required beyond perception. To this end, we show that NEUMANN can perform scalable visual reasoning and learning and provide visual explanations efficiently, outperforming existing symbolic and neuro-symbolic benchmarks. We implemented NEUMANN using PyTorch. All experiments were performed on one NVIDIA A100-SXM4-40GB GPU with Xeon(R):8174 [email protected] and 100 GB of RAM. We aim to answer the following questions: Q1: Does the message-passing reasoning algorithm simulate the differentiable forward reasoning dealing with uncertainty? Q2: Can NEUMANN solve visual ILP problems combined with DNNs outperforming neural baselines and consuming less memory than the other differentiable ILP benchmarks? Q3: Does NEUMANN solve the Behind-the-Scenes task outperforming conventional differentiable reasoners providing the model-building abilities (Tab. <ref>)? Q4: Does NEUMANN provide advantages over state-of-the-art symbolic and neuro-symbolic methods? §.§ Differentiable Reasoning with Uncertainty To answer Q1, we compare NEUMANN with a conventional tensor-based differentiable forward reasoner, αILP <cit.>, and show that both reasoners produce almost the same proof histories dealing with uncertainties given the same input. We explore two datasets used in ∂ILP <cit.>. Even/Odd. Even/Odd is a synthetic dataset to classify even and odd numbers. 
We used the following program: 1.0: 𝚎𝚟𝚎𝚗(𝚜^2(𝚇)):-𝚎𝚟𝚎𝚗(𝚇). and background knowledge ℬ = {𝚎𝚟𝚎𝚗(0)}. 𝚜 is a functor that represents the successor of a natural number; e.g., the natural number 2 can be represented by the term 𝚜(𝚜(0)). The task is to deduce even numbers given the rule about even numbers and the base fact that 0 is an even number. Fig. <ref> shows the proof history produced by NEUMANN and αILP for the even/odd dataset. Each element of the x-axis represents a ground atom. For the y-axis, from top to bottom, each row represents a vector of probabilities over atoms for 5 steps of differentiable forward reasoning, 𝐱^(0)_𝑎𝑡𝑜𝑚𝑠, …, 𝐱^(5)_𝑎𝑡𝑜𝑚𝑠. Both reasoners deduced the atoms 𝚎𝚟𝚎𝚗(0), 𝚎𝚟𝚎𝚗(𝚜^2(0)), …, 𝚎𝚟𝚎𝚗(𝚜^10(0)) step by step with probabilities close to 1.0, but no odd numbers; i.e., they successfully deduced the even numbers and produced almost identical proof histories. The message-passing algorithm thus simulates forward reasoning (Eq. <ref>) in FOL.
Cyclic Graph. Cyclic Graph is a dataset to classify whether each node in a graph is cyclic or not. We used the same directed graph as in ∂ILP <cit.>, shown in Fig. <ref> (top). We used the following weighted clauses to describe the rules of cyclicity: 0.51: 𝚌𝚢𝚌𝚕𝚒𝚌(𝚇):-𝚎𝚍𝚐𝚎(𝚇,𝚇). 0.54: 𝚎𝚍𝚐𝚎(𝚇,𝚈):-𝚎𝚍𝚐𝚎(𝚇,𝚉),𝚎𝚍𝚐𝚎(𝚉,𝚈). and background knowledge to represent the graph: ℬ = { 𝚎𝚍𝚐𝚎(𝚊,𝚋), 𝚎𝚍𝚐𝚎(𝚋,𝚌), 𝚎𝚍𝚐𝚎(𝚋,𝚍), 𝚎𝚍𝚐𝚎(𝚌,𝚊), 𝚎𝚍𝚐𝚎(𝚍,𝚎), 𝚎𝚍𝚐𝚎(𝚍,𝚏), 𝚎𝚍𝚐𝚎(𝚎,𝚏), 𝚎𝚍𝚐𝚎(𝚏,𝚎) }. 𝚌𝚢𝚌𝚕𝚒𝚌(𝚇) means that node 𝚇 is a cyclic node, i.e., there is a path that starts and ends at node 𝚇. The task is to deduce whether each node is a cyclic node or not, given the set of nodes and the weighted clauses. Fig. <ref> shows a proof history produced by NEUMANN and αILP for the Cyclic Graph dataset. We show the proof history of the 5 steps of differentiable forward reasoning. Both reasoners produced almost the same probabilities for each ground atom at each time step. More importantly, both reasoners deal with uncertainty: since the given clauses have different weights, the reasoners need to compute probabilistic values for each ground atom according to those weights, and NEUMANN successfully simulates the differentiable forward reasoning by the message-passing algorithm. These results show that the message-passing reasoning of NEUMANN is a valid differentiable forward reasoning function, even though it consumes much less memory than the tensor-based differentiable forward reasoners ∂ILP, ∂ILP-ST, and αILP.
§.§ Differentiable ILP on Complex Visual Scenes
To answer Q2, we compare the performance of NEUMANN with conventional differentiable ILP solvers and neural baselines on visual reasoning tasks and show the obtained explanatory programs. We also compare the memory consumption of NEUMANN with conventional differentiable forward reasoners.
Dataset. We adopted the Kandinsky patterns <cit.> and CLEVR-Hans <cit.> datasets. Both datasets are defined as classification tasks over visual scenes, and the classification rules are defined by attributes of the objects and their relations. Fig. <ref> shows examples of the patterns we used. We use 5 Kandinsky patterns: (P1) twopairs, (P2) closeby, (P3) red-triangle, (P4) online-pair, and (P5) long-line. (P5) is an extension of (P4) where the number of objects is increased to 7 and the constraints of pairing are removed. We performed structure learning on (P1)-(P4), and for (P5), we used a given clause[𝚔𝚙5(𝚇):-𝚒𝚗(𝙾1,𝚇),𝚒𝚗(𝙾2,𝚇),𝚒𝚗(𝙾3,𝚇),𝚒𝚗(𝙾4,𝚇), 𝚒𝚗(𝙾5,𝚇),𝚒𝚗(𝙾6,𝚇),𝚒𝚗(𝙾7,𝚇),𝚘𝚗𝚕𝚒𝚗𝚎(𝙾1,𝙾2,𝙾3,𝙾4,𝙾5,𝙾6,𝙾7).]
to perform reasoning to assess the scalability of logic reasoners for many objects. The dataset contains 10k training examples for each pattern for each positive and negative class, respectively. Likewise, each validation and test split contains 5k examples for each positive and negative class. The CLEVR-Hans dataset <cit.> contains CLEVR <cit.> 3D images, and each image is associated with a class label. Examples for each are shown in Fig. <ref>. We consider 3 binary classification tasks for each pattern. Each class contains 3k training images, 750 validation images, and 750 test images, respectively. More examples for both datasets are available in App. <ref>. Models. For Kandinsky patterns, we compare NEUMANN against two neural baselines and a differentiable ILP baseline. We adopted the ResNet-based CNN model <cit.> as a benchmark and also an object-centric benchmark, YOLO+MLP, where the input figure is fed to the pre-trained YOLO model <cit.>. The output of the pre-trained YOLO model is fed into MLP with 2 hidden layers with nonlinearity to predict the class label. We trained the whole YOLO+MLP network jointly. For CLEVR-Hans tasks, the considered baselines are the ResNet-based CNN model <cit.>, and the Neuro-Symbolic (NeSy) model <cit.>. The NeSy model uses slot attention <cit.> to perceive objects and feeds its output to Set Transformer <cit.>. NeSy-XIL is a NeSy model trained using additional supervision on their explanations. NeSy-XIL is the SOTA neural baseline in the CLEVR-Hans dataset. We used αILP <cit.> as a differentiable ILP baseline for both tasks, and we used the mode declarations <cit.>, a commonly used language bias in ILP. The perception networks (YOLO and slot attention) are also used for NEUMANN and αILP in Kandinsky patterns and CLEVR-Hans, respectively. More details about each baseline are in App. <ref>. Result. We show the accuracy in the test split of the Kandinsky patterns in Tab. <ref>. Both αILP and NEUMANN achieved perfect accuracies for each pattern, although CNN's over-fit while training and performed poorly with testing data. In (P5), the αILP produced run-out-of-memory[It is not trivial to distribute the tensor-based forward reasoners to several GPUs because each of the instances requires the whole index tensor, if we split the large index tensor to distribute, each distributed reasoner cannot refer some atoms and clauses, and thus the reasoning result will be incomplete. Tensor parallelism requires non-trivial engineering <cit.>. ] because the pattern involves seven objects, and many ground atoms and clauses are produced. NEUMANN can perform scalable visual reasoning beyond tensor-based reasoners. Moreover, we compare the memory consumption of NEUAMANN and αILP on Kandinsky patterns in Tab. <ref>. NEUMANN clearly consumes less memory than αILP for each pattern, NEUMANN's reasoning graph size is just 3.12% of the tensor size produced by αILP. These results show the memory efficiency of the message-passing reasoners of NEUMANN. Tab. <ref> shows the results for the CLEVR-Hans dataset. As one can see, NEUMANN achieved high accuracy similar to αILP, outperforming neural-based baselines, showing the capability of NEUMANN in complex 3D visual scenes. Moreover, CLEVR-Hans is a confounded dataset, for the first pattern (C1), a large gray cube and a large cylinder appear in the training and validation scenes, but in the test scenes, the large cube can be different colors. 
Thus, neural baselines perform poorly on the test split because the pure data-driven neural models can be easily confounded <cit.>. Both αILP and NEUMANN achieved high accuracy in the test split because the downward refinement-based clause generation can control the generality of clauses, prevent generating too specific clauses to avoid over-fit. This is achieved by trying the small number of search steps in the clause generation and increasing it step by step by checking the performance in the validation split. We observed peaked weight distributions after training, only one clause gets a large weight. The classification rules obtained by discretizing the clause weights by taking argmax for Kandinsky patterns and CLEVR-Hans are shown in Fig. <ref>. NEUMANN discovered explanatory clauses for each visual pattern. This shows that NEUMANN's learning algorithm can find proper classification rules given positive and negative examples as complex visual scenes. §.§ Visual Reasoning Behind the Scenes To answer Q3, we compare the performance of NEUMANN with conventional differentiable forward reasoners on the behind-the-scenes task showing that (i) it can learn from small data, (ii) it can handle complex visual scenes, (iii) it can learn explanatory programs, and (iv) it can reason about non-observational scenes. Moreover, we also compare the running time of the clause search with a differentiable ILP benchmark. The task consists mainly of two parts: (Task 1) learning abstract operations, and (Task 2) solving queries with imagination, as shown in Fig. <ref>. We describe each task in detail. §.§.§ Task 1: Learning Abstract Operations - CLEVR-List The first task is inductive logic programming from visual scenes for list operations: the member, delete, and sort functions. This is a 3D visual realization of ILP tasks with structured examples, where the goal is to learn abstract list operations given observed input-output pairs, which has been a long-standing task in classical symbolic ILP settings <cit.>, and addressed in the differentiable ILP setting recently <cit.>. We propose CLEVR-List, a visual realization of the list-program induction task by using the CLEVR environment <cit.>, which allows users to generate visual scenes that contain multiple objects with different properties, large red rubber sphere and small gray metal cube. Fig. <ref> shows positive examples for the member, delete and sort functions, which consist of several images representing the inputs and outputs of the target programs to be learned. For example, the first row in the figure represents a positive example for 𝚖𝚎𝚖𝚋𝚎𝚛(𝚌𝚢𝚊𝚗_𝚜𝚙𝚑𝚎𝚛𝚎.𝚙𝚗𝚐, [ 𝚛𝚎𝚍_𝚌𝚢𝚕𝚒𝚗𝚍𝚎𝚛.𝚙𝚗𝚐,𝚐𝚛𝚊𝚢_𝚜𝚙𝚑𝚎𝚛𝚎.𝚙𝚗𝚐,𝚌𝚢𝚊𝚗_𝚜𝚙𝚑𝚎𝚛𝚎.𝚙𝚗𝚐 ]). In each training, validation, and test split, we used 200 examples for each positive and negative label, respectively. Compared to the previously addressed visual reasoning benchmarks, Kandinsky patterns and CLEVR-Hans, CLEVR-List is challenging in the sense that the agents need to handle multiple visual scenes as input, deeply understanding them and comparing each other. The list programs are involved with functors, and thus, models need to have the capacity to deal with a large number of ground atoms produced by functors. CLEVR-List requires the following model-building abilities: learning from small data, handling several visual scenes, and learning explanatory programs. Models. We compare the performance of NEUMANN with ∂ILP-ST <cit.>, which is a state-of-the-art differentiable ILP solver dealing with functors. 
The parameters for the clause generation (N_𝑡𝑟𝑖𝑎𝑙, N_𝑠𝑎𝑚𝑝𝑙𝑒) are set to (5, 10) for NEUMANN, and we used the same setting for beam search in ∂ILP-ST. We performed 50 epochs of weight optimization using the RMSProp optimizer with a learning rate of 1e-2 and infer step T=5 for both models. We used mode declarations <cit.> as a language bias. The number of nested functors is at most 3, discarding lists with duplicated elements. More details including used mode declarations are in App. <ref>. Result. Fig. <ref> (left) shows the training loss and validation accuracy of NEUMANN in the delete task, showing the progress of the classification-loss minimization by gradient descent. For five different random seeds, NEUMANN achieved stable learning by producing small training loss and high validation accuracy. Fig. <ref> (right) compares the running time of structure learning of NEUMANN and ∂ILP-ST. We measured the running time of each clause generation step. NEUMANN achieved faster learning using gradient-based scoring and differentiable sampling. As highlighted in Tab. <ref>, ∂ILP-ST performs exact scoring for each clause. As the search gets deeper, a large number of clauses tend to be generated because of the large number of combinations of symbols. Thus, the running time of ∂ILP-ST increases drastically in the search, but NEUMANN consistently achieved fast structure learning. We observed peaked weight distributions after training, only some clauses with large weights. Fig. <ref> shows the logic programs learned by NEUMANN, obtained by discretizing clause weights using argmax after training. NEUMANN produced explanatory programs that achieved 1.0 of accuracy in the test split for each operation. These results show that NEUMANN solved the proposed (T1) CLEVR-List task, outperforming a differentiable ILP baseline by the running time. §.§.§ Task 2: Reasoning on Behind-the-Scenes The second task is to perform visual reasoning given queries where the answers are derived by the reasoning behind the scenes, the agent needs to think of the non-observational scenes with imagination. We consider 4 abstract operations: delete, append, reverse, and sort. The input is a pair of an image and a query represented as an atom. For example, a query “What is the color of the second left-most object after deleting a gray object?” can be represented as a query atom: 𝚚𝚞𝚎𝚛𝚢(𝚚_𝚍𝚎𝚕𝚎𝚝𝚎,𝚐𝚛𝚊𝚢,2𝚗𝚍), where 𝚚_𝚍𝚎𝚕𝚎𝚝𝚎 is a constant that represents the query type about the deletion. Forward reasoners can solve this task by combining clauses to parse input visual scenes and derive answers. We used 40k questions (10k for each operation) associated with 10k visual scenes. We compare the performance of NEUMANN and αILP <cit.>. Image Generation. We generated visual scenes using the CLEVR environment <cit.>. Each visual scene contains at most 3 objects with different attributes: (i) colors of cyan, gray, red, and yellow, (ii) shapes of sphere, cube, and cylinder, (iii) materials of metal and matte. We excluded color duplications in a single image. Query Generation. We generated queries for the dataset using query templates, which produce various queries using different features of objects. We used the following template: “What is the color of the [Position] object after [Operation]?”, where [Position] can take either of: left-most first, second, or third. 
[Operation] can take the following form: (i) delete an object, (ii) append an object to the left, (iii) reverse the objects, and (iv) sort the objects with an order of colors: cyan < gray < red < yellow (alphabetical order). Fig. <ref> shows examples of an input scene and some paired queries with their answers. More examples of input scenes, queries and their answers are in App. <ref>. Models. We used the clauses in Fig. <ref> for NEUMANN and αILP. The first clause about 𝚌𝚑𝚊𝚒𝚗 generates a chain of colors given a visual scene. For example, given the visual scene in Fig. <ref>, the atom 𝚌𝚑𝚊𝚒𝚗([ 𝚛𝚖𝚌.𝚙𝚗𝚐,𝚐𝚖𝚜.𝚙𝚗𝚐,𝚌𝚖𝚌.𝚙𝚗𝚐 ]) is deduced using body atoms 𝚕𝚎𝚏𝚝_𝚘𝚏(𝚛𝚖𝚌.𝚙𝚗𝚐,𝚐𝚖𝚜.𝚙𝚗𝚐) and 𝚕𝚎𝚏𝚝_𝚘𝚏(𝚐𝚖𝚜.𝚙𝚗𝚐,𝚌𝚖𝚌.𝚙𝚗𝚐). The second clause parses the chained objects to colors. For example, the atom 𝚜𝚌𝚎𝚗𝚎([𝚛𝚎𝚍,𝚐𝚛𝚊𝚢,𝚌𝚢𝚊𝚗]) is deduced using body atoms of 𝚌𝚑𝚊𝚒𝚗([ 𝚛𝚖𝚌.𝚙𝚗𝚐,𝚐𝚖𝚜.𝚙𝚗𝚐,𝚌𝚖𝚌.𝚙𝚗𝚐 ]), 𝚌𝚘𝚕𝚘𝚛(𝚛𝚖𝚌.𝚙𝚗𝚐,𝚛𝚎𝚍), 𝚌𝚘𝚕𝚘𝚛(𝚐𝚖𝚜.𝚙𝚗𝚐,𝚐𝚛𝚊𝚢), and 𝚌𝚘𝚕𝚘𝚛(𝚌𝚖𝚌.𝚙𝚗𝚐,𝚌𝚢𝚊𝚗). The last 4 clauses compute answers for different types of queries using the parsed 𝚜𝚌𝚎𝚗𝚎 atoms, list operations, and other utility predicates (Fig. <ref> in the appendix ). 𝚚𝚞𝚎𝚛𝚢2 and 𝚚𝚞𝚎𝚛𝚢3 represent queries, 𝚚𝚞𝚎𝚛𝚢2(𝚚, 𝚚_𝚜𝚘𝚛𝚝, 2𝚗𝚍) represents a query “What is the color of the 2nd-left object after sorting the objects?”. A query atom is given being paired with an input image. Result. Tab. <ref> shows the accuracy for each type of queries. αILP produced run-out-of-memory, however, NEUMANN successfully solves different types of queries with high accuracy. As shown in Fig. <ref>, the clauses to answer the queries consist of many functors for the list representation and existentially quantified variables, and thus a large number of ground atoms and clauses is generated, which is difficult to be handled by the conventional tensor-based reasoner, ∂ILP, ∂ILP-ST, and αILP. In fact, Tab. <ref> shows the numbers of ground atoms and clauses generated for Kandinsky patterns, CLEVR-Hans, and Behind-the-Scenes, respectively. Behind-the-Scenes requires the models to handle many ground representations, a large Herbrand base with functors, and thus memory-efficient reasoning is necessary. These results show that NEUMANN solved the Behind-the-Scenes task outperforming conventional tensor-based reasoners overcoming the bottleneck of the intensive memory consumption scaling to deal with functors on visual scenes and query answering. §.§ Advantages against other Symbolic and Neuro-Symbolic Methods To answer Q4, we compare the performance of NEUMANN against state-of-the-art symbolic and neuro-symbolic methods. Moreover, we show that NEUMANN can produce visual explanations efficiently using gradients using the end-to-end differentiable reasoning architecture. §.§.§ Scalable Visual Reasoning and Learning First, we show that NEUMANN can perform scalable visual reasoning, it can handle a large number of examples of complex visual scenes. To show that, we compare the inference time for Kandinsky patterns and CLEVR-Hans. We used two state-of-the-art neuro-symbolic methods to be compared: * Feed-Forward Neural-Symbolic Learner (FFNSL) <cit.> is a neuro-symbolic learning framework that integrates Answer Set Programming (ASP) with neural networks. It performs visual perception using neural networks, reads out their output as weighted logic representations, and performs Inductive Learning of Answer Set Programs (ILASP) <cit.>, which conducts efficient structure-learning of logic programs based on ASP semantics. It uses CLINGO <cit.>, a well-established ASP-solving system for inference. 
FFNSL can handle noisy input as weighted expressions, but its inference engine requires discrete input and returns discrete logic representations. * DeepProbLog <cit.> is a neuro-symbolic framework that integrates neural networks to ProbLog <cit.>, which is a well-established probabilistic logic inference engine. ProbLog accepts probabilistic input and produces probabilistic output by performing exact probabilistic inference by compiling logic representations into circuits, Sentential Decision Diagrams <cit.>. DeepProbLog obtains gradients out of the ProbLog output and train neural networks efficiently, and it is also applied to perform structure learning in the sketching setting <cit.>, where programs are learned to complete partially-given input programs. We measured the inference time (including the grounding of programs) for the training split. We changed the proportion of the data to be used from 0.2 (use 20% of the training data) to 1.0 (use the full training data) for each dataset. We used a batch size of 200 consistently throughout the experiments. We set 1000 seconds as the timeout. Fig. <ref> shows the inference-time comparison of NEUMANN, FFNSL, and DeepProbLog on Kandinsky patterns and CLEVR-Hans. In each dataset, NEUMANN achieved the fastest inference among the baselines. Especially in the patterns that require complex classification rules and many ground atoms, red-triangle and online-pair, NEUMANN significantly outperformed DeepProbLog and FFNSL. On online-pair, DeepProbLog timed out even with 20% of training data. This shows that NEUMANN can perform scalable visual reasoning for a large amount of data. We note that NEUMANN and DeepProbLog achieve differentiable reasoning, we can obtain gradients out of the probabilistic reasoning result, but FFNSL does not provide this function because of its pure-symbolic reasoner. Moreover, to show that NEUMANN performs scalable visual learning, we compare the performance and running time of learning of NEUMANN and FFNSL using Kandinsky and CLEVR-Hans. In this setting, FFNSL serves as a symbolic learning benchmark because it uses symbolic ILP systems, ILASP <cit.>, to learn programs. We changed the proportion of the training and validation data: 0.01 (1 %), 0.05 (5 %), 0.1 (10 %), and 0.5 (50 %). We measured the test accuracy using the full test data (100 %) consistently for each setting. We measured the running time of learning, which includes the whole process, visual perception, obtaining logic representations, and search logic programs. We used a batch size of 200 and a learning rate of 1e-2, and trained 20 epochs. We set (N_𝑡𝑟𝑖𝑎𝑙, N_𝑠𝑎𝑚𝑝𝑙𝑒) as (4, 10) for red-triangle in Kandinsky and (6, 10) for the third pattern of CLEVR-Hans. To achieve FFNSL on Kandinsky and CLEVR-Hans, we convert each visual scene to a weighted example for ILASP as described in <cit.>. We set to time out as 5000 seconds. Fig. <ref> shows the accuracy of the test split (left) and the learning time (right). In both datasets, FFNSL achieved high accuracy with a very small number of training data (1% of training visual scenes), showing the advantage of the symbolic learning method to generalize from small data. However, as the number of training data increases, the learning time increases drastically. To this end, FFNSL handled less than 10% of the training visual scenes in Kandinsky and CLEVR-Hans. In contrast, NEUMANN performed learning much faster than FFNSL using a large number of training visual scenes. 
This shows that NEUMANN is scalable for large datasets, it can learn from a large number of visual scenes. Moreover, in Kandinsky, although NEUMANN produced test accuracy with a large variance with a small number of training data, it achieved higher accuracy and gained stability with less variance by using more data. Overall, NEUMANN outperformed FFNSL in terms of learning time, allowing the system to use more training data and producing competitive test accuracy. §.§.§ Explanations by Gradients We show that NEUMANN can produce gradient-based explanations efficiently by working with neural networks seamlessly. We use input gradients <cit.>, which is a widely-used explanation method for any differentiable models. Input gradients are gradients with respect to input, for a differentiable function f and an input 𝐱 and output y, it computes 𝐞 = ∂ y / ∂𝐱, where each element 𝐞 represents how the corresponding input (a pixel) is effective to the output[For simplicity, we do not take the element-wise product 𝐱⊙∂ y/∂𝐱.]. NEUMANN can produce input gradients over input atoms, we compute: 𝐞_𝑎𝑡𝑜𝑚𝑠 = ∂ y/∂𝐱_𝑎𝑡𝑜𝑚𝑠^(0) = ∂𝐱_𝑎𝑡𝑜𝑚𝑠^(T)/∂𝐱_𝑎𝑡𝑜𝑚𝑠^(0)·∂ y/∂𝐱_𝑎𝑡𝑜𝑚𝑠^(T). Since the reasoning function of NEUMANN is end-to-end differentiable, 𝐞_𝑎𝑡𝑜𝑚𝑠 can be computed efficiently using automatic differentiation (AD), just calling the backward function once after the reasoning[Since 𝐱^(0)_𝑎𝑡𝑜𝑚 is not a leaf node in the computational graph, we prepare a dummy variable 𝐳_0 with the same shape as 𝐱_𝑎𝑡𝑜𝑚𝑠^(0) and all elements are initialized with 0. In the forwarding, we compute 𝐱^(0)_𝑎𝑡𝑜𝑚𝑠 = 𝐱^(0)_𝑎𝑡𝑜𝑚𝑠 + 𝐳_0, and extract gradients stored in 𝐳_0 after calling the backward function.]. Let 𝐌_1, …, 𝐌_n be (attention) masks over n objects for an input scene produced by an object-centric perception network. We compose visual explanations as follows: E(𝐱) = ∑_iϕ_i ·𝐌_i where ϕ_i is a weight for the i-th mask, which is computed as the maximum probability of input ground atoms regarding the i-th object. Let 𝒥 = { j_1, …, j_m } be indices of the ground atoms regarding the i-th object (𝚌𝚘𝚕𝚘𝚛(𝚘𝚋𝚓_𝚒,𝚛𝚎𝚍)) in ordered set of ground atoms 𝒢. We compute the weight ϕ_i for the mask 𝐌_i: ϕ_i = max (𝐞_𝑎𝑡𝑜𝑚𝑠[j_1], …, 𝐞_𝑎𝑡𝑜𝑚𝑠[j_m]), where 𝐞_𝑎𝑡𝑜𝑚𝑠 is computed by Eq. <ref>. For example, if 𝚘𝚋𝚓1 is a key factor for the classification, the corresponding attention mask 𝐌_1 gets a large weight, highlights 𝚘𝚋𝚓1. To this end, Eq. <ref> computes a heatmap that highlights only objects which are effective in the reasoning result. Fig. <ref> shows explanations produced by NEUMANN for CLEVR-Hans using clauses listed in Fig. <ref>. For each pair of images, the left one shows the original input and the right one shows the heatmap. NEUMANN successfully produced explanations highlighting objects which are the factors for the classification. For example, the first class is about a large cube and a large cylinder, and NEUMANN highlights both but not others for each input. The explanation can be completed very efficiently by using automatic differentiation (AD). The same explanation could be obtained by using gradients in DeepProbLog. However, as shown in Section <ref>, it has a scalability issue for a large amount of complex visual scenes. In contrast, NEUMANN produces gradient-based explanations but still can handle a large amount of complex visual scenes in a scalable manner. 
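The explanation procedure just described is straightforward to realize with automatic differentiation. The following is a self-contained sketch of the input-gradient computation via a dummy zero variable (as in the footnote) and the mask-weighted heatmap E(x); the stand-in linear "reasoner", the masks, and the atom indices are made up for illustration and are not NEUMANN's code.

```python
import torch

def atom_input_gradients(x_atoms0, reason_fn, target_index):
    """Gradients of one output atom w.r.t. the initial atom valuations (e_atoms).
    x_atoms0 : (num_atoms,) tensor of input ground-atom probabilities x^(0).
    reason_fn: a differentiable forward-reasoning function mapping x^(0) to x^(T).
    """
    z0 = torch.zeros_like(x_atoms0, requires_grad=True)  # dummy leaf variable
    y = reason_fn(x_atoms0 + z0)[target_index]
    y.backward()                                          # one backward call
    return z0.grad

def explanation_heatmap(e_atoms, masks, atom_indices_per_object):
    """E(x) = sum_i phi_i * M_i with phi_i = max over the i-th object's atom gradients."""
    heat = torch.zeros_like(masks[0])
    for mask, idxs in zip(masks, atom_indices_per_object):
        phi = e_atoms[idxs].max()
        heat = heat + phi * mask
    return heat

# Toy usage: a fixed linear map stands in for the reasoner; object masks are 4x4.
A = torch.tensor([[0.0, 0.9, 0.1], [0.5, 0.0, 0.5]])
reason = lambda x: A @ x
x0 = torch.tensor([0.2, 0.8, 0.1])
e = atom_input_gradients(x0, reason, target_index=0)
masks = [torch.ones(4, 4), torch.zeros(4, 4)]
print(e, explanation_heatmap(e, masks, [[0, 1], [2]]))
```

In this toy example, the atoms of the first object carry the largest gradient for the target output, so its mask dominates the heatmap, mirroring how objects that drive the classification are highlighted in Fig. <ref>.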
Moreover, FFNSL relies on the discrete inference engine, CLINGO, and thus it is difficult to produce gradient-based explanations using automatic differentiation, it requires additional hard coding for explanations. Overall, NEUMANN is the only framework that achieves scalable differentiable forward reasoning and learning, producing explanations on complex visual scenes working with neural networks efficiently. §.§ Discussions We now discuss NEUMANN's advantages, computation, impact, and limitations. What are the advantages compared to pure-symbolic learners? The most promising feature of NEUMANN compared to pure symbolic systems is its capability to handle a large amount of visual input in a scalable manner. As shown in Sec. <ref>, NEUMANN can perform visual reasoning and learning, outperforming state-of-the-art neuro-symbolic benchmarks regarding running time and performance. This feature is crucial for tightly integrating learning and reasoning with neural networks, algorithmic supervision <cit.> where neural networks are trained efficiently using gradients via symbolic algorithms. For such a setting, reasoners should be able to conduct scalable reasoning for a large amount of data to train neural networks. Otherwise, they would be the bottleneck, limiting the applicability of the neuro-symbolic systems. Moreover, as shown in Sec. <ref>, NEUMANN can use gradient-based XAI methods to produce visual explanations working with perception networks efficiently, and it is difficult to produce the same result with a pure symbolic system without additional hard coding. This feature of NEUMANN leads to essential applications, the right for the right reasons <cit.> approach, which trains neural networks to produce correct explanations using the gradient-based explanations. Overall, NEUMANN does not contradict symbolic approaches but provides a basis for better neuro-symbolic systems. It has been reported that a pure symbolic system can handle noisy examples <cit.>, refuting the motivation of ∂ILP <cit.>. However, NEUMANN provides many other benefits of scalable reasoning and learning paradigm that is compatible with gradient-based methods that include a significant part of the success of DNNs. What makes NEUMANN's reasoning and learning scalable? The scalable performance of NEUMANN can be explained by two reasons. (1) NEUMANN grounds programs once, then use the resulting computational graph repeatedly, as other differentiable forward reasoners do <cit.>. It means that NEUMANN does not compute logic operations (unification) for each specific query. Instead, NEUMANN performs forwarding on the computational graph and then obtains the results. In contrast, (differentiable) backward reasoning, employed in DeepProbLog <cit.>, needs to construct a new computational graph for a new query, making the reasoning expensive. (2) More importantly, NEUMANN is a graph neural network and performs reasoning on GPUs[NEUMANN is implemented using PyG, an established GNN library <https://pyg.org/>]. The ground-once scheme enables the reasoner to build and fix the computational graph as users do with neural networks, defining the network architecture and a set of weights, then the computational graph is constructed and fixed so forwarding can be conducted on GPUs. When dealing with a batch of examples (200 examples), NEUMANN can process them in parallel very efficiently. This feature is not trivial for logic reasoners. Typically, they process a batch of examples sequentially. 
For instance, DeepProbLog uses Sentential Decision Diagrams <cit.> for its reasoning, and it requires building different SDDs on CPUs for each query, and FFNSL uses the CPU-based reasoner (CLINGO <cit.>). Thus it requires non-trivial efforts for these reasoners to compute reasoning by using GPUs in a scalable manner. Why is it crucial to improve the memory consumption of differentiable forward reasoners? Differentiable forward reasoning, introduced in the ∂ILP framework <cit.>, encodes logic programs to tensors. Differentiable forward reasoning is inherently memory intensive and thus limits the expressivity of logic programs, no functors are allowed, each rule consists of at most 2 body atoms, and each predicate takes at most 2 arguments. ∂ILP has been extended to handle structured programs with functors <cit.>, by incorporating search techniques in ILP <cit.>, leading to αILP <cit.>, which learns logic rules from complex visual scenes. However, these methods inherit the memory-intensive tensors and thus cannot handle complex programs for abstract visual reasoning. Differentiable forward reasoning gains several advantages compared to pure symbolic reasoners, as discussed in the previous paragraph. Thus it is crucial to improve memory consumption so that a wide range of programs can be handled in the framework to expand the applicability of neuro-symbolic approaches. Why is graph encoding more efficient than dense tensor encoding? Graphs can represent the relations between different atoms more efficiently than dense tensors. For example, suppose we want to encode the following information, "Atom X is not deduced by atom Y". A reasoning graph can represent this information simply by not having edges between X and Y. In contrast, the dense tensors in differentiable ILP frameworks <cit.> need to hold false symbol for each combination of X and Y, the system needs to keep all of the combinations of ground atoms. This results in NEUMANN being memory efficient as shown in Proposition <ref>. Can sparse tensors result in reasoners comparable to message-passing reasoners? Sparse tensors cannot account for differentiable forward reasoning because they do not support some essential tensor operations as they compress the tensors by breaking the row-and-column structure. As shown in ∂ILP <cit.>, differentiable forward reasoning uses the slice and gather operations on tensors repeatedly, and those are not supported by sparse tensors. Thus simply using sparse tensors does not lead to memory-efficient differentiable forward reasoners. Why can existing VQA models not solve the Behind-the-Scenes task? VQA models accept input as a tuple of an image and a question in natural-language sentences <cit.>. Symbolic programs, typically described as a Domain Specific Language (DSL) to compute answers, are generated by parsing the input question using neural networks. Given visual scenes, it is unclear how to perform program induction on their DSL since there is no uniform structure-learning algorithm for each DSL. Thus simply using existing VQA models for the proposed task cannot be a solution. Limitations Although NEUMANN is a more general framework compared to classic symbolic and neuro-symbolic frameworks, it does suffer from some limitations: (1) The language to be handled is limited to definite clauses, which are rules in FOL with a single head atom. Symbolic systems can handle more complex structures, choice rules in Answer Set Programming (ASP) systems <cit.>. 
(2) The learning algorithm is not jointly training perception networks and logic programs. (3) The message-passing algorithm is not connected to well-known probabilistic semantics, distribution semantics <cit.>. § RELATED WORK NEUMANN builds upon different sub-fields of AI. We revisit relevant studies. Symbolic AI. Symbolic representations, First-Order Logic (FOL), provide essential functions of knowledge representation and reasoning capabilities to AI systems, which are difficult to be provided by purely neural-based models <cit.>. A pioneering study of inductive inference was done in the early 70s <cit.>. Many systems have been developed for inductive inference <cit.>, Model Inference System (MIS) <cit.> has been implemented as an efficient search algorithm for logic programs. Inductive Logic Programming (ILP) <cit.> has emerged at the intersection of machine learning and logic programming. ILP systems using Answer Set Programming (ASP) can learn logic programs beyond definite clauses <cit.>, choice rules. ILP has advantages compared to data-driven DNNs, it can learn explicit programs and learn from small data. Thus, combining ILP with DNNs is a promising approach to overcoming the limitations of the current data-driven machine-learning paradigm. NEUMANN embraces the symbolic learning approaches in the neuro-symbolic setting, where logical reasoning and neural learning are tightly integrated. Probabilistic Logic and Neuro-Symbolic AI. Combining probabilities with symbolic logic has been addressed to establish reasoning systems that can handle uncertainty, distribution semantics  <cit.> and Bayesian logic programs <cit.>. Probabilistic Inductive Logic Programming <cit.> combines ILP with probabilistic semantics establishing a new learning paradigm. Structure learning algorithms for probabilistic logic programs have been developed, SLIPCOVER <cit.>. These approaches focus on learning with probabilistic semantics, but NEUMANN engages differentiable reasoning and learning, where parameters get gradients optimized via gradient descent. Lifted inference <cit.> addresses efficient reasoning, reducing computational graphs by using symmetry, and these techniques could be incorporated into NEUMANN since it employs graphs as its representation. Markov Logic Networks (MLNs) <cit.> takes a similar approach to ground the logic programs to produce a graph structure. MLNs perform inference based on Bayesian networks, but NEUMANN computes differentiable forward reasoning by message-passing as graph neural networks. Integration of symbolic computations and neural networks, called neuro-symbolic AI <cit.>, has attracted a lot of attention in recent years. Many frameworks have been developed for parameter estimation of DNNs using symbolic programs, DeepProbLog <cit.>, NeurASP <cit.>, SLASH <cit.>, NS-CL <cit.>, differentiable theorem provers <cit.>, and Embed2Sym <cit.>. In a similar vein, differentiable structure learners have been developed <cit.>, and NEUMANN extends their capacity by having memory-efficient reasoning and computationally-efficient learning. TensorLog <cit.> performs message-passing for backward reasoning, but NEUMANN realizes forward reasoning. GNNs have been used for reasoning <cit.> by composing logical expressions as graphs, where neural representations are trained given symbolic knowledge. In contrast, NEUMANN performs structure learning using reasoning graphs and fuzzy logic operations. 
Logical Neural Networks (LNNs) <cit.> is a class of neural networks where each node has its logical semantics. LNNs parameterize soft-logical operations, but NEUMANN parameterizes clauses with their weights. Lifted Relational Neural Networks <cit.> uses rules as a template to produce deep neural networks. NEUMANN uses rules as a template to produce differentiable message-passing forward reasoner. Integration DNNs with abductive learning <cit.> has been addressed, where the agent learns to complete a symbolic knowledge base. This approach does not address program induction from raw inputs. In contrast, NEUMANN performs structure learning from complex visual scenes. MetaABD <cit.> has been proposed to perform program induction based on abductive learning by integrating a learning system Metagol <cit.>. Metagol handles definite clauses without functors, but NEUMANN can learn programs with functors. Visual Reasoning Datasets. The deep-learning community has developed many image datasets for evaluating different image-recognition models, MNIST <cit.> and ImageNet <cit.>. However, these datasets are dedicated to simple label classification, and thus difficult to assess the reasoning abilities of machine-learning models. To overcome this limitation, visual datasets with reasoning requirements have been developed. Visual Question Answering (VQA) <cit.> is a well-established scheme to learn to answer questions given as natural language sentences together with input images. VQA has an assumption that the programs to compute answers are given as input questions. However, in Behind-the-Scenes, the agents need to learn abstract programs to compute the answers. Moreover, VQA models do not address learning from small data and transferring the obtained knowledge to new tasks, which are parts of the Behind-the-Scenes requirement. Neuro-symbolic models achieve multi-modal learning on VQA <cit.>, but NEUMANN addresses rather structure-learning problems with differentiable logic programming. VQAR <cit.> is a variant of VQA with relational reasoning, CLEVRER <cit.> is an extension of CLEVR with sequential input, and MNIST-Addition <cit.> is about learning DNNs to add hand-written digits. These datasets involve essential aspects of reasoning, , relations with multiple entities, temporal reasoning, and arithmetic computation. However, as shown in Tab. <ref>, Behind-the-Scenes achieve the four essential model-building aspects: (i) learning from small data, (ii) learning from complex visual scenes, (iii) learning explanatory programs, and (iv) reasoning beyond observations. Previously proposed datasets cover some of these aspects but not all of them. The proposed Behind-the-Scenes task is the first dataset to assess the four model-building abilities of machine-learning models. Abstract Visual Reasoning (AVR) has been addressed to test the ability to apply previously gained knowledge and programs in a completely new setting, posing challenges to DNNs <cit.>. The methods have been evaluated on simple tasks of abstract puzzles, Raven's progressive matrices <cit.>. The proposed task, Behind-the-Scenes, requires structured program induction and reasoning beyond observation in complex 3D visual scenes, which have not been addressed in previous AVR studies. A motivation of the proposed behind-the-scene task is problem solving, which is an essential aspect of human intelligence of solving problems beyond perception using reasoning <cit.>. 
Humans can learn much from a small number of experiences, developing capacities to represent physical objects and reason about their motion <cit.>. Inspired by these studies, the development of adaptive learning skills of humans has been addressed as a model-building problem <cit.>, and data-driven DNNs are insufficient to achieve these aspects. NEUMANN tackles this challenge by performing memory-efficient differentiable forward reasoning using DNNs. Graphs and Circuits for Reasoning. Many approaches have been developed to encode reasoning structures as graphs and circuits. Binary Decision Diagrams (BDDs) <cit.> encode propositional logic expressions compactly as a directed graph, leading to variant structures, e.g., Sentential Decision Diagrams (SDDs) <cit.>, Zero-suppressed Decision Diagrams (ZDDs) <cit.>, and Zero-suppressed Sentential Decision Diagrams (ZSDDs) <cit.>. These architectures are developed for propositional logic or combinatorial optimization, but the reasoning graph of NEUMANN represents first-order logic and addresses differentiable reasoning. Their efficient compression algorithms and operations between graphs (e.g., taking the conjunction of two graphs) could be applied to reasoning graphs in NEUMANN. Sum-Product Networks (SPNs) <cit.> encode tractable probability distributions in graphs, which repeatedly consist of sum and product layers. NEUMANN shares a similar structure since its reasoning graph consists of atom nodes to compute disjunctions and conjunction nodes, and performs bi-directional message-passing. SPNs solve exact probabilistic inference, but NEUMANN addresses differentiable reasoning on first-order logic.
§ CONCLUSION
We presented NEUMANN, a memory-efficient differentiable forward reasoner that passes messages on reasoning graphs. NEUMANN compiles logic programs in first-order logic to a graph that encloses the process of forward reasoning and performs message-passing in a neural fashion. Moreover, we proposed a computationally-efficient learning algorithm combining gradient-based scoring and differentiable sampling of clauses. In our experiments, we have shown: (1) The message-passing reasoning algorithm simulates the differentiable forward reasoning dealing with uncertainty. (2) NEUMANN can solve visual ILP problems combined with DNNs, outperforming neural baselines and consuming less memory than the other differentiable ILP benchmarks. (3) NEUMANN solves the Behind-the-Scenes task outperforming conventional differentiable reasoners, providing model-building abilities beyond simple perception capabilities: learning from small data, understanding visual scenes deeply, learning explanatory programs, and reasoning about non-observational scenes. (4) NEUMANN performs scalable visual reasoning and learning, outperforming state-of-the-art symbolic and neuro-symbolic methods regarding running time and performance. Moreover, NEUMANN can incorporate XAI methods efficiently, e.g., NEUMANN produces gradient-based visual explanations using DNNs. NEUMANN provides several interesting avenues for future work. NEUMANN is an instance of GNNs, providing the capability of representation learning to make neuro-symbolic reasoning more robust and multi-modal. Moreover, NEUMANN enables differentiable reasoning on complex logic programs with functors and thus can be used for vital applications, such as planning, meta-interpreters, and knowledge-enhanced foundation models.
NEUMANN is also promising for the right for the right reasons approach <cit.>, where neural networks are trained to produce correct explanations, which is a vital factor in achieving explainable machine learning systems. Generally, it bridges the current data-driven machine learning paradigm with knowledge representation and reasoning to perform problem-solving beyond perception.
§ LOGIC PROGRAMS FOR BEHIND-THE-SCENES
We show the logic programs that are used for solving the Behind-the-Scenes task but were not shown in the main text. Fig. <ref> shows a set of clauses to define utility predicates to solve the task, e.g., extracting the color of an object according to its position. Fig. <ref> shows additional logic programs used for query answering.
§ EXPERIMENTAL DETAILS ON REASONING BEHIND-THE-SCENES
We describe the experimental details of the Behind-the-Scenes task. Task 1 (T1). We used the mode declarations in Tab. <ref> for both NEUMANN and ∂ILP-ST. The definition of mode declarations is available in Sec. <ref>. We performed 50 epochs of weight optimization with a batch size of 64. We used the RMSProp <cit.> optimizer with a learning rate of 1e-2. To prune overly general clauses in the clause generation step, we gradually increased the ratio of negative examples from 20% to 100% by 20% in the first 5 trials of the clause generation. For 𝚜𝚘𝚛𝚝, we performed curriculum learning, first learning a simpler predicate (𝚒𝚜_𝚜𝚘𝚛𝚝𝚎𝚍) and then finalizing the complete learning (𝚜𝚘𝚛𝚝). We limit the number of nested functors to at most 3 and discard lists with duplicated elements. We used the same perception model as in CLEVR-Hans, which is described in Section <ref>. Visual examples for each operation are shown in Sec. <ref>. Task 2 (T2). Queries about different operations are given randomly, so the model needs to handle different types of queries in the prediction. To generate visual scenes with queries and their answers, we adopted the generation code of CLEVR <cit.>. When solving, NEUMANN reads out a JSON file which contains instances, and each instance contains a path to an image file and pairs of a query and a corresponding answer, e.g., (𝚚𝚞𝚎𝚛𝚢2(𝚚_𝚍𝚎𝚕𝚎𝚝𝚎,𝚌𝚢𝚊𝚗,2𝚗𝚍), 𝚊𝚗𝚜(𝚛𝚎𝚍)). NEUMANN assigns probabilities over query atoms according to the input: it gives 1.0 for the query atom given as input and 0.0 for other query atoms. The answer is used only for computing accuracies and never in the prediction pipeline. We used a batch size of 64 for NEUMANN and 1 for αILP; αILP ran out of memory even with the smallest batch size, as shown in Tab. <ref>. We used the same perception model as in CLEVR-Hans, which is described in Section <ref>. We limit the number of nested functors to at most 3 and discard lists with duplicated elements. We used additional abstract operations (𝚊𝚙𝚙𝚎𝚗𝚍 and 𝚛𝚎𝚟𝚎𝚛𝚜𝚎) shown in Fig. <ref>. Examples of input scenes paired with queries and their answers are shown in Sec. <ref>.
§ EXPERIMENTAL DETAILS ON KANDINSKY PATTERNS AND CLEVR-HANS
In this section, we describe the experimental setting of Kandinsky Patterns and CLEVR-Hans. §.§ Kandinsky Patterns CNN. We trained ResNet18 for 300 epochs with a batch size of 512. We used the Adam optimizer <cit.> with a learning rate of 1e-5. YOLO+MLP. We used an MLP with two hidden layers. Each hidden layer applies a linear transformation and a non-linearity. The output of the pre-trained YOLO model is reshaped and fed into the MLP to predict the class label. We jointly trained the whole YOLO+MLP network for 1000 epochs with a batch size of 512.
We used the Adam optimizer <cit.> with a learning rate of 1e-5. αILP/NEUMANN. We trained the αILP and NEUMANN models for 100 epochs with a batch size of 64. We used the RMSProp <cit.> optimizer with a learning rate of 1e-2. For αILP, we used 500 positive examples in the validation split to generate clauses by beam search. The mode declarations we used are shown in Tab. <ref>. Tab. <ref> shows the data types and constants, and Tab. <ref> shows the predicates used in our experiments. #obj represents the number of objects to focus on in the classification; it can be identified by starting from the smallest number, evaluating on the validation split, and increasing it if the performance is insufficient. We set the initial clause to be the root node in the beam search as: kp(X) :- in(O1,X),…,in(On,X)., where n is the number of objects to focus on. The background knowledge given for Kandinsky Patterns is shown in Tab. <ref>. §.§ CLEVR-Hans We trained the αILP and NEUMANN models for 100 epochs with a batch size of 256. We used the RMSProp optimizer <cit.> with a learning rate of 1e-2. For αILP, we used 500 positive examples in the validation split to generate clauses by beam search. The mode declarations we used are shown in Tab. <ref>. Tab. <ref> shows the data types and constants, and Tab. <ref> shows the predicates. We set the initial clause to be the root node in the beam search as: ch(X) :- in(O1,X),in(O2,X). We did not provide any background knowledge for the CLEVR-Hans tasks. We refer to <cit.> for details about the CNN, NeSy, and NeSy-XIL benchmarks. §.§ Perception Models We describe the experimental setting of the pre-training of the perception models in our experiments. §.§.§ YOLO for Kandinsky Patterns Model. We used the YOLOv5 model [https://github.com/ultralytics/yolov5], whose implementation is publicly available. We adopted the YOLOv5s model, which has 7.3M parameters. Dataset. We generated 15,000 pattern-free figures for training and 5,000 figures for validation. The class labels and positions are generated randomly. The original image size is 620 × 620, and images are resized to 128 × 128. The label consists of the class labels and the bounding box for each object. The class label is generated by the combination of the shape and the color of the object, e.g., red circle and blue square. The number of classes is 9. Each image contains at least 2 and at most 10 objects. Optimization. We trained the YOLOv5s model by stochastic gradient descent (SGD) for 400 epochs using the pre-trained weights [https://github.com/ultralytics/yolov5/releases]. We used the loss function that approximates detection performance, presented in <cit.>. We set the learning rate to 0.01 and the batch size to 64. The SGD optimizer used momentum, which is set to 0.937. We set the weight decay to 0.0005. We took 3 warmup epochs for training. §.§.§ Slot Attention for CLEVR We used the same model and training setup as the pre-training of the slot-attention module in <cit.>. In the preprocessing, we downscaled the CLEVR images to a dimension of 128 × 128 and normalized the images to lie between -1 and 1. For training the slot-attention module, an object is represented as a vector of binary values for the shape, size, color, and material attributes and continuous values between 0 and 1 for the x, y, and z positions. We trained the slot attention model with the set prediction architecture following <cit.>, using the loss function based on the Hungarian algorithm. We refer to <cit.> for more details.
§ MODE DECLARATION Mode Declaration <cit.> is one of the common language biases for Inductive Logic Programming. We used mode declaration, which is defined as follows. A mode declaration is either a head declaration modeh(r, p(mdt_1, …, mdt_n)) or a body declaration modeb(r, p(mdt_1, …, mdt_n)), where r∈ℕ is an integer, p is a predicate, and mdt_i is a mode datatype. A mode datatype is a tuple (pm, dt), where pm is a place-marker and dt is a datatype. A place-marker is either #, which represents constants, or + (resp. -), which represents input (resp. output) variables. r represents the number of the usages of the predicate to compose a single clause. § MORE EXAMPLES IN DATASETS Fig. <ref> shows some positive and negative examples for member and sort in CLEVR-List. Fig. <ref> shows some positive and negative examples for delete in CLEVR-List. We show some examples of visual input, queries, and their answers in the behind-the-scenes task in Fig. <ref>. We show some examples for each pattern we used in Kandinsky patterns in Fig. <ref>. We also show some examples for each class of CLEVR-Hans in Fig. <ref>.
http://arxiv.org/abs/2307.01254v1
20230703180001
Riemann zeros as quantized energies of scattering with impurities
[ "Andre LeClair", "Giuseppe Mussardo" ]
hep-th
[ "hep-th", "cond-mat.stat-mech", "math.NT" ]
Cornell University, Physics Department, Ithaca, NY 14850, USA SISSA and INFN, Sezione di Trieste, via Bonomea 265, I-34136, Trieste, Italy We construct a physical model of a single particle interacting with impurities spread on a circle, where the quantized energies coming from a Bethe Ansatz equation correspond to the non-trivial zeros of the Riemann ζ-function. Riemann zeros as quantized energies of scattering with impurities Giuseppe Mussardo August 1, 2023 ====================================================================
§ INTRODUCTION
If the Riemann Hypothesis has fascinated mathematicians for decades (see <cit.>), it has also fascinated theoretical physicists for quite a long time (for a review up to 2011, see for instance <cit.>). The idea that a remarkable mathematical property may be understood from the simple and elegant requirements of a physical system is too appealing to pass up. “Understanding" is obviously different from “proving” but it may nevertheless be the first promising step toward a more rigorous approach. It is precisely with such a “theoretical physicist attitude" that we approach the famous problem of the alignment of all zeros of the Riemann zeta function ζ(s) along the axis ℝe s = 1/2. One of the most prominent physical proposals is probably the Hilbert-Pólya idea: it turns the problem of establishing the validity of the Riemann Hypothesis into the existence of a single particle quantum Hamiltonian whose eigenvalues are equal to the ordinates of zeros on the critical line. This has been pursued in several relevant papers, such as <cit.>. In that approach, one searches for a quantum Hamiltonian where the bound state energies are the non-trivial Riemann zeros on the critical line. A different approach, based on statistical physics where random walks play an essential role, has also been pursued <cit.>. The aleatory nature of the problem arises from the pseudo-randomness of the Möbius coefficients evaluated on primes and this property can be checked with astonishing accuracy <cit.>. Up to logarithmic corrections, in this statistical physics approach the validity of the Riemann Hypothesis can be easily understood in physical terms from the universality of the critical exponent 1/2 of the random walk. The zeta function has also been employed in describing scattering amplitudes in quantum mechanics <cit.> and quantum field theory <cit.>. In this article we define a new approach, formulated as a quantum mechanical scattering problem rather than a bound state problem.
Our proposal radically differs from those previously mentioned <cit.> for, in the model we construct, we have a quantization condition for the energies of the system which comes from a Bethe Ansatz equation. The solutions of this Bethe Ansatz equation are exactly the Riemann zeros. This relies on both the functional equation satisfied by ζ(s) and its Euler product formula in an essential manner.
§ BETHE ANSATZ EQUATION FOR IMPURITIES ON A CIRCLE: THE MOST GENERAL CASE
Consider a single particle of momentum p moving on a circle of circumference R without any internal degree of freedom. Such a particle has a dispersion relation E(p) where E is the energy of the particle, typically relativistic or non-relativistic. For generality we leave this dispersion relation unspecified for this section. We suppose there are N stationary impurities spread out on the circle, with no particular location, except that they are separated, and label them j = 1, 2, … N, as illustrated in Figure <ref>. When the particle scatters through a single impurity, there is generally both a transmission and reflection amplitude. We assume there is no reflection, namely the scattering is purely transmitting. There are many known examples of purely transmitting relativistic theories <cit.>, in fact infinitely many that are integrable, and there are also non-relativistic examples of reflectionless potentials <cit.>. To each impurity labeled j we associate a transmission S-matrix S_j(p), which by unitarity, is a phase S_j(p) = e^{i ϕ_j(p)}. Due to the purely transmitting property, the scattering matrix for 2 impurities j,j' is simply S_j(p) S_j'(p), and so on. As the particle moves around the circle, it scatters through each impurity and, coming back to its original position, the matching requirement for its wavefunction leads to the quantization condition of its momentum p expressed by e^{i p R} ∏_{j=1}^N S_j(p) = ± 1, where +1, -1 corresponds to bosons, fermions respectively (see, for instance <cit.> and references therein). If we take the particles to be fermions, we end up with the Bethe-ansatz equation <cit.> p_n R + ∑_{j=1}^N ϕ_j(p_n) = 2π (n - 1/2), for some integer n. Then the quantized energies of the system are E_n = E(p_n). There are several physical applications of this general formula: let us mention, for instance, that if the scattering phases ϕ_j are random, this is essentially a problem of electrons in a random potential in 1 dimension and related to Anderson localization.
§ SCATTERING PROBLEM THAT ASYMPTOTICALLY YIELDS THE RIEMANN ZEROS ON THE CRITICAL LINE
In this section we construct a physical scattering problem where the E_n of the last section are asymptotically the Riemann zeros on the critical line. We first specify the dispersion relation E(p) for the free particle between impurities. Let us take p = E log(E/ħω)/v where v is a speed, such as the speed of light, and ω is a fixed frequency with units of inverse time. Without the log E term, this is a relativistic dispersion relation for a massless particle where v is the speed of light. We henceforth set ħ = 1. Without loss of generality we can redefine v, ω such that p and E are dimensionless, and p = E log(E/(2π e)) where e is the Euler number, i.e. 1 = log e. Note that p is a monotonic function of E. Hence, the above dispersion relation can be inverted: E(p) = p / W(p/(2π e)) where W is the Lambert W function. For large x, W(x) = log x - loglog x + ….. Thus for large p, E(p) ≈ p/log p.
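As a quick consistency check of this inversion (added here for convenience; it is elementary and not specific to this model), one can verify E(p) = p/W(p/(2π e)) directly from the defining relation of the Lambert function, W(x) e^{W(x)} = x, which implies log(x/W(x)) = W(x):

    E \log\frac{E}{2\pi e}
      = \frac{p}{W(x)} \,\log\frac{x}{W(x)}
      = \frac{p}{W(x)} \, W(x)
      = p ,
    \qquad x \equiv \frac{p}{2\pi e},

where in the first step we used E = p/W(x) and hence E/(2π e) = x/W(x).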
Note that as p → 0, E(p) → 0, which is the natural limit of this quantity, based on physics. Let us now specify the scattering phases. We suppose that the transmission S-matrices are more easily expressed in terms of the energy E rather than the momentum p. To each impurity j we associate a positive real number r_j: S_j(E) = (r_j^{1/2+δ} - e^{i E log r_j}) / (r_j^{1/2+δ} - e^{-i E log r_j}), where δ is a positive real number which we will eventually take to the limit δ → 0^+. Note that at zero energy S_j(0) = 1, which is again physically reasonable, since there should be no scattering if the particle has zero momentum. This implies that the scattering phases are ϕ_j(E) = -2 Im log(1 - e^{-i E log r_j}/r_j^{1/2+δ}). Whether there exists a Hamiltonian that leads to these S-matrices we leave as a problem for further study. Hereafter we take the attitude that a theory is defined by its S-matrices and there are well known problems where the theory is defined through its S-matrix while the Hamiltonian remains unknown (see, for instance <cit.>). Without loss of generality, we set R=1, since R with physical dimension of length can be absorbed into various constants above, such as v, ω. The Bethe equation (<ref>) now reads (E_n/2) log(E_n/(2π e)) - ∑_{j=1}^N Im log(1 - e^{-i E_n log r_j}/r_j^{1/2+δ}) = (n - 3/2) π where we have shifted for later convenience n by -1. So far, this is still quite general, and there are no convergence issues if N is finite. For our purposes we chose to take r_j to be the j-th prime number, i.e. r_j = 𝔭_j where {𝔭_1, 𝔭_2, 𝔭_3, …} = {2, 3, 5, …}. Let's mention, en passant, that another interesting choice would be to take r_j as any random integer number between two consecutive primes <cit.>, although we address the study of this case somewhere else <cit.>. The impurities do not have to be ordered in any specific manner. Now, if δ > 1/2 the sum over scattering phases in (<ref>) converges as N → ∞ and, when r_j is the j-th prime number, the equation becomes (E_n/2) log(E_n/(2π e)) + Im log ζ(1/2 + δ + i E_n) = (n - 3/2) π where ζ(s) is the Riemann zeta function, defined both as a Dirichlet series on the integers n and an infinite product on the primes ζ(s) = ∑_{n=1}^∞ 1/n^s = ∏_{j=1}^∞ 1/(1 - 1/𝔭_j^s)      (Re(s) > 1). It is important to remark that Im log z in (<ref>) is the true phase that keeps track of branches, and not Arg z = Im log z mod 2π, where Arg is the principal branch with -π < Arg < π. Although each term in (<ref>) is an Arg by the nature of the scattering phase, the sum of Arg's can accumulate, i.e. the sum of Arg's is not the Arg of the sum, but the Im log of the sum. See also Section V for more specific remarks. When δ > 1/2, there is a unique solution to the equation (<ref>) for every n since for E sufficiently large, i.e. E > E_0 (and it is sufficient to take E_0 ∼ 10), the left hand side of the equation is a monotonically increasing function of E_n. Getting inside the critical strip to the right of the critical line requires 0 < δ < 1/2. In this case, there are essentially three options to deal with this:
* First, one can simply take a finite number N of impurities, so there are no convergence issues. It is known that a truncated Euler product for finite N can be a good approximation to ζ if the truncation is well chosen, as we will explain below.
* The second option is to just declare that ζ(s) in (<ref>) is the standard analytic continuation presented by Riemann.
* The third is to replace the ζ function by one based on a non-principal Dirichlet character χ, where the Euler product takes the form L(s, χ) = ∑_n χ(n)/n^s = ∏_{j=1}^∞ (1 - χ(𝔭_j)/𝔭_j^s)^{-1},       Re(s) > 1.
This function has no pole at s = 1, thus it is possible the Euler product converges to the right of the critical line. In fact it was argued that this is indeed the case due to a random walk property of the sum ∑_𝔭 χ(𝔭) arising from the pseudo-randomness of the primes <cit.>. There are an infinite number of potentially interesting scattering problems based on these generalized zeta functions, but for simplicity here we will only consider the first two options. As δ → 0^+, the quantized energies asymptotically approach the Riemann zeros on the critical line. The equation (<ref>) was first proposed in <cit.> and, as a matter of fact, it is not very asymptotic at all. For the lowest zero at n=1, with δ = 0.0001, one finds E_1 = 14.1347, which is correct to 6 digits. By systematically reducing δ one can calculate the Riemann zeros to great accuracy from the exact equation described in the next section, from thousands to even millions of digits for even the lowest zeros <cit.>. In this article we limit the numerics to zeros around the 100-th for illustration, but very similar results apply to much higher zeros. From (<ref>) we obtain {E_100, E_101, …, E_104} = {236.524, 237.769, 239.555, 241.049, 242.823} which are identical to the true Riemann zeros to the number of digits shown. (Here we have also taken δ = 0.0001.) If one adopts the prescription of a finite number of impurities, one still obtains good results. For only N=1000 impurities, for example, one finds {E_100, E_101, …, E_104} = {236.521, 237.777, 238.139, 241.057, 242.812}.
§ SCATTERING PROBLEM FOR THE EXACT RIEMANN ZEROS
In this section we explain the phenomenon observed in the last section and refine the model to give the exact Riemann zeros on the critical line. Following standard conventions in analytic number theory, let us define a complex variable s = σ + i t, where based on the above notation σ = 1/2 + δ and t = E. We consider zeros on the critical line, which are known to be infinite in number <cit.>. Denote the n-th zero on the upper critical line as ρ_n = 1/2 + i t_n,      n=1,2,3, … where t_1 = 14.1347… is the first zero, and so forth. Labeling them this way, we define below an impurity scattering problem where E_n of the previous section become the exact t_n. Define a completed ζ function as follows[Riemann defined an entire function which is the above χ(s) multiplied by s(s-1)/2 in order to cancel the simple pole at s=1, however this will not be necessary for our purposes.]: χ(s) = π^{-s/2} Γ(s/2) ζ(s), which satisfies the non-trivial functional equation χ(1-s) = χ(s). Let us now write χ(s) in terms of a positive, real modulus |χ(s)| and argument θ: χ(s) = |χ(s)| e^{i θ(σ,t)}, i.e. θ(σ,t) = Im log χ(σ + it). Again, here it is important that θ is the true Im log, not Arg = Im log mod 2π, where Arg is the principal branch with -π < Arg < π. Obviously θ(σ,t) = ϑ(σ,t) + Im log ζ(σ + it), where ϑ(σ,t) ≡ Im log Γ((σ + it)/2) - (t/2) log π. On the critical line ϑ(1/2, t) is commonly referred to as the Riemann-Siegel function. Below, if it is implicit that we are on the critical line we will simply write ϑ(t) ≡ ϑ(1/2, t). On the critical line, χ(s) is real due to the functional equation. Thus moving up the critical line, Im log χ(s) must jump by π at each simple zero. We will call this a vertical approach. However we can also approach a zero from other directions and again relate t_n to a specific angle. We will consider both vertical and horizontal approaches.
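For the reader's convenience, here is the standard one-line argument behind the reality of χ(s) on the critical line (an elementary fact, recalled here and not claimed to appear in this form in the original text); it combines the Schwarz reflection principle, χ(s̄) = conj(χ(s)), valid since χ is real for real s > 1, with the functional equation:

    \overline{\chi\left(\tfrac12 + it\right)}
      = \chi\left(\tfrac12 - it\right)
      = \chi\left(1 - \left(\tfrac12 + it\right)\right)
      = \chi\left(\tfrac12 + it\right)
    \;\Longrightarrow\;
    \chi\left(\tfrac12 + it\right) \in \mathbb{R}.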
Vertical approach. Approaching a zero along the critical line from above: lim_{ϵ→0^+} θ(1/2, t_n + ϵ) = (n-1)π. It's important to note that the non-zero ϵ in (<ref>) is absolutely necessary: if ϵ=0 the equation is not well defined since Im log ζ(1/2 + i t_n) is not defined unless one specifies a direction of approach to the zero. Horizontal approach. Approaching a zero along the horizontal direction from the right of the critical line: lim_{δ→0^+} θ(1/2 + δ, t_n) = (n - 3/2)π. This is just a 90^∘ rotation of equation (<ref>), thus we sent n → n - 1/2. The advantage of this horizontal approach is that there are better convergence properties to the right of the line with δ > 0. One can easily check that for all known Riemann zeros, which is quite a large set, the above equation is exactly satisfied. It can in fact be used to calculate Riemann zeros to high accuracy <cit.>. Ignoring the Im log ζ term, we have t_n ≈ t̃_n where ϑ(t̃_n) = (n - 3/2)π. These are anti-Gram points, namely where the real part of ζ(1/2 + it) is zero, but the imaginary part is non-zero[The usual well-known Gram points are the opposite, i.e. points where the imaginary part is zero but the real part is non-zero. They satisfy ϑ(t) = (n-1)π and are thus more appropriate to the vertical approach based on (<ref>).]. One expects these points to be closer to the actual zeros than the Gram points since it is known that the real part of ζ(1/2 + it) is nearly always positive. For large t one can use the Stirling approximation for log Γ to obtain on the critical line ϑ(t) = (t/2) log(t/(2π e)) - π/8 + 1/(48 t) + O(1/t^3). Thus the dispersion relation that we assumed initially for our scattering problem can be refined to be p = 2 ϑ(E) which asymptotically is the same as in (<ref>). Since ϑ(E) is monotonic, it is invertible and therefore asymptotically eq. (<ref>) is valid. For large n the solution to the equation (<ref>) above is approximately t̃_n ≈ 2π(n - 11/8) / W((n - 11/8)/e). In this limit of large n, the solution to the exact equation (<ref>) is approximately a solution to (<ref>). Again the non-zero δ in (<ref>) is absolutely necessary, since if δ = 0, Im log ζ(1/2 + it) is not defined at a zero t = t_n unless a direction of approach is specified. In fact we have a theorem based on eq. (<ref>) shown in <cit.>:   (França-LeClair)   If there is a unique solution to the following equation ϑ(t_n) + lim_{δ→0^+} Im log ζ(1/2 + δ + i t_n) = (n - 3/2)π then the Riemann Hypothesis is true and all zeros are simple. We will refer to the equation (<ref>) as the França-LeClair equation. The reason this theorem is correct is quite simple. Let N(T) denote the number of zeros in the entire critical strip 0 < σ < 1 up to height T, then it is known from the argument principle that N(T) = ϑ(T)/π + S(T) + 1 where S(t) = (1/π) Im log ζ(1/2 + it). Then the solutions to the equation (<ref>) saturate the counting formula N(T). The shift by 1 above is due to the simple pole at s=1. The essence of the problem falls on evaluating Im log ζ(s), which is a challenging task. In particular on the critical line it is defined by the well-known but notoriously difficult function S(t) = (1/π) Im log ζ(1/2 + it), where Im log ζ is conventionally defined by piecewise integration of ζ'/ζ from (σ, t) = (2, 0) to (2, t) to (1/2, t). It is here that Im log versus Arg is important. The sum of terms in (<ref>) is a sum of Arg's due to their Arg nature, however the sum can accumulate such that it is not between -π and π and is actually the Im log of ζ. This will be clear from known results in Section V which show that S(t) is unbounded.
It is interesting to study the behavior of a fixed t_n = E_n as one increases the number of impurities. Focussing again on E_100 we present results in the Table <ref>. There are several important remarks to make. The approximation t̃_n based on the Lambert W function is smooth, and usually gets the first n digits of t_n correct, but has no interesting statistics. For instance t̃_100 = 235.987. The random matrix statistics of the Montgomery/Odlyzko conjecture <cit.> obviously come from the fluctuations in S(t), as is evident from Table <ref>. These fluctuations are due to the pseudo-randomness of the primes. These statistics were reproduced for solutions of the asymptotic equation of the last section and the exact solutions of (<ref>) in <cit.>.
§ KNOWN PROPERTIES OF S(T)
Nearly all results about this function are in the mathematics literature, briefly summarised below. (i) A classical result of Bohr and Landau <cit.> states that, when t increases, S(t) has infinitely many sign changes and its average is zero <cit.>. (ii) S(t) is unbounded. Von Mangoldt first showed that S(t) = O(log t). Backlund computed specific bounds in 1918. The most recent bound we could find is due to Trudgian <cit.> which is only a modest improvement of Backlund's bound: |S(t)| ≤ 0.1013 log t. This result does not assume the Riemann Hypothesis (RH). It is expected that this is a large overestimate if RH is true. In fact the largest value of S(t) observed in computations around the 10^30-th zero is roughly 3.3455 <cit.>. (iii) A celebrated theorem of Selberg <cit.> states that S(t) over a large interval 0 < t < T satisfies a normal distribution with zero mean and variance ⟨S(t)^2⟩ ≡ (1/T) ∫_0^T S(t)^2 dt = (1/(2π^2)) loglog T + O(√(loglog T)). This is very interesting for our purposes since in order to derive this result, for the 2k-th moment of S(t) one needs to truncate the Euler product to primes 𝔭 < x where x < T^{1/k}. Thus this means that a finite number N of impurities can capture important properties of the Riemann zeros. For the second moment one should truncate at 𝔭 < T, thus for large T the scattering problem still converges for a very large number N of impurities. All properties above have clear implications for this article. First, the Im log ζ term in (<ref>) being O(log t) is strongly subdominant compared to the t log t term coming from ϑ(t). Thus for large enough t one expects the left hand side of the equation to be monotonic since it is dominated by the t log t term which is monotonic, and there should be a solution for every n. The RH is thus more likely to fail at small rather than large t! For instance, equating the (t/2) log(t/(2π e)) term in (<ref>) with the bound (<ref>) one expects the RH to be true for t above the very low value t ≈ 18. Secondly, Selberg's central limit theorem shows that a finite number N of impurities is a meaningful approximation to the Euler product if one truncates it properly. For more recent results on the validity of truncated Euler products, we refer to Gonek's work <cit.>.
§ CONCLUSIONS
The scattering problem we constructed which leads exactly to the Riemann zeros on the critical line requires both the duality equation χ(s) = χ(1-s) and the Euler product formula. The obvious question is: how could Theorem <ref> fail such that the RH is false? It is actually more likely to fail for relatively low zeros, where we know the RH is true!
In fact we have argued that it is more and more likely that there is a unique solution to the exact equation (<ref>) in the limit of large n since the fluctuating S(t) term is more and more subleading as t → ∞. The only way we can imagine Theorem <ref> to fail is if S(t) becomes somehow ill-defined in some region of t. Specifically if S(t) discontinuously jumps by 2, there would be no solution to (<ref>) for some n, and this would signify a pair of zeros off the line which are complex conjugates of each other, or a double zero on the line. This could be the case if we employed a phase shift coming from a function which only satisfies the duality relation. But, for a phase shift coming from the Riemann zeta function, we have its Euler product representation. This ensures its continuity for any arbitrary truncation in its number of terms. Moreover, Selberg's central limit theorem for S(t) is based on the truncation of the Euler product. As a matter of fact, we have shown that adding more terms to the Euler product representation only increases the accuracy of computing the actual zeros t_n, never causing them to disappear, so long as one truncates the product properly (see Section V). These results and others will be presented in more detail in an extended version of this article <cit.>.
§ ACKNOWLEDGEMENTS
We thank German Sierra and Ghaith Hiary for discussions. AL would like to thank SISSA where this work was both started and completed in June 2023.
[Edwards] H.M. Edwards, Riemann's Zeta Function, Academic Press, New York, 1974.
[Borwein] P. Borwein, S. Choi, B. Rooney, A. Weirathmueller, The Riemann Hypothesis. A Resource for the Afficionado and Virtuoso Alike, Canadian Mathematical Society, Springer, 2008.
[Conrey] B. Conrey, "The Riemann Hypothesis", Notices of AMS, March, 341 (2003).
[Schumayer] D. Schumayer and D.A.W. Hutchinson, Physics of the Riemann Hypothesis, Rev. Mod. Phys. 83, 307 (2011), arXiv:1101.3116 [math-ph], and references therein.
[BerryK] M. V. Berry and J. P. Keating, The Riemann zeros and eigenvalue asymptotics, SIAM Review Vol. 41 (1999).
[BerryK2] M. V. Berry and J. P. Keating, A compact hamiltonian with the same asymptotic mean spectral density as the Riemann zeros, J. Phys. A: Math. Theor. 44, 285203 (2011), and references therein.
[Sierra] G. Sierra, The Riemann zeros as spectrum and the Riemann hypothesis, Symmetry 2019, 11(4), 494, arXiv:1601.01797 [math-ph], and references therein.
[Sredincki] M. Srednicki, Nonclassical Degrees of Freedom in the Riemann Hamiltonian, Phys. Rev. Lett. 107, 100201 (2011).
[Bender] Carl M. Bender, Dorje C. Brody, Markus P. Müller, Hamiltonian for the zeros of the Riemann zeta function, Phys. Rev. Lett. 118, 130201 (2017).
[MLRW] G. Mussardo and A. LeClair, Randomness of Möbius coefficients and Brownian motion: growth of the Mertens function and the Riemann Hypothesis, J. Stat. Mech. (2021) 113106, arXiv:2101.10336, and references therein.
[Gutzwiller] M. Gutzwiller, Stochastic behavior in quantum scattering, Physica 7D (1983), 341.
[Remmen] G. N. Remmen, Amplitudes and the Riemann Zeta Function, Phys. Rev. Lett. 127, 241602 (2021).
[DMS] G. Delfino, G. Mussardo and P. Simonetti, Statistical models with a line of defect, Phys. Lett. B 328 (1994) 123-129; Scattering theory and correlation functions in statistical models with a line of defect, Nucl. Phys. B 432 (1994) 518-550.
[KonikLeClair] R. Konik and A. LeClair, Purely Transmitting Defect Field Theories, Nucl. Phys. B538 (1999) 587, arXiv:hep-th/9703085.
[Corrigan] P. Bowcock, E. Corrigan and C.
Zambon, Classically integrable field theories with defects, International Journal of Modern Physics A 19.supp02 (2004): 82, arXiv:hep-th/0305022, and references therein.
[NonRelativistic0] R. E. Crandall and B. R. Litt, Annals of Physics, 146 (1983) 458.
[NonRelativistic] F. Cooper, A. Khare, U. Sukhatme, Supersymmetry and quantum mechanics, Phys. Rept. 251 (1995) 267-385.
[Grosswald] E. Grosswald and F.J. Schnitzer, A class of modified ζ and L-functions, Pacific Journal of Mathematics, 74 (1978) 375.
[MussardoBook] G. Mussardo, Statistical Field Theory, An Introduction to Exactly Solved Models in Statistical Physics, 2010, Oxford University Press.
[yangyang] C.N. Yang and C.P. Yang, Thermodynamics of a one-dimensional system of bosons with repulsive delta-function interaction, Journal of Mathematical Physics 10, 1115 (1969).
[ZamTBA] Al.B. Zamolodchikov, Thermodynamic Bethe Ansatz in relativistic models: scaling 3-state Potts and Lee-Yang models, Nucl. Phys. B 342 (1990), 695.
[LMDirichlet] A. LeClair and G. Mussardo, Generalized Riemann Hypothesis, Time Series and Normal Distributions, J. Stat. Mech. 2019 023203, arXiv:1809.06158 [math.NT].
[Electrostatic] A. LeClair, An electrostatic depiction of the validity of the Riemann Hypothesis and a formula for the N-th zero at large N, Int. J. Mod. Phys. A28 (2013) 1350151, arXiv:1305.2613.
[FrancaLeClair] G. França and A. LeClair, Transcendental equations satisfied by the individual zeros of Riemann ζ, Dirichlet and modular L-functions, Communications in Number Theory and Physics, Vol. 9, No. 1 (2015), arXiv:1502.06003 (math.NT).
[BL] H. Bohr and E. Landau, Beiträge zur Theorie der Riemannschen Zetafunktion, Math. Ann. 74:1 (1913), 3–30.
[Trudgian] T. Trudgian, An improved upper bound for the argument of the Riemann zeta-function on the critical line, Mathematics of Computation, Volume 81, Number 278, April 2012, Pages 1053–1061.
[Ghaith] J.W. Bober and G. A. Hiary, New computations of the Riemann zeta function on the critical line, Exp. Math. 27 (2018), no. 2, 125–137.
[Hardy] G.H. Hardy, Sur les zéros de la fonction ζ(s) de Riemann, C. R. Acad. Sci. Paris, 158 (1914), 1012–1014.
[Selberg] A. Selberg, Contributions to the theory of the Riemann zeta-function, Arch. Math. Naturvid., 48(5):89–155, 1946.
[Gonek] S. Gonek, Finite Euler products and the Riemann hypothesis, Transactions of the American Mathematical Society 364.4 (2012): 2157-2191.
[Montgomery] H. L. Montgomery, The pair correlation of zeros of the zeta function, Analytic number theory, Proc. Sympos. Pure Math., XXIV, Providence, R.I.: American Mathematical Society, Vol. 24 (1973) 181.
[Odlyzko] A.M. Odlyzko, On the distribution of spacings between zeros of the zeta function, Mathematics of Computation, American Mathematical Society, 48(177), 273 (1987).
[LM2] A. LeClair and G. Mussardo, in preparation.
http://arxiv.org/abs/2307.00357v1
20230701150358
Abstract Orientable Incidence Structure and Algorithms for Finite Bounded Acyclic Categories. II. Data Structure and Fundamental Operations
[ "Yu-Wei Huang" ]
cs.DS
[ "cs.DS", "math.AG", "math.CO" ]
Abstract Orientable Incidence Structure and Algorithms for Finite Bounded Acyclic Categories. II. Data Structure and Fundamental Operations Yu-Wei [email protected] August 1, 2023 =========================================================================================================================================== A data structure for finite bounded acyclic categories has been built, which is useful to encode and manipulate abstract orientable incidence structure. It can be represented as a directed acyclic multigraph with weighted edges, where the weights encode the algebraic structure between edges. The fundamental operations on this data structure are investigated from geometrical, categorical and programming perspectives.
§ INTRODUCTION
In the previous article [huang2023abstract], we introduced the orientable incidence structure, which is a bounded acyclic category with some additional properties. It provides another approach to investigate computer graphics in any dimension. In order to apply the theory to practical use, the computer needs to understand the bounded acyclic category. It's more similar to dealing with graphs than categories; a finite category can be treated as a transitive graph with algebraic structure between edges. Moreover, manipulating finite bounded acyclic categories is simpler than non-acyclic ones, just like manipulating directed acyclic graphs is easier than general graphs. In this article, an efficient data structure for finite bounded acyclic categories is introduced, and fundamental operations on this data structure are developed.
§ DATA STRUCTURE
§.§ Model
The most intuitive way to encode bounded acyclic categories is using a transitive directed multigraph, where nodes indicate objects, and edges indicate morphisms. But this is not enough to describe a category; the additional information required by the category is the composition of morphisms. The computer should also store the multiplication table of all morphisms, but this is a waste of memory because not all morphisms are composable and some cases can be omitted due to transitivity. In the previous article, we showed that a bounded acyclic category is equivalent to the category of upper categories and downward functors, so one can model the category of upper categories instead of the bounded acyclic category, where upper categories are encoded as nodes, and downward functors between them as edges. Since such a category is acyclic, it forms a transitive directed acyclic multigraph. Notice that there is exactly one root node for this directed acyclic multigraph (representing the host bounded acyclic category), so all other nodes (representing upper categories) are directly connected from this root node by exactly one edge (representing downward functors). There may be multiple edges between nodes, and adjacent edges can be composed into an edge that preserves transitivity, whose rule is determined by the downward functors. That means storing all downward functors as edges is not necessary; one can just keep some of them so that other downward functors can be obtained by following along paths.
The objects of an upper category are encoded as a set of symbols on the corresponding node, and the object mappings of downward functors between upper categories are encoded as mappings between those symbols, which are stored as the weights of edges. The downward functor represented by a given path can be constructed by composing mappings along the path. Objects and their mappings are now fully encoded. Morphisms and their mappings are also fully encoded by this information, as explained below. There is no need to encode morphisms of upper categories: in an upper category, initial morphisms are also represented by its objects, and non-initial morphisms can be obtained by applying downward functors to initial morphisms of some upper categories. There is also no need to encode composition of morphisms: consider that a downward functor maps an initial object to a non-initial object, then this downward functor should also represent the initial morphism of this non-initial object. So the mapping representing this downward functor also represents a process of making a composite morphism with the morphism this functor indicates. At the end of the day, the computer just needs to remember objects and the mappings between them. One can visualize this model as a graph-like diagram, which we call a utility pole diagram. Draw each node of the graph as a vertical line, and put symbols as points on it, which indicate objects of an upper category. One should put the initial object at the bottom. Draw each edge of the graph as serial wires connecting points on the nodes, which indicate mappings between objects. The direction of edges should be to the right. For example, the utility pole diagram of a cone, which is composed of seven facets, can be drawn as seven poles connected by wires, see Figure <ref>. We use the curvatures and colors to distinguish edges with the same parent and child nodes. The mappings of composite functors can be obtained by following wires through certain paths. A path of this graph can be indicated by a string of wires ending at the bottom of a pole, which are called base wires. The left endpoint of the base wires of a path indicates the identity of the corresponding functor: if the base wires of two paths start at the same point, they should end at the same node on the right, and the mappings of the paths should be the same. That means the circle formed by the base wires of two paths can be lifted up to any right endpoint. This property is called supportivity; the wires of a path are said to be supported by the base wires (see Figure <ref>).
§.§ Implementation
In this model, a bounded acyclic category can be fully described by all nodes and weighted edges, where nodes represent upper categories, and weighted edges represent downward functors. However, a node cannot fully describe an upper category; instead, all forwardly reachable nodes should be included, which is called an upper subgraph. An upper subgraph also represents a bounded acyclic category, so this model can be encoded as a recursive data structure: define a data type which is composed of a root node and some child BACs, where the root node is connected to the root node of each child BAC by an edge. Written in pseudocode, the list of children is a list of tuples of an edge and a pointer to the corresponding subgraph, where the edge goes from the root node of this graph to the root node of the subgraph. Under this definition, each BAC represents an acyclic directed multigraph, which is defined by unioning smaller graphs recursively.
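As a rough sketch of this recursive definition (the type and field names below are illustrative assumptions, not those of the original listing; the actual Haskell encoding follows in the next subsection), one could write:

    -- Illustrative sketch only: a BAC is a collection of weighted edges to
    -- (implicitly shared) child BACs; the weight of an edge is the mapping
    -- between symbols, i.e. the dictionary described in the text.
    data SketchBAC = SketchBAC
      { children :: [(SketchEdge, SketchBAC)] }

    newtype SketchEdge = SketchEdge
      { mapping :: [(Int, Int)] }  -- child symbol -> parent symbol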
This structure is similar to the recursive definition of a tree, except that the child BACs are shared. But in fact, it really is almost a tree, as explained below. BACs cannot be mutably referenced. Consider a BAC with two different child BACs, whose identities are determined by the above process, and assume that there is a parent BAC such that these two child BACs correspond to the same upper category with respect to that parent. If BACs are mutably referenced, an in-place mutation of one of the child BACs with respect to the first BAC will ruin the structure of the parent BAC, since the two child BACs are now different, which is fine for the first BAC but not for the parent. If BACs are immutably referenced, there is no such problem since child BACs can only be modified by replacement. References to BACs should be implicit. The same subgraphs don't represent the same upper category, but the same upper category should be represented by the same instance of BAC. To determine which BAC indicates which upper category with respect to a given node, one should determine the identity of the corresponding downward functors. If there are two paths whose functors map the initial object to the same object, these two functors should be the same, since the target object (the initial morphism of the target object) represents the identity of the functor. Because the identity of an upper category is determined by downward functors, the addresses of child BACs are meaningless to the corresponding bounded acyclic category. References to nodes should therefore be immutable and implicit, and such a definition of BAC is no different from the recursive definition of a tree except for data manipulation. There are two ways to implement this data structure according to mutability. A bounded acyclic category can be implemented as a normal tree structure, where child BACs are implicitly shared. "Implicitly shared" means that users should not know who shared this BAC with who, since it is meaningless. In this implementation, operations should be carefully designed so that two paths representing the same functor still point to the same BAC. As a consequence, it is preferred to use immutable data structures and functional programming techniques. It can also be implemented as a directed acyclic graph, where child BACs are explicitly shared but encapsulated like implicitly shared BACs. In this implementation, operations should be carefully designed so that BACs will be copied on write when there are multiple paths which represent different functors pointing to this BAC. In this article, the former implementation will be used. Due to the immutable nature of the first implementation, it is suitable to use Haskell to show how it works. The purpose of the code in this work is just a proof of concept, so performance and debuggability are not a concern. The source code will be placed in the repository <https://github.com/worldmaker18349276/bac>. This data structure is implemented as a weighted tree with symbol maps as weights:

    newtype Tree e = Tree (Map e (Tree e)) deriving (Eq, Ord, Show)
    type BAC = Tree Dict
    type Dict = Map Symbol Symbol
    type Symbol = Natural

The instances of BAC, called nodes, represent bounded acyclic categories, whose objects are marked by type |Symbol|. Some nodes are implicitly shared, but due to referential transparency, users cannot know the physical identity of the node, so one can only say that they are exactly the same. Symbols on a node are not included in the data structure since they can be determined by the weights of edges.
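To make the encoding concrete, here is a small hypothetical value of this type (an assumed example, not taken from the paper): a root node with a single child, where the edge's dictionary sends the child's base symbol 0 to the symbol 1 on the root.

    import qualified Data.Map as Map
    import Data.Map (Map)
    import Numeric.Natural (Natural)

    -- definitions repeated from the text so the snippet is self-contained
    newtype Tree e = Tree (Map e (Tree e)) deriving (Eq, Ord, Show)
    type BAC = Tree Dict
    type Dict = Map Symbol Symbol
    type Symbol = Natural

    -- a leaf node: no outgoing edges, so only the base symbol 0 is valid on it
    leaf :: BAC
    leaf = Tree Map.empty

    -- a root node with one edge; its dictionary maps the child's base symbol 0
    -- to the symbol 1 on the root, so the root's valid symbols are 0 and 1
    example :: BAC
    example = Tree (Map.singleton (Map.fromList [(0, 1)]) leaf)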
The symbol list on a node can be obtained by a helper function; there must be a special symbol representing the initial object of the category. There is no symbol representing the terminal object since it doesn't provide any information. Also, in this data structure dealing with non-terminal morphisms and terminal morphisms is very different. A type |Arrow| is defined as a structure of a dictionary and the target BAC, representing a local embedding between categories:

    data Arrow = Arrow {dict :: Dict, target :: BAC} deriving (Eq, Ord, Show)

    edges :: BAC -> [Arrow]
    edges (Tree m) = fmap (uncurry Arrow) (Map.toList m)

The edges of a node can be obtained by the function |edges|, and they are represented by the type |Arrow|. The weight of an edge, called a dictionary, is a non-empty mapping from the symbols on the child node to the symbols on the parent node, representing the mapping of the objects. It is a non-empty map because there must be a base symbol as a key. In a node, the dictionaries of its edges must cover all symbols except the base symbol, so the valid symbols can be determined by the weights of outgoing edges. A path of the tree, which is a sequence of connected edges, also has a derived dictionary, which can be obtained by concatenating all dictionaries along the path. A path from a node to itself by following no edge is called a null path. A null path possesses a trivial dictionary, which is the identity mapping of the symbols of the node. A null path is said to be an improper path. Dictionaries of paths together with the target nodes represent downward functors between bounded acyclic categories. It is called an arrow in the program, which can also be represented by the type |Arrow|. An arrow represents a downward functor from the category of the child node to the category of the parent node, where the mapping of objects is represented by the dictionary of the arrow, and the mapping of morphisms can be determined by further analysis of the structure of the child node. As a downward functor, the |target| field represents an upper category constructed under the category of the parent node. An arrow can also represent an initial morphism in the category of the parent node, where the |target| field represents the target object of this morphism. There is a one-to-one correspondence between objects and initial morphisms, so it also represents an object in the category, in which the symbol that refers to the object is the symbol to which the dictionary maps the base symbol, and all other values of the dictionary represent all descendant objects of this object. Two arrows starting at the same node should correspond to the same functor if their dictionaries map the base symbol to the same symbol. In this case, the target nodes should be implicitly shared or exactly the same. Note that it's not true the other way around; the fact that two arrows have the same target node does not mean they correspond to the same functor. In the program, arrows are more useful than symbols because they contain more information. Some algebraic operations can be performed on arrows by utilizing this information. The arrow of a null path, which represents the identity functor, can be obtained directly. The arrow of a path can be obtained by joining all edges along the path, which is implemented as a function. One can explore all upper categories under a given category using these functions, which is equivalent to exploring all objects of this category.
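The helper functions referred to above are not spelled out here; as a minimal sketch of what they could look like (an illustration assuming the |Arrow| and |BAC| definitions above, not the library's actual implementation), joining two connected arrows amounts to composing their dictionaries and keeping the far target, and the identity arrow of a node maps every symbol to itself:

    import qualified Data.Map as Map

    -- sketch: compose a parent->child arrow with a child->grandchild arrow;
    -- every value of `dict arr23` is a symbol on the child node, which
    -- `dict arr12` then translates to a symbol on the parent node
    joinArrows :: Arrow -> Arrow -> Arrow
    joinArrows arr12 arr23 = Arrow
      { dict   = Map.map (dict arr12 Map.!) (dict arr23)
      , target = target arr23
      }

    -- sketch: the arrow of a null path is the identity mapping on the node's symbols
    identityArrow :: BAC -> Arrow
    identityArrow node = Arrow
      { dict   = Map.fromList [(s, s) | s <- symbolsOf node]
      , target = node
      }
      where
        symbolsOf n = 0 : concatMap (Map.elems . dict) (edges n)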
Conversely, some arrows starting at the same node are divisible, and the result can be obtained by a division function: dividing the dividend |arr13| by the divisor |arr12| yields the set of arrows |arr23| such that |join arr12 arr23 = arr13|. As a node of a tree, an arrow knows whether a symbol refers to its descendant, which is implemented as a function. The upper category specified by a given symbol can be obtained by tracing the symbol with an arrow. Conversely, the symbol specifying a given arrow can be obtained with a corresponding function, which also indicates the identity of the given arrow. In a category represented by a node, an object can be specified by a symbol, and a morphism can be specified by a tuple of symbols: the source object of the morphism is represented by the first symbol, and it should be a valid symbol in this node; the target object of the morphism is represented by the second symbol, and it should be a valid symbol in the node referenced by the first symbol. The object specified by a symbol can also be represented by an arrow. Also, the morphism specified by two symbols can be represented by a tuple of connected arrows, which corresponds to two connected downward functors. The conversions between the two representations are implemented as a pair of functions. In another viewpoint, a symbol indicates a 1-chain of the host category, and a tuple of symbols indicates a 2-chain; moreover, a sequence of n symbols indicates an n-chain. To conclude the above descriptions, there are three laws for a BAC:
* totality: the dictionary of an edge should be a mapping from all valid symbols on the child node to valid symbols on the parent node.
* surjectivity: all valid symbols should be covered by the dictionaries of outgoing edges, except the base symbol.
* supportivity: if dictionaries of two given paths with the same starting node map the base symbol to the same symbol, then they should have the same dictionary and target node. Note that null paths also count.
A valid bounded acyclic category should satisfy these laws, and they immediately derive two laws:
* The dictionary of a proper path cannot map a symbol to the base symbol, otherwise the target node should be the starting node.
* The dictionary of a proper path should map the base symbol to a unique symbol compared with other values in this dictionary, otherwise the target node would have itself as a descendant.
The essence of recursive types is folding. To traverse objects of a bounded acyclic category, or equivalently, to traverse upper categories of a bounded acyclic category, all descendant nodes should be visited via arrows, which is implemented as a fold function. While folding, identical nodes (from the root's perspective) should only be visited once, which is important for non-purely functional programming languages (which support side effects without explicit types), otherwise it is possible to obtain inconsistent results with identical inputs. Users may more commonly fold data only under a certain symbol (visit only ancestor nodes of the node referenced by this symbol). In this situation, the relative location of a node should be marked. It is defined as a variant function, which only visits the nodes that can reach the given symbol. The fold method can also be used to modify data structures. Convenient specialized functions, such as the |modifyUnder| used below, handle how edges should be modified and put back into nodes.
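As an illustration of how the first law and the two derived laws can be checked mechanically (a hedged sketch assuming the definitions above, not code from the repository; the supportivity law is omitted since it requires comparing the composed dictionaries of all paths ending at the same symbol):

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    checkNode :: BAC -> Bool
    checkNode node =
         all edgeOk (edges node)
      && all (checkNode . target) (edges node)
      where
        -- the valid symbols of a node: the base symbol 0 plus all dictionary values
        symbolsOf n = Set.insert 0 (Set.fromList (concatMap (Map.elems . dict) (edges n)))

        edgeOk e =
          let d = dict e
          in -- totality: the dictionary covers exactly the child's valid symbols
             Map.keysSet d == symbolsOf (target e)
             -- a proper path never maps a symbol to the base symbol
          && notElem 0 (Map.elems d)
             -- the image of the base symbol is distinct from all other values
          && notElem (d Map.! 0) (Map.elems (Map.delete 0 d))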
These specialized functions allow us to edit data in the following form:

    dosomething :: BAC -> Maybe BAC
    dosomething node = do
      -- check if this operation is valid
      guard ...
      -- edit the boundary node
      let res0 = ...
      -- edit edges under the boundary node
      fromReachable res0 node & modifyUnder src \(curr, edge) -> \case
        -- if the edge points to an outer node
        AtOuter -> ...
        -- if the edge points to the boundary node
        AtBoundary -> ...
        -- if the edge points to an inner node, which has been modified to `res`
        AtInner res -> ...

With those utility functions, it should be easy to manipulate a BAC. Below we will discuss how to edit a BAC from geometrical, categorical and programming perspectives.
§ FUNDAMENTAL OPERATIONS
The data structure BAC can be used to encode the incidence structures of geometric objects. In the BAC, symbols on the root node represent geometric objects, and arrows represent incidence relations between geometric objects. This implementation does not encode any meaning of arrows, only the relation and composition between arrows. Users should associate arrows and incidence relations outside of this data structure. Below we will focus on algebraic properties of incidence relations. In order to manipulate geometric objects in the form of bounded acyclic categories, all geometric operations need to be rewritten. To simplify the whole workflow, one needs to develop a set of basic operations, such as boolean operations. Even for the intersection operation, the actions at the categorical level are complicated. For example, to put two facets together, not only the relationship between the two facets is needed; the relationships between their subfacets, and between subfacets of subfacets, are also needed. So we need to further decompose boolean operations into more fundamental operations on categories. These operations may not make sense from a geometrical perspective, but they are critical at the categorical level. Let's discuss how to intersect two balls: we just need to slice a ball by a sphere, then remove the unwanted part. Before we start slicing, how did the ball come into our world? Moreover, how did the world begin? We need a simplest polytope as our starting point to construct all kinds of polytopes. Such a polytope is called "nullitope", which is a geometric object of nothing. Then we need a method to add a ball into this polytope. This operation is called "introduce". Now we want to slice this ball by a sphere, which should result in a UFO shape. But before that, we need to compute the intersection of their surfaces, which is a circle. We then can use "introduce" to bring the circle into our world, and claim that this circle is covered by those two spheres. This operation is called "incident". Now we have two spheres with a circle on them, but wait, aren't they disconnected? Each should be separated into a cap and a cup, so we need an operation called "disconnect". Now we have two caps to build our UFO shell, but how do we get rid of those two cups? To do that, we need "remove" to clean them up. OK, now there are enough tools to intersect two balls; let's describe it step-by-step.
* Prepare an empty space by "nullitope".
* Prepare a ball.
* Create an infinite 3D space by "introduce".
* Create a sphere with radius and centered at by "introduce", labeled as .
* Claim that the sphere is covered by this 3D space using "incident".
* Separate inner space and outer space by the sphere as two disconnected components using "disconnect".
* Remove the outer space by "remove", and label the remaining one as .
* Create a sphere with radius and centered at by “introduce”, labeled as . * Cut the ball by this sphere . * Compute the intersection between shell and , which is a circle. * Bring this circle into the world by “introduce”, labeled as . * Claim that the circle is covered by spheres and using “incident”. * Separate spheres and into cups and caps by “disconnect”, labeled as , , , separately. * Remove the cup by “remove”. * Claim that the cap is covered by volume using “incident”. * Separate volume into two part using “disconnect”, labeled as , . * Remove unwanted part. * Remove outer volume by “remove”. * Remove cup by “remove”. Those operations have corresponding categorical meanings, which provide another perspective to investigate fundamental operations. Now let's discuss how to define those operations. emptysingleton §.§ Empty/Singleton “Nullitope” is the simplest polytope, which is a geometric object of nothing. It is just like an empty workspace after launching GeoGebra 3D calculator. In categorical perspective, it has only two objects: the initial object and the terminal object. This category is called empty bounded acyclic category. The empty bounded acyclic category has only one non-degenerated morphism chain, and it is locally-embedded into any nondecomposable morphism of a bounded acyclic category. In program, it is implemented as a function , which is a node without children. “Singleton” is the second simplest polytope, which represents a geometric object without boundary. It is just like an primitive shape in the GeoGebra 3D calculator, such as a sphere, an infinite plane or a circle. In categorical perspective, it has only one proper object. This category is called singleton bounded acyclic category. In program, it is implemented as a function , which is a node with only one child. merge-categories §.§ Merge Categories “Introduce” bring a facet into the universe, just like we can simply drop a sphere, a cube or a cylinder into the workspace. For example, the process of intersecting two surfaces is done by adding intersected line and claiming the incidence relations between them. After adding intersected line, the computer doesn't know the relation between this line and those surfaces. This doesn't break the categorical structure, so there is no need to add incidence relations at the same time. In categorical perspective, it merges two categories into one, where the initial object and the terminal object are merged, and others proper objects are considered disjoint. In program, it corresponds to merging trees at the root. All symbols in the root nodes are unioned except the base symbols, which will be merged into one base symbol. It is implemented as a function with a parameter |nodes :: [BAC]|, representing the categories to merge. remove-a-terminal-morphism §.§ Remove a Terminal Morphism “Remove” removes a facet out of the universe. The subfacets of removed facet will not be removed. For example, if a square is removed, its borders are not removed along with it. The removed facet cannot be covered by other facets (not a subfacet of anther facet). In categorical perspective, it removes a nondecomposable terminal morphism of the given object, and as a consequence, the morphisms pointing to this object are also removed. In program, this process corresponds to removing a leaf node specified by a symbol. Since it is a leaf node, there is only one symbol (the base symbol) on this node. 
All wires that connect to the base point of removed node should be removed, so all passing points will also be removed. Information related to those wires will be lost (see Figure <ref>). It is implemented as a function with a parameter |tgt ::Symbol|, indicating the object to remove. The object specified by the symbol should have nondecomposable terminal morphism. remove-a-non-terminal-morphism §.§ Remove a Non-Terminal Morphism In categorical perspective, “remove” removes a nondecomposable terminal morphism. It is natural to generalize it to any nondecomposable morphism. To remove a nondecomposable morphism, all composition rules in the multiplication table related to this morphism will also be removed. For example, to remove morphism ϕ, the rule ϕ∘ϕ' = ψ for any possible ϕ' and ψ will also be removed. However, removing decomposable morphism is not simple; there are multiple ways to remove a decomposable morphism consistently. For example, to remove morphism ϕ while there is a composition rule ϕ = ϕ_1 ∘ϕ_2, one of morphism ϕ_1 or ϕ_2 should also be removed: one can remove ϕ_1 then ϕ, remove ϕ_2 then ϕ, or remove both ϕ_1 and ϕ_2 then ϕ. Two coherent choices are: removing all prefix ones, or removing all suffix ones. This process can be decomposed into multiple steps of removing nondecomposable morphisms. Here only nondecomposable morphisms are concerned. In geometrical perspective, it is called “unincident” because it corresponds to removing an incidence relation. Removing an initial morphism leads to the target object being removed, which corresponds to removing a subfacet of some facets, and such subfacet should be boundaryless. It is the same as “remove” if it is not a subfacet of any facet. Removing a non-initial morphism can be understood as removing a subfacet of some facets from a vertex figure, which results in “unincident” some facets. Removing a subfacet can lead to some ill geometric shapes. For example, it is nonsense to remove the boundary circle of a disk. If the subfacet to be removed has a 2-dimension difference from the facet, such as a point covered by a plane, it is usually fine, otherwise it should be carefully dealt with. For example, a point on the circle can be safely removed, but removing the endpoint of a line segment leads to an ill geometric object. In program, this process removes a symbol on a node, and keep all other unrelated wires unchanged. Where “keep all other unrelated wires unchanged” means that only the outgoing and incoming edges of this node will be modified. Consider a base wire passing through but not ending at this point, the starting point of this wire should not change after this process. It is a minimal operation, since only a small part of this data structure is changed. By removing a symbol on a node, the adjacent wires will also be removed. It's fine to remove incoming wires, but invalid for removing outgoing wires, except when outgoing wires is connected to the base point, since now the entire outgoing edge should be removed (see Figure <ref>). Symbols connected only to the base points in the right are said to be nondecomposable, which represent nondecomposable initial morphisms. Above discussion shows only nondecomposable symbols can be removed. All wires that crossed the removed wires should find alternative paths. For all parent nodes of the source node of the removed edge, one only need to check base wires connected to the target node of the removed edge (see left one of Figure <ref>). 
Similarly, for all child nodes of the target node of the removed edge, one only need to check base wires connected to this child node (see right one of Figure <ref>). It is equivalent to: surjectivity of dictionaries should not be violated after removing this edge. It is implemented as a function with a parameter |(src, tgt) :: (Symbol, Symbol)|, indicating the morphism to remove, which should be nondecomposable. The decomposability of a morphism can be checked by the function . add-a-non-terminal-morphism §.§ Add a Non-Terminal Morphism “Incident” is a method to claim the incidence relations between two facets. This method is the reverse of “unincident”. It is just like attach a point to a surface in the GeoGebra 3D calculator, but here we don't change the position of the point. Notice that “a line segment is covered by a plane” implies “the endpoints of this line segment are also covered by this plane”. To claim A is covered by B, all objects covered by A should be already covered by B. In this example, we need to claim “the endpoints are covered by this plane” first. But to do that, all subfacets of endpoints, which is the null face, should be covered by this plane, and this is already true. In the opposite, “a point is covered by an edge of a square” implies “this point is also covered by this square”. To claim A is covered by B, all objects that cover B should already cover A. In this example, we need to claim “this point is covered by this square” first. But to do that, all superfacet of this square, which is the universe, should cover this point, and this is already true. In some case, it is not enough to claim incidence relations by just saying “A is covered by B”. For example, the dried persimmon shape described by equation z^2 = (1-x^2-y^2)(x^2+y^2)^2 is a sphere but north and south poles are pinched together, so the center point is covered by this surface in two directions. Consider a curved line segment starting at the center point, to claim “this segment is covered by this dried persimmon shape”, it is also necessary to specify “which side of this shape”. In the vertex figure of the center point, the problem becomes that “this point is covered by which one of two circles” (see right sphere of Figure <ref>). In the opposite, consider a curved line segment on this dried persimmon shape, to claim “this segment covers the center point”, it is also necessary to specify “which side of this shape”. In the face figure of this shape, the problem becomes that “this segment covers which one of two points” (see left sphere of Figure <ref>). In categorical perspective, this process just add a non-terminal morphism into this category between two given objects. Assume we are adding a morphism ϕ : F → S, and if there exists another morphism ϕ' : P → F but no morphism such that ? : P → S, we will also need to add another morphism ϕ” : P → S such that ϕ” = ϕ∘ϕ', otherwise it will break the closure property of category; it is the first point we discussed above. Similarly, if there exists another morphism ϕ' : S → V but no morphism such that ? : F → V, we will also need to add another morphism ϕ” : F → V such that ϕ” = ϕ' ∘ϕ; it is the second point we discussed above. So we should add those morphisms first, and it is possible by recursively repeating it down/up to the initial/terminal object. After confirming that closure property can be retained, we need to expand the multiplication table of morphisms. 
We only need to provide how to compose nondecomposable morphisms, since other cases can always be decomposed into this case. Even though this process can be automatically done in the most case, but for non-trivial case it becomes necessary. The choices of new composition rules should agree with the existing equivalence relations: if we add a morphism ϕ : S → F and claim relations ϕ_1' = ϕ∘ϕ_1 and ϕ_2 = ϕ_2' ∘ϕ, where ϕ_1 : P → S, ϕ_2 : S → V, ϕ_1' : P → F, ϕ_2' : F → V, the equivalence relation ϕ_2 ∘ϕ_1 = ϕ_2' ∘ϕ_1' should already exist. Also, if we claim that ϕ_1' = ϕ∘ϕ_1 and ϕ_2' = ϕ∘ϕ_2, where ϕ_1 : P_1 → S and ϕ_2 : P_2 → S, and there exists two morphisms ψ_1 : V → P_1 and ψ_2 : V → P_2, such that ϕ_1 ∘ψ_1 = ϕ_2 ∘ψ_2, then the equivalence relation ϕ_1' ∘ψ_1 = ϕ_2' ∘ψ_2 should already exists. A dual version should also hold. They can be concluded into three laws: enumi. * ϕ_R' = ϕ∘' ϕ_R ϕ_L' = ϕ_L ∘' ϕϕ_L' ∘ϕ_R = ϕ_L ∘ϕ_R'. * ϕ_R1' = ϕ∘' ϕ_R1ϕ_R2' = ϕ∘' ϕ_R2ϕ_R1∘ζ_R1 = ϕ_R2∘ζ_R2ϕ_R1' ∘ζ_R1 = ϕ_R2' ∘ζ_R2. * ϕ_L1' = ϕ_L1∘' ϕϕ_L2' = ϕ_L2∘' ϕζ_L1∘ϕ_L1 = ζ_L2∘ϕ_L2ζ_L1∘ϕ_L1' = ζ_L2∘ϕ_L2'. Where ∘' is the composition related to ϕ, which decides how to compose the new morphism with old one: enumi. * For ϕ_R : S' → S, ϕ_R' = ϕ∘' ϕ_R is a morphism ϕ_R' : S' → F. * For ϕ_L : F → F', ϕ_L' = ϕ_L ∘' ϕ is a morphism ϕ_L' : S → F'. One only need to define the composition rule for all nondecomposable ones of ϕ_R and ϕ_L. There are multiple choices of ϕ_R' : S' → F as a result of ϕ∘' ϕ_R, and each choice can be denoted as a tuple (ϕ_R, ϕ_R') called a coangle. Similarly, a choice of the composition rule in another direction can be denoted as a tuple (ϕ_L, ϕ_L') called an angle. Angles and coangles form a vertex set, where they are grouped by ϕ_R and ϕ_L. To make a composition rule, one should choose a vertex for each group. The third constraint with ϕ_L1 = ϕ_L2 excludes some angles. A fork of a morphism ϕ is a pair of distinct morphisms ψ, ψ' such that ψ∘ϕ = ψ' ∘ϕ. A angle (ϕ_L, ϕ_L') is valid if forks of the morphism ϕ_L are also forks of the morphism ϕ_L'. Similarly, some coangles are excluded by the second constraint. The first constraint limits how to choose ϕ_L' and ϕ_R' for each pair of ϕ_L and ϕ_R. For a valid choice of pair (ϕ_L', ϕ_R'), we draw an edge between the angle (ϕ_L, ϕ_L') and the coangle (ϕ_R, ϕ_R'). Such construction makes the induced subgraph of vertex groups ϕ_L and ϕ_R forms a biclique cover (disjoint union of complete bipartite graphs). The second constraint also limits how to choose ϕ_R1' and ϕ_R2' for each pair of ϕ_R1 and ϕ_R2. Define pseudo-equalizer between ϕ_R1 and ϕ_R2, which is a pair of morphisms (ψ, ψ') such that ϕ_R1∘ψ = ϕ_R2∘ψ'. Two coangles (ϕ_R1, ϕ_R1') and (ϕ_R2, ϕ_R2') are compatible if all pseudo-equalizers between ϕ_R1 and ϕ_R2 are also pseudo-equalizers between ϕ_R1' and ϕ_R2'. Note that one only need to check all minimal pseudo-equalizers between ϕ_R1 and ϕ_R2. For a valid choice of pair (ϕ_R1', ϕ_R2'), we draw an edge between coangles (ϕ_R1, ϕ_R1') and (ϕ_R2, ϕ_R2'), and the induced subgraph also forms a biclique cover. The same analysis can be applied to the third constraint. Finally, it leads to a graph problem: find a complete subgraph of a semi-complete multipartite graph by selecting a vertex in each group. A semi-complete multipartite graph is a multipartite graph where the induced subgraph of each pair of groups is a biclique cover. 
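Independently of BAC itself, the core of this selection problem can be sketched with a small brute-force search: pick one vertex from each group and keep only the selections in which every chosen pair is compatible. The function below is only an illustration of the combinatorial problem; in the library, the groups would be the picklists of angles and coangles, and the compatibility predicate would encode the constraints described above:

  -- enumerate all ways to pick one element per group such that
  -- every pair of picked elements satisfies the compatibility predicate
  selections :: (a -> a -> Bool) -> [[a]] -> [[a]]
  selections compatible = foldr step [[]]
    where
      step group partial =
        [ v : chosen | chosen <- partial, v <- group, all (compatible v) chosen ]

For example, selections (\x y -> x /= y) [[1,2],[2,3]] returns exactly the choices in which the two picked numbers differ. As the next paragraph notes, in practice one rarely needs the full enumeration.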
In the practical application of geometry, it is not necessary to actually find all possible solutions of this problem, since selecting are done by geometric calculations. This constraint can be used to check if the geometric calculations are conflicting, or reduce the amount of calculations by some strategies: one can utilize information theory to estimate the entropy of selection event for each group, and calculate the largest one first. In program, this process adds a symbol to a node, and keep all other unrelated wires unchanged. The wires connected to this point should also be added. It's fine to add an incoming wire, but it is impossible to add an outgoing wire to an existing outgoing edge. That means the added symbol should be nondecomposable, and a new edge should be added such that the base wire is connected to this point (see Figure <ref>). The target node of the added edge may be unreachable from the given node. One should notice that the added edge shouldn't form a directed loop in the graph. Each added wire can be determined by following process: for any pair of connected edges in which one is the added edge, a base wire of this path should be specified so that a wire can be added like Figure <ref>. Some wires are determined by supportivity. Such choice is not arbitrary, supportivity must still hold after adding wires. For a pair of connected edges in which the second is the added edge (left one of Figure <ref>), its base wire (lower purple line) together with two wires form a triangle, and all added outgoing wires (blue dashed lines) should be supported by this triangle. Different triangles must agree with each other. Similarly, for a pair of connected edges in which the first is the added edge (right one of Figure <ref>), its base wire (lower green line) together with two wires form a triangle, and some added outgoing wires (blue dashed lines) should be supported by this triangle. Different triangles should agree with each other. Moreover, if there is a circle formed by two base wires of the target node (see Figure <ref>), a pair of corresponding added wires should be supported by such circle. The possible choices can be obtained by a helper function , which returns two groups of picklists. The second group contains picklists of angles, which are used to determine the outgoing wires (right one of Figure <ref>). The first group contains picklists of coangles, which are used to determine the incoming wires (left one of Figure <ref>). User should select one of angle or coangle for each picklist. This does not guarantee a valid choice. A valid choice of angles and coangles can be checked by functions , and . The process of adding a non-terminal morphism is implemented as a function with parameters |src, tgt, sym ::Symbol| and |src_alts :: [Coangle]| and |tgt_alts :: [Angle]|. and indicate source object and target object of the added morphism, and will indicate the added morphism. is the list of picked coangles, and is the list of picked angles. add-a-terminal-morphism §.§ Add a Terminal Morphism Above we discuss how to add a non-terminal morphism for two reachable objects. It is not the complete inverse of “remove a morphism” because there are two special cases: removing an initial or terminal morphism will result in the object being removed, which also drop a lot of information, so that cannot then be directly added back. The reverse processes of these two cases cannot be done by this method. 
In categorical perspective, to add a terminal morphism ϕ : F →𝕌, one should also add an object F and its incoming morphisms. An incoming morphism, say ψ : S → F, and the added terminal morphism ϕ should be composable, and the result, denoted as ϕ∘' ψ, should be assigned to an unique terminal morphism ζ : S →𝕌. The compositions between added incoming morphisms ψ : S → F and existing morphisms, say ξ : V → S, also need to be given, the result, denoted as ψ∘”ξ, should be assigned to another incoming morphism ψ' : V → F. Also, the new composition rules ∘” and ∘' should respect to original ones: for two compositions ψ' = ψ∘”ξ and ζ = ϕ∘' ψ, there is ζ' = ϕ∘' ψ' for ζ' = ζ∘ξ. Consider an added incoming morphism ψ : S → F, for any two parallel incoming morphisms ξ, ξ' : V → S, one should decide ψ∘”ξ and ψ∘”ξ' should be equivalent or not. As a reverse process of removing terminal morphism, such information has been dropped by removing terminal morphism. To minimize input information, we choose the most trivial rule: let ψ∘”ξ and ψ∘”ξ' be assigned to different morphisms if ξ≠ξ'. Where we further assume that the source objects of all added incoming morphisms have a unique maximum S, so that all other incoming morphisms can be unqiuely derived by the incoming morphism ψ : S → F. In this setup, the new composition rules already respect to original ones. This operation can also be seen as inserting an object in the middle of a terminal morphism, such that it can be decomposed into two added morphisms. Utilizing this simplified operation, the reverse process of removing a terminal morphism now can be built. First, insert an object in the middle of a terminal morphism, which will not introduce additional structure. Second, repeat the first step, and merge all added objects by merging their initial morphisms (see merge-non-terminal-morphismsMerge Non-Terminal Morphisms), so that all incoming morphisms of all added objects are disjointly unioned together. And third, merge some incoming morphisms of added objects (see merge-non-terminal-morphismsMerge Non-Terminal Morphisms), so that the additional structure can be encoded. In geometrical perspective, it corresponds to “add a trivial facet”, which is a reverse process of “removing a trivial facet”. A facet is said to be trivial if it has no superfacet and has only one direct subfacet, and the incidence relation is directly inherited from this subfacet. This is like extruding a facet, which make a superfacet on top of it. No subfacet will be glued after extruding (no new incidence relation will be created). In program, it is a process to add a leaf node to a node. A symbol is also added to the node, which is connected to the added leaf node. The wires to the leaf node should also be added for every ancestor nodes, which should have the same shape to the base wires of the node (see Figure <ref>). It is implemented as a function with parameters |src, sym ::Symbol| and |inserter :: (Symbol, Symbol) -Symbol|. indicates an object whose terminal morphism will be interpolated, and will indicate the only morphism from such object to the inserted object. For all incoming morphisms of the object , say , the pair of symbol will indicate the incoming morphism of the inserted object with the same source object. add-an-initial-morphism §.§ Add an Initial Morphism The opposite of above operation is to add an initial morphism. To add an initial morphism ϕ : ∅→ F, one should also add an object F and its outgoing morphisms. 
An outgoing morphism, say ψ : F → S, and the added initial morphism ϕ should be composable, and the result, denoted as ψ∘' ϕ, should be assigned to a unique initial morphism ζ : ∅→ S. The compositions between added outgoing morphisms ϕ : F → S and existing morphisms, say ζ : S → V, also need to be given, the result, denoted as ξ∘”ψ, should be assigned to another outgoing morphism ψ' : F → V. Also, the new composition rules ∘” and ∘' should respect to original ones: for two compositions ψ' = ξ∘”ψ and ζ = ψ∘' ϕ, there is ζ' = ψ' ∘' ϕ for ζ' = ξ∘ζ. Similarly, we only consider the most trivial case. There is a minimal object S such that the added outgoing morphism ψ : F → S uniquely determines the other outgoing morphisms by composition: ξ∘”ψ and ξ' ∘”ψ are different outgoing morphisms iff ξ≠ξ'. This operation can also be seen as inserting an object in the middle of an initial morphism, such that it can be decomposed into two added morphisms. The reverse process of removing an initial morphism now can be built. First, insert an object in the middle of an initial morphism. Second, repeat the first step, and merge all added objects into one (see merge-terminal-morphismsMerge Terminal Morphisms), so that all outgoing morphisms of all added objects are disjointly unioned together. And third, merge some outgoing morphisms of added objects (see merge-non-terminal-morphismsMerge Non-Terminal Morphisms), so that the additional structure can be encoded. In geometrical perspective, it creates an boundaryless subfacet on a facet, for example, creates a point on a plane, or creates a circle in a box. The created subfacet is placed at the inner of the facet, so that the incidence relations to its superfacets are directly inherited from it. In program, it is a process that add a node in the middle of an arrow started at the root (see Figure <ref>). The inserted node has only one edge, and its dictionary is one-to-one, so there is no additional structure. A corresponding symbol should be added to the root node, which connects to the base point of the added node. It is implemented as a function with parameters |tgt, sym ::Symbol| and |mapping ::Dict|, where indicates the initial morphism to be interpolated, and will indicate the incoming morphism of the added object, and is the dictionary of the outgoing morphism of the added object. split-a-non-terminal-morphism §.§ Split a Non-Terminal Morphism “Disconnect” is used to separate disconnected components of a disconnected facet, such ill facet may be produced after claiming incidence relation. For example, after claim that a circle is covered by a sphere, the sphere becomes disconnected: the sphere has been cut into a cap and a cup. Local connectedness can also be manipulated by this way. The connectedness of a facet is manifested by identity of morphisms, so in categorical perspective, it corresponds to splitting a morphism into multiple parts, and each part represents each connected component. After splitting the target morphism, one should also figure out how to modify the multiplication table. The composition of the target morphism and other morphisms will be duplicated for each splitted one. For the composition that result in the target morphism, things are complicated. We need to find all morphism chains of the target morphism, and determine which chain will compose which splitted morphism. For example, a sphere with a circle on it can be splitted into a cap and a cup; one covers this circle in positively-oriented way, another in negatively-oriented way. 
Splitting the maximum chains means breaking equivalence relations, which is not always possible; one should find a way to consistently break equivalence relations. All equivalence relations to be broken should be a minimal equivalence relation: for equivalence relation ϕ_1 ∘…∘ϕ_n = ϕ_1' ∘…∘ϕ_t' there is no non-trivial subpath decomposition (ϕ_1 ∘…∘ϕ_m) ∘ (ϕ_m+1∘…∘ϕ_n) = (ϕ_1' ∘…∘ϕ_s') ∘ (ϕ_s+1' ∘…∘ϕ_t') such that ϕ_1 ∘…∘ϕ_m = ϕ_1' ∘…∘ϕ_s' and ϕ_m+1∘…∘ϕ_n = ϕ_s+1' ∘…∘ϕ_t'. If one try to break some relations aren't minimal equivalence relations, one of sub-relations should also be broken. In this case, one should break this sub-relation first in the same way. So it is limited that broken equivalence relations should be minimal. The constraint of minimal equivalence relations just corresponds to the condition of linked objects of the section category of the target morphism, in this point of view, this process just separates out splittable parts in a category. In program, this process splits a symbol on a node into two, and keep all other unrelated wires unchanged. Focus on the wires passing through the point to be splitted, the incoming wire should be splitted in two, and the outgoing wire should choose to connect to one of the splitted points (see left one of Figure <ref>). There is a special case: if outgoing wire directly connects to a base point, this edge will be duplicated so that there are two base wires that connect to two splitted points respectively (see right one of Figure <ref>). The new dictionaries of incoming edges should map two splitted symbols to the same symbol as before, and the new dictionaries of outgoing edges should maps to one of the splitted symbols: it should be decided where the mapping to the old symbol should now maps to, and such choices aren't arbitrary. Considering two outgoing wires starting at the point to be splitted, they should be modified to start at the same splitted symbol if they are supported by a circle of two base wires (see Figure <ref>). Since such base wires are not modified, the supportivity ensures that they should start at the same point. This constraint is the same as the condition of linked objects. A function is defined to determine splittable parts, which has a parameter |tgt ::Symbol|, indicating the morphism to partition. This function searches for all 3-chains of given morphism, in which the first and the third arrows are edges. For each 3-chain, its two direct subchains indicate two linked minimal and maximal objects. The return value is a list of groups of tuples of symbols, each group contains all symbols on child nodes that should be mapped to the same splitted symbol. The main process is implemented as a function , which has two parameters |(src, tgt) :: (Symbol, Symbol)| and |partition :: [(Symbol, [(Symbol, Symbol)])]|. for all indicates a splitted morphism, and the morphism chains in the group will compose to this morphism. An splitted morphism which no morphism chains compose to is nondecomposable. split-a-terminal-morphismcategory §.§ Split a Terminal Morphism/Category Splitting a non-terminal morphism corresponds to disconnect a facet or a vertex figure, while splitting a terminal morphism, which leads to the source object being splitted, corresponds to split a facet from two sides, so we call this process “split”. For example, a point covered by two segments in two directions can be splitted into two points, so that they are covered by two segments respectively. 
It is useful when one needs to separate two disconnected facets with shared boundaries. Complex shared boundaries can be splitted from up to down using “split”. For example, to split two sticked cubes, one needs to split the shared facet first, then split shared edges and shared vertices. After that, one can split this category into two, and rotating or moving them respectively now works. Splitting the terminal morphism of the initial object is a special case, which is just splitting a category into multiple categories. this is the reverse process of merging categories. In categorical perspective, splitting a terminal morphism ϕ : F →𝕌 into N parts is special since the object F is also splitted. Such process can be further decomposed into: duplicate all incoming morphisms of object F, then split initial morphism ϕ' : ∅→ F, terminal morphism ϕ : F →𝕌 and object F simultaneously. To duplicate all incoming morphisms, one should start with minimal ones, so that longer ones can be duplicated coherently. Denote the duplications of an incoming morphism ψ' : P → F as (ψ_n' : P → F)_n = 1 ∼ N. “Duplicate” means they behave the same up to indices, that is, ξ∘ψ_n' = ξ∘ψ_m' for all ξ : F → G and ψ_n' ∘ζ = ψ_n”ψ_m' ∘ζ = ψ_m” for all ζ : Q → P. To split initial and terminal morphisms and the given object simultaneously, the target object of incoming morphisms and the source object of outgoing morphisms will change. Let's say the outgoing morphism ξ_m : F → G, which is assigned to be splitted into m-th group, now becomes ξ̃_m : F_m → G, and incoming morphisms (ψ_n' : P → F)_n = 1 ∼ N now become (ψ̃_n' : P → F_n)_n = 1 ∼ N. For any pair of an incoming morphism ψ̃_n' and an outgoing morphism ξ̃_m with n ≠ m, the composition between them is no longer possible, so such case will be removed from the multiplication table. Before this step, there is a morphism η = ξ_m ∘ψ_n', which will not be removed because for any incoming morphism ψ̃_n' : P → F_n, there is ψ̃_m' : P → F_m such that ξ̃_m ∘ψ̃_m' = η. Splitting an initial morphism also satisfies the same statement in an opposite way. In program, this process splits a node into multiple parts. The symbols on this node (except the base symbol) should be distributed to each part, as should the wires connected to them (see left one of Figure <ref>). The symbols support or are supported by the same symbol should be distributed to the same part (see Figure <ref>). All wires that connect to the base symbol of this node should be duplicated, so all passing points will also be duplicated (see right one of Figure <ref>). For the special case of splitting a category, it should be defined in another function since it has different return type. Because it is equivalent in the categorical perspective, the criteria is the same. To determine splittable parts, an utility function is also defined, which is similar to . The function finds splittable groups of symbols by applying union find algorithm to all values of dictionaries of edges. The process of splitting a category is defined as a function with a parameter |partition :: [[Symbol]]|, which are groups of symbols representing each splitted category. Splitted categories are built just by separating the edges of the root via . The process of splitting an object is defined as a function with parameters |tgt ::Symbol|, the symbol of the object to split, and |partition :: [((Symbol, Symbol) -Symbol, [Symbol])]|. 
For all incoming morphisms of the object to split, say , the pair of symbol |(s1, splitter (s1, s2))| for will indicate the incoming morphism of splitted object with the same source object. merge-non-terminal-morphisms §.§ Merge Non-Terminal Morphisms The reversed process of “disconnect” is “connect”. It is usually used to union two disjoint facets. This is needed when, for example, merging two sticked squares into one rectangle: before remove the boundary between them, one should union the area of two squares. In that moment, the merged geometric object becomes disconnected. The facets to be merged should be covered by the same facets. For example, you cannot merge two sticked squares which are faces of two different cubes; you should first merge the two cubes. In categorical perspective, this process merges multiple non-terminal morphisms into one. Merging two morphisms ϕ_n0 and ϕ_m0' means letting ϕ_n0 = ϕ_m0', which implies equivalence relations ϕ_n ∘…∘ϕ_1 ∘ϕ_0 = ϕ_m' ∘…∘ϕ_1' ∘ϕ_0' between their morphism chains ⟨ϕ_n, …, ϕ_2, ϕ_1 ⟩ and ⟨ϕ_m', …, ϕ_2', ϕ_1' ⟩. To merge two morphisms ϕ : S → F and ϕ' : S → F, if there is a morphism ψ : P → S, the equivalence relation ϕ∘ψ = ϕ' ∘ψ should already exist. Similarly, if there is a morphism ψ : F → V, the equivalence relation ψ∘ϕ = ψ∘ϕ' should already exist. When merging two initial morphisms ϕ : ∅→ F and ϕ' : ∅→ F', the target objects of morphisms are not the same. In this case, one should find an upper isomorphism of their upper categories, which is an isomorphism μ : ℱ↑ F' →ℱ↑ F such that their downward functors are equivalent under it: F^↓∘μ = F'^↓. In program, this process merges multiple symbols on a node, and keep all other unrelated wires unchanged. The incoming edge of this node should maps the symbols to be merged to the same point. The nodes referenced by these symbols should be the same, and their dictionaries should be the same except the base one, otherwise supportivity will be violated (see Figure <ref>). If their dictionaries map one symbol to two different symbols, user should first merge those symbols. When merging non-initial morphisms, there is no need to check whether they have the same target node, since it is already implied by another requirement. But for merging initial morphisms, this is important. To merge initial morphisms, the target nodes should be equivalent in data structure. If they are equivalent in category but not in data structure, they should be unified via and . Only target nodes are needed to be relabeled and rewired, since their descendants are already the same, as implied by another requirement. The process of merging morphisms is implemented as a function with parameters |(src, tgts) :: (Symbol, [Symbol])|, indicating morphisms to merge, and merged symbol , so will be indicate the merged morphism. When merging initial morphisms, it will checks if the structures of the target nodes are the same. Users should unify target nodes before merging. merge-terminal-morphisms §.§ Merge Terminal Morphisms In the previous subsection, we discuss the process of merging non-terminal morphisms called “connect”, which is the reverse process of “disconnect”, while merging terminal morphisms is called “merge”, which is the reverse process of “split”. It can be used to merge facets. For example, stick two squares by merging their edges, so that this edge now is covered by the two squares. The facets to be merged should cover the same facets. In the above case, before merging edges, their vertices should be merged first. 
A categorical perspective can be obtained by reversing the discussion of split-a-terminal-morphismcategorysplitting a terminal morphism. Merging N terminal morphisms (ϕ_n : F_n →𝕌)_n = 1 ∼ N is special since their source objects are different. Such process can be further decomposed into: merge initial morphisms (ϕ_n' : ∅→ F_n)_n = 1 ∼ N, terminal morphisms (ϕ_n : F_n →𝕌)_n = 1 ∼ N and objects (F_n)_n = 1 ∼ N simultaneously, then un-duplicate all incoming morphisms of the merged object. To merge initial and terminal morphisms and objects simultaneously, the target object of incoming morphisms and the source object of outgoing morphisms will change. Let's say the outgoing morphism ξ_m : F_m → G now becomes ξ̃_m : F → G, and the incoming morphism ψ_n' : P → F_n now becomes ψ̃_n' : P → F. Incoming morphisms have been grouped into (ψ̃_n' : P → F)_n = 1 ∼ N according to certain properties, such that the second step can be done. For any pair of an incoming morphism ψ̃_n' and an outgoing morphism ξ̃_m with n ≠ m, the composition between them is now possible, so such case should be added to the multiplication table. The result of ξ̃_m ∘ψ̃_n' should be set as ξ̃_m ∘ψ̃_m', where ψ̃_n' and ψ̃_m' are in the same group. To un-duplicate all incoming morphisms, one should start with maximal ones, so that the shorter ones can be un-duplicated coherently. Each group of incoming morphisms (ψ̃_n' : P → F)_n = 1 ∼ N will be identified as the same morphism, then be denoted as ψ̃' : P → F, which is called an un-duplication. They can be “un-duplicated” only if they behave the same up to indices, that is, ξ∘ψ̃_n' = ξ∘ψ̃_m' for all ξ : F → G and ψ̃_n' ∘ζ = ψ̃_n”ψ̃_m' ∘ζ = ψ̃_m” for all ζ : Q → P, where ψ̃_n' : P → F and ψ̃_m' : P → F are in the same group, ψ̃_n” : Q → F and ψ̃_m” : Q → F are in the same group. Conclude above discussion, to merge terminal morphisms (ϕ_n : F_n →𝕌)_n = 1 ∼ N, objects (F_n)_n = 1 ∼ N should have the same lower closure on the induced poset except upper bounds. Furthermore, their incoming morphisms should be grouped into (ψ_n' : P → F_n)_n = 1 ∼ N such that they behave the same up to indices: ϕ_n ∘ψ_n' = ϕ_m ∘ψ_m' and ψ_n' ∘ζ = ψ_n”ψ_m' ∘ζ = ψ_m” for all ζ : Q → P and (ψ_n' : P → F_n)_n = 1 ∼ N and (ψ_n” : Q → F_n)_n = 1 ∼ N. This requirement is equivalent to find lower isomorphisms between them, which are isomorphisms between their lower categories μ_nm : ℱ↓ F_m →ℱ↓ F_n such that their upward functors are equivalent under it: F_n^↑∘μ_nm = F_m^↑. In program, this process merges multiple nodes into one node. All symbols (except base symbols) on the nodes being merged are disjointly unioned. Wires connected to these symbols will remain the same. But the wires connected to base symbols should be modified since they are now merged into one base symbol (see left one of Figure <ref>). Consider the common parent of the nodes to be merged, which may have multiple edges between them (see right one of Figure <ref>). To merge nodes, these edges should also be merged. It should be decided which base wire should be merged into which. It causes some symbols on the parent node to be merged. All symbols referencing the nodes being merged also need to be merged coherently. “Coherent” means the shape of base wires should remain the same after merging. Note that incorrectly merging incoming edges will result in inconsistent situations. For example, consider two triples of symbols on a node, say and , and assume there are some incoming wires: . 
In this case, merging and will lead to merging all four symbols , , , , which changes the configuration of the wires (see Figure <ref>). It is implemented as a function with parameters |tgts_suffix :: [(Symbol, [(Symbol, Symbol)])]| and |merger :: (Symbol, [Symbol]) -Symbol|, where contains nodes to merge and the corresponding nondecomposable incoming edges, and is the function to merge symbols on all ancestor nodes. All incoming morphisms of these objects, say , will be merged into the morphism indicated by pair of symbol . The nondecomposable incoming edges of the nodes to merge will be paired up by function according to the keys. summary § SUMMARY Above we discussed fundamental operations in categorical level, which can be classified as: enumi. * remove: remove-a-non-terminal-morphismremoving a non-terminal morphism, remove-a-terminal-morphismremoving a terminal morphism. * add: add-a-non-terminal-morphismadding a non-terminal morphism, add-a-terminal-morphismadding a terminal morphism and add-an-initial-morphismadding an initial morphism. * split: split-a-non-terminal-morphismsplitting a non-terminal morphism and split-a-terminal-morphismcategorysplitting a terminal morphism/category. * merge: merge-non-terminal-morphismsmerging non-terminal morphisms, merge-terminal-morphismsmerging terminal morphisms and merge-categoriesmerging categories. These operations are fundamental because they directly manipulate morphisms, like we do with graphs. Fundamental operations are not simple, as some operations require multiple steps to complete. For example, to add a morphism, it is required to analyze the situation first then select required options, which are not arbitrary and may fail; user should understand the mechanism behind it. Since initial and terminal morphisms are special, operations of them are more complicated and can be decomposed into multiple operations. For example, removing an initial or terminal morphism can be decomposed into removing all related morphisms then splitting category, which make adding an initial or terminal morphism very complicated as a reverse process. Splitting initial or terminal morphisms also can be decomposed into splitting all related morphisms then splitting on duplications. A similar decomposition can be applied to merge initial or terminal morphisms, while should give isomorphisms between the objects to be merged. Noting that adding/merging initial or terminal morphisms is much more complicated than removing/splitting initial or terminal morphisms, and the complexity comes from the huge information changes during these operations. In different perspectives, the similarity between operations are different. In categorical perspectives, operations on initial and terminal morphisms are very different from operations on non-initial non-terminal morphisms. In programming perspectives, however, dealing with initial morphisms and non-terminal morphisms are almost the same, but not the same as dealing with terminal morphisms. The difference lies in the way the BAC stores the information. A node in BAC contains the information of the upper closure of an object, which constitutes a upper category. This directional structure makes it lose the duality of categories. Instead, it is more suitable for describing incidence structures, thus making the interpretation of geometric objects more intuitive. These complementary strengths drive investigations into fundamental operations, and also provide a more stereoscopic approach to analyze BAC. 
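Purely as a reading aid for this classification (the library exposes these as separate functions rather than as a single type), the fundamental operations could be enumerated as a sum type:

  -- an illustrative catalogue of the fundamental operations discussed in this article
  data FundamentalOp
    = RemoveNonTerminal | RemoveTerminal
    | AddNonTerminal    | AddTerminal   | AddInitial
    | SplitNonTerminal  | SplitTerminal | SplitCategory
    | MergeNonTerminal  | MergeTerminal | MergeCategories
    deriving (Show, Eq)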
Some functions in the library were not mentioned here: a few are natural generalizations of the operations above from the programming perspective, others mirror them at the program level, and a pair of helpers provide a way to validate upper and lower isomorphisms and to traverse BACs in parallel. Some operations were not implemented, such as isomorphism and the section category. Various operations on cell complexes or simplicial complexes, such as the Cartesian product, join and connected sum, can also be generalized. These are not necessary for building incidence structures, so they will not be discussed in this series of articles. In the next article, we will develop a computational geometry system to describe geometric objects consisting of curved facets in any dimension.
http://arxiv.org/abs/2307.02068v1
20230705071816
Properties of secondary components in extensive air shower of cosmic rays in knee energy region
[ "Chen Yaling", "Feng Zhang", "Hu Liu", "Fengrong Zhu" ]
astro-ph.HE
[ "astro-ph.HE", "hep-ex" ]
Properties of secondary components in extensive air shower of cosmic rays in knee energy region
Chen Yaling^1, Feng Zhang^1, Hu Liu^1,†, Fengrong Zhu^1
===============================================================================================

Email: [email protected]
^1 School of Physical Science and Technology, Southwest Jiaotong University, Chengdu 610031, Sichuan, China

The "knee" of the cosmic ray spectrum reflects the maximum energy reached by galactic cosmic ray accelerators or the limit of the Galaxy's ability to confine cosmic rays. Measuring the energy spectra of individual components is a crucial tool for ascertaining the origin of the knee, and such measurements rest on two foundations: energy reconstruction and identification of the primary nuclei. In this work, the extensive air showers of cosmic rays in the knee energy region are simulated with the CORSIKA software. The energy resolution achievable with different secondary components (electrons, gamma rays, muons, neutrons and Cherenkov light) and the capability to identify the primary nuclei are studied. Around the knee, energy reconstruction using electromagnetic particles (electrons, gamma rays and Cherenkov light) performs better than reconstruction using the other secondary particles: the resolution is 10%–19% for protons and 4%–8% for iron. For primary nuclei identification, the muon density gives the best discrimination at both low (around 100 TeV) and high (around 10 PeV) energies, the shape of the lateral distribution of electrons and gamma rays discriminates well at low energy, and the neutron density discriminates well at high energy. The differences between the lateral distributions of secondary particles simulated with the EPOS-LHC and QGSJet-II-04 hadronic models are also studied. For electrons, gamma rays and Cherenkov light, the differences in particle number are within 5%; for muons, the difference is within 5% when the perpendicular distance from the shower axis is greater than 100 m; for neutrons, the difference between the two models is larger than 10%. These results provide useful guidance for choosing the secondary components and detector types used for energy reconstruction and for identifying the primary nuclei of cosmic rays in the knee region.

extensive air shower, cosmic rays, composition identification, energy reconstruction

§ INTRODUCTION

Cosmic rays are high-energy particles from outer space whose energy spectrum follows a power law, with a maximum energy of about 10^21 eV. The main feature of the spectrum is that at about 10^15 eV the spectral index of the power law changes from about –2.7 to –3.1; this feature is called the "knee". The origin of the knee is an important subject in cosmic ray physics <cit.>. The knee of the cosmic ray spectrum reflects the maximum energy reached by galactic cosmic ray accelerators or the limit of the Galaxy's ability to confine cosmic rays. Different models predict different characteristics for the inflection energy (the energy at which the spectral index changes) of the single-component cosmic ray energy spectra in the knee region. For example, some models predict that the inflection energy is proportional to the charge Z of the primary particle <cit.>, and others predict that it is proportional to the mass number A of the primary particle <cit.>.
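For orientation only (the absolute position of the proton knee is model dependent and is not taken from this paper), the two classes of models can be summarized as

  E_knee(Z) ≈ Z · E_knee(p)   (rigidity-dependent, charge scaling)
  E_knee(A) ≈ A · E_knee(p)   (mass-dependent scaling)

where E_knee(p) is the position of the proton knee, of order a few PeV, so that the knee of each element is shifted upward in energy by its charge or by its mass number.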
The measurement of single component energy spectrum of cosmic ray is of great significance to the study of these transformations. At present, cosmic ray can be measured directly or indirectly. Direct measurement is mainly to measure cosmic ray through High-altitude balloon and space experiments, such as CREAM <cit.>, AMS <cit.>, DAMPE <cit.>, etc. Its advantage is that it can directly measure the charge of primary particles, and it has good discrimination ability for cosmic ray with different charges. At the same time, it can use the beam of accelerator experiment to calibrate the detector, and the absolute energy scale is relatively easy to determine. However, due to load limitations, the effective detection area is small, and the upper limit for measuring the energy spectrum can only reach around 100 TeV <cit.>. Therefore, the measurement of cosmic ray in the knee area mainly relies on indirect measurements from ground-based experiments, such as KASCADE <cit.>, ARGO-YBJ <cit.>, LHAASO <cit.>, ICECUBE <cit.>, TALE <cit.>, TUNKA <cit.> and AS-g <cit.> . The ground experiment measures the primary cosmic ray by measuring the secondary components produced by the cosmic ray in the extensive air shower (EAS). Compared with the direct measurement method, it has the advantage of large effective detection area and can measure the energy spectrum of cosmic ray in the knee region. However, because the original cosmic ray particles are not directly measured, the ability to identify the composition of cosmic ray is not high, and the method of energy reconstruction often depends on the composition of the original particles and the absolute energy scale is not easy to determine, so for ground experiments, the energy measurement and the ability to identify the composition of the original Cosmic ray are the constraints for accurate measurement of single component energy spectrum. At present, most experiments only measure one or more of the secondary particles. For example, the detection energy band of KASCADE/KASCADE-Grand experiment is about 100 TeV-100 PeV, which can detect the electron, muon and hadron components in the secondary particles <cit.>. The proton, helium nucleus, carbon, silicon and iron elements in the "knee" cosmic ray are identified and measured by the electromagnetic particle number and muon number <cit.>; ARGO-YBJ and LHAASO-WFCTA prototypes detected charged particles and Cherenkov photons in secondary particles, and measured the full particle energy spectrum and light component energy spectrum of cosmic ray in the energy range of 1 TeV-10 PeV <cit.>. The measurement energy band of ICETOP/ICECUBE is about 250 TeV-1 EeV <cit.>, Aartsen et al<cit.> used the deep learning technology to reconstruct the energy and composition of cosmic ray using Cherenkov photons generated by secondary particles in ice, thus realizing the energy spectrum measurement of components. These experiments measure different types of secondary components and the measurement results of energy spectra do not match <cit.> . In this paper, we will study the energy reconstruction accuracy and particle identification ability of these secondary components, as well as their dependence on the strong interaction model. To provide reference for understanding the differences in measurement results between different experiments and how to obtain better energy reconstruction accuracy and particle identification ability. The measurement is conducted at the altitude where the longitudinal development of EAS reaches the maximum. 
The fluctuation of secondary particles is smaller, which can obtain better detection performance. Many experiments also measure cosmic ray at this altitude. In this paper, the secondary particles and Cherenkov photons of vertically incident cosmic ray at 4400 m above sea level will be studied in the knee region. Section <ref> introduces the parameter settings for simulation, including the selection of detection planes, the parameter settings for secondary particles and Cherenkov light; Section <ref> studies the horizontal distribution characteristics of secondary components in EAS and the differences between different Strong interaction models; Section <ref> studies the energy reconstruction accuracy of secondary components to the original Cosmic ray; Section <ref> studies the component identification ability of secondary components to the original Cosmic ray; Section <ref> is a summary. § EAS SIMULATION In this paper, CORSIKA Version-7.7410 software package <cit.> is used to simulate the EAS of Cosmic ray in the atmosphere. EPOS-LHC and QGSJet - Ⅱ -04 are used as the high-energy strong interaction models, and EPOS-LHC Strong interaction model is used specifically. However, these two high-energy Strong interaction models are compared (figure <ref> and figure <ref>). The low-energy Strong interaction models use FLUKA, and the electromagnetic interaction models use EGS4. The five primary components are proton, helium, CNO, MgAlSi, and iron, respectively. The mass number of CNO and MgAlSi are 14 and 27, respectively. The initial particle energy log_10(E/GeV) is fixed at 5.1, 5.3, 5.5, 5.7, 5.9, 6.1, 6.5, and 6.9. The zenith angle is fixed at 0 °, and the azimuth angle is evenly projected within 0 ° -360 °. In order to study the effects of non-vertical incidence, this article simulated a case where the zenith angle was fixed at 45 °, and compared the results of vertical incidence and zenith angle at 45 ° (figure <ref>), with all other results being vertical incidence. The observation plane is selected at an altitude of 4400 meters, and the horizontal component and vertical component of the Earth's magnetic field at the observation plane are 34.618 μ T and 36.13 μ T, respectively. The truncation kinetic energy of secondary particles is set to: hadron 0.1 GeV, muon 0.1 GeV, electron 1 MeV, and gamma ray 1 MeV. The selected truncation kinetic energy is lower than the default value in the CORSIKA manual to store more secondary particles, and the contribution of secondary particles below the selected truncation kinetic energy to the overall transverse distribution is very small. The wavelength of Cherenkov light is set at 200-1000 nm. The collection area of Cherenkov photons is a circular area whose vertical distances from the shower axis are respectively r=20, 50,100, 150, 200, 300 and 400 m. The radius of the circle is 3 m. In real experiments, the atmosphere has absorption and scattering effects on Cherenkov light, including Rayleigh scattering, aerosol scattering and ozone absorption. But they depend on specific models, and this article mainly studies the detection performance under ideal conditions, so these processes will not be considered in the simulation for the time being. § LATERAL DISTRIBUTION OF SECONDARIES §.§ Type of secondaries The secondary components produced in EAS include Cherenkov photons, positron-electron , gamma rays, muons, neutrons, and other particles. 
Figure <ref> shows the types and numbers of secondary particles produced in the EAS process by cosmic rays with protons (black) and iron nuclei (red) as primary particles at an energy of log_10(E/GeV)=5.1. Other primary cosmic rays produce similar secondary components in EAS, so they are not described here. At the chosen observation plane, the most numerous secondary particles are, in descending order, Cherenkov photons, gamma rays, electrons and positrons, muons and neutrons. Most current experiments measure one or more of these secondary particles, and this paper studies only these secondary components.

§.§ Lateral distribution

In the EAS process, the perpendicular distance from the shower axis is denoted r, and the dependence of the secondary particle number density on r is the lateral distribution of the secondary particles. Figure <ref> and figure <ref> show the lateral distributions of the secondary components generated in EAS by a proton with energy log_10(E/GeV)=5.1 and by an iron nucleus with energy log_10(E/GeV)=6.9, respectively. At a given position, the number density of Cherenkov photons is thousands of times that of gamma rays, and the number density of gamma rays is 100–1000 times that of neutrons, which have the smallest density. Taking r=100 m, energy log_10(E/GeV)=6.9 and an iron primary as an example, the Cherenkov photon density is about 4×10 m^-2, the gamma ray density is about 100 m^-2, the electron-positron density is about 10 m^-2, the muon density is about 0.3 m^-2 and the neutron density is about 0.05 m^-2.

To see the radial extent of the secondary particles in the detection plane, rings centred on the shower core are taken, with radius equal to the perpendicular distance from the core, and the number of secondary particles in each ring is counted, as shown in figure <ref>. Figure <ref> and figure <ref> correspond to a proton primary with energy log_10(E/GeV)=5.1 and an iron primary with energy log_10(E/GeV)=6.9, respectively. The abscissa is binned uniformly in log_10(r). For primary cosmic rays of different energies and compositions, the number of secondary particles first increases with r, reaches a maximum and then decreases with r. Electrons and positrons are mainly distributed within 10–100 m; gamma rays and muons are mainly distributed within tens to hundreds of metres from the core; neutrons are mainly distributed around 1 km from the core; and Cherenkov photons are mainly distributed near 100 m from the core.

For EAS initiated by electromagnetic particles, the Nishimura-Kamata-Greisen (NKG) function is usually used to describe the lateral distribution of the secondary particles, expressed as ρ_1(r)=N_size C(s)(r/R_M)^s-2(1+r/R_M)^s-4.5 C(s)=1/2 π R_M^2×Γ(4.5-s)/Γ(s) Γ(4.5-2 s) In the formula, C(s) is a function of s, Γ denotes the Gamma function, r is the perpendicular distance from the EAS shower axis, ρ_1(r) is the particle number density at r, N_size is the total number of secondary particles, R_M is the Molière radius at the altitude of the observation plane, and s is the age of the EAS development<cit.>.
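As a purely numerical illustration (not part of this paper's analysis chain), the NKG profile defined above can be evaluated directly. The sketch below is written in Haskell and assumes the math-functions package for the Gamma function; the parameter values passed to it are chosen by the user:

  import Numeric.SpecFunctions (logGamma)

  gammaFn :: Double -> Double
  gammaFn = exp . logGamma

  -- NKG lateral density rho(r): nSize is the total particle number, rM the
  -- Moliere radius (m), s the shower age, r the perpendicular distance (m)
  nkg :: Double -> Double -> Double -> Double -> Double
  nkg nSize rM s r = nSize * c * (r / rM) ** (s - 2) * (1 + r / rM) ** (s - 4.5)
    where
      c = gammaFn (4.5 - s) / (2 * pi * rM * rM * gammaFn s * gammaFn (4.5 - 2 * s))

For example, nkg 1e6 130 1.0 100 gives the expected particle density per square metre at 100 m from the axis for a shower of 10^6 particles, an assumed Molière radius of 130 m and age s = 1.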
For Cosmic ray whose primary particles are protons, helium, oxygen, silicon and iron, different ground experiments have modified the NKG function to describe the transverse distribution of its secondary particles. For example, in the KASCADE experiment, the expression of describing the lateral distribution of secondary particles produced by hadrons in EAS is <cit.> ρ_2(r)=N_size C(λ)(r/r_0)^λ-α(1+r/r_0)^λ-β C(s)=1/2 π r_0^2×Γ(β-λ)/Γ(λ-α+2) Γ(α+β-2 λ-2) In the formula, the parameter represents the age of EAS development, which is a free parameter. But r_0, α and β are defined as a constant. For the KASCADE experiment, α = 1.5, β = 3.6 and r_0 = 40 meters. This article first attempts to use equation (<ref>) to fit the lateral distribution of different secondary components in figure <ref>. It was found that there can be multiple sets of fitting parameters for the same horizontal distribution, that is, there is coupling between the parameters (for example, only two parameters are independent in λ,α and β). In order to reduce fitting parameters, this article adopts a more general equation (<ref>) to fit the lateral distribution of secondary components: ρ(r)=N_size C(s)(r/r_0)^s-2(1+r/r_0)^(s+Δ) C(s)=1/2 π r_0^2×Γ(-s-Δ)/Γ(s) Γ(-Δ-2 s) In the equation, Δ is the parameter. When Δ= -4.5 and r_0 = R_M in equation (<ref>), it is consistent with equation (<ref>); When s=λ+0.5, Δ = -4.1 and r_0 = 40 meters , it was consistent with equation (<ref>) . The formula (<ref>) is a double power law function. The specific meaning of the parameter is to represent the power law index (or slope) of the phase where the Particle number increases with the increase of the value of r in figure <ref>, which is equivalent to the age parameter in Formula (<ref>). 2s+ Δ represents the power law index (or slope) of the phase where the Particle number decreases with the increase of the value of r in figure <ref>, and r_0 represents the coordinates of r where the two different power law indexes change. There are four free parameters in the formula (<ref>) that are N_size, s, Δ, r_0. And the value of these fitting parameters are not unique. The same horizontal distribution can be fitted by combining multiple groups of parameters. Multiple parameter combinations can be used to fit the same lateral distribution. In order to further reduce the number of free parameters, the correlation between these parameters was studied. For the energy segment simulated in this paper and the original cosmic ray component, when the secondary particle is gamma ray and r_0^γ=460m, the horizontal distribution of these cases can be fitted. And the correlation between s_γ and Δ_γ is shown in figure <ref> and the fitting expression is Formula (<ref>). So N_size ^γ ,s_e will only be used as free parameters. Correspondingly, for the electrons in the secondary particles, when r_0^e=50m, fitting s_e and Δ_e satisfy equation (<ref>), as shown in figure <ref> N_size ^e and s_e will only be used as free parameters. For the muon in the secondary particle, fixed r_0^μ=800m, the s_μ and Δ_μ satisfy equation (<ref>) and N_size ^μ,s_μ will only be used as free parameters. Δ_γ=-1.18 · s_γ^2+1.94 · s_γ-5.00 Δ_e=-0.35 · s_e^2-0.27 · s_e-3.20 Δ_μ=-s_μ-4.4 For neutrons in secondary particles, due to the small number of secondary particles, the limit on the transverse distribution function is weaker. And the range of variation for each parameter is larger. 
Following the approach of equation (<ref>) in reference <cit.>, certain parameters of equation (<ref>) of this paper were fixed and then optimized. It was found that equation (<ref>) fits the lateral distribution of the neutrons well, with N_size^n and r_0^n as the free parameters: ρ_n(r)=N_size^n C_n(r/r_0^n)^-0.9(1+r/r_0^n)^-4.0 C_n=1/(2 π(r_0^n)^2)×Γ(4.0)/(Γ(1.1) Γ(2.9)) Equations (<ref>) and (<ref>) are used to fit the lateral distributions of the secondary particles produced in EAS by cosmic rays of different compositions, as shown in figure <ref>. Because only a small fraction of the Cherenkov photons is kept in the simulation, the lateral distribution of the Cherenkov photons is not fitted. To test the quality of the fits to the lateral distributions of the different secondary components, the deviation between the fitted values N_size and the counted numbers N of secondary particles produced by cosmic rays of different compositions and energies in EAS, Diff = (N-N_size)/N_size × 100 %, is shown in figure <ref>. For energies log_10(E/GeV)>5.5 the deviation is within 6% for all particle types; the spread of these fitted quantities will later be used to characterize the accuracy of the energy reconstruction. §.§ Differences between hadronic models The intensities of the proton spectrum in the knee region measured with different hadronic interaction models in the KASCADE experiment differ by nearly a factor of two <cit.>. In this paper, the difference between the lateral distributions predicted by the two hadronic interaction models EPOS-LHC and QGSJET-II-04 is studied, and the results are shown in figure <ref> and figure <ref>. It can be seen that the differences in the numbers of electrons and positrons, gamma rays and Cherenkov photons between the two models are similar and small: for these three particle types the model difference is within 5% for r>20 m and within 10% over the whole range of r. For muons, the model difference is less than 5% for r>100 m, but for r<100 m the maximum difference approaches 20% (for an iron primary with an energy of about 10 PeV, around r=5 m). The neutrons differ the most, with a difference of 10%-20% at r>100 m and a maximum difference of about 40% reached at small r (r<10 m). In general, the model difference for muons and neutrons is significantly reduced for r>100 m. For experiments measuring muons and neutrons, it is therefore recommended that the array extent exceed 100 m and that particles at distances larger than 100 m from the core be selected for reconstruction, in order to reduce the model dependence. Muons and neutrons are products of the hadronic interaction process. The EPOS-LHC model takes into account effects that are not considered in other hadronic interaction models: in its treatment of multiple scattering, the energy scale of each individual scattering is taken into account when calculating the corresponding cross sections, which is not the case in the QGSJET-II-04 model based on Gribov-Regge theory <cit.>. The differences between the hadronic interaction models have been studied in detail in the literature <cit.> and are not discussed further in this paper. § ENERGY RESOLUTION By fitting the lateral distributions of the secondary particles produced in cosmic-ray EAS, the fitting parameters of each secondary component can be obtained.
The number of particles, or the particle number density at a certain radius, is often used for energy reconstruction, while the ratios of the numbers of different secondary particles and the shape parameters of the lateral distribution are often used to identify the composition of the primary particle. In this paper, we use the fitted particle numbers of the four secondary components obtained in Section <ref>, together with the Cherenkov photon numbers counted at different values of r, to characterize the energy reconstruction accuracy and to compare the differences between them. Only the accuracy of energy reconstruction at fixed composition is studied here; the composition dependence of the energy reconstruction, and the construction of composition-independent energy estimators by combining composition-sensitive variables with real observational data, are beyond the scope of this paper. The results obtained here are better than those that would be obtained after a composition correction, so they can be regarded as an upper limit on the energy reconstruction performance achievable with a single secondary component. Since the energy of the primary particle is proportional to the number or number density of secondary particles, E=C × N_size, the percentage spread of the number of secondary particles or of the number density (defined as the spread of the distribution divided by its mean) is equal to the resolution of the reconstructed energy, Δ E/E=Δ(C × N_size)/(C × N_size) =Δ N_size /N_size Because the simulations are carried out at several discrete fixed energies, the influence of the width of an energy bin on the broadening of the particle-number distribution is not involved. Therefore, this paper directly uses the percentage broadening of the particle number or particle number density to characterize the energy reconstruction accuracy, without carrying out an explicit energy reconstruction; the broadening is quantified by the σ of a Gaussian fit to the distribution. At present, the most commonly used energy reconstruction method is to reconstruct the energy from the secondary particle number density ρ at a fixed distance r (the electron number density ρ_e, the gamma-ray number density ρ_γ, the muon number density ρ_μ, or the neutron number density ρ_n) <cit.>. Figure <ref> shows the spread percentage of the secondary electron number density ρ_e produced by primary particles of different energies as a function of the distance r when the primary component is iron; the lines of different colors represent the different primary energies. It can be seen that, for the electron number density, the broadening percentage is smaller in the range 100-500 m and depends only weakly on the composition and energy of the primary particle. The other secondary components behave similarly and are not detailed here: the gamma rays perform best in the 300-800 m range, the muons in the 150-600 m range, and the neutrons in the 800-2000 m range. The percentage spreads of the electron number density at 200 m, the gamma-ray density at 500 m, the muon density at 250 m and the neutron density at 1000 m will therefore be used to characterize the accuracy of energy reconstruction based on these observables (figure <ref>).
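As a small illustration of how such a broadening percentage can be computed, the following MATLAB sketch fits a Gaussian to a sample of fitted particle numbers or densities and returns σ/mean in percent. The function name is a placeholder, fitdist (Statistics and Machine Learning Toolbox) is used for the Gaussian fit, and fitting the sample directly rather than a histogram is a simplifying assumption made here.

function spread = percent_spread(x)
% Percentage spread of a sample x, e.g. the values of rho_e at r = 200 m
% obtained from many showers at one fixed energy and composition.
% The broadening is taken as sigma/mu of a Gaussian fit, in percent,
% which by dE/E = dNsize/Nsize serves as a proxy for the energy resolution.
pd     = fitdist(x(:), 'Normal');   % maximum-likelihood Gaussian fit
spread = 100 * pd.sigma / pd.mu;
end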
Another way to reduce the broadening of the particle-number distribution is to use the age parameter s to correct the particle number <cit.>. Since the number of secondary particles at the observation plane is affected by the development stage of the EAS, and the age parameter s characterizes this development stage, correcting with the age parameter reduces the influence of the shower development stage. Figure <ref> shows the distribution of the fitted secondary-electron parameter N_size^e versus s_e when the primary particle is iron with energy log_10(E/GeV) = 5.1; ln(N_size^e) decreases as s_e increases. The red solid line is a straight-line fit, and the value obtained by correcting ln(N_size^e) along this line to the mean s_e is denoted ln(N_size2^e); this correction effectively reduces the broadening. The red and blue curves in figure <ref> show the distributions before and after this correction, respectively, each fitted with a Gaussian function. The broadening of the corrected ln(N_size^e) is significantly smaller, and it is used to characterize the energy reconstruction accuracy of this method; the results are shown in figure <ref>. Figure <ref> compares, for iron primaries, the energy reconstruction accuracy obtained from the particle number before and after the age correction (N_size and N_size2) and from the secondary particle number density, as a function of the primary energy. It can be seen that, for the electrons and gamma rays among the secondary particles, both the age-corrected particle number and the particle number density give a noticeably better energy reconstruction accuracy than the uncorrected particle number, and the age-corrected particle number is slightly better than the number density, although the difference is small. For the muons and neutrons among the secondary particles, there is little difference between the accuracies obtained from the particle number, the particle number density and the age-corrected particle number. Since the age-correction curve depends on the energy and type of the primary particle, whereas the radius at which the number density is evaluated is fixed, and since the resulting accuracies are similar, the number density of each secondary component at a fixed distance will be used to characterize the optimal energy reconstruction accuracy. The energy reconstruction accuracies obtained with the EPOS-LHC and QGSJET-II-04 models are similar. This paper studies the limiting detection performance under ideal conditions and does not include the detector response; the systematic error in the energy reconstruction caused by the difference in the mean particle numbers between the two hadronic interaction models is not considered here. Figure <ref> shows the percentage broadening of the distribution of the Cherenkov photon number N^C at different perpendicular distances from the shower axis when the primary particle is iron, for distances of 20, 50, 100, 150, 200, 300 and 400 m. The number of Cherenkov photons at 50 m, 150 m and 200 m has a smaller spread of about 4%-7%. The energy reconstruction accuracy obtained from the different secondary components is shown in figure <ref>, where the primary particles in Figure 13(a)-(e) correspond to proton, helium, carbon, nitrogen, oxygen, magnesium, aluminum, silicon and iron, respectively.
For protons, the energy reconstruction accuracy obtained from the electrons, gamma rays and Cherenkov photons at 50 m is about 10%-19%. For iron, the energy reconstruction accuracy obtained from the gamma rays, Cherenkov photons and muons at 150 m is better, about 4%-8%. The higher the mass number of the primary particle, the better the energy reconstruction accuracy. In an experiment, several secondary components can be combined, according to the individual energy reconstruction accuracy of each, to obtain energy reconstruction variables with weaker composition dependence and higher precision. The above results can serve as a reference for the choice of secondary particle type, energy reconstruction method and distance from the core. § COMPOSITION DISCRIMINATION The identification of the primary particles of cosmic rays is the key to measuring single-component cosmic-ray energy spectra. Studying the sensitivity of the secondary components in EAS to the primary particle provides guidance for the selection of composition-identification variables. In this paper, the ability of the fitted lateral-distribution parameters obtained in Section <ref> to identify the primary particle is studied. Since Cherenkov light is mainly used to identify primary particles through imaging, it is beyond the scope of this study and is not considered further. According to the study in Section <ref>, the particle number density fluctuates less than the total number of particles, and it therefore also has better discrimination power for composition identification; the particle number density is used here instead of the total particle number. This paper studies the composition-identification ability of each variable at the same energy, which shows the standalone discrimination power of each variable. For real experimental data, energy reconstruction variables can be combined with energy-independent composition-identification variables. For example, the number of electromagnetic particles and the number of muons among the secondary particles are both related to the energy and type of the primary particle; the electromagnetic component fluctuates less, while the muon number is more sensitive to the composition, so the muon number can be used to correct the electromagnetic component and obtain a composition-independent energy reconstruction variable, and this energy reconstruction variable can in turn be used to construct an energy-independent composition-sensitive variable. For reasons of space, this is not detailed here. Figure <ref> shows the ability of the particle number density and the age parameters to distinguish protons from iron nuclei when the secondary particles are electrons and positrons, gamma rays, muons and neutrons, respectively. The dots of different colors in Figure <ref> represent protons and iron nuclei with energies log_10(E/GeV) = 5.1, log_10(E/GeV) = 6.1 and log_10(E/GeV) = 6.9, respectively. It can be seen that the age parameters s of the electrons, positrons and gamma rays, and the particle number densities of the muons and neutrons, have a good ability to identify the composition. In order to demonstrate their particle identification ability more vividly, the distributions of protons and iron nuclei at the same energy are projected onto the coordinate axes of the above variables, respectively, as shown in figure <ref> and figure <ref>.
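One simple way to quantify the separation visible in such projected distributions (not used in this paper, but sketched here for illustration) is a Gaussian separation figure of merit between the proton and iron samples of a given discrimination variable:

function S = separation_power(xp, xfe)
% Illustrative separation figure of merit between the proton sample xp and
% the iron sample xfe of one discrimination variable, e.g. log10(rho_mu):
%   S = |mean_p - mean_Fe| / sqrt(var_p + var_Fe).
% Larger S means the two projected distributions overlap less.
S = abs(mean(xp(:)) - mean(xfe(:))) / sqrt(var(xp(:)) + var(xfe(:)));
end

Such a number is only a rough summary of the projections shown in the figures; the discussion below is based on the full projected distributions.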
It can be seen that the identification power of the muon particle number density is the best in both the low-energy and the high-energy ranges. The age parameters s_e and s_γ describing the shapes of the electron-positron and gamma-ray lateral distributions perform better in the low-energy range (for example around 100 TeV), while the identification ability of the neutron number density is better in the high-energy range (for example around 10 PeV). In an experiment, several secondary components can be combined, according to the identification ability of each, to obtain composition-identification variables with weaker energy dependence and better discrimination power; this can provide a reference for the selection of composition-identification variables and detector types at different energies. This paper also studies the case of a zenith angle of 45°, which increases the atmospheric depth compared with vertical incidence. For the energy range studied in this paper and the selected altitude, the atmospheric depth at a zenith angle of 45° exceeds the depth at which the shower develops to its maximum, whereas the atmospheric depth at vertical incidence is close to the depth of shower maximum. The distributions of the detected secondary components are therefore affected, and so are the energy reconstruction accuracy and the particle identification ability. The comparison of the distributions of the secondary components at the two zenith angles is shown in figure <ref>. It can be seen that, as the zenith angle increases, the numbers of electrons and gamma rays decrease, because the shower has passed its maximum, and their fluctuations become significantly larger. The muons interact only weakly with the atmosphere during propagation, so the increase in atmospheric depth has little influence on the muon number, and the fluctuation of the muon number remains small. The neutrons continue to undergo hadronic interactions with the atmosphere and are attenuated more than the muons; the fluctuation of the neutron number increases with the zenith angle, with an amplitude between those of the electromagnetic particles and the muons. The telescopes measure the Cherenkov light at fixed positions, and the way the Cherenkov light density varies with the zenith angle depends on the perpendicular distance r of the detection area from the shower axis. As shown in figure <ref>, at r=50 m the number of Cherenkov photons decreases with increasing zenith angle, while at r=150 m it increases with increasing zenith angle. At both r=50 m and r=150 m, the fluctuation of the Cherenkov photon number increases significantly, with an amplitude similar to that of the electromagnetic particles. In summary, when the atmospheric depth exceeds the depth at which the shower develops to its maximum, the fluctuations of the various secondary components increase; the muon fluctuation changes the least and the electromagnetic-particle fluctuation changes the most. § SUMMARY The single-component energy spectrum in the cosmic-ray knee region is an important means of understanding the physical origin of the knee. Since ground-based experiments lack a good absolute energy calibration method, and can only measure the secondary particles produced by the primary particles in EAS rather than the primary particles themselves,
the energy measurement and particle identification abilities of ground-based experiments become the limiting factors of single-component energy spectrum measurement. Based on this, we simulate the characteristics of the secondary components of EAS induced by cosmic rays of different energies and primary compositions at an altitude of 4400 m above sea level, including electrons and positrons, gamma rays, muons, neutrons and Cherenkov photons. The lateral distribution characteristics of the various secondary components, and the dependence of these lateral distributions on the hadronic interaction model, are studied in detail, and the lateral distributions are fitted well with specific functions. Using these fitting parameters, the method and accuracy of energy reconstruction, the dependence on the hadronic interaction model and the discrimination power for cosmic-ray particle identification are studied in detail. This provides a reference for the selection of detector types, energy reconstruction methods and composition-identification variables in ground-based experiments. For energy reconstruction, using the number density of secondary particles at a certain perpendicular distance r from the core is a better choice than using the total number of secondary particles, and it is less composition dependent than the total particle number corrected with the age parameter. When the primary particle is a proton, the energy reconstruction accuracy obtained from the electrons, gamma rays and Cherenkov photons at 50 m is good, about 10%-19%. When the primary particle is iron, the energy reconstruction accuracy obtained from the gamma rays, Cherenkov photons and muons at 150 m is better, about 4%-8%. The higher the mass number of the primary particle, the better the energy reconstruction accuracy. In an experiment, several secondary components can be combined, according to the individual energy reconstruction accuracy of each, to obtain energy reconstruction variables with weaker composition dependence and higher precision. For particle identification, the identification power of the muon number density ρ_μ is the best in both the low-energy and the high-energy ranges. The age parameter describing the shape of the electron-positron and gamma-ray lateral distributions performs better in the low-energy range (for example around 100 TeV), while the identification ability of the neutron number density is better in the high-energy range (for example around 10 PeV). In an experiment, several secondary components can be combined, according to the identification ability of each, to obtain composition-identification variables with weaker energy dependence and better discrimination power, for example with multivariate analysis methods <cit.> or deep learning methods <cit.>; the parameters provided in this paper can be used directly as training variables. Concerning the differences in the lateral distributions of the secondary particles between the two hadronic interaction models EPOS-LHC and QGSJET-II-04, the differences in the numbers of electrons and positrons, gamma rays and Cherenkov photons are similar to one another and smaller than those of the muons and neutrons: they are within 5% for r>20 m and within 10% over the whole range of r. The difference in the muon number is within 5% for r>100 m, but the maximum difference approaches 20% (for an iron primary with an energy of about 10 PeV, near r=5 m). The difference for the neutrons is the largest, 10%-20% at r>100 m with a maximum difference of about 40% (at r<10 m).
Overall, the difference between the two models for the muons and neutrons is significantly reduced at r>100 m; selecting secondary particles at distances larger than 100 m from the core for the reconstruction can therefore effectively reduce the dependence on the hadronic interaction model. For incidence at a zenith angle of 45°, compared with vertical incidence, the number of electromagnetic particles is significantly reduced and its fluctuation is larger, so the energy reconstruction accuracy and particle identification ability obtained from them become worse. The muon number is only slightly reduced and its fluctuation changes little, so the detection performance based on muons is barely affected. The decrease in the neutron number and the increase in its fluctuation lie between those of the electromagnetic particles and the muons. The change in the Cherenkov photon number relative to vertical incidence depends on the perpendicular distance of the detection position from the shower axis, and the fluctuation of the photon number is also larger than at vertical incidence, with an amplitude similar to that of the electromagnetic particles. In summary, when the atmospheric depth exceeds the depth at which the shower develops to its maximum, the fluctuations of the various secondary components increase and the detection performance deteriorates; the muon fluctuation changes the least and the electromagnetic-particle fluctuation changes the most. Overall, without considering detector effects, this paper studies the energy reconstruction accuracy achievable with the various secondary components and their ability to identify the primary composition, which provides a reference for the selection of detector types, energy reconstruction variables and methods, and composition-identification variables in ground-based experiments. § ACKNOWLEDGEMENTS This work is supported by the Science and Technology Department of Sichuan Province, China (Grant No. 2021YFSY0031, 2020YFSY0016), the National Key R&D Program of China (Grant No. 2018YFA0404201), and the National Natural Science Foundation of China (Grant Nos. 12205244, 12147208).
[1] Prosin V V, Berezhnev S F, Budnev N M, et al. 2014 Nucl. Instrum. Methods Phys. Res. Sect. A 756 94
[2] Blümer J, Engel R, Hörandel J R 2009 Prog. Part. Nucl. Phys. 63 293
[3] De Rújula A 2006 Nucl. Phys. B 151 23
[4] Ahn H S, Allison P, Bagliesi M G 2009 Astrophys. J. 707 593
[5] Barao F 2004 Nucl. Instrum. Methods Phys. Res. Sect. A 535 134
[6] Alemanno F, An Q 2021 Phys. Rev. Lett. 126 201102
[7] An Q, Asfandiyarov R, Azzarello P 2019 Sci. Adv. 5 3793
[8] Chang J, Ambrosi G, An Q 2017 Astropart. Phys. 95 6
[9] Sparvoli R 2013 Nucl. Phys. B 239 115
[10] Antoni T, Apel W D, Badea A F 2005 Astropart. Phys. 24 1
[11] Bartoli B, Bernardini P, Bi X J, Cao Z 2017 Astropart. Phys. 93 46
[12] Ma X H, Bi Y J, Cao Z 2022 Chin. Phys. C 46 030001
[13] Abbasi R, Abdou Y, Ackermann M 2013 Nucl. Instrum. Methods Phys. Res. Sect. A 700 188
[14] Abbasi R U, Abe M, Abu-Zayyad T 2018 Astrophys. J. 865 74
[15] Amenomori M, Bao Y W 2021 Phys. Rev. Lett. 127 031102
[16] Apel W D, Arteaga-Velázquez J C 2011 Phys. Rev. Lett. 107 171104
[17] Apel W D, Arteaga-Velázquez J C 2013 Phys. Rev. D 87 081101
[18] Bartoli B, Bernardini P, Bi X J, Cao Z 2015 Phys. Rev. D 92 092005
[19] Aartsen M G, Abbasi R 2020 Phys. Rev. D 102 122001
[20] Aartsen M G, Ackermann M, Adams J 2019 Phys. Rev. D 100 082002
[21] Heck D, Knapp J, Capdevielle J N, Schatz G, Thouw T 1998 CORSIKA: A Monte Carlo Code to Simulate Extensive Air Showers
[22] Capdevielle J N, Cohen F 2005 J. Phys. G 31 507
[23] Apel W D, Badea A F, Bekk K 2006 Astropart. Phys. 24 467
[24] Feng Y L, Zhang Y, Chen T L 2019 Chin. Phys. C 43 075002
[25] Alexandru C E, Alexandru J, Lavinia-Elena G 2019 Chin. Phys. C 43 083001
[26] Li C 2018 Ph. D. Dissertation (Beijing: University of Chinese Academy of Sciences) (in Chinese) [李骢 2018 博士学位论文 (北京: 中国科学院大学)]
[27] Rivera-Rangel D, Arteaga-Velázquez J C 2021 37th International Cosmic Ray Conference (ICRC 2021), Online–Berlin, Germany, July 12–23, 2021, p3721
[28] Aharonian F, An Q, Axikegu 2021 Chin. Phys. C 45 025002
[29] Conceição R, Peres L 2021 Eur. Phys. J. C 81 1
[30] Yin L Q, Zhang S S, Cao Z, Bi B Y 2019 Chin. Phys. C 43 075001
[31] Jin C, Chen S Z, He H H 2020 Chin. Phys. C 44 065002
http://arxiv.org/abs/2307.01808v1
20230704162229
Fast computation of analytic capacity
[ "Mohamed M S Nasser", "Christopher C. Green", "Matti Vuorinen" ]
math.CV
[ "math.CV", "65E05, 30C40, 30C85, 65R20" ]
Fast computation of analytic capacity Mohamed M S Nasser^ a, Christopher C. Green^ a, Matti Vuorinen^ b ===================================================================== ^ aDepartment of Mathematics, Statistics & Physics, Wichita State University, Wichita, KS 67260-0033, USA [email protected], [email protected] ^ bDepartment of Mathematics and Statistics, University of Turku, Turku, Finland [email protected] A boundary integral equation method is presented for fast computation of the analytic capacities of compact sets in the complex plane. The method is based on using the Kerzman–Stein integral equation to compute the Szegö kernel and then the value of the Ahlfors map at the point at infinity. The proposed method can be used for domains with smooth and piecewise smooth boundaries. When combined with conformal mappings, the method can be used for compact slit sets. Several numerical examples are presented to demonstrate the efficiency of the proposed method. We recover some known exact results and corroborate the conjectural subadditivity property of analytic capacity. Keywords. Analytic capacity, multiply connected slit domain, boundary integral equation, Szegö kernel, Ahlfors map, special functions § INTRODUCTION Capacities – such as analytic, logarithmic, and conformal – are important tools in complex analysis and have several applications to problems in different fields, e.g., in approximation theory, potential theory, electronics, and fluid dynamics <cit.>. These capacities can be expressed explicitly in only a handful of special cases, and therefore numerical methods are needed to compute these capacities in the majority of instances. The Riemann mapping theorem states that any unbounded simply connected domain G in the extended complex plane ℂ∪{∞} with ∞∈ G, and whose boundary consists of more than one point, can be mapped one-to-one onto the unit disk 𝔻 by a conformal map f. If we assume that f(∞)=0 and f'(∞)>0, then this mapping f is unique and known as the Riemann mapping function. Here, the derivative of the analytic function f at the point at infinity is f'(∞)=lim_z→∞z[f(z)-f(∞)]. The so-called Ahlfors map can be regarded as an extension of the Riemann mapping function to multiply connected domains. That is, given a multiply connected domain G of connectivity m, the associated Ahlfors map is an m-to-one covering of G onto the unit disk 𝔻. If we assume that this Ahlfors map satisfies the conditions (<ref>), then it is unique <cit.>. Let us now introduce the notion of analytic capacity. Let E be a compact subset of the complex plane ℂ such that its complement G = (ℂ∪{∞}) \ E in the extended complex plane ℂ∪{∞} is an unbounded multiply connected domain of connectivity m with ∞∈ G. The analytic capacity of E is defined to be <cit.> γ(E)=sup|f'(∞)| where the supremum is taken over all analytic functions f:G→ℂ such that |f(z)|≤1 for all z∈ G, and f'(∞) is as in (<ref>). It is well known that analytic capacity is inextricably linked to the Ahlfors map <cit.>. If w=f(z) is the unique Ahlfors map from the unbounded multiply connected domain G in the z-plane onto the unit disk 𝔻 in the w-plane satisfying the conditions (<ref>), then the analytic capacity of the set E is given by <cit.> γ(E) = f'(∞). In particular, when E is compact and connected such that G = (ℂ∪{∞}) \ E is a simply connected domain, the Ahlfors map w=f(z) is a conformal map from G onto the unit disk and hence the analytic capacity γ(E) is equal to the logarithmic capacity of E. However, for a general compact set E, the analytic capacity is bounded above by the logarithmic capacity.
See <cit.> for details. Closed-form expressions for analytic capacity are rare and known only in a handful of cases, and it is informative to survey some of these briefly here. If E is a disk of radius r, then <cit.> γ(E)=r, and if E is a square with sides of length ℓ, then <cit.> γ(E)=ℓ Γ^2(1/4)/(4√(π^3)), where Γ(·) is the gamma function. For a complex line segment E=[a,b]⊂ℂ, we have <cit.> γ(E)=1/4|b-a|. For m non-overlapping real intervals E_j=[a_j,b_j] with a_1<b_1<⋯<a_m<b_m, if E=⋃_j=1^mE_j, then <cit.>: γ(E)=1/4|E|=1/4∑_j=1^m|E_j|=1/4∑_j=1^m(b_j-a_j). If E⊂ F⊂ℂ, then γ(E)≤γ(F). Furthermore, for all z,λ∈ℂ, γ(z+λ E)= |λ| γ(E). For a compact and connected set E⊂ℂ, we have diam(E)/4≤γ(E)≤diam(E), where diam(E) denotes the diameter of E. For more details, see <cit.>. When E and F are disjoint connected compact subsets of ℂ, Suita <cit.> proved that the subadditivity property γ(E∪ F)≤γ(E)+γ(F) holds. For general compact sets, the proof of this property is still an open problem. However, Tolsa <cit.> proved the semi-additivity of analytic capacity: there exists a constant c such that for all compact sets E,F⊂ℂ, the analytic capacity satisfies γ(E∪ F)≤ c(γ(E)+γ(F)). Moreover, proving the conjectural subadditivity of analytic capacity for arbitrary compact sets E,F⊂ℂ is equivalent to proving it for all disjoint compact sets E,F⊂ℂ that are finite unions of disjoint closed disks, all with the same radius <cit.>. Several numerical examples have been considered by Younsi & Ransford in <cit.> for purely circular compact sets. Numerical results for sets other than circular ones were also presented in <cit.>. All of these examples provide convincing evidence to suggest that the conjectural subadditivity property for analytic capacity is true. It should also be pointed out that, from (<ref>), in the case of multiple real slits, equality as opposed to inequality holds in (<ref>). The subadditivity property will be of significant interest in the ensuing discussion and will be corroborated numerically in several cases. Several numerical methods are available in the literature for computing the logarithmic and conformal capacities. One of these methods is based on the boundary integral equation (BIE) with the generalized Neumann kernel <cit.>. For the numerical computation of analytic capacity, to the best of our knowledge, the only available numerical method is that given in <cit.>. That method is based on using quadratic minimization to compute upper and lower bounds for the analytic capacity which, in principle, converge to its exact value. This method has been used before to compute the logarithmic capacity <cit.>. In this paper, we present a fast and accurate BIE method for the numerical computation of the analytic capacity. The method is based on using a BIE for the Szegö kernel (refer to <cit.> for the definition and basic properties of the Szegö kernel). The BIE has been used by Kerzman & Trummer <cit.> to compute the conformal mapping for simply connected domains and extended by Bell <cit.> to compute the Ahlfors map of bounded multiply connected domains. See also <cit.>. Our presented method will be used to compute the analytic capacity for a wider class of compact sets, including those with smooth boundaries, piecewise smooth boundaries, and sets consisting of only slits. Besides this introductory section, our paper is structured in the following way.
In Section 2, we introduce a numerical method for computing the analytic capacity for compact sets bordered by smooth or piecewise smooth Jordan curves, and several numerical examples for such sets are presented in Section 3. In Section 4, we consider compact slit sets. We provide our concluding remarks in Section 5. To complement our numerical work, in Appendix A, we re-derive an exact result for the analytic capacity of two collinear slits. § THE NUMERICAL METHOD §.§ Analytic capacity and the Szegö kernel Let E be a compact subset of the complex plane ℂ and let G = (ℂ∪{∞}) \ E be its complement in the extended complex plane. We assume that G is an unbounded multiply connected domain bordered by m smooth, or piecewise smooth, Jordan curves Γ_1,…,Γ_m. Domains bordered by slits will be considered in Section <ref> below. From (<ref>), the analytic capacity γ(E) is calculated by computing the derivative f'(∞) of the Ahlfors map f from the domain G onto the unit disk 𝔻 with the normalization (<ref>). A BIE method for computing the Ahlfors map for bounded multiply connected domains is presented in <cit.>; however, the domain G is unbounded, and therefore a preliminary step is required. We first conformally map the unbounded multiply connected domain G onto a bounded multiply connected domain D using the Möbius transformation ζ=M(z)=1/(z-α), where α is a point in the interior of any of the curves Γ_j. The point at infinity is mapped onto the origin, M(∞)=0∈ D. Let w=F(ζ) be the Ahlfors map from the bounded multiply connected domain D onto the unit disk such that F(0)=0 and F'(0)>0. It follows immediately that f(z)=F(M(z)), z∈ G, is the unique Ahlfors map from the unbounded domain G onto the unit disk with the normalization (<ref>). Note that f(∞)=F(M(∞))=F(0)=0 and f'(∞)=lim_z→∞z[f(z)-f(∞)]=lim_z→∞zf(z)=lim_z→∞zF(M(z)). Note also that ζ=M(z)=1/(z-α) if and only if z=M^-1(ζ)=α+1/ζ, and hence z→∞ if and only if ζ→ 0. Thus, since F(0)=0, f'(∞) =lim_ζ→ 0(α+1/ζ)F(ζ) =F'(0)>0. As D is a bounded multiply connected domain, the Ahlfors map w=F(ζ) from D onto the unit disk can be computed using the method presented by Bell <cit.>. However, computing the analytic capacity γ(E) requires only computing the derivative F'(0) since γ(E)=f'(∞)=F'(0). In fact, it follows from <cit.> that F'(0)=2π S(0,0) where S(ζ,0) is the Szegö kernel for the bounded multiply connected domain D with respect to the base point 0∈ D. Our numerical method is based on computing S(0,0) using the BIE related to the Szegö kernel in multiply connected domains <cit.>. Then γ(E) = f'(∞)=F'(0)= 2π S(0,0). §.§ Integral equation for the Szegö kernel Assume that each boundary component Γ_j is parametrized by a 2π-periodic function ζ_j(t), t∈ J_j=[0,2π], j=1,…,m. For domains with corners, the parametrization ζ_j(t) is defined as described in <cit.>. We define the total parameter domain J as the disjoint union of the m intervals J_j=[0,2π], j=1,…,m. The whole boundary Γ is therefore parametrized by ζ(t)=ζ_j(t) for t∈ J_j, j=1,…,m. See <cit.> for more details. Further, the boundary ∂ D of the bounded multiply connected domain D is parametrized by η(t)=1/(ζ(t)-α), t∈ J. The Szegö kernel for bounded simply connected domains can be computed by solving the second-kind Fredholm integral equation S(η(t),0)+∫_J A(η(t),η(s))S(η(s),0)|η'(s)|ds= 1/2πı η'(t)/|η'(t)|η(t), where A(η(t),η(s))=1/2πı η'(t)/|η'(t)|(η(t)-η(s)) -1/2πı η'(s)/|η'(s)|(η(s)-η(t)). The kernel A(η(t),η(s)) is continuous with A(η(t),η(t))=0.
The integral equation in (<ref>) is known as the Kerzman–Stein BIE <cit.>. It was proved in <cit.> that this BIE can be used also for the computation of the Szegö kernel for bounded multiply connected domains. Multiplying both sides of (<ref>) by η'(t)/η(t) and defining ϕ(t)=S(η(t),0)η'(t)/η(t), the BIE (<ref>) can be written as ϕ(t) +∫_J A(η(t),η(s))η'(t)/η(t)η(s)/η'(s)ϕ(s)|η'(s)|ds= ı/2π|η'(t)|/|η(t)|^2. Now, since η(t)=1/(ζ(t)-α) and ζ(t)=1/η(t)+α, it follows that ϕ(t) +∫_J A(1/(ζ(t)-α),1/(ζ(s)-α))(ζ(s)-α)/(ζ(t)-α)ζ'(t)/ζ'(s)|ζ'(s)|/|ζ(s)-α|^2ϕ(s)ds= ı/2π|ζ'(t)|, or equivalently, ϕ(t)-∫_J(1/2πıζ'(s)/|ζ'(s)|(ζ(s)-ζ(t)) -1/2πıζ'(t)/|ζ'(t)|(ζ(t)-ζ(s)))|ζ'(t)|ϕ(s)ds=ı/2π|ζ'(t)|. Taking the conjugate of both sides and then multiplying by ı/|ζ'(t)|, we obtain ı ϕ(t)/|ζ'(t)|+∫_J(1/2πıζ'(t)/|ζ'(t)|(ζ(t)-ζ(s)) -1/2πıζ'(s)/|ζ'(s)|(ζ(s)-ζ(t)))ı ϕ(s)/|ζ'(s)||ζ'(s)|ds=1/2π, which can be written in the concise form ψ(t)+∫_J A(ζ(t),ζ(s))ψ(s)|ζ'(s)|ds=1/2π, where ψ(t)=ı ϕ(t)/|ζ'(t)|. This BIE (<ref>) is a modification of the Kerzman–Stein BIE (<ref>). By (<ref>), computing the analytic capacity γ(E) requires computing the value of the Szegö kernel S(0,0). Since the Szegö kernel S(ζ,0) is an analytic function in the domain D, by the Cauchy integral formula, we have S(0,0)=1/2πı∫_∂ DS(ζ,0)/ζdζ =1/2πı∫_JS(η(t),0)/η(t)η'(t)dt. Then, by solving the BIE (<ref>) for ψ(t) and using (<ref>) and (<ref>), we have S(0,0) = 1/2πı∫_Jϕ(t)dt = 1/2π∫_Jψ(t)|ζ'(t)|dt. It then follows at once from (<ref>) that γ(E) = 2π S(0,0) = ∫_Jψ(t)|ζ'(t)|dt. It is immediate from (<ref>) that the integral ∫_Jψ(t)|ζ'(t)|dt must be real and hence γ(E) = Re[∫_Jψ(t)|ζ'(t)|dt] = ∫_J Re[ψ(t)]|ζ'(t)|dt. §.§ Numerical solution of the integral equation The Kerzman–Stein BIE (<ref>) has been used to compute conformal mappings for bounded and unbounded simply connected domains <cit.>, and has been extended in <cit.> to compute the Ahlfors map for bounded multiply connected domains. A combination of the Kerzman–Stein BIE (<ref>) and the Fast Multipole Method (FMM) <cit.> has been presented in <cit.> for computing conformal mappings for bounded simply connected domains. In this paper, to compute the analytic capacity γ(E), we solve the BIE (<ref>), which is a modified version of the Kerzman–Stein BIE, and we also employ the FMM in solving (<ref>). Since the integrand in (<ref>) is 2π-periodic, the BIE (<ref>) can best be discretized by the Nyström method with the trapezoidal rule <cit.>. For domains with smooth boundaries, we use the trapezoidal rule with equidistant nodes. We discretize each interval J_p=[0,2π], for p=1,2,…,m, by n equidistant nodes s_1, …, s_n where s_q = (q-1) 2 π/n, q = 1, …, n, and n is an even integer. Writing 𝐬=[s_1,…, s_n], we discretize the parameter domain J by the vector 𝐭 = [𝐬, 𝐬, …, 𝐬], which consists of m copies of 𝐬, i.e., 𝐭=[t_1,t_2,…,t_mn] where, for p=1,2,…,m and q = 1, …, n, t_(p-1)n+q=s_q. For a real or a complex function μ(ζ(t)) defined on the boundary Γ, the trapezoidal rule then yields ∫_Jμ(ζ(t))dt = ∑_p=1^m∫_J_pμ(ζ_p(t))dt ≈∑_p=1^m∑_q=1^n2π/nμ(ζ_p(s_q)) =∑_j=1^mn2π/nμ(ζ(t_j)). Discretizing the BIE (<ref>) using the trapezoidal rule (<ref>) and substituting t=t_i, we obtain the linear system ψ_n(t_i)+2π/n∑_j=1^mn A(ζ(t_i),ζ(t_j))|ζ'(t_j)|ψ_n(t_j)=1/2π, i=1,2,…,mn, where ψ_n is an approximation of ψ. Recall that A(ζ(t_i),ζ(t_j))=0 when i=j.
Using the definition of the kernel A(ζ(t),ζ(s)), we have, for i=1,2,…,mn, ψ_n(t_i)+ı/n∑_j=1, j≠ i^mn(ζ'(t_i)/|ζ'(t_i)|(ζ(t_i)-ζ(t_j)) +ζ'(t_j)/|ζ'(t_j)|(ζ(t_j)-ζ(t_i)))|ζ'(t_j)|ψ_n(t_j)=1/2π, or equivalently, ψ_n(t_i)+ı/nζ'(t_i)/|ζ'(t_i)|∑_j=1, j≠ i^mn1/(ζ(t_i)-ζ(t_j)) |ζ'(t_j)|ψ_n(t_j) - ı/n∑_j=1, j≠ i^mn1/(ζ(t_i)-ζ(t_j))ζ'(t_j)ψ_n(t_j)=1/2π, which can be written in the following concise form: 𝐱+ı/nζ'(𝐭)/|ζ'(𝐭)|B(𝐱 |ζ'(𝐭)|) - ı/nB(𝐱 ζ'(𝐭))=1/2π. Here, 𝐱=ψ_n(𝐭), the products involving the vectors 𝐱, ζ'(𝐭) and |ζ'(𝐭)| are taken elementwise, and B is the mn× mn matrix with entries (B)_ij= 0 for i=j and (B)_ij=1/(ζ(t_i)-ζ(t_j)) for i≠ j, i,j=1,2,…,mn. The linear system (<ref>) will be solved using the GMRES iterative method <cit.>, where the matrix-vector product can be computed using the FMM. If we define the left-hand side of (<ref>) to be a function of the unknown vector 𝐱, ℱ(𝐱)=𝐱+ı/nζ'(𝐭)/|ζ'(𝐭)|B(𝐱 |ζ'(𝐭)|) - ı/nB(𝐱 ζ'(𝐭)), then the value of the function ℱ(𝐱) can be computed quickly and accurately using the MATLAB function zfmm2dpart in the MATLAB toolbox FMMLIB2D developed by Greengard & Gimbutas <cit.>. This method for computing the solution ψ to the BIE (<ref>) is summarized in the following MATLAB function, where the tolerances for the FMM and the GMRES method are taken to be 0.5×10^-15 and 10^-14, respectively, and the GMRES method is run without restart:
function y = szegofmm (et,etp,psi,n)
% et, etp: values of zeta(t), zeta'(t) at the nodes; psi: right-hand side
Tet = etp./abs(etp); Tet(etp==0) = 0;      % unit tangent vector
a = [real(et.') ; imag(et.')];             % node coordinates for the FMM
m = length(et)/n-1;                        % so that (m+1)*n = total number of nodes
y = gmres(@(x)F(x),psi,[],1e-14,100);
function y = F(x)                          % applies the discretized operator via the FMM
b1 = [abs(etp).*conj(x)].';
[Ub1] = zfmm2dpart(5,(m+1)*n,a,b1,1);
Eb1 = (Ub1.pot).';
b2 = [abs(etp).*Tet.*x].';
[Ub2] = zfmm2dpart(5,(m+1)*n,a,b2,1);
Eb2 = (Ub2.pot).';
y = x+(1./(n*i)).*(-conj(Tet).*conj(Eb1)+Eb2);
end
end
The preceding method assumes that the boundaries of the domains of interest are smooth, i.e. without corners. In the case of domains with corners (excluding cusps), to obtain accurate results, a re-parametrization of the boundary of the domain is performed as described in <cit.>, which removes the singularity in the solution of the BIE at the corner points <cit.>. Assume that the boundary component Γ_j has ℓ corner points. We first parametrize each boundary component Γ_j by a 2π-periodic function ζ̂_j(t) for t∈ J_j=[0,2π]. The function ζ̂_j(t) is assumed to be smooth with ζ̂'_j(t)≠0 for all values of t∈ J_j such that ζ̂_j(t) is not a corner point, and ζ̂'_j(t) is assumed to have only discontinuities of the first kind at these corner points. At each corner point, the left tangent vector is taken to be the tangent vector at this point. As above, let J be the disjoint union of the m intervals J_j=[0,2π], j=1,2,…,m, and let ζ̂(t), t∈ J, be a parametrization of the whole boundary Γ. Then, we parametrize the boundary Γ by ζ(t)=ζ̂(δ(t)), where the function δ(t) is defined in <cit.>. With the new parametrization ζ(t), the BIE (<ref>) can be solved accurately using the above MATLAB function. However, for domains with corners, we usually need a larger number of points n (which should be a multiple of the number of corners on each boundary component) for discretizing the BIE compared to domains with smooth boundaries. See <cit.> for further details. Once the solution ψ of the BIE (<ref>) has been found, we can proceed to compute the analytic capacity γ(E) using the formula (<ref>). This can be undertaken using the following MATLAB function:
function cap = ancap(zet,zetp,n)
% zet, zetp: values of zeta(t), zeta'(t) at the nodes; n: nodes per boundary curve
h = 2*pi/n;                                % trapezoidal step size
rzet = 1/(2*pi)+zeros(size(zet));          % right-hand side of the BIE
psi = szegofmm(zet,zetp,rzet,n);
cap = sum(h*real(psi).*abs(zetp));         % gamma(E) = int Re(psi)|zeta'| dt
end
Various numerical examples will be presented in the following two sections; first, a brief usage sketch of the two functions above is given.
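As a hedged usage sketch (this example script is ours and is not part of the original code listings), the analytic capacity of the union of two disks can be approximated by sampling the boundary circles and calling ancap. We assume here the orientation convention that the unbounded domain G lies on the left of each boundary curve, so the circles are traversed clockwise.

% Usage sketch: gamma(E) for E = two unit disks centered at +2 and -2.
n    = 2^9;                            % nodes per boundary component
t    = (0:n-1).'*2*pi/n;               % equidistant nodes in [0,2*pi)
cen  = [2; -2];   rad = [1; 1];        % centers and radii (illustrative values)
zet  = [];  zetp = [];
for k = 1:2
    zet  = [zet;  cen(k) + rad(k)*exp(-1i*t)];   % clockwise parametrization
    zetp = [zetp; -1i*rad(k)*exp(-1i*t)];        % its derivative
end
cap = ancap(zet, zetp, n)              % compare with the exact two-disk value in the next section

If the orientation convention used in the underlying codes is the opposite one, the circles should instead be traversed counterclockwise.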
We will take, in turn, domains bounded by Jordan curves and domains bordered by slits. § DOMAINS BORDERED BY JORDAN CURVES In this section, we will use the method presented in the previous section to compute numerical approximations γ̃(E) to the analytic capacity γ(E) of compact sets bordered by smooth and piecewise smooth boundaries. Consider the compact set E=E_1∪ E_2 where E_1,2={z ∈ℂ : |z± c|≤ r} and 0<r<c. Then <cit.> γ(E)=r(1-q)θ_2^2(q)/(2√(q)), q=(p-√(p^2-1))/(p+√(p^2-1)), p=c/r, where θ_2(·) is the second Jacobi theta function defined in (<ref>). The relative errors in the computed approximate values γ̃(E) obtained with n=2^9 are given in Table <ref> for several values of c and r. For c=2 and r=1, our obtained value is γ̃(E)=1.875595019097120, which is in the interval (1.875595019097112,1.875595019097164) given in <cit.>. The exact value is γ(E)=√(3) θ_2^2((2-√(3))^2)≈ 1.8755950190971197. Let E be the square with corners 1,-ı,-1,ı. In this case, the domain G = (ℂ∪{∞}) \ E is an unbounded simply connected domain. Thus, by (<ref>), the analytic capacity of E equals its logarithmic capacity, γ(E)=Γ^2(1/4)/(2√(2π^3))≈ 0.834626841674073. In this example, the boundary of the compact set E has corners. We use our numerical method with various values of n to approximate the analytic capacity γ(E) and the results are presented in Table <ref>. As can clearly be seen in Table <ref>, the relative error decreases as n increases. As a validation of our numerical method, let us also consider the four compact sets shown in Figure <ref>. These sets were considered in <cit.>. We are not aware of explicit analytic expressions for the analytic capacities γ(E) of these sets. The approximate values of γ(E) computed by our method are presented in Table <ref>. It is important to point out that for the union of 4 ellipses, the elapsed time of 0.55 seconds suggests that our computations are more than 1000 times faster than the method used for the same problem in <cit.>; however, around a decade has passed since the computations in <cit.> were performed. We also note that the speed advantage of our method over that of <cit.> is greater for non-circular compact sets than for circular ones. Consider the square [-2,2]×[-2,2] and the four sub-squares with centers ± 1 ±ı. Let ε>0 be a real parameter. We consider three cases (i)-(iii). In case (i), the centers of the four sub-squares are moved via the parameter ε to the points (1+ε)(± 1 ±ı) (see Figure <ref> (left)). In case (ii), the lower two sub-squares are fixed and the centers of the upper sub-squares are moved to the points (1+ε)(± 1 + ı) (see Figure <ref> (middle)). Finally, in case (iii), three of the sub-squares are fixed and the center of the remaining sub-square is moved to the point (1+ε)(1+ı) by increasing the value of ε (see Figure <ref> (right)). Let us label the union of the compact sets generated by ε in each case by E_ε. For ε=0, E_0=[-2,2]×[-2,2] is the original square. Note that the length of each side of the original square is 4, and hence, by (<ref>), γ(E_0)=Γ^2(1/4)/√(π^3). Further, let us label the sub-squares by F_1,…,F_4; the side length of each of these sub-squares is 2, and hence, by (<ref>), γ(F_j)=Γ^2(1/4)/(2√(π^3))=γ(E_0)/2, j=1,…,4. In each of the three cases, we compute the analytic capacity γ(E_ε) as a function of ε; the results are presented in Figure <ref>. The results indicate that γ(E_0) is a lower bound for γ(E_ε) and ∑_j=1^4γ(F_j)=2γ(E_0) is an upper bound for γ(E_ε).
These results collectively provide numerical evidence to corroborate the conjectural subadditivity property of analytic capacity for these compact sets. Our results also demonstrate the expected phenomenon that the analytic capacity increases as the sub-squares move further apart from each other, and that the values of the analytic capacity in case (i) are larger than those in cases (ii) and (iii), consistent with the larger number of sub-squares being moved. Consider the four disks with radius 1 and centers at ±1±ı. Let ε>0 be a real parameter. We consider three cases (i)-(iii). In case (i), the centers of the four disks are moved via the parameter ε to the points (1+ε)(± 1 ±ı) (see Figure <ref> (left)). In case (ii), the lower two disks are fixed and the centers of the upper disks are moved to the points (1+ε)(± 1 + ı) (see Figure <ref> (middle)). Finally, in case (iii), three of the disks are fixed and the center of the remaining disk is moved to the point (1+ε)(1+ı) by increasing the value of ε (see Figure <ref> (right)). We label the union of the compact sets generated by ε in each case by E_ε. Let us label the disks by F_1,…,F_4; then γ(F_j)=1, j=1,…,4. In each of the three cases, we compute the analytic capacity γ(E_ε) as a function of ε; the results are presented in Figure <ref>. The results indicate that ∑_j=1^4γ(F_j)=4 is an upper bound for γ(E_ε). These results collectively provide numerical evidence to corroborate the conjectural subadditivity property of analytic capacity for these compact sets. They also demonstrate the expected phenomenon that the analytic capacity increases as the disks move further apart from each other, and that the values of the analytic capacity in case (i) are larger than those in cases (ii) and (iii), consistent with the larger number of disks being moved. We consider m=100 random non-overlapping disks. For k=1,2,…,100, the radius r_k of the disk E_k is chosen randomly in (0.2,0.8) and its center c_k is chosen in the square [-10,10]×[-10,10] such that all disks are non-overlapping. Then we randomly choose an integer ℓ∈[1,99]. We define E=⋃_k=1^ℓE_k, F=⋃_k=ℓ+1^mE_k. See Figure <ref> (right) for an example of such compact sets E and F. For this problem, we run our method 50 times to obtain 50 different locations for these disks as well as different sets E and F. For each run j, we use the above presented method with n=2^9 to compute approximate values of the quantities γ(E), γ(F) and γ(E∪ F). The values of the ratios γ(E∪ F)/(γ(E)+γ(F)), γ(E∪ F)/∑_k=1^mγ(E_k) are plotted as functions of the run number j as shown in Figure <ref> (left). As can be seen from the graphs in this figure, we have verified that the conjectural subadditivity property of analytic capacity holds for each of the 50 random compact sets we considered, and in particular that γ(E∪ F)≤γ(E)+γ(F)≤∑_k=1^mγ(E_k)=∑_k=1^mr_k. Again, the conjectural subadditivity property for analytic capacity holds for the compact sets in this example. It is particularly interesting that the second ratio in (<ref>), as a function of j, appears to be almost constant in light of the fact that each of the compact sets was generated randomly. To investigate this further, we consider three more cases in which the centers of the disks and the elements of the two sets E and F are chosen as above, but the radii of the disks are chosen randomly in the interval (0.2,0.4) in the first case and in the interval (0.6,0.8) in the second case, while in the third case all radii are fixed and equal to 0.5. The results are shown in Figure <ref>.
It is apparent that, for the first two of these cases, both ratios in (<ref>) behave qualitatively as in Figure <ref> (left): roughly speaking, the graphs of these ratios are shifted up for radii in (0.2,0.4) and shifted down for radii in (0.6,0.8). For the fixed radii, there is very little change compared to Figure <ref> (left). In this example, we study compact sets consisting of disjoint disks of equal radii. In particular, we will validate numerically the conjectural subadditivity property of analytic capacity in several cases. The consideration of such compact sets is important since proving the conjectural subadditivity property of analytic capacity for arbitrary compact sets E,F⊂ℂ is equivalent to proving it for all disjoint compact sets that are finite unions of disjoint closed disks, all with the same radius <cit.>. Let E_m be a union of m disjoint disks and F be a disk with center x+ı y such that these m+1 disks are non-overlapping and of unit radii. Let the real function u(x,y) be defined by u(x,y) = γ(E_m∪ F), i.e. the function u(x,y) is defined for all points (x,y)∈ℝ^2 such that the disk F does not overlap any of the other m disks. Note that the domain of definition of the function u(x,y) is unbounded. Hence, in our numerical computations, we consider only the region -10≤ x,y≤ 10 and we assume that the minimum distance between any two disks is 0.02. We compute approximate values of the function u(x,y) using n=2^10 and then plot several level curves of u(x,y). We consider four cases of m, namely m=1,2,3,4. For m=1, E_m consists of only one disk which is assumed to be the unit disk. For m=2,3,4, we assume that the centers of the disks forming E_m are 5e^2kπı/m, k=1,2,…,m. The approximate values γ̃(E_m) of the analytic capacity of E_m are presented in the following table.
m   γ̃(E_m)
1   1
2   1.98000206142844
3   2.88420404308815
4   3.67012955644439
The computed level curves of the function u(x,y) are presented in Figure <ref>. The presented results demonstrate that the values of the analytic capacity γ(E_m∪ F) increase when F moves away from E_m and decrease when F moves towards E_m. In particular, when F is far away from E_m, we have γ(E_m∪ F) ≈γ(E_m)+γ(F). The presented results also illustrate that γ(E_m) ≤γ(E_m∪ F) ≤γ(E_m)+γ(F)=γ(E_m)+1, which indicates that the conjectural subadditivity property for analytic capacity again holds in this example. Finally, we compute approximate values γ̃(E_m) of the analytic capacity for the above compact sets E_m, this time assuming that the centers of the disks are re^2kπı/m, k=1,2,…,m, for 1.5≤ r≤ 20. The ratios γ̃(E_m)/m are plotted in Figure <ref> as functions of r. As we can see, each of these ratios tends to 1 as r increases; that is, the analytic capacity of E_m (the union of m disks) tends to the sum of the individual analytic capacities of these m disks, which is equal to m. This provides further numerical evidence to support the validity of the conjectural subadditivity property of analytic capacity. § DOMAINS BORDERED BY SLITS In this section, we will use our numerical method to compute numerical approximations γ̃(E) to the analytic capacity γ(E) of sets consisting of slits. We will consider here only rectilinear slits; however, the presented method can be extended to other types of slits. For a_1,…,a_m,b_1,…,b_m∈ℂ such that the slits E_j=[a_j,b_j] are non-overlapping for j=1,2,…,m, let E=⋃_j=1^mE_j and let Ω=(ℂ∪{∞})\ E, i.e., Ω is the unbounded multiply connected slit domain obtained by removing the m slits E_1,…,E_m from the extended complex plane.
The method presented in Section <ref> is not directly applicable to such a domain Ω. However, an iterative method has been presented in <cit.> for constructing a preimage unbounded multiply connected domain G bordered by smooth Jordan curves and the unique conformal mapping ζ=Φ(z), Φ : G→Ω, such that Φ is normalized near infinity by the condition Φ(z)=z+O(1/z). This method has been applied in several other works, see e.g., <cit.>. The inverse function z=Φ^-1(ζ) is then the conformal mapping from Ω onto G. The method presented in Section <ref> is now applicable to the new domain G. Let w=f(z) be the Ahlfors map from the unbounded domain G onto the unit disk with the normalization f(∞)=0 and f'(∞)>0. Then the function w=g(ζ)=f(Φ^-1(ζ)) is an Ahlfors map from the unbounded slit domain Ω onto the unit disk 𝔻. We have g(∞)=f(Φ^-1(∞))=f(∞)=0, and g'(∞)=lim_ζ→∞ζ[g(ζ)-g(∞)]=lim_ζ→∞ζ f(Φ^-1(ζ)). If z=Φ^-1(ζ), then ζ→∞ as z→∞, and g'(∞)=lim_z→∞Φ(z) f(z)=lim_z→∞Φ(z)/z lim_z→∞zf(z)=f'(∞), where we used (<ref>) and the fact that lim_z→∞zf(z)=f'(∞). Thus, by computing the preimage domain G and the conformal mapping Φ : G→Ω normalized by the condition (<ref>), we will have γ(E)=g'(∞)=f'(∞) where w=f(z) is the Ahlfors map from the unbounded domain G onto the unit disk with the normalization f(∞)=0 and f'(∞)>0. Since the domain G is bordered by smooth Jordan curves, the value of f'(∞) can be computed as explained in Section <ref>. We consider three examples. In our first example, we consider rectilinear slits on the real line. We know the exact value of the analytic capacity in this case and hence the error in the computed approximate values can be calculated. Examples with unknown explicit formulae are also presented. We consider several of the compact sets used in the process of generating the middle-thirds Cantor set. Let E_0=[-1,1], and let E_k for k≥1 be defined recursively by E_k=1/3(E_k-1-2)⋃1/3(E_k-1+2). Note that E_k consists of m=2^k sub-intervals of [-1,1], each of length 2/3^k. We denote these sub-intervals by I_j for j=1,2,…,m. By (<ref>), the exact value of γ(E_k) is known and given by γ(E_k)=1/4|E_k|=1/4∑_j=1^m|I_j|=1/2 (2/3)^k, from which it is immediate that γ(E_k)→0 as k→∞. Note for this example that γ(E_k)=γ(⋃_j=1^mI_j)=∑_j=1^2^kγ(I_j). The proposed method is used to compute approximate values γ̃(E_k) of the analytic capacity γ(E_k) for k=1,2,…,10 and the obtained results are presented in Table <ref>. We also compute the relative error in the computed approximate values. As can be seen in Table <ref>, our numerical method gives accurate results for sets consisting of a very high number of bordering slits. We next consider the union of two equal rectilinear slits of unit length: one slit F_1=[0.1,1.1] is fixed on the real line, and the other is taken to be F_2,ε=[0.1e^ıεπ,1.1e^ıεπ] where we vary ε between zero and one. A schematic of this configuration is shown in Figure <ref> (right) for ε=1/3. Let E_ε=F_1 ∪ F_2,ε. It is known that γ(F_1)=γ(F_2,ε)=1/4, by (<ref>). When ε=1, we have E_1=[0.1,1.1]∪[-1.1,-0.1] and hence γ(E_1)=1/2, by (<ref>). When ε=0, we have E_0=F_1=F_2,0=[0.1,1.1] and hence γ(E_0)=1/4. For 0<ε<1, no exact value of γ(E_ε) is known. We use our method to compute γ(E_ε) for 0.05≤ε≤1 and the numerical results are presented in Figure <ref> (left). It is clear that 1/4=γ(E_0)≤γ(E_ε) ≤γ(E_1)=1/2. That is, the value of the analytic capacity of E_ε is maximal when the two slits are collinear. Furthermore, we always have γ(E_ε)=γ(F_1 ∪ F_2,ε) ≤γ(F_1)+γ(F_2,ε)=1/2.
We next consider the union of four equal rectilinear slits of length 2-2ε, 0<ε<1, positioned so that the four slits form the sides of the square [-1,1]×[-1,1] when ε=0. A schematic of this configuration is shown in Figure <ref> (right) for ε=0.1. We denote these four slits by F_k,ε with k=1,2,3,4, and we define E_ε=∪_k=1^4F_k,ε. We use our method to approximate γ(E_ε) for 0.01≤ε≤0.99 and the obtained numerical results are presented in Figure <ref> (left). As ε→0, it is clear that the approximate values γ̃(E_ε) approach γ(E_0)= Γ^2(1/4)/(2√(π^3))≈ 1.1803405990161, i.e., the value of the analytic capacity of the square [-1,1]×[-1,1]. Furthermore, we always have γ̃(E_ε)≤∑_k=1^4γ(F_k,ε)=2-2ε. § CONCLUDING REMARKS This paper has shown how to use a numerical boundary integral equation method to quickly and accurately compute analytic capacity, an important conformal invariant. This quantity has been widely studied from a mostly theoretical perspective, with several deep analytical results having been established <cit.>. Analytic capacity is intimately connected to the Ahlfors map and the Szegö kernel, two fundamental objects in complex analysis, and arises from the generalization of the Riemann map to multiply connected domains. In our work, two particular classes of configurations were considered over which our calculations of analytic capacity were performed: compact sets bounded by smooth and piecewise smooth Jordan arcs, and domains consisting of a finite number of rectilinear slits. To the best of our knowledge, numerical computations of the analytic capacity associated with such slit domains have not yet appeared in the literature. Throughout, we made connections with previous results; in particular, we have been able to corroborate the bounds found by Younsi & Ransford <cit.> for the analytic capacity of several compact sets they considered, and we were able to validate numerically the conjectural subadditivity property of analytic capacity (proved by Suita <cit.> for disjoint connected compact sets) for numerous configurations. We were also able to validate numerically other exact results for analytic capacity, and to illustrate several of its properties. Furthermore, the presented numerical results demonstrate that the analytic capacity γ(E∪ F) is maximized when F is furthest from E and minimized when F is closest to E. Our work has been mainly numerical in approach, and the key to its success lies in the BIE scheme based on the Kerzman–Stein BIE <cit.> and the FMM <cit.>. The presented method can be used for domains with smooth and piecewise smooth boundaries as well as for domains with many boundary components. We used the method also to approximate the analytic capacity for compact sets consisting of rectilinear slits; however, for the latter case, a preliminary conformal mapping step is required, which has also been shown to be expedient in other works <cit.>. Future work will focus on numerical computations of analytic capacity for further kinds of compact sets. We also plan to broaden our search for exact results for analytic capacity; we corroborated an existing exact result in Appendix <ref> for a doubly connected slit domain. Arguably, judicious use of results in relevant special function theory will be useful to this end, although such endeavors are expected to present various challenges. § ANALYTIC CAPACITY OF TWO COLLINEAR SLITS It follows from (<ref>) and (<ref>) that if E=[-b,-a]∪[a,b] with two real numbers 0<a<b, then the analytic capacity of E is known exactly and given by γ(E)=γ([-b,-a]∪[a,b])=(b-a)/2=γ([-b,-a])+γ([a,b]).
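As a hedged numerical sanity check of this identity, the following MATLAB sketch (ours, not part of the derivation) evaluates the theta-function representation obtained in the remainder of this appendix, namely γ(E)=F'(-√(p)) ψ̂(-√(p)) with p, θ_j, F'(-√(p)) and ψ̂(-√(p)) as defined below, and compares it with (b-a)/2. The endpoint values a=0.3, b=1.7 and the truncation of the infinite products at 200 factors are arbitrary choices.

% Hedged sanity check of gamma([-b,-a] U [a,b]) = (b-a)/2 via the
% theta-function formulas derived below (a, b and the truncation are arbitrary).
a = 0.3;  b = 1.7;                       % any 0 < a < b
r = (b-a)/(b+a);
Kr  = ellipke(r^2);                      % K(r); MATLAB's ellipke uses the parameter m = r^2
Kpr = ellipke(1-r^2);                    % K'(r) = K(sqrt(1-r^2))
p   = exp(-pi*Kpr/Kr);                   % p = exp(-2*mu(r)), mu(r) = (pi/2)*K'(r)/K(r)
j   = (1:200).';                         % truncation of the infinite products
th2 = @(q) 2*q^(1/4)*prod((1-q.^(2*j)).*(1+q.^(2*j)).^2);
th3 = @(q) prod((1-q.^(2*j)).*(1+q.^(2*j-1)).^2);
th4 = @(q) prod((1-q.^(2*j)).*(1-q.^(2*j-1)).^2);
Fp   = th2(p)^2/(2*sqrt(p));                                % F'(-sqrt(p))
psih = 2*a*sqrt(p)*th3(sqrt(p))/(th4(sqrt(p))*th4(p)^2);    % psihat(-sqrt(p))
disp([Fp*psih, (b-a)/2])                 % the two values should agree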
Let us now present an alternative derivation of this exact result using special function theory. For the set E=[-b,-a]∪[a,b], let Ω be the unbounded doubly connected domain Ω = \ E. Let ζ=ϕ(z) be the conformal mapping from Ω onto the concentric annular domain D={ζ∈ | p<|ζ|<1}, and let z=ψ(ζ) be its inverse. The positive constant p, which is given by formula (<ref>) below, is uniquely determined by Ω. Owing to the left-right symmetry of the domains Ω and G about the imaginary axis, we can assume that ψ(√(p))=0 and ψ(-√(p))=∞, and hence ϕ(∞)=-√(p). Let w=F(ζ) be the unique Ahlfors map from the annular domain D onto the unit disk such that F(-√(p))=0 and F'(-√(p))>0. Then w=f(z)=F(ϕ(z)) is an Ahlfors map from the domain Ω onto the unit disk with f(∞)=F(ϕ(∞))=F(-√(p))=0 and f'(∞)=lim_z→∞z[f(z)-f(∞)] =lim_z→∞zF(ϕ(z)) =lim_ζ→-√(p)ψ(ζ)F(ζ), since ζ→-√(p) as z→∞. Note that ψ(ζ) has a simple pole at ζ=-√(p). Writing ψ̂(ζ)=(ζ+√(p))ψ(ζ), it then follows that f'(∞) =lim_ζ→-√(p)F(ζ)/ζ+√(p) lim_ζ→-√(p)ψ̂(ζ) =F'(-√(p))ψ̂(-√(p)). We will choose ζ=ϕ(z), and hence z=ψ(ζ), such that ψ̂(-√(p))=lim_ζ→-√(p)(ζ+√(p))ψ(ζ)>0, and thus f'(∞)=F'(-√(p))ψ̂(-√(p))>0. Therefore w=f(z) given by (<ref>) is the unique Ahlfors map from the unbounded domain Ω onto the unit disk with the normalization f(∞)=0 and f'(∞)>0. The analytic capacity of E follows at once: γ(E)=f'(∞)=F'(-√(p))ψ̂(-√(p)). Let us now derive explicit formulae for the values of F'(-√(p)) and ψ̂(-√(p)), and hence an explicit formula for γ(E). §.§ Special functions It follows from <cit.> that the explicit conformal capacity of the set E is given by (E)=π/μ(b-a/b+a) where μ is the decreasing homeomorphism μ (0,1)→(0,∞) defined by <cit.> μ(r)= π/2 '(r)/(r). Here, (r) and '(r) are the complete elliptic integrals of the first kind: (r)=∫_0^1dx/√((1-x^2)(1-r^2x^2)), '(r)=(r'), r'= √(1-r^2) . Owing to the fact that conformal capacity is conformally invariant, (E) is the same as (D) which is given by <cit.> (D)=2π/log(1/p). Consequently, we have the following explicit formula for p: p=exp(-2μ(b-a/b+a)). Note that 0<p<1. With this constant p, let us now introduce the function P(ζ)=(1-ζ)∏_j=1^∞(1-p^2jζ)(1-p^2jζ^-1) =(1-ζ)P̂(ζ) which is the Schottky-Klein prime function associated with the annular domain D, up to a pre-multiplicative constant <cit.>. Further, for the annular domain D and for a∈ D, the Szegö kernel S(ζ,a) has the following infinite product representation <cit.>: S(ζ,a) =1/2π∏_j=0^∞(1+aζ p^2j+1)(aζ+p^2j+1)(1-p^2j+2)^2/(1-aζ p^2j)(aζ-p^2j+2)(1+p^2j+1)^2. In what follows, we need the following convergent infinite product forms for the Jacobi theta functions <cit.>: θ'_1(0,p) = 2p^1/4∏_j=1^∞(1-p^2j)^3 θ_2(0,p) = 2p^1/4∏_j=1^∞(1-p^2j)(1+p^2j)^2 θ_3(0,p) = ∏_j=1^∞(1-p^2j)(1+p^2j-1)^2 θ_4(0,p) = ∏_j=1^∞(1-p^2j)(1-p^2j-1)^2. It is a fact that θ_1(0,p)=0 and <cit.> θ'_1(0,p)=θ_2(0,p)θ_3(0,p)θ_4(0,p). We will also need the following lemmas. With p as in (<ref>), we have θ_3(0,√(p))^2 = θ_2(0,p)^2+θ_3(0,p)^2 θ_3(0,√(p))θ_4(0,√(p)) = θ_4(0,p)^2. Proof. (<ref>) is obtained by putting x=0 and q=√(p) in <cit.>. (<ref>) is obtained by putting y=0 and q=√(p) in <cit.> and using the fact that θ_1(0,p)=0. With p as in (<ref>), we have √(b/a) =θ_3(0,√(p))/θ_4(0,√(p)). Proof. Let r=b-a/b+a, r'=√(1-r^2)=2√(ab)/b+a. It then follows from (<ref>) that p=exp(-2μ(r)), and from <cit.> that b-a/b+a = 4√(p)∏_j=1^∞(1+p^2j/1+p^2j-1)^4 2√(ab)/b+a = ∏_j=1^∞(1-p^2j-1/1+p^2j-1)^4. Dividing (<ref>) by (<ref>), we obtain b-a/2√(ab)=4√(p)∏_j=1^∞(1+p^2j/1-p^2j-1)^4. 
Adding the reciprocal of (<ref>) to (<ref>) produces √(b/a)=4√(p)∏_j=1^∞(1+p^2j/1-p^2j-1)^4 +∏_j=1^∞(1+p^2j-1/1-p^2j-1)^4. Finally, it follows from (<ref>), (<ref>), (<ref>) and (<ref>) that √(b/a) =θ_2(0,p)^2+θ_3(0,p)^2/θ_4(0,p)^2 which, by (<ref>) and (<ref>), yields (<ref>). With p as in (<ref>), we have b-a/2√(ab)=θ_2(0,p)^2/θ_4(0,p)^2. Proof. (<ref>) follows from (<ref>), (<ref>) and (<ref>). §.§ Analytic results The doubly connected domains Ω = \ E and D={ζ∈ | p<|ζ|<1} are conformally equivalent. Since the value of p is known and given by (<ref>), an explicit formula for the inverse conformal mapping z=ψ(ζ) from the annulus D onto the unbounded domain Ω can be written in terms of the prime function P(ζ). The explicit formula is given by <cit.> ψ(ζ)=C P(ζ/√(p)) P(ζ√(p))/P(-ζ/√(p)) P(-ζ√(p)) where C is a constant which will be fixed below. This formula for ψ(ζ) can also be written as ψ(ζ)=1/ζ+√(p)ψ̂(ζ), ψ̂(ζ)=√(p)C P(ζ/√(p)) P(ζ√(p))/P̂(-ζ/√(p)) P(-ζ√(p)). For the normalization of the Ahlfors map below, it is required that ψ̂(-√(p))>0. The constant C can therefore be determined by fixing the image of one point on ∂ D, say 1∈∂ D, such that ψ̂(-√(p))>0. Note that ψ̂(1)=√(p)C P(1/√(p)) P(√(p))/P̂(-1/√(p)) P(-√(p)) and ψ̂(-√(p))=√(p)C P(-1) P(-p)/P̂(1) P(p), where P(-1)>0, P(-p)>0, P̂(1)>0, and P(p)>0. Thus ψ̂(-√(p))>0 if C is a positive real number. Using the functional identity P(1/ζ)=(-1/ζ)P(ζ), we have P(1/√(p)) P(√(p))=-1/√(p)(P(√(p)))^2, and P̂(-1/√(p)) P(-√(p))=√(p)/1+√(p)P(-1/√(p)) P(-√(p)) =1/1+√(p)(P(-√(p)))^2. Hence ψ(1)=1/1+√(p)ψ̂(1)=-C (P(√(p)))^2/(P(-√(p)))^2. Since C is a positive real number, we fix ψ(1)=-a, and hence C=a(P(-√(p)))^2/(P(√(p)))^2. Consequently, it follows from (<ref>) that ψ̂(ζ)=a √(p)(P(-√(p)))^2/(P(√(p)))^2P(ζ/√(p)) P(ζ√(p))/P̂(-ζ/√(p)) P(-ζ√(p)). With ψ̂(ζ) as in (<ref>), we have ψ̂(-√(p))=2a√(p)θ_3(0,√(p))/θ_4(0,√(p))1/θ_4(0,p)^2. Proof. Since P(-1)=2P̂(-1), it follows from (<ref>) that ψ̂(-√(p))=2a√(p)(P(-√(p)))^2/(P(√(p)))^2P̂(-1)/P̂(1)P(-p)/P(p). Using the definition (<ref>) of the prime function P(ζ), we obtain (P(-√(p)))^2/(P(√(p)))^2 =(1+√(p))^2/(1-√(p))^2∏_j=1^∞(1+p^2j+1/2)^2(1+p^2j-1/2)^2/(1-p^2j+1/2)^2(1-p^2j-1/2)^2 which, after expanding and collecting terms and using the identities (<ref>)–(<ref>), implies that (P(-√(p)))^2/(P(√(p)))^2 = ∏_j=1^∞(1+(√(p))^2j-1)^2/(1-(√(p))^2j-1)^2 =θ_3(0,√(p))/θ_4(0,√(p)). Similarly, we have P̂(-1)/P̂(1) =∏_j=1^∞(1+p^2j)^2/(1-p^2j)^2 =θ_2(0,p)/θ'_1(0,p) =1/θ_3(0,p)θ_4(0,p), and P(-p)/P(p) =1+p/1-p∏_j=1^∞(1+p^2j+1)/(1-p^2j+1)(1+p^2j-1)/(1-p^2j-1) =∏_j=1^∞(1+p^2j-1)^2/(1-p^2j-1)^2 =θ_3(0,p)/θ_4(0,p). (<ref>) then follows from (<ref>)–(<ref>). Let w=F(ζ) be the Ahlfors map from the annular domain D onto the unit disk such that F(-√(p))=0 and F'(-√(p))>0. Then F'(-√(p))=1/2√(p)θ_2(0,p)^2. Proof. Let S(ζ,-√(p)) be the Szegö kernel for the annulus domain D with the basepoint -√(p)∈ D. Then <cit.> F'(-√(p))=2π S(-√(p),-√(p)) which in view of (<ref>) implies that F'(-√(p)) = ∏_j=0^∞(1+p^2j+2)(1+p^2j)(1-p^2j+2)^2/(1-p^2j+1)(1-p^2j+1)(1+p^2j+1)^2 =∏_j=1^∞(1+p^2j)(1+p^2j-2)(1-p^2j)^2/(1-p^2j-1)^2(1+p^2j-1)^2. Finally, after expanding and collecting terms and using the identities (<ref>)–(<ref>), we reach F'(-√(p)) =2∏_j=1^∞(1+p^2j)^2(1-p^2j)^2/(1-p^2j-1)^2(1+p^2j-1)^2 =1/2√(p)θ_2(0,p)θ'_1(0,p)/θ_4(0,p)θ_3(0,p). The proof then follows from (<ref>). Let E=[-b,-a]∪[a,b] with two real numbers 0<a<b. Then the analytic capacity of E is known exactly and given by γ(E)=1/2(b-a). Proof. 
It follows from (<ref>), (<ref>) and (<ref>) that γ(E)=f'(∞) = ψ̂(-√(p)) F'(-√(p)) = aθ_3(0,√(p))/θ_4(0,√(p))θ_2(0,p)^2/θ_4(0,p)^2. Then, by (<ref>) and (<ref>), we have γ(E) =a √(b/a)b-a/2√(ab) = b-a/2, and the theorem is proved. 10 Crowdy1 D.G. Crowdy. Finite gap Jacobi matrices and the Schottky–Klein prime function. Comput. Methods Funct. Theory, 17:319–341, 2017. Dav A.M. Davie. Analytic capacity and approximation problems. Trans. Amer. Math. Soc., 171:409–444, 1972. Gar J. Garnett. Analytic Capacity and Measure. Springer-Verlag, Berlin, 1972. Mur94 T. Murai. Analytic capacity (a theory of the Szegö kernel function). In K. Nomizu, editor, Selected Papers on Analysis, Probability, and Statistics, pages 51–74. AMS, 1994. Tol X. Tolsa. Analytic capacity, the Cauchy transform, and non-homogeneous Calderón-Zygmund theory. Springer, Heidelberg, 2014. Za L. Zalcman. Analytic capacity and rational approximation. Springer, Berlin, 2006. LSN17 J. Liesen, O. Sète, and M.M.S. Nasser. Fast and accurate computation of the logarithmic capacity of compact sets. Comput. Methods Funct. Theory, 17:689–713, 2017. Suita N. Suita. On subadditivity of analytic capacity for two continua. Kodai Math. J., 7:73–75, 1984. Tol03 X. Tolsa. Painlevé's problem and the semiadditivity of analytic capacity. Acta Math., 190:105–149, 2003. Mel M.S. Mel'nikov. Analytic capacity: discrete approach and curvature of measure. Sb. Math., 186:827, 1995. YR13 M. Younsi and T. Ransford. Computation of analytic capacity and applications to the subadditivity problem. Comput. Methods Funct. Theory, 13:337–382, 2013. YR18 M. Younsi. Analytic capacity: computation and related problems. Theta Ser. Adv. Math., 22:121–152, 2018. Nvm M.M.S Nasser and M. Vuorinen. Numerical computation of the capacity of generalized condensers. J. Comput. Appl. Math., 377:112865, 2020. Ran10 T. Ransford. Computation of logarithmic capacity. Comput. Methods Funct. Theory, 10:555–578, 2010. BelBook S. Bell. The Cauchy Transform, potential theory and conformal mapping. CRC Press, Boca Raton, 2 edition, 2016. Ker-St N. Kerzman and E. Stein. The Cauchy kernel, the Szegö kernel, and the Riemann mapping function. Math. Ann., 236:85–93, 1978. Ker-Tru N. Kerzman and M.R. Trummer. Numerical conformal mapping via the Szegö kernel. J. Comput. Appl. Math., 14:111–123, 1986. BelAhl S. Bell. Numerical computation of the Ahlfors map of a multiply connected planar domain. J. Math. Anal. Appl., 120(1):211–217, 1986. OD S.T. O'Donnell and V. Rokhlin. A fast algorithm for the numerical evaluation of conformal mappings. SIAM J. Sci. Stat. Comput., 10(3):475–487, 1989. Tru-Sze M.R. Trummer. An efficient implementation of a conformal mapping method based on the Szegö kernel. SIAM J. Numer. Anal., 23(4):853–872, 1986. Nas-ETNA M.M.S. Nasser. Fast solution of boundary integral equations with the generalized Neumann kernel. Electron. Trans. Numer. Anal., 44:189–229, 2015. NG18 M.M.S. Nasser and C.C. Green. A fast numerical method for ideal fluid flow in domains with multiple stirrers. Nonlinearity, 31:815–837, 2018. Murid1 A.H.M. Murid, M.Z. Nashed, and M.R.M. Razali. Numerical conformal mapping for exterior regions via the Kerzman–Stein kernel. J. Integral Equations Appl., pages 517–532, 1998. Gre-Gim12 L. Greengard and Z. Gimbutas. FMMLIB2D: A MATLAB toolbox for fast multipole method in two dimensions, version 1.2. edition, 2012. <http://www.cims.nyu.edu/cmcl/fmm2dlib/fmm2dlib.html>. Accessed 16 June 2023. GR L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. 
J. Comput. Phys., 73:325–348, 1987. gmres Y. Saad and M.H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput., 7(3):856–869, 1986. Kre R. Kress. A Nyström method for boundary integral equations in domains with corners. Numer. Math., 58:145–161, 1990. AVV G. Anderson, M. Vamanamurthy, and M. Vuorinen. Conformal invariants, inequalities, and quasiconformal maps. John Wiley & Sons, New York, 1997. HKV P. Hariri, R. Klén, and M. Vuorinen. Conformally invariant metrics and quasiconformal mappings. Springer, Cham, 2020. PRY19 S. Pouliasis, T. Ransford, and M. Younsi. Analytic capacity and holomorphic motions. Conform. Geom. Dyn., 23:130–134, 2019. Vit A.G. Vitushkin. The analytic capacity of sets in problems of approximation theory. Russ. Math. Surv., 22:139–200, 1967. Du V. Dubinin. Condenser Capacities and Symmetrization in Geometric Function Theory. Springer, Basel, 2014. GSWC22 C.C. Green, M.A. Snipes, L.A. Ward, and D.G. Crowdy. Harmonic-measure distribution functions for a class of multiply connected symmetrical slit domains. Proc. R. Soc.A, 478:20210832, 2022. GMW N.S. Gafai, A.H.M. Murid, and N.H. Wahid. Infinite product representation for the Szegö kernel for an annulus. J. Funct. Spaces, 2022:Article ID 3763450, 2022. Coo S. Cooper. Ramanujan's theta functions. Springer, Cham, 2017. Law D.F. Lawden. Elliptic functions and applications. Springer, New York, 1989.
http://arxiv.org/abs/2307.03060v2
20230706152707
Nonlinear dynamics of hot, cold and bald Einstein-Maxwell-scalar black holes in AdS spacetime
[ "Qian Chen", "Zhuan Ning", "Yu Tian", "Bin Wang", "Cheng-Yong Zhang" ]
gr-qc
[ "gr-qc" ]
[email protected] School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China [email protected] School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China [email protected] School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China [email protected] Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou 225009, China Shanghai Frontier Science Center for Gravitational Wave Physics, Shanghai Jiao Tong University, Shanghai 200240, China [email protected] Department of Physics and Siyuan Laboratory, Jinan University, Guangzhou 510632, China We investigate the dynamical transition processes of an Einstein-Maxwell-scalar gravitational system between two local ground states and an excited state in the anti-de Sitter spacetime. From the linear perturbation theory, only the excited state possesses a single unstable mode, indicating the dynamical instability. Such an instability is associated with the tachyonic instability due to the presence of an effective potential well near the event horizon. From the nonlinear dynamics simulation, through the scalar field accretion mechanism, the critical phenomena in the transition process of the gravitational system between the two local ground states are revealed. The threshold of the accretion strength indicates the existence of a dynamical barrier in this transition process, which depends on the coupling strength between the scalar and Maxwell fields. On the other hand, for the unstable excited state, there exists a special kind of critical dynamics with a zero threshold for the perturbation strength. The perturbations of different signs push the gravitational system to fall into different local ground states. Interestingly, in an extended parameter space, there exist specific parameters such that the perturbations of non-zero amplitude fail to trigger the single unstable mode of the excited state. Nonlinear dynamics of hot, cold and bald Einstein-Maxwell-scalar black holes in AdS spacetime Cheng-Yong Zhang ============================================================================================= § INTRODUCTION The no-hair theorem is crucial to characterize the properties of steady-state black holes <cit.>, but the conditions under which it holds have been controversial. In particular, the recent discovery of a series of hairy black holes has refreshed the understanding of classical black holes <cit.>. Among them, one of the most famous mechanisms that leads to a hairy black hole is the spontaneous scalarization due to the non-minimal coupling between the scalar field and the spacetime curvature <cit.> or the matter source <cit.> which gives the scalar field an effective tachyonic mass. The tachyonic instability triggers a strong gravity phase transition and results in a hairy black hole. The spontaneous scalarization can significantly affect the properties of compact objects while passing the weak field tests, and thus has attracted much attention recently <cit.>, especially for the Einstein-scalar-Gauss-Bonnet (EsGB) theory. Whether the hairy black hole is energetically favorable over the general relativity solution depends on the coupling function and the range of parameters in the theory. The linear stability of the hairy black hole has been discussed in many works <cit.>. 
To disclose the details of the nonlinear dynamics of spontaneous scalarization, fully nonlinear evolutions have been performed in the EsGB theory. Moreover, the imprint of the scalar hair in the gravitational radiation and the dynamical descalarization were studied in binary black hole mergers <cit.>. For its sibling theory, the Einstein-dilaton-Gauss-Bonnet (EdGB) theory, the study of gravitational collapse found tentative evidence that black holes endowed with scalar hair form <cit.>. The nonlinear evolution of black holes under perturbations also shows that a hairy black hole indeed forms as the end state of scalarization <cit.>. The equations of motion may not be well posed in the EsGB or EdGB theory, so these works are limited to small parameter regions of these theories. In the Einstein-Maxwell-scalar (EMs) theory, the equations of motion are always well posed and allow nonlinear studies at large couplings. The nonlinear dynamics of a single black hole in the EMs theory shows that a hairy black hole forms at the end <cit.>. Differences from general relativity in binary black hole mergers are significant only for large charge <cit.>. In addition to the above tachyonic instability that provides the dynamical mechanism of scalarizing black holes, the nonlinear instability of black holes has also been discovered recently <cit.>, suggesting a new scalarization mechanism. Whether the source of the scalar field is provided by the spacetime curvature in the EsGB theory or by the electromagnetic field in the EMs theory, a class of critical scalarization phenomena occurs through the nonlinear accretion of a scalar field onto a central black hole if the coupling function in the scalar source is dominated by a quartic term. In the dynamical intermediate process, an unstable critical state that acts as a dynamical barrier emerges, separating the final bald and hairy black holes. [It is argued <cit.> in the context of AdS/CFT under the probe limit that there should be at least one unstable excited state if there are two local ground states that are connected in the configuration space, and the unstable excited state acts as the “lowest” dynamical barrier for the transition between the two local ground states. This argument is believed to hold generally in other (gravitational or non-gravitational) systems as well <cit.>.] For the EMs theory in asymptotically flat spacetime, the finally stable and intermediately unstable hairy black holes in such a critical scalarization process are expected to be exactly the hot and cold scalarized black holes found in <cit.>. However, in asymptotically anti-de Sitter (AdS) spacetime, the complete phase diagram structure of the case where the coupling function is dominated by a quartic term in the EMs theory is still unknown, although the related dynamical process indicates that there should be two branches of hairy black holes. From the AdS/CFT point of view, the answer to this question will be of great significance for probing the properties of the holographic QCD phase diagram <cit.>. On the other hand, although it can be deduced from the critical phenomenon that the intermediate critical state has only one unstable mode, it is still necessary to give direct evidence and reveal the quasi-normal modes, which is crucial for characterizing the dynamical properties. 
Most importantly, at present, by fine-tuning a single parameter of the initial value, the critical state can only briefly appear in the middle of the dynamical process, and the nonlinear evolution with it as the initial configuration is still pending. Since this model allows for two stable local ground states (the hot hairy and bald black holes), the final fate of the unstable cold hairy black holes cannot be predicted until nonlinear evolution is implemented. These questions have motivated us to investigate further. In this paper, we study the real-time dynamics on the local ground and excited states in spherically symmetric AdS spacetime in the EMs theory, which contains a non-minimal coupling function between the scalar and Maxwell fields dominated by a quartic term. By numerically solving the static field equations, the phase structure of the model is revealed, which shows that the domain of existence for black hole solutions consists of two branches of hairy black holes and one branch of RN-AdS black holes. From the linear perturbation theory, one branch of hairy black holes (hot hairy black holes) is linearly stable like RN-AdS black holes, while the other branch of hairy black holes (cold hairy black holes) with a single unstable mode is linearly dynamically unstable. For a gravitational system with fixed energy, the hot hairy and RN-AdS black holes serve as two local ground states and the cold hairy black hole acts as an excited state. Through fully nonlinear numerical simulations, the real-time dynamics based on these states are revealed, where critical phenomena emerge. For the case where the initial value is a stable local ground state, we find that a scalar field accretion process of sufficient strength can induce a dynamical transition from one local ground state to the other. The occurrence of such a transition requires the gravitational system overcome a dynamical barrier, leading to the existence of a threshold for the accretion strength. Near the threshold, the system is excited to an unstable excited state. For the case where the initial value is an unstable excited state, on the other hand, the system will fall into one of the two local ground states under arbitrarily small perturbations. The selection of the final state of the evolution depends on the specific form of the perturbation. Scanning the parameter space of the perturbation, the two local ground states occupy different regions in the spectrum of the final state with the excited state as the boundary. The organization of the paper is as follows. In section <ref>, we give a brief introduction to the EMs model. In section <ref>, we numerically solve the static solutions of the field equations to obtain the phase diagram structure of the model. In section <ref>, we reveal the effective potentials and quasinormal mode spectrums of the three classes of thermal phases. In section <ref>, we study the dynamical transition process of a gravitational system from one of the two local ground states to the other by crossing an excited state. In section <ref>, the real-time dynamics during the transition from an excited state to a local ground state is further revealed. Finally, we conclude the paper in section <ref>. § THE EMS MODEL We consider 4-dimensional EMs gravity with a negative cosmological constant described by the action S=1/2κ^2_4∫ d^4x√(-g)[R-2Λ-1/4f(ϕ)F_μνF^μν-∇_μϕ∇^μϕ-m^2ϕ^2], where R, F_μν, ϕ are the Ricci scalar curvature, Maxwell field strength tensor and a real scalar field, respectively. 
For convenience, the cosmological constant is set to be Λ=-3 to work in units of AdS radius. In what follows, we shall take the mass squared of the scalar field m^2=-2 to respect the Breitenlohner-Freedman (BF) bound <cit.> for definiteness. The interaction between the real scalar field and the electromagnetic field is governed by a non-minimal coupling function f(ϕ). In this model, the variation of the action (<ref>) with respect to the metric tensor g_μν gives rise to the following Einstein equation R_μν-1/2Rg_μν=-Λ g_μν+T^M_μν+T^ϕ_μν, where the stress-energy tensors of the Maxwell and scalar fields have the form T^M_μν =(1/2g^ρσF_μρF_νσ-1/8F_αβF^αβg_μν)f(ϕ), T^ϕ_μν =∇_μϕ∇_νϕ-1/2(∇_αϕ∇^αϕ+m^2ϕ^2)g_μν. On the other hand, the equations of motion for the Maxwell and scalar fields can be obtained by varying the action (<ref>) with respect to the corresponding matter fields, respectively, as follows ∇_ν[f(ϕ)F^νμ] =0, ∇^μ∇_μϕ =1/8df(ϕ)/dϕF_μνF^μν+m^2ϕ. In this paper, we require the model to contain a branch of stable RN-AdS black hole solutions and be ℤ_2-invariant under the transformation ϕ→-ϕ. To this end, the coupling function between the scalar and Maxwell fields is assumed to be an exponential dependence without loss of generality f(ϕ)=e^αϕ^4, with a positive coupling constant α, which has a global minimum for ϕ=0. Such a choice is only for simplicity and convenience of numerical calculation due to the existence of term df/fdϕ in the field equaitons. A simpler form, such as f(ϕ)=1+αϕ^4, does not qualitatively change the conclusion of the paper. In order to solve above time-dependent field equations numerically with the characteristic formulation <cit.>, which has been widely used to study the non-equilibrium dynamics of black holes <cit.>, the ingoing Eddington-Finkelstein metric ansatz compatible with spherical symmetry is adopted ds^2=-2W(t,r)dt^2+2dtdr+Σ(t,r)^2dΩ^2_2, where dΩ^2_2 represents the line element of a unit radius S^2. Such form of the metric ansatz is invariant to the following shift transformations in the radial coordinate r → r+λ(t), W → W+d_tλ(t), Σ →Σ, which allow us to fix the radial position of the apparent horizon during the dynamics. For the Maxwell field, we take the gauge A_μdx^μ=A(t,r)dt. By taking a Taylor expansion of the field equations near the AdS boundary, the asymptotic behaviors of the field variables can be obtained as follows ϕ =ϕ_1r^-1+ϕ_2r^-2+o(r^-3), A =μ-Qr^-1+o(r^-2), Σ =r+λ-1/4ϕ^2_1r^-1+o(r^-2), W =1/2(r+λ)^2+1/2-1/4ϕ^2_1-d_tλ-Mr^-1+o(r^-2), where the constants Q and M are electric charge and Arnowitt-Deser-Misner mass <cit.>, respectively. Since we only focus on the properties of the system in the microcanonical ensemble, without loss of generality, the electric charge Q is fixed to be 1 in what follows unless otherwise stated. According to the holographic dictionary <cit.>, the free parameter ϕ_1 is the source of the scalar field on the AdS boundary, which is set to zero to work with the source free boundary condition. At this point, the response of the scalar field ϕ_2, whose value can only be determined after solving the bulk, is proportional to the expectation value of the scalar operator of the boundary conformal field theory. The parameter μ, which is set to zero in our work, is a pure gauge whose difference from the value of Maxwell field at the horizon A(r_h) is the chemical potential. 
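For later use we record the derivatives of the coupling function (<ref>) that enter the field equations: df/dϕ=4αϕ^3e^αϕ^4 and dlnf/dϕ^2=2αϕ^2. These follow from elementary calculus; a two-line symbolic check (sympy assumed available, purely illustrative):

    import sympy as sp

    alpha, phi, u = sp.symbols('alpha phi u', positive=True)
    print(sp.diff(sp.exp(alpha * phi ** 4), phi))        # 4*alpha*phi**3*exp(alpha*phi**4)
    print(sp.diff(alpha * u ** 2, u).subs(u, phi ** 2))  # d(ln f)/d(phi^2) = 2*alpha*phi**2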
Since the effective Newton constant is chosen as κ^2_4=1 for convenience, the energy density and entropy density of the system are denoted as ϵ=2M, s=2πΣ^2(r_h), respectively, where r_h stands for the radius of apparent horizon. On the other hand, in order to describe the quasi-local mass, we introduce the rescaled Misner-Sharp (MS) mass <cit.> defined as M_MS=1/2Σ(-1/3ΛΣ^2+1-g^μν∂_μΣ∂_νΣ), which tends to the ADM mass on the AdS boundary. For a static solution, the temperature is extracted as T=1/2πd_rW(r_h). With these boundary conditions in hand, the field equations can be easily solved numerically. For more details on the numerical procedures for the static solutions and dynamical evolution, we refer readers to subsection <ref> and Appendix A in <cit.>, respectively. § STATIC SOLUTIONS In this section, we reveal the complete phase structure of this model with the coupling function (<ref>) by numerically solving the static field equations. The results show that the domain of existence of solutions is composed of three branches: hot hairy black holes, cold hairy black holes and RN-AdS black holes. §.§ Numerical procedure In order to describe the relationship between the physical quantities of the equilibrium phases, we need to seek out the static solutions to the field equations. By eliminating the time dependence in the field equations, the static field configuration X={Σ,W,A,ϕ} is determined by the following independent ordinary differential equations 0 =2Σ”/Σ + ϕ'^2 , 0 =W” + 2 W' Σ'/Σ -1/4A'^2 f - ϕ^2 - 3 , 0 =1/2A” + A' ( Σ'/Σ + ϕϕ'dlnf/dϕ^2), 0 =(W ϕ')'+ 2 W ϕ'Σ'/Σ + (1/4A'^2df/dϕ^2 + 1)ϕ , where prime stands for the derivative with respect to the radius r. The above system of equations E(X)=0 can be efficiently solved by the Newton-Raphson iteration algorithm, which can be thought as a linear algebra problem of finding the value of X_i+1 via its value in the previous step X_i X_i+1=X_i-M^-1(X_i)E(X_i), where M=δE/δX is the Jacobian matrix. The procedure is iterated until the difference X_N-X_N-1 is small enough, which is the condition for X_N to be considered a static solution. In addition, the boundary conditions (<ref>) must be maintained throughout the process. In order to implement the iteration numerically, we make a coordinate compactification z=r^-1 such that the radial direction is bounded in z∈ [0,1]. Note that the radial position of the horizon is fixed at r_h=1 using the reparameterization freedom (<ref>). Discretizing the z-coordinate with Chebyshev-Gauss-Lobatto grid points and replacing the radial derivative with corresponding differentiation matrix, the equation (<ref>) is converted into a series of algebraic operations, which is conveniently implemented with a code library such as numpy. §.§ phase diagram Due to the confining AdS boundary that restricts the escape of matter, the electric charge and energy of the system are conserved during evolution, indicating that such dynamics essentially occurs in the microcanonical ensemble. In this case, the relevant thermodynamic potential describing the competitive relationship between the several thermal phases in equilibrium is the entropy, and the dominant thermal phase is the one with the largest entropy. Therefore, to a certain extent, the microcanonical phase diagram is a key factor in judging the stability of thermal phases in the asymptotically AdS spacetime. 
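Returning briefly to the numerical procedure of the previous subsection, a minimal numpy sketch of the pseudospectral Newton iteration reads as follows. The residual and Jacobian routines encoding the discretized equations (<ref>) are hypothetical placeholders; this is generic scaffolding, not the code used to produce the results below:

    import numpy as np

    def cheb(N):
        # Chebyshev-Gauss-Lobatto points on [-1,1] and the differentiation matrix
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    def newton_solve(residual, jacobian, X0, tol=1e-12, maxiter=50):
        # X_{i+1} = X_i - M^{-1}(X_i) E(X_i), iterated until the residual is small enough
        X = X0.copy()
        for _ in range(maxiter):
            E = residual(X)
            if np.max(np.abs(E)) < tol:
                break
            X = X - np.linalg.solve(jacobian(X), E)
        return X

    D, x = cheb(40)
    z = 0.5 * (1.0 + x)   # compactified coordinate z = 1/r on [0,1], horizon fixed at z = 1
    Dz = 2.0 * D          # chain rule for the affine map from [-1,1] to [0,1]

In practice the Jacobian M=δE/δX is built from Dz and the background fields at the same collocation points, and the boundary conditions (<ref>) are imposed by replacing the corresponding rows of the residual.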
In this model, the microcanonical phase diagrams with different values of coupling constant α are shown in figure <ref> as the energy density dependence of physical quantities. As we can see from figure <ref>, in addition to RN-AdS black hole solutions with trivial scalar field, for a sufficiently large value of coupling constant α, there is also a branch of black hole solutions whose horizon surface is attached to scalar condensation, which connects to the branch of RN-AdS black holes at point A representing the extremal RN-AdS black hole. Fixing a general value of coupling constant α, one can observe that the branch of hairy black holes possesses a turning point B, which represents the maximum value of energy density of hairy black hole solutions and depends on the coupling constant α. Such turning point divides the branch of hairy black holes into two parts. On the one hand, the AB region directly connected to the extremal RN-AdS black hole with zero temperature, usually referred to as the branch of cold hairy black holes, has not only less scalar condensation, but also a lower temperature than the other part as shown in figure <ref>. On the other hand, the region extending from the turning point B to the over-extremal region with decreasing energy is called the branch of hot hairy black holes due to the higher temperature. With the increase of the coupling constant α, both branches of hairy black holes exhibit the same decreasing behavior in terms of scalar condensation. However, for the temperature, the branch of hot hairy black holes increases while the branch of cold hairy black holes decreases as the coupling constant α increases. In figure <ref>, we would like to reveal the competitive relationship between above three thermal phases in the microcanonical ensemble. To make the diagram clearer, on the vertical axis we actually plot the difference between the entropy density of hairy black holes and RN-AdS black holes. As we can see from it, the three branches of solutions form the shape of a swallowtail, among which, the entropy density of the branch of cold hairy black holes is always the smallest. Due to the second law of black hole mechanics, which requires that the entropy never decreases during the dynamical process, the cold hairy black hole, as an excited state of the system, is expected to be dynamically unstable and spontaneously evolve to a ground state with greater entropy under perturbations. For a system with fixed energy, both the RN-AdS black hole state and hot hairy black hole state meet the criteria of being the ground state. In a low-energy region, the state of hot hairy black hole has the largest entropy density and thus acts as the dominant phase. However, the entropy density of the RN-AdS black hole state gradually exceeds that of the hot hairy black hole state as the energy density of the system exceeds a critical value, becoming the global ground state. In fact, both of these local ground states (RN-AdS black hole and hot hairy black hole) can serve as the final fate of a dynamically unstable excited state (cold hairy black hole), independent of which is the dominant state with maximum entropy. This will be verified from the perspective of nonlinear dynamics in section <ref>. 
In addition, for an ensemble with a fixed energy, the entropy gap between the hot and cold hairy black holes increases significantly with the increase of the coupling constant α, indicating that the dynamical barrier in the excitation process of the hot hairy black hole increases with the parameter α. Interestingly, the entropy gap between the RN-AdS black hole and the cold hairy black hole decreases as the coupling constant α increases, indicating that the RN-AdS black hole is more likely to be excited under the strong coupling condition. These conclusions are consistent with the numerical simulation results presented in section <ref>. § STABILITY In this section, we further investigate the linear stability of the thermal phases obtained in the previous section by the linear perturbation theory. The corresponding quasinormal mode spectrums are numerically calculated, which show that the hot hairy black hole as the local ground state is linearly stable while the cold hairy black hole as the excited state is dynamically unstable. For RN-AdS black holes, the superradiance and near-horizon instabilities are two important mechanisms leading to the dynamical transition to hairy black holes. For the case of a real scalar field in our work, which cannot extract electric charge from the black hole, the superradiant instability is suppressed. The near-horizon instability is quite universal and occurs for black holes with extremal configuration, which can be triggered by both charged and neutral scalar fields. For our model here, such a instability is only possible for a near-extremal RN-AdS black hole in the large black hole limit r_h→∞, which is not within our consideration. In what follows, we focus on the linear stability of hot and cold hairy black holes. §.§ Effective potential From quantum mechanics <cit.>, the effective potential is an important mechanism to characterize the stability of the system. The emergence of instability requires that the effective potential be negative in some regions. In this subsection, we reveal the effective potential of the black hole solutions in this model along the radial direction to preliminarily judge their stability. To this end, we take the following metric ansatz for convenience ds^2=-N(t,r)e^-2Δ(t,r)dt^2+1/N(t,r)dr^2+r^2dΩ^2_2. Considering a time-dependent linear perturbation with spherical symmetry, the corresponding metric and scalar fields are assumed to be of the form N(t,r) =N(r)+εδ N(r)e^-iω t, Δ(t,r) =Δ(r)+εδΔ(r)e^-iω t, ϕ(t,r) =ϕ(r)+εδϕ(r)e^-iω t. The Maxwell field is determined by equation (<ref>) as ∂_rA(t,r) =Qe^-Δ/r^2f(ϕ). Here the symbol ε stands for the control parameter of the infinitesimal expansion, and the complex frequency ω=ω_R+iω_I, also known as quasinormal mode, corresponds to the eigenvalue of the perturbative eigenstate {δ N,δΔ,δϕ}. The configuration of the leading terms { N,Δ,ϕ} is the static background solution of the field equations. For a mode with a positive imaginary part ω_I>0 , one can observe from the ansatz (<ref>) that the time-dependent solution {N,Δ,ϕ} will exponentially leave the static field configuration in the form e^ω_It. Such a mode is dynamically unstable. Conversely, a mode with a negative imaginary part ω_I<0 decays exponentially, failing to trigger instability in the system. 
Substituting the ansatz (<ref>) into the equations of motion for the metric (<ref>) and scalar (<ref>) fields, the leading order of the expansion equations gives rise to the following static field equations 0 = (N ϕ')'+2r^-1N ϕ'+1/2rN ϕ'^3 + 2 ϕ + Q^2/4r^4f^2df/dϕ, 0 = N' + 1/2r N ϕ'^2+ r^-1N - r ϕ^2 - 3r - r^-1+Q^2/4 r^3 f, 0 =Δ'+1/2r ϕ'^2, where prime stands for the derivative with respect to the radius r. Obviously, hot hairy black holes, cold hairy black holes and bald black holes solve the above equations. At the subleading order, one can find that the perturbations of the metric fields can be expressed by the perturbation of the scalar field, as follows 0 =δ N+ rNϕ'δϕ , 0 = δΔ'+r ϕ'δϕ' , indicating that the linear order Klein-Gordon equation is decoupled from the perturbation of the metric fields. Introducing the tortoise coordinate dr_*=e^ΔN^-1dr and defining a new radial function δϕ=r^-1Ψ, one can extract a Schrodinger-like equation for the perturbation of the scalar field from the subleading order of the Klein-Gordon equation 0=d^2/dr_*^2Ψ+ w^2Ψ -V_effΨ, with the effective potential V_eff= Ne^-2Δ/r^2[(1+ 3r^2)(1 - r^2ϕ'^2) - N + m^2r^2( 1/2r^2ϕ^2ϕ'^2 + 2 r ϕϕ' - 1/2ϕ^2 + 1) . .-Q^2/4r^2f(1 - r^2ϕ'^2 +d^2f/ fdϕ^2+ 2rϕ'df/ fdϕ -2(df/ fdϕ)^2)]. One can observe that such effective potential vanishes at both the event horizon and the AdS boundary in the case of m^2=-2, whereas it diverges at the AdS boundary for the case of a massless scalar field. The resulting effective potentials for the cold hairy black holes, hot hairy black holes and RN-AdS black holes are shown in figures <ref>, <ref> and <ref>, respectively. Since the RN-AdS black hole solutions have a positive definite effective potential, they are free from radial instability. However, for the hairy black hole solutions, it turns out that the effective potential always has a negative region. The difference is where the negative region exists. On the one hand, for solutions in the cold branch, such negative region comes into play near the event horizon and gradually intensifies along the direction of the cold branch from the connection point with the branch of RN-AdS black holes (point A representing the extremal RN-AdS black hole) to the bifurcation point of the branch of hairy black holes (point B), that is, the direction in which the energy density increases. As the bifurcation point is approached, a positive region develops near the event horizon. On the other hand, as the configuration smoothly transitions from the cold branch to the hot branch through the bifurcation point, the positive region near the event horizon gradually grows. Along the direction of the hot branch away from the bifurcation point, such positive region is significantly enlarged and gradually plays a dominant role as the energy density of the system decreases. The negative region can only move away from the event horizon towards the AdS boundary. As a result, near the event horizon, a hot hairy black hole exhibits a potential barrier while a cold hairy black hole possesses a potential well. According to the results of quantum mechanics <cit.>, for a one-dimensional potential, the existence of bound states capable of triggering instability requires that the integral of the effective potential over the entire space is negative. In order to compare with it, we introduce the rescaled effective potential defined as V_eff=r_hz^-2e^ΔN^-1V_eff, such that ∫_-∞^+∞V_effdr_*=∫_0^1V_effdz. 
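Given a static background sampled on the compactified grid, this sign criterion reduces to a single quadrature. A minimal sketch (the arrays are assumed to come from the static solver of the previous section, with z sorted in increasing order):

    import numpy as np

    def barrier_integral(z, V_rescaled):
        # integral of the rescaled effective potential over z in [0,1];
        # a negative result is the necessary condition for a bound state,
        # i.e. for a radial instability of the background
        return np.trapz(V_rescaled, z)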
Without loss of generality, taking the ensemble with energy density of ϵ=1.8 as an example, the profiles of the corresponding rescaled effective potential of the three types of black holes are shown in figure <ref>. It turns out that only the integral of the effective potential of a cold hairy black hole is negative, and thus is expected to be dynamically unstable. We have verified that such integral is negative for the entire branch of cold hairy black holes. However, these qualitative analyses still cannot give definitive evidence of instability. To this end, one needs to obtain the eigenvalue ω of the linear perturbation to determine whether there is an unstable mode with a positive imaginary part. §.§ Quasi-normal modes In this subsection, we numerically solve the quasinormal spectrum to give direct evidence of instability quantitatively. Since the configurations of hairy black holes are obtained by numerical method, the generalized eigenvalue method <cit.> is used, which is simple and efficient for this case. By discretizing the field configurations of the static background solution with a pseudospectrum, this method converts the solving process of the equation (<ref>) into a generalized eigenvalue problem. The complex frequency ω to be calculated is the corresponding eigenvalue. The resulting quasinormal spectrums of the cold and hot hairy black holes are shown in figures <ref> and <ref>, respectively. The purple dots represent the AdS modes, which dominates in the small black hole limit r_h→ 0. The blue and orange dots represent the zero-damped modes, which converge at the origin of the complex plane as a branch point in the case of an extremal RN-AdS black hole <cit.>. For the branch of RN-AdS black holes, all the modes are located on or below the real axis, indicating the dynamical linear stability. However, along the branch of cold hairy black holes from the connection point A representing the extremal RN-AdS black hole to the bifurcation point B, one of the zero-damped modes gradually climbs upward along the imaginary axis from the origin, while the others move down to the lower half of the imaginary axis. Such a mode with a positive imaginary part represents the occurrence of dynamical linear instability. After the imaginary part of the single unstable mode reaches a maximum value, the linear instability of the system gradually weakens as it decreases. Until the bifurcation point B is reached, the imaginary part of this single unstable mode returns to the origin, indicating that the instability completely disappears. We reveal more clearly in figure <ref> that the positive imaginary part of the single unstable mode varies with the energy density in this process. Along the branch of hot hairy black holes from the bifurcation point B to the over-extremal region, on the other hand, the dominant mode initially moves downwards from the origin along the imaginary axis, and then gradually approaches the real axis again after reaching a turning point, accompanied by the growth of the real part. With the approach of the extremal configuration with zero horizon area, this mode gradually converges to the real axis. Therefore, from the above results, we can conclude that only the cold hairy black holes are dynamically linearly unstable, and there is only a single unstable mode without real part. 
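The generalized-eigenvalue step itself is compact. Assuming the linearized equation has already been discretized on the pseudospectral grid into matrices M_0 and M_1 such that (M_0+ω M_1)δϕ=0 (a form we take as an illustration of the method of <cit.>; the assembly of M_0 and M_1 from the background is omitted here), the spectrum follows from a single scipy call:

    import numpy as np
    from scipy.linalg import eig

    def quasinormal_modes(M0, M1):
        # (M0 + w*M1) v = 0  is the generalized problem  M0 v = w (-M1) v
        w, _ = eig(M0, -M1)
        w = w[np.isfinite(w)]
        return w[np.argsort(-w.imag)]   # any mode with Im(w) > 0 comes first

A single entry with positive imaginary part in the returned array reproduces the instability of the cold branch described above; spurious eigenvalues should be discarded by comparing spectra at different grid resolutions.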
§ DYNAMICS OF THE LOCAL GROUND STATE From the results of the linear perturbation theory in the previous section, the RN-AdS black holes and hot hairy black holes are dynamically linearly stable, acting as two local ground states. In this section, by numerically solving the time-dependent field equations, we simulate the fully nonlinear accretion process of a scalar field to a central black hole to reveal the real-time dynamics during the excitation process of the ground state, where scalarization or descalarization phenomenon occurs depending on the central black hole. The corresponding schematic diagram of such continuous accretion process is shown in figure <ref>. Due to the linear stability of the RN-AdS and hot hairy black holes, the disturbance of the accretion process of small strength, which can only increase the energy of the system without changing its essential properties, gradually dissipates in the background spacetime. Until the accretion strength exceeds a threshold, the RN-AdS and hot hairy black holes with nonlinear instability are dynamically interconverted by crossing a cold hairy black hole with linear instability. From the microcanonical phase diagram in figure <ref>, it can be seen that the branch of hot hairy black holes ends at point B, indicating that the accretion process with sufficient strength, which brings in enough energy, can always make the system converge to the state of RN-AdS black hole. The real-time dynamics of the above physical processes will be revealed in detail in the following. Among them, it is particularly important to emphasize the critical dynamics at the strength threshold, near which the evolved system tends to converge to a certain critical state in the dynamical intermediate process. In particular, if such critical state has only one unstable mode, then we can obtain it by parameterizing the initial value with a single parameter and fine-tuning the characteristic parameter to reach a critical value. At the late time of critical evolution, the system will enter the linear region of the critical solution, which can be effectively approximated by ϕ(t,r)≈ϕ_*(r)+(p-p_*)e^-iω _*tδϕ(r)+decaying modes. Here ϕ_*(r) represents the static configuration of the critical state, and δϕ(r) is the only unstable eigenmode associated with the eigenvalue ω _*, which has a positive imaginary part. On the one hand, for the case where the parameter p is exactly equal to the critical value p_*, the only unstable mode cannot be triggered, causing the system to permanently stay in the critical state. However, on the other hand, for the parameter p slightly away from the critical value p_*, such an unstable mode will grow exponentially in the later stage of evolution and push the system away from the critical state to reach the final stable state. Interestingly, when the parameter p leaves the critical value p_* in different directions, the final state often has distinct essential properties. Furthermore, the time that the system stays on the critical state during the dynamical intermediate process satisfies τ∝ -Im[ω _*]^-1ln(|p-p_*|), where Im[ω _*] stands for the imaginary part of the eigenvalue ω _*. Such a relationship is obtained by requiring that the coefficient of the unstable mode in equation (<ref>) grows to a finite size, |(p-p_*)e^-iω _*τ|∼ O(1), which represents the end point of the linear region of the intermediate critical solution. 
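In practice the threshold p_* is located by bisection on the single parameter p, and the imaginary part of the unstable mode of the critical solution can be read off from the slope of τ versus ln|p-p_*|. A schematic driver (evolves_to_hairy stands for the full nonlinear evolution code, which is not reproduced here):

    import numpy as np

    def find_threshold(evolves_to_hairy, p_lo, p_hi, tol=1e-12):
        # assumes evolves_to_hairy(p_lo) is False and evolves_to_hairy(p_hi) is True,
        # i.e. the supercritical side ends on the hairy black hole (swap the roles otherwise)
        while p_hi - p_lo > tol:
            p_mid = 0.5 * (p_lo + p_hi)
            if evolves_to_hairy(p_mid):
                p_hi = p_mid
            else:
                p_lo = p_mid
        return 0.5 * (p_lo + p_hi)

    def unstable_mode_from_lifetimes(p_values, lifetimes, p_star):
        # fit tau = -ln|p - p_*|/Im(w_*) + const and return Im(w_*)
        slope, _ = np.polyfit(np.log(np.abs(np.asarray(p_values) - p_star)), lifetimes, 1)
        return -1.0 / slope

This mirrors the fine-tuning procedure used in the following subsections, where the lifetime τ is extracted from the plateau of the scalar condensation at the horizon.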
§.§ Critical scalarization In this subsection, we focus on the dynamical accretion process of a scalar field towards a central RN-AdS black hole with coupling constant α=500. That is, the physical process near the critical point p_cs in figure <ref>. Without loss of generality, we choose a seed RN-AdS black hole with energy density ϵ=1.6 as the initial configuration and impose a scalar field perturbation of the form δϕ=pz^2(1-z)^2e^-w(z-z_c)^2, with the radial compactified coordinate z=r^-1. The width and center position of the Gaussian function are fixed as w=50 and z_c=0.5. Since the apparent horizon is fixed at the radial position z_h=1 by using the reparameterization freedom (<ref>), such form of perturbation characterizes a Gaussian-type scalar field distributed outside the central black hole. The energy of the system increases with the increase of the perturbation amplitude p, so this process can be regarded as the accretion process of the scalar field to a central black hole, with the accretion strength p. The evolution of the scalar field configuration and the MS mass during the dynamical accretion process is shown in figure <ref>. In the early stages of evolution, the system exhibits similar dynamical behaviors for accretion processes of different strengths. In the first stage (t<0.5), the outgoing scalar field carrying energy propagates towards the AdS boundary, resulting in a decrease in the local mass at the position of the wave packet at the initial moment, as shown in figures <ref> and <ref>. Note that the value of the MS mass at the radial coordinate r represents the integral of the energy within radius r. Since the local mass change in the dynamical process is not significant compared to the overall energy, we show the difference between the MS mass at different times and the initial time so that the energy flow is more obvious. The MS mass at any time converges to a constant on the AdS boundary, which is equal to the ADM mass, indicating that the total mass of the system in the asymptotically AdS spacetime is conserved during the dynamical process. Then in the second stage (0.5<t<1), due to the gravitational potential of the AdS spacetime, the outwardly propagating scalar field is bounced and clustered around the horizon of the central black hole, resulting in a significant increase in the local mass near the horizon. Interestingly, the subsequent fate of the scalar field depends on the specific accretion strength. On the one hand, for a weak accretion strength, such as the accretion process of p=0.5 shown in figures <ref> and <ref>, the scalar field is gradually absorbed by the central black hole in the later stages of evolution (1<t), leaving a bald black hole with greater energy. Since the energy carried by the initial scalar field enters the interior of the central black hole, the value of the MS mass is most significantly improved at the horizon compared with the initial value, and then rapidly decreases at the original wave packet of the scalar field. Obviously, for a radial position outside the original wave packet, the MS mass of the initial and final states remains the same. On the other hand, for a strong accretion strength p=1.5, as shown in figures <ref> and <ref>, the scalar field eventually converges to a nontrivial configuration, leading to the appearance of a black hole with the scalar condensation attached at the horizon. 
Such a final black hole is exactly the hot hairy black hole obtained in the subsection <ref>, and it has been proved to be linearly dynamically stable in the subsection <ref>. That is to say, a scalar field accretion process of sufficient intensity can induce a dynamical transition from a linearly stable RN-AdS black hole to a hot hairy black hole. From the distribution of the MS mass of the final state in figure <ref>, it can be seen that most of the energy still enters the interior of the central black hole, resulting in a significant increase in the value of the MS mass at the horizon. The scalar hair also carries part of the energy, distributed in the bulk. The above real-time dynamical processes indicate that there should be a critical value for accretion strength p between 0.5 and 1.5 to distinguish two different final states. A natural question is what kind of dynamical behavior the system exhibits near the critical value of accretion strength p_cs. By dichotomy, we keep approaching the critical value and show the numerical results in figure <ref>. The evolution of the scalar condensation at the horizon and the entropy density are presented in figures <ref> and <ref>, respectively, from which it can be seen that a critical state appears in the dynamical intermediate process. All initial values parameterized by the accretion strength p close to the critical value p_cs are attracted to a critical black hole with scalar hair, manifested by the convergence of the scalar field to a static nontrivial configuration. At the same time, the entropy of the system also stops growing and presents a plateau. Subsequently, for the dynamical process with the accretion strength greater than the critical value p_cs, the scalar condensation continues to grow and eventually converges to another static configuration. The entropy also increases significantly to another constant. The stable final state of the process is a hot hairy black hole. The fast oscillating behavior of the scalar condensation in the later stages of evolution is caused by the non-zero real part of the stable dominant mode of the hot hairy black hole. On the contrary, for the dynamical process with the accretion strength less than the critical value p_cs, the corresponding scalar condensation decays rapidly after leaving the critical configuration, indicating that the system eventually evolves into an RN-AdS black hole. These dynamical processes are accompanied by a small increase in entropy, which is guaranteed by the second law of black hole mechanics. Furthermore, one can observe that the closer the accretion strength is to the critical value p_cs, the longer the system stays in the critical state during the dynamical intermediate process. Therefore, it can be inferred that the critical accretion strength p_cs just corresponds to the critical black hole. After verification, such a critical black hole is exactly the linearly unstable cold black hole obtained in the subsection <ref>. In order to reveal the behavior of the dominant mode in the dynamical process, we show the evolution of the value of ln|∂_tϕ(r_h)| over time in figure <ref>. One can observe that the whole dynamical process can be divided into three stages. The short-lived first stage describes the process where the initial values are attracted to a critical black hole, depending on the form of the disturbance. After that, the evolution system enters the linear region of the critical state, at which point it can be approximated by equation (<ref>). 
At the beginning of the second stage, the dynamical process is dominated by the decay modes of the critical black hole. Due to the deviation between the actual accretion strength and the critical value, the unstable mode will grow exponentially at later times and gradually take over the evolution process, pushing the system away from the critical state. The growth exponent can be extracted from the slope in figure <ref>, which is equal to the imaginary part of the unstable mode. In the third stage, the intermediate solution converges to the RN-AdS black hole in the case of subcritical accretion strength and to a hot hairy black hole in the case of supercritical accretion strength. From the quasinormal modes of hot hairy black holes shown in figure <ref>, the dominant stable mode has a small imaginary part, implying a slow decay rate. Since the critical black hole (cold hairy black hole) emerging in the dynamical intermediate process possesses a single unstable mode shown in figure <ref>, the relationship (<ref>) is checked in figure <ref>, where the slope of the red line is exactly the reciprocal of the imaginary part of the unstable mode. From the microcanonical phase diagram shown in figure <ref>, due to the reduced entropy gap between the RN-AdS and cold hairy black holes, one can deduce that the dynamical barrier for the transition from an RN-AdS black hole to a hot hairy black hole gradually decreases with the increase of the coupling constant α. We show in figure <ref> the critical accretion strength p_cs required to trigger the dynamical transition of the central RN-AdS black hole for different values of the coupling constant α, where the monotonically decreasing behavior of the critical accretion strength verifies the inference from the phase diagram. The energy density of the gravitational system increases continuously with the further accretion of the scalar field, showing a monotonically increasing behavior with the accretion strength p, as shown in figure <ref>. At the first threshold of the accretion strength p_cs, the value of the final scalar field at the horizon jumps, as shown in figure <ref>, indicating a dynamical transition from an RN-AdS black hole to a hot hairy black hole. Subsequently, this value gradually decreases with the increase of the accretion strength, which is consistent with the result shown in figure <ref> that the scalar condensation attached to the event horizon of the hot hairy black hole decreases with the increase of the energy density of the system. Since the branch of hot hairy black holes ends at the bifurcation point B, the system must return to the branch of RN-AdS black holes when the energy density exceeds that of point B. Interestingly, from the dynamical results in figure <ref>, instead of passing through the whole branch of hot hairy black holes, the gravitational system evolves to the branch of RN-AdS black holes through a critical dynamical transition in advance, manifested by another jump in the value of ϕ(r_h) at the second threshold of the accretion strength p_c. The energy density corresponding to this threshold p_c lies between those corresponding to the first threshold and the bifurcation point. The real-time dynamics near the second threshold is similar to the scalar field accretion process towards a hot hairy black hole, which will be revealed in detail in the next subsection. In conclusion, the accretion process of a scalar field onto a central RN-AdS black hole can trigger its nonlinear instability. 
There is a threshold of the accretion strength that induces the dynamical transition from one local ground state (RN-AdS black hole) to the other (hot hairy black hole). Near the threshold, the system stays on an excited state (cold hairy black hole), which acts as a dynamical barrier for the transition process. Such a dynamical barrier decreases monotonically with the increase of the coupling constant α. With the continuous accretion of the scalar field, due to the upper limit of the energy density for the domain of existence of the hot hairy black holes, a second threshold of the accretion strength appears, beyond which the gravitational system undergoes a critical dynamical transition and returns to the state of RN-AdS black hole. §.§ Critical descalarization The dynamical simulation results in the previous subsection show that even if a gravitational system is dynamically stable at the linear level, it can still transition to another linearly stable local ground state under sufficiently large disturbance. In this subsection, by simulating the dynamical accretion process of a scalar field to a central hot hairy black hole, we reveal that the transition process between local ground states is bidirectional. Without loss of generality, we take the hot hairy black hole under the same ensemble with energy density ϵ=1.6 as the initial value and impose the same form of disturbance (<ref>). The evolution of the value of the scalar field at the apparent horizon is shown in figure <ref>. It turns out that there is a threshold p_cd for the accretion strength to distinguish two different stable final states. For the accretion process whose strength is less than the critical value, the final state of the evolution is still a hot hairy black hole but with more energy. On the other hand, for the case where the accretion strength is greater than the critical value, a dynamical transition occurs, leading to the descalarization phenomenon. That is to say, the accretion process of sufficient strength can also induce the excitation process from the ground state of the hot hairy black hole to the ground state of the RN-AdS black hole. Near the threshold p_cd, similar to the critical scalarization phenomenon, the scalar field gradually converges to a static critical configuration after a short period of drastic changes. This critical state is a cold hairy black hole with linear dynamical instability. Similarly, the time that the critical state exists in the dynamical intermediate process depends on the difference between the accretion strength p and the critical value p_cd. The relationship (<ref>) still holds, as shown in figure <ref>. Different from the dynamical transition from a RN-AdS black hole to a hot hairy black hole, from the phase diagram in figure <ref>, it can be seen that the entropy gap between the hot and cold hairy black holes increases with the coupling constant α. That is to say, the dynamical barrier for the transition from a hot hairy black hole to an RN-AdS black hole gradually increases with the coupling constant α. This is consistent with the numerical simulation results presented in figure <ref>, which describe the monotonically increasing behavior of the accretion strength threshold with the coupling constant α. Through the above real-time dynamics, we realize the bidirectional dynamical transition process of a gravitational system between two local ground states. 
By fine-tuning the parameter p that characterizes the disturbance strength, the gravitational system will stay in a critical excited state with linear instability in the dynamical intermediate process, exhibiting critical dynamics. In fact, such critical dynamics is universal and independent of the disturbance parameters. That is to say, for the disturbance form described by (<ref>), by fixing an appropriate value of the parameter p, a similar critical dynamical process can also be triggered by fine-tuning the parameter w or z_c. Not only that, the critical dynamics is also universal with respect to the disturbance form, as long as it can make the system cross the corresponding dynamical barrier. § DYNAMICS OF THE EXCITED STATE In the previous section, we have revealed the real-time dynamics in the case where the initial central black hole is a linearly stable local ground state, where novel critical phenomena emerge. In this section, we further investigate the dynamics in the case where the initial central black hole is an excited state with linear dynamical instability, in which the gravitational system exhibits an even more interesting critical behavior. For consistency, we take the cold hairy black hole in the ensemble with energy density ϵ=1.6 as the initial configuration, in which case the hot hairy black hole with maximum entropy is the dominant thermal phase, and then impose the scalar field perturbation described by (<ref>). Due to the linear instability of the initial gravitational system, the dynamical transition can occur under arbitrarily small perturbations. Interestingly, there are two local ground states with linear dynamical stability that can serve as the final state of the dynamical evolution. In the microcanonical ensemble, the entropy of the system describes the competitive relationship between thermal phases in equilibrium. From a thermodynamic point of view, the system tends to reside in the state of maximum entropy, that is, the hot hairy black hole. However, the dynamics shows that the system does not select its fate according to the entropy of the state. From the numerical results in figure <ref>, which show the evolution of the value of the scalar field at the apparent horizon, it can be seen that different values of the perturbation amplitude can still induce the system to evolve into two different stable final states. We find that the final fate of the dynamical process corresponding to a positive perturbation amplitude is an RN-AdS black hole; on the contrary, a negative perturbation amplitude pushes the evolved system to a hot hairy black hole. Such a result indicates that the gravitational system undergoes a special class of critical dynamics with a perturbation strength threshold of zero, p_*=0. That is to say, in this case the critical state in the critical dynamical process is the initial state itself. The corresponding dynamical barrier for the transition is zero. Similarly, the smaller the perturbation strength, the longer the system will remain in the unstable initial state. In this process, the relationship (<ref>) holds, as shown in figure <ref>. Due to numerical errors, even with a perturbation amplitude that is strictly zero, a dynamical transition still occurs after long-term evolution, resulting in the values of τ corresponding to the scalarization and descalarization processes lying on two parallel lines, respectively. 
Obviously, the slope of these two lines is equal to the reciprocal of the imaginary part of the single unstable mode of the initial cold hairy black hole. The evolution of the dominant mode in the dynamical intermediate process is similar to that in figure <ref>, with the difference being the lack of the first stage. In order to exclude the influence of the thermodynamic potential of the local ground state on the conclusion, the real-time dynamics of the cold hairy black hole in the ensemble with energy density ϵ=2.2, in which the RN-AdS black hole possesses the maximum entropy, is also studied. Similar critical dynamical phenomena occur. The unstable cold hairy black hole maintains its static configuration in the absence of perturbation, and evolves to a stable state through a non-equilibrium process with any small perturbation. The specific configuration of the final state depends on the sign of the perturbation amplitude. The selection of the final state is consistent with the case of energy density ϵ=1.6: the plus sign corresponds to an RN-AdS black hole and the minus sign corresponds to a hot hairy black hole. Such results indicate that the dynamical transition mechanism from an excited state to a local ground state is independent of the thermodynamic potential between the local ground states. For the case where the initial configuration is unstable, a zero threshold of the perturbation strength for the critical dynamics is reasonable. However, the relationship between the sign of the perturbation amplitude and the final state of evolution is somewhat puzzling. In fact, we find that such a correspondence is related to the parameter z_c in the perturbation (<ref>), which characterizes the distance of the perturbation from the horizon. For the cases of perturbations near the horizon and far away from the horizon, the corresponding relationship between the sign of the perturbation amplitude and the final state of the evolution is exactly opposite. Fixing the perturbation amplitude p=+0.01 without loss of generality, we show the evolution of the scalar field configuration with time for the cases of z_c=0.9 and z_c=0.95 in figures <ref> and <ref>, respectively. It turns out that a near-horizon perturbation with a positive sign leads to the formation of a hot hairy black hole instead of an RN-AdS black hole. Such a result on the one hand shows that the selection of the final state is not only determined by the perturbation amplitude but also depends on the specific form of the perturbation, and on the other hand indicates that there should be a critical value for the parameter z_c such that the gravitational system undergoes critical dynamics. The critical value z^*_c, which is a function of perturbation amplitude p, is obtained by dichotomy, near which the system exhibits critical behaviors. From the figure <ref>, which shows the evolution of the value of the scalar field at the horizon, it can be seen that the time of the intermediate solution that stays near the critical black hole depends on the difference between the parameter z_c and the critical value z^*_c. Such a dependency satisfies (<ref>), as shown in figure <ref>. Note that in this case the critical black hole that emerges in the dynamical intermediate process is not the one in figure <ref>, which is exactly the initial state. 
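The threshold search used throughout this section ("dichotomy", i.e. a bisection on whichever perturbation parameter is being fine-tuned) and the extraction of the critical exponent from the lingering time can be illustrated with a short sketch. This is only schematic: evolve_final_state stands in for the full nonlinear evolution code of the paper, and the fit assumes the logarithmic relation between the lingering time τ and the distance to the critical value discussed in the text.

import numpy as np

def find_threshold(evolve_final_state, p_lo, p_hi, tol=1e-12):
    # Bisection ("dichotomy") for the critical parameter value.
    # evolve_final_state(p) is a hypothetical stand-in that returns a label of the
    # end state (e.g. "RN-AdS" or "hairy"); the bracket must yield different labels.
    out_lo = evolve_final_state(p_lo)
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        if evolve_final_state(p_mid) == out_lo:
            p_lo = p_mid   # same fate as the lower end: move the lower bracket up
        else:
            p_hi = p_mid   # different fate: move the upper bracket down
    return 0.5 * (p_lo + p_hi)

def lingering_time_slope(distances, taus):
    # Fit tau = a * ln|p - p_*| + b; |a| should match 1/Im(omega_unstable),
    # up to the sign convention used for the unstable mode.
    a, b = np.polyfit(np.log(np.abs(distances)), taus, deg=1)
    return a, b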
With the imposition of a perturbation of non-zero amplitude, the energy of the gravitational system changes no matter how small the perturbation amplitude is, indicating that the perturbed system deviates from the original ensemble. By fine-tuning the perturbation parameter z_c, the system converges to the cold hairy black hole in the new ensemble. That is to say, such a perturbation is absorbed by the excited state without triggering its dynamical instability. Certainly, the energy of the gravitational system is changed. Since the dynamically unstable excited state can be viewed as a non-equilibrium configuration at a certain moment in the evolution process, the realization of the above process shows that the critical dynamics is quite universal with respect to the initial value. In fact, for any non-equilibrium configuration in a time-dependent process, the critical phenomena will also appear by applying appropriate perturbations. The existence of the critical value z^*_c indicates that there is another critical value other than zero for the perturbation amplitude p. Since the combination of parameters (z_c=z^*_c,p=+0.01) corresponds to a critical black hole, naturally, with the parameter fixed at z_c=z^*_c, the gravitational system will also exhibit critical behaviors around p=+0.01. In other words, p=+0.01 is also a threshold for the perturbation with parameter z_c=z^*_c. In order to reveal the dependence of the final state of evolution on the parameters, we scan the parameter space (z_c, p) and display the spectrum of the final state in figure <ref>. It turns out that the two stable local ground states occupy different regions of the parameter space, separated by the boundary representing the excited state. If a one-parameter curve connects two different local ground states, there must be a point where it intersects the boundary. This point is the threshold of critical dynamics. Note that the points on the boundary p=0 represent the same state, namely the initial excited state. The existence of other boundaries indicates that for an appropriate form of perturbation, there exists a series of specific combinations of parameters that fail to trigger the single unstable mode of the excited state. Such a perturbation only changes the energy of the unstable gravitational system without changing its dynamical properties. Interestingly, there is a region inhabited by the hot hairy black hole with a negative scalar field, whose boundary also corresponds to the cold hairy black hole with a negative scalar field. Although there is degeneracy among the hot hairy black holes with positive and negative scalar fields due to the symmetry ϕ→-ϕ, the dynamical processes from a cold hairy black hole with a positive scalar field to these two degenerate states are not the same, as shown in figure <ref>. For the perturbation parameters on the boundary of the blue region, the corresponding perturbations are absorbed by the original excited state with a positive scalar field and induce a dynamical transition to the other degenerate excited state with a negative scalar field, as shown in figure <ref>. § CONCLUSION In the EMs theory, the thermodynamic and dynamic properties of the gravitational system depend heavily on the interaction between the scalar and Maxwell fields. For the case of the quadratic non-minimal coupling, the near-extremal RN-AdS black hole is dynamically unstable and can spontaneously scalarize to form a hairy black hole. 
Since there is only one excited state and one global ground state, this dynamical transition process is unidirectional and dull. In contrast, for the case of the non-minimal coupling function considered in our paper, which is dominated by a quartic term, the richer phase structure leads to the emergence of many exciting dynamical processes. In order to investigate the real-time dynamics, in the first step, we have revealed the phase structure of the model in the microcanonical ensemble. The related results show that the domain of existence of solutions consists of a branch of RN-AdS black holes and two branches of hairy black holes, which are called hot and cold hairy black holes, respectively. Among them, the branch of cold hairy black holes is smoothly connected with the branch of RN-AdS black holes at the extremal RN-AdS black hole and has an upper limit of energy. The branch of hot hairy black holes extends from this upper limit of energy to the over-extremal region. For a gravitational system with fixed energy, the cold hairy black hole with the minimum entropy is in an excited state, and the RN-AdS black hole and hot hairy black hole are in two local ground states due to their larger entropy. In the second step, we have studied the effective potentials and the quasinormal modes of these three classes of thermal phases. For both local ground states, the effective potential exhibits a potential barrier near the event horizon. However, for the excited state, there is a negative region in the effective potential near the event horizon. Such a potential well is generally associated with the tachyonic instability. From the linear perturbation theory, only the excited state possesses an unstable mode with an imaginary part greater than zero, indicating the dynamical instability. Both local ground states are dynamically stable at the linear level. In the third step, the real-time dynamics based on the two local ground states are revealed. By simulating the fully nonlinear accretion process of a scalar field onto a central black hole, we have discovered that the gravitational system can dynamically transition between the two local ground states. Moreover, there is a dynamical barrier in such a transition process, which is reflected in the existence of a threshold for the accretion strength p. For the case of an accretion strength less than the threshold, the scalar field disturbance is absorbed by the central black hole, increasing the energy without changing the essential properties. On the other hand, an accretion process with a strength greater than the threshold induces a drastic change in the gravitational configuration and triggers the corresponding transition process. Near the threshold, the gravitational system is attracted to an excited state in the dynamical intermediate process, and the time it stays there increases continuously as the accretion strength approaches the threshold. Interestingly, the dynamical barrier that needs to be overcome to trigger the transition process of RN-AdS black holes decreases with the increase of the coupling strength between the scalar and Maxwell fields, which is just the opposite of the case of hot hairy black holes. In the final step, we have investigated the real-time dynamics with the excited state as the initial value. On the one hand, due to the linear dynamical instability, there exists a special kind of critical dynamics with a zero threshold for the perturbation strength. 
The perturbation amplitudes with different signs push the gravitational system to fall into different stable local ground states. The specific selection of the final state of evolution depends on the other parameters of the perturbation. On the other hand, for a perturbation with a fixed non-zero amplitude, the parameter z_c describing the position of the perturbation can also induce the critical phenomenon. Such a result indicates that the perturbation with the threshold parameters fails to trigger the single unstable mode of the corresponding excited state, but only changes its energy after being absorbed by the central black hole. Furthermore, the spectrum of the final state of evolution is revealed in the parameter space (z_c, p). The two linearly stable local ground states occupy different regions, bounded by the dynamically unstable excited state. The research on the dynamics of gravitational systems has formed a standard framework, from the thermodynamic properties of the equilibrium state, to the linear stability analysis of the near-equilibrium state, all the way to the real-time dynamics simulation of the far-from-equilibrium state. This paper demonstrates this procedure by taking an EMs gravitational system in AdS spacetime as an example. Different from the spontaneous process of the unstable system, the excitation process of the stable ground state shown in this paper has more practical significance and observable effects due to the stability of objects in the real world. Although the EMs model is unlikely to be relevant to astrophysics, it has important applications to holography in the AdS spacetime. At present, some related dynamical studies have shown that such critical phenomena with dynamical barriers widely exist in various gravitational systems, such as gravitational collapse <cit.>, EMs <cit.>, EsGB <cit.> and holographic first-order phase transition <cit.> models. In addition, one can observe that the phase structure here has certain resemblances to that between the vacuum black rings and Myers-Perry black holes in higher-dimensional spacetimes <cit.>, thus it is expected that there will be similar critical dynamics phenomena. On the other hand, for a specific self-interaction potential of a scalar field, there are indications of the existence of multiple local ground states <cit.>, leading to the emergence of critical dynamics. For astronomical observations, the case of a rotating black hole with non-minimal coupling to gravity is the most favorable candidate <cit.>, and the dynamical behaviors during the corresponding critical transition process can be characterized by gravitational waves. § ACKNOWLEDGEMENT This research is partly supported by the Natural Science Foundation of China (NNSFC) under Grant Nos. 11975235, 12005077, 12035016, 12075202 and the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2021A1515012374.
http://arxiv.org/abs/2307.01955v1
20230704231925
Algorithme EM régularisé
[ "Pierre Houdouin", "Matthieu Jonkcheere", "Frederic Pascal" ]
stat.ML
[ "stat.ML", "cs.LG" ]
§ INTRODUCTION The EM algorithm <cit.> is widely used in unsupervised learning and statistical modeling to find a local maximum of the likelihood of unlabeled data and to estimate the associated labels. Proceeding iteratively, this algorithm estimates the unknown model parameters that increase the expected likelihood of the completed data given the parameters of the previous iteration. Historically developed for Gaussian mixture models (GMM) <cit.>, the algorithm was extended to Student-t distributions by <cit.> to better cope with outliers and heavy-tailed data. More recently, a generalization to elliptically symmetric distributions has been developed (<cit.> for clustering and <cit.> for classification). In signal processing, the dimension m of the data is often large compared with their number n: n ∼ m. Under such conditions, convergence problems arise when estimating the covariance matrices, which are no longer necessarily well conditioned, or even invertible, at each iteration. Estimating regularized covariance matrices is a commonly used technique to overcome this difficulty in clustering models <cit.>. In 2022, <cit.> introduced a new regularized version of the EM algorithm, RG-EM, which uses a new penalization of the likelihood that exploits the assumed underlying structure of the covariance matrices. <cit.> shows that better-conditioned estimators are obtained, with better clustering performance in regimes where the dimension is large compared with the number of samples. Here, we propose to evaluate the performance of RG-EM on real data. The paper is organized as follows: Section <ref> recalls the theoretical elements of the algorithm, Section <ref> contains the experiments on real data, and the conclusions, remarks and perspectives are given in Section <ref>. § REGULARIZED EM ALGORITHM We assume that each observation 𝐱_i ∈ℝ^m is drawn from a GMM in which each cluster 𝒞_k has its own mean vector μ_k ∈ℝ^m, its own symmetric positive-definite covariance matrix Σ_k ∈ℝ^m × m and its membership probability π_k ∈ [0, 1], with ∑_k π_k=1. The probability density of 𝐱_i then reads: f(𝐱_i |θ ) = (2π)^-m/2∑_k=1^K π_k |Σ_k |^-1/2 e^ -1/2 (𝐱_i-μ_k)^⊤Σ_k^-1 (𝐱_i-μ_k), with θ = (π_1,...,π_K,μ_1,...,μ_K,Σ_1,...,Σ_K) the vector of all unknown parameters. We also assume that prior information on the structure of the covariance matrix of each cluster is available: e.g., they are close to target matrices 𝐓_k, k=1,…,K. This structure is exploited by penalizing the likelihood with the Kullback-Leibler divergence (defined in <cit.>) between each Σ_k and 𝐓_k: Π(Σ_k,𝐓_k) = 1/2(tr(Σ_k^-1𝐓_k) - log | Σ_k^-1𝐓_k | - m ). 
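As a small illustration (a sketch, not the authors' code), the penalty just defined can be evaluated directly from its closed form, assuming a symmetric positive-definite Σ_k:

import numpy as np

def kl_penalty(sigma_k, target_k):
    # Pi(Sigma_k, T_k) = 0.5 * ( tr(Sigma_k^{-1} T_k) - log|Sigma_k^{-1} T_k| - m )
    m = sigma_k.shape[0]
    a = np.linalg.solve(sigma_k, target_k)   # Sigma_k^{-1} T_k
    _, logdet = np.linalg.slogdet(a)         # log-determinant of Sigma_k^{-1} T_k
    return 0.5 * (np.trace(a) - logdet - m)

The penalty vanishes when Σ_k equals the target 𝐓_k and grows as the two matrices move apart, which is what steers the regularized M-step estimate toward the target below.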
Let 𝐗 = (𝐱_1 ⋯𝐱_n) be the matrix of data drawn from our GMM; the penalized likelihood is then: ℓ_η( θ|𝐗 )= ℓ(𝐗|θ) - ∑_k=1^K η_k Π(Σ_k,𝐓_k), where η_1,...,η_K ≥ 0 are automatically tuned parameters. The E-step of the regularized EM algorithm is unchanged; for all i ∈ [[1,n]] and k ∈ [[1,K]] we have: p_ik^(t) = π̂_k^(t) |Σ̂^(t)_k|^-1/2 e^-1/2 (𝐱_i-μ̂_k^(t))^⊤Σ̂^t-1_k (𝐱_i-μ̂_k^(t))/∑_j=1^K π̂_j^(t) |Σ̂^t_j|^-1/2 e^-1/2 (𝐱_i-μ̂_j^(t))^⊤Σ̂^t-1_j (𝐱_i-μ̂_j^(t)) (see <cit.>). The M-step updates are as follows: π_k^(t+1) = 1/n∑_i=1^n p_ik^(t), μ̂_k^(t+1) = ∑_i=1^n w_ik^(t)𝐱_i, Σ̂_k^(t+1) = β_k^(t+1)∑_i=1^n w_ik^(t) (𝐱_i-μ̂_k^(t))( 𝐱_i-μ̂_k^(t))^⊤ + (1-β_k^(t+1))𝐓_k, where β_k^(t+1) = nπ_k^(t+1)/(η_k + nπ_k^(t+1)) and w_ik^(t) = p_ik^(t)/∑_i=1^n p_ik^(t) (see <cit.>). The target matrix 𝐓_k thus makes it possible to inject prior knowledge about Σ_k into the estimation. If no prior information is available, one can choose 𝐓_k = θ̂_k^0 𝐈_m, which simply ensures that the estimators are well conditioned. In that case the classical estimator of the scale parameter θ_k = tr(Σ_k)/m is used. In our experiments, we use θ̂_k^0 = tr(Σ̂_k^0)/m, where Σ̂_k^0 is the initial estimate of the covariance matrix obtained from a first clustering with the K-means algorithm. Within the EM algorithm, the value of the scale parameter is periodically updated with the new value of Σ̂_k. The choice of the regularization parameter is also essential. We use a cross-validation selection that maximizes the Gaussian log-likelihood <cit.>. Each η_k is estimated independently over a set of candidates {η_1,…,η_J} by the procedure described in Algorithm <ref>. § EXPERIMENTS ON SIMULATED DATA The regularized EM algorithm, RG-EM, is compared with the classical EM, denoted G-EM, as well as with the K-means algorithm. Both EM versions are implemented by us, and the Scikit-learn implementation of K-means is used. So that the classical EM converges even when the dimension is high and the data are few, a classical regularization with the matrix ϵ 𝐈_m is added at each iteration. We use n_init=10 and max_iter=200 for K-means, ϵ=10^-4 and max_iter=40 for G-EM, and L=5 (Algorithm <ref>) and max_iter=40 for RG-EM. As indicated in Section 2, the target matrices are 𝐓_k = tr(Σ̂_k^0)/m 𝐈_m. For RG-EM, the optimal η_k are recomputed every 10 iterations. The data generated from Gaussian distributions are split into K=3 clusters with priors π_k=1/3. The mean vector is drawn at random on the centered sphere of radius 2, while an autoregressive structure is used for the covariances. We choose (Σ_k)_i,j = ρ_k^|i-j| with coefficients 0.8, 0.5 and 0.2, which reflects an autoregressive structure in the data. Two configurations are tested, with n=1000 and n=500, respectively. The performance of the models is evaluated by computing their accuracy. To compute the clustering accuracy, we first compute the confusion matrix and then permute its columns so as to maximize the sum of the diagonal elements. The results are presented in Figure <ref>. In both configurations, there is a dimension beyond which the performance of the classical EM drops, corresponding to the ratio n/m ≈ 14. In contrast, the regularized EM manages to maintain similar performance from dimension 10 to dimension 100. 
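To make these updates concrete, a minimal NumPy/SciPy sketch of one RG-EM iteration is given below. It follows the notation of the previous section but is not the authors' implementation: the η_k are taken as given rather than selected by the cross-validation procedure of Algorithm <ref>, responsibilities are computed in the probability domain rather than in log-space, and the freshly updated mean is used in the weighted scatter, which is the usual EM convention.

import numpy as np
from scipy.stats import multivariate_normal

def rg_em_step(X, pis, mus, sigmas, targets, etas):
    # One E-step + regularized M-step, as written above (illustrative sketch).
    n, m = X.shape
    K = len(pis)

    # E-step: responsibilities p_ik (identical to the classical EM)
    dens = np.column_stack([
        pis[k] * multivariate_normal.pdf(X, mean=mus[k], cov=sigmas[k])
        for k in range(K)
    ])
    p = dens / dens.sum(axis=1, keepdims=True)

    # M-step with shrinkage of the covariance toward the target T_k
    new_pis, new_mus, new_sigmas = [], [], []
    for k in range(K):
        w = p[:, k] / p[:, k].sum()                  # weights w_ik
        pi_k = p[:, k].mean()                        # pi_k^(t+1)
        mu_k = w @ X                                 # mu_k^(t+1)
        diff = X - mu_k
        scatter = (w[:, None] * diff).T @ diff       # weighted scatter matrix
        beta_k = n * pi_k / (etas[k] + n * pi_k)
        new_pis.append(pi_k)
        new_mus.append(mu_k)
        new_sigmas.append(beta_k * scatter + (1.0 - beta_k) * targets[k])
    return new_pis, new_mus, new_sigmas

With eta_k = 0 one recovers the classical M-step (beta_k = 1), while a large eta_k pushes the covariance estimate toward the target matrix.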
Indeed, the cluster covariance matrices have a structure close to an identity matrix, especially when ρ is close to 0, so the chosen target matrix proves particularly relevant here. § EXPERIMENTS ON REAL DATA Each method is tested on real data sets from the UCI machine learning repository <cit.>. Two datasets are used: * Ionosphere: n=351, p=34 and K=2 * Breast cancer: n=699, p=9 and K=2 We use 70% of the data for training and 30% for evaluating performance. The results are averaged over 100 simulations, and the datasets are re-split every 10 simulations. Using a circular target matrix is not appropriate if some eigenvalues of the covariance matrices are close to 0. We therefore perform a principal component analysis to reduce the dimension, choosing the new dimension as the smallest one that preserves 95% of the information (variance). This corresponds to m=8 for Breast cancer and m=26 for Ionosphere. Since the new matrices are close to diagonal matrices, the choice 𝐓_k = θ̂_k^0 ·𝐈_m appears relevant. The results are shown in Figure <ref>. On both datasets, K-means performs noticeably worse than the EM methods, with a gap of about 10% in accuracy. On both datasets, the regularized version of EM leads to better results than the classical GMM algorithm, the dimension reduction having made the use of a target matrix proportional to the identity relevant. We can now look at how the performance of each method evolves as the ratio n/m becomes smaller and smaller. To observe this, we progressively remove data from the training set to reduce its size from 100% to 10%, which decreases the ratio n/m. The results are shown in Figure <ref>. On both datasets, K-means is not strongly affected by the reduction in the number of data points. Indeed, removing data does not change the geometric structure of the clusters, and K-means builds a similar boundary with few data points. Conversely, the estimators of the EM algorithms are affected by the decrease in the number of data points, which causes a drop in performance. On the breast cancer wisconsin dataset, both methods maintain similar performance until the number of data points is reduced by 80%; the performance then drops quickly to join that of the other methods. On the ionosphere dataset, both EM algorithms degrade progressively, but once again the regularized version drops more slowly and retains better performance. § CONCLUSION We have presented in this paper a regularized version of the EM-GMM algorithm that outperforms classical clustering methods in regimes where the number of data points is small compared with the dimension. In this new approach, the estimation of the covariance matrix is regularized with a penalty term that steers the estimate toward a target matrix. The optimal regularization coefficients η_k are selected with a cross-validation algorithm and regularly updated over the iterations. The performance obtained with this new algorithm is better than that obtained with classical algorithms. 
Moreover, the proposed method, which can be seen as an improvement of the classical EM, is relatively stable with respect to the ratio m/n. Future work will focus on learning the target matrices, as well as on the fully unsupervised version of RG-EM. References: Dempster2022Maximum A. P. Dempster, N. M. Laird and D. B. Rubin, Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society, 1977. Guorong2001EM Guorong Xuan, Wei Zhang and Peiqi Chai, EM algorithms of gaussian mixture model and hidden Markov model, Proceedings 2001 International Conference on Image Processing. Ingrassia2012Studies Ingrassia, Salvatore, Minotti, Simona C and Incarbone, Giuseppe, An EM algorithm for the student-t cluster-weighted modeling, Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 2012. roizman2019flexible Roizman, Violeta, Jonckheere, Matthieu and Pascal, Frédéric, A flexible EM-like clustering algorithm for noisy data, arXiv preprint arXiv:1907.01660, 2019. houdouin2022robust Houdouin, Pierre, Wang, Andrew, Jonckheere, Matthieu and Pascal, Robust classification with flexible discriminant analysis in heterogeneous data, ICASSP 2022. Teimour2021EM Mahdi Teimouri, EM algorithm for mixture of skew-normal distributions fitted to grouped data, Journal of Applied Statistics, 2021. Ying2014Regularized Ying Sun, Prabhu Babu and Daniel P. Palomar, Regularized Tyler's Scatter Estimator: Existence, Uniqueness, and Algorithms, IEEE Transactions on Signal Processing, 2014. pascal2014generalized Pascal, Frédéric, Chitour, Yacine and Quek, Yihui, Generalized robust shrinkage estimator and its application to STAP detection problem, IEEE Transactions on Signal Processing, 2014. yi2020shrinking Yi, Mengxi and Tyler, David E, Shrinking the Covariance Matrix using Convex Penalties on the Matrix-Log Transformation, Journal of Computational and Graphical Statistics, 2020. houdouin2022regularized Pierre Houdouin, Esa Ollila and Frederic Pascal, Regularized EM algorithm, https://arxiv.org/abs/2303.14989, 2022. UCI Dua Dheeru and Graff Casey, UCI Machine Learning Repository, University of California, Irvine, School of Information and Computer Sciences, 2017.
http://arxiv.org/abs/2307.02696v1
20230705235957
Multi-Foci Acoustic Field Generation Using Dammann Gratings for Phased Array Transducers
[ "Tatsuki Fushimi" ]
physics.app-ph
[ "physics.app-ph" ]
Multi-Foci Acoustic Field Generation Using Dammann Gratings for Phased Array Transducers. Tatsuki Fushimi, Institute of Library, Information and Media Science, University of Tsukuba, Kasuga 1-2, Tsukuba, Ibaraki, 305-0051; R&D Center for Digital Nature, University of Tsukuba, Kasuga 1-2, Tsukuba, Ibaraki, 305-0051; [email protected]. Phased array transducers can shape acoustic fields for versatile manipulation; however, generating multiple foci typically requires complex optimization. In this study, we show that Dammann gratings – binary phase gratings used in optics to generate arrays of equal intensity spots – can be adapted for acoustics to produce multiple equal-strength foci using a phased array transducer. The transducer elements were assigned phases of 0 or π based on a Dammann grating defined by transition points. Simulation results show that simple gratings with two transition points can generate fields with up to 12 foci with nearly equal acoustic pressures. Compared to conventional multi-focus phase optimization techniques, the Dammann grating approach is computationally efficient and enables facile reconfiguration of the focal pattern by adjusting the single focus or grating hologram. This study demonstrates that adapting binary phase functions from photonics can expand the capabilities of ultrasound for versatile acoustic manipulation tasks that require parallel actuation at multiple points. Acoustic radiation force has emerged as a powerful tool for remotely manipulating and controlling small particles in diverse fields. Unlike other remote manipulation techniques, such as photophoretic or electrostatic forces, acoustic force offers a distinct advantage due to its ability to exert force on target objects irrespective of their material properties. There have been significant advances in the field of acoustophoresis, driven by the development of phased array transducers (PAT) in recent years<cit.>. Specifically, the dynamic multi-focal capabilities of PAT have garnered considerable interest due to their potential in enabling parallelization<cit.>. The ability to simultaneously focus on multiple targets can enhance performance and efficiency in applications such as experiment automation<cit.>, acoustophoretic displays<cit.>, and ultrasonic haptic displays<cit.>. One of the most straightforward methods for generating a multi-focal field involves using a standing wave field, as illustrated in Fig. <ref>. A standing wave can be generated by the superposition of two counter-propagating waves, typically generated by a transducer and reflector or by another set of transducers. Although this approach is conceptually simple, it necessitates counter-propagating waves, which may not always be practical due to spatial constraints or accessibility limitations. An alternative approach involves specifying the desired focal positions and using a phase retrieval algorithm (or acoustic hologram optimizer, Fig. <ref>) to determine the appropriate phase sets that produce the desired field. A wide range of techniques have been developed to realize multi-focal fields, including the Eigensolver approach<cit.>, iterative backpropagation (IBP)<cit.>, GS-PAT<cit.>, and Diff-PAT<cit.>. Although these algorithms can be efficient, they require computational resources because the phases must be optimized numerically. 
In this study, we introduce the use of acoustic lenses based on Dammann gratings to generate multi-focal fields. Dammann gratings<cit.> are binary phase gratings that produce one- or two-dimensional arrays of equal intensity in optics and have been widely used in Fourier optics. However, to date, their implementation for acoustics has remained largely unexplored. A notable advantage of the proposed method is that the multi-focal field can be directly specified by the Dammann grating function. This eliminates the need for optimization algorithms and expands the method's applicability to a range of applications. It should be noted that petal beams<cit.> can also generate multi-foci fields, yet the Dammann grating presents an alternative strategy. The expansion of methodological diversity benefits the study of acoustic holograms. Moreover, whereas petal beams create a focused field around the propagation axis, a Dammann grating forms a standing-wave-like field along the propagation axis. A Dammann grating can be defined as follows <cit.>: g(x) = ∑_n=0^N(-1)^n rect[ (x - 0.5(x_n+1 + x_n))/(x_n+1-x_n)], where the transition points are collected in ascending order in x_n = [x_0, x_1, ..., x_n], with N transition points and boundary values x_0 = 0 and x_N+1=0.5. Furthermore, the rectangular function is defined as rect(x) = 1 if |x| < 0.5 and 0 if |x| ≥ 0.5. The function g(x) returns a binary output (-1 or 1). The coordinate x is a normalized source position with 100 equally spaced points spanning [0, 0.5] (to create a high-resolution Dammann grating grid from which the transducer array phases can be interpolated). The obtained function g(x) is replicated multiple times to generate a square matrix, G_x(x, y) = [g(x), g(x), ..., g(x)], and the same square matrix, G_y(x, y), is also created along the y axis by repeating the process using the same transition points in the y axis. The Dammann grating for the whole array is then given by H(x,y) = G_x(x,y)+ G_y(x,y). In H(x,y), the locations with 0, -2, and 2 are replaced with the values 1, -1, and -1, respectively[https://github.com/aakhtemostafa/SSPIM]. Then, nearest-point interpolation is applied to H(x,y) to identify the phase at the closest point to each transducer in normalized form. Finally, +1 and -1 are replaced by 0 and π, respectively, to create the Dammann lens (ϕ_Dammann). A visual aid for the generation process is available in the supplementary material, and code to replicate the process is provided via the Data Availability section. To determine the ultimate capability of Dammann gratings in the context of mid-air acoustics, we first consider an ideal case where the resolution of the phased array transducer is high (81 by 81 transducers with 2-mm pitch, 40 kHz) with two transition points (N=2). The pressure field generated by the Dammann gratings is calculated using Huygens' linear superposition method (p(𝐱, 𝐱_𝐭) = |Σ^T_t p_t(𝐱, 𝐱_𝐭)|), where p_t(𝐱, 𝐱_𝐭, 𝐱_𝐟) = P_A/R(𝐱, 𝐱_𝐭) e^j(kR(𝐱, 𝐱_𝐭)+ϕ_Dammann+ϕ_focal(𝐱_𝐭, 𝐱_𝐟)), with P_A=1 and k=2π f_0/c_0, and where 𝐱 and 𝐱_𝐭 denote the field and transducer positions, respectively. Furthermore, ϕ_focal(𝐱_𝐭, 𝐱_𝐟) = -(2π f_0/c_0) [ d_tf(𝐱_𝐭, 𝐱_𝐟) - ||𝐱_𝐟|| ] is the acoustic hologram for a single focus, and d_tf(𝐱_𝐭, 𝐱_𝐟) = ||𝐱_𝐟 - 𝐱_𝐭||. Additionally, f_0 and c_0 = 346 m s^-1 denote the acoustic frequency and the speed of sound in mid-air (λ = 0.00865 m), respectively. The acoustic pressure fields generated by Dammann gratings, with two transition points (x_1 and x_2) incremented in 30 equally spaced points between 0 and 0.5, are shown in the supplementary material. 
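The grating construction described above can be sketched as follows. This is an independent NumPy illustration rather than the released code, and the example transition points at the end are arbitrary values inside the scanned range.

import numpy as np

def dammann_1d(transitions, num=100):
    # g(x) on a normalized grid x in [0, 0.5], from ascending transition points
    # [x_0 = 0, x_1, ..., x_N, x_{N+1} = 0.5].
    x = np.linspace(0.0, 0.5, num)
    g = np.zeros_like(x)
    pts = np.asarray(transitions, dtype=float)
    for n in range(len(pts) - 1):
        mask = (x >= pts[n]) & (x < pts[n + 1])   # rect over [x_n, x_{n+1})
        g[mask] = (-1.0) ** n
    g[x >= pts[-1]] = (-1.0) ** (len(pts) - 2)    # include the right edge x = 0.5
    return x, g

def dammann_phase(transitions, num=100):
    # H = G_x + G_y, then {0} -> +1 and {-2, +2} -> -1, then {+1, -1} -> {0, pi}.
    _, g = dammann_1d(transitions, num)
    H = g[None, :] + g[:, None]                   # same transition points in x and y
    sign = np.where(H == 0, 1.0, -1.0)
    return np.where(sign > 0, 0.0, np.pi)

# Example with two transition points (values chosen arbitrarily for illustration)
phase_map = dammann_phase([0.0, 0.2, 0.35, 0.5])

In an actual simulation this normalized phase map would still have to be sampled at each transducer position (the nearest-point interpolation step) and added to the single-focus hologram ϕ_focal.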
The focal point was fixed at (0,0,0.1) m, and the depicted pressure field is at z = 0.1 m. The pressure amplitude is normalized to the acoustic pressure amplitude (p_max) at the focal point when a single focus is specified. Given that x_1 ≤ x_2, only the upper half of the combination matrix is applicable. An examination of each combination reveals that Dammann gratings can specify a wide range of acoustic pressure fields, and even subtle changes in x_1 or x_2 can drastically change the outcome of the field. Multiple focal spots with nearly equal pressure amplitudes are observed for certain transition point combinations. To identify useful Dammann gratings, criteria defining the desired multi-focal field properties must be established. A useful multi-focal field is considered to have multiple focal spots with significant acoustic pressure and focal spots of nearly equal acoustic pressure strength. A peak-finding algorithm was utilized to identify focal spots in the 2D acoustic pressure fields (x–y plane at z = 0.1 m) generated by the Dammann gratings. The algorithm initially detects all local maxima in the 2D field matrix, including those in flat regions. The local maxima are then sorted in descending order of acoustic pressure magnitude. The algorithm iteratively discards any local maximum that is within 5 mm of a higher-pressure local maximum, retaining only the local maxima that are sufficiently separated and exhibit the highest acoustic pressures. Focal spot groups are considered to produce a "valid" multi-focal field when the maximum acoustic pressure at a focal spot (p^peak) is greater than 32.5% of the single-focus pressure (0.325p_max). Additionally, the acoustic pressures at the focal spots must lie in the range between 0.707p^peak and p^peak (within -3 dB). These criteria ensure that the focal spots have significant acoustic pressures and are of nearly equal acoustic pressure strength, producing a useful multi-focal field. The combinations of transition points that yielded valid focal spots are summarized in Fig. <ref>. The majority of combinations were considered invalid (75.3%), but valid multi-focal fields with 4, 5, 8, 9 and 12 peaks (n_p) were obtained. The percentages of fields with 4, 5, 8, 9, and 12 peaks were 18.3%, 3.23%, 1.29%, 1.72%, and 0.215%, respectively. Based on these groups, one grating with the highest pressure amplitude, p^peak, was selected for each number of focal spots, n_p. This narrowed down the selection to one grating for each n_p. The results for n_p = 4, 8, 9, and 12 are shown in Fig. <ref>, where the white crosses indicate the locations of the identified focal points (see the supplementary material for n_p=5). The phase profiles and 3D visualizations of the acoustic field for each grating are shown in the Supplementary Material. Although the fields with n_p = 4 and 12 (Fig. <ref> (a)-(d)) appear similar to each other, this is inevitable due to the relatively simple filtering process. The gratings with a large number of focal points are of interest as they can potentially serve as "array-generating" multi-focal fields for experimental automation in biology, chemistry, and medicine, effectively replacing standing wave fields<cit.>. Until now, an ideal PAT with high spatial resolution has been assumed, but the identified traps should translate well to a conventional phased array. Specifically, a 16 by 16 phased array with Murata MA40S4S transducers (40 kHz, 10-mm diameter, p_0 = 0.221^-1 with 20 V <cit.>) was assumed. 
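Before turning to the realistic array, the peak selection and validity filtering described above can be sketched as follows. This is a simplified illustration rather than the code used in the study: the 3-by-3 local-maximum test, the uniform grid spacing dx, and the reading of the criteria as requiring at least two peaks inside the -3 dB window are our own assumptions, while the 5 mm exclusion radius and the 0.325 p_max and 0.707 p^peak thresholds are taken from the text.

import numpy as np
from scipy.ndimage import maximum_filter

def find_focal_peaks(p_field, dx, min_sep=5e-3):
    # Local maxima (including plateaus), strongest first, greedily discarding any
    # maximum that lies within min_sep of an already retained, stronger one.
    local_max = maximum_filter(p_field, size=3) == p_field
    idx = np.argwhere(local_max)
    order = np.argsort(p_field[local_max])[::-1]
    kept = []
    for i in idx[order]:
        if all(np.linalg.norm((i - j) * dx) >= min_sep for j in kept):
            kept.append(i)
    return [(p_field[tuple(i)], tuple(i)) for i in kept]

def is_valid_multifocus(peaks, p_single, thresh=0.325, window=0.707):
    # Validity criteria: p_peak > 0.325 * p_max of a single focus, and at least two
    # retained peaks within -3 dB (factor 0.707) of the strongest one.
    if not peaks:
        return False
    p_peak = peaks[0][0]
    n_in_window = sum(1 for p, _ in peaks if p >= window * p_peak)
    return (p_peak > thresh * p_single) and (n_in_window >= 2)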
A directivity function (D(θ) = 2J_1 (kr sinθ)/kr sinθ) was added to simulate piston-source transducers, and the results are shown in Fig. <ref>. In principle, the acoustic traps, as simulated in Fig. <ref> (a)-(d), are well recovered by the PAT. However, the field is less focused and has more ambient noise than in the ideal case. In particular, the difference in the pressure amplitude between peaks is evident in Fig. <ref> (b) and (d). We further note that n_p=5 in the supplementary material does not fully recover the same pressure distribution as that in Fig. <ref>. However, despite these challenges, the conventional PAT is sufficiently resolved to generate Dammann gratings. One of the distinguishing advantages of the Dammann grating, when compared to the hologram optimization method, is the capability to translate the multi-focus field by changing only the single-focus hologram, without re-optimization of the field. This is due to the fact that the Dammann grating is a binary phase hologram with 0 and π, and it creates phase singularities<cit.>. The translation capability of the multi-focus lens using n_p=4 is shown in Fig. <ref> (a)–(b), and the mean pressure amplitudes at the peaks were 1948, 2260, and 2686 Pa at -5λ, -2.5λ, and 0 shift, respectively. The field can also be rotated by rotating the Dammann grating phase (using the "imrotate" function in MATLAB; the applied phase is shown in the supplementary material). The field rotations by π/8 and π/4 radians are shown in Fig. <ref> (c)–(d), respectively. The mean pressure amplitude at the foci stays relatively constant, with pressure amplitudes of 2209 and 2531 Pa. These characteristics are similar to those of trap lenses (such as twin, vortex, and bottle trap lenses). A characteristic that is not confirmed to be shared between the trap signatures and the Dammann grating is the ability to combine the lens with an optimized lens such as IBP<cit.> (see supplementary material). This may be related to phase encoding limits as discussed by Memoli et al. <cit.>. However, further investigation is necessary. When multi-focus fields are generated using phased array transducers, a rule of thumb for setting a "sensible" target acoustic pressure amplitude is to assume that the sum of the acoustic pressure amplitudes at the foci should not exceed the single-focus pressure amplitude p_max (i.e., each peak shares the acoustic pressure amplitude available to the single focus). Thus, for a conventional multi-focus optimizer, the target normalized pressure amplitude for a multi-focus field with equal pressure amplitudes is the reciprocal of the number of peaks (n_p)<cit.>. The examination of Fig. <ref> reveals that the mean normalized pressure amplitudes are 0.325, 0.284, 0.287, 0.307, and 0.269 for n_p = 4, 5, 8, 9, and 12, respectively. This demonstrates that a higher target pressure amplitude can be set than the rule of thumb suggests. This is an important insight into the limits of PAT, and it aids in designing more appropriate performance tests for acoustic hologram optimizers. Although in this paper relatively simple Dammann gratings with two transition points (N=2) were investigated, the number of transition points can be further increased. Hence, this still leaves the possibility of discovering Dammann gratings with large n_p. The limitation of the Dammann grating is imposed by the spatial resolution. For the PAT, the limitation is the physical size of the transducers, and it can be improved by the application of metamaterials<cit.>. 
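For completeness, the directivity-weighted Huygens superposition used above can be written compactly. The sketch below is an illustration, not the simulation code of the paper: it assumes transducer normals along +z, uses SciPy's Bessel function J_1, takes the transducer radius as 5 mm (half the stated 10-mm diameter), and leaves the Dammann and focusing phases to be supplied through the phases argument.

import numpy as np
from scipy.special import j1

def pressure_field(points, trans_pos, phases, f0=40e3, c0=346.0, r_t=5e-3, p_a=1.0):
    # |p| at points (M, 3) from transducers at trans_pos (T, 3) carrying phases (T,),
    # with piston directivity D(theta) = 2 J1(k r sin(theta)) / (k r sin(theta)).
    k = 2.0 * np.pi * f0 / c0
    d = points[:, None, :] - trans_pos[None, :, :]       # (M, T, 3)
    dist = np.linalg.norm(d, axis=-1)                    # (M, T)
    cos_theta = d[..., 2] / dist                         # normals assumed along +z
    sin_theta = np.sqrt(np.clip(1.0 - cos_theta**2, 0.0, 1.0))
    arg = k * r_t * sin_theta
    directivity = np.where(arg > 1e-9, 2.0 * j1(arg) / np.maximum(arg, 1e-9), 1.0)
    field = p_a * directivity / dist * np.exp(1j * (k * dist + phases[None, :]))
    return np.abs(field.sum(axis=1))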
Application of Dammann gratings in underwater acoustics can also be envisioned. Acoustic fields that can act in lieu of standing waves show potential for creating arrays for biology/chemistry<cit.>, additive manufacturing, and medical applications<cit.> where it is difficult to generate standing waves. In summary, this paper presents a method that uses Dammann gratings to simply and effectively generate multi-focal acoustic fields. The Dammann grating function directly specifies the multi-focal field, eliminating the need for computational optimization. The findings of the study provide new insights into multi-focal acoustic pressure fields and indicate that Dammann gratings are promising for applications requiring PAT. § SUPPLEMENTARY MATERIAL See supplementary material for a visual guide and the pressure field output for all combinations of Dammann gratings, the n_p=5 field, the holograms and 3D visualizations of the acoustic field for the selected gratings, the holograms for translation/rotation, and the attempt with the IBP optimizer. § ACKNOWLEDGEMENT We gratefully acknowledge the support of AI tools, OpenAI's GPT-4, and Anthropic's Claude. The authors have diligently reviewed and verified all generated outputs to ensure their accuracy and relevance. We would like to thank Editage [http://www.editage.com] for editing and reviewing this manuscript for English language. § CONFLICT OF INTEREST The authors have no conflicts to disclose. § AUTHOR CONTRIBUTIONS Tatsuki Fushimi: Conceptualization; Methodology; Software; Validation; Visualization; Writing. § DATA AVAILABILITY The data that support the findings of this study are available within the article and its supplementary material. Code is openly available on GitHub at [<https://github.com/DigitalNatureGroup/Dammann_Grating_Acoustics>].
http://arxiv.org/abs/2307.01577v1
20230704091101
Conceptual Cognitive Maps Formation with Neural Successor Networks and Word Embeddings
[ "Paul Stoewer", "Achim Schilling", "Andreas Maier", "Patrick Krauss" ]
cs.AI
[ "cs.AI", "q-bio.NC" ]
The human brain possesses the extraordinary capability to contextualize the information it receives from our environment. The entorhinal-hippocampal complex plays a critical role in this function, as it is deeply engaged in memory processing and in constructing cognitive maps using place and grid cells. Comprehending and leveraging this ability could significantly augment the field of artificial intelligence. The multi-scale successor representation serves as a good model for the functionality of place and grid cells and has already shown promise in this role. Here, we introduce a model that employs successor representations and neural networks, along with word embedding vectors, to construct a cognitive map of three separate concepts. The network adeptly learns two differently scaled maps and situates new information in proximity to related pre-existing representations. The dispersion of information across the cognitive map varies according to its scale - either being heavily concentrated, resulting in the formation of the three concepts, or spread evenly throughout the map. We suggest that our model could potentially improve current AI models by providing multi-modal context information for any input, based on a similarity metric between the input and pre-existing knowledge representations. § INTRODUCTION The memories in our brains make up our past experience and shape how we see the world. The hippocampus is heavily involved in the domain of memory processing and transfers short-term to long-term memories <cit.><cit.><cit.>. Furthermore, a main function of the hippocampus is navigation in both spatial and non-spatial abstract mental spaces <cit.><cit.><cit.><cit.>. The hippocampus is expected to build cognitive maps, which can represent arbitrary information and the relationships between its elements <cit.>. Place and grid cells are cell types in the entorhinal-hippocampal complex which are involved in the formation of these maps <cit.>. In addition, memory can be represented at varying degrees of detail along the longitudinal axis of the hippocampus, such as in diverse spatial resolutions <cit.>. These varying scales assist in navigation over differing horizons in terms of spatial navigation <cit.>. When we consider abstract conceptual spaces, these diverse scales may denote varying levels of abstraction <cit.>. Broadly speaking, these multi-scale cognitive maps facilitate flexible planning, enable the broad generalization of concepts, and foster intricate representation of information <cit.>. Inspired by the properties of the entorhinal-hippocampal complex, we use the successor representation (SR). The SR is a mathematical model which can be used to model place cell activity <cit.>. Furthermore, models for multi-scale successor representations have been proposed to enable maps with different scales <cit.>, and the SR can also be used for flexible sequence generation <cit.>. In previous studies, we have already demonstrated that the SR and artificial neural networks can be used to model cognitive maps for different scenarios. We have recreated place cell firing patterns in spatial navigation experiments with rodents and have built a simplified language model <cit.>. 
Additionally, we built cognitive maps with handcrafted features of different animal species and used the map to interpolate novel and hidden information <cit.>. Furthermore, we showed that word class representations spontaneously emerge in a deep neural network trained on next word prediction <cit.>. Here, we go a step further and use generative features of large language models to build a cognitive map. In particular, the word embeddings of these models are used as semantic features for different categories of objects. This might serve as a first building block to incorporate and access further information related to words and sentences, which could lead to the formation of concepts using the properties of navigation and memory processing inspired by the hippocampus. § METHODS §.§ Successor Representation V(s) = E[∑^∞_t=0γ^t R(s_t)|s_0=s] The Successor Representation was designed to describe the expected future reward V(s) from a current state s over a time period t, accumulating the rewards R(s_t) of all successor states. The discount factor γ controls the weight of the successor states in the reward V(s) <cit.> (cf. eq.<ref>). Stachenfeld et al. used the SR to successfully model place cell behaviour <cit.>. V(s) = ∑_s' M(s,s')R(s') M = ∑^∞_t=0γ^t T^t Cognitive maps can be constructed with the transition probability matrix of the state space, which gives the relationship between the states. The matrix can then also be used to calculate the successor representation matrix (cf. eq.<ref>). The discount factor γ might furthermore be used to model the varying grid size in the entorhinal cortex <cit.> and could therefore be a tool to encode information from broader concepts to more detailed individual properties <cit.> (cf. Figure <ref>). §.§ Word Category Data Set The constructed cognitive map is based on word embeddings of 20 words from each of 3 categories: animals, vehicles and furniture. 10 different words for each category are used as validation data. The word embeddings hold features of each word and can be used to compare the similarity of words to each other. We used the spacy library to calculate the embeddings. The transition probabilities of the state space were calculated via the similarity function of spacy, which uses the cosine similarity between the embeddings A_s and B_s' (cf. eq. <ref>). T(s,s')=cos(θ)=A_s· B_s'/||A_s||||B_s'|| §.§ Neural Network Architecture In order to learn the successor representation of the state space, we set up a neural network. The network receives as input the vector of the word embedding. Right after the input layer, a Dropout layer with a rate of 0.8 is placed to increase the robustness to novel unseen inputs. One hidden layer is used before the output layer, which maps, with a Softmax activation function, to the 60 known states, which represent the word embeddings of the training data (cf. Figure <ref>). Two different networks are trained for two different discount factors, γ = 1.0 and 0.7, with t=5. The training was performed over 500 epochs with a batch size of 20 and a learning rate of 1e-5. §.§ Multi-dimensional scaling We used multi-dimensional scaling (MDS) to project the data onto a 2D surface, although T-distributed stochastic neighbor embedding (t-SNE) is a common method for producing low-dimensional embeddings from high-dimensional data <cit.>. Nevertheless, t-SNE's low-dimensional projections can be highly sensitive to specific parameter settings <cit.>, are susceptible to noise, and may often scramble rather than retain the global structure in data <cit.>. 
In contrast, multi-dimensional scaling (MDS) offers an effective method for visualizing high-dimensional point clouds by projecting them onto a 2-dimensional plane <cit.>. A key benefit of MDS is that it does not require parameter tuning, and it maintains all mutual distances between the points, thus preserving both the global and local structure of the data it represents. When interpreting patterns as points in high-dimensional space and dissimilarities between patterns as distances between corresponding points, MDS is an elegant method to visualize high-dimensional data. By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora <cit.>, hidden layer representations (embeddings) of artificial neural networks <cit.>, structure and dynamics of highly recurrent neural networks <cit.>, or brain activity patterns assessed during e.g. pure tone or speech perception <cit.>, or even during sleep <cit.>. In all these cases the apparent compactness and mutual overlap of the point clusters permits a qualitative assessment of how well the different classes separate. §.§ Generalized Discrimination Value (GDV) We used the GDV to calculate cluster separability as published and explained in detail in <cit.>. Briefly, we consider N points 𝐱_n=(x_n,1,⋯,x_n,D), n=1..N, distributed within D-dimensional space. A label l_n assigns each point to one of L distinct classes C_l=1..L. In order to become invariant against scaling and translation, each dimension is separately z-scored and, for later convenience, multiplied by 1/2: s_n,d=1/2·(x_n,d-μ_d)/σ_d. Here, μ_d=1/N∑_n=1^N x_n,d denotes the mean, and σ_d=√(1/N∑_n=1^N(x_n,d-μ_d)^2) the standard deviation of dimension d. Based on the re-scaled data points 𝐬_n=(s_n,1,⋯,s_n,D), we calculate the mean intra-class distances for each class C_l, d̅(C_l)=2/(N_l (N_l-1))∑_i=1^N_l-1∑_j=i+1^N_l d(s_i^(l),s_j^(l)), and the mean inter-class distances for each pair of classes C_l and C_m, d̅(C_l,C_m)=1/(N_l N_m)∑_i=1^N_l∑_j=1^N_m d(s_i^(l),s_j^(m)). Here, N_k is the number of points in class k, and s_i^(k) is the i^th point of class k. The quantity d(a,b) is the Euclidean distance between a and b. Finally, the Generalized Discrimination Value (GDV) is calculated from the mean intra-class and inter-class distances as follows: GDV = 1/√(D)[1/L∑_l=1^L d̅(C_l) - 2/(L(L-1))∑_l=1^L-1∑_m=l+1^L d̅(C_l,C_m)], where the factor 1/√(D) is introduced for dimensionality invariance of the GDV, with D as the number of dimensions. Note that the GDV is invariant with respect to a global scaling or shifting of the data (due to the z-scoring), and also invariant with respect to a permutation of the components in the D-dimensional data vectors (because the Euclidean distance measure has this symmetry). The GDV is zero for completely overlapping, non-separated clusters, and it becomes more negative as the separation increases. A GDV of -1 already signifies a very strong separation. § RESULTS The neural network is able to learn the constructed cognitive maps successfully. We projected the predictions of the network for all training and validation word embedding vectors using multi-dimensional scaling (MDS). Remarkably, three object category representations spontaneously emerge in the network with a discount factor γ=1.0 (cf. Figure <ref> B). 
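Since the GDV is the quantitative measure used for the comparisons that follow, a direct NumPy transcription of the definition above may be helpful; this is an illustrative sketch, not the published reference implementation.

import numpy as np
from scipy.spatial.distance import pdist, cdist

def gdv(points, labels):
    # Generalized Discrimination Value for points (N, D) with class labels (N,).
    X = np.asarray(points, dtype=float)
    y = np.asarray(labels)
    S = 0.5 * (X - X.mean(axis=0)) / X.std(axis=0)   # z-score and multiply by 1/2
    classes = np.unique(y)
    L, D = len(classes), X.shape[1]

    intra = np.mean([pdist(S[y == c]).mean() for c in classes])
    inter = np.mean([
        cdist(S[y == classes[i]], S[y == classes[j]]).mean()
        for i in range(L) for j in range(i + 1, L)
    ])
    return (intra - inter) / np.sqrt(D)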
In comparison, the discount factor γ=0.3 leads to an evenly spread feature space (cf. Figure <ref> A). Furthermore, we calculated the generalized discrimination value (GDV) <cit.> to quantify the clustering of the representations. The GDV for γ=1.0 considering all data points is -0.44, indicating strong clustering. If we only consider the training samples, the GDV is -0.43, and for the validation samples it is -0.39. However, the predictions derived from the network with the lower discount factor γ=0.3 do not form dense clusters. This is also reflected by the respective GDVs, indicating weak clustering. Here, for all data points the GDV is -0.38, whereas the GDV for the training samples is even smaller in magnitude at -0.35, and for the validation data it is -0.32. § DISCUSSION Our study demonstrates that it is possible to construct and learn differently scaled cognitive maps by using similarity measures between inputs and known states reflecting word embeddings. We propose that this mechanism can be paralleled with the process of memory recognition, where novel sensory inputs are compared to previously stored memories. Hence, we can extract contextual information from new inputs using the cognitive maps created by the learned memories. This strategy could potentially enhance the performance of current AI systems. For instance, let's consider large language models (LLMs). Over the past year, there has been significant improvement in LLMs, and they are particularly proficient at text generation tasks. Nonetheless, despite their superior formal competence, they still fall short in terms of functional competence <cit.>. Our model could potentially overcome these limitations. A similarity measure of word embeddings to arbitrary additional information, like factual data, visual representations or spatial locations, could provide more context information for text input (cf. Figure <ref>). Furthermore, it could use this technique to "fact check" its propositions against a memory database that is not generated solely from predictions on the training data. How our proposed model can be incorporated into an LLM will be part of future research. In general, the model could also be useful for many other applications, by contributing more context awareness depending on the current situation. § ACKNOWLEDGMENT This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): grants KR 5148/2-1 (project number 436456810), KR 5148/3-1 (project number 510395418) and GRK 2839 (project number 468527017) to PK, and grant SCHI 1482/3-1 (project number 451810794) to AS, and by the Emerging Talents Initiative (ETI) of the University Erlangen-Nuremberg (grant 2019/2-Phil-01 to PK).
http://arxiv.org/abs/2307.01326v1
20230703200048
Sensitivities on the anomalous quartic $γγγγ$ and $γγγZ$ couplings at the CLIC
[ "E. Gurkanli" ]
hep-ph
[ "hep-ph" ]
Department of Physics, Sinop University, Turkey. It is essential to directly investigate the self-couplings of gauge bosons in the Standard Model (SM) due to its non-Abelian nature, as these couplings play a significant role in comprehending the gauge structure of the model. Discrepancies between the Standard Model's expectations and the measured values of gauge boson self-couplings would serve as strong evidence for the existence of new physics phenomena that extend beyond the Standard Model. Such deviations could provide valuable insights into the nature of new physics and potentially lead to a deeper understanding of fundamental particles and their interactions. In this study, we examine the sensitivities of anomalous couplings associated with dimension-8 operators that affect the γγγγ and Zγγγ quartic vertices. The study focuses on the process e^- γ→ e^-γγ with the incoming photon described by the Weizsäcker–Williams approximation at the stage-3 scenario of the Compact Linear Collider (CLIC), which refers to a CoM energy of 3 TeV. In line with the CLIC options, we take into account both an unpolarized and a ∓80% polarized electron beam with the related integrated luminosities of L=5, 4, 1 ab^-1 under systematic uncertainties of δ_sys=0%, 3%, 5%. The obtained sensitivities on the anomalous quartic gauge couplings (aQGCs) f_T,j/Λ^4 for the process e^- γ→ e^-γγ at √(s)= 3 TeV and various polarizations are improved by a factor of 2-200 compared with the experimental results. 12.60.-i, 14.70.Bh, 14.70.Hp Keywords: Electroweak interaction, Anomalous couplings, Models beyond the Standard Model. Sensitivities on the anomalous quartic γγγγ and γγγ Z couplings at the CLIC E. Gurkanli[[email protected]] August 1, 2023 ============================================================================ § INTRODUCTION The non-Abelian nature of the SM implies gauge boson self-interactions in the form of triple and quartic gauge couplings (TGCs and QGCs). Possible deviations from the SM triple and quartic gauge interactions, known as anomalous TGCs and QGCs (aTGCs and aQGCs), play an important role in checking the validity of the SM and in probing new physics contributions beyond the Standard Model (BSM) <cit.>. In the SM, the neutral gauge boson couplings ZZγγ, Zγγγ and γγγγ are excluded. In this manner, contributions from these vertices are sources of new physics and a sign of physics beyond the Standard Model. These effects can be described with Effective Field Theory (EFT) by adding higher-dimensional operators to the SM Lagrangian, which provides a model-independent framework to systematically parametrize the effects of new physics. The generation of the neutral TGCs (NTGCs) ZVγ (V=γ,Z) by dim-8 operators is discussed in <cit.>. Numerous experimental and phenomenological studies have been conducted to explore high-dimensional gauge operators, both at present and future colliders, focusing on proton-proton, electron-proton, and electron-positron collisions <cit.>. The current CLIC scenarios assume luminosities of 1.0 ab^-1, 2.5 ab^-1, and 5.0 ab^-1 at center-of-mass energies of √(s)=0.38 TeV, 1.5 TeV, and 3 TeV, respectively <cit.>. Also, CLIC enables ±80% polarisation options for the electron beam, but there is no option for positron polarisation. Here, the unpolarized and the polarized electron beams with P_e^-=-80% and 80% are associated with the luminosities of L=5 ab^-1, L=4 ab^-1 and L=1 ab^-1, respectively <cit.>.
Using the polarization options provides a better signal-to-background ratio, and thus better sensitivities on the aQGCs, at high luminosities. Besides, collisions at the fundamental level in a lepton collider provide a cleaner environment compared with hadron colliders. This translates into the advantages of smaller systematic uncertainties, easier analysis and more precise measurements. The cross-section of any process can be written in terms of the cross-sections for the electron and positron beam polarizations (P_e^-,P_e^+) as given in the following equation <cit.>. σ(P_e^+,P_e^-)=1/4{(1+P_e^+)(1+P_e^-)σ_RR+(1-P_e^+)(1-P_e^-)σ_LL +(1-P_e^+)(1+P_e^-)σ_RL+(1+P_e^+)(1-P_e^-)σ_LR} , Here, σ_LL and σ_RR are the cross-sections for both beams left-handed and both beams right-handed, respectively. Similarly, σ_RL and σ_LR represent the cross-sections for the mixed beam polarizations. In this study, the process e^- γ→ e^-γγ with a CoM energy √(s)=3 TeV at CLIC, for both an unpolarized and a polarized (P_e^-=-80%, 80%) electron beam, is taken into account to examine the new physics effects via aQGCs under various systematic uncertainties. The paper is organized as follows. Section II defines the dim-8 EFT operators for the anomalous Zγγγ and γγγγ couplings. The applied analysis techniques and the obtained sensitivities on the f_T,j/Λ^4 couplings at 95% Confidence Level (C.L.) are discussed in Section III and Section IV, respectively. Finally, we summarize our results in Section V. § DIM-8 OPERATORS FOR THE ANOMALOUS ZΓΓΓ AND ΓΓΓΓ COUPLINGS The EFT is a framework that extends the reach of the SM of particle physics. It provides a systematic way to describe the effects of new physics (NP). Here, the starting point is to identify the effective operators. The EFT Lagrangian consists of the SM Lagrangian plus an infinite tower of higher-dimension operators which are suppressed by a cut-off scale. The EFT Lagrangian is given as follows. L_eff= L_SM+∑_ic_i^(6)/Λ^2 O_i^(6) +∑_jc_j^(8)/Λ^4 O_j^(8)+..., Processes including aQGCs can be produced via tri-boson production and vector boson scattering (VBS) processes. Here, VBS processes refer to the scattering of two vector bosons with each other, which can produce quartic gauge boson interactions. On the other hand, tri-boson production refers to the production of three vector bosons, which can occur through various channels like WWV, ZVγ and ZZV (V=γ,Z). Compared with the tri-boson processes, the VBS processes exhibit a higher sensitivity to aQGCs <cit.>. In this study, we focus on the dim-8 anomalous Zγγγ and γγγγ quartic vertices using effective Lagrangian techniques, adding dim-8 operators to the generic SM Lagrangian. In general, the formalism for aQGCs has been discussed in the literature <cit.>. The effective Lagrangian including the dim-8 operators for quartic couplings is given as follows. L_eff= L_SM +∑_k=0^1f_S, k/Λ^4O_S, k +∑_i=0^7f_M, i/Λ^4O_M, i+∑_j=0,1,2,5,6,7,8,9^f_T, j/Λ^4O_T, j, Here, O_S, k, O_M, i and O_T, j are the dim-8 operators, and f_S, k/Λ^4, f_M, i/Λ^4 and f_T, j/Λ^4 are the corresponding coefficients of the effective operators. In Eq. 3, there are three types of aQGC operators. The first set of operators (f_S, k/Λ^4 O_S, k) is built from covariant derivatives of the Higgs field and induces the ZZZZ, WWZZ and WWWW couplings. On the other hand, the f_T, j/Λ^4 and f_M, i/Λ^4 operators are built from gauge boson field strength tensors and from both types of fields (mixed field), respectively. Below, the aQGC operators corresponding to these three types are presented.
∙ Scalar field: O_S, 0 = [(D_ρΦ)^† (D_σΦ)]× [(D^ρΦ)^† (D^σΦ)], O_S, 1 = [(D_ρΦ)^† (D^ρΦ)]× [(D_σΦ)^† (D^σΦ)], O_S, 2 = [(D_ρΦ)^† (D_σΦ)]× [(D^σΦ)^† (D^ρΦ)]. ∙ Tensor field: O_T, 0 = Tr[W_σλ W^σλ]× Tr[W_αβW^αβ], O_T, 1 = Tr[W_λμ W^νβ]× Tr[W_νβW^λμ], O_T, 2 = Tr[W_λμ W^νσ]× Tr[W_σμW^μλ], O_T, 5 = Tr[W_λσ W^λσ]× B_μνB^μν, O_T, 6 = Tr[W_λμ W^νσ]× B_νσB^λμ, O_T, 7 = Tr[W_λν W^νσ]× B_σμB^μλ, O_T, 8 = B_σλ B^σλB_μνB^μν, O_T, 9 = B_λν B^νσB_σμB^μλ. ∙ Mixed field: O_M, 0 = Tr[W_νλ W^νλ]× [(D_σΦ)^† (D^σΦ)], O_M, 1 = Tr[W_νλ W^λσ]× [(D_σΦ)^† (D^νΦ)], O_M, 2 = [B_νλ B^νλ]× [(D_σΦ)^† (D^σΦ)], O_M, 3 = [B_νλ B^λσ]× [(D_σΦ)^† (D^νΦ)], O_M, 4 = [(D_νΦ)^† W_σλ (D^νΦ)]× B^σλ, O_M, 5 = [(D_νΦ)^† W_σλ (D^λΦ)]× B^σν + h.c. , O_M, 7 = [(D_νΦ)^† W_σλ W^σν (D^λΦ)]. In the above equations, the subscripts S, M and T refer to scalar (or longitudinal), mixed and transversal field content, respectively. In these equations, Φ is the Higgs doublet, D_μΦ=(∂_μ + igW^j_μσ^j/2+ i/2g'B_μ )Φ is the covariant derivative of the Higgs field, and the Pauli matrices are denoted by σ^j (j=1,2,3). Here, B^μν and W^μν represent the gauge field strength tensors. As mentioned above, we focus on the dim-8 aQGC operators. A list of quartic vertices altered by the dim-8 operators is given in Table I. In this study, the evaluated process is sensitive to the T-type operators. Because of this, we only consider the f_T,j/Λ^4 parameters with j=0,1,2,5,6,7,8,9. The obtained sensitivities on the f_T,8/Λ^4 and f_T,9/Λ^4 parameters are of particular importance because these operators involve only the electroweak neutral gauge bosons. On the other hand, the analytical expressions of the anomalous Z γγγ coupling and the related anomalous parameters for the process e^- γ→ e^-γγ are given in Eqs. (22)-(25). V_Zγγγ,1=F^μνF_μνF^αβZ_αβ, V_Zγγγ,2=F^μνF_ναF^αβZ_βμ. Here, Z^μν=∂^μ Z^ν-∂^ν Z^μ. The related coefficients are given in the following equations. α_Zγγγ,1=c_W^3 s_W/Λ^4( f_T,5+ f_T,6-4f_T,8)+c_W s_W^3/Λ^4(f_T,1+f_T,2-f_T,5-f_T,6), α_Zγγγ,2=c_W^3 s_W/Λ^4( f_T,7- 4f_T,9)+c_W s_W^3/Λ^4(f_T,2-f_T,7). § ANALYSIS FOR THE AQGC VIA THE PROCESS E^- Γ→ E^-ΓΓ In this study, we examine the process e^- γ→ e^-γγ to probe the anomalous Zγγγ and γγγγ quartic vertices in the stage-3 scenario at CLIC with electron polarizations of P_e^-=-80%, 0%, 80%. Here, the incoming photon from the positron beam is treated in the Weizsäcker–Williams approximation (WWA). The following equation presents the spectrum of photons emitted by the positron beam <cit.>. f_γ(x)=α/(π E_e){[(1-x+x^2/2)/x]log(Q_max^2/Q_min^2)-(m_e^2x/Q_min^2) (1-Q_min^2/Q_max^2)-(1/x)[1-x/2]^2log((x^2E_e^2+Q_max^2)/(x^2E_e^2+Q_min^2))} Here, x=E_γ/E_e and Q_max^2 is the photon's maximum virtuality. On the other hand, the expression for Q_min^2 is given as: Q_min^2=m_e^2x^2/(1-x). Using this methodology, the cross-section of the e^- γ→ e^-γγ process at CLIC can be calculated as follows. σ=∫ f_γ(x) dσ̂ dE_1. Possible Feynman diagrams of the process e^- γ→ e^-γγ are shown in Fig. 1. Here, the black dots show the effective vertices. In the analysis, we use the cross-sections to constrain the anomalous f_T, j/Λ^4 parameters related to the Zγγγ and γγγγ quartic vertices. The total cross-section of the process e^- γ→ e^-γγ including the anomalous interactions is composed of the sum of the SM part, the purely new physics contribution and the interference part. This is given as follows. σ_Tot( √(s), f_T,j/Λ^4) = σ_SM( √(s) ) +σ_INT( √(s), f_T,j/Λ^4) + σ_NP(√(s), f^2_T,j/Λ^8), j= 0-2, 5-9, In the above equation, σ_SM is the SM cross-section.
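As an aside, the photon spectrum f_γ(x) given above can be evaluated numerically. The sketch below is a direct transcription of that formula under the bracketing assumed in the comments; the beam energy E_e and the maximum virtuality Q_max^2 are illustrative placeholders, not values fixed by the analysis.

```python
# Numerical sketch of the WWA photon spectrum given above. The grouping of the
# flattened formula is our reading; E_e and Q2_max are placeholder values.
import numpy as np

ALPHA = 1.0 / 137.035999   # fine-structure constant
M_E = 0.000510998950       # electron mass in GeV

def f_gamma(x, E_e=1500.0, Q2_max=2.0):
    """WWA photon number density, x = E_gamma / E_e (energies in GeV)."""
    Q2_min = M_E**2 * x**2 / (1.0 - x)
    term1 = ((1.0 - x + 0.5 * x**2) / x) * np.log(Q2_max / Q2_min)
    term2 = (M_E**2 * x / Q2_min) * (1.0 - Q2_min / Q2_max)
    term3 = (1.0 / x) * (1.0 - 0.5 * x)**2 * np.log(
        (x**2 * E_e**2 + Q2_max) / (x**2 * E_e**2 + Q2_min))
    return (ALPHA / (np.pi * E_e)) * (term1 - term2 - term3)

# The spectrum falls steeply in x, so soft photons dominate the convolution
# sigma = integral of f_gamma(x) * dsigma_hat over the photon energy.
x_grid = np.linspace(0.01, 0.99, 99)
weights = f_gamma(x_grid)
```

Returning to the decomposition of the total cross-section: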
The term σ_INT consists of the contributions coming from the interference between the SM and the EFT operators. Lastly, σ_NP is the cross-section contributed purely by the EFT operators. During the analysis, only one coefficient at a time is taken to be non-zero. To evaluate the total cross-section σ_Tot(√(s),f_T,j/Λ^4) of the process e^- γ→ e^-γγ, we used MadGraph5_aMC@NLO <cit.>. The operators described in Equations (7)-(14) were embedded into MadGraph5_aMC@NLO using the FeynRules package <cit.>, which served as a Universal FeynRules Output (UFO) module <cit.>. In the first step, we evaluate the SM background and the signal cross-sections for one anomalous parameter at a time without any cuts, in order to determine the optimized kinematic cuts for the next step. In Fig. 2, we show the transverse momentum of the final state photons, p_T^γ. It must be mentioned that the given transverse momentum is the scalar sum over all photons. As can be seen, the p_T^γ cut is very decisive, and a chosen value of 350 GeV is ideal for separating the signals from the SM background. The rest of the selected cuts follow the default values of MadGraph5_aMC@NLO. p_T^l and η^γ are the transverse momentum of the final state leptons and the pseudorapidities of the photons, respectively. We consider p_T^l > 10 GeV and η^γ< 2.5, labeled as Cut-1. Next, we applied angular separation cuts (Δ R =( (Δϕ)^2+ (Δη)^2)^1/2) for the final state charged lepton and photons, namely Δ R(γ, γ) > 0.4 and Δ R(γ, l) > 0.4, labeled as Cut-2. The final state photons of the process e^- γ→ e^-γγ are useful to separate the signal and SM background events because, at large values of p_T^γ, the high-dimensional operators can affect the transverse momentum of the photon. Therefore, p_T^γ > 350 GeV is applied to the final state photons, labeled as Cut-3. In Table III, a cut flow chart is given to show the effect of the selected kinematic cuts on the number of events for the signals and the SM background step by step. In the table, we give f_T,0/Λ^4, f_T,2/Λ^4, f_T,5/Λ^4, f_T,7/Λ^4 and f_T,8/Λ^4 for the signals, as well as the SM background. Here, all coupling values for the signals are taken to be 1 TeV^-4, one at a time. In the table, Cut-0 refers to the default kinematic cuts that regulate the singularities and divergences in the phase space. After applying the selected cuts, we can see that the SM background events decrease dramatically compared with the signals. We also give the total cross-section as a function of the anomalous parameters f_T,j/Λ^4 in Figs. 3-5 to compare the signal cross-sections with each other for the different electron beam polarizations P_e^-=-80%, 0%, 80% after applying the cuts given in Table III. Another important topic in the analysis is the systematic uncertainties. We have obtained the sensitivities of the anomalous parameters f_T,j/Λ^4 under systematic uncertainties of 0%, 3%, 5% at CLIC. Possible sources of the systematic uncertainties are jet-photon misidentification, the integrated luminosity and photon efficiencies. These are listed in detail in Table II of reference <cit.>. Other phenomenological studies have also used the same systematic uncertainty values to obtain sensitivities <cit.>. In the next section, we take our analysis one step further and obtain the sensitivities on the anomalous parameters f_T, j/Λ^4.
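Before turning to the limit setting, the event selection described above (Cut-1 to Cut-3) can be summarized in a short sketch. The event record used here (dictionaries holding p_T, η and φ for the photons and the electron) is a hypothetical convention chosen for illustration, not the actual MadGraph5_aMC@NLO output format.

```python
# Sketch of the cut flow Cut-1 to Cut-3 described above; the event structure is a
# hypothetical convention, not the actual MadGraph5_aMC@NLO output format.
import math

def delta_r(a, b):
    dphi = math.remainder(a['phi'] - b['phi'], 2.0 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(dphi, a['eta'] - b['eta'])

def passes_cuts(event):
    photons, electron = event['photons'], event['electron']
    # Cut-1: lepton transverse momentum and photon pseudorapidity (read as |eta| < 2.5)
    if electron['pt'] <= 10.0 or any(abs(g['eta']) >= 2.5 for g in photons):
        return False
    # Cut-2: angular separation between photons and between each photon and the lepton
    if any(delta_r(g1, g2) <= 0.4
           for i, g1 in enumerate(photons) for g2 in photons[i + 1:]):
        return False
    if any(delta_r(g, electron) <= 0.4 for g in photons):
        return False
    # Cut-3: scalar sum of the photon transverse momenta
    return sum(g['pt'] for g in photons) > 350.0

# Cut-flow bookkeeping over a hypothetical event sample:
# n_passed = sum(passes_cuts(ev) for ev in events)
```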
§ EXPECTED SENSITIVITY ON THE ANOMALOUS F_T, J/Λ^4 PARAMETERS AT THE CLIC A straightforward approach based on the chi-square method can be used to constrain the parameters f_ T,j/Λ^4, j=0-2, 5-9 of new physics related to the anomalous Zγγγ and γγγγ couplings. χ^2(f_T,j/Λ^4)=((σ_SM(√(s))-σ_Total(√(s), f_T,j/Λ^4))/(σ_SM(√(s))√((δ_st)^2 + (δ_sys)^2)))^2, Here, σ_Total(√(s), f_T,j/Λ^4) is the total cross-section, which contains the contributions of the anomalous couplings and the SM part, while σ_SM(√(s)) is the SM cross-section. On the other hand, δ_sys and δ_st=1/√(N_SM) are the systematic and statistical errors, respectively. In addition, N_SM= L×σ_SM is the number of SM events, where L is the integrated luminosity. The sensitivities on the anomalous parameters f_T,j/Λ^4 are obtained at 95% C.L. using the cross-sections of the process e^- γ→ e^-γγ after the selected cuts given in Table II, for one coupling at a time. The analysis is performed for the √(s)=3 TeV option with integrated luminosities of L=1 ab^-1 (P_e^-=80%), L=4 ab^-1 (P_e^-=-80%) and L=5 ab^-1 (P_e^-=0%) under systematic uncertainties of δ_sys=0%, 3%, 5% at the CLIC collider. Figs. 3-5 show the variation of the total cross-section with respect to the anomalous couplings after applying the selected cuts in Table II for the different electron polarization options P_e^-=-80%, P_e^-=0% and P_e^-=80%, respectively. As seen in these figures, the cross-section depends strongly on the dim-8 operators and increases rapidly with their values. Sensitivities on the anomalous Zγγγ and γγγγ couplings at 95% C.L. from the process e^- γ→ e^-γγ at the CLIC are given in Table IV. As can be seen, the f_T,5/Λ^4, f_ T,8/Λ^4, and f_ T,9/Λ^4 couplings have restrictive sensitivities of [-1.31; 0.84] × 10^-2 TeV^-4, [-1.79; 2.05] × 10^-3 TeV^-4 and [-0.50; 0.32] × 10^-2 TeV^-4, respectively. In Figs. 6-13, we compare the sensitivities on the anomalous f_T,j/Λ^4 parameters with the latest experimental limits. In those figures, we consider the electron polarization P_e^-=80% with integrated luminosities of L = 0.1, 0.5 and 1 ab^-1. Similarly, the P_e^-=-80% polarization with integrated luminosities of L = 0.1, 1, 4 ab^-1 and the unpolarized electron beam with L = 0.1, 1, 5 ab^-1 are also taken into account. Sensitivities on the f_T,0/Λ^4, f_T,1/Λ^4, f_T,2/Λ^4, f_T,5/Λ^4, f_T,6/Λ^4 and f_T,7/Λ^4 couplings at P_e^-=-80% are more stringent than those obtained for the other options P_e^-=0% and 80%. On the other hand, the f_T,8/Λ^4 and f_T,9/Λ^4 couplings have their best sensitivities at P_e^-=0%. § CONCLUSIONS In this study, a specific class of interactions, the aQGCs, is handled within the EFT framework, motivated by the non-Abelian gauge structure of the SM. In this context, the study provides an opportunity to test the validity of the SM and gives important clues about the presence of new physics. Additionally, lepton colliders are very suitable for this purpose due to their clean environment compared with the LHC, which motivates performing this simulation at CLIC. Furthermore, the CLIC program provides a high CoM energy of up to 3 TeV in the stage-3 scenario with high integrated luminosity. We also take into account the electron polarization options of CLIC to see their effect on the aQGC couplings. With these motivations, we evaluate the process e^- γ→ e^-γγ at the CLIC to probe the dim-8 anomalous Zγγγ and γγγγ couplings.
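As an illustration of the limit-setting procedure of the previous section, the following sketch scans one coupling at a time and reads off the 95% C.L. interval from the χ² defined above. The SM cross-section and the linear/quadratic coefficients of the signal cross-section are placeholder numbers, and χ² ≤ 3.84 is the standard one-parameter 95% C.L. threshold assumed here.

```python
# Sketch of the chi-square scan described above; sigma_sm, a and b are placeholder
# numbers (fb), not values from the analysis, and 3.84 is the usual one-parameter
# 95% C.L. threshold.
import numpy as np

def chi2(f, sigma_sm, a, b, lumi_ab, delta_sys):
    """chi^2(f) for a single coupling f, following the definition in the text."""
    sigma_tot = sigma_sm + a * f + b * f**2        # SM + interference + pure new physics
    n_sm = sigma_sm * lumi_ab * 1.0e3              # expected SM events (L: ab^-1 -> fb^-1)
    delta_st = 1.0 / np.sqrt(n_sm)                 # statistical error
    return ((sigma_sm - sigma_tot) /
            (sigma_sm * np.sqrt(delta_st**2 + delta_sys**2)))**2

f_grid = np.linspace(-0.05, 0.05, 20001)           # coupling in TeV^-4
allowed = f_grid[chi2(f_grid, sigma_sm=10.0, a=5.0, b=4.0e3,
                      lumi_ab=5.0, delta_sys=0.0) <= 3.84]
print(f"95% C.L. interval: [{allowed.min():.4f}, {allowed.max():.4f}] TeV^-4")
```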
The obtained sensitivities on the dim-8 parameters f_T,j/Λ^4 are between 2-200 times stronger than the experimental limits. Stringent bounds on the anomalous f_T,5,8,9/Λ^4 couplings are f_T5/Λ^4= [-1.31; 0.84] × 10^-2 TeV^-4 with L=4 ab^-1, P_e^-=-80% and δ_sys=0%, f_T8/Λ^4= [-1.79; 2.05] × 10^-3 TeV^-4 and f_T9/Λ^4= [-0.50; 0.32] × 10^-2 TeV^-4 with L=5 ab^-1, P_e^-=0% and δ_sys=0%. Consequently, the O_T,8 and O_T,9 operators have the optimal sensitivities related to the anomalous Zγγγ and γγγγ couplings for the process e^- γ→ e^-γγ. arXiv:2106.11082 A. Tumasyan, et al., [CMS Collaboration], Phys. Rev. D 104, 072001 (2021). PRD93-2016 G. Aad, et al., [ATLAS Collaboration], Phys. Rev. D 93, 112002 (2016). JHEP10-2017 A. M. Sirunyan, et al., [CMS Collaboration], JHEP 10, 072 (2017). JHEP10-2021 A. Tumasyan, et al., [CMS Collaboration], JHEP 10, 174 (2021). CPC44-2020 John Ellis, Shao-Feng Ge, Hong-Jian He and Rui-Qing Xiao, Probing the Scale of New Physics in the ZZγ Coupling at e^+e^- Colliders, Chin. Phys. C44, 063106 (2020); [arXiv:1902.06631]. PRD107-2023 John Ellis, Hong-Jian He and Rui-Qing Xiao, Probing neutral triple gauge couplings at the LHC and future hadron colliders, Phys. Rev. D107, 035005 (2023); [arXiv:2206.11676]. Eboli-PRD93-2016 O. J. P. Eboli, M. C. Gonzalez-Garcia, Phys. Rev. D93, 093013 (2016). Marantis-JPCS2105-2021 A. Marantis, et al., J. Phys.: Conf. Ser. 2105, 012014 (2021). Degrande-AP335-2013 C. Degrande, et al., Effective Field Theory: A Modern Approach to Anomalous Couplings, Annals Phys. 335, 21 (2013); [arXiv:1205.4231]. Hao-PRD104-2021 Hao-Lin Li, Zhe Ren, Jing Shu, Ming-Lei Xiao, Jiang-Hao Yu, and Yu-Hui Zheng, Complete set of dimension-eight operators in the standard model effective field theory, Phys. Rev. D104, 015026 (2021). Degrande-JHEP02-2014 C. Degrande, A basis of dimension-eight operators for anomalous neutral triple gauge boson interactions, J. High Energy Phys. 02, 101 (2014). Hays-JHEP02-2019 C. Hays, A. Martin, V. Sanz, and J. Setford, On the impact of dimension-eight SMEFT operators on Higgs measurements, J. High Energy Phys. 02, 123 (2019). Gounaris-PRD65-2002 G. J. Gounaris, J. Layssac, and F. M. Renard, Addendum to off-shell structure of the anomalous Z and gamma self-couplings, Phys. Rev. D65, 017302 (2002). Wrishik-JHEP08-2022 Wrishik Naskar, Suraj Prakash and Shakeel Ur Rahaman, EFT Diagrammatica. Part II. Tracing the UV origin of bosonic D6 CPV and D8 SMEFT operators, JHEP 08, 190 (2022). Murphy-JHEP10-2020 C. W. Murphy, Dimension-8 operators in the Standard Model Effective Field Theory, JHEP 10, 174 (2020); [arXiv:2005.00059]. Ellis-China64-2021 J. Ellis, H.-J. He and R.-Q. Xiao, Probing new physics in dimension-8 neutral gauge couplings at e^+e^- colliders, Sci. China Phys. Mech. Astron. 64, 221062 (2021); [arXiv:2008.04298]. Murphy-JHEP04-2021 C. W. Murphy, Low-energy effective field theory below the electroweak scale: dimension-8 operators, JHEP 04, 101 (2021); [arXiv:2012.13291]. Green-RMP89-2017 D. R. Green, P. Meade, and M.-A. Pleier, Rev. Mod. Phys. 89, 035008 (2017), Multi-Boson Interactions at the LHC. PLB760-2016 V. Khachatryan, et al., [CMS Collaboration], Phys. Lett. B760, 448-468 (2016); [arXiv:1602.07152 [hep-ex]]. JHEP12-2018 M. Aaboud, et al., [ATLAS Collaboration], JHEP 12, 010 (2018); [arXiv:1810.04995 [hep-ex]]. PLB540-2002 P. Achard, et al., [L3 Collaboration], Phys. Lett. B 540, 43-51 (2002). PRD70-2004 G. Abbiendi, et al., [OPAL Collaboration], Phys. Rev. D 70, 032005 (2004). PRD62-2000 B. 
Abbott, et al., [D0 Collaboration], Phys. Rev. D 62, 052005 (2000). PRD88-2013 V. M. Abazov, et al., [D0 Collaboration], Phys. Rev. D 88, 012005 (2013). PRL113-2014 G. Aad, et al., [ATLAS Collaboration], Phys. Rev. Lett. 113, 141803 (2014). PRL115-2015 G. Aad, et al., [ATLAS Collaboration], Phys. Rev. Lett. 115, 031802 (2015). PRD96-2017 M. Aaboud, et al., [ATLAS Collaboration], Phys. Rev. D 96, 012007 (2017). EPJC77-2017 M. Aaboud, et al., [ATLAS Collaboration], Eur. Phys. J. C 77, 646 (2017). PRD95-2017 M. Aaboud, et al., [ATLAS Collaboration], Phys. Rev. D 95, 032001 (2017). PRD90-2014 S. Chatrchyan, et al., [CMS Collaboration], Phys. Rev. D 90, 032008 (2014). PLB774-2017 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Lett. B 774, 682 (2017). PLB770-2017 V. Khachatryan, et al., [CMS Collaboration], Phys. Lett. B 770, 380 (2017). JHEP06-2017 V. Khachatryan, et al., [CMS Collaboration], JHEP 06, 106 (2017). PRL120-2018 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Rev. Lett. 120, 081801 (2018). PLB795-2019 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Lett. B 795, 281 (2019). PLB798-2019 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Lett. B 798, 134985 (2019). PLB812-2021 A. M. Sirunyan, et al. [CMS Collaboration], Phys. Lett. B 812, 135992 (2021). JHEP06-2020 A. M. Sirunyan, et al., [CMS Collaboration], JHEP 06, 076 (2020). PLB809-2020 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Lett. B 809, 135710 (2020). PLB811-2020 A. M. Sirunyan, et al., [CMS Collaboration], Phys. Lett. B 811, 135988 (2020). Stirling W. J. Stirling, A. Werthenbach, Eur. Phys. J. C14, 103 (2000). PRD89-2014 A. Gutiérrez-Rodríguez, C. G. Honorato, J. Montano, and M. A. Pérez, Phys. Rev. D89, 034003 (2014). EPJC13-2000 G. Belanger, F. Boudjema, Y. Kurihara, D. Perret-Gallix, and A. Semenov, Eur. Phys. J. C13, 283 (2000). PLB515-2001 G. Montagna, M. Moretti, O. Nicrosini, M. Osmo, F. Piccinini, Phys. Lett. B515, 197 (2001). Eboli O. J. P. Eboli, M. C. Gonzalez-Garcia and S. F. Novaes, Nucl. Phys. B411, 381 (1994). PRD75-2007 S. Atag and I. Sahin, Phys. Rev. D75, 073003 (2007). AHEP2016-2016 M. Koksal, V. Ari, A. Senol, Adv. High Energy Phys. 2016, 8672391. JHEP10-121-2021 S. C. Inan and A. V. Kisselev, JHEP 10, 121 (2021). EPJC81-2021 S. C. Inan and A. V. Kisselev, Eur. Phys. J. C81, 664 (2021). EPJP130-2015 M. koksal, Eur. Phys. J. Plus 130, 75 (2015). PRD104-2021 Ji-Chong Yang, Yu-Chen Guo, Chong-Xing Yue, and Qing Fu, Phys. Rev. D104, 035015 (2021). JHEP06-142-2017 C. Baldenegro, S. Fichet, G. von Gersdorff and C. Royon, JHEP 06, 142 (2017). JPG26-2000 P. J. Dervan, A. Signer, W. J. Stirling, and A. Werthenbach, J. Phys. G26, 607 (2000). Eboli2 O. J. P. Eboli, M. C. Gonzalez-Garcia, and S. M. Lietti, S. F. Novaes, Phys. Rev. D63, 075008 (2001). PRD81-2010 E. Chapon, C. Royon and O. Kepka, Phys. Rev. D81, 074003 (2010). PRD85-2012 R. S. Gupta, Phys. Rev. D85, 014006 (2012). arXiv:0908.2020 J. de Favereau de Jeneret, V. Lemaitre, Y. Liu, S. Ovyn, T. Pierzchala, K. Piotrzkowski, X. Rouby, N. Schul, M. Vander Donckt, arXiv:0908.2020 [hep-ph]. PRD86-2012 I. Sahin and B. Sahin, Phys. Rev. D86, 115001 (2012). Eboli4 O. J. P. Eboli, M. C. Gonzalez-Garcia, and S. M. Lietti, Phys. Rev. D69, 095005 (2004). arXiv:2109.12572 A. Senol, C. O. Karadeniz, K. Y. Oyulmaz, C. Helveci, and H. Denizli, Nucl. Phys. B980, 115851 (2022). Christian-EPJC2017 Christian Fleper, et al., Eur. Phys. J. C77 120 (2017). Barger-PRD1995 V. D. Barger, K.-m. Cheung, T. Han, R. J. N. Phillips, Phys. Rev. D52, 3815 (1995). 
Han-PLB1998 T. Han, H.-J. He, C. P. Yuan, Phys. Lett. B422, 294 (1998). Boos-PRD1998 E. Boos, H. J. He, W. Kilian, A. Pukhov, C. P. Yuan, P. M. Zerwas, Phys. Rev. D57, 1553 (1998). Boos-PRD2000 E. Boos, H. J. He, W. Kilian, A. Pukhov, C. P. Yuan, P. M. Zerwas, Phys. Rev. D61, 077901 (2000). Beyer-EPJC2006 M. Beyer, W. Kilian, P. Krstonosic, K. Monig, J. Reuter, E. Schmidt, H. Schroder, Eur. Phys. J. C48, 353 (2006). Daniele-RNC1997 Daniele Dominici, Riv. Nuovo Cim. 20, 1 (1997). Stephen-arxiv:9505252 Stephen Godfrey, arXiv:hep-ph/9505252v1. Stirling-PLB1999 W. James Stirling, Anja Werthenbach, Phys. Lett. B466, 369 (1999). Belanger-PLB1992 G. Belanger, F. Boudjema, Phys. Lett. B288, 201 (1992). Senol-AHEP2017 A. Senol, M. Koksal, and S. C. Inan, Adv. High Energy Phys. 2017, 6970587 (2017). Cuypers-PLB1995 F. Cuypers and K. Kolodziej, Phys. Lett. B344, 365 (1995). Cuypers-IJMPA1996 F. Cuypers, Int. J. Mod. Phys. A11, 1525 (1996). Ji-arXiv:2204.08195 Ji-Chong Yang, Zhi-Bing Qing, Xue-Ying Han, Yu-Cheng Guo, Tong Li, arXiv:2204.08195 [hep-ph]. ATLAS1 G. Aad, et al., [ATLAS Collaboration], Phys. Rev. Lett. 115, 031802 (2015). ATLAS2 M. Aaboud, et al., [ATLAS Collaboration], Eur. Phys. J. C77, 646 (2017). CMS1 S. Chatrchyan, et al., [CMS Collaboration], Phys. Rev. D90, 032008 (2014). JPG2022 A. Gutiérrez-Rodríguez, V. Ari, E. Gurkanli, M. Koksal and M. A. Hernández-Ruíz, J. Phys. G49, 105004 (2022). JPG2023 E. Gurkanli, J. Phys. G50, 015002 (2022). Koksal-son M. Koksal, arXiv:2306.11894 [hep-ph]. franc R. Franceschini, P. Roloff, U. Schnoor, and A. Wulzer, The Compact Linear Collider (CLIC): Physics Potential, arXiv: 1812.07986 [hep-ex]. CLIC-1812.06018 T. K. Charles, et al., [The CLIC, CLICdp Collaborations], The Compact Linear Collider (CLIC)-2018 Summary Report, CERN-2018-005, arXiv:1812.06018 [physics.acc-ph]. kek2017 K. Fujii, et al., [LCC Physics Working Group], DESY 17-237, KEK Preprint 2017-57, SLAC-PUB-17197. Eboli1 O. J. P. Eboli, M. B. Magro, P. G. Mercadante, and S. F. Novaes, Phys. Rev. D52, 15 (1995). Eboli3 O. J. P. Eboli, M. C. Gonzalez-Garcia and J. K. Mizukoshi, Phys. Rev. D74, 073005 (2006). Eboli-PRD101-2020 Eduardo da Silva Almeida, O. J .P. Éboli and M. C. Gonzalez-Garcia, Phys. Rev. D101, 113003 (2020). Degrande C. Degrande, et al., arXiv: 1309.7890 [hep-ph]. Budnev V. M. Budnev, I. F. Ginzburg, G. V. Meledin and V. G. Serbo, Phys. Rep. 15, 181 (1975). Chen2 M. S. Chen, T. P. Cheng, I. J. Muzinich and H. Terazawa, Phys. Rev. D7, 3485 (1973). MadGraph J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP 06, 128 (2011). AAlloul A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, Comput. Phys. Commun. 185, 2250 (2014). CDegrande C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer and T. Reiter, Comput. Phys. Commun. 183, 1201 (2012). PLB478-2000 M. Acciarri, et al., [L3 Collaboration], Phys. Lett. B478, 39 (2000). PRD98-2018 A. Gutiérrez-Rodríguez, M. Koksal, A. A. Billur, M. A. Hernández-Ruíz, Phys. Rev. D98, 095013 (2018). PRD98-015017-2018 M. Koksal, A. A. Billur, A. Gutiérrez-Rodríguez, and M. A. Hernández-Ruíz, Phys. Rev. D98, 015017 (2018).
http://arxiv.org/abs/2307.00207v1
20230701031334
Low-Carbon Operation of Power Systems with Energy Storage via Electricity-Emission Prices
[ "Rui Xie", "Yue Chen" ]
math.OC
[ "math.OC" ]