Hybrid NOMA for STAR-RIS Enhanced Communication
Jiayi Lei,
Tiankui Zhang, Senior Member, IEEE,
Yuanwei Liu, Senior Member, IEEE
Jiayi Lei and Tiankui Zhang are with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {leijiayi,zhangtiankui }@bupt.edu.cn).
Yuanwei Liu is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: [email protected]).
August 1, 2023
In this paper, a hybrid non-orthogonal multiple access (NOMA) framework for the simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) enhanced cell-edge communication is investigated. Specifically, one transmitted user and one reflected user are paired as one NOMA-pair, while multiple NOMA-pairs are served via time division multiple access (TDMA). The objective is to maximize the minimum downlink rate by jointly optimizing the user pairing, decoding order, passive beamforming, power and time allocation. A novel two-layer iterative algorithm is proposed to solve the highly coupled problem. Simulation results show that: 1) the proposed framework outperforms the conventional reflecting-only-RIS-based and the OMA-based frameworks; 2) the beamforming design and power allocation dominate the achieved performance; 3) increasing the number of passive elements and shortening the distance between BS and STAR-RIS are two effective ways to further improve the performance.
Hybrid NOMA, resource allocation, STAR-RIS.
§ INTRODUCTION
In traditional wireless communication networks, cell-edge users always suffer from poor quality-of-service (QoS) due to severe channel fading and inter-cell interference. For such practical scenarios, the reconfigurable intelligent surface (RIS) is a promising technology to enhance communications for cell-edge users. With the ability to smartly reconfigure the wireless propagation environment, the RIS is able to provide additional and high-quality transmission links <cit.>. In addition, thanks to its small size, light weight, and high extensibility, the RIS can be easily deployed in existing networks. However, the conventional RIS is designed to only reflect the incident signals, which requires the source and the destination to be on the same side of the RIS. To address this issue, a novel simultaneous transmitting and reflecting RIS (STAR-RIS) is proposed in <cit.>. The incident signal can be transmitted and reflected by the STAR-RIS simultaneously, and thus a full-space smart radio environment is created. Owing to these apparent advantages, STAR-RIS aided networks have attracted widespread attention <cit.>. In <cit.>, the coverage probability of a STAR-RIS assisted massive multiple-input multiple-output (MIMO) system was analyzed with Rayleigh fading and phase-shift errors. In <cit.>, the transmit power for a STAR-RIS aided multiple-input single-output system was minimized by joint active and passive beamforming design.
Non-orthogonal multiple access (NOMA) is another promising technology for future wireless communications.
It has been proved that NOMA yields a significant gain over conventional orthogonal multiple access techniques in terms of spectral efficiency and user fairness <cit.>.
Significantly, combining NOMA and STAR-RIS is a meaningful research topic. On the one hand, the STAR-RIS is able to reconfigure channels smartly and introduce additional degrees-of-freedom (DoFs) for system design, thus enhancing the NOMA gain.
On the other hand, due to the simultaneous transmission and reflection characteristics of STAR-RIS, users are naturally divided into transmission users and reflection users, which provides a basis for user pairing in NOMA.
Inspired by this, there emerge some works which combine the two techniques <cit.>. In <cit.>, the effective capacity of a STAR-RIS assisted NOMA network with two users on different sides of the STAR-RIS was studied. In <cit.>, the authors investigated the secrecy performance of the STAR-RIS assisted NOMA networks. A sum rate maximization problem for STAR-RIS-NOMA systems was investigated in <cit.>, where the decoding order, power allocation, active beamforming, and passive beamforming were jointly optimized.
Although some excellent works on STAR-RIS assisted NOMA networks have been conducted, most of them considered pure NOMA, which faces the following bottleneck. With a large number of users, it is necessary to perform user clustering to ensure the effectiveness of NOMA. In this case, users served by pure NOMA suffer from not only intra-cluster interference but also inter-cluster interference, which has a severely negative effect on QoS and increases the complexity of network design.
Hybrid NOMA is one of the effective solutions to tackle this issue, but it has not yet received widespread attention <cit.>. In <cit.>, the authors studied hybrid NOMA and time division multiple access (TDMA) for the uplink transmission in reflecting-only RIS assisted wireless powered communication networks. Similarly, a hybrid TDMA-NOMA scheme was developed to balance the performance and signalling overhead for a RIS aided Internet-of-Things (IoT) system in <cit.>. However, due to the functional limitation of the reflecting-only RIS, only users located on one side of the RIS were considered in <cit.>.
Furthermore, few related works investigate hybrid NOMA for STAR-RIS aided networks <cit.>. A frequency division multiple access (FDMA)-NOMA mixed framework was applied in <cit.> to eliminate inter-cluster interference. However, it is worth noting that the passive beamforming at the STAR-RIS is time-selective, but not frequency-selective. As a result, the advantages of the STAR-RIS cannot be fully utilized.
Motivated by the above observation, a novel hybrid NOMA framework for STAR-RIS assisted cell-edge networks is proposed in this paper, which reduces the complexity of network design while fully leveraging the advantages of STAR-RISs. The main contributions are summarized as follows: 1) We propose a novel hybrid NOMA-TDMA framework for STAR-RIS assisted cell-edge networks, where one transmitted user and one reflected user are paired and served via NOMA, while multiple NOMA-pairs are served by TDMA; 2) We formulate a max-min rate problem and solve it by a novel two-layer iterative algorithm, where user pairing, passive beamforming, power and time allocation are jointly optimized; 3) Numerical results show the superiority of the proposed network framework and the two-layer algorithm. Moreover, increasing the number of STAR-RIS elements and shortening the distance from STAR-RIS to BS are confirmed to be two effective ways to improve network performance.
Notation: Scalars, vectors and matrices are denoted by italic letters, bold-face lower-case, and bold-face upper-case, respectively. For a complex-valued vector 𝐚, 𝐚^T means its transpose, 𝐚^H means its conjugate transpose, and diag(𝐚) denotes a diagonal matrix with the elements of vector 𝐚 on the main diagonal. Besides, ‖·‖ denotes a vector's Euclidean norm, and arg(·) denotes a complex number's argument.
§ SYSTEM MODEL AND PROBLEM FORMULATION
§.§ System Model
As shown in Fig. <ref>, we consider a cell-edge area, where multiple users are randomly distributed, and a STAR-RIS is deployed in the center to enhance the communications. For the convenience of explanation, a 3D Cartesian coordinate system is established. Let 𝐪_S = [x_S,y_S,z_S], 𝐪_B = [x_B,y_B,z_B] and 𝐪_k = [x_k,y_k,0] denote the 3D positions of the first element of the STAR-RIS, the base station (BS) and the cell-edge users, respectively. The BS and users are all equipped with a single antenna. The STAR-RIS consists of a uniform planar array (UPA) with M = M_yM_z passive transmitting and reflecting elements, where M_y and M_z denote the number of elements along the y- and z-axis, respectively. The STAR-RIS adopts the energy-splitting protocol, i.e., each element of the STAR-RIS splits the incident signal into transmitted and reflected signals to serve users at both sides of the surface simultaneously [There are three practical protocols for operating STAR-RISs: namely energy splitting (ES), mode switching (MS), and time switching (TS). Based on the insights obtained in <cit.>, the ES protocol is the best option among the three protocols for communications with high quality-of-service (QoS) requirements. Therefore, we adopt the ES protocol in this paper. Numerical simulation and comparison of the three protocols in STAR-RIS assisted wireless communications will be considered in our future study.]. As the whole region is divided into the right-half sub-region and the left-half sub-region by the STAR-RIS, users can be divided into transmitted users (TU) and reflected users (RU), whose sets are denoted as 𝒦_T and 𝒦_R respectively. We assume that users are evenly distributed between the two sub-regions, that is, |𝒦_T|= |𝒦_R| = K [ The proposed network framework and algorithm in this paper can be extended to cases where the number of TUs is not equal to that of RUs, with small modifications on the design of user pairing.].
We consider that one TU is paired with one RU, and then all users are grouped into K user-pairs. Let the binary variable c_k_T,k_R (c_k_R,k_T) ∈{ 0,1}, k_T∈ K_T, k_R∈ K_R, denote whether the TU k_T and the RU k_R are paired up, which satisfies ∑_k_R = 1^K c_k_T,k_R = 1. The two paired users receive data from the BS via non-orthogonal multiple access (NOMA), while the K user-pairs are served sequentially via TDMA. This multiple access scheme is referred to as hybrid NOMA in this paper. Given k ∈ K_T∪ K_R, let k̅ denote the index of the user which is paired with k, i.e., c_k,k̅ = 1. Then, the time allocation coefficient of the user-pair {k,k̅} is denoted by τ _{k,k̅}∈ [0,1], which is supposed to satisfy ∑_k ∈ K_T(k ∈ K_R)τ _{ k,k̅} = 1.
As user-pairs occupy orthogonal time resources in the hybrid NOMA framework, the passive beamforming vectors of the STAR-RIS can be designed specifically and independently for each user-pair, so as to reconstruct the channel between the BS and the currently served user-pair more purposefully. Let 𝐕_k = √(β_k)diag( e^jθ _k^1, e^jθ _k^2,...,e^jθ _k^M) denote the transmission or reflection coefficient matrix of the STAR-RIS for user k, where √(β _k)∈ [0,1] denotes the amplitude and θ _k^m denotes the phase shift of the m-th STAR-RIS element. We assume that no energy is dissipated by the STAR-RIS, that is, β _k + β _k̅ = 1.
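To make the energy-splitting model concrete, the following minimal sketch builds the transmission and reflection coefficient matrices 𝐕_k for the pair served in one time slot; the function and variable names are our own and are not taken from the paper.

```python
import numpy as np

def star_ris_coefficients(theta_t, theta_r, beta_t):
    """Build the ES-protocol transmission/reflection coefficient matrices.

    theta_t, theta_r : (M,) arrays of phase shifts for the transmitted and
                       reflected user of the currently served pair.
    beta_t           : energy-splitting ratio in [0, 1]; the reflection side
                       receives 1 - beta_t since the surface is assumed lossless.
    """
    V_t = np.sqrt(beta_t) * np.diag(np.exp(1j * theta_t))
    V_r = np.sqrt(1.0 - beta_t) * np.diag(np.exp(1j * theta_r))
    return V_t, V_r

# Toy usage with M = 4 elements, random phases, and an equal energy split.
M = 4
V_t, V_r = star_ris_coefficients(np.random.uniform(0, 2 * np.pi, M),
                                 np.random.uniform(0, 2 * np.pi, M), 0.5)
```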
§.§ Channel Model
There are two kinds of downlinks for each user, direct link and STAR-RIS assisted link. Let h_B,k∈ℂ^1 × 1, 𝐡_B,S∈ℂ^M × 1 and 𝐡_S,k∈ℂ^M × 1 denote the channel vectors from the BS to user, from the BS to the STAR-RIS, and from the STAR-RIS to user respectively. Considering the blocked line-of-sight (LoS) link and potential extensive scattering from the BS to users, the propagation h_B,k∈ℂ^1 × 1 is modeled as Rayleigh fading, while the transmission links from the BS to the STAR-RIS and from the STAR-RIS to users are modeled as LoS-dominated channels.
Specifically, the channel coefficient of the direct link is expressed as
h_B,k = √(δ _0/d_B,k^α _1) h̃_B,k,
where δ _0 is the channel power at the reference distance of 1 m, α_1 is the path loss exponent, d_B,k = ‖𝐪_B - 𝐪_k‖ is the distance between the BS and the user k, and h̃_B,k is a complex Gaussian random variable with zero mean and unit variance, i.e., h̃_B,k∼𝒞𝒩(0,1).
The channel vectors from the BS to the STAR-RIS and from the STAR-RIS to users are given by
𝐡_B,S = √(δ_0/d_B,S^α _2)𝐚_AoA,
𝐡_S,k = √(δ_0/d_S,k^α _3)𝐚_AoD,
where d_B,S = ‖𝐪_B - 𝐪_S‖ and d_S,k = ‖𝐪_S - 𝐪_k‖. 𝐚_AoA is the receive array response vector, given as 𝐚_AoA = e^ - j2πd_B,S/λ[ 1, e^ - j2π(d_y/λ)sinφ _rcosη _r, ..., e^ - j2(M_y - 1)π(d_y/λ)sinφ _rcosη _r]^T⊗[ 1, e^ - j2π(d_z/λ)sinφ _rsinη _r, ..., e^ - j2( M_z - 1)π(d_z/λ)sinφ _rsinη _r]^T, where φ _r and η _r are the zenith angle of arrival (AoA) and the azimuth AoA of the signal from the BS to the STAR-RIS, respectively. 𝐚_AoD is the transmit array response vector, given as 𝐚_AoD = e^ - j2πd_S,k/λ[ 1, e^ - j2π(d_y/λ)sinφ _tcosη _t, ..., e^ - j2( M_y - 1)π(d_y/λ)sinφ _tcosη _t]^T⊗[ 1, e^ - j2π(d_z/λ)sinφ _tsinη _t, ..., e^ - j2( M_z - 1)π(d_z/λ)sinφ _tsinη _t]^T, where φ _t and
η _t are the zenith angle of departure (AoD) and the azimuth AoD of the signal from the STAR-RIS to the user k, respectively. d_y and d_z are the STAR-RIS element spacings along the y- and z-axis respectively, λ denotes the signal wavelength, and ⊗ denotes the Kronecker product.
For any user k, the downlink equivalent-combined channel gain is given as
h_k = h_B,k + 𝐡_S,k^H𝐕_k𝐡_B,S.
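As an illustration of the channel model above, the sketch below constructs a UPA array response via the Kronecker product and assembles the equivalent-combined channel; the helper names are ours and the geometry inputs (angles, spacings, wavelength) are assumed to be precomputed.

```python
import numpy as np

def upa_response(M_y, M_z, d_y, d_z, lam, zen, azi):
    """UPA array response: Kronecker product of y- and z-axis steering vectors.
    zen/azi are the zenith/azimuth angles (rad); lam is the carrier wavelength."""
    a_y = np.exp(-1j * 2 * np.pi * d_y / lam * np.sin(zen) * np.cos(azi) * np.arange(M_y))
    a_z = np.exp(-1j * 2 * np.pi * d_z / lam * np.sin(zen) * np.sin(azi) * np.arange(M_z))
    return np.kron(a_y, a_z)

def combined_channel(h_bk, h_sk, h_bs, V_k):
    """Equivalent-combined channel h_k = h_{B,k} + h_{S,k}^H V_k h_{B,S}."""
    return h_bk + h_sk.conj() @ V_k @ h_bs
```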
§.§ Hybrid NOMA signal model
As mentioned earlier, the two paired users are served simultaneously via NOMA, while K user-pairs are served sequentially via TDMA. Let π(k) ∈{0,1} denote the decoding order of user k. In downlink NOMA, to guarantee that successive interference cancelation (SIC) performs successfully, an optimal decoding order is to decode in the order of increasing equivalent-combined channel gain, regardless of the user power allocation, that is,
π(k) = 0, if |h_k| ≤ |h_k̅|; π(k) = 1, if |h_k| > |h_k̅|.
Under the proposed hybrid NOMA framework, the signal-to-interference-plus-noise ratio (SINR) at user k is expressed as
Γ _k = | h_k|^2ρ _kP/π( k̅)| h_k|^2ρ_k̅P + n,
where P is the total transmit power of the BS. ρ _k,ρ _k̅∈ [0,1] are the power allocation coefficients, which satisfy ρ _k + ρ _k̅ = 1. n denotes the receiver noise power, i.e., the additive noise is circularly symmetric complex Gaussian distributed as 𝒞𝒩( 0,n) with n = σ ^2.
Then, the downlink communication rate of the user is given by
R_k = τ _{k,k̅}log _2( 1 + Γ _k).
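For clarity, the following sketch evaluates the SINR and the hybrid-NOMA rate of one user-pair exactly as defined above; the decoding order is fixed by the combined channel gains, and the function is an illustrative helper rather than the paper's implementation.

```python
import numpy as np

def pair_rates(h_k, h_kbar, rho_k, tau, P, noise_power):
    """Rates of one NOMA pair under the hybrid NOMA-TDMA framework.

    The user with the smaller combined channel gain is decoded first, so it
    sees interference from the other user's signal; the stronger user removes
    that signal via SIC before decoding its own.
    """
    g_k, g_kbar = abs(h_k) ** 2, abs(h_kbar) ** 2
    rho_kbar = 1.0 - rho_k
    pi_k = 1 if g_k > g_kbar else 0        # decoding order pi(k)
    pi_kbar = 1 - pi_k                     # decoding order pi(k_bar)
    sinr_k = g_k * rho_k * P / (pi_kbar * g_k * rho_kbar * P + noise_power)
    sinr_kbar = g_kbar * rho_kbar * P / (pi_k * g_kbar * rho_k * P + noise_power)
    return tau * np.log2(1 + sinr_k), tau * np.log2(1 + sinr_kbar)
```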
§.§ Problem Formulation
Considering the communication fairness in cell-edge areas, the minimum transmission rate is taken as the performance metric of the proposed STAR-RIS assisted networks. The object is to maximize the minimum transmission rate among multiple users, by jointly optimizing the user pairing, the decoding order, the passive beamforming design, the user-pair time allocation and the user power allocation. The problem is formulated as follows,
(P1): Max_𝐕_k,c,π_k,τ _{ k,k̅},ρ _k Min_k ∈ K_T∪ K_R R_k
s.t. √(β _k)∈ [0,1], β _k +β _k̅ = 1,
θ _k^m ∈ [0,2π ],
c_k_T,k_R∈{0,1}, ∑_k_R = 1^K c_k_T,k_R = 1,
π( k ) = 0 if |h_k| ≤ |h_k̅|, and π( k ) = 1 if |h_k| > |h_k̅|,
τ _{k,k̅}∈ [0,1], ∑_k ∈ K_T(k ∈ K_R)τ _{ k,k̅} = 1,
ρ _k∈ [0,1], ρ _k + ρ _k̅ = 1.
(<ref>) and (<ref>) define the feasible ranges of STAR-RIS amplitude coefficients and phase shift respectively. Constraint (<ref>) illustrates the one-to-one mapping relationship between TUs and RUs. Constraint (<ref>) guarantees the SIC performs successfully. Constraint (<ref>) means the sum of the time allocation coefficients for user-pairs is 1 and constraint (<ref>) represents that the sum of the power allocation coefficients for two paired users should also be 1.
§ PROBLEM SOLUTION AND PROPOSED ALGORITHM
Evidently, the proposed optimization problem is nonconvex and highly coupled. Moreover, it is worth noting that the decoding order, beamforming vectors, power and time allocation can only be solved when the user pairing is given, due to the constraints (<ref>), (<ref>), (<ref>) and (<ref>). However, when the other variables are given, it is difficult to further optimize the user pairing through mathematical derivation. To tackle this issue, a novel iterative algorithm with a two-layer loop-nesting structure is proposed. Specifically, the outer layer is designed to determine the user pairing by a one-to-one swapping matching based approach. With given user pairing, the decoding order, passive beamforming, user-pair time allocation and user power allocation are solved by an alternating optimization (AO) based algorithm in the inner layer.
§.§ Outer layer: one-to-one matching based user-pairing
In this subsection, we describe the user pairing as a matching game between TUs and RUs, which is solved by a matching-theory based algorithm. First of all, some basic concepts are introduced as follows.
(One-to-one Two-sided Matching): A one-to-one matching Φ is defined as a function from K_T∪ K_R to K_T∪ K_R such that
* Φ (k_T) ∈ K_R,Φ (k_R) ∈ K_T,
* |Φ (k_T)|=|Φ (k_R)|=1
* k_T = Φ (k_R) ⇔k_R = Φ (k_T)
(Utility Function): Given a matching state Φ, the utility function W(Φ ) is defined as the possible maximum value of the minimum user rate, which is obtained in the subsequent inner layer.
W(Φ ) = Max_𝐕_k,π_k,τ _{ k,k̅},ρ _k Min_k ∈ K_T∪ K_R R_k.
(Swapping Matching):
For a matching state Φ with Φ (k_T) = k_R and Φ (k̃_T) = k̃_R, a swap matching is
Φ _k_T^k̃_T = {Φ\{ (k_T,k_R),(k̃_T,k̃_R) }∪{(k_T,k̃_R),(k̃_T,k_R) }}.
(Swap-blocking pair): Given a matching state Φ with Φ(k_T) = k_R and Φ(k̃_T) = k̃_R, (k_T, k̃_T) is a swapping-blocking pair if and only if W(Φ _k_T^k̃_T) > W(Φ).
Definition <ref> indicates the one-to-one mapping relationship between TUs and RUs. Note that, different from general matching algorithms, the utility function in Definition <ref> is a global function, which makes the obtained matching state closer to the optimal solution rather than just a stable matching. Definition <ref> enables two TUs to exchange their paired RUs. Definition <ref> implies that the utility function increases after swapping a swap-blocking pair. Based on the above definitions, the one-to-one swap-matching based user pairing in the outer layer is briefly described as follows: starting from a given initial matching state, the algorithm keeps searching for two TUs that form a swap-blocking pair and executes the corresponding swap matching until no swap-blocking pair exists.
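A minimal sketch of this outer-layer procedure is given below, where `utility` is assumed to invoke the inner-layer AO algorithm and return W(Φ); the function and variable names are ours.

```python
import itertools

def swap_matching(match, utility):
    """Outer-layer one-to-one swap matching (sketch).

    match   : dict mapping each TU index to its paired RU index.
    utility : callable returning W(Phi), i.e. the inner-layer max-min rate
              for a given pairing.
    Keeps swapping the paired RUs of two TUs whenever that strictly increases
    the global utility, until no swap-blocking pair remains.
    """
    best = utility(match)
    improved = True
    while improved:
        improved = False
        for t1, t2 in itertools.combinations(sorted(match), 2):
            cand = dict(match)
            cand[t1], cand[t2] = match[t2], match[t1]   # swap the two RUs
            w = utility(cand)
            if w > best:                                 # swap-blocking pair found
                match, best = cand, w
                improved = True
                break
    return match, best
```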
§.§ Inner layer: AO based decoding order, passive beamforming, power and time allocation
In this subsection, we aim to obtain the utility function defined in the outer layer. With given user pairing, the problem for jointly optimizing the decoding order, the beamforming, the power and time allocation is expressed as
(P2): Max_S,V_k,π _k,τ _{ k,k̅},ρ _k S
s.t. S ≤R_k,k ∈𝒦_T∪𝒦_R, (<ref>),(<ref>),(<ref>)∼(<ref>),
where S is an introduced auxiliary variable which represents the minimum rate among users. To solve the still highly coupled nonconvex problem (P2), an AO based iterative algorithm is proposed, where each variable is solved while the others are fixed [With sufficient iterations, the order in which the variables are solved has little effect on the final results.].
First of all, as considered in previous works <cit.>, the decoding order can be determined by the equivalent-combined channel gain when the other variables are given. Furthermore,
the SIC constraint can be removed under the optimal decoding order π^∗( k ) = 0 if |h_k| ≤ |h_k̅|, and π^∗( k ) = 1 if |h_k| > |h_k̅|. This operation does not affect the solution of (P2), since the optimal decoding order is updated iteratively while (P2) is solved without (<ref>).
Then, we focus on the passive beamforming design at the STAR-RIS. It is known that the beamforming is expected to maximize the equivalent-combined channel gain. Based on the triangle inequality, there exists
| h_k| = | h_B,k + 𝐡_S,k^H𝐕_k𝐡_B,S|≤^(a)| h_B,k| + | 𝐡_S,k^H𝐕_k𝐡_B,S|, where (a) holds with equality if and only if arg(h_B,k) = arg(𝐡_S,k^H𝐕_k𝐡_B,S) = φ _k^0. This indicates that the optimal beamforming should align the signal from the direct link with that from the STAR-RIS assisted link. Let Θ _k = [ e^jθ _k^1,e^jθ _k^2,...,e^jθ _k^M]^H and
𝐞_k = diag(𝐡_S,k^H)𝐡_B,S; it is then easy to see that 𝐡_S,k^H𝐕_k𝐡_B,S = √(β _k)Θ _k^H𝐞_k. As a result, the optimal phase shift at the STAR-RIS is given as
θ _k^m = mod( arg(h_B,k) - arg(𝐡_S,k^H[m]) - arg(𝐡_B,S[m]), 2π).
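The phase-alignment rule above can be written compactly as follows; the helper is illustrative and assumes the channels of the currently served user are known.

```python
import numpy as np

def aligned_phase_shifts(h_bk, h_sk, h_bs):
    """Phase shifts that co-phase the cascaded link with the direct link.

    Implements theta_k^m = mod(arg(h_Bk) - arg(h_Sk^H[m]) - arg(h_BS[m]), 2*pi),
    so every transmitted/reflected path adds constructively with the direct path.
    h_sk, h_bs : (M,) STAR-RIS->user and BS->STAR-RIS channel vectors.
    """
    theta = np.angle(h_bk) - np.angle(h_sk.conj()) - np.angle(h_bs)
    return np.mod(theta, 2 * np.pi)
```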
As for the beamforming amplitude β _k, to tackle the nonconvex constraint S ≤R_k, a successive convex approximation (SCA) based approach is applied. Since the signals from two links are aligned, the channel gain can be rewritten as
| h_k|^2 = A_k+B_k√(β _k) +C_kβ _k,
where A_k=| h_B,k|^2, B_k=2| h_B,k|| 𝐡_S,k^Hdiag(Θ _k)𝐡_B,S| and C_k=| 𝐡_S,k^Hdiag(Θ _k)𝐡_B,S|^2. With L_k,1 = Pρ _k and L_k,2 = π (k̅)Pρ _k̅, the constraint S ≤R_k is transformed as
S+τ _{ k,k̅}log _2( L_k,2( A_k + B_k√(β _k) + C_kβ _k) + n)≤
τ _{ k,k̅}log _2( ( L_k,1 + L_k,2)( A_k + B_k√(β _k) + C_kβ _k) + n).
As log _2( √(x) + x) is a concave function with respect to x, we take the first-order Taylor expansion at the given feasible point β _k^ε to obtain an upper bound of log _2( L_k,2( A_k + B_k√(β _k) + C_kβ _k) + n):
Ξ _k,1^UB = log _2( L_k,2( A_k + B_k√(β _k^ε) + C_kβ _k^ε) + n) + L_k,2( B_k/(2√(β _k^ε)) + C_k)/( ln 2 ·( n + L_k,2( A_k + B_k√(β _k^ε) + C_kβ _k^ε)))·(β _k - β _k^ε).
To make (<ref>) more tractable, auxiliary variables ξ _k≤ A_k + B_k√(β _k) + C_kβ _k are introduced and the subproblem to optimize the beamforming amplitude is converted into
(P2-1): Max_β _k,ξ _k S,
s.t. S +τ _{ k,k̅}Ξ _k,1^UB≤τ _{ k,k̅}log _2( ( L_k,1 +L_k,2)ξ _k+ n),
ξ _k≤ A_k + B_k√(β _k) + C_kβ _k,
(<ref>),
which can be solved by CVX.
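The paper states only that (P2-1) is solved by CVX. As one possible realization, the hedged sketch below implements a single SCA step for one user-pair with cvxpy (the open-source counterpart of CVX), where A, B, C, L1, L2 collect the per-user constants defined above and `beta_prev` holds the strictly positive feasible points β_k^ε; all names are our own.

```python
import cvxpy as cp
import numpy as np

def solve_p2_1_single_pair(A, B, C, L1, L2, tau, n, beta_prev):
    """One SCA step of the amplitude subproblem (P2-1) for a single NOMA pair.

    A, B, C, L1, L2 : length-2 arrays indexed by the two users {0: k, 1: k_bar}.
    beta_prev       : feasible points beta^eps (must be > 0 for the gradient).
    """
    S = cp.Variable()
    beta = cp.Variable(2, nonneg=True)
    xi = cp.Variable(2)
    cons = [cp.sum(beta) == 1, beta <= 1]
    for u in range(2):
        # Taylor (upper-bound) linearisation of the interference term at beta_prev.
        g_prev = A[u] + B[u] * np.sqrt(beta_prev[u]) + C[u] * beta_prev[u]
        f0 = np.log2(L2[u] * g_prev + n)
        grad = L2[u] * (B[u] / (2 * np.sqrt(beta_prev[u])) + C[u]) / (
            np.log(2) * (n + L2[u] * g_prev))
        xi_ub = f0 + grad * (beta[u] - beta_prev[u])
        cons += [S + tau * xi_ub <= tau * cp.log((L1[u] + L2[u]) * xi[u] + n) / np.log(2),
                 xi[u] <= A[u] + B[u] * cp.sqrt(beta[u]) + C[u] * beta[u]]
    prob = cp.Problem(cp.Maximize(S), cons)
    prob.solve()
    return beta.value, S.value
```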
Next, given {π _k,𝐕_k,τ _{ k,k̅}}, S ≤R_k with regard to the user power allocation can be rewritten as
S +τ _{ k,k̅}log _2( L_k,3ρ _k̅ + n) ≤τ _{ k,k̅}log _2( L_k,4ρ _k+ L_k,3ρ _k̅+n),
where L_k,3 = π (k̅)P| h_k|^2 and L_k,4 = P| h_k|^2. Similar to (<ref>), the upper bound of log _2( L_k,3ρ _k̅ + n) can be derived by the first-order Taylor expansion at the given feasible points ρ _k^ε and ρ _k̅^ε:
Ξ _k,2^UB = log _2( L_k,3ρ _k̅^ε + n) + L_k,3/( ln 2 ·( L_k,3ρ _k̅^ε + n))·(ρ _k̅ - ρ _k̅^ε).
As a result, the subproblem to solve the user power allocation is expressed as
(P2-2): Max_ρ _k S,
s.t. S + τ _{ k,k̅}Ξ _k,2^UB≤τ _{ k,k̅}log _2( L_k,4ρ _k + L_k,3ρ _k̅ + n),
(<ref>).
Finally, the subproblem to solve the user-pair time allocation can be expressed as a linear programming problem:
(P2-3): Max_τ _{ k,k̅} S,
s.t. S ≤τ _{ k,k̅}log_2(1+| h_k|^2ρ _kP/π( k̅)| h_k|^2ρ _k̅P + n),
(<ref>).
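Because each constraint of (P2-3) involves a single τ_{k,k̅}, one can verify that at the optimum all pairs achieve the same rate, so the time shares are inversely proportional to the pairs' full-slot rates. The small sketch below (our own helper, not the paper's code) computes this closed-form solution of the LP.

```python
import numpy as np

def optimal_time_allocation(rates_per_unit_time):
    """Closed-form solution of the max-min time-allocation LP (P2-3).

    rates_per_unit_time : for each pair, the smaller of the two users' rates
                          when that pair occupies the entire time slot.
    Returns the time shares tau (proportional to 1/r) and the resulting
    common max-min rate 1 / sum(1/r).
    """
    r = np.asarray(rates_per_unit_time, dtype=float)
    inv = 1.0 / r
    tau = inv / inv.sum()
    return tau, 1.0 / inv.sum()
```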
§.§ Discussion on the Proposed Algorithm
In summary, the proposed iterative algorithm has a two-layer loop-nesting structure. The details are shown in Algorithm <ref>.
1) Convergence: For the outer layer, the objective value is non-decreasing after each iteration due to the definition of the swap-blocking pair. For the alternating iteration in the inner layer, the objective value is also non-decreasing, which is proven as follows.
Firstly, in steps 6, 7 and 10 of Algorithm <ref>, since the optimal solution can be obtained by (<ref>), (<ref>) and solving (P2-3), we have S(π _k^ε ,θ _k^ε ,β _k^ε ,ρ _k^ε ,τ _{ k,k̅}^ε) ≤ S(π _k^ε + 1,θ _k^ε + 1,β _k^ε ,ρ _k^ε ,τ _{ k,k̅}^ε + 1).
Secondly, define S_amp^lb,ε as the objective value obtained by solving (P2-1) at the ε-th iteration. For step 8, it follows
S(π _k^ε + 1,θ _k^ε + 1,β _k^ε ,ρ _k^ε,τ _{ k,k̅}^ε + 1) = ^(a) S_amp^lb,ε(π _k^ε + 1,θ _k^ε + 1,β _k^ε ,ξ _k^ε , ρ _k^ε ,τ _{ k,k̅}^ε + 1)
≤^(b) S_amp^lb,ε(π _k^ε + 1,θ _k^ε + 1,β _k^ε + 1,ξ _k^ε+1 ,ρ _k^ε,τ _{ k,k̅}^ε + 1)
≤^(c) S(π _k^ε + 1,θ _k^ε + 1,β _k^ε + 1,ρ _k^ε ,τ _{ k,k̅}^ε + 1),
where (a) holds since the first-order Taylor expansion in (13) is tight at the given local points; (b) holds because of the optimized β _k^ε + 1 and ξ _k^ε+1; (c) holds since the objective value of (P2-1) is the lower bound of that of the original problem (P2) at β _k^ε + 1. Similarly, for step 9, there is S(π _k^ε+ 1,θ _k^ε+ 1,β _k^ε+ 1,ρ _k^ε,τ _{ k,k̅}^ε+ 1) ≤ S(π _k^ε + 1,θ _k^ε+ 1,β _k^ε+ 1,ρ _k^ε+ 1,τ _{ k,k̅}^ε+ 1).
Based on the above analysis, it can be derived that S(π _k^ε ,θ _k^ε ,β _k^ε ,ρ _k^ε ,τ _{ k,k̅}^ε) ≤ S(π _k^ε + 1,θ _k^ε + 1,β _k^ε+1 ,ρ _k^ε+1 ,τ _{ k,k̅}^ε+ 1), and the proof is completed.
Since the achievable max-min rate is upper bounded by a finite value, the proposed algorithm is guaranteed to converge.
2) Complexity: The computational complexity of Algorithm <ref> mainly depends on the number of users. The maximum number of swap operations in the outer layer is K( K - 1)/2. The complexity for solving (P2) in the inner layer is
O( I_ite( 2( K)^4.5 + K^2(K+1)))
if the interior point method is employed, where I_ite denotes the iterations taken to converge. Then the overall computational complexity of Algorithm <ref> is
O( K( K - 1)I_ite( 2( K)^4.5 + K^2(K+1))/2), i.e.,O( I_iteK^6.5).
§ NUMERICAL RESULTS
In this section, numerical results are provided to demonstrate the effectiveness of the proposed framework and algorithm. We consider an area with the size of 1000m × 1000m, where the STAR-RIS is deployed at the center. The main simulation setups are shown in Table <ref>.
Fig. <ref> illustrates the convergence process of the proposed two-layer iterative algorithm. We can see that both inner-layer iteration and outer-layer iteration show a growing trend and achieve convergence within 10 iterations, which verifies the feasibility of the proposed algorithm. In addition, it is shown that the performance gain obtained by the outer-layer iteration is significantly smaller than that obtained from the inner-layer iteration, which reveals that the user pairing optimization in the outer layer has a modest impact on network performance. This issue will be further demonstrated in Fig. <ref>.
In Fig. <ref>, we evaluate the performance of the proposed hybrid NOMA based STAR-RIS assisted network. There are three benchmark frameworks: the TDMA based STAR-RIS assisted network, the hybrid NOMA based reflecting-only RIS assisted network, and the hybrid NOMA based network without RIS. It can be observed that, regardless of the framework, the max-min user rate decreases as the number of users increases. This is caused by the wireless resource competition among multiple users. Compared with the three benchmarks, we find that the combination of STAR-RIS and hybrid NOMA achieves the best performance. Specifically, the gain from hybrid NOMA is greater than that from the STAR-RIS, being 6.08 times and 2.96 times, respectively, when K = 8. The max-min user rate of the proposed framework is also significantly improved compared with that of the conventional reflecting-only RIS assisted network. This improvement comes precisely from the fact that the STAR-RIS can transmit and reflect signals simultaneously, allowing it to serve all users on both sides of the surface.
In Fig. <ref>, the performance of the proposed two-layer iterative algorithm is compared with following benchmark algorithms: 1) equal user-pair time allocation, τ _{ k,k̅} = 1/K; 2) equal user power allocation, ρ _k = ρ _k̅ = 0.5; 3) distance based user pairing, which means the matching state is determined by the distance from users to the STAR-RIS (the far TU is paired with the near RU); and 4) random phase shift at the STAR-RIS. As we can see, the max-min user rate of the proposed algorithm is the largest among all five algorithms, which verifies the superiority of the proposed algorithm. Furthermore, the gain from power optimization is the largest, followed by that from beamforming design, while the gains from other variables are relatively small. This result shows the critical importance of power allocation for NOMA and beamforming design for STAR-RIS.
Both Fig. <ref> and Fig. <ref> are obtained with M_y = M_z = 10 and 𝐪_S = [0,0,20] m, while Fig. <ref> further investigates the impact of the number of STAR-RIS elements and the location of the STAR-RIS on the network performance [ The location of the STAR-RIS is considered in a simple way, that is, the distance to the BS. More accurate STAR-RIS deployment will be studied in our future works.]. Based on the results shown in Fig. <ref>, two major conclusions can be drawn. On the one hand, increasing the number of elements can improve the enhancement effect of the STAR-RIS. On the other hand, as the max-min rate increases with the shortening of the distance from BS to STAR-RIS, the deployment of STAR-RIS is of great importance.
§ CONCLUSION
In this paper, we investigated a STAR-RIS enhanced cell-edge network, where a novel hybrid NOMA framework was proposed to take full advantage of the STAR-RIS. A max-min user rate problem was solved by jointly optimizing the user pairing, decoding order, passive beamforming, power and time allocation. Numerical results verified the significant superiority of the proposed framework in improving communication fairness. The importance of power allocation and passive beamforming design was also demonstrated. Moreover, it was confirmed that increasing the number of STAR-RIS elements and shortening the distance between the BS and the STAR-RIS contribute to the improvement of network performance.
Active Learning with Contrastive Pre-training for Facial Expression Recognition
Shuvendu Roy, Ali Etemad
Dept. ECE and Ingenuity Labs Research Institute
Queen's University, Kingston, Canada
{shuvendu.roy, ali.etemad}@queensu.ca
August 1, 2023
Deep learning has played a significant role in the success of facial expression recognition (FER), thanks to large models and vast amounts of labelled data. However, obtaining labelled data requires a tremendous amount of human effort, time, and financial resources. Even though some prior works have focused on reducing the need for large amounts of labelled data using different unsupervised methods, another promising approach called active learning is barely explored in the context of FER. This approach involves selecting and labelling the most representative samples from an unlabelled set to make the best use of a limited `labelling budget'. In this paper, we implement and study 8 recent active learning methods on three public FER datasets, FER13, RAF-DB, and KDEF. Our findings show that existing active learning methods do not perform well in the context of FER, likely suffering from a phenomenon called `Cold Start', which occurs when the initial set of labelled samples is not well representative of the entire dataset. To address this issue, we propose contrastive self-supervised pre-training, which first learns the underlying representations based on the entire unlabelled dataset. We then follow this with the active learning methods and observe that our 2-step approach shows up to 9.2% improvement over random sampling and up to 6.7% improvement over the best existing active learning baseline without the pre-training. We will make the code for this study public upon publication at: https://github.com/ShuvenduRoy/ActiveFER.
Facial Expression Recognition, Semi-supervised Learning, Contrastive Learning
§ INTRODUCTION
Facial expression recognition (FER) has seen growing interest in the deep learning community <cit.> mainly due to its practical applications ranging from smart devices and medical care assistants to smart vehicles. However, the size of the labelled FER datasets is generally one of the concerns prohibiting further progress in the area. Many of the recently developed deep learning models, such as Transformer <cit.>, inherently require very large amounts of data.
As a result, many recent works on FER have focused on developing methods that learn better representations from small amounts of labelled data <cit.>.
Although labelled datasets are hard to collect and annotate, unlabelled images are widely available on the Internet. Given a pre-defined labelling budget, the annotation process involves resolving the choice of which samples of the unlabelled dataset to annotate <cit.>.
Recently, active learning has emerged as a viable solution for identifying key samples from the unlabelled set <cit.>. The basic idea of active learning is to start the training process with a few randomly selected samples and their corresponding labels. As the training progresses, a selection criterion is used to find more samples from the unlabelled set that are the best candidates for annotation. The model is then trained with this new labelled set along with the previously available labelled set. This cycle continues as long as the labelling budget is not exhausted.
A variety of new active learning methods have been proposed in recent years in different areas <cit.>. Although a few works have explored active learning specifically for FER <cit.>, more recent approaches in active learning <cit.> have not yet been studied in this context. Moreover, to our knowledge, a comprehensive study to benchmark the performance of different active learning methods for FER under the same training protocol has not been conducted.
Another well-known fact about active learning training with a small labelling budget is the `cold start' problem. The cold start problem occurs when the initial labelled set is either too small or not a good representative for the entire dataset. In such scenarios, the model fails to learn effective representations from the initial labelled set, and thus informative samples are not selected in later cycles of the active learning process. This may result in a final accuracy that is even worse than not using active sampling altogether.
In this work, we address the two problems mentioned above. First, we present a comprehensive study of different active learning methods for FER. To this end, we compare 8 active learning methods, namely: Entropy <cit.>, Margin <cit.>, Least Confidence <cit.>, BADGE <cit.>, GLISTER <cit.>, Coreset <cit.>, BALD <cit.>, and Adversarial Deepfool <cit.>. We conduct our study on three FER datasets (FER13, RAF-DB, and KDEF) and show that surprisingly, simpler methods like Least Confidence and Margin obtain better results than recently proposed methods like Coresets and GLISTER.
On average, the Least Confidence method shows the best performance across all three datasets. Additionally, we find that active learning on FER does indeed suffer from the cold start problem. To address this issue, we propose a simple yet effective solution: self-supervised pre-training using the unlabelled data. We select the best-performing active learning method, Least Confidence, and show that by adding a self-supervised pre-training step, the cold start problem is reduced. Specifically, for self-supervised pre-training, we explore BYOL, MOCO, Barlow Twins, SwAV, and SimCLR and observe that while all of them are effective in reducing the negative impact of the cold start issue, SimCLR is the most effective.
The self-supervised pre-training step helps the method select more representative samples at the first cycle and effectively enables better learning at later cycles.
Overall, our proposed solution shows up to 9% improvement over random sampling and up to 6% improvements over the scenario where the cold start issue is not addressed.
Further ablation studies confirm that the improvements in performance are not simply due to a better encoder (pre-trained), but rather because of the fact that better samples are selected from the unlabelled set due to the added pre-training step.
Our contributions in this work are summarized as follows:
* We study active learning in the context of FER by exploring eight different active learning methods on three FER datasets.
* We propose a new solution to reduce the cold start problem in active learning for FER, and show substantial overall improvements in performance.
* To contribute to the field of active learning in the context of affective computing and to enable reproducibility, we release the code for this work at: https://github.com/ShuvenduRoy/ActiveFER.
§ RELATED WORK
In this section, we discuss the related literature in two areas relevant to this work: (a) active learning and (b) self-supervised learning.
§.§ Active Learning
The objective of active learning is to utilize a selection criterion for selecting the most representative samples from an unlabelled set for annotation.
Although active learning is not a new concept, the rise of deep learning has resulted in a surge in active learning methods since deep learning methods require large datasets to train, which are not always available for many domains.
Some selection methods in earlier forms of active learning utilized the concept of uncertainty in the model's prediction as an indicator for selecting new samples for labelling. For example, in <cit.>, the entropy in the model's prediction on an unlabelled sample was used as the selection criterion. Two other variants of uncertainty-based sampling techniques were proposed in <cit.>. The first criterion (Margin) uses the difference between the top two predictions as the indicator for selection. A confident prediction will have a large difference between the highest and second-highest predictions over the number of classes. The second criterion (Least Confidence) simply takes the maximum over the class probability as the indicator. In this criterion, a less confident prediction will have lower confidence for the predicted class.
More recent methods focus more on the learning progress of the model and utilize more specific signals as the selection criteria. For instance, <cit.> inspects the loss gradient and selects a set of samples with diverse loss gradients. GLISTER <cit.> is another method focusing on diverse sampling over the entire dataset using bi-level optimization. The concept of Coresets <cit.> has also been utilized as a selection criterion. BALD <cit.> uses Bayesian deep learning to maximize the information between the prediction and model posterior as an indicator for sampling. The concept of adversarial attacks has also been utilized to estimate the decision boundary of classes and select samples that are close to the boundaries <cit.>.
§.§ Self-supervised Learning
Self-supervised learning (SSL) is one of the most popular unsupervised representation learning methods that has shown remarkable progress in various areas.
SSL can learn important representations of the data without any supervision. Most of the earlier SSL methods utilized the concept of pre-text tasks, where an auxiliary task was defined on the unlabelled data. One example of such a pre-text task is rotation prediction, where an input unlabelled image is rotated at a certain degree and the model is tasked with predicting the angle of rotation. A more recent form of SSL utilizes the concept of contrastive learning.
Contrastive learning in computer vision was popularized by SimCLR, which utilizes a contrastive loss on two augmentations of an unlabelled image to maximize their agreement in the embedding space. SimCLR also utilizes the concept of projection heads, hard augmentations, and carefully designed training protocols to perform effectively in many scenarios.
Many variants of SimCLR have been since proposed. For instance, MoCo <cit.> utilizes a momentum encoder to encode one augmented image, while the other image is encoded by an online encoder. Later BYOL <cit.> proposed a slightly different objective function with respect to MoCO, which predicts the embedding of one augmented image from the other. SwAV <cit.> also has a similar idea with the distinction of predicting a prototype rather than the actual embedding. Barlow Twins <cit.> proposed a different loss calculated on the cross-correlation between the predicted embedding of the two images. This method explicitly avoided the mode collapse problem of self-supervised learning while achieving strong performance on downstream tasks.
§ METHOD
§.§ Preliminaries
Let X_U=(x_i)_i=1^N be a set of unlabelled samples, where N is the total number of samples in the unlabelled set, and n be the total labelling budget, where n ≪ N. An active learning method first randomly samples a small subset of s samples from X_U and annotates them to form X_L_S=(x_i,y_i)_i=1^s. The model is trained with X_L_S for certain epochs. These two steps are together called a cycle. Over the next (c-1) cycles, the active learning method samples (n-s)/(c-1) samples per cycle using a selection criterion, and trains the model with the current and previously sampled data. In this study, we explore the following active learning methods.
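The following sketch summarizes this pool-based training protocol; `annotate`, `train`, and `score` are assumed callables (the labelling oracle, the training routine, and a selection criterion where a higher score means more informative), and all names are ours rather than the paper's.

```python
import numpy as np

def active_learning_loop(model, X_pool, budget_n, init_s, cycles_c,
                         annotate, train, score):
    """Pool-based active learning loop with c cycles and labelling budget n."""
    rng = np.random.default_rng(0)
    pool = np.arange(len(X_pool))
    labelled = rng.choice(pool, size=init_s, replace=False)   # random initial set
    pool = np.setdiff1d(pool, labelled)
    y = annotate(labelled)
    per_cycle = (budget_n - init_s) // (cycles_c - 1)
    for cycle in range(cycles_c):
        train(model, X_pool[labelled], y)
        if cycle == cycles_c - 1:
            break                                              # budget exhausted
        s = score(model, X_pool[pool])                         # selection criterion
        picked = pool[np.argsort(-s)[:per_cycle]]              # most informative
        y = np.concatenate([y, annotate(picked)])
        labelled = np.concatenate([labelled, picked])
        pool = np.setdiff1d(pool, picked)
    return model, labelled
```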
Entropy.
In Entropy, the uncertainty in the prediction of the model is used as the selection criterion <cit.>. Let p(x)=softmax(M_j(x)) be the prediction of the model, where M_j is the trained model at cycle j. The entropy selection criterion (H(x)) is represented as:
H(x) = - ∑_ip(x)_i log(p(x)_i),
where i is the index over the vector dimension of the model's predictions. Here, the method chooses the sample with the highest entropy.
Margin.
This method considers the difference between the highest two predictions as the selection criterion (F(x)) <cit.>, which is defined as:
F(x)=p(x)_m_1-p(x)_m_2.
Here, m_1 and m_2 are the largest and second-largest predictions, and the active learning method selects the sample with the lowest margin.
Least Confidence.
Least Confidence is another simple approach where the prediction confidence is used as the selection criterion <cit.>. This criterion (C(x)) is defined as:
C(x)=max_i p(x)_i,
Here, the active learning method selects the minimum C(x) over all the unlabeled samples.
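The three uncertainty criteria above reduce to a few lines each, as sketched below for a matrix `p` of softmax outputs over the unlabelled pool; entropy selects the largest score, while margin and least confidence select the smallest (equivalently, their negated scores can be fed to a "higher is more informative" selection loop such as the one sketched earlier).

```python
import numpy as np

def entropy_score(p):
    """H(x): higher entropy means more uncertainty, so select the largest."""
    return -np.sum(p * np.log(p + 1e-12), axis=1)

def margin_score(p):
    """F(x): gap between the two largest class probabilities; select the smallest."""
    top2 = np.sort(p, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def least_confidence_score(p):
    """C(x): maximum class probability; select the smallest."""
    return p.max(axis=1)
```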
BADGE.
This approach is one of the more recent active learning selection methods that take the model's learning progress into consideration <cit.>. It first computes the gradient of the last layer with some pre-defined loss function. Then, a K-Means++ algorithm is utilized to find the desired number of centers with diverse loss gradients.
GLISTER.
In this recent method, the active learning solution aims to select samples that are representative of the entire domain <cit.>. For this, GLISTER utilizes a bi-level optimization problem where the inner optimization learns model parameters, and the outer optimization selects a set of unlabeled samples.
Coreset.
In this approach, the aim of the method is to find the samples that represent or capture the structure of the entire unlabelled dataset <cit.>. Here, a k-center algorithm is utilized to solve a pre-defined objective function to find the coresets.
BALD. In this method, the concept of Bayesian deep learning is utilized for active learning <cit.>. BALD uses the concept of information maximization between the prediction and model posterior to select a pool of samples.
Adversarial DeepFool. This active learning method operates by selecting points that are closer to the decision boundaries <cit.>. Since calculating the actual decision boundary in the embedding space is difficult, this method proposes to use the concept of adversarial attacks to estimate it. For each sample, the number of adversarial perturbations required to flip the prediction is considered as the indication of how close it is to the decision boundary.
§.§ Proposed Solution for the Cold Start Problem
In this section, we present a simple solution to address the cold start problem in active learning for improved performance in the context of FER. The proposed solution involves a two-step training protocol. In the first step, we pre-train the model with the entire unlabeled set X_U in a self-supervised setting to learn the underlying representation of the data. In the second step, we follow conventional active learning training, where the learned representation from the first step helps the model to learn a discriminative representation at the first cycle from the small labelled subset X_LS, without overfitting.
This approach enables a better selection of representative samples in later cycles of the active learning training process, effectively reducing the cold start problem. We explore some recently proposed self-supervised methods, including SimCLR <cit.>, MoCo v2 <cit.>, BYOL <cit.>, SwAV <cit.>, and Barlow Twins <cit.>, for the first step.
For the second step, we use the Least Confidence as the active learning component. Below, we provide a brief overview of each of the self-supervised methods that we explore to evaluate our proposed solution.
SimCLR <cit.> is a popular contrastive self-supervised technique responsible for popularizing contrastive learning in the field of computer vision. The basic idea behind this method is to learn from positive and negative samples, where positive samples are variations or transformations of an input sample, typically generated through augmentations. All other samples are considered negative with respect to the input sample. By bringing together positive samples and moving them away from negatives in the embedding space, contrastive learning allows for effective learning of the underlying representation of the data. The contrastive loss function for two positive samples, denoted as i and j, is defined by:
ℒ_i, j = -logexp(cos(z_i, z_j)/τ)/∑_k=1^2N1_[k ≠ i] exp(cos(z_i, z_k)/τ),
where, z_i is the embedding of the encoder, cos() is the cosine similarity function, and τ is a temperature parameter. A visual illustration of the SimCLR method is depicted in Figure <ref>.
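A compact PyTorch sketch of this loss is shown below; it follows the standard NT-Xent formulation and is not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss for a batch of positive pairs.

    z1, z2 : (B, d) projections of two augmentations of the same B images.
    """
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2B x d, unit norm
    sim = z @ z.t() / tau                                 # cosine similarities / tau
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                 # remove self-similarity
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)                  # positive pair as the "label"
```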
MoCo <cit.> is another popular self-supervised learning technique. Similar to SimCLR, the basic idea behind this technique is to learn representations that can differentiate between positive and negative samples. The positive pairs are again generated by data augmentation, while negative pairs are taken from a queue of samples that were stored at previous iterations of training. MoCo maintains two encoders, one `online' and the other `momentum'. The online encoder is updated after processing each minibatch, while the momentum encoder is updated using a moving average of the online encoder parameters. This ensures that the momentum encoder is always slightly behind the online encoder, enabling it to capture information from a larger amount of samples. The momentum encoder is updated with the following equation:
θ_k = m*θ_k + (1-m)*θ_q,
where θ_q and θ_k are the parameters of the online and momentum encoder and m is the momentum coefficient.
MoCo v2 is a simple extension of MoCo that utilizes the projection head and hard augmentation concepts introduced in SimCLR. Figure <ref> depicts the MoCo framework.
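The momentum update in the equation above amounts to a parameter-wise exponential moving average, as in the PyTorch sketch below (an illustrative helper; MoCo's queue of negative keys is omitted).

```python
import torch

@torch.no_grad()
def momentum_update(online_encoder, momentum_encoder, m=0.999):
    """EMA update of the momentum (key) encoder from the online (query) encoder.

    theta_k <- m * theta_k + (1 - m) * theta_q, applied to every parameter.
    """
    for q, k in zip(online_encoder.parameters(), momentum_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)
```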
BYOL <cit.> also utilizes two encoders called online and target encoder. Like MoCo, the target encoder is a moving average of the online encoder. The objective of BYOL is to train an online encoder to predict the target encoder's representation of the same image under different augmentations. The online encoder is updated with the following loss function:
ℒ_i, j = 2 - 2 <P, Z_j>/||P||_2 · ||Z_j||_2,
where Z_j is the embedding generated by the target encoder, and P is the prediction generated from the online encoder's representation. The BYOL framework is depicted in Figure <ref>.
SwAV <cit.> is another popular self-supervised learning technique, which is a clustering-based approach. It first generates multiple views on a sample image by applying augmentations and then predicts its cluster assignment. Considering each view as a query and other views as keys, SwAV <cit.> utilizes contrastive learning on its cluster assignment. This clustering-based approach is able to learn high-quality representations even when the dataset is highly diverse or has many classes. The SwAV method is visualized in Figure <ref>.
Barlow Twins <cit.> proposed a loss function that explicitly avoids the collapse in self-supervised representation learning. It does so by calculating the cross-correlation matrix between two augmented images and making this matrix close to identity. The loss function of Barlow Twins is represented as follows:
ℒ_BT = ∑_i(1-C_ii)^2 + λ∑_i ∑_j≠ iC_ij^2,
where C_ij is the cross-correlation between ith and jth images in a batch. Here, the first term is called the invariance term, and the second term is called the redundancy reduction term. The Barlow Twins method is shown in Figure <ref>.
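A PyTorch sketch of this loss is given below; the batch-standardisation step and the default value of λ are our assumptions rather than details stated in the text.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss from the cross-correlation of two batch embeddings.

    z1, z2 : (B, d) embeddings of two augmentations of the same B images.
    """
    B = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)           # standardise per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.t() @ z2 / B                                    # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()         # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction term
    return on_diag + lam * off_diag
```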
§ EXPERIMENTS AND RESULTS
In this section, we describe the experimental setup and the results. First, we present the implementation details and dataset description. Then we discuss the results of different active learning methods for FER. Finally, we present the results of our proposed method and a details study of different aspects of the method.
§.§ Datasets and Implementation Details
The experiments in the paper are conducted on 3 popular expression recognition datasets: FER13, RAF-DB, and KDEF. These datasets were selected to cover various aspects of FER datasets, including dataset size (from small to very large), spatial resolutions (from low to high), and sources (lab condition vs. in-the-wild). FER13 <cit.> is an in-the-wild dataset that was collected by the Google search API. All images in this dataset have an input resolution of 48×48, and it contains seven expression classes with 28K and 7K images in the training and validation splits, respectively. RAF-DB <cit.> is another in-the-wild dataset containing 12K and 3K images for training and validation, respectively. The images in this dataset are re-scaled to a resolution of 96×96. KDEF <cit.> is a smaller dataset that was collected in a lab environment with comparatively higher-resolution images.
To ensure a fair comparison, all the active learning methods in the experiment were trained under the same training settings. Specifically, a ResNet-18 model is trained for seven cycles (c) with 40% of the total labelled samples of the original dataset. The models were trained with an SGD optimizer with a learning rate of 0.01 and a batch size of 20. The pre-training was done for 400 epochs following the implementation details of SimCLR <cit.>. The code is implemented in PyTorch and trained with Nvidia V100 GPU.
§.§ Performance of Existing Active Learning Methods on FER
In this section, we evaluate the performance of various active learning methods for FER on the three aforementioned datasets. The results of this study are presented in Table <ref>. We observe three important findings that summarize the results from this study as follows:
(1) Existing methods perform poorly on FER.
In general, active learning on FER does not provide a reasonable improvement over random sampling. Only Least confidence and GLISTER show improvements over random sampling for all datasets.
The average across the active learning methods shows some improvement over random sampling for FER13 and RAF-DB datasets (2.30% and 2.74%) but gets reduced performance for KDEF. This signifies the fact that FER requires more specialized active learning methods to get a reasonable improvement over random sampling.
(2) Simpler methods outperform more complicated and specialized methods.
For FER, simpler approaches such as Entropy, Margin, or Least Confidence show improvements over the latest methods like BADGE, GLISTER, and CoreSet.
Among the two methods that show improvements for all datasets (Least Confidence, GLISTER), Least confidence shows 2.48%, 2.89%, and 2.46% improvements on FER, RAF-DB, and KDEF, while GLISTER shows 1.98%, 3.28%, and 0.14% improvements. Thus we can conclude that the Least Confidence method is the best default choice for FER datasets for active learning.
(3) Active learning does not work well on small datasets.
In our experiment on the smallest dataset, KDEF, we find no improvement for most of the active learning methods. Apart from Least Confidence and GLISTER, all other methods perform below random sampling.
We argue that this poor performance is caused by the cold start problem. Since KDEF is a small dataset, training starts with a very small labelled set, and the model thus overfits without learning generalizable representations.
In the next section, we present the results of our proposed solution that alleviates the cold start problem and improves the accuracy of all datasets, including the very small KDEF dataset.
§.§ Performance of the Proposed Solution
In Table <ref>, we present the performance of the best active learning method for FER, i.e., Least Confidence, with and without our proposed solution for self-supervised pre-training.
In general, we observe noticeable improvements with the proposed SSL pre-training in comparison to the baseline, with the highest improvement being 6.74% on the KDEF dataset. Recall that most existing methods showed performance degradation on KDEF due to the cold start problem. Furthermore, the proposed method shows a 9.2% improvement over random sampling on this dataset. This shows the effectiveness of the proposed approach for solving the cold start problem and improving performance. On the RAF-DB dataset, we see a 2.93% improvement with pre-training and a 5.82% improvement over random sampling. Similarly, for FER13, we observe a 1.48% improvement with pre-training and 3.96% over random sampling.
§.§ Ablation Study
In this section, we further analyze the impact of SSL pre-training on active learning. While Table <ref> demonstrated the positive impact of pre-training on the Least Confidence method, we investigate whether this improvement is a result of the pre-training alone or of the fact that pre-training combined with active learning alleviates the cold start problem, by performing the same pre-training followed by random sampling. This result is presented in Table <ref>, where we observe a 2.45%, 2.64%, and 6.81% drop in performance on the three datasets, respectively. This finding reinforces that a pre-trained encoder (in a self-supervised setting) on its own does not provide much improvement. Rather, selecting more representative samples in active learning boosts performance.
We also investigate different choices for self-supervised pre-training, which we discussed in Section <ref>. We summarize these results in Table <ref>. Overall, observations from the table show that pre-training with SimCLR provides the best results compared to other self-supervised methods. Barlow Twins shows the next best accuracy for RAF-DB and KDEF. MoCo v2 shows the second-best accuracy for FER13. While SimCLR provides 1.48%, 2.93%, and 6.74% improvements for the FER13, RAF-DB, and KDEF datasets, Barlow Twins shows 2.73% and 5.04% improvements for the RAF-DB and KDEF datasets, and MoCo v2 demonstrates a 1.28% improvement for FER13.
§.§ Sensitivity study
In this section, we present a detailed sensitivity study on different hyper-parameters involved in the entire pipeline, including pre-training and active learning. More specifically, we conduct experiments on the following factors: (1) initial labelled set size of active learning, (2) number of active learning cycles, (3) number of epochs per cycle, (4) total labelling budget, and (5) optimizer. We show the results for the best performing active learning method for FER, i.e., Least Confidence, on all three datasets.
§.§.§ Sensitivity toward the initial labelled set size
The `initial labelled set size (s)' is an important hyper-parameter for active learning. Since the initial labelled set is selected randomly, selecting a large value for s reduces the number of samples that can be selected with active learning at later cycles. On the other hand, choosing very small values of s can contribute to the cold start problem. Therefore, we investigate different values of s for FER using the two-step training solution. In Figure <ref>, we illustrate the sensitivity for different initial labelled set sizes. The figure shows that choosing small values of s leads to better performance. For example, both FER13 and RAF-DB show the best performance when only 5% of samples are selected as the initial labelled set. For KDEF, the best accuracy is achieved with an initial labelled set size of 15%. We argue that the underlying representations learned by the self-supervised pre-training are responsible for this phenomenon. Due to the pre-training step, the model can learn from a small initial labelled set and select most of the samples in later cycles. Another important trend is the increased standard deviation when more samples are selected as the initial labelled set, especially for the KDEF dataset.
§.§.§ Sensitivity toward the number of active learning cycles
Another important parameter for active learning is the total number of cycles. A large number of cycles provides the opportunity to select more representative samples at later cycles. Nevertheless, increasing the number of cycles also increases the total training cost, so it is important to find an optimal value for this parameter. The sensitivity toward different numbers of training cycles is presented in Figure <ref>. The best performance is obtained for FER13 and RAF-DB when the model is trained for seven cycles, and for KDEF with nine cycles.
§.§.§ Sensitivity toward training epochs per cycle
We also investigate the optimal number of training epochs per cycle.
This is another important parameter, since training for an excessive number of epochs can cause the model to overfit the data available in that cycle, whereas using too few can hamper learning on that cycle's data. We therefore present a sensitivity study on the number of active learning epochs in Figure <ref>. The best accuracies are observed at 150 epochs for both the KDEF and RAF-DB datasets. FER13 benefits from longer training, obtaining its best accuracy with 250 epochs, with 200 epochs giving a very close result.
§.§.§ Performance versus different labelling budgets
In Figure <ref>, we show the performance for different amounts of total labelling budget. In general, more labelled samples result in better accuracy for almost all settings. However, the change in accuracy is sharper in low-data regimes (e.g., an increase from 20% to 25%) than in higher ones.
§.§.§ Sensitivity toward the optimizer
We also investigate the impact of the choice of optimizer on the final performance in Table <ref>. The results in this table show that SGD is considerably better than the Adam optimizer for all the datasets: SGD shows 7.29%, 4.31%, and 3.44% higher accuracy compared to Adam on KDEF, RAF-DB and FER13, respectively.
§ CONCLUSION
In this study, we explored active learning as a solution for reducing the reliance on large amounts of labelled data to train deep learning models for FER. First, we implemented and evaluated various active learning methods for FER and confirmed the presence of a cold start problem. To overcome this issue and further enhance FER performance, we employed a two-step training protocol. In the first step, we conducted contrastive self-supervised pre-training using the entire set of unlabeled data. Our extensive studies showed that the two-step training protocol alleviates the cold start problem and improves performance by considerable margins. An extensive ablation study showed the effectiveness of the proposed pre-training step, and a comprehensive sensitivity study identified the optimal parameters for each dataset. We hope that this research will bring attention to this important direction for reducing the labelling cost, which can help facilitate the development of improved FER methods.
§ ACKNOWLEDGEMENTS
We would like to thank Bank of Montreal and Mitacs for funding this research. We are also thankful to SciNet HPC Consortium for helping with the computation resources.
§ ETHICAL IMPACT STATEMENT
This study did not include the collection of any new datasets as it relied on three popular public FER datasets used by most existing works in this domain. These datasets feature diverse demographic groups and contain no personal identification information (besides what has been already public or known to the original authors of these datasets) or offensive material, thereby avoiding any privacy concerns. Since our work relies on datasets collected from the internet, we believe good generalizability towards different demographic groups of people is achievable.
However, a detailed study on bias is required to further analyze this notion.
We acknowledge that, like any FER method, the system developed in this paper has the potential to be used to analyze the facial expressions of individuals without their consent. As a result, we find it absolutely imperative for such systems to be used ethically and responsibly with full compliance with ethical, moral, and legal guidelines.
The training process of each experiment took under 12 hours on a single Nvidia V100 GPU, a reasonable time-frame that does not pose a significant carbon footprint.
|
http://arxiv.org/abs/2307.02130v1 | 20230705091123 | Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso | [
"Can Pouliquen",
"Paulo Gonçalves",
"Mathurin Massias",
"Titouan Vayer"
] | cs.LG | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Implicit differentiation for hyperparameter tuning
the weighted Graphical Lasso
[email protected]
PauloGonç[email protected]
[email protected]
[email protected]
Univ Lyon, Ens Lyon, UCBL, CNRS, Inria, LIP, F-69342, LYON Cedex 07, France.
We derive the mathematical results needed to implement a hyperparameter calibration procedure for the Graphical Lasso via a bilevel optimization problem solved with a first-order method.
In particular, we derive the Jacobian of the Graphical Lasso solution with respect to its regularization hyperparameters.
We provide a framework and algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method. In particular, we derive the Jacobian of the Graphical Lasso solution with respect to its regularization hyperparameters.
§ INTRODUCTION
The Graphical Lasso estimator (GLASSO) <cit.> is a commonly employed and established method for estimating sparse precision matrices.
It models conditional dependencies between variables by finding a precision matrix that maximizes the ℓ_1-penalized log-likelihood of the data under a Gaussian assumption.
More precisely, the GLASSO is defined as[in variants, the diagonal entries of Θ are not penalized. This is handled by the framework of <Ref>]
Θ̂(λ) = argmin_Θ≻ 0 Φ(Θ, λ), where Φ(Θ, λ) ≜ -logdet(Θ) + ⟨Θ, 𝐒⟩ + λ‖Θ‖_1 ,
and 𝐒 = 1/n∑_i=1^n 𝐱_i 𝐱_i^⊤∈ℝ^p × p is the empirical covariance matrix of the data (𝐱_1, ⋯, 𝐱_n).
There exist many first- and second-order algorithms to solve this problem <cit.>. These approaches all require choosing the right regularization hyperparameter λ that controls the sparsity of Θ̂(λ).
This is a challenging task that typically involves identifying the value of λ for which the estimate Θ̂(λ) minimizes a certain performance criterion 𝒞.
This problem can be framed as the bilevel optimization problem
λ^opt = argmin_λ{ℒ(λ) ≜𝒞(Θ̂(λ)) }
s.t. Θ̂(λ) = argmin_Θ≻ 0Φ(Θ, λ)
,
where the minimizations over λ and Θ are called respectively the outer and inner problems. The standard approach to tune the hyperparameter λ is grid-search: for a predefined list of values for λ, the solutions of (<ref>) are computed and the one minimizing 𝒞 is chosen, which corresponds to solving (<ref>) with a zero-order method.
In this paper, we propose instead a first-order method for (<ref>), relying on the computation of the so-called hypergradient and of the Jacobian of the GLASSO solution with respect to λ. Despite the non-smoothness of the inner problem, we derive a closed-form formula for this Jacobian.
Our main contributions are the derivations of the equations of implicit differentiation for the GLASSO: first in the single-parameter regularization case, for ease of exposition, in <Ref>, and then for matrix regularization in <Ref>. Our work paves the way for a scalable approach to hyperparameter tuning for the GLASSO and its variants, and could naturally apply to more complex extensions of the GLASSO such as <cit.>.
We provide open-source code for the reproducibility of our experiments which are treated in <Ref>.
Related work
Although not strictly considering the GLASSO problem, some other alternatives to grid-search have been considered in the literature, including random search and Bayesian optimization.
While we compute the hypergradient by implicit differentiation, automatic differentiation in forward and backward modes has also been proposed.
Notation
The set of integers from 1 to k is [k].
For a set ⊂ [p] and a matrix 𝐀∈^p × p, 𝐀_:,S (resp. 𝐀_S,:) is the restriction of 𝐀 to columns (resp. rows) in .
The Kronecker and Hadamard products between two matrices are denoted by ⊗ and ⊙ respectively.
The column-wise vectorization operation, transforming matrices into vectors, is denoted by vec(·) and vec^-1(·) denotes the inverse operation.
For a differentiable function F of two variables, ∂_1 F and ∂_2 F denote the Jacobians of F with respect to its first and second variable respectively.
A fourth-order tensor 𝐀 applied to a matrix 𝐁 corresponds to a contraction according to the last two indices: (𝐀 : 𝐁)_ij = ∑_k,l A_ijklB_kl.
The relative interior of a set is denoted by relint().
§ THE SCALAR CASE
If the solution of the inner problem (λ) is differentiable with respect to λ, the gradient of the outer objective function , called hypergradient, can be computed by the chain rule:
dℒ/dλ(λ) = ∑_i,j=1^p ∂𝒞/∂Θ_ij(Θ̂) ∂Θ̂_ij/∂λ(λ) .
This work was partially funded by the AllegroAssai ANR-19-CHIA-0009 project.
The hypergradient can then be used to solve the bilevel problem with a first-order approach such as gradient descent:
λ_k+1 = λ_k - ρ dℒ/dλ(λ_k).
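As an illustration, the resulting outer loop is only a few lines long. The sketch below assumes that a solver for the inner GLASSO problem, a routine computing the Jacobian 𝐉 derived below, and the gradient of the criterion are provided as functions; all names are placeholders.

```python
import numpy as np

def hypergradient_descent(S_train, glasso_solver, jacobian, grad_criterion,
                          lam0, rho=0.1, n_iter=50):
    """First-order tuning of the scalar GLASSO hyperparameter (sketch)."""
    lam = lam0
    for _ in range(n_iter):
        Theta = glasso_solver(S_train, lam)          # inner problem
        J = jacobian(Theta, S_train, lam)            # p x p matrix d Theta_hat / d lambda
        grad = np.sum(grad_criterion(Theta) * J)     # chain rule: <grad C(Theta), J>
        lam -= rho * grad                            # gradient step on lambda
    return lam
```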
The main challenge in the hypergradient evaluation is the computation of ∂Θ̂_ij/∂λ(λ), that we summarize in a p× p matrix[𝐉 is the image by vec^-1 of the Jacobian of λ↦vec(Θ̂(λ))] 𝐉 = (∂Θ̂_ij/∂λ(λ))_ij.
When the inner objective Φ is smooth, 𝐉 can be computed by differentiating the optimality condition of the inner problem, ∇_ΘΦ(Θ̂, λ) = 0, with respect to λ, as in .
Unfortunately, in our case the inner problem is not smooth.
We however show in the following how to compute 𝐉 by differentiating a fixed point equation instead of differentiating the optimality condition as performed in for the Lasso.
The main difficulty in our case stems from our optimization variable being a matrix instead of a vector, which induces the manipulation of tensors in the computation of 𝐉.
Let
F : ℝ^p × p×ℝ_+ →ℝ^p × p
(Θ, λ) ↦ prox_γλ‖·‖_1 (Θ)
,
which is equal to the soft-thresholding operator
F(Θ, λ)=
sign(Θ) ⊙ (|Θ| - λγ)_+ ,
where all functions apply entry-wise to Θ. When Θ̂ solves the inner problem (<ref>), it fulfills a fixed-point equation related to proximal gradient descent.
Valid for any γ>0 <cit.>, this equation is as follows:
Θ̂ = F(Θ̂ - γ (𝐒 - Θ̂^-1), λ) .
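For concreteness, the soft-thresholding map F and the residual of the fixed-point equation can be written as follows (a numpy sketch with illustrative names; the residual should vanish numerically at a GLASSO solution).

```python
import numpy as np

def soft_threshold(Z, thresh):
    """Entry-wise soft-thresholding: sign(Z) * (|Z| - thresh)_+ ."""
    return np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)

def fixed_point_residual(Theta_hat, S, lam, gamma):
    """Residual of Theta_hat = F(Theta_hat - gamma * (S - Theta_hat^{-1}), lam)."""
    Z_hat = Theta_hat - gamma * (S - np.linalg.inv(Theta_hat))
    return Theta_hat - soft_threshold(Z_hat, gamma * lam)
```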
To compute 𝐉, the objective is now to differentiate (<ref>) with respect to λ. By defining 𝐙̂≜Θ̂ - γ(𝐒 - Θ̂^-1) we will show that F is differentiable at (𝐙̂, λ). Since F performs entry-wise soft-thresholding, each of its coordinates is weakly differentiable <cit.> and the only non-differentiable points are when |Ẑ_ij| = λγ. To ensure that none of the entries of 𝐙̂ take the value ±λγ, we will use the first-order optimality condition for the inner problem (<ref>).
Let Θ̂(λ) be a solution of (<ref>).
Then, using Fermat's rule and the expression of the subdifferential of the ℓ_1-norm <cit.>,
[Θ̂(λ)^-1]_ij - S_ij∈
{λ sign(Θ̂(λ)_ij)} , if Θ̂(λ)_ij≠ 0 ,
[-λ, λ] otherwise.
We also require the following assumption which is classical (see e.g. <cit.> and references therein).
[Non degeneracy]
We assume that the inner problem is non-degenerated, meaning that it satisfies a slightly stronger condition than (<ref>):
^-1 - ∈relint λ∂·_1 .
This implies that in (<ref>), the interval [-λ, λ] in the second case becomes (-λ, λ).
Using <Ref> under <Ref>, we conclude that |Ẑ_ij| never takes the value λγ so that (<ref>) is differentiable.
Indeed when Θ̂(λ)_ij = 0, |[Θ̂^-1]_ij - S_ij| < λ, which implies |Ẑ_ij| < λγ.
Conversely, when Θ̂(λ)_ij≠ 0, Ẑ_ij = Θ̂(λ)_ij + γλ sign(Θ̂(λ)_ij), which implies |Ẑ_ij| > λγ.
Consequently, we can differentiate <Ref> w.r.t. λ, yielding
𝐉 =
∂_1 F(𝐙̂, λ) :
(𝐉 - γΘ̂^-1𝐉Θ̂^-1)
+ ∂_2 F(𝐙̂, λ) .
The goal is now to solve (<ref>) in 𝐉. We define 𝐃≜∂_1 F(𝐙̂, λ), the Jacobian of F with respect to its first variable at (𝐙̂, λ), which is represented by a fourth-order tensor in ℝ^p × p × p × p:
D_ijkl = [∂ F/∂ Z_kl(𝐙̂, λ)]_ij .
We also note 𝐄≜∂_2 F(𝐙̂, λ), viewed as a p × p matrix.
Jacobian with respect to Z
Because the soft-thresholding operator acts independently on entries, one has D_ijkl = 0 when (i, j) ≠ (k, l).
From <Ref>, the remaining entries are given by
D_ijij =
0 , if |Ẑ_ij| < λγ ,
1 , otherwise.
Jacobian with respect to λ
Similarly to 𝐃, 𝐄 is given by
E_ij = [∂ F/∂λ(𝐙̂, λ)]_ij =
0 , if |Ẑ_ij| < λγ ,
-γ sign(Ẑ_ij) , otherwise.
We can now find the expression of 𝐉 as described in the next proposition.
Let 𝒮⊂[p^2] be the set of indices i such that (vec(|𝐙̂|))_i > λγ.
The Jacobian 𝐉 is given by
(vec(𝐉))_𝒮
= [ ( Θ̂^-1⊗Θ̂^-1)_𝒮,𝒮]^-1 ( vec(𝐄) )_𝒮 / γ ,
(vec(𝐉))_𝒮^c = 0
.
From (<ref>), one has that 𝐃 applied to 𝐀∈ℝ^p × p is simply a masking operator: 𝐃 : 𝐀 = 𝐌⊙𝐀,
where M_ij = 1{|Ẑ_ij| > λγ}.
Thus (<ref>) reads
𝐉 = 𝐌⊙( 𝐉 - γΘ̂^-1𝐉Θ̂^-1) + 𝐄 .
Now by the expression of 𝐄 (<ref>), 𝐄 has the same support as 𝐌, so 𝐄 = 𝐌⊙𝐄, so 𝐉 = 𝐌⊙𝐉, and (<ref>) simplifies to
𝐌⊙ (Θ̂^-1𝐉Θ̂^-1) = 𝐄 / γ .
Using the mixed Kronecker matrix-vector product property vec(𝐀𝐂𝐁^⊤) = (𝐁⊗𝐀) vec(𝐂), by vectorizing (<ref>), we get
vec(𝐌) ⊙ (Θ̂^-1⊗Θ̂^-1) vec(𝐉)
= vec(𝐄) / γ .
Writing 𝐊 = Θ̂^-1⊗Θ̂^-1, we have 𝐊 vec(𝐉) = 𝐊_:,𝒮 (vec(𝐉))_𝒮 because vec(𝐉) is 0 outside of 𝒮.
Then, (<ref>) can be restricted to entries in 𝒮, yielding
𝐊_𝒮,𝒮 (vec(𝐉))_𝒮 = (vec(𝐄))_𝒮 / γ, which concludes the proof.
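The closed form above translates directly into a short routine: build the support set, form the restricted Kronecker product and solve a linear system. The numpy sketch below uses column-wise vectorization (order="F") to match the convention above; the names are illustrative.

```python
import numpy as np

def glasso_jacobian(Theta_hat, Z_hat, lam, gamma):
    """Jacobian d Theta_hat / d lambda of the GLASSO solution (scalar case)."""
    p = Theta_hat.shape[0]
    on_support = np.abs(Z_hat) > lam * gamma
    support = on_support.ravel(order="F")                 # indices in the set S
    E = -gamma * np.sign(Z_hat) * on_support              # Jacobian of F w.r.t. lambda
    Theta_inv = np.linalg.inv(Theta_hat)
    K = np.kron(Theta_inv, Theta_inv)                     # Theta^{-1} (x) Theta^{-1}
    K_SS = K[np.ix_(support, support)]
    vec_J = np.zeros(p * p)
    vec_J[support] = np.linalg.solve(K_SS, E.ravel(order="F")[support]) / gamma
    return vec_J.reshape((p, p), order="F")
```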
§ MATRIX OF HYPERPARAMETERS
In the vein of <cit.>, we now consider the weighted GLASSO where the penalty is controlled by a matrix of hyperparameters 𝚲∈ℝ^p × p.
In the weighted GLASSO, λ‖Θ‖_1 is replaced by
‖𝚲⊙Θ‖_1 = ∑_k, lΛ_kl |Θ_kl| ,
with 𝚲 = (Λ_kl)_k,l ∈ [p].
Due to its exponential cost in the number of hyperparameters, grid search can no longer be envisioned.
In this setting, a notable difference with the scalar hyperparameter case is the dimensionality of the terms.
Indeed, the hypergradient ∇ℒ(𝚲) is now represented by a p × p matrix,
while 𝐃, 𝐄 and 𝐉 will be represented by fourth-order tensors in ℝ^p × p × p × p.
For simplicity, we compute each element of the matrix ∇ℒ(𝚲) individually as
[∇ℒ(𝚲)]_kl = ∑_i,j=1^p ∂𝒞/∂Θ_ij(Θ̂(𝚲)) ∂Θ̂_ij/∂Λ_kl(Λ_kl) ∈ℝ .
In the matrix case, the function F becomes F(Θ, 𝚲)
= sign(Θ) ⊙ (|Θ| - γ𝚲)_+.
By differentiating the fixed point equation of proximal gradient descent,
Θ̂(𝚲) =
F(Θ̂(𝚲) - γ (𝐒 - Θ̂(𝚲)^-1), 𝚲) ,
with respect to Λ_kl, we obtain a Jacobian that can be expressed by a p × p matrix [𝐉_(Λ_kl)]_ij = ∂Θ̂_ij/∂Λ_kl(𝚲). It satisfies
𝐉_(Λ_kl) =
𝐃 :
(𝐉_(Λ_kl)
- γΘ̂(𝚲)^-1𝐉_(Λ_kl)Θ̂(𝚲)^-1) + 𝐄_(Λ_kl) .
Similarly to the scalar case, D_ijkl = 1_(i,j) = (k,l) 1_|Ẑ_ij| > γΛ_kl and [𝐄_(Λ_kl)]_ij = -γ sign(Ẑ_kl) 1_(i,j) = (k,l) 1_|Ẑ_ij| > γΛ_kl. The following proposition thus gives the formula for 𝐉_(Λ_kl).
Let 𝒮⊂[p^2] be the set of indices i such that (vec(|𝐙̂|))_i > γ (vec(𝚲))_i.
The Jacobian 𝐉_(Λ_kl) is given by
(vec(𝐉_(Λ_kl)))_𝒮 = [ ( Θ̂(𝚲)^-1⊗Θ̂(𝚲)^-1)_𝒮,𝒮]^-1 ( vec(𝐄_(Λ_kl)) )_𝒮 / γ,
(vec(𝐉_(Λ_kl)))_𝒮^c = 0
.
The Jacobian of Θ̂(𝚲) with respect to 𝚲 can be represented by the ℝ^p × p × p × p tensor 𝐉̂ where Ĵ_ijkl = [𝐉_(Λ_kl)]_ij.
We notice that the inverse of the Kronecker product, the bottleneck in the computation of 𝐉̂, only has
to be computed once for all (Λ_kl)_k,l ∈ [p]. By its expression, 𝐄_(Λ_kl) is a matrix with a single non-zero element, equal to -γ sign(Ẑ_kl), at index (i,j) = (k,l). Therefore 𝐉_(Λ_kl) is obtained, up to sign, by extracting the only column of [ ( Θ̂(𝚲)^-1⊗Θ̂(𝚲)^-1)_𝒮,𝒮]^-1 indexed by that non-zero element.
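The following sketch illustrates this remark: the restricted Kronecker system is inverted once, each slice 𝐉_(Λ_kl) is read off as a signed column of that inverse, and the entries of the hypergradient are obtained by contracting each slice with the gradient of the criterion. All names are illustrative.

```python
import numpy as np

def weighted_glasso_hypergradient(Theta_hat, Z_hat, Lam, grad_C, gamma):
    """Assemble the p x p hypergradient for the weighted GLASSO (sketch)."""
    p = Theta_hat.shape[0]
    support = (np.abs(Z_hat) > gamma * Lam).ravel(order="F")
    Theta_inv = np.linalg.inv(Theta_hat)
    K_SS = np.kron(Theta_inv, Theta_inv)[np.ix_(support, support)]
    K_SS_inv = np.linalg.inv(K_SS)                      # computed once for all (k, l)
    g = grad_C.ravel(order="F")                         # vec of the criterion's gradient
    sign_Z = np.sign(Z_hat).ravel(order="F")
    hypergrad = np.zeros(p * p)                         # entries outside S stay zero
    for col, flat_kl in enumerate(np.flatnonzero(support)):
        vec_J_S = -sign_Z[flat_kl] * K_SS_inv[:, col]   # (vec J_(Lambda_kl))_S
        hypergrad[flat_kl] = g[support] @ vec_J_S       # contraction with grad_C
    return hypergrad.reshape((p, p), order="F")
```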
§ EXPERIMENTS
In this section, we present our proposed methodology for tuning the hyperparameter(s) of the GLASSO, and we aim to address the following three questions through our experiments: 1) How does our approach compare to grid-search? 2) What level of improvement can be achieved by extending to matrix regularization? 3) What are the limitations of our method in its current state?
To answer these questions, we generated synthetic data using the function of , which created a random 100 × 100 sparse and positive definite matrix Θ_true by imposing sparsity on its Cholesky factor. We then sample 2000 points following a Normal distribution 𝐱_i ∼𝒩(0, Θ_true^-1), i∈ [n] i.i.d.
The criterion and its gradient Selecting the appropriate criterion 𝒞 to minimize is not an easy task without strong prior knowledge of the true matrix Θ_true to be estimated.
In our numerical validation, we use the unpenalized negative likelihood on left-out data.
More precisely, we split the data into a training and testing set with a 50-50 ratio (𝐱_i)_i ∈ [n] = (𝐱_i)_i ∈ I_train∪ (𝐱_i)_i ∈ I_test and we consider the hold-out criterion
𝒞(Θ) = -logdet(Θ) + ⟨𝐒_test, Θ⟩ where 𝐒_test= 1/|I_test|∑_i ∈ I_test𝐱_i 𝐱_i^⊤ is the empirical covariance of the test samples (respectively 𝐒_train for the train set).
This corresponds to the negative log-likelihood of the test data under the Gaussian assumption ∀ i ∈ I_test, 𝐱_i ∼𝒩(0, Θ^-1) i.i.d <cit.>. The intuition behind the use of this criterion is that Θ̂ should solve the GLASSO problem on the training set while remaining plausible on the test set.
Other possible choices include reconstruction errors such as () = _test - _F, but a comparison of the effect of the criterion on the solution is beyond the scope of this paper.
In our case, the criterion's gradient ∇𝒞(Θ) is then equal to
-Θ^-1 + 𝐒_test <cit.>.
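In code, the hold-out criterion and its gradient take the following form (a sketch; `slogdet` is used for numerical stability and the names are illustrative).

```python
import numpy as np

def holdout_criterion(Theta, S_test):
    """Unpenalized negative test log-likelihood: -logdet(Theta) + <S_test, Theta>."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.sum(S_test * Theta)

def holdout_criterion_grad(Theta, S_test):
    """Gradient of the hold-out criterion: -Theta^{-1} + S_test."""
    return -np.linalg.inv(Theta) + S_test
```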
Computing the Jacobian Based on the previous results we have all the elements at hand to compute the hypergradient for scalar and matrix hyperparameters.
In the first case it reads dℒ/dλ(λ) = ⟨𝐉, ∇𝒞(Θ̂(λ)) ⟩ with 𝐉 as in <Ref>, while in the latter case it can be computed with the double contraction ∇ℒ(𝚲) = 𝐉̂ : ∇𝒞(Θ̂(𝚲)) with 𝐉̂ as in <Ref>.
In the code, we use the parametrization λ = exp(α) and Λ_kl = exp(α_kl) respectively for the scalar and matrix regularization, and optimize over α in order to impose the positivity constraint on λ, as in <cit.>.
We rely on the GLASSO solver <cit.> for computing Θ̂(·).
For solving (<ref>), we use simple gradient descent with fixed step-size ρ = 0.1.
Comparison with grid-search
As a sanity check, we first compare our method with a single hyperparameter (scalar case) to grid search. The initial regularization parameter λ^init is chosen such that the estimated precision matrix (λ^init) is a diagonal matrix:
λ^init = log(_train_∞).
<Ref> demonstrates that both methods find the same optimal λ, which we refer to as λ^opt_id, and that a suitably tuned first-order method can swiftly converge to this optimum. We also report in the same Figure the relative error (RE) ‖Θ_true - Θ̂(λ)‖/‖Θ_true‖ between the estimation and the true matrix (in blue). We notice that Θ̂(λ^opt_id) results in a slightly worse RE than the optimal one. This highlights the importance of the choice of 𝒞, which may not necessarily reflect the ability to precisely reconstruct the true precision matrix Θ_true. Nonetheless, it is important to note that the RE represents an oracle error since, in practical scenarios, we do not have access to Θ_true. This raises the essential question of criterion selection, which we defer to future research.
Matrix regularization
Our approach demonstrates its value in the context of matrix regularization, where grid search is incapable of identifying the optimal solution within a reasonable amount of time. As depicted in <Ref>, leveraging matrix regularization with appropriately tuned parameters improves the attained value of the bilevel optimization problem.
Furthermore, as demonstrated in <Ref>, our method successfully adjusts each entry Λ_kl of the regularization matrix, resulting in an estimated matrix Θ̂(𝚲^opt) that aligns visually with the oracle Θ_true.
The practical benefit of this improvement remains to be weighed against the computational cost of the method.
While tuning the step-size, we observed that the non-convexity in this case appears to be more severe.
We speculate that utilizing more sophisticated first-order descent algorithms from the non-convex optimization literature could be more robust than plain gradient descent.
§ CONCLUSION
In this work, we have proposed a first-order hyperparameter optimization scheme based on implicit differentiation for automatically tuning the GLASSO estimator.
We exploited the sparse structure of the estimated precision matrix for an efficient computation of the Jacobian of the function mapping the hyperparameter to the solution of the GLASSO.
We then proposed an extension of the single regularization parameter case to element-wise (matrix) regularization. As future directions of research, we plan on studying the influence of the criterion on the sparsity of the recovered matrix, as well as clever stepsize tuning strategies for the hypergradient descent.
In the broader sense, we will also benchmark our method against data-based approaches to hyperparameter optimization such as deep unrolling <cit.>.
Finally, we provide high-quality code available freely on GitHub[<https://github.com/Perceptronium/glasso-ho>] for the reproducibility of our experiments.
|
http://arxiv.org/abs/2307.00739v1 | 20230703040224 | Characterizing slopes for satellite knots | [
"Patricia Sorya"
] | math.GT | [
"math.GT"
] |
A slope p/q is said to be characterizing for a knot K if the homeomorphism type of the p/q-Dehn surgery along K determines the knot up to isotopy. Extending previous work of Lackenby and McCoy on hyperbolic and torus knots respectively, we study satellite knots to show that for a knot K, any slope p/q is characterizing provided |q| is sufficiently large. In particular, we establish that every non-integral slope is characterizing for a composite knot. Our approach consists of a detailed examination of the JSJ decomposition of a surgery along a knot, combined with results from other authors giving constraints on surgery slopes that yield manifolds containing certain surfaces.
§ INTRODUCTION
A non-trivial slope p/q is said to be characterizing for a knot K in S^3 if whenever there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) between the p/q-Dehn surgery along K and the p/q-Dehn surgery along some knot K', then K = K', where “=” denotes an equivalence of knots up to isotopy.
In <cit.>, Lackenby proved that every knot has infinitely many characterizing slopes by showing that any slope is characterizing for a knot K, provided |p|≤|q| and |q| is sufficiently large.
The main theorem of the present paper strengthens this result.
Let K be a knot in S^3. Then any slope p/q is characterizing for K, provided |q| is sufficiently large.
In <cit.>, Kronheimer, Mrowka, Ozsváth and Szabó proved that all non-trivial slopes are characterizing for the unknot. McCoy showed in <cit.> that if K is a torus knot, there are only finitely many non-integral slopes that are non-characterizing for K, thus giving the torus knot case of the theorem. Lackenby showed the hyperbolic knot case in <cit.>. In this paper, we establish the theorem for any knot by studying the case of satellite knots.
The extension to satellite knots requires a distinct approach, as it cannot be simply derived from the cases of hyperbolic and torus knots. This is due to the presence of essential tori in the exterior of a satellite knot, which lead to a non-trivial JSJ decomposition of the knot's exterior. Hence, Dehn surgery along a satellite knot involves attaching a solid torus to a torus boundary component of a manifold that is not a knot exterior. Our strategy therefore consists of an in-depth analysis of the topology of Dehn fillings of manifolds that arise as JSJ pieces of a knot exterior, along with a description of the gluing between these manifolds through the distance between specific slopes. In particular, we rely on the rigidity of Seifert fibred structures, as well as results pertaining to fillings of non-Seifert fibred manifolds that contain certain surfaces.
Moreover, the ideas employed in the proof of Theorem <ref> can be adapted to derive explicit bounds on |q| for some families of satellite knots. We obtain the following result for composite knots.
If K is a composite knot, then every non-integral slope is characterizing for K.
Baker and Motegi constructed composite knots for which every integral slope is non-characterizing (<cit.>, Figure <ref>). As a corollary, Theorem <ref> gives the complete list of non-characterizing slopes for these knots.
The set of non-characterizing slopes for Baker and Motegi's composite knots consists of all integral slopes.
To the author's knowledge, this yields the first examples of knots for which the complete list of non-characterizing slopes is known and is not empty. Indeed, the other known examples of a complete list are for the unknot, the trefoil and the figure-8 knot, for which all slopes are characterizing (<cit.>, <cit.>).
The constraints given by the topology of the exterior of composite knots also lead to the following result.
If K is a knot with an exterior consisting solely of Seifert fibred JSJ pieces, with one of them being a composing space, then any slope that is neither integral nor half-integral is a characterizing slope for K.
I am deeply grateful to my research advisors, Duncan McCoy and Steven Boyer, for their invaluable guidance and the enlightening discussions during which several key ideas were shared. I would also like to thank Laura Wakelin for many insightful conversations and productive exchanges. Lastly, I would like to acknowledge David Futer for bringing to my attention a numerical improvement, and Giacomo Bascapè for his input on the visual aspects of this paper.
§.§ Structure of paper
After introducing our notation in Section <ref>, the paper is structured into three main parts. The first, covered in Section <ref>, describes the JSJ decomposition of a surgery along a knot. The second, consisting of Sections <ref>, <ref> and <ref>, presents the proof of Theorem <ref>. Finally, Sections <ref> and <ref> establish explicit bounds that realize the main theorem for certain families of knots.
§.§ Outline of proof
Dehn surgery along a knot K is obtained by gluing a solid torus to the boundary of S^3_K, the exterior of K in S^3. This boundary is contained in a single JSJ piece of the JSJ decomposition of S^3_K. Thus, to understand the topology of a surgery, we must study the fillings of manifolds that arise as JSJ pieces of a knot exterior. We do so in Section <ref>, where we describe the JSJ decomposition of S^3_K(p/q). In particular, when |q| is sufficiently large, there is one JSJ piece that contains the surgery solid torus; we call it the surgered piece.
For a fixed non-trivial knot K, suppose there is some knot K' such that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). Two scenarios may occur: the surgered piece of S^3_K(p/q) is not mapped to the surgered piece of S^3_K'(p/q), or the surgered pieces are mapped one to another. Most of the work towards Theorem <ref> lies in the study of the first case. For each possible description of K as a pattern P and a companion knot J, we demonstrate that there is a lower bound on |q| determined solely by K such that the surgered piece of S^3_K'(p/q) is not mapped to the outermost JSJ piece of S^3_J. This yields the following proposition, whose proof occupies Sections <ref> and <ref>.
Let K be a knot. Suppose |q|>2. If there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K', then the homeomorphism sends the surgered piece of S^3_K(p/q) to the surgered piece of S^3_K'(p/q), provided |q| is sufficiently large.
It follows that for |q| sufficiently large, we find ourselves in the situation where an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) must send the surgered pieces one to another. In that case, the JSJ structures of S^3_K and S^3_K' agree away from the JSJ pieces that yielded the surgered pieces. Hence, the problem is now reduced to determining a bound on |q| such that the surgered pieces were in fact obtained from the same manifold. This is done in Section <ref>.
In the final two sections of the paper, we outline explicit bounds that realize Theorem <ref> for certain families of knots.
In Section <ref>, we provide a lower bound for |q| that ensures that p/q is a characterizing slope for a cable K whose exterior contains only Seifert fibred JSJ pieces. This bound is obtained from the proof of Theorem <ref>. In particular, when K is not an n-times iterated cable of a torus knot, n ≥ 1, we show that every slope that is not integral or half-integral is characterizing for K.
In Section <ref>, we demonstrate Theorem <ref>, which gives a realization of Theorem <ref> for composite knots when |q|>1. Up until this section, we have assumed |q|>2, which guaranteed that hyperbolic fillings of JSJ pieces of a knot exterior were also hyperbolic. To lower the bound to |q|>1, we need to consider the possibility of exceptional fillings of hyperbolic manifolds. We are able to constrain the topology of half-integer fillings of hyperbolic manifolds of interest by relying on various results that provide upper bounds on the distance between surgery slopes yielding manifolds that contain certain surfaces (<cit.>, <cit.>, <cit.>), <cit.>). We also use the classification by Gordon and Luecke of hyperbolic knots in S^3 and in S^1 × D^2 that admit half-integral toroidal surgeries (<cit.>). As a result, we establish that if a non-integral surgery along a knot is obtained from the filling of a hyperbolic JSJ piece, then it can never be orientation-preserving homeomorphic to a non-integral surgery along a composite knot. Theorem <ref> then follows from the argument in Section <ref> regarding composing spaces.
§ NOTATION AND PRELIMINARIES
Let K be a knot in S^3. We denote by S^3_K the exterior of K in S^3, i.e., the manifold obtained by removing an open tubular neighbourhood ν K of K in S^3.
We write P(J) for the satellite with pattern P and companion knot J. The winding number of P is the algebraic intersection number between P and an essential disc in V = S^1 × D^2. The exterior of the satellite P(J) is a gluing V_P ∪ S^3_J where V_P denotes the exterior of P seen as a knot in V. We call V_P the pattern space associated to P.
Recall that for any compact irreducible orientable 3-manifold M, there is minimal collection 𝐓 of properly embedded disjoint essential tori such that each component of M ∖𝐓 is either a hyperbolic or a Seifert fibred manifold, and such a collection is unique up to isotopy (<cit.>, <cit.>). The JSJ decomposition of M is given by
M = M_0 ∪ M_1 ∪…∪ M_k,
where each M_i is the closure of a component of M ∖𝐓. A manifold M_i is called a JSJ piece of M and a torus in the collection 𝐓 is called a JSJ torus of M. Any homeomorphism between compact irreducible orientable 3-manifolds can be seen as sending JSJ pieces to JSJ pieces, up to composition with an isotopy.
The JSJ piece of S^3_K containing the boundary of ν K is said to be outermost in S^3_K.
For 𝒯 a torus, fix a basis {μ, λ} of H_1(𝒯; ℤ) ≅ℤ⊕ℤ. A simple closed curve on 𝒯 represents a class pμ+qλ where p and q are coprime. We denote this class by p/q ∈ℚ∪{1/0} and we call it a slope. The distance between two slopes p/q and r/s is Δ(p/q, r/s) = |ps-qr| and it corresponds to the absolute value of the algebraic intersection number between curves representing p/q and r/s.
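For instance, Δ(p/q, 1/0) = |p · 0 - q · 1| = |q| and Δ(p/q, rs/1) = |p - qrs|; these two elementary computations will be used repeatedly below.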
If M is a 3-manifold with toroidal boundary components 𝒯_1, …, 𝒯_n with fixed bases {μ_i, λ_i} for each H_1(𝒯_i; ), i=1, …, n, then
M(𝒯_1, …, 𝒯_n ; p_1/q_1, …, p_n/q_n)
denotes the Dehn fillings along a simple closed curve representing p_i/q_i on 𝒯_i for each i=1, …, n. If only one boundary component of M is filled, we may simply write M(p/q) if it is clear from context which boundary component is filled. If ∂ M is connected, there is a unique slope γ on ∂ M that has finite order in H_1(M; ℤ), called the rational longitude of M. We refer to a rational longitude as a longitude if it is of order 1 in H_1(M; ℤ).
When the manifold M is a knot exterior S^3_K, a slope p/q on ∂ S^3_K is expressed in terms of the coordinates of H_1(∂ S^3_K; ℤ) given by the homotopy class of a curve that bounds an essential disc in ν K, the meridian of S^3_K, and the homotopy class of a curve that bounds a surface in S^3_K, the longitude of S^3_K, with orientations following the usual convention (a meridional curve pushed into S^3_K and a longitudinal curve have linking number +1). The meridian is well-defined by Gordon and Luecke's knot complement theorem (<cit.>) and the longitude is the unique element of H_1(∂ S^3_K; ℤ) that is null-homologous in S^3_K. The slope 1/0 corresponds to the meridian, while the slope 0/1 corresponds to the longitude. We will refer to this preferred basis as the one given by the knot K.
When K is a satellite, we have the following.
Let K be a satellite knot. For each JSJ torus 𝒯 of S^3_K, there is a pattern P and a knot J such that K = P(J) and 𝒯 = V_P ∩ S^3_J.
Let 𝒯 be a JSJ torus of S^3_K. It separates S^3_K into A∪_𝒯B, where B contains 𝒦= ∂ S^3_K. Note that S^3 ≅ S^3_K(1/0) ≅ A ∪_𝒯 B(𝒦; 1/0). By the loop theorem, any torus in S^3 bounds a solid torus, so one of A or B(𝒦; 1/0) must be a solid torus. Since 𝒯 is incompressible in A by definition of a JSJ torus, we have that B(𝒦; 1/0) is a solid torus. Its core is a non-trivial knot J in S^3. Thus, A is homeomorphic to S^3_J.
Let V=B(𝒦; 1/0)=B ∪_𝒦 (ν K). Then B is the solid torus V with ν K removed. We can thus see K as a knot in V. By minimality of the JSJ decomposition, K intersects an essential disc in V at least once. Also, K is not the core of V because 𝒯 is not boundary parallel in S^3_K. Hence, V ∖ν K is the pattern space for a pattern P.
Let 𝒯 be a JSJ torus of S^3_K. We say that 𝒯 decomposes K into P and J if 𝒯 separates S^3_K into V_P and S^3_J as described by Lemma <ref>.
If 𝒯 decomposes K into P and J, we fix the preferred basis of H_1(𝒯; ) to be the one given by the meridian and longitude of J (Figure <ref>).
Conversely, a pattern space V_P is the data of a knot P in a solid torus V, along with a slope λ on 𝒯 = ∂ V that intersects μ once, where μ is the slope that bounds a disc in V. Gluing V_P to a knot exterior S^3_J by respectively identifying μ and λ to the meridian and longitude of J results in the exterior of the knot K=P(J). The preferred basis of H_1(𝒯; ℤ), where 𝒯 is seen as a JSJ torus of S^3_K, is {μ,λ}.
Furthermore, for the boundary component 𝒫 = ∂ν P of V_P, there is a unique class λ_P ∈ H_1(𝒫; ℤ) that is homologous to w λ∈ H_1(𝒯; ℤ) in V_P, where w is the winding number of P (see for instance <cit.>). The preferred basis of H_1(𝒫; ℤ) is thus given by λ_P and μ_P, where μ_P is the class of a curve that bounds an essential disc in ν P.
Let K = P(J) be a satellite knot. The classes λ_P and μ_P ∈ H_1(𝒫; ) defined above coincide with the longitude and meridian of S^3_K.
The meridian of S^3_K and the slope μ_P coincide because they both bound an essential disc in ν K.
In S^3_K, the class λ_P is homologous to w times the longitude λ of S^3_J, where w is the winding number of P. Therefore, there is a surface F in S^3_K such that
∂ F = (⊔_i=1^w α_i) ⊔α_P,
where the α_i are curves on ∂ S^3_J representing λ and α_P is a curve on ∂ S^3_K = 𝒫 representing λ_P.
By definition of λ, each α_i bounds a surface S_i ⊂ S^3_J, i = 1, …, w. The union of F with the S_i gives a surface in S^3_K whose boundary is α_P. Hence, λ_P coincides with the longitude of S^3_K.
If a pattern P in a solid torus V intersects an essential disc in V once, then P is a composing pattern. Note that if K = K_1 # K_2 is a composite (or connected sum) of knots K_1 and K_2, then K=P_1(K_2)=P_2(K_1) where P_1, P_2 are composing patterns such that P_1(U) = K_1 and P_2(U) = K_2, where U is the unknot.
The (r,s)-cable of a knot J is denoted C_r,s(J), where s is the winding number of the cable pattern. We may assume that s>0 since the (r,s) and (-r,-s)-cable patterns are equivalent.
The pattern space V_C_r,s is an (r,s)-cable space. It is the outermost JSJ piece of the exterior of C_r,s(J). Further, it admits a Seifert fibration with base orbifold an annulus with one cone point of order s. On its boundary component corresponding to S^3_C_r,s(J), the (r,s)-cable space has regular fibres of slope rs/1. On the other boundary component coinciding with S^3_J, a regular fibre has slope r/s.
We denote by T_a,b the (a,b)-torus knot. Its exterior is Seifert fibred, with two exceptional fibres of orders |a| and |b|. The regular fibres have slope ab/1 on S^3_T_a,b.
§ JSJ DECOMPOSITIONS AND THE SURGERED PIECE
The JSJ pieces of a non-trivial knot exterior take on one of four special types. Here is a version of this result found in <cit.>.
In the JSJ decomposition of the exterior of a non-trivial knot, the outermost JSJ piece is either
* the exterior of a torus knot;
* a composing space, i.e., a Seifert fibre space with at least 3 boundary components and base orbifold a planar surface with no cone points;
* the exterior of a hyperbolic knot or link such that if the component of the link corresponding to the knot is removed, the resulting link is the unlink;
* a cable space, i.e., a Seifert fibre space with base orbifold an annulus with one cone point.
By Lemma <ref>, a JSJ torus of the exterior S^3_K of a knot K is the boundary of the exterior of a non-trivial knot in S^3. Therefore, each JSJ piece of S^3_K is the outermost piece of some knot exterior, which implies that every JSJ piece of S^3_K belong to one of the types listed in Theorem <ref>.
Homological calculations from <cit.> lead to the following two results.
Let P(J) be a satellite knot, where P has winding number w. Denote the boundary components of the pattern space V_P by 𝒫 = ∂ν P and 𝒯= ∂ S^3_J.
* H_1(V_P(𝒫;p/q)) ≅ℤ⊕ (ℤ/g_p,wℤ), where g_p,w is the greatest common divisor of p and w;
* The kernel of H_1(𝒯; ) → H_1(V_P(𝒫;p/q)) induced by inclusion is generated by
(p/g_p,w) μ + (qw^2/g_p,w) λ if w≠0
μ if w=0
,
where {μ, λ} is the basis of H_1(𝒯; ) given by J.
Let K = C_r,s(J) be cable knot.
* If |qrs-p|>1, then S^3_K(p/q) is the union along their boundary of S^3_J and a Seifert fibre space with incompressible boundary;
* If |qrs-p|=1, then S^3_K(p/q) ≅ S^3_J(p/(qs^2)).
Note that if |qrs-p|=1, then g_p,s=1 and p/(qs^2) is a well-defined slope.
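For instance, if K = C_3,2(J) is the (3,2)-cable of a knot J and p/q = 7/1, then |qrs - p| = |6 - 7| = 1, so S^3_K(7) ≅ S^3_J(7/4); for p/q = 9/1 instead, |qrs - p| = 3 > 1 and S^3_K(9) is the union of S^3_J and a Seifert fibre space with incompressible boundary.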
Gordon and Luecke showed that if p/q is not an integer, then the surgery S^3_K(p/q) is irreducible (<cit.>). Thus, it admits a JSJ decomposition. For the rest of this section, we focus our attention on the topology of the JSJ pieces of S^3_K(p/q) when |q|>2. The next theorem combines results from various authors.
Let M be the exterior of a hyperbolic link in S^3 with components L_0, L_1, …, L_n, n ≥ 1, such that the link formed by the components L_1, …, L_n is the unlink. Let σ be a slope on ℒ_0 = ∂ν L_0 ⊂∂ M and let μ be the slope on ℒ_0 that bounds a disc in ν L_0. If Δ(σ, μ) > 2, then M(ℒ_0; σ) is hyperbolic.
Suppose a slope α is such that M(ℒ_0; α) contains an essential disc. Since M is hyperbolic, we have Δ(α, β) ≤ 2 if β is a slope such that M(ℒ_0; β) contains an essential sphere (<cit.>), disc (<cit.>), annulus (<cit.>) or torus (<cit.>).
The manifold M(ℒ_0; μ) is the exterior of the unlink with n components, which contains an essential disc. Then if Δ(μ, σ) > 2 for some slope σ on ℒ_0, M(ℒ_0; σ) does not contain an essential sphere, disc, annulus or torus. By Thurston's geometrization theorem, this implies that M(ℒ_0; σ) is hyperbolic.
Let K be a non-trivial knot and Y_0 ∪ Y_1 ∪…∪ Y_k be the JSJ decomposition of its exterior S^3_K, where Y_0 is the outermost piece. The Dehn surgery S^3_K(p/q) is obtained by filling Y_0 along 𝒦 = ∂ S^3_K ⊂∂ Y_0.
If |q|>2, the filling Y_0(𝒦; p/q) is either a Seifert fibre space or a hyperbolic manifold. In particular,
* If Y_0 is the exterior of a hyperbolic link that is not a knot, then Y_0(𝒦; p/q) is hyperbolic;
* If Y_0 is a composing space, then Y_0(𝒦; p/q) is Seifert fibred with base orbifold a planar surface with at least two boundary components and one cone point of order |q|;
* If Y_0 is an (r,s)-cable space and |qrs-p|>1, then Y_0(𝒦; p/q) is Seifert fibred with base orbifold a disc with two cone points of orders |qrs-p| and s;
* If Y_0 is an (r,s)-cable space and |qrs-p|=1, then Y_0(𝒦; p/q) is a solid torus.
If Y_0 = S^3_K and K is a hyperbolic knot, then Y_0(𝒦; p/q) = S^3_K(p/q) does not contain an essential sphere or an incompressible torus if |q|>2, so it is either hyperbolic or Seifert fibred (<cit.>).
If Y_0 = S^3_K and K is a torus knot, then Y_0(𝒦; p/q) = S^3_K(p/q) is Seifert fibred if |q|>1 (<cit.>).
If Y_0 is the exterior of a hyperbolic link, then by Theorem <ref>, Y_0(𝒦; p/q) is hyperbolic if |q|>2.
If Y_0 is a composing space, a regular fibre on 𝒦 has slope 1/0. If |q| > 1, we have Δ(1/0, p/q) = |q|>1, so the surgery slope does not coincide with the regular fibre slope. The Seifert fibred structure of Y_0 thus extends to the surgery solid torus adding an exceptional fibre of order |q|. Moreover, a composing space has at least three boundary components, so Y_0(𝒦; p/q) has at least two boundary components.
If Y_0 is an (r,s)-cable space, a regular fibre on 𝒦 has slope rs/1. If |q| > 1, we have Δ(rs/1,p/q) = |qrs-p|≠ 0, so the surgery slope does not coincide with the regular fibre slope. The Seifert fibred structure of Y_0 thus extends to the surgery solid torus. If |qrs-p|>1, the surgery adds an exceptional fibre of order |qrs-p|. If |qrs-p|=1, then the surgery solid torus is regularly fibred in Y_0(𝒦; p/q), so Y_0(𝒦; p/q) has base orbifold a disc and one cone point. It is a solid torus.
Suppose |q|>2. The JSJ decomposition of S^3_K(p/q) is either
Y_0(𝒦; p/q) ∪ Y_1 ∪ Y_2 ∪…∪ Y_k
or
Y_1(𝒥; p/(qs^2)) ∪ Y_2 ∪…∪ Y_k,
where 𝒥 = Y_0 ∩ Y_1 and s≥ 2. The second scenario occurs precisely when K is a cable knot C_r,s(J) and |qrs-p|=1.
By the previous proposition, Y_0(𝒦; p/q) is either Seifert fibred or hyperbolic. If it is hyperbolic or closed, then the result is immediate.
If Y_0(𝒦; p/q) is Seifert fibred and has boundary, i.e., in cases (2),(3) and (4) of Proposition <ref>, then Y_0(𝒦; p/q) might admit a Seifert structure that extends across adjacent JSJ pieces. By definition of the JSJ decomposition, this structure would have to differ from the one inherited from the Seifert structure on Y_0.
Only cases (3) and (4) of Proposition <ref>, which correspond to K being the cable of a knot J,
may give rise to manifolds Y_0(𝒦; p/q) that admit multiple Seifert fibred structures.
In case (3), Y_0(𝒦; p/q) admits more than one Seifert fibred structure when it is a twisted I-bundle over the Klein bottle. One is inherited from Y_0 and has base orbifold a disc with two cone points each of order 2, and the other has base orbifold a Möbius band with no cone points. This second structure has regular fibres that are non-meridional and non-integral[The regular fibre is a rational longitude, and one can show that it has slope (p/4)/q with p divisible by 4 in the coordinates given by the companion knot J.] on Y_0(𝒦; p/q) if |q|>1. It does not extend to an adjacent Seifert fibred JSJ piece Y_1, because the slope of a regular fibre of Y_1 on the JSJ torus 𝒥 = Y_0 ∩ Y_1 is either meridional (if Y_1 is a composing space) or integral (if Y_1 is a torus knot exterior or a cable space) in the coordinates given by the companion knot J.
In case (4), K is a cable knot C_r,s(J) such that |qrs-p|=1, and Y_0(𝒦; p/q) is a solid torus. By Proposition <ref>, S^3_K(p/q) ≅ S^3_J(p/(qs^2)). We have |qs^2| > |q| > 2. We iterate the above argument for S^3_J(p/(qs^2)) to reduce to case (4) of Proposition <ref> for S^3_J(p/(qs^2)). We show that this case does not occur if |q|>1.
Suppose Y_1(𝒥; p/(qs^2)) is a solid torus. Then |qrs-p|=|qs^2r's' - p| = 1 (Proposition <ref>). Hence, |q(rs - s^2r's')| = 2 or 0. As |q|, s > 1, the first case does not occur, and the second case happens only if rs - s^2r's' = 0, but this contradicts r and s being coprime.
It follows that the surgery solid torus is contained in exactly one JSJ piece of S^3_K(p/q) when |q|>2.
Suppose |q|>2. The surgered piece of S^3_K(p/q) is the JSJ piece of S^3_K(p/q) that contains the surgery solid torus. It corresponds to either Y_0(𝒦; p/q) or Y_1(𝒥; p/(qs^2)), as outlined in Proposition <ref>.
The topology of the surgered piece is summarized as follows.
Suppose |q|>2. The surgered piece of S^3_K(p/q) is a filling Y(p/(qt^2)) of a JSJ piece Y of S^3_K, for some integer t ≥ 1. In particular,
* Y(p/(qt^2)) has non-empty boundary and is hyperbolic if and only if Y is the exterior of a hyperbolic link that is not a knot;
* Y(p/(qt^2)) is Seifert fibred with base orbifold a planar surface with at least two boundary components and one cone point of order |qt^2| if and only if Y is a composing space;
* Y(p/(qt^2)) is Seifert fibred with base orbifold a disc with two cone points of orders |qt^2rs-p| and s if and only if Y is an (r,s)-cable space.
Furthermore, if |q|>8, then
* Y(p/(qt^2)) is closed and Seifert fibred if and only if Y is the exterior of a torus knot;
* Y(p/(qt^2)) is closed and hyperbolic if and only if Y is the exterior of a hyperbolic knot.
The converses of (1), (2), (3) follow from Proposition <ref>. We deduce the direct implications from Theorem <ref> as follows.
If Y(p/(qt^2)) is not closed and is hyperbolic, then Y is hyperbolic with at least two boundary components, so it must be the exterior of a hyperbolic link that is not a knot.
If Y(p/(qt^2)) is Seifert fibred and has n ≥ 1 boundary components, then Y must be Seifert fibred (Theorem <ref>) and it has n+1 boundary components. Hence, if n ≥ 2, Y is a composing space, while if n=1, Y is a cable space.
When |q|>8, it is a result of Lackenby and Meyerhoff (<cit.>) that if Y is the exterior of a hyperbolic knot, then Y(p/(qt^2)) must also be hyperbolic. Conversely, if Y(p/(qt^2)) is closed and hyperbolic, then Y is the exterior of knot that must be hyperbolic.
If Y is the exterior of a torus knot, then Y(p/(qt^2)) is Seifert fibred (<cit.>). Conversely, if Y(p/(qt^2)) is closed and Seifert fibred, Y is a knot exterior that must be Seifert fibred, by the result of Lackenby and Meyerhoff. The only knots whose exteriors are Seifert fibred are torus knots (<cit.>).
The five types of surgered pieces described in Proposition <ref> correspond to fillings of distinct types of JSJ pieces of a knot exterior.
Suppose |q|>2 and let K and K' be knots. Suppose further that the surgered piece Y(p/(qt^2)) of S^3_K(p/q) is homeomorphic to the surgered piece Y'(p'/(q'(t')^2)) of S^3_K'(p'/q').
* If Y(p/(qt^2)) and Y'(p'/(q'(t')^2)) have non-empty boundary, then Y and Y' are of the same type, as listed by Theorem <ref>.
* Furthermore, if |q|>8 and if Y(p/(qt^2)) and Y'(p'/(q'(t')^2)) are closed, then Y and Y' are both torus knots or both hyperbolic knots.
Comparing with Theorem <ref>, we obtain additional constraints on the structure of the surgered piece.
Suppose |q|>2. Let Y be the JSJ piece of S^3_K such that the surgered piece of S^3_K(p/q) is a filling Y(p/(qt^2)) for some integer t ≥ 1. If Y(p/(qt^2)) is homeomorphic to a JSJ piece of a knot exterior, then
* Y is not the exterior of a knot;
* Y(p/(qt^2)) is homeomorphic to the exterior of a hyperbolic knot or link such that if a specific component of the link is removed, the resulting link is the unlink, if and only if Y is hyperbolic;
* Y(p/(qt^2)) is homeomorphic to an (r, |qt^2|)-cable space if and only if Y is a composing space;
* Y(p/(qt^2)) is homeomorphic to the exterior of a torus knot if and only if Y is an (r,s)-cable space.
For (1), we observe that if Y is a the exterior of a knot, then Y(p/(qt^2)) is a closed manifold. However, all JSJ pieces of a knot exterior have non-empty boundary.
The first implications of (2), (3) and (4) follow from Proposition <ref>. We show their converses.
If Y is hyperbolic, then by (1), it is not the exterior of a knot. Hence, Y(p/(qt^2)) is hyperbolic by Proposition <ref>. By Theorem <ref>, a hyperbolic JSJ piece of a knot exterior is as stated in (2).
If Y is a composing space, then Y(p/(qt^2)) is Seifert fibred with only one exceptional fibre of order |qt^2| (Proposition <ref>). By Theorem <ref>, cable spaces are the only Seifert fibred JSJ pieces of a knot exterior with only one exceptional fibre. An (r,s)-cable space has an exceptional fibre of order s, so Y(p/(qt^2)) is an (r, |qt^2|)-cable space.
If Y is an (r,s)-cable space, Y(p/(qt^2)) is Seifert fibred with two exceptional fibres (Proposition <ref>). By Theorem <ref>, torus knot exteriors are the only Seifert fibred JSJ pieces of a knot exterior with two exceptional fibres.
§ DISTINGUISHED SLOPES
The goal of Sections <ref> and <ref> is to prove the following proposition.
Let K be a satellite knot and 𝒯 be a JSJ torus of S^3_K that decomposes K into P and J. There exists a constant L(𝒯) with the following property. Suppose |q|>2 and assume 𝒯 does not compress in S^3_K(p/q). If there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K', then the homeomorphism does not map the surgered piece of S^3_K'(p/q) to the outermost piece of S^3_J⊂ S^3_K(p/q), provided |q| > L(𝒯).
§.§ Filled patterns and companion knots
Throughout Section <ref>, we will consider the following scenario.
The satellite knot K is fixed. Suppose there is a knot K' such that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q).
Let X, X' be the surgered pieces of S^3_K(p/q), S^3_K'(p/q) respectively. Suppose that the homeomorphism does not send X' to X. Then X' is carried to a JSJ piece of S^3_K(p/q) that is not X. That JSJ piece in S^3_K(p/q) is the outermost piece of S^3_J ⊂ S^3_K for some knot J. Let 𝒯 = S^3_J. This is a JSJ torus of S^3_K. By Lemma <ref>, 𝒯 decomposes K into a pattern P and the knot J.
The JSJ torus 𝒯 is sent by the homeomorphism to a JSJ torus 𝒯' of S^3_K'(p/q), which is also a JSJ torus of S^3_K' by Proposition <ref>. By Lemma <ref>, 𝒯' decomposes K' into a pattern P' and a knot J'.
Let 𝒫 and 𝒫' respectively denote the boundary components ν P and ν P' of V_P and V_P', the pattern spaces associated to P and P'. The homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) then restricts to a homeomorphism between V_P(𝒫;p/q) and S^3_J', and between V_P'(𝒫';p/q) and S^3_J (Figure <ref>).
We will now identify distinguished slopes on the JSJ tori 𝒯⊂ S^3_K(p/q) and 𝒯' ⊂ S^3_K'(p/q). Information about the gluing of JSJ pieces along their boundaries will be obtained by analyzing the distances between these slopes. In Section <ref>, we will rely on the fact that distances between slopes are preserved by homeomorphisms to establish constraints on the coefficients p and q.
§.§ General pattern case
In the scenario described in Section <ref> and Figure <ref>, we have the following lemma.
The greatest common divisor g_p,w of p and w is 1.
On one hand, we have H_1(V_P(𝒫;p/q); ℤ) ≅ℤ⊕ (ℤ/g_p,wℤ) (Lemma <ref>(i)). On the other hand, H_1(S^3_J'; ℤ) ≅ℤ. Since S^3_J'≅ V_P(𝒫;p/q), we conclude that g_p,w = 1.
Our first distinguished slope on 𝒯⊂ S^3_K(p/q) is the longitude of V_P(𝒫;p/q) seen as a knot exterior. Combining Lemma <ref>(ii) with Lemma <ref>, we have that this slope is
p/(qw^2) if w≠0,
1/0 if w=0.
Our second distinguished slope on 𝒯⊂ S^3_K(p/q) is the meridian of V_P(𝒫; p/q) seen as a knot exterior. Let x/y be this slope in the coordinates of H_1(𝒯;) given by J.
On 𝒯'⊂ S^3_K'(p/q), we have two analogous distinguished slopes: the meridian and longitude of V_P'(𝒫';p/q) seen as a knot exterior.
* The meridian x/y of V_P(𝒫;p/q) is such that |x| = |q(w')^2|.
* The meridian x'/y' of V_P'(𝒫';p/q) is such that |x'| = |qw^2|.
The homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) sends the meridian x/y of V_P(𝒫;p/q) to the meridian 1/0 of S^3_J', and the longitude 0/1 of S^3_J to the longitude p/(q(w')^2) of V_P'(𝒫';p/q). Since the homeomorphism preserves distances between slopes, we have
|x| = Δ(x/y, 0/1) = Δ(1/0, p/q(w')^2) = |q(w')^2|.
We obtain (ii) symmetrically.
§.§ Iterated cable case
In the case where P is an iterated cable, we also distinguish the slopes of regular Seifert fibres in the scenario described in Section <ref> and Figure <ref>.
If P is an iterated cable C_r_n, s_n… C_r_2, s_2C_r_1, s_1, n≠ 1, then
* The JSJ piece of V_P(𝒫; p/q) with boundary component 𝒯 is Seifert fibred and its regular fibre has slope r_1/s_1 on 𝒯;
* The outermost JSJ piece of S^3_J' is Seifert fibred and its regular fibre has integral slope on 𝒯'.
Let V_i be (r_i, s_i)-cable spaces for i = 1, …, n. The pattern space V_P has JSJ decomposition V_1 ∪ V_2 ∪…∪ V_n, where 𝒯⊂ V_1. A regular fibre of V_1 has slope r_1/s_1 on 𝒯. If V_P(𝒫; p/q) contains an incompressible torus, then it is clear that the regular fibre slope on 𝒯 remains unchanged. Furthermore, V_1 is homeomorphic to the outermost piece of S^3_J', which must also be a cable space. Hence, a regular fibre of the outermost piece of S^3_J' has integral slope on 𝒯' = S^3_J'.
If V_P(𝒫; p/q) contains no incompressible torus, then it is a filling of V_1 by Proposition <ref>. By hypothesis (Figure <ref>), this filling is homeomorphic to a JSJ piece of S^3_K'. By Proposition <ref>, this piece is the exterior of a torus knot. It follows that the Seifert fibred structure on V_P(𝒫; p/q) is unique, and it is the one inherited from V_1. The regular fibre of this structure has slope r_1/s_1 on 𝒯. Moreover, the torus 𝒯' ⊂ S^3_K'(p/q) is the boundary of a torus knot exterior, so a regular fibre has integral slope on 𝒯'.
Thus, the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) maps the slope r_1/s_1 on 𝒯 to a slope k/1 on 𝒯', where k ∈.
§ SURGERED PIECES ARE SENT TO SURGERED PIECES
This section is dedicated to demonstrating Proposition <ref>, from which Proposition <ref> follows easily.
\beginprop:sxpiece
Let K be a knot. Suppose |q|>2. If there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K', then the homeomorphism sends the surgered piece of S^3_K(p/q) to the surgered piece of S^3_K'(p/q), provided |q| is sufficiently large.
\endprop:sxpiece
If K is not a satellite knot, then the result follows from Proposition <ref>, so suppose K is a satellite knot. Set
L(K) = max_𝒯{L(𝒯), 𝒯 is a JSJ torus of S^3_K},
where the L(𝒯)'s are given by Proposition <ref>. Let |q|>L(K).
Suppose there is an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) that does not carry the surgered piece X of S^3_K(p/q) to the surgered piece X' of S^3_K'(p/q). Then, as described in Section <ref>, the homeomorphism maps X' to a JSJ piece of S^3_K(p/q) that is the outermost piece of the exterior of some knot J such that S^3_J ⊂ S^3_K. Let 𝒯 = S^3_J.
Proposition <ref> implies that since |q| > L(K) ≥ L(𝒯), the surgered piece X' cannot be mapped to the outermost piece of S^3_J, a contradiction. Therefore, if |q| > L(K), then any orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) must send the surgered pieces one to another.
The proof of Proposition <ref> is divided into three cases: composing patterns, once or twice-iterated cables, and other patterns.
For the last two cases, we will need a simplified version of a theorem from Cooper and Lackenby, as well as some related lemmas.
Let M be a compact orientable 3-manifold with boundary a union of tori. Let ϵ > 0. Then there are finitely many compact orientable hyperbolic 3-manifolds X and slopes σ on some component of X such that M ≅ X(σ) and where the length of each slope σ is at least 2π+ϵ, when measured using some horoball neighbourhood of the cusp of X that is being filled.
Let Y be a hyperbolic JSJ piece of a knot exterior and let ℒ_0 be the cusp of Y along which the trivial filling yields the unlink. Let l(p/q) be the length of the slope p/q on ℒ_0, measured in a maximal horoball neighborhood N of ℒ_0. Then l(p/q) ≥ |q|/√(3).
By a geometric argument as in <cit.> or <cit.>, the lengths of two slopes σ_1, σ_2 on ℒ_0 satisfy l(σ_1)l(σ_2) ≥ Area(∂ N) ·Δ(σ_1, σ_2).
By Theorem 1.2 of <cit.>, Area(∂ N) ≥ 2√(3).
By taking σ_1 = p/q and σ_2 = 1/0, and by the 6-theorem (<cit.>, <cit.>), we get
l(p/q)≥ 2√(3)· |q|/l(1/0) ≥ |q|/√(3).
The next lemma follows the approach of <cit.>.
Let K be a knot and Y a JSJ piece of S^3_K. Then there exists a constant L(Y) with the following property. Let Y' be a hyperbolic JSJ piece of the exterior of some knot K', with boundary component ℒ_0 such that Y'(ℒ_0; 1/0) is S^3 or an unlink. If Y'(ℒ_0; p/q) ≅ Y, then |q|≤ L(Y).
Suppose that Y is hyperbolic. Let ϵ = 1/15. By Theorem <ref>, there are finitely many manifolds {X_j} that are JSJ pieces of a knot exterior, and finitely many slopes {p_j_i/q_j_i} of length at least 2π + 1/15 such that X_j(p_j_i/q_j_i) is homeomorphic to Y. Set L(Y) = max{|q_j_i|, 11}. If |q|>L(Y), then by the previous lemma, l(p/q) > 11/√(3)≥ 2π + 1/15, but p/q ∉{p_j_i/q_j_i}. It follows that for any hyperbolic JSJ piece Y' as in the statement, the filling Y'(ℒ_0; p/q) cannot be homeomorphic to Y.
If Y is Seifert fibred and Y ≅ Y'(ℒ_0; p/q) for a hyperbolic JSJ piece Y' as in the statement, then |q|≤ 2 by Proposition <ref>. Therefore, we may take L(Y)=2 in that case.
§.§ Composing pattern case
We begin the proof of Proposition <ref> by considering the case of composing patterns.
Let P be a composing pattern and 𝒫 = ∂ν P ⊂ ∂V_P. If |q|>1, then the filling V_P(𝒫; p/q) is not homeomorphic to a knot exterior.
Let Y ⊂ V_P be the composing space containing 𝒫. Let n+1 be the number of boundary components of Y.
If n > 2, then Y(𝒫;p/q) is Seifert fibred and has more than two boundary components (proof of Proposition <ref>(2)), so it is not a JSJ piece of a knot exterior by Proposition <ref>.
Suppose now that n=2. Then V_P = Y ∪ S^3_K_1 for some knot K_1.
The filling Y(𝒫;p/q) is Seifert fibred (proof of Proposition <ref>(2)), and on the JSJ torus 𝒯_1 = ∂S^3_K_1 of V_P(𝒫;p/q), a regular fibre of Y(𝒫;p/q) has meridional slope.
By Lemma <ref>, if J' is a knot such that S^3_J' has the same JSJ pieces in its decomposition as V_P(𝒫;p/q), then J' must be a cable of K_1. By the knot complement theorem, if V_P(𝒫;p/q) were homeomorphic to S^3_J', then the meridian on 𝒯_1 would be mapped to the meridian on 𝒯_1' = ∂S^3_K_1 ⊂ S^3_J'. Further, regular fibres of Y(𝒫;p/q) would be mapped to regular fibres of the outermost cable space of S^3_J'. However, a regular fibre of the outermost cable space of S^3_J' does not have meridional slope on 𝒯_1', and a cable space possesses a unique Seifert fibred structure. Hence, V_P(𝒫;p/q) cannot be homeomorphic to the exterior of a knot.
Let K = K_1 # K_2 #…# K_n be a composite knot, where the K_i's are prime for each i=1, …, n. Let 𝒯 be a JSJ torus of S^3_K that decomposes K into P, a composing pattern, and K_i, i ∈{1, …, n}. Suppose there is an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K' where |q|>2. Then the homeomorphism does not map the surgered piece of S^3_K'(p/q) to the outermost piece of S^3_K_i.
Let 𝒫 = ∂ν P ⊂ ∂V_P. If the homeomorphism maps the surgered piece of S^3_K'(p/q) to the outermost piece of S^3_K_i, then V_P(𝒫; p/q) is homeomorphic to a knot exterior by the discussion of Section <ref> and Figure <ref>. By Lemma <ref>, this cannot happen if |q|>2.
Let 𝒯 be a JSJ torus of S^3_K that decomposes K into P and J. Suppose there exists a knot K' such that there is an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). By Proposition <ref>, if P is a composing pattern, we may take L(𝒯) = 2.
§.§ Cable and twice-iterated cable case
We proceed with the case when P is a once or twice-iterated cable.
We now suppose that P is a cable C_r_1,s_1 or a twice-iterated cable C_r_2,s_2(C_r_1,s_1). Recall that the JSJ torus 𝒯 decomposes K into P and a knot J. Let Y be the outermost piece of S^3_J. Let Y' be the JSJ piece of S^3_K' such that the surgered piece of S^3_K'(p/q) is X' = Y'(p/(q(t')^2)), t' ≥ 1 (Proposition <ref>).
Suppose the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) carries the surgered piece of S^3_K'(p/q) to the outermost piece Y of S^3_J, as described in Section <ref>. We look at each possibility for Y given by Proposition <ref>.
If Y is hyperbolic, then Y' is also hyperbolic if |q|>2, according to Proposition <ref>(2). By Lemma <ref>, there is a constant L(J) such that |q| ≤ |q(t')^2| ≤ L(J).
If Y is an (r,s)-cable space, then Y' is a composing space by Proposition <ref>(3) and K' is a composite knot. Using the notation in Figure <ref>, 𝒯' separates K' into a composing pattern P' and some companion knot J'. By the discussion of Section <ref>, V_P'(𝒫'; p/q) is homeomorphic to S^3_J, but this contradicts Lemma <ref> applied to P' when |q|>2.
If Y is the exterior of a torus knot T_a,b, |a|>|b|>1, then Y' is an (r',s')-cable space by Proposition <ref>(4). Since the orders of exceptional fibres in S^3_T_a,b and X' coincide, we have without loss of generality
|a| = |q(t')^2 r's' - p|.    (1)
Recall that the JSJ torus 𝒯⊂ Y is mapped by the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) to the JSJ torus 𝒯' ⊂ X', in the notation of Figure <ref>. As distances are preserved between slopes that are carried one to another, we have the following equality by comparing Table <ref> and the last row of Table <ref> from Section <ref>:
|r_1| = Δ( r_1/s_1, 0/1) = Δ( k/1, p/q(w')^2)=|q(w')^2k-p|.
Combining this with equation (1) yields
|q(t')^2| · |(w'/t')^2k-r's'| = | r_1 ± a|.
Since r',s'≠ 0 are coprime and (w'/t')^2 is divisible by s', we have (w'/t')^2k-r's' ≠ 0. This implies that |q| ≤ |r_1|+|a|.
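To make the bookkeeping concrete, here is an illustrative numerical check (ours, with hypothetical values of q, p, k, w', t', r', s' in which t' divides w') of the slope distance Δ(a/b, c/d) = |ad - bc| and of the identity displayed above.

def delta(a, b, c, d):
    # distance |ad - bc| between the slopes a/b and c/d on a torus
    return abs(a * d - b * c)

print(delta(5, 3, 0, 1))                       # Delta(r_1/s_1, 0/1) = |r_1| = 5 when r_1 = 5, s_1 = 3

q, p, k, wp, tp, rp, sp = 7, 3, 2, 6, 3, 5, 2  # hypothetical values only
r1 = abs(q * wp**2 * k - p)                    # |r_1| = |q(w')^2 k - p|
a = abs(q * tp**2 * rp * sp - p)               # |a| = |q(t')^2 r's' - p|, equation (1)
lhs = abs(q * tp**2) * abs((wp // tp)**2 * k - rp * sp)
print(lhs, abs(r1 - a))                        # 126 126: both sides of the displayed identity agree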
Summing up, suppose 𝒯 decomposes K into P and J, where P = C_r_1,s_1 or C_r_2,s_2(C_r_1,s_1). Denoting the outermost JSJ piece of S^3_J by Y, we let
L(𝒯) =
L(J) if Y is hyperbolic,
2 if Y is an (r,s)-cable space,
|r_1|+|a| if Y is the exterior of a torus knot T_a,b.
Then if |q| > L(𝒯), and if there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K', the surgered piece of S^3_K'(p/q) is not carried to the outermost piece Y of S^3_J.
§.§ Other pattern case
To conclude the proof of Proposition <ref>, it remains to study patterns that are neither composing patterns nor once or twice-iterated cables. We will be using the Cyclic surgery theorem by Culler, Gordon, Luecke and Shalen.
Let M be a compact, connected, irreducible, orientable 3-manifold such
that ∂M is a torus. Suppose that M is not a Seifert fibre space. If
π_1 (M(σ_1)) and π_1(M(σ_2)) are cyclic, then Δ(σ_1, σ_2) ≤ 1.
We want to apply this theorem to the case where M is a filling of a pattern space. To do so, we must show that this filling is not a Seifert fibre space. We will need the following homological lemma about fillings of composing spaces.
Let Y be a composing space with three boundary components 𝒯, 𝒯_1, 𝒯_2. Denote by h the slope of a regular fibre on each boundary component of Y. Suppose σ_1 and σ_2 are slopes on 𝒯_1 and 𝒯_2 respectively, that are homologous in Y(𝒯; σ) for some surgery slope σ on 𝒯. Then Δ(h, σ_1) = Δ(h, σ_2) = kΔ(h, σ) for some k ∈ ℤ.
There are slopes λ_1 and λ_2 on 𝒯_1 and 𝒯_2 respectively such that {h, λ_i} generates H_1(𝒯_i; ℤ), i=1,2, and {h, λ_2-λ_1} generates H_1(𝒯; ℤ) (Figure <ref>). Further, the images induced by inclusion of h, λ_1, λ_2 into Y generate H_1(Y; ℤ). Write
σ = mh + n(λ_2-λ_1),
σ_1 = a_1h + b_1λ_1,
σ_2 = a_2h + b_2λ_2.
The σ-surgery along 𝒯 adds the relation mh + n(λ_2-λ_1) = 0 in H_1(Y(𝒯; σ); ℤ). Hence, if σ_1 and σ_2 are homologous in Y(𝒯; σ), then
(a_1h + b_1λ_1) + k(mh + n(λ_2-λ_1)) = a_2h + b_2λ_2,
for some k ∈ ℤ. This implies that b_1=b_2=kn, giving us
Δ(h, σ_1) = Δ(h, σ_2) = kn = kΔ(h, σ).
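The coefficient matching in this proof can also be verified symbolically; the following computation (ours, purely illustrative) treats h, λ_1, λ_2 as independent classes.

import sympy as sp

a1, b1, a2, b2, k, m, n = sp.symbols('a1 b1 a2 b2 k m n')
h, l1, l2 = sp.symbols('h lambda1 lambda2')

# (a1*h + b1*l1) + k*(m*h + n*(l2 - l1)) = a2*h + b2*l2 with h, l1, l2 independent
expr = sp.expand((a1*h + b1*l1) + k*(m*h + n*(l2 - l1)) - (a2*h + b2*l2))
eqs = [expr.coeff(v) for v in (h, l1, l2)]
print(sp.solve(eqs, (a2, b1, b2), dict=True))   # [{a2: a1 + k*m, b1: k*n, b2: k*n}]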
Suppose P is a pattern that is neither a composing pattern, nor a cable C_r_1,s_1 or a twice-iterated cable C_r_2,s_2(C_r_1,s_1). Let 𝒯 be the boundary component of V_P that is not ∂ν P. Then the filling V_P(𝒯;m/n) is not a Seifert fibre space for |m| sufficiently large.
The pattern space V_P admits a JSJ decomposition V_1 ∪ V_2 ∪…∪ V_k where V_1 is the JSJ piece that contains 𝒯.
If V_1 is hyperbolic, there are only finitely many slopes m/n such that V_1(𝒯; m/n) is not hyperbolic. So V_1(𝒯;m/n) is hyperbolic for |m| sufficiently large, and V_P(𝒯; m/n) is not a Seifert fibre space.
If V_1 is a Seifert fibre space, then by Theorem <ref>, V_1 is either a composing space or an (r_1,s_1)-cable space.
Suppose V_1 is a composing space. For V_P(𝒯; m/n) to be Seifert fibred, the JSJ pieces adjacent to V_1 in V_P must be Seifert fibred and V_1(𝒯;m/n) must admit a Seifert fibred structure that differs from the one inherited by the fibration on V_1. The only such possibility is if V_1(𝒯;m/n) is a trivial I-bundle over the torus. Recall that the regular fibres of V_1 have meridional slopes on each boundary component of V_1. Let 𝒯_1 and 𝒯_2 be the boundary components of V_1(𝒯;m/n). As regular fibres are homologous in V_1(𝒯;m/n), Lemma <ref> says the distance on 𝒯_1 between the meridian of 𝒯_1 and a regular fibre of V_1(𝒯;m/n) is equal to the distance on 𝒯_2 between the meridian of 𝒯_2 and a regular fibre of V_1(𝒯;m/n).
Suppose the pieces adjacent to V_1 in V_P are Seifert fibred. Note that since P is not a composing pattern, V_1(𝒯;m/n) shares a boundary component, say 𝒯_1, with a cable space V_2 whose regular fibre has non-integral slope on 𝒯_1. The other boundary component 𝒯_2 of V_1(𝒯;m/n) is shared with a torus knot exterior or a cable space V_3, whose regular fibre has integral slope on 𝒯_2. By Lemma <ref>, the Seifert fibred structure of V_1(𝒯;m/n) cannot extend across both V_2 and V_3, so V_P(𝒯;m/n) is not Seifert fibred.
Suppose now that V_1 is an (r_1, s_1)-cable space. Let V_2 be the JSJ piece of V_P that shares a boundary component 𝒯_1 with V_1.
Suppose that 𝒯_1 remains incompressible in V_P(𝒯; m/n). The pattern space V_P is either the union of V_1 with a hyperbolic V_2, or it decomposes into at least three JSJ pieces. In the first case, V_P(𝒯; m/n) is clearly not Seifert fibred. In the second case, a Seifert fibred structure on V_1(𝒯; m/n) might extend across a Seifert fibred structure on V_2. However, a JSJ piece of a knot exterior admits a unique Seifert fibred structure, so the structure on V_2 does not extend across the other JSJ pieces of V_P(𝒯; m/n).
Suppose now that the torus 𝒯_1 is compressed in V_1(𝒯; m/n). On 𝒯_1 and 𝒯, the regular fibres of V_1 have respective slopes r_1s_1/1 and r_1/s_1. By a similar reasoning as that of Proposition <ref>, cases (3) and (4), we have |ms_1-r_1n|=1. For homological reasons (analogous to Lemma <ref>), the filling V_P(𝒯;m/n) is homeomorphic to (V_P ∖ V_1)(𝒯_1; ms_1^2/n).
If V_2 is hyperbolic or a composing space, we iterate the argument previously given for V_1.
If V_2 is a cable space, let 𝒯_2 be its boundary component that is not 𝒯_1. Let V_3 be the JSJ piece of V_P such that V_3 ∩ V_2 = 𝒯_2. The Seifert fibred structure on V_2(𝒯_1; ms_1^2/n) might extend across V_3 only if V_2(𝒯_1; ms_1^2/n) is a solid torus or a twisted I-bundle over the Klein bottle. This occurs when |ms_1^2s_2-r_2n| = 1 or 2. Combining this with the fact that |ms_1-r_1n|=1, we have that (m,n) must be a solution to the system
s_1 m - r_1 n = ± 1   and   s_1^2 s_2 m - r_2 n = ± 1 or ± 2.
As r_2 and s_2 are coprime,
the coefficient matrix of this system has nonzero determinant.
Therefore, there are only finitely many slopes m/n such that the Seifert fibred structure on V_2(𝒯_1; ms_1^2/n) extends across V_3.
Consequently, V_P(𝒯;m/n) is not a Seifert fibre space for |m| sufficiently large.
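For concreteness, an illustrative check (ours, with hypothetical cable parameters satisfying gcd(r_2, s_2) = 1) that the system above has nonzero determinant and hence only finitely many candidate pairs (m, n):

import sympy as sp

r1, s1, r2, s2 = 3, 2, 7, 3                     # hypothetical values, gcd(r2, s2) = 1
M = sp.Matrix([[s1, -r1], [s1**2 * s2, -r2]])
print(M.det())                                  # -s1*r2 + r1*s1**2*s2 = 22, nonzero
for rhs in ([1, 1], [1, 2], [-1, 1], [-1, 2]):
    print(rhs, list(M.solve(sp.Matrix(rhs))))   # at most one rational pair (m, n) per right-hand side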
Recall that 𝒯 is a JSJ torus of S^3_K that decomposes K into P and J. Suppose there exists a knot K' such that there is an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where |q|>2.
We now suppose that P is neither a composing pattern nor a once or twice-iterated cable. Suppose the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) carries the surgered piece of S^3_K'(p/q) to the outermost piece Y of S^3_J, as described in the scenario of Section <ref>. Using the notation of Figure <ref>, by Lemma <ref>, there is a slope x/y = q(w')^2/y, on 𝒯 that is the meridian of V_P(𝒫; p/q) seen as a knot exterior.
By Lemma <ref>, there exists a bound L(P) such that if |m|>L(P), then V_P(𝒯; m/n) is not a Seifert fibre space. If w'≠ 0, suppose that |q|>L(P). The inequality |x|=|q(w')^2| > |q| > L(P) implies that V_P(𝒯; x/y) is not a Seifert fibre space. On one hand, the filling of V_P(𝒫; p/q) along the meridian x/y is the trivial filling V_P(𝒫, 𝒯; p/q, x/y) ≅ S^3. On the other hand, V_P(𝒫; 1/0) is the trivial filling of the pattern P, so it is homeomorphic to a solid torus. Consequently, V_P(𝒫, 𝒯; 1/0, x/y) is homeomorphic to the lens space L_y,x. We obtain that both the p/q and 1/0-fillings of the non-Seifert fibred manifold V_P(𝒯; x/y) yield manifolds with cyclic fundamental groups. By the Cyclic surgery theorem (Theorem <ref>), we have |q| = Δ(p/q, 1/0) ≤ 1, which contradicts |q|>2.
Suppose now that w'=0. If the surgered piece X' of S^3_K'(p/q) were Seifert fibred, then it would be a filling of either a cable space or a composing space (Proposition <ref>). In both cases, w' would be non-zero, a contradiction. Therefore, X' is hyperbolic. As |q|>2, X' is the p/(q(t')^2)-filling of a hyperbolic JSJ piece of S^3_K', t' ≥ 1 (Proposition <ref>(1)). By Lemma <ref>, there exists a constant L(J) such that X' is homeomorphic to the outermost piece Y of S^3_J only if |q|≤ L(J).
Setting L(𝒯)= max{ L(P), L(J) } gives the desired bound.
This completes the proof of Proposition <ref>.
§ PROOF OF THEOREM 1
Proposition <ref> tells us that if |q| is sufficiently large, an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) restricts to a homeomorphism between the surgered pieces of S^3_K(p/q) and S^3_K'(p/q). By the knot complement theorem, this homeomorphism preserves the slopes on the boundary of the surgered pieces. To complete the proof of Theorem <ref>, we must show that it further restricts to the JSJ pieces of S^3_K and S^3_K' that were filled to produce the surgered pieces.
First, we need the following intermediate results.
Let K and K' be knots such that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p'/q'). If the core of the surgery solid torus in S^3_K(p/q) is mapped to the core of the surgery solid torus in S^3_K'(p'/q'), then K = K'.
Let v and v' be the cores of the surgery solid tori of S^3_K(p/q) and S^3_K'(p'/q') respectively. Since v is sent to v' by the homeomorphism, the neighbourhoods ν(v) and ν(v') are also sent one to another by the homeomorphism. Therefore, S^3_K(p/q) ∖ν(v) ≅ S^3 ∖ν K is homeomorphic to S^3_K'(p'/q') ∖ν(v') ≅ S^3 ∖ν K', which implies that K=K' by the knot complement theorem.
If q, p, r, s, r', s' are integers such that |q|>2 and |qrs-p|=|qr's'-p|=1, then rs = r's'.
We have |q(rs-r's')| = 0 or 2. But |q|>2, so |q(rs-r's')| = 0 and rs=r's'.
Let K be a non-trivial knot, and suppose there is a knot K' such that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). Let X, X' be the surgered pieces of S^3_K(p/q), S^3_K'(p/q) respectively. Let Y, Y' be the JSJ pieces of S^3_K, S^3_K' such that X=Y(p/(qt^2)) and X'=Y'(p/(q(t')^2)), for some t,t' ≥ 1 (Proposition <ref>).
We now assume, by Proposition <ref>, that |q| is large enough such that X and X' are sent one to another by the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). We will study each possibility listed in Theorem <ref> for Y and show in each case that K=K' for |q| sufficiently large.
§.§ Exterior of a torus knot
Suppose Y is the exterior of a torus knot. In this case, K is a torus knot or a cable of a torus knot (Proposition <ref>). By McCoy, if K is a torus knot, we have that K = K' for |q| sufficiently large (<cit.>). So suppose K is a cable knot C_r,s(T_a,b) such that |qrs-p|=1.
By Corollary <ref>, Y' is also the exterior of a torus knot if |q|>8. Therefore, K' is either a torus knot or a cable of a torus knot by Proposition <ref>.
We have the following corollary of a proposition from McCoy.
If an (r,s)-cable of a torus knot shares a p/q-surgery with a torus knot where |q|>1, then |q|=s.
It follows that the cable K=C_r,s(T_a,b) cannot share a p/q surgery with a torus knot when |q| > s. Hence, if |q| > max{s, 8}, K' is a cable of a torus knot C_r', s'(T_c,d) where |qr's'-p|=1. We thus have a homeomorphism
S^3_T_a,b(p/(qs^2)) ≅ S^3_T_c,d(p/(q(s')^2))
by Proposition <ref>. This gives the homeomorphism of base orbifolds
S^2(|a|, |b|, |qs^2ab-p|) ≅ S^2(|c|, |d|, |q(s')^2cd-p|).
Comparing orders of cone points, assuming without loss of generality that |b| = |d|, suppose that |a| = |q(s')^2cd-p|. By combining this with |qrs-p|=1, we find
|q| · |(s')^2 cd - rs| = | a ± 1|.
The right-hand side is a non-zero integer since |a|>1, which implies that |q| ≤ |a|+1.
Consequently, if |q| > |a|+1, the homeomorphism S^3_T_a,b(p/(qs^2)) ≅ S^3_T_c,d(p/(q(s')^2)) sends the core of the surgery solid torus in S^3_T_a,b(p/(qs^2)) of order |qs^2ab-p| to the core of the surgery solid torus in S^3_T_c,d(p/(q(s')^2)) of order |q(s')^2cd-p|. By Proposition <ref>, we obtain that T_a,b = T_c,d. Furthermore, the equality of orders yields qs^2ab-p=± (q(s')^2cd-p), but since |q|>1 and p and q are coprime, the only possibility is qs^2ab-p=q(s')^2cd-p, which in turn gives s=s'. By Lemma <ref>, since |q|>|a|+1 > 2, we have C_r,s = C_r', s'. Hence, C_r,s(T_a,b) = C_r', s'(T_c,d), that is, K=K', as desired.
§.§ Composing space
Suppose Y is a composing space. By Corollary <ref>, Y' is also a composing space if |q|>2. By Proposition <ref>(2), X and X' are Seifert fibred, each with one exceptional fibre of order |qt^2| and |q(t')^2| respectively. These exceptional fibres correspond to the cores of the surgery solid tori in X and X'.
Since X ≅ X', the unique exceptional fibre of X is sent to the unique exceptional fibre of X' by the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). This implies that t = t'. If t = 1, then these exceptional fibres are precisely the cores of the surgery solid tori of S^3_K(p/q) and S^3_K'(p/q). Then K' = K by Proposition <ref>. If t > 1, by Proposition <ref>, t is the winding number of cable patterns C_r,t and C_r',t such that K=C_r,t(J) and K'=C_r',t(J'), where J and J' are composite knots. By Proposition <ref>, we have S^3_J(p/(qt^2)) ≅ S^3_J'(p/(qt^2)) and J = J' by Proposition <ref>. Since |qrt-p|=1 and |q r' t-p|=1, Lemma <ref> tells us that r=r', and we conclude that K = K'.
§.§ Exterior of a hyperbolic link
Suppose Y is the exterior of a hyperbolic knot or link. By Corollary <ref>, Y' is also the exterior of a hyperbolic knot or link if |q|>8. We apply the following theorem by Lackenby in the same way as in the proof of <cit.>.
Let M be S^3 or the exterior of the unknot or unlink in S^3, and let K be a hyperbolic knot in M. Let M_K = M ∖ν K. There exists a constant C(K) with the following property. If M_K(σ) ≅ M_K'(σ') for some hyperbolic knot K' in M and some σ' such that Δ(σ',μ') > C(K), where μ' is the slope that bounds a disc in ν K', and if the homeomorphism restricted to the boundary of M is the identity, then (M, K) ≅ (M, K') and σ = σ'.
Let n+1 be the number of boundary components of Y and Y'. If n=0, let M be S^3. If n≥ 1, let M be the exterior of the unlink with n components. By Theorem <ref>, Y and Y' are respectively homeomorphic to exteriors of hyperbolic knots H and H' in M. Let C(H) be the constant given by Theorem <ref> for H. If |q| > C(H), then |q(t')^2|>C(H). By Theorem <ref>, (M, H) ≅ (M, H') and p/(qt^2) = p/(q(t')^2), i.e., Y ≅ Y' and t=t'.
In S^3_K and S^3_K' respectively, the JSJ pieces Y, Y' are the outermost pieces of the exteriors of knots J, J' such that there is a homeomorphism S^3_J(p/(qt^2)) ≅ S^3_J'(p/(qt^2)). Since the JSJ structure away from the surgered pieces is preserved by that homeomorphism, we have a homeomorphism S^3_J ∖int(Y) ≅ S^3_J'∖int(Y') which agrees on ∂Y, ∂Y' with the homeomorphism Y ≅ Y' given by Theorem <ref> (for details, see <cit.>). Hence, J = J'. If t = 1, then K=J and K'=J' and we are done. If t>1, then K and K' are cables of J = J'. By Lemma <ref>, K = K'.
§.§ Cable space
Suppose Y is an (r_1, s_1)-cable space. By Corollary <ref>, Y' is also a cable space if |q|>2. Therefore, K and K' are once or twice-iterated cables of knots J and J' respectively. Let us write
K = C_r_1,s_1(J) if t=1, and K = C_r_2,s_2(C_r_1,s_1(J)) if t>1;
K' = C_r_1',s_1'(J') if t'=1, and K' = C_r_2',s_2'(C_r_1',s_1'(J')) if t'>1.
If |q|>2, then K = K'. That is:
(i) J = J';
(ii) C_r_1,s_1 = C_r_1',s_1';
(iii) t=t', and C_r_2,s_2 = C_r_2',s_2' if t>1.
Since S^3_K(p/q) ∖ X ≅ S^3_J and S^3_K'(p/q) ∖ X' ≅ S^3_J', the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) restricts to a homeomorphism between S^3_J and S^3_J'. This implies (i) by the knot complement theorem.
By (i), the meridians and longitudes forming the bases of H_1(∂S^3_J; ℤ) and H_1(∂S^3_J'; ℤ) are respectively sent one to another.
The regular fibres of X' and X have respective slopes r_1'/s_1' and r_1/s_1 on ∂X' and ∂X, and both Seifert fibred structures have base orbifold a disc with two cone points. If a given oriented manifold admits a Seifert fibration with base orbifold a disc and two cone points, then there is no other Seifert fibration on this manifold with the same orbifold structure. It follows that the slopes r_1'/s_1' and r_1/s_1 are equal. Hence, C_r_1,s_1 = C_r_1',s_1', showing (ii).
The longitudes of X and X' coincide and have respective slopes p/(q(t's_1')^2) and p/(q(ts_1)^2), so q(t's_1')^2 = q(ts_1)^2. Since s_1 = s_1' by (ii), we get the equality t = t'. If t>1, then t = s_2 = s_2', and C_r_2,s_2 = C_r_2',s_2' by Lemma <ref>, which proves (iii).
This concludes the proof of Theorem <ref>.
§ CHARACTERIZING SLOPES FOR CABLES WITH ONLY SEIFERT FIBRED PIECES
For some specific families of satellite knots, an explicit bound for |q| that realizes Theorem <ref> can be expressed. The following result is obtained from the treatment of Seifert fibred JSJ pieces throughout Sections <ref> and <ref>.
Let K be a cable knot with an exterior consisting solely of Seifert fibred JSJ pieces. A slope p/q is characterizing for K if:
(i) |q|>2 and K is not an n-times iterated cable of a torus knot, n ≥ 1;
(ii) |q|>|r_1| + |a| and K is an n-times iterated cable of C_r_1, s_1(T_a, b), |a|>|b|>1, n ≥ 1;
(iii) |q|> max{8, s_1, |r_1| + |a|} and K is a cable C_r_1, s_1(T_a, b), |a|>|b|>1.
We first show that Proposition <ref> is realized. Suppose K' is a knot such that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q). If S^3_K(p/q) is Seifert fibred, Proposition <ref> is immediately realized.
If S^3_K(p/q) contains a JSJ torus, then the surgered piece X' of S^3_K'(p/q) is not a filling of a knot exterior. It follows that the JSJ decomposition of S^3_K' does not contain hyperbolic pieces if |q|>2. The surgered piece X' is thus Seifert fibred and it is a filling of a JSJ piece Y' of S^3_K' that is either a composing space or a cable space (Proposition <ref>).
If Y' is a composing space, then by Lemma <ref> and Section <ref>, X' must be sent by the homeomorphism to the surgered piece of S^3_K(p/q) if |q|>2.
If Y' is a cable space and if X' is not sent to the surgered piece of S^3_K(p/q), then X' is sent to the exterior of a torus knot T_a,b in S^3_K (Proposition <ref>). Using the notation introduced in Section <ref>, let 𝒯 = ∂S^3_T_a,b and let P be the pattern such that 𝒯 decomposes K into P and T_a,b.
Suppose K is not an n-times iterated cable of C_r_1, s_1(T_a, b), n ≥ 1. Then the pattern space V_P contains a composing space that shares all its boundary components with other JSJ pieces of S^3_K. By the proof of Proposition <ref>, V_P(𝒯; q(w')^2/y) is not Seifert fibred. Applying the Cyclic surgery theorem (Theorem <ref>) as described in Section <ref>, we obtain that |q|=1, contradicting (i).
If K is an n-times iterated cable of C_r_1, s_1(T_a, b), n ≥ 1, we apply the same method as in Section <ref> to compare the distances between the regular fibre slope and the longitudinal slope on 𝒯 and 𝒯'. This yields the inequality |q| ≤ |r_1|+|a|, which implies that if (ii) holds, X' must be carried to the surgered piece of S^3_K(p/q).
The theorem now follows, as (i), (ii) and (iii) are greater than or equal to the bounds from Section <ref>.
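As a purely illustrative summary (ours, not part of the paper), the slope bounds of cases (i)-(iii) can be packaged as a small helper; the inputs are hypothetical cable and torus-knot parameters.

def characterizing_slope_bound(case, r1=0, s1=0, a=0):
    if case == "i":                        # K is not an n-times iterated cable of a torus knot
        return 2
    if case == "ii":                       # K is an n-times iterated cable of C_{r1,s1}(T_{a,b})
        return abs(r1) + abs(a)
    return max(8, s1, abs(r1) + abs(a))    # case "iii": K is the cable C_{r1,s1}(T_{a,b}) itself

print(characterizing_slope_bound("i"))                       # |q| > 2 suffices
print(characterizing_slope_bound("ii", r1=7, s1=2, a=5))     # |q| > 12
print(characterizing_slope_bound("iii", r1=7, s1=13, a=5))   # |q| > 13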
Note that Theorem <ref>(i) is equivalent to Theorem <ref> when applied to cable knots. If K is not a cable knot in Theorem <ref>, then it is a composite knot and the result follows from Theorem <ref>, which we prove in the next section.
If K is a knot with an exterior consisting solely of Seifert fibred JSJ pieces, with one of them being a composing space, then any slope that is neither integral nor half-integral is a characterizing slope for K.
We relied on the constructive nature of Seifert fibred spaces to compute the above bounds. If S^3_K contains hyperbolic JSJ pieces, the task becomes more difficult. Indeed, for generic cases, we need to determine values that realize Theorems <ref> and <ref>. Recently, Wakelin established a lower bound on |q| for a slope p/q to be characterizing for certain hyperbolic patterns (<cit.>). In forthcoming work with Wakelin, we combine her findings and our study of Seifert fibred JSJ pieces to obtain further results.
§ CHARACTERIZING SLOPES FOR COMPOSITE KNOTS
We now turn to the proof of Theorem <ref>.
If K is a composite knot, then every non-integral slope is characterizing for K.
§.§ The surgered submanifold
Thus far, we have made the assumption |q|>2, allowing us to define the surgered piece of a surgery along a knot. When |q|=2, the surgered piece might not be well-defined if the resulting manifold is obtained from filling a hyperbolic JSJ piece. Indeed, the surgery operation can create essential tori, or it might yield a Seifert fibre space which admits a Seifert fibred structure that extends to other JSJ pieces.
Let Y_0 ∪ Y_1 ∪ Y_2 ∪…∪ Y_n be the JSJ decomposition of the exterior of a knot K.
If |q|>1, then up to re-indexing the Y_i, the JSJ decomposition of S^3_K(p/q) is of the form
(X_0 ∪ X_1 ∪…∪ X_m) ∪ (Y_i ∪ Y_i+1∪…∪ Y_n),
for some 1 ≤ i ≤ n, and where none of the X_j's are JSJ pieces of S^3_K. The manifold X_0 ∪ X_1 ∪…∪ X_m is the surgered submanifold of S^3_K(p/q).
If the surgered submanifold is a JSJ piece of S^3_K(p/q), i.e., m=0, then we may also call it the surgered piece of S^3_K(p/q).
This definition is compatible with Definition <ref>. In fact, the surgered submanifold of a surgery S^3_K'(p/q) may not be a surgered piece only in the case where the outermost piece of S^3_K' is hyperbolic (proof of Proposition <ref> and Proposition <ref>).
We obtain an analogue of Proposition <ref> for composite knots when |q|>1.
Let K be a composite knot and suppose there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K'.
If |q|>1, then the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) carries the surgered piece of S^3_K(p/q) to a JSJ piece of the surgered submanifold of S^3_K'.
Let K = K_1 # K_2 #…# K_n where the K_i's are prime for each i=1, …, n. Let Y be the outermost composing space of S^3_K. It is homeomorphic to the exterior of the link in S^3 with unknotted components L_0, L_1, …, L_n such that each pair (L_0, L_i) for i = 1, …, n, is a Hopf link and the link formed by L_1, …, L_n is the unlink with n components. Let ℒ_i = ∂ν L_i be the boundary components of Y (Figure <ref>). By the remark above, the surgered piece X = Y(ℒ_0; p/q) of S^3_K(p/q) is well-defined.
Suppose the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) does not carry X into the surgered submanifold X' of S^3_K'. Then there is a component K_i of K, i∈{1, …, n}, whose exterior contains a submanifold homeomorphic to X'. Let 𝒯 = ∂S^3_K_i ⊂ S^3_K. The JSJ torus 𝒯 decomposes K into a composing pattern P and K_i. The homeomorphism sends 𝒯 to a JSJ torus 𝒯' of S^3_K'(p/q) that separates S^3_K'(p/q) into a manifold homeomorphic to S^3_K_i and the exterior of a knot J'. As a result, V_P(𝒫;p/q) is homeomorphic to the exterior of J', which contradicts Lemma <ref>.
§.§ Fillings of a hyperbolic piece
In order to prove Theorem <ref>, we must demonstrate that the surgered submanifold of S^3_K'(p/q) does not result from filling a hyperbolic JSJ piece of S^3_K'. Therefore, we now focus on the topology of the surgered submanifold of S^3_K'(p/q), under the assumption that the outermost piece of S^3_K' is hyperbolic. In this subsection, we study the surgery S^3_K'(p/q) by itself, without taking into account any constraints arising from an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q).
Recall from Theorem <ref> that if Y' is a hyperbolic JSJ piece in the exterior of a knot, then Y' is either the exterior of a hyperbolic knot L_0' in S^3 or the exterior of a hyperbolic link in S^3 with components L_0', L_1', …, L_n' such that the link formed by L_1', …, L_n' is the unlink with n components. From now on, we will denote the boundary components of such a hyperbolic piece Y' by ℒ_i' = ∂ν L_i', i=0, …, n.
Let Y' be a hyperbolic JSJ piece of a knot exterior. If Y'(ℒ_0';p'/q') is homeomorphic to either a composing space or to a p/q-filling of a composing space where |q| > 1, then |q'| ≤ 1.
Let Y be a composing space with n+1 boundary components labeled as in Figure <ref>. Suppose Y'(ℒ_0';p'/q') is homeomorphic to Y(ℒ_0;p/q), a Seifert fibre space with one exceptional fibre of order |q|>1. Then Y' also has n+1 boundary components. Up to permuting indices, we can assume that for each i=1, …, n, the homeomorphism maps ℒ_i' to ℒ_i.
There exist infinitely many slopes σ_1 on ℒ_1' such that the core of the surgery solid torus is an exceptional fibre in Y'(ℒ_0', ℒ_1'; p'/q', σ_1). There also exist infinitely many slopes σ_i on each ℒ_i', i=2, …, n, such that the cores of the surgery solid tori are regular fibres in Y'(ℒ_0', ℒ_i'; p'/q', σ_i). Since hyperbolic manifolds possess only finitely many exceptional surgery slopes on each of their torus boundary components, we can choose σ_1, …, σ_n such that Y' = Y'(ℒ_1', …, ℒ_n' ; σ_1, …, σ_n) is hyperbolic.
Now, Y'(ℒ_0'; p'/q') has base orbifold S^2 with two exceptional fibres, which means that it has cyclic fundamental group. On the other hand, Y'(ℒ_0'; 1/0) is homeomorphic to the exterior of the unlink with n components. Therefore, Y'(ℒ_0'; 1/0) is a connected sum of manifolds with cyclic fundamental groups. By Boyer and Zhang (<cit.>), or the Cyclic surgery theorem (Theorem <ref>) if the connected sum is trivial, we must have |q'| = Δ(p'/q',1/0) ≤ 1 since Y' is not Seifert fibred.
Suppose now that Y'(ℒ_0';p'/q') is homeomorphic to a composing space with n boundary components. The argument is similar to that above. There are infinitely many slopes σ_i on each component ℒ_i' of Y'(ℒ_0';p'/q') such that the cores of the surgery solid tori corresponding to σ_1, σ_2 are exceptional fibres and the cores of the surgery solid tori corresponding to σ_3, …, σ_n are regular fibres. We can choose the σ_i's so that Y' = Y'(ℒ_1', …, ℒ_n' ; σ_1, …, σ_n) is hyperbolic. Then Y'(ℒ_0'; p'/q') and Y'(ℒ_0'; 1/0) are fillings of a hyperbolic manifold that are respectively a manifold with cyclic fundamental group and a connected sum of manifolds with cyclic fundamental groups. We conclude as before with <cit.> or Theorem <ref>.
Let K' be a knot such that the outermost piece of S^3_K' is hyperbolic. If |q|>1, then the JSJ tori of S^3_K' are incompressible in S^3_K'(p/q).
Let Y' be the outermost piece of S^3_K'. We may assume that K' is a satellite as otherwise, the statement is vacuously true. The surgered submanifold of S^3_K'(p/q) contains Y'(ℒ_0';p/q). Suppose Y'(ℒ_0';p/q) has compressible boundary. As S^3_K'(p/q) is irreducible, Y'(ℒ_0';p/q) is also irreducible, so it must be a solid torus. This implies that Y' has two boundary components and, therefore, as Y'(ℒ_0'; 1/0) is also a solid torus, a result of Wu (<cit.>) implies that |q| = Δ(p/q,1/0)≤ 1, which contradicts our assumption |q|>1. Hence, Y'(ℒ_0';p/q) has incompressible boundary and as a consequence, the JSJ tori of S^3_K' are incompressible in S^3_K'(p/q).
Let K' be a knot such that the outermost piece Y' of S^3_K' is hyperbolic. If |q|>1, then the surgered submanifold of S^3_K'(p/q) is either Y'(ℒ_0';p/q), or the union of Y'(ℒ_0';p/q) and some Seifert fibred JSJ pieces of S^3_K' sharing a boundary component with Y' in S^3_K'.
§.§ Non-integral toroidal surgeries
We now study the surgered submanifold of S^3_K'(p/q) in the context of an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where K is a composite knot.
Recall that a 3-manifold is said to be toroidal if it contains an essential torus, and atoroidal otherwise. The following proposition narrows down our investigation to hyperbolic manifolds that admit a non-integral toroidal surgery.
Let K be a composite knot. Suppose there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where K' is such that the outermost piece Y' of S^3_K' is hyperbolic. If |q|>1, then Y'(ℒ_0';p/q) is toroidal.
By Proposition <ref>, the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) sends the surgered piece X of S^3_K(p/q) to a JSJ piece of the surgered submanifold X' of S^3_K'(p/q).
Suppose Y'(ℒ_0';p/q) is atoroidal. Then by Corollary <ref>, X' has trivial JSJ decomposition, which means that it is homeomorphic to X, a filling of a composing space. Hence, Y'(ℒ_0';p/q) is homeomorphic to a submanifold of a filling of a composing space.
Since Y'(ℒ_0';p/q) has incompressible torus boundary components, it must be homeomorphic to either a filling of a composing space or a composing space, since these are the only submanifolds of X' that have such a boundary. However, this contradicts Proposition <ref>.
Let Y' be a hyperbolic JSJ piece of a knot exterior with at least three boundary components. If |q|>1, then Y'(ℒ_0';p/q) is atoroidal.
The filling Y'(ℒ_0'; 1/0) is the complement of the unlink and it has compressible boundary. If Y'(ℒ_0'; p/q) contains an essential torus, then by a result of Wu (<cit.>), we have |q| = Δ(p/q, 1/0) ≤ 1, contradicting the assumption |q|>1.
Eudave-Muñoz constructed in <cit.> a family of hyperbolic knots that admit half-integral toroidal surgeries. These surgeries are the union of two Seifert fibre spaces. Gordon and Luecke proved that if a hyperbolic knot admits a non-integral toroidal surgery, then it belongs to Eudave-Muñoz's family and the surgery slope is half-integral (<cit.>).
Suppose there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where K' is such that the outermost piece of S^3_K' is hyperbolic. If |q|>1, then K' is not a hyperbolic knot.
By Eudave-Muñoz, Gordon and Luecke, any non-integral surgery along a hyperbolic knot contains at most one essential torus. However, the surgery S^3_K(p/q) contains at least two essential tori, the boundary components of the surgered piece being such tori.
We obtain the next corollary by combining Proposition <ref> and the two preceding lemmas.
Suppose there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where K' is such that the outermost piece Y' of S^3_K' is hyperbolic. If |q|>1, then Y' has exactly two boundary components.
Gordon and Luecke also classified in <cit.> all hyperbolic knots in solid tori which admit non-integral toroidal surgeries. They are derived from Eudave-Muñoz's construction mentioned above, and the resulting surgeries have link surgery descriptions as in Figure <ref>. The labels L_i identify the link components and the α_i's are the corresponding surgery slopes, which correspond to the slopes α, β, γ in <cit.>. The component L_4 is left unfilled. The essential torus 𝒯 is pictured. If μ_i is the slope on ∂ν L_i that bounds a disc in ν L_i, then Δ(α_i, μ_i) ≥ 2 (<cit.>). Hence, the surgery is the union along 𝒯 of two Seifert fibre spaces M_1 and M_2, with respective base orbifolds a disc with two cone points of orders Δ(α_1, μ_1) and Δ(α_2, μ_2), and an annulus with one cone point of order Δ(α_3, μ_3).
Let ℰ be the exterior of a hyperbolic knot K_0 in S^1 × D^2, with 𝒦_0 = ∂ν K_0, such that ℰ(𝒦_0; σ) is toroidal, where μ bounds a disc in ν K_0 and Δ(σ, μ) > 1. Then ℰ(𝒦_0; σ) is the union of Seifert fibre spaces M_1 and M_2. Suppose ∂M_2 contains ∂ℰ(𝒦_0; σ) = ∂(S^1 × D^2). The slope of a regular fibre of M_2 on ∂(S^1 × D^2) does not coincide with the slope that bounds a disc in ℰ(𝒦_0; μ) ≅ S^1 × D^2.
We follow <cit.>, in which the solid torus containing K_0 is denoted L(α, β, γ, *, 1/2) and the non-integral toroidal filling ℰ(𝒦_0; σ) is denoted L(α, β, γ, *, 1/0). If Δ(σ, μ)>1 and ℰ(𝒦_0; σ) is toroidal, then by the discussion above, ℰ(𝒦_0; σ) is the union of Seifert fibre spaces M_1 and M_2, and its surgery description is given by Figure <ref>.
Let h be the slope of a regular fibre of M_2 on 𝒮 = ∂(S^1 × D^2). Suppose by contradiction that h bounds a disc in ℰ(𝒦_0; μ) ≅ S^1 × D^2. Then ℰ(𝒮, 𝒦_0; h, μ) ≅ S^2 × S^1.
In the surgery description from Figure <ref>, filling along 𝒮 corresponds to filling along L_4. One can see that filling M_2 along a regular fibre yields the connected sum of a lens space and a solid torus whose meridian has distance one with a regular fibre of M_1. Hence, ℰ(𝒮, 𝒦_0; h, σ) is either a lens space or a connected sum of lens spaces. Since Δ(σ, μ)>1, this implies that ℰ(𝒮; h) is reducible (<cit.> and <cit.>).
Write ℰ(𝒮; h) = N_1 # N_2. Then S^2 × S^1 ≅ℰ(𝒮, 𝒦_0; h, μ) ≅ N_1 # N_2(𝒦_0; μ). But S^2 × S^1 does not contain a separating S^2, which means that N_1 ≅ S^2 × S^1 and N_2(𝒦_0; μ) ≅ S^3. It follows that ℰ(𝒮, 𝒦_0; h, σ) = (S^2 × S^1) # N_2(𝒦_0; σ). This contradicts the assertion that ℰ(𝒮, 𝒦_0; h, σ) is a lens space or a connected sum of lens spaces, as such spaces do not contain a non-separating essential sphere.
Let K' be a knot such that the outermost piece of S^3_K' is hyperbolic. If |q|>1, then there is no orientation-preserving homeomorphism between S^3_K(p/q) and S^3_K'(p/q).
Suppose by contradiction that there exists an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) where K' is such that the outermost piece Y' of S^3_K' is hyperbolic, and where |q|>1. By Corollary <ref>, the surgered submanifold of S^3_K'(p/q) contains Y'(ℒ_0';p/q). By Corollary <ref> and Lemma <ref>, Y' is the exterior of a knot in a solid torus and Y'(ℒ_0';p/q) is a non-integral toroidal filling. By Gordon and Luecke, Y'(ℒ_0';p/q) is the union of two Seifert fibre spaces M_1 and M_2, with respective base orbifolds a disc with two cone points and an annulus with one cone point.
Let X be the surgered piece of S^3_K(p/q) and let X' be its image in S^3_K'(p/q) by the homeomorphism S^3_K'(p/q) ≅ S^3_K(p/q). Since X' is a filling of a composing space, we have X' ∩ Y'(ℒ_0';p/q) = M_2 (proof of Proposition <ref>(2)).
Let 𝒯' = ∂Y'(ℒ_0';p/q) ⊂ ∂M_2. In S^3_K', the torus 𝒯' decomposes K' into P' and J'. Let ℰ = V_P' and 𝒫' = ∂ν P'.
The torus 𝒯' is mapped by the homeomorphism to an incompressible torus 𝒯 in X ⊂ S^3_K. Although this torus might not be a JSJ torus of S^3_K, it separates S^3_K into a pattern space V_P and a knot exterior S^3_J, where P is a composing pattern (and J is a composite knot if 𝒯 is not a JSJ torus). Let 𝒫 = ∂ν P ⊂ ∂V_P.
We thus have homeomorphisms V_P(𝒫;p/q) ≅ℰ(𝒫';p/q) and S^3_J ≅ S^3_J'. These imply that the meridian on 𝒯 given by J is sent by the homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) to the meridian on 𝒯' given by J', by the knot complement theorem.
By construction of satellite knots, the meridian on 𝒯' given by J' coincides with the slope that bounds a disc in ℰ(𝒫';1/0). By Proposition <ref>, this slope does not coincide with the slope of a regular fibre of M_2 on 𝒯'.
On the other hand, a regular fibre on 𝒯 in V_P(𝒫;p/q) comes from the Seifert fibred structure of a composing space, so it has meridional slope.
Hence, regular fibres on 𝒯 are not mapped to regular fibres on 𝒯'. This contradicts the unicity of the Seifert fibred structure on a Seifert fibre space with base orbifold an annulus and one cone point.
Let K be a composite knot and suppose there is an orientation-preserving homeomorphism S^3_K(p/q) ≅ S^3_K'(p/q) for some knot K', where |q|>1.
According to Proposition <ref>, the surgered piece of S^3_K(p/q) is carried into the surgered submanifold of S^3_K'(p/q). Proposition <ref> implies that the surgered submanifold of S^3_K'(p/q) is a JSJ piece of S^3_K'(p/q), and it is the p/(qt^2)-filling of a JSJ piece Y' of S^3_K' for some t ≥ 1.
According to Proposition <ref>, this filling Y'(p/(qt^2)) is homeomorphic to the surgered piece of S^3_K(p/q), a filling of a composing space with one exceptional fibre of order |q|. By Proposition <ref>, Y' is Seifert fibred. Since Y'(p/(qt^2)) has at least two boundary components, Y' is a composing space by Theorem <ref>. The exceptional fibre of Y'(p/(qt^2)) has order |qt^2|=|q|, so t=1 and K' is not a cable. Therefore, we conclude that K = K'.
|
http://arxiv.org/abs/2307.00391v1 | 20230701173921 | Hybrid quantum algorithms for flow problems | [
"Sachin S. Bharadwaj",
"Katepalli R. Sreenivasan"
] | quant-ph | [
"quant-ph",
"physics.app-ph",
"physics.comp-ph",
"physics.flu-dyn"
] |
[email protected]
Department of Mechanical and Aerospace Engineering, New York University, New York 11201 USA
[email protected]
Department of Mechanical and Aerospace Engineering, New York University, New York 11201 USA
Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
Department of Physics, New York University, New York, NY 10012
Center for Space Science, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
For quantum computing (QC) to emerge as a practically indispensable computational tool, there is a pressing need for quantum protocols with end-to-end practical applications, in this instance in fluid dynamics. To facilitate this, we debut here a high performance quantum simulator which we term QFlowS (Quantum Flow Simulator),
designed for fluid flow simulations using QC. Solving nonlinear flows by QC generally proceeds by solving an equivalent infinite dimensional linear system as a result of linear embedding. Thus, we first choose to simulate two well known linear, unsteady flows using QFlowS and demonstrate a previously unseen, full gate-level implementation of a hybrid and high-precision Quantum Linear Systems Algorithm (QLSA) for simulating such flows.
The utility of this simulator is shown by extracting error estimates and a power law scaling that relates T_0 (a parameter crucial to Hamiltonian simulations) to the condition number κ of the simulation matrix, and allows the prediction of an optimal scaling parameter for accurate eigenvalue estimation. Further, we append two speedup-preserving algorithms for (a) functional-form or sparse quantum state preparation
and (b) in situ quantum post-processing
to compute a nonlinear function of the velocity field, namely the viscous dissipation rate, resulting in an end-to-end complexity of 𝒪(poly log (N/ϵ)κ/ϵ_QPP), where N is the size of the linear system of equations, ϵ is the accuracy of the solution and ϵ_QPP is the accuracy of post-processing. This work demonstrates a possible way towards quantum simulation of fluid flows, and highlights the special considerations needed at the gate-level implementation of QC.
Hybrid quantum algorithms for flow problems
Katepalli R. Sreenivasan
August 1, 2023
===========================================
§ INTRODUCTION
Computer simulations of nonlinear physical systems—such as turbulent flows, glassy systems, climate physics, molecular dynamics and protein folding—are formidably hard to perform on even the most powerful supercomputers of today or of the foreseeable future.
In particular, the state-of-the-art Direct Numerical Simulations (DNS) of turbulent flows <cit.> governed by the Navier-Stokes equations, or of turbulent reacting flow problems and combustion <cit.>, both of which involve massive simulations with high grid resolutions, not only reveal fine details of the flow physics <cit.>, but also constantly contend with the limits of supercomputers on which the codes run <cit.>. However, simulation sizes required to settle fundamental asymptotic theories, or simulate turbulent systems such as the Sun or cyclones, or to simulate flows around complex geometries of practical interest, would require computing power that is several orders of magnitude higher than is currently available. Reaching such computational prospects calls for a paradigm shift in the computing technology.
One such potential candidate is Quantum Computing (QC)<cit.>,
which has striven to establish its advantage over classical counterparts by promising polynomial or exponential speedups <cit.>. Even though QC has been around for the last two decades, the subject is still nascent. In this nascent era, which has been called the Noisy Intermediate Scale Quantum (NISQ) era, QC's applications already extend <cit.> across finance, chemistry, biology, communication and cryptography, but not as much in areas that are predominantly nonlinear, such as fluid dynamics.
This work attempts to pave the way for utilizing QC in Computational Fluid Dynamics (CFD) research, which we have termed <cit.> Quantum Computation of Fluid Dynamics (QCFD). An initial comprehensive survey of various possible directions of QCFD was made in <cit.>. Realistic CFD simulations with quantum advantages require one to quantumly solve general nonlinear PDEs such as the Navier-Stokes equations. However, it is worth noting that the fundamental linearity of quantum mechanics itself blockades encoding of nonlinear terms, thus forcing a linearization of some kind <cit.>, which typically results in an infinite dimensional linear system. In such cases the inaccessibility to the required large number of qubits (and thus exponentially large vector spaces) leads to inevitable truncation errors, limiting the focus to weakly nonlinear problems <cit.>. Therefore the ability to solve high dimensional linear systems in an end-to-end manner [By this we mean an algorithm that efficiently prepares a quantum state, processes it and outputs a result by measurement while retaining all or some net quantum advantage.] while capturing the flow physics is crucial to simulating nonlinear flow problems. Our goal here is to present various steps involved in the process of solving simple and idealized problems, including providing estimates of scaling and errors involved.
To this end, we unveil here a high performance quantum simulator which we call QFlowS(Quantum Flow Simulator), designed particularly to simulate fluid flows. Built on a C++ platform, it offers both QC and CFD tools in one place. With QFlowS
we implement a modified version of the class of algorithms, now termed Quantum Linear Systems Algorithms (QLSA). Under some caveats, these algorithms promise to solve a linear system of equations given by the matrix inversion problem A𝐱=𝐛, with up to an exponential speedup compared to known classical algorithms. In recent years a number of efforts based on continuum methods using QLSA <cit.>, variational quantum algorithms <cit.>, amplitude estimation methods <cit.>, and quantum-inspired methods <cit.>, have been undertaken to solve linear and nonlinear PDEs. However most of these efforts have been theoretical, lacking gate-level quantum numerical simulations and analysis of the resulting flow field, or proper estimates of the actual errors involved.
In particular, a full gate-level quantum simulation is implemented on QFlowS to solve the unsteady Poiseuille and Couette flow problems, extendable with little change to the advection diffusion problem with constant advection velocity. We implement both the fundamental form of QLSA, called the Harrow-Hassidim-Lloyd (HHL) algorithm <cit.>, and its more recent counterpart <cit.>, based on the linear combination of unitaries (LCU). In addition, we prescribe suitable quantum state preparation protocols and propose a novel quantum post-processing (QPP) protocol to compute in situ nonlinear functions of the resulting flow solution. In particular, we obtain the viscous dissipation rate ε=ν⟨(∂ u/∂ y)^2⟩
of the flow field u and
viscosity ν.
Together, this forms an end-to-end implementation, which alleviates, to some extent, the caveats of both quantum state preparation and the measurement of qubits—which are otherwise the major limiters of the theoretical quantum advantage <cit.>. Although the proposed algorithms are far from realistic fluid simulations, they make quantum implementations more amenable for small systems, and inform the running of codes on near-term NISQ machines while attempting to preserve the quantum advantage.
§ LINEAR FLOW PROBLEMS
We consider the well known 1D unsteady Poiseuille and Couette flows (schematic shown in SI Appendix, figure S1(a)), which are linear dissipative flows that describe, for instance, micro-channel flows (e.g., in micro-chips, blood capillaries and syringes) or lubricant flows around bearings. The framework outlined in this work is readily extendable to the linear advection-diffusion eq. (<ref>) with constant advection velocity. More generally, this algorithm caters to the class of elliptic and parabolic PDEs described by d-dim Laplace, Poisson and heat equations. Under certain boundary conditions, the flows under discussion admit exact analytical solutions, thus making them ideal candidates for evaluating the performance of the quantum solver. Some earlier works such as <cit.> made some important observations in possible implementations on QC for similar problems and estimated theoretical upper bounds of their complexities. The general form of the governing PDEs considered here (assuming no body forces or source terms) is given by the momentum conservation and continuity relations of the kind
∂u/∂ t + 𝐂·∇𝐮 = 1/Re∇ ^2u - ∇p,
∂u/∂ t = 1/Re∇ ^2u - ∇p,
∇·u = 0 .
where 𝐮 = (u,v,w) is the velocity field, 𝐂 is a constant advection velocity, p is the pressure field, Re = UD/ν is the Reynolds number, U is the characteristic velocity, ν is the kinematic viscosity and D is the separation between the boundaries. Eq. (<ref>) enforces the incompressibility condition while eq. (<ref>) describes the well-known unsteady channel (or Poiseuille/Couette flow) (with 𝐂=0 in eq. (<ref>)) which in the 1D case (running example for all discussions from here on) reduces to
∂ u/∂ t = 1/Re∂^2 u/∂ y^2 - ∂ p/∂ x,
where the velocity varies only along y (wall-normal direction), and the pressure gradient ∂ p/∂ x is set to be a constant. The boundary conditions are no-slip with u(0,t) = u(D,t) = 0 for the Poiseuille flow and u(0,t) = 0 and u(D,t) = 1 for the Couette flow. The initial condition for the temporal evolution is set to be a uniform flow u(y,0)= u_in = 1.
We reiterate that this problem is simple from the standpoint of the sophisticated advances of classical CFD. However, this is an excellent starting point for demonstrating the viability of quantum algorithms for CFD, which is the spirit in which this work is presented.
§.§ Hybrid quantum-classical numerical setup
The goal now is to solve eq. (<ref>) by means of QLSA, which thus necessitates eq. (<ref>) to be recast as a linear system of equations. To do this, we consider the method of finite differences to discretize the computational domain in both space and time. Details of these schemes, their stability considerations and the resulting matrix equations that form the input to the quantum algorithm is outlined in SI Appendix, Section 1.A. The well-known second order central difference scheme is used to discretize the Laplacian operator for N_g grid points, while both forward and backward Euler (FE and BE from here on) schemes are implemented to discretize time, which yield the set of three possible matrix equations 𝐀_be1u̅ = 𝐛_be1, 𝐀_be2u̅ = 𝐛_be2 and 𝐀_feũ = 𝐛_fe.
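To fix ideas, the following is a minimal sketch (our own illustration; the actual matrices, preconditioning and stability considerations are specified in SI Appendix, Section 1.A) of the classical preprocessing that assembles a backward-Euler, central-difference system for eq. (<ref>) with Dirichlet walls; the names assemble_be, u_wall0 and u_wallD are ours.

import numpy as np

def assemble_be(N_g, Re, dpdx, dt, D=1.0, u_wall0=0.0, u_wallD=0.0):
    # Backward Euler in time, second-order central differences in space:
    # -r u_{j-1} + (1 + 2r) u_j - r u_{j+1} = u_j^old - dt*dpdx, with r = dt/(Re*dy^2)
    dy = D / (N_g + 1)
    r = dt / (Re * dy**2)
    A = (np.diag((1 + 2 * r) * np.ones(N_g))
         + np.diag(-r * np.ones(N_g - 1), 1)
         + np.diag(-r * np.ones(N_g - 1), -1))
    def rhs(u_prev):
        b = u_prev - dt * dpdx
        b[0] += r * u_wall0          # Dirichlet wall values enter the right-hand side
        b[-1] += r * u_wallD
        return b
    return A, rhs

A, rhs = assemble_be(N_g=8, Re=10.0, dpdx=-1.0, dt=0.01)
print(np.linalg.cond(A))             # condition number kappa, which enters the QLSA run time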
To solve these equations, a hybrid quantum-classical method is developed (schematic flow chart is shown in figure <ref>(b)).
The preconditioning and computations of the elements of the matrices 𝐀_be1, 𝐀_be2,𝐀_fe and vectors 𝐛_be1,𝐛_be2,𝐛_fe are done classically. Certain parameters required (as elucidated later) for quantum state preparation (e.g., rotation angles and decision trees) and for Hamiltonian simulation (time T^*_0) are pre-computed classically as well. N from here on refers to the dimension of final matrix system that results from these considerations. With this on hand, the inputs are first loaded on the QC by the quantum state preparation algorithms (QSP-1,2) and the resulting linear system of equations is then solved by QLSA. In the case of iterative BE, 𝐀_be1u̅ = 𝐛_be1 is solved for velocities u̅, at every time step until convergence (residue reaching a tolerance ≤ϵ_tol = 10^-6), which is checked classically. In a contrasting setup, BE and FE are used to set up, respectively, 𝐀_be2ũ = 𝐛_be2 and 𝐀_feũ = 𝐛_fe, giving ũ = [u(y,0),u(y,dt),⋯,u(y,T)], in one shot, ∀ t∈[0,T]. It is important to note that even the BE method can be setup such that the solution is computed ∀ t in one go. However, in the absence of efficient state preparation and measurement protocols, measuring the solution and re-preparing the state for the next time step are 𝒪(N_g) operations that eliminate any quantum advantage, making the overall algorithm no better than classical solvers (and with additional errors due to quantum measurements). In any case, it is still worthwhile establishing how the method fares as a plausible alternative to classical simulations.
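A schematic of the iterative BE loop just described is sketched below (our own illustration, reusing assemble_be from the sketch above); numpy.linalg.solve merely stands in for the QLSA solve of 𝐀_be1u̅ = 𝐛_be1, and the convergence check is classical, as in the text.

import numpy as np

N_g, Re, dpdx, dt, tol = 8, 10.0, -1.0, 0.01, 1e-6
A, rhs = assemble_be(N_g, Re, dpdx, dt)    # classical preprocessing (sketch above)
u = np.ones(N_g)                           # uniform initial condition u_in = 1
while True:
    u_new = np.linalg.solve(A, rhs(u))     # placeholder for the quantum linear solve
    converged = np.linalg.norm(u_new - u) < tol
    u = u_new
    if converged:
        break
print(u)                                   # approaches the steady profile -(Re/2)(dp/dx) y(1 - y)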
The final solution is either: (a) simply read by quantum measurements for post-processing on a classical device, or (b) the solution is post-processed using the QPP protocol introduced here in situ for a quantum device. The former, at the level of a simulator, allows one to validate the correctness of solutions and redesign the circuit as required. While in the latter case, only a single target qubit and few ancillas are measured, which outputs one observable—which is a real-valued nonlinear function of the velocity field. Apart from computing nonlinear functions, this circumvents expensive and noisy measurements of entire quantum states and more importantly preserves quantum advantage (to the extent possible).
§ QUANTUM FLOW SIMULATOR - QFLOWS
In <cit.> several commercially available quantum simulation packages are listed. Most of them, for instance Qiskit (IBM), Quipper <cit.> and QuEST <cit.>, are constructed for general purpose quantum simulations and are highly optimized for such operations, making it hard to customize the fundamental subroutines and data structures for CFD calculations. On the other hand, there are softwares such as ANSYS and OpenFOAM that perform solely classical CFD simulations. With the motivation of having a single bespoke quantum simulator for CFD, we unveil here a high performance, gate level quantum-simulation toolkit, which we call QFlowS; it is based on a C++ core and designed to be used both independently or as part of other software packages. It has a current capability of 30+ qubit simulation of custom quantum circuits. It also has several built-in gates and quantum circuits that could be used readily, while also being able to probe different quantum state metrics (such as the norm, density matrix and entanglement). Along with these, it includes basic CFD tools needed to set up flow problems making it versatile for QCFD simulations. Noise modelling is in progress and forms the major part of future software development. QFlowS is also continually being parallelized for optimal performance on supercomputers. For instance, figure <ref>(c) shows the strong scaling performance using OpenMP. The performance is measured while running on NYU's Greene supercomputing facility. On a single medium-memory computer node (48 cores: 2x Intel Xeon Platinum 8268 24C 205W 2.9GHz Processor) and for a choice of 20 qubits, we measure the run-time (by omitting the one-time initial overhead processes) of a QFT-IQFT circuit action on an ensemble of randomly initialized quantum states. We observe near optimal and at times super-optimal scaling with increasing number of threads up to 24. (Super-optimality arises when the quantum circuit is sparse, causing lesser quantum entanglement. Every single circuit layer operation is distributed over many worker threads, whose cache size exceeds the size of quantum state subspace being handled, thus making them closely parallel). SI Appendix, Section 2, summarizes features of QFlowS.
§ QUANTUM LINEAR SYSTEMS ALGORITHM (QLSA)
One of the first quantum protocols for solving equations of the form Ax⃗=b⃗ is the HHL algorithm <cit.> which we refer to here as QLSA-1. In <cit.> it was shown that: For a hermitian and non-singular matrix A ∈ℂ^2^n×2^n, vector b ∈ℂ^2^n (N=2^n), given oracles to prepare A and b in 𝒪(polylog(N)), and a prescribed precision of ϵ > 0, there exists an algorithm that computes a solution x such that |||x⟩ - |A^-1b⟩||≤ϵ in 𝒪(polylog(N)s^2κ^2 /ϵ), where κ is the condition number of the matrix. This shows that the algorithm is exponentially faster than classical alternatives, but there are important caveats <cit.>. Later works <cit.> attempted to address these caveats, while some others <cit.> fundamentally improved the method by reducing error complexity from poly(1/ϵ) to poly(log(1/ϵ)). Consequently, refs. <cit.> led to a more precise class of QLSA methods based on the linear combination of unitaries (LCU) <cit.> which we shall refer to as QLSA-2. In <cit.> it was shown that under similar caveats of QLSA-1, we have: For a hermitian and invertible matrix A ∈ℂ^2^n×2^n, vector b ∈ℂ^2^n, given oracles to prepare A and b in 𝒪(polylog(N)), and a prescribed precision of ϵ > 0, there exists an algorithm that computes a solution x such that |||x⟩ - |A^-1b⟩||≤ϵ in 𝒪(polylog(N/ϵ)κ).
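Ignoring constant factors and the precise polylog exponents, the gap between the two scalings can be illustrated numerically (hypothetical N, s, κ and ϵ; log2 stands in for the polylog factor):

import math

N, s, kappa, eps = 2**20, 3, 1.0e3, 1.0e-6
qlsa1 = math.log2(N) * s**2 * kappa**2 / eps   # ~ polylog(N) s^2 kappa^2 / eps
qlsa2 = math.log2(N / eps) * kappa             # ~ polylog(N/eps) kappa
print(f"QLSA-1 ~ {qlsa1:.2e}, QLSA-2 ~ {qlsa2:.2e}")   # the 1/eps and kappa^2 factors dominate QLSA-1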
This work implements algorithms due to both these methods <cit.>.
In the QCFD context, we now explore methods suitable for preparing {𝐛_be1,𝐛_be2,𝐛_fe} and the matrices {𝐀_be1,𝐀_be2,𝐀_fe}, to enable the post-processing of the solution ũ, in order to construct an end-to-end method.
§.§ Quantum State Preparation
To prepare quantum states that encode 𝐛_be1, 𝐛_be2 and 𝐛_fe, we implement two different methods, both offering sub-exponential circuit depth complexity:
(1) In the case of iterative BE, the vector 𝐛_be1, prepared at every time step, is generally fully dense with sparsity s_b∼𝒪(N_be1). In the specific cases of Poiseuille and Couette flows, and for the specific initial conditions considered here, the state prepared at every time step forms a discrete log-concave distribution (i.e., ∂^2log(b)/∂ y^2<0 for all t ≥ 0), which can also be confirmed from the analytical solution given by eq. (<ref>), known for this case as
u(y,t) = ∑_k=1^∞ [2(1-(-1)^k)/kπ(1+∂ p/∂ xRe/(kπ)^2) sin(kπ y/D) e^-t/Re(kπ y/D)^2] - Re/2∂ p/∂ x y(1-y).
Even if the exact solution is not known, provided the initial condition is the only state preparation involved in the algorithm, flexibility exists for most flow simulations in choosing initial conditions that are log-concave, so that one could invoke a Grover-Rudolph state preparation <cit.> technique (or its more evolved offshoots <cit.>) to offer an efficient way to encode data. Two comments are useful. (i) Though this method could be used for arbitrary state-vectors (at the cost of exponential circuit depth), for an efficient state preparation, some information on the functional form of the state needs to be known a priori: from analytical solutions, classical CFD, or by the measurement of the quantum circuit at intermittent time-steps, peeking into its instantaneous functional form. Here, we implement a similar method, which we shall refer to as QSP-1, based on <cit.>, where it was shown that: Given a vector 𝐛_be1∈ℝ^N_be1, a state 𝐛'_be1 can be prepared such that ||𝐛_be1⟩ - |𝐛'_be1⟩| < 𝒪(1/poly(N_be1)) in 𝒪(log(N_be1)) steps. (ii) Measuring all qubits of the register (∼𝒪(N_be1)) at every time step compromises the exponential speed-up and could introduce measurement errors. However, such a method of recursive state preparation and measurement could still prove to be useful with quantum advantage for a very small number of qubits <cit.>. In any case, we implement this method here to explore if such a BE scheme gives accurate results with or without quantum advantage.
(2) In the case of the one shot methods, an alternative quantum state preparation method can be considered since 𝐛_fe is generally larger in size ∼𝒪((m+p)N_g) (this discussion applies similarly to 𝐛_be2). When considered together with all other registers that are initially set to |0⟩, 𝐛_fe is a highly sparse state vector with s_b∼𝒪(N_gm). For such states, we implement a sparse state preparation protocol
<cit.>, which we shall refer to as QSP-2. It provides an optimal circuit depth that scales only polynomially with vector size. This method involves constructing decision trees that form an alternative way to represent quantum states. Careful optimization of the structure of these trees leads to efficient state preparation whose complexity depends on the number of continuous pathways in the resulting tree structure. Thus, rephrasing here the result in <cit.>, we have: Given an n-qubit initial state of size N=2^n, all set to |0⟩, except for a sparse vector subspace 𝐛_fe∈ℝ^N_fe (N_fe = 𝒪((m+p)N_g)), with sparsity s_b= m ≪ N, then with only single qubit and CNOT gates, one can prepare such a state in 𝒪(2kn) time, using k×𝒪(n) CNOT gates and 1 ancillary qubit, where k(≤ m) is the number of branch paths of the decision tree.
Both QSP-1 & 2 are elucidated with an example in SI Appendix, Section 3.
§ FLOW SIMULATION RESULTS
We construct and solve the system given by eqs. (<ref>) for N_g=10 and Re = 10. We observe that the quantum solutions for the velocity field capture the physics both qualitatively and quantitatively. To discuss closely the utility of QFlowS we consider results from QLSA-1. As shown in figure <ref>(a) the converged steady state solution (using iterative BE) undershoots the analytic solution for 7 qubits, and performs better with a higher number of qubits (Q_PE≥ 14) that are allocated for the Quantum Phase Estimation (QPE) algorithm, which in turn decides the quantum numerical precision. Similarly, the converged solution for the one shot FE and BE cases also become more accurate with respect to both analytical and classical CFD solutions, with increasing number of qubits, as seen in figure <ref>(c) and (d) respectively.
Our experience is that, among the three schemes, the one-shot FE and BE turn out to be more accurate than the iterative BE as the number of qubits increases, as shown in figure <ref>(a), where the error ϵ_rms is computed with respect to the analytical solution. Between the two one-shot schemes, the error behavior is quantitatively nearly the same, but: (i) we see some spurious oscillation-like error in the velocity profile for the BE case, as seen in figure <ref>(d) for higher Q_PE; (ii) the BE case, however, has no stability-based restrictions, unlike the FE case, making it more flexible in the choice of dt; (iii) when accuracy of the temporal discretization is the concern, the one-shot FE fares better. The performance can also be measured by computing the fidelity of the solution, as shown in the inset of figure <ref>(a), which shows BE to perform better than FE. However, fidelity might not always be a good indicator of performance, as illustrated in SI Appendix, Section 1.B. Both QLSA-1 and QLSA-2 rely on a variant of the phase estimation algorithm, which generally contributes most to the total error; in QLSA-2, the Gapped Phase Estimation (GPE) is computationally inexpensive and less erroneous than the QPE in QLSA-1 <cit.>.
In the case of phase estimation, the operator/matrix under consideration is exponentiated first as e^iAT_0, where T_0 is the Hamiltonian simulation time. An optimal choice T^*_0 (unknown a priori) scales on the eigenvalues λ_j that are spectrally decomposed in the basis of A as ∑_je^iλ_jT^*_0|u_j⟩⟨ u_j| to produce the best Q_QPE-bit binary representation |λ_j⟩_Q_QPE. It also minimizes possible truncation errors and any spurious quantum numerical diffusion.
In the case of QLSA-1, it is important to note that, since we are interested in estimating λ^-1 eventually, the smallest eigenvalue will contribute the most to the error. It is therefore essential to ensure that the smallest value representable (least count) with Q_PE qubits, 2^-Q_PE, is ≤λ̃_min. The error for all cases shown in figure <ref>(a) has a gradual step-like decay because increasing Q_PE in small steps (of 𝒪(1)) does not lower the least count appreciably (in log_10 or log_e) as Q_PE gets larger. In the case of QLSA-2, though it evades a full blown QPE, the right choice of T_0 (for the Fourier approach <cit.>) and of the LCU coefficients is still crucial for better accuracy.
At the level of the flow field when we probed further, the choice of T_0 seemed to exhibit non-trivial effects; for instance, in the iterative BE case, when gradually increasing Q_PE qubits, the converged solution either undershot or overshot the analytical solution initially. This is captured in figure <ref>(b), where the error ϵ (with respect to the analytical solution 𝐮_𝐚𝐧 of the center line velocity solution) oscillates around ϵ = 0 before converging to it for Q_PE > 12. For the specific case shown here, a choice of T^*_0 = 1.75 (dotted black line) has the least oscillation of the error and best accuracy.
We can now ask: what combination TQ = (T_0,Q_PE) gives the least error? To answer this, we take a sample matrix equation system (of size 8×8 and κ = 18.8795) and solve it for different TQ. We then make a contour plot of the QLSA error ϵ_QLSA= ||u_Q-u_C||, as shown in figure <ref>(c), and trace the path of least error ϵ_min for each TQ. Further, the range of the T_0 scan can be reduced with some initial estimates of the lower and upper bounds of λ_min and λ_max <cit.>, such as β_1 - β_2√(N-1)≤λ_min≤β_1 - β_2/√(N-1) and β_1 + β_2/√(N-1)≤λ_max≤β_1+β_2√(N-1), where β_1 = Tr(A)/N and β_2 = (Tr(A^2)/N - β^2_1)^1/2. We observe that the optimal T^*_0 for all combinations lies in a fairly small range Δ T_0∼ 0.1. There is a unique value lying along the median of this range, T^*_0≈ 1.3, for which the system performs best. This means that all or most eigenvalues are best represented in binary form with Q_QPE qubits (one or some of the eigenvalues could also turn out to be represented exactly). Further, given T^*_0, with increasing number of qubits, the minimum error exhibits a power law decay ϵ_min∼ Q_PE^-6.81, as shown in figure <ref>(d), reaching ∼ 10^-5 at around 13 qubits. The exponent becomes increasingly negative with decreasing κ, since the range of eigenvalues becomes smaller and more eigenvalues tend to be easily representable with a given number of qubits. The thick horizontal red line shows the least count for Q_PE = 13; here, ϵ_min < 2^-13.
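As a small illustration, the following numpy sketch evaluates these trace-based bounds and uses them to narrow a T_0 scan; the test matrix and the scan heuristic are ours for illustration and are not the QFlowS QPE-optimizer routine.

```python
import numpy as np

def eigenvalue_bounds(A):
    """Trace-based bounds on the extreme eigenvalues of a symmetric matrix A,
    following the beta_1, beta_2 expressions quoted above."""
    N = A.shape[0]
    b1 = np.trace(A) / N
    b2 = np.sqrt(np.trace(A @ A) / N - b1**2)
    lam_min = (b1 - b2 * np.sqrt(N - 1), b1 - b2 / np.sqrt(N - 1))
    lam_max = (b1 + b2 / np.sqrt(N - 1), b1 + b2 * np.sqrt(N - 1))
    return lam_min, lam_max            # (lower, upper) bound pairs

# illustrative symmetric test system; a T0 grid restricted by the bounds
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
(lmin_lo, _), (_, lmax_hi) = eigenvalue_bounds(A)
T0_grid = np.linspace(0.1, 2.0 * np.pi / lmax_hi, 25)   # heuristic scan range
print(lmin_lo, lmax_hi, T0_grid[0], T0_grid[-1])
```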
This favorable possibility (an ϵ_min below the least count) arises because a subspace of the solution set could have a near exact representation using the given number of qubits, which lowers the overall L2 error to ≤ 2^-Q_PE. This means the minimum number of qubits needed to attain an error ϵ grows as a power law (Q_PE)_min∼ 2.92ϵ^-0.1158, as shown in the inset of figure <ref>(d). If not for the right choice of T^*_0, for Q_PE>3, one would need many more qubits, 𝒪(1.44log(ϵ^-1)), to lower the overall error. We note that ϵ as computed here does not include the error from finite differences, ϵ_fd∼𝒪(Δ y^2,Δ t), which plagues both quantum and classical solutions alike.
The quantum solution, however, gets closer to the classical solution with increasing qubits (Q_PE = 9, yellow curve).
Thus, being able to estimate T^*_0 fairly accurately reduces the overall computational resources required as well as the error, making it amenable for NISQ devices. Though there have been several analytical, asymptotic prescriptions for the choice of T^*_0 <cit.>, the exact choice remains elusive. To shed better light, QFlowS is equipped with a QPE optimizer subroutine which, on the basis of the nature of the flow problem and the numerical method (finite differences) used, estimates T^*_0 by very minimal classical pre-processing. Since κ decides the range of eigenvalues and the invertibility of the matrix, it forms a common link that characterizes matrices for different systems with similar sparsity. Therefore, a relation that uniquely connects κ with T^*_0 would make a reasonable basis for prescribing T^*_0 for different system configurations. We provide such a relation which, though generalizable in behavior, is specific to: (1) the class of elliptic and parabolic linear PDEs considered here; (2) finite difference based numerical formulations that give rise to either sparse, band diagonal, lower triangular, Toeplitz or circulant matrices.
Since the one-shot FE and BE schemes perform better than the iterative BE, we take the matrix system of the former for N_g=10 and characterize how κ varies as a function of matrix size m=⌈log_2(T/dt)⌉ and viscosity ν = 1/Re. Figure <ref>(a) shows, for a specific case of T=1, dt=0.001, that κ grows as a stretched exponential with decreasing (increasing) ν (Re) and saturates for very low ν, while for a fixed ν = 0.1, κ grows exponentially with m, as shown in the inset of figure <ref>(a). However, the overall behavior of κ with both ν and the system size m is shown in figure <ref>(b), where the κ-ν curves transition from exponential to stretched exponential fits as plotted for increasing m, obtaining the relation κ≤ m(e^-0.02mν+2). This relation confirms that κ is bounded and is not exponentially large. Here we explicitly highlight the dependence of κ on ν and note that it is also bounded from above by κ = 3(m+p+1), as given in <cit.> for nonlinear PDE systems, which generally have higher κ than linear PDE problems. In both cases, κ increases with Re and m.
With this relation in hand, we proceed to compute T^*_0 for increasing κ by extracting, as before, the TQ phase diagram, and finally obtain the relation T^*_0∼ -0.363log(κ) +0.918. In practice, only a small zone of the phase diagram is explored. The effect of ν is as one would expect, with T^*_0 decreasing and saturating for very low values. This T^*_0-κ relation, obtained via simulations on QFlowS, serves as an ideal source for choosing T^*_0 appropriately to perform accurate simulations for bigger circuit sizes as well. The earlier relation (despite a slight variation with matrix structure) forms a reasonable approximation of T^*_0 for all systems considered here. This process predicts the optimal T^*_0 for Hamiltonian simulation algorithms for both QLSA-1 and QLSA-2. In the case of QLSA-2, QFlowS also efficiently generates the set of LCU coefficients for optimal performance. Further, even for other classes of problems (different PDEs and discretization schemes), QFlowS's QPE optimizer could be employed to perform similar low cost classical pre-processing to suggest optimal T^*_0 for accurate and efficient fluid flow simulations. Finally, barring minor quantitative differences, performing a similar analysis on the Couette flow case shows that the qualitative outcome and inferences drawn are nearly the same as for the Poiseuille flow case seen here. The corresponding velocity profiles for the Couette flow are shown in SI Appendix, Section 1.B.
§ QUANTUM POST-PROCESSING PROTOCOL (QPP)
Once the velocity field is obtained, measuring it by repeated execution (excluding the requirements of quantum amplitude amplification <cit.>) of the quantum circuit (𝒪(N) complexity) will compromise any quantum advantage and also introduce measurement errors. Here, we examine a QPP that produces just one real-valued output of the average viscous dissipation rate per unit volume, ε=ν⟨(∂ u/∂ y)^2⟩ (by requiring only a very few measurements). Given that we are equipped with an oracle U_V (QLSA-2) that prepares a quantum register with the velocity field solution, we append to it a derivative module that computes first ∂ u/∂ y of the solution depicted in the circuit diagram shown in figure <ref>(a). This is done by either (1) the LCU method where a finite difference matrix of first derivative is decomposed as linear combination of unitaries, or (2) a spectral method in which an IQFT is first applied to enter the conjugate space and the first derivative is now a simple scalar multiplication with the corresponding wavenumber, k. Finally the application of QFT transforms it back into real space.
At this point, if one is interested in general nonlinear functions such as trigonometric, logarithmic, square-root or higher powers, we implement the following procedure. The derivatives that are stored as quantum amplitudes are first converted into an n_m-bit binary representation using a Quantum Analog-Digital Converter (QADC) <cit.>. Following this step, a direct squaring algorithm outlined in SI Appendix Section 4, or a binary quantum arithmetic squaring circuit (an inverse of the square-root algorithm, which is more expensive than the former, see <cit.>) is used to compute (∂ u/∂ y)^2, which is finally converted back into amplitude encoding using a Quantum Digital-Analog Converter (QDAC) <cit.>. This algorithm requires 𝒪(1/ϵ_QPP) calls to the controlled-U_V oracles (each with a complexity of 𝒪(polylog(N/ϵ)κ ), thus an overall complexity of 𝒪(polylog(N/ϵ)κ/ϵ_QPP )), one query to the bit-squaring algorithm with complexity 𝒪((log_2 N)^2), and 𝒪((log_2 N)^2/ϵ_QPP) single- and two-qubit gates. The outline of the QPP algorithm introduced here, along with its circuit implementation, is given in SI Appendix, Section 4.
As a final step we apply a matrix U_avg that computes the sum of derivatives at all points into one qubit, measuring which, along with a few ancillas, outputs the desired ε (after some normalization and multiplication by ν). Applying QPP on the quantum solution yields a behavior of ε with Re, computed at T=0.2, as shown in figure <ref>(c). Since the initial condition is a uniform flow, in the beginning there will be sharp gradients near the wall. To capture them, one would need a large number of grid points since the error in ∂ u/∂ y is ϵ≈𝒪(Δ y). This effect is seen clearly from figure <ref>(c), where the classical dissipation computed for N_g=8,16 and 64 shows improving trends; for high enough N, it begins to closely follow the analytical result. We pick the case of N_g=8 for the quantum case and see that it follows closely the corresponding classical solution for a total of 13 qubits. This could be made more accurate with more of the QPE qubits and the resolution N_g, as seen before. In essence, this demonstrates the possibility for computing quantities such as ε effectively as a quantum post processing step.
§ DISCUSSION
We have demonstrated here a possible quantum algorithmic strategy and its full implementation using gate-based quantum circuits on QFlowS, to simulate Poiseuille and Couette flows in an end-to-end manner. First, we identify suitable quantum state preparation algorithms by considering the sparsity and functional forms of the initial velocity data being encoded. In CFD, it is generally admissible to choose a relatively simple form and a sparse initial condition, which would result in a relatively low cost of state preparation. QSP-1,2 could both be used as shown here, by assessing the form of input to encode initial and boundary conditions with sub-exponential complexity (𝒪(log(N_be)) and 𝒪(kn), respectively), as well as to re-initialize instantaneous velocity fields. However, data that are dense with no functional form would force an exponential circuit depth.
Second, using finite difference schemes, the governing equations were discretized to form linear systems of equations, and solved by implementing QLSA-1 and -2, which are state-of-the-art, high precision algorithms with exponential advantage compared to classical schemes. Here, we have made a detailed analysis of the behavior of the velocity solutions and the attendant errors, which has revealed that FE outperforms BE. Further, we examined the role of T_0 and discussed algorithms to prescribe the optimal value T^*_0. The power-law and exponential form relations of (ϵ_min-Q_PE) and (T^*_0-κ), respectively, given by QFlowS, form a well-informed basis to choose T^*_0 and the minimum required qubits to perform accurate (up to ϵ_min) and qubit-efficient fluid flow simulations. Though QLSA-2 evades QPE and can provide exponential advantage in precision 𝒪(poly log(N/ϵ)κ ), other methods based on adiabatic QC <cit.> could have potentially simpler implementations, while offering a similar performance, motivating further investigations.
To keep the discussion compact, the data reported here are taken mainly from QLSA-1, but a similarly detailed discussion of simulation results with QLSA-2 forms the bulk of the upcoming work. In QLSA-2 the critical factors computed are the coefficients for LCU, GPE parameters and a comparison between the Fourier and Chebyshev approaches. Further, we have introduced a QPP protocol, where we propose the computation of the viscous dissipation rate ε, using a specific combination of QFT, IQFT, QADC, QDAC and bit-arithmetic, with an overall complexity that scales as 𝒪(poly log(N/ϵ)κ/ϵ_QPP + ((log_2 N)^2/ϵ_QPP) ). The QPP introduces an extra 𝒪(1/ϵ_QPP) scaling that brings down the performance of QLSA-2 to the level of QLSA-1 (if ϵ≈ϵ_QPP). Added to this, for purposes of quantum amplitude amplification, QFlowS is capable of repeating circuit runs in parallel (currently tested up to ∼8000 shots).
We should point out that this method avoids measuring the entire velocity field, thus protecting it from compromising the quantum advantage and escalating possible measurement errors. We observe that ε computed from the resulting quantum simulations captures the known analytical results. This method can be extended to compute other nonlinear functions of the velocity field.
In summary, along with introducing a new quantum simulator package QFlowS—designed mainly for CFD applications—we also demonstrate a complete implementation of an end-to-end algorithm to perform fluid flow simulations using QC, which paves the way for future QCFD simulations of both linear and nonlinear flows. However, it is important to note that we have not addressed other key challenges such as noise and quantum error correction, which we emphasize are critical to simulations on near term quantum devices. Also, the complexities of the algorithms provided here are estimates; along with investigating higher order finite difference schemes, a detailed error and complexity analysis along with computing exact gate counts and circuit depths form an important part of ongoing efforts.
Extending the methods and tools presented here to nonlinear systems such as the Burgers and Navier-Stokes equations is also part of the ongoing efforts.
§.§ Data availability
All study data are included in the main text. The package QFlowS will shortly be made available as an open-source package on GitHub.
We wish to thank Dhawal Buaria (NYU), Yigit Subaśi (LANL), Jörg Schumacher (TU Ilmenau), Balu Nadiga (LANL), Patrick Rebentrost (CQT), Stefan Wörner (IBM) and Philipp Pfeffer (TU Ilmenau) for insightful discussions. S.S.B acknowledges the computational resources provided by the NYU Greene supercomputing facility on which these simulations were performed.
§ NUMERICAL METHOD
§.§ Finite difference
§.§.§ Spatial discretization
The well known 2nd order central difference scheme is used to discretize the flow domain into N_g equidistant grid points, as shown in figure S1(a)
(u = [u_1,u_2, ⋯ ,u_N_g]), with grid spacing Δ y=h=1/(N_g+1), admitting a discretization error ∼𝒪(Δ y^2). Since the velocity is known at the boundaries, one solves only for the N_g-2 unknown internal grid points. Thus the Laplacian operator can be written as
Δ u = ( u(y_i+h) - 2u(y_i) + u(y_i-h) )/h^2 + h.o.t.
Now denoting this discretization operator as matrix 𝐀 and letting the pressure gradient form a constant vector 𝐟, we can rewrite eq. 2 (in the main text) as
∂ u/∂ t = 𝐀u + 𝐟 .
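As a concrete illustration of this discretization, the short numpy sketch below assembles the interior-point operator 𝐀 and the constant forcing 𝐟 and recovers the steady parabolic profile; the non-dimensional form and the parameter values are assumptions chosen to match the steady part of the analytical solution quoted in the main text, and the code is not part of QFlowS.

```python
import numpy as np

def poiseuille_operator(Ng, Re=10.0, dpdx=-1.0):
    """Central-difference operator A and forcing f for the assumed form
    du/dt = (1/Re) d2u/dy2 - dp/dx on (0,1), with no-slip walls at y=0,1."""
    h = 1.0 / (Ng + 1)                                    # grid spacing
    D2 = (np.diag(-2.0 * np.ones(Ng)) +
          np.diag(np.ones(Ng - 1), 1) + np.diag(np.ones(Ng - 1), -1))
    return D2 / (Re * h * h), -dpdx * np.ones(Ng)

A, f = poiseuille_operator(Ng=10)
u_steady = np.linalg.solve(-A, f)                         # steady state: A u + f = 0
y = np.linspace(0.0, 1.0, 12)[1:-1]                       # interior points
print(np.max(np.abs(u_steady - 5.0 * y * (1.0 - y))))     # matches -(Re/2)(dp/dx) y(1-y)
```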
§.§.§ Temporal discretization:
To integrate in time, the temporal domain t ∈ [0,T] is discretized into m = T/Δ t time steps using two different schemes both admitting an error ∼𝒪(Δ t):
* Backward Euler or Implicit method: This discretizes the time derivative as
u^j+1-u^j/Δ t = 𝐀u^j+1 + 𝐟,
and gives the matrix equation
𝐀_be1u̅ = 𝐛_be1,
which needs to be inverted recursively to obtain the velocity field at every time step, where 𝐀_be1 = -(𝐀Δ t - 𝐈), u̅ = u^j+1 and 𝐛_be1 = u^j + 𝐟Δ t. This scheme is known to be unconditionally stable with any choice of the size of Δ t.
We also set up an alternative matrix equation
𝐀_be2u̅ = 𝐛_be2,
for all time steps, as shown in eq. <ref>, where (A_be2)_ij = -(𝐈 + AΔ t) for i=j, and (A_be2)_ij = -𝐈 for j=i-1 (for all i ≥ 1). Further, (b_be2)_i = u_in for i=0, (b_be2)_i = -𝐟Δ t for 0<i≤ m, and (b_be2)_i = 0 for i>m.
* Forward Euler or Explicit method: Here the discretization is given by
u_i^j+1-u_i^j/Δ t = 𝐀u_i^j + 𝐟 ,
which leads to the matrix equation,
𝐀_feũ = 𝐛_fe,
where 𝐀_fe has a double-banded structure as written in eq. <ref>: (A_fe)_ij = 𝐈 for i=j; (A_fe)_ij = -(𝐈 + AΔ t) for j=i-1 when i≤ m; and (A_fe)_ij = -𝐈 for j=i-1 when i>m. Further, (b_fe)_i = u_in for i=0, (b_fe)_i = 𝐟Δ t for 0<i≤ m, and (b_fe)_i = 0 for i>m.
Equations <ref> and <ref> thus unroll all the time steps into one big matrix of dimensions (m+p+1)N_g× (m+p+1)N_g, solving for the velocity ũ = [u^0,u^1,⋯,u^m+p] at all times in one shot, where every u^j is the full field at all grid points. The total time T is discretized into m+p time steps, where one chooses a large enough p such that u^j_i = u^j+1_i for j ∈ [m+1,m+p]; that is, after the attainment of a steady state, the solution produces p copies of the steady state solution. This is done only to improve the measurement probability of the post-selected state <cit.> and does not affect the solution itself. This method is stable only for an appropriate choice of Courant number (von Neumann stability criterion), α = Δ t/(Δ y)^2 < 0.5, making it conditionally stable and specific to the PDE under discussion, which therefore also decides the upper bound on the largest admissible Δ t.
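To make the block structure above concrete, here is a minimal numpy sketch that assembles and classically solves the one-shot forward-Euler system (the iterative BE scheme would instead invert (𝐈 - 𝐀Δ t) once per time step); names and the classical solve are illustrative stand-ins for the corresponding QLSA step.

```python
import numpy as np

def one_shot_fe(A, f, u0, dt, m, p):
    """Assemble A_fe * u_tilde = b_fe: block row 0 pins the initial condition,
    rows 1..m propagate u^j = (I + dt*A) u^{j-1} + dt*f, and rows m+1..m+p
    copy the last state (padding that improves the post-selection probability)."""
    Ng, nblk = A.shape[0], m + p + 1
    I = np.eye(Ng)
    A_fe = np.zeros((nblk * Ng, nblk * Ng))
    b_fe = np.zeros(nblk * Ng)
    for i in range(nblk):
        A_fe[i*Ng:(i+1)*Ng, i*Ng:(i+1)*Ng] = I
        if 1 <= i <= m:
            A_fe[i*Ng:(i+1)*Ng, (i-1)*Ng:i*Ng] = -(I + dt * A)
            b_fe[i*Ng:(i+1)*Ng] = dt * f
        elif i > m:
            A_fe[i*Ng:(i+1)*Ng, (i-1)*Ng:i*Ng] = -I
    b_fe[:Ng] = u0
    u_tilde = np.linalg.solve(A_fe, b_fe).reshape(nblk, Ng)
    return A_fe, b_fe, u_tilde          # u_tilde[j] is the velocity field at step j

# usage sketch, with A and f from a finite-difference set-up and dt obeying the
# Courant condition dt/(dy*dy) < 0.5:
# _, _, u = one_shot_fe(A, f, u0=np.zeros(A.shape[0]), dt=1e-3, m=200, p=8)
```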
§.§ Poiseuille flow
The discussion of the Poiseuille flow is provided in the main text and will not be repeated here.
§.§ Couette flow
Following the same procedure as for Poiseuille flow, the Couette flow (𝐮(0,t)=0, 𝐮(1,t)=1) can be captured accurately, as seen in the flow profiles of figure <ref>(a-d). Similar inferences as discussed in the main text are applicable in this case as well. We wish to highlight two possible measures of accuracy: (i) the fidelity = |𝐮_𝐐· 𝐮_𝐂| and (ii) the RMS error (with respect to the analytical solution), given by ϵ_rms=⟨(𝐮_𝐐-𝐮_𝐚𝐧)^2⟩^1/2, which are both plotted in figure <ref>(e) for a one shot FE scheme with a nearly accurate estimate of T^*_0. We can clearly observe that with increasing Q_PE, the fidelity increases, though the ϵ_rms has a weakly increasing trend. This indicates that higher fidelity does not necessarily indicate better ϵ_rms, showing that fidelity might not be the most robust measure of performance when solving physical, fluid mechanical problems. Higher fidelity only indicates larger overlap of the quantum solutions with respect to the classical inversion solution, which itself is erroneous due to finite discretization and truncation errors. Even if fidelity=1, the error is still bounded by ∼𝒪((Δ y)^2,Δ t). The solution for the one shot case is a wavefunction that encodes all time steps, but we extract only the final time step. The fidelity is an overall measure of this solution but does not quantify whether the vector subspace corresponding to the final time step is more accurate or not. Also, the fidelity in the iterative BE case drops substantially at every time step, since a dynamical T^*_0 is not chosen for every time step. However, higher fidelity can be seen as only an indicator of whether the chosen quantum parameters for QLSA are at least in the reasonable ballpark.
§ QFLOWS - A BRIEF OVERVIEW OF THE PACKAGE
QFlowS is a specialized high performance quantum simulator that enables setting up CFD problems in the QC format seamlessly. We summarize here briefly the various features of QFlowS as schematically depicted in figure <ref>(a).
* Qubits and quantum states: Qubits which are quantum analogues of classical bits, form the fundamental units of information storage and are represented by quantum states that follow rules of quantum mechanics. Mathematically, they form elements of a complex vector space (Hilbert space ℍ (∈ℂ^n)). An n-qubit state of the quantum computer, formed by taking tensor products of single qubit states |ψ⟩, is given by
|ψ⟩^⊗ n = ∑^2^n_i=1c_i|u_i⟩, c_i∈ℂ,
which encodes 2^n complex values c_i, in the basis |u_i⟩, that are stored as 1-dimensional arrays on QFlowS. The memory required should ideally scale linearly with the wavefunction size (≈ 16×2^n bytes with double precision), but there is an overhead due to the need to store quantum circuit instructions. Currently, QFlowS offers simulation capabilities with up to 30+ qubits, which span vector spaces with dimensions of the order ∼ 10^9. For performing parallelized simulations, these quantum states are either (a) loaded on a large-memory single-node architecture (with or without GPU) and handled with OpenMP-style parallelization, or (b) distributed onto different processors for an MPI-style execution. Both methods were tested initially, but the former was favored because of the simplicity of implementation and lower communication overheads. Some key operations on quantum states that can be done with QFlowS include:
* Quantum State Preparation (QSP) - To initialize any arbitrary states or states with special features. This is detailed further in SI Appendix, Section 3;
* Quantum state tomography and amplitude estimation - To estimate the amplitude of the final quantum state by reconstructing the state using different tomography techniques <cit.>;
* Quantum state characteristics - To compute other useful properties of the state such as density matrices, entanglement and norm.
* Quantum gates and circuits: Quantum gates are given by unitary operators U (UU^† = U^†U = ℐ), which are essentially rotation matrices and which collectively form a quantum circuit that manipulates quantum information in a specific way. The quantum circuit can be viewed as a tensor product of all the single qubit gates, forming a matrix of size 2^n×2^n for an n-qubit circuit. On QFlowS, the quantum gates are not implemented as matrix operations, but as algebraic operations. The exact transformation caused by different gates is translated into vector operations that affect the specific coefficients, done in parallel on multiple cores. This makes the simulator more efficient with respect to both memory and speed. An example of a Hadamard gate and a NOT gate acting on a two qubit state is shown in figure S4.
The action of such a circuit brings about a transformation on the state, as given by eq. 11. Such a transformation can be handled algebraically as follows. In an n-qubit circuit, a Hadamard gate acting on qubit q is H_q, for which u_i↦1/√(2)(u_i± u_i+2^n-q-1); a minimal sketch of this stride-update rule is given after this list. This operation is performed on vector elements by the parallel processing of causally disconnected (unentangled) gates and vector sub-spaces, without having to store or multiply the large 2^n× 2^n matrices. QFlowS can successfully handle circuits with ∼ 10 million multi-controlled and two-level gates even on a standard workstation with a 10 core CPU, 1TB memory and 32GB RAM.
* Algorithm library and portability: QFlowS is docked with several standard quantum subroutines or algorithms, such as the Quantum Fourier Transform (QFT) and Quantum Phase Estimation (QPE), that can be readily used in any new circuit. Along with that, QFlowS has classical CFD tools such as finite difference (FDM), finite volume (FVM) and boundary element (BEM) methods, implicit and explicit time stepping methods, predictor-corrector methods, and linearization methods such as the Homotopy Analysis Method (HAM) and the Carleman method, to highlight a few. These classical subroutines generate appropriate matrices and vectors in formats that can seamlessly be imported by the quantum algorithms. This package will offer easy portability onto both local workstations and supercomputers.
* Visualization: With QFlowS, the quantum states and quantum circuits can also be visualized with an in-built state histogram builder and circuit drawer, with which algorithms and states can be visualized and verified for correctness to simulate and test the circuits.
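As an illustration of the stride rule quoted in the gates item above, the following small Python sketch (illustrative only; QFlowS itself is C++) applies a Hadamard gate to one qubit of a 2^n amplitude array in place, without ever forming the 2^n×2^n matrix.

```python
import numpy as np

def apply_hadamard(state, q, n):
    """Hadamard on qubit q (0 = most significant) of an n-qubit amplitude array:
    pairs of amplitudes separated by the stride s = 2^(n-q-1) are mixed as
    u_i -> (u_i + u_{i+s})/sqrt(2),  u_{i+s} -> (u_i - u_{i+s})/sqrt(2)."""
    s = 1 << (n - q - 1)
    psi = state.reshape(-1, 2 * s)            # rows group indices differing only in bit q
    a, b = psi[:, :s].copy(), psi[:, s:].copy()
    psi[:, :s] = (a + b) / np.sqrt(2.0)
    psi[:, s:] = (a - b) / np.sqrt(2.0)
    return state

n = 3
state = np.zeros(2**n)
state[0] = 1.0                                # |000>
for q in range(n):
    apply_hadamard(state, q, n)
print(state)                                  # uniform superposition, all entries 1/sqrt(8)
```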
§ QUANTUM STATE PREPARATION
QFlowS is equipped with a Quantum State Preparation (QSP) library that includes algorithms that correspond to certain classes of states with specific properties. We briefly elucidate two such algorithms used here. When the state that is being initialized on a quantum computer has a specific functional form such as log-concavity, it is known that it can be prepared efficiently <cit.>. The original algorithm proposed in <cit.> proceeds as follows. Consider an arbitrary n-qubit initial state given by
|ψ⟩ = ∑_i=0^2^n-1√(p_i)|i⟩,
where p_i is the i-th region of discretely sampled elements/regions from a log-concave probability distribution function p(ω). With the existence of an efficient classical subroutine to perform partial sums given by
p_i = ∫^ω^i_R_ω^i_L p_ωdω ,
p_iL = ∫^(ω^i_R-ω^i_L)/2_ω^i_L p_ωdω,
where p_i and p_iL are probabilities of the point lying in the entire region i and the left half of the region i, respectively, we can construct a circuit that prepares an n-qubit state for k<m as
|ψ^(k)⟩ = ∑_i=0^2^k-1√(p^(k)_i)|i⟩.
Now we can further discretize this to yield the state
|ψ^(k+1)⟩ = ∑_i=0^2^k+1-1√(p^(k+1)_i)|i⟩
by the following steps. Given the ability to compute the quantities in eq. <ref>, we can compute the conditional probability function f_k(i)
f_k(i) = p_iL/p_i.
With this we now compute the next level of discretization by constructing a quantum arithmetic circuit that performs
|i⟩|0⟩↦ |i⟩|θ_i⟩,
where θ_i = arccos(√(f_k(i))). Further, by adding an ancillary qubit ((k+1)-th qubit) we perform controlled R_y(θ_i) rotation gates (controlled on θ); uncomputing the second register we get
√(p^(k+1)_i)|i⟩|θ_i⟩|0⟩ ↦√(p^(k)_i)|i⟩|θ_i⟩(cosθ_i|0⟩+sinθ_i|1⟩)
≡∑_i=0^2^k+1-1√(p^(k+1)_i)|i⟩ = |ψ^(k+1)⟩.
Now repeat this 𝒪(n) times to generate an n-qubit state with the distribution sampled over 2^n regions. Though the complexity of such a method could be seen as 𝒪(n), the quantum arithmetic circuits are generally expensive and hence one could alternatively follow an improvement as proposed in <cit.>. To construct a circuit based on the above method described here, we employ the binary tree formulation as shown in <cit.>, which we shall call QSP-1. Let us consider loading 4 arbitrary non-zero values onto a 2 qubit state as
|ψ⟩ = u_0|00⟩ + u_1|01⟩ + u_2|10⟩ + u_3|11⟩.
To construct the circuit with QSP-1, we create a binary tree as shown on the left panel of figure <ref>(a), where we start with the values to be loaded as the terminal nodes at the base of the tree. These values are pairwise squared (since probabilities are squares of amplitudes) and summed to build the tree upwards. Finally, all nodes converge to 1, as one should expect. Now, to prepare the quantum state, we traverse the tree downwards starting from the vertex, which corresponds to the initial state |ψ⟩ = |0⟩⊗|0⟩. Every node has two children: the zero-child and the one-child, whose values correspond to the |0⟩ and |1⟩ branches, respectively. Then, at every node, we compute the angle θ = arccos(√(zero-child/parent)) and apply a controlled R_y(θ) gate, where the control sequence is the bit sequence of the corresponding node. In the example shown in figure S4, to compute θ_2 we look at the parent node 0.75 and its zero-child 0.25, so that θ_2 = arccos(√(0.25/0.75)). Here the parent node 0.75 corresponds to a zero-child branch, so the controlled-R_y gate will operate when qubit 1 is set to 0. Thus, after R_y(θ_1) we get (√(0.75)|0⟩+√(0.25)|1⟩)⊗|0⟩. Further, with successive applications of the controlled gates R_y(θ_2) and R_y(θ_3), we obtain
|0⟩⊗|0⟩ ↦ (√(0.75)|0⟩+√(0.25)|1⟩)⊗|0⟩ ↦√(0.25)|00⟩ + √(0.5)|01⟩ + √(0.125)|10⟩ + √(0.125)|11⟩.
The corresponding quantum circuit for QSP-1 is shown in the left panel of figure <ref>(b). Now notice that such a method has a total of 𝒪(n) stages, which is logarithmic in the size of the state vector. Methods based on <cit.> will work best when the input velocity field to be loaded has a log-concave form. However, when we look at the gate complexity in the number of CNOTs, it grows exponentially as 𝒪(2^n). To ameliorate this, we employ QSP-2, which drastically reduces the CNOT gate count by ≥ 90%; but this is true for sparse quantum states and is based on the method proposed in <cit.>. In fact, this condition works in our favor for the linear solvers considered here. Firstly, we know that, between the two time-stepping schemes used in this work, the one shot FE and BE schemes are more accurate and efficient since they do not need repeated measurements and state preparation at every time step. The right hand side of the equation in that approach can be readily seen to be a sparse vector; importantly, we consider here all the quantum registers, including ancilla qubits, to be one single sparse state that needs to be prepared. For the flow problems and system sizes considered here, this provides an initial sparse quantum state with N_nz non-zero elements, where N_nz∈ [n,5n], with an improvement in CNOT count as high as ≈ 94%. Further, unlike other state preparation algorithms, which can require an exponentially large number of ancillary qubits, this method requires only one. To implement such a circuit, we consider the same 4 values but now loaded onto a 4 qubit state to make it sparse (N_nz=n). Again we construct a tree, called a Decision Diagram (DD) <cit.>, as shown in the right panel of figure <ref>(a). The tree is constructed such that q_1 through q_4 depict the different qubits |q_1q_2q_3q_4⟩. The solid and dashed arrows/edges denote whether that specific parent node evaluates a 0 or 1, and point to the corresponding zero-child or one-child. For instance, the left-most branch of QSP-2 in figure <ref>(a) has the sequence of solid-solid-dashed-solid lines, corresponding to the state |1101⟩. However, owing to possible redundant connections, one can sometimes invoke reduction rules <cit.> to simplify the DD by eliminating a few nodes; for instance, the right-most node with q_3 can be simplified as shown, since both children at the terminal nodes point to the same value. Thus, by following the rules detailed in <cit.>, one can generate an efficient circuit with minimal CNOT gates, as shown in fig. <ref>(b) (right panel), which is of the order k×𝒪(n)<<𝒪(2^n), where k is the number of paths ≤ N_nz in the DD, whereas the time complexity is 𝒪(2kn). This procedure drastically reduces the number of CNOT gates, making it more amenable for implementation on near-term QCs.
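For concreteness, the sketch below performs the QSP-1 bookkeeping for the four-amplitude example above: it builds the probability tree bottom-up and reads off the controlled-R_y angles, normalizing each zero-child by its parent node (the convention that reproduces the amplitudes of the worked example); emitting the actual circuit, and the QSP-2 decision-diagram reductions, are omitted here.

```python
import numpy as np

def qsp1_angles(amplitudes):
    """Build the binary probability tree for QSP-1 and return, level by level,
    the rotation angles theta = arccos(sqrt(zero_child / parent))."""
    probs = np.asarray(amplitudes, dtype=float) ** 2
    levels = [probs]
    while len(levels[-1]) > 1:                       # sum pairs to build the tree upwards
        levels.append(levels[-1].reshape(-1, 2).sum(axis=1))
    levels = levels[::-1]                            # levels[0] = [1.0], the vertex
    angles = []
    for k in range(len(levels) - 1):
        parent = levels[k]
        zero_child = levels[k + 1][0::2]
        safe_parent = np.where(parent > 0, parent, 1.0)
        ratio = np.where(parent > 0, zero_child / safe_parent, 1.0)
        angles.append(np.arccos(np.sqrt(ratio)))     # one controlled-R_y angle per parent node
    return angles

# the four-value example from the text: probabilities 0.25, 0.5, 0.125, 0.125
print(qsp1_angles(np.sqrt([0.25, 0.5, 0.125, 0.125])))
```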
§ QUANTUM POST PROCESSING
In this section we outline the Quantum Post Processing (QPP) protocol for computing the viscous dissipation rate per unit volume ε=ν⟨(∂ u/∂ y)^2⟩. The method proposed here is versatile, making it applicable to more general nonlinear quantities as well. In brief, to compute the above quantity we would first need to take the derivative of the velocity field with respect to the wall-normal direction ∂ u /∂ y, for which we employ the well-known spectral method as discussed in Section 4.A. Further, for computing the square of that quantity (or any nonlinear function) we first invoke the Quantum Analog-Digital Converter (QADC-QDAC)<cit.> to convert the representation of the derivatives into binary format and then perform either quantum arithmetic or direct controlled rotation operations to finally yield the squares of the derivatives in amplitude-encoding format, after undoing the QADC operation outlined in Section 4.B.
§.§ Velocity gradients - spectral method
Computing derivatives in spectral space instead of real space is tantamount to simple scalar multiplication of vector elements by corresponding wave-vectors k. Let us consider the following n-qubit state resulting from a QLSA solution
|ϕ⟩ = ∑_j=0^N_g-1u_j|j⟩.
where u_j are the velocities at the different grid points. First, we apply the Inverse Discrete Fourier Transform (IDFT), which in the quantum setting is the IQFT, to transition into spectral space as
U_IQFT|ϕ⟩ = 1/√(N_g)∑_k=0^N_g-1(∑_j=0^N_g-1u_j e^-2π ijk/N_g)|k⟩.
Next we multiply the state by a constant diagonal matrix Λ defined as,
Λ_kk = 2π i k for k ∈ [0,N_g/2 -1]; Λ_kk = 0 for k = N_g/2; and Λ_kk = 2π i (k-N_g) for k ∈ [N_g/2 +1,N_g-1],
and perform the QFT (given by the kernel e^2π ijk/N_g) to transform back, which yields a final state that gives the derivative in real space (u'=∂ u/∂ y) as
|ϕ'⟩ = U_IQFTΛ U_QFT|ϕ⟩ = ∑_j=0^N_g-1du_j/dy|j⟩≡∑_j=0^N_g-1u_j'|j⟩
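The classical counterpart of this QFT-Λ-IQFT pipeline is an ordinary FFT derivative; the short check below uses the same wavenumber convention as Λ (including the zeroed Nyquist mode) on a periodic test profile, and then evaluates ν⟨(∂ u/∂ y)^2⟩ with an assumed ν. It is meant only as a classical cross-check, not as part of the quantum circuit.

```python
import numpy as np

def spectral_derivative(u):
    """du/dy of a periodic sample on a unit interval: FFT, multiply by the
    diagonal 2*pi*i*k (Nyquist mode zeroed, as in Lambda), then inverse FFT."""
    N = len(u)
    k = 2j * np.pi * np.fft.fftfreq(N, d=1.0 / N)
    k[N // 2] = 0.0
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

y = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.sin(2.0 * np.pi * y)
dudy = spectral_derivative(u)
print(np.max(np.abs(dudy - 2.0 * np.pi * np.cos(2.0 * np.pi * y))))   # ~1e-12
nu = 0.1                                                              # assumed viscosity
print(nu * np.mean(dudy ** 2))                                        # nu * <(du/dy)^2>
```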
§.§ Nonlinear transform
The final step in computing the dissipation involves computing the squares of the velocity gradients that requires a nonlinear transformation of the quantum amplitudes. Consider a velocity gradient state vector prepared by an oracle U_V as shown in figure 1.a in the main text, which either results from a direct quantum state preparation algorithm or the QLSA itself. Our aim is to implement the mapping
|ζ⟩ = ∑_k=0^N_g-1u'_k|k⟩↦∑_k=0^N_g-1(u'_k)^2|k⟩.
This requires a total of 6 quantum registers as shown in figure <ref>, where q_up is an ancillary qubit to store the amplitude encoding of (u')^2, q_ub stores the r-bit basis encoding of u', q_add is the address register, q_ua encodes the input amplitude encoding of u and q_a1, q_a2 are ancillary qubits. Here, we implement a modified version of the Quantum Analog Digital Converter described in <cit.>.
The steps are as follows:
* STEP 1: Generate the basis superposition for the address qubits by applying Hadamard gates on the q_add register to yield 1/√(N)∑_s=0^N_add-1|s⟩_add. Following this, we apply a CNOT gate on q_a1 to clone the basis of the address register.
* STEP 2: We then load the velocity derivative values into q_ua by the oracle U_V, which gives 1/√(N)∑_s∑_cu'_c|s⟩_add|c⟩_ua|s⟩_a1.
* STEP 3: Since we are ultimately interested in a squared quantity, we can simply extract the absolute value of u' by using the SWAP test circuit (denoted by the portion of the circuit enclosed in the dotted box V in figure <ref>, excluding U_V). This is a procedure for comparing two states to determine their closeness by estimating the inner product. The SWAP test of two states |ϕ_1⟩ and |ϕ_2⟩ yields a quantity of the type 1/2(1-|⟨ϕ_1|ϕ_2⟩|). Here we use the test without any measurements, which gives us absolute values of u'. This leaves us with
= ∑_s1/2√(N)|s⟩_add[ (∑_c u'_c|c⟩_ua|s⟩_a1 +|s⟩_ua∑_c u'_c|c⟩_a1)|0⟩_a2 + (∑_c u'_c|c⟩_ua|s⟩_a1 -|s⟩_ua∑_c u'_c|c⟩_a1)|1⟩_a2]
= ∑_s1/√(N)|s⟩_add[ α_s0|ζ_s0⟩|0⟩_a2 + α_s1|ζ_s1⟩|1⟩_a2],
where |ζ_s0⟩ and |ζ_s1⟩ denote the normalized states given by the first and second parenthesized terms, respectively.
* STEP 4: We now perform Quantum Phase Estimation with a gate P defined as shown in fig. <ref> (bottom panel), where S_k = ℐ - 2(|0⟩⟨ 0|)_ua,a2⊗ (|s⟩⟨ s|)_a1, this being a conditional phase shift operator. The details of this step can be found in <cit.>. The QPE along with the IQFT results in the state
|ζ⟩_ub,ua,a1,a2 = 1/√(2N)∑_s|s⟩_add(|β⟩_ub|ζ_+⟩) +|(1-β)⟩_ub|ζ_-⟩),
where sin(πβ) = √(0.5(1+(u'_c)^2)) is stored as an r-bit basis representation, and ζ_± = 1/√(2)(|ζ_s0⟩± i|ζ_s1⟩) form the eigen-basis of P.
* STEP 5: Now, instead of performing quantum arithmetic as shown in <cit.>, we can directly compute the squares of u'_c and transform back to amplitude encoding in one go by applying conditional rotation operators. We introduce another ancillary qubit q_up and apply R_y(θ_r) operators on it (for a given c=r), conditioned on the q_ub qubits, where θ = 2γ. Since u'_r = √(2sin^2πγ_r-1), we have sin(2γ_r)=(u'_r)^2. Thus R_y(θ_r) performs the operation |0⟩_up↦√(1-(u'_r)^4)|0⟩ + (u'_r)^2|1⟩. For a given r-basis, these values of θ can be hard encoded into the circuit. Following this step, we undo the operations on q_ua, q_a1 and q_a2 to set them to |0⟩, which yields the final state
R∑_c=0^N(√(1-(u'_c)^4)|0⟩_up+ (u'_c)^2|1⟩_up)|c⟩_add|0⟩_ub,ua,a1,a2.
When measured in the computational basis after applying an X gate, this gives
R'∑_c=0^N(u'_c)^2|0⟩_up|c⟩_add|0⟩_ub,ua,a1,a2,
where R and R' are the corresponding normalization constants. The above equation is the transformation we sought in eq. <ref>. Given an n-qubit state such as |ψ⟩^n = ∑_p w_p|p⟩, we can compute the sum of all amplitudes by applying U_avg=H^⊗ n, which gives the amplitude of the |0⟩^⊗ n basis state as ∑_p w_p (up to normalization). Using this we can compute the sum ∑_c(u'_c)^2 and finally measure the first qubit basis state carrying this value. Of course, we would need to post-multiply it by the corresponding normalization constants to retrieve the right solution; importantly, we have to divide the final solution by 2, since sin(2πγ) = sin(2π(1-γ)), and from eq. <ref> we observe that we will get repeated values when we transform into amplitude encoding. Further, we multiply classically by the appropriate viscosity ν and divide the solution by the number of grid points to yield the final dissipation rate,
ε=ν⟨(∂ u/∂ y)^2⟩.
The complexity of the above QPP is as follows: (a) STEPS 1-3 require 𝒪(log_2N) gates; (b) STEP 4 has single and two qubit gates with a complexity of 𝒪((log_2N)^2/ϵ_QPP), along with 𝒪(1/ϵ_QPP) calls to U_V; (c) STEP 5, which uses controlled rotations, has complexity 𝒪(1/ϵ_QPP). Thus, the overall complexity is 𝒪((log_2N)^2/ϵ_QPP). U_V is either the QLSA itself or, if the form of the velocity field is known, it can be prepared by QSP-1 or QSP-2, amounting to a complexity of 𝒪(U_V) = min{𝒪(poly log (N/ϵ)κ /ϵ_QPP), 𝒪(n),𝒪(kn)}. The complexity of the entire algorithm presented in this work is summarized in Table <ref> (where n=log_2(N)). We caution the reader that these complexities are only estimates and warrant a more detailed analysis of the space, time and gate complexity of the algorithms presented here.
Weak Hadamard matrices and Weakly Hadamard diagonalizable graphs
Darian McLaren1, Hermie Monterde2, and Sarah Plosker3
August 1, 2023
================================================================
Department of Mathematics and Computer Science, Brandon University,
Brandon, MB R7A 6A9, Canada; [email protected]
Department of Mathematics, University of Manitoba, Winnipeg, MB, Canada R3T 2N2; [email protected]
Department of Mathematics and Computer Science, Brandon University,
Brandon, MB R7A 6A9, Canada; Department of Mathematics, University of Manitoba, Winnipeg, MB, Canada R3T 2N2; [email protected]
A weak Hadamard matrix is a {-1,0, 1}-matrix P such that PP^T is tridiagonal. We explore the underlying algebraic and combinatorial structure of weak Hadamard matrices and weakly Hadamard diagonalizable graphs (graphs whose Laplacian matrix is diagonalized by a weak Hadamard matrix). We also provide constructions and examples of such matrices and graphs. We then consider quantum state transfer with respect to such graphs.
Keywords: quantum state transfer, Hadamard matrices, weakly Hadamard diagonalizable graphs
MSC2010 Classification: 15A18; 05C50; 81P45
§ INTRODUCTION
Let ℳ_n be the space of real n× n matrices.
A Hadamard matrix H∈ℳ_n is a matrix whose entries are either 1 or -1 and satisfies
H^T H=n I_n,
where I_n is the n× n identity matrix.
The standard Hadamard matrices may be defined recursively through Sylvester's construction by first setting H_1=[1],
H_2=[ 1 1; 1 -1; ],
and constructing larger Hadamard matrices through the recursive equation H_2^k=H_2⊗ H_2^k -1, where k≥ 2 is any positive integer.
Recently, a generalization of Hadamard matrices was considered in <cit.>:
A matrix P∈ℳ_n is a weak Hadamard matrix if all entries of P are from {-1, 0, 1} and P^TP is a tridiagonal matrix.
We note here that the columns of a Hadamard matrix are mutually orthogonal. The generalization to weak Hadamard matrices relaxes this condition: each column need not be orthogonal to its neighbouring columns, though it must still be orthogonal to all non-neighbouring columns.
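To make the definition concrete, the small sketch below checks the two defining conditions; the 3×3 example has orthogonal non-consecutive columns but overlapping consecutive ones, illustrating the relaxation. The function name and examples are ours.

```python
import numpy as np

def is_weak_hadamard(P):
    """True if P has entries in {-1, 0, 1} and P^T P is tridiagonal,
    i.e. non-consecutive columns are orthogonal."""
    P = np.asarray(P)
    if not np.all(np.isin(P, (-1, 0, 1))):
        return False
    G = P.T @ P                                  # Gram matrix of the columns
    i, j = np.indices(G.shape)
    return not np.any(G[np.abs(i - j) > 1])      # zero off the three central diagonals

H2 = np.array([[1, 1], [1, -1]])
A = np.array([[1, 1, 1], [1, -1, 0], [1, 0, -1]])   # consecutive columns overlap
print(is_weak_hadamard(H2), is_weak_hadamard(A))     # True True
```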
The rows and columns of a Hadamard matrix can be permuted, and any row or column can be multiplied by -1, and the resulting matrix is still a Hadamard matrix. Thus, it is always possible to arrange to have the first row and the first column of a Hadamard matrix to be all 1's; such a Hadamard matrix is said to be normalized. Although the rows of a weak Hadamard matrix can be permuted, and any row or column can be multiplied by -1, column permutations in general may affect the quasi-orthogonality of the columns. One can clearly see that the identity matrix cannot be normalized, and thus not all weak Hadamard matrices can be normalized.
In <cit.>, weak Hadamard matrices were introduced as a generalization of Hadamard matrices to study weakly Hadamard diagonalizable graphs. Many properties of weak Hadamard matrices have yet to be explored.
Our aim herein is to understand these properties: In Section <ref> we consider algebraic and combinatorial properties of weak Hadamard matrices, and describe a number of methods for constructing such matrices. In light of the fact that not all permutations of columns are allowed (unlike the case of Hadamard matrices), we pinpoint exactly how many “equivalent” weak Hadamard matrices there are for each order of n. In Section <ref> we move on to graphs whose Laplacian matrix is diagonalized by a weak Hadamard matrix, and consider spectral properties of such graphs. We also construct more examples of weakly Hadamard diagonalizable graphs based on known ones. Finally, in Section <ref>, we look at quantum state transfer in weakly Hadamard diagonalizable graphs, first focusing on strong cospectrality, which is a necessary condition for many types of quantum state transfer; we then consider perfect state transfer and graph operations preserving perfect state transfer.
§ PROPERTIES OF WEAK HADAMARD MATRICES
In <cit.>, weak Hadamard matrices were introduced to study graphs whose Laplacian matrix is diagonalized by a weak Hadamard matrix. Theirs was the first study of this type of matrix. Therefore, many properties of weak Hadamard matrices have not yet been developed. Here, we ask what are the combinatorial and spectral properties of weak Hadamard matrices, which matrix operations preserve the property of being a weak Hadamard matrix, and provide constructions for producing weak Hadamard matrices.
§.§ Algebraic and Combinatorial properties
Hadamard matrices of order n>2 are known to be invertible and require that n≡ 0 (mod 4). Here, we show that these properties need not extend to weak Hadamard matrices.
Let P=[x_1,…,x_n]∈ℳ_n be a weak Hadamard matrix. Then P is not invertible if and only if for some j∈{2,…,n-1} we have x_j=a(x_j-1+b x_j+1),
where either (i) a, b∈{± 1}, or (ii) a ∈{±1/2} and b∈{± 1}, or (iii) a∈{± 1} and b=0. In particular, if P has pairwise orthogonal columns, then P is invertible and P^-1=Q^-1P^T, where Q=diag(‖x_1‖^2,…,‖x_n‖^2).
Since P is a weak Hadamard matrix, P is not invertible if and only if for at least one j≥ 2, x_j is equal to either (i) ±x_j-1 or (ii) a linear combination of both x_j-1, x_j+1. If we add that P has pairwise orthogonal columns, then conditions (i) and (ii) do not hold, and so P is invertible. In fact, P^TP=Q, and so (Q^-1P^T)P=I, which implies that P^-1=Q^-1P^T.
Given a matrix in ℳ_n, it is a basic fact from linear algebra that there is a column that is a linear combination of other columns, or a scalar multiple of another column, if and only if the matrix is not invertible. The conditions on the scalars a and b in the above proposition simply say that only certain linear combinations are allowed, owing to the fact that the entries of P are from {-1,0,1}. The final statement of the above proposition, about the form of P^-1, holds for any invertible matrix P∈ℳ_n with pairwise orthogonal columns (not necessarily a weak Hadamard matrix).
If we represent the matrix below by P=[x_1,…, x_6], then one can check that nonconsecutive columns of P are orthogonal, but P is not invertible because x_3=x_2+x_4 and x_5=x_4+x_6. Thus, a weak Hadamard matrix need not be invertible, and does not obey the n≡ 0 (mod 4) relation that Hadamard matrices obey.
P=[ [ 1 1 1 0 0 0; 1 -1 -1 0 0 0; 1 0 1 1 1 0; 1 0 -1 -1 -1 0; 1 0 0 0 1 1; 1 0 0 0 -1 -1 ]]
A weighing matrix W=W(n,w) is a square matrix with entries from the set {-1,0,1} such that W has n pairwise orthogonal columns each having exactly w>0 non-zero entries. Thus, if W is a weighing matrix, then W^TW=wI_n and Proposition <ref> implies that W^-1=1/w W^T. Note that a weighing matrix is a weak Hadamard matrix with pairwise orthogonal columns. In particular, a weighing matrix that can be normalized is a Hadamard matrix.
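As a quick illustration of this remark, the sketch below verifies W^TW = wI and the inverse formula for one particular weighing matrix W(4,3); the matrix is an illustrative choice of ours, not taken from the text.

```python
import numpy as np

# A weighing matrix W(4,3): entries in {-1,0,1}, three non-zero entries per column,
# pairwise orthogonal columns, hence W^T W = 3 I and W^{-1} = W^T / 3.
W = np.array([[ 0,  1,  1,  1],
              [-1,  0,  1, -1],
              [-1, -1,  0,  1],
              [-1,  1, -1,  0]])

print(np.array_equal(W.T @ W, 3 * np.eye(4, dtype=int)))   # True
print(np.allclose(np.linalg.inv(W), W.T / 3))               # True
```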
The form of P^-1 in Proposition <ref> yields the following result.
Let P=[x_1,…,x_n]∈ℳ_n
be an invertible matrix with pairwise orthogonal columns. Then
detP=±∏_j=1^n ‖x_j‖.
If P=W(n,w) is a weighing matrix, then detP=± w^n/2. In particular, if P is a Hadamard matrix (i.e., a weighing matrix with w=n), then detP=± n^n/2. Since Hadamard matrices have maximal determinant among matrices of the same order with entries having absolute value at most 1, it follows that Hadamard matrices have maximal determinant amongst all weak Hadamard matrices.
As stated earlier, it is well-known that Hadamard matrices of order n>2 satisfy n≡ 0 (mod 4). Below, we present sufficient conditions for when the same holds for weak Hadamard matrices. We will use the notation 1 to denote the all-ones vector of appropriate size.
Let P∈ℳ_n be a weak Hadamard matrix with two columns x and z that are orthogonal to 1. Then the following statements hold:
* The vector x has an even number of non-zero entries, exactly half of which are 1's.
* If x has k number of 1's and r number of 0's, then r=n-2k. If k is even, then n≡ r (mod 4).
* If x has all entries non-zero, then n=2k. Moreover, k is even if and only if n≡ 0 (mod 4).
* If (i) k is even and r≡ 0 (mod 4) or (ii) all entries of both x and z are non-zero, then n≡ 0 (mod 4).
Suppose x has k number of 1's and m number of -1's. Since 1^Tx=0, we get k=m, and so x has r=n-2k zero entries. In particular, if x has all entries non-zero, then n=2k. This proves (1)-(3).
Let us now prove (4). Condition (i) yields the desired conclusion by (2). Now, suppose condition (ii) holds. Then (3) implies that n is even and, by a suitable permutation of the rows of P, we may assume that z=[1,-1]^T in block form, where 1 denotes the all-ones vector of length n/2. As z^Tx=0, we get
∑_j=1^n/2x_j =∑_j=n/2+1^nx_j.
Assume that for the first n/2 entries of x, k_1 of them are equal to 1 and k_2 of them are equal to -1. Then ∑_j=1^n/2x_j=k_1-k_2. Moreover, k-k_1 of the latter n/2 entries of x are equal to 1, while k-k_2 of them are equal to -1, which gives us ∑_j=n/2+1^nx_j=(k-k_1)-(k-k_2)=k_2-k_1. Consequently, the above equation implies that k_1=k_2. Since x has all entries non-zero, k_1+k_2=n/2, and so n=2(k_1+k_2)=4k_1. Thus, n≡ 0 (mod 4).
Since a Hadamard matrix of order n>2 is a weak Hadamard with pairwise orthogonal columns that contain no zero entries, Theorem <ref>(4) implies that n≡ 0 (mod 4). Thus, Theorem <ref>(4) generalizes a well-known fact about the order of Hadamard matrices. Theorem <ref>(1-3) require only that a single column 𝐱 of P has entries in {-1,0,1}, thus the theorem applies to a wider range of matrices: matrices formed by the Laplacian eigenvectors of trivalent graphs <cit.>.
The following results provide some structure of the columns of a weak Hadamard matrix for odd dimensions; in particular, Corollary <ref> says that there is no normalized weak Hadamard matrix with pairwise orthogonal columns in dimension 5.
Let P∈ℳ_n be a normalized weak Hadamard matrix of dimension n=2k+1 with pairwise orthogonal columns. Then each column of P (other than 1) has three or more zero entries.
By Theorem <ref>, each column of P other than 1 must have an odd number of zero entries. Hence, it suffices to show that a column having precisely one zero entry is forbidden. We proceed by contradiction. Let x be a column of P that has precisely one zero entry. Without loss of generality, assume x(n)=0. It follows that there must be at least one column of P other than 1, which we denote by y, such that y(n)≠ 0; otherwise, deleting the last row of P would produce n-1 mutually orthogonal non-zero vectors of length n-1 that are also orthogonal to the all-ones vector of length n-1, which is impossible. Since y is orthogonal to 1 and y(n)≠ 0, the first n-1 entries of y contain an odd number of non-zero entries. However, all of the first n-1 entries of x are non-zero. It follows that x^Ty is a sum of an odd number of ± 1 terms and is therefore non-zero, so x and y are not orthogonal, a contradiction.
There is no normalized weak Hadamard matrix of dimension 5 that has pairwise orthogonal columns.
If such a matrix exists, then Lemma <ref> implies that each of the four columns other than 1 would have only two non-zero entries. But no such matrix can have pairwise orthogonal columns.
§.§ Sylvester-type Construction of Weak Hadamard Matrices
Here we explore the idea of an analogue of Sylvester's construction for the case of normalized weak Hadamard matrices to construct `standard (normalized) weak Hadamard matrices'.
Let K_n be the complete graph on n vertices.
Sylvester's construction of Hadamard matrices uses the matrix H_2=[ 1 1; 1 -1 ] to build Hadamard matrices of order a power of two. Here, H_2 is a matrix that diagonalizes K_2. Indeed, the tensor product of n copies of H_2 yields a Hadamard matrix of order 2^n. For (normalized) weak Hadamard matrices, one checks that H_2⊗ A is a (normalized) weak Hadamard matrix whenever A is. In this case, H_2⊗ A need not have pairwise orthogonal columns. For example, if we consider
A=
[ 1 1 1; 1 -1 0; 1 0 -1 ],
which is a normalized weak Hadamard matrix that diagonalizes K_3, then H_2⊗ A is a normalized weak Hadamard matrix whose columns are not pairwise orthogonal.
We now seek a matrix P such that H_2⊗ P is a normalized weak Hadamard matrix having pairwise orthogonal but is not a Hadamard matrix. Choosing P∈{H_2,A} does not produce the desired properties for H_2⊗ P, so we turn our attention to the matrix diagonalizing the complete graph K_n\ e minus an edge.
The weak Hadamard matrix that diagonalizes K_4\ e is
P_1= [ 1 1 1 0; 1 -1 1 0; 1 0 -1 1; 1 0 -1 -1 ].
Note that P_1 is a particular type of a normalized weak Hadamard matrix: its columns are pairwise orthogonal. We can see Theorem <ref> in action: each column with zero entries has twice as many zeros as 1's (-1's, respectively); that is, in each column, exactly half the zeros pair up with the 1's and half pair up with the -1's. Now, for ℓ≥ 2, define P_ℓ=[ [ P_ℓ-1 P_ℓ-1; P_ℓ-1 -P_ℓ-1 ]]. Then for each ℓ≥ 1, P_ℓ is a normalized weak Hadamard matrix of order 2^ℓ+1 with pairwise orthogonal columns that is not a Hadamard matrix. That is, for each n≥ 4 that is a power of two, this Sylvester type construction produces normalized weak Hadamard matrices with pairwise orthogonal columns that are not Hadamard matrices.
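The recursion above is easy to check numerically; the sketch below builds P_ℓ from P_1 and confirms that the columns are pairwise orthogonal while zero entries remain (so the matrix is not Hadamard). The function name is ours.

```python
import numpy as np

def sylvester_weak_hadamard(level):
    """P_1 (the matrix diagonalizing K_4 minus an edge) doubled recursively:
    P_l = [[P, P], [P, -P]] has order 2^(l+1) and pairwise orthogonal columns."""
    P = np.array([[1,  1,  1,  0],
                  [1, -1,  1,  0],
                  [1,  0, -1,  1],
                  [1,  0, -1, -1]])
    for _ in range(level - 1):
        P = np.block([[P, P], [P, -P]])
    return P

P2 = sylvester_weak_hadamard(2)                        # order 8
G = P2.T @ P2
print(np.array_equal(G, np.diag(np.diag(G))))          # True: pairwise orthogonal columns
print(np.any(P2 == 0))                                 # True: zero entries, so not Hadamard
```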
More generally, one can ask for which matrices A is it the case that A⊗ B
is a weak Hadamard matrix whenever B is a weak Hadamard matrix. The following proposition provides the answer.
Let A and B be weak Hadamard matrices with B not equal to the all-zeros matrix. Then A⊗ B is a weak Hadamard matrix if and only if the columns of A are all pairwise orthogonal.
The result follows by considering the entries of (A^T ⊗ B^T)(A⊗ B)=A^T A ⊗ B^T B.
A Hadamard matrix tensored with a weak Hadamard matrix is a weak Hadamard matrix.
We note that the above corollary is implicit in <cit.>.
§.§ Geometric and Other Constructions of Weak Hadamard Matrices
For any n, one can take any orthogonal vectors 𝐚, 𝐛, 𝐜, … with components in {-1,0,1}, then construct the matrix having two copies of each vector side by side as columns: P=[ 𝐚,𝐚,𝐛,𝐛,𝐜,𝐜,… ]. Such a matrix is a weak Hadamard matrix, with P^TP having non-zero entries only on the diagonal and the first sub- and super-diagonals.
More generally, for n=2k (even), partition ℝ^n into k orthogonal subspaces S_1, S_2, …, S_k. From each subspace select any two linearly independent vectors 𝐚_1, 𝐚_2 ∈ S_1, 𝐛_1, 𝐛_2 ∈ S_2, etc., while ensuring the vectors have components in {-1,0,1}. Then the matrix [ 𝐚_1, 𝐚_2, 𝐛_1,𝐛_2, … ] is a weak Hadamard matrix.
This construction yields weak Hadamard matrices that are invertible.
The above construction works for any n>1: we can decompose ℝ^n as the direct sum of orthogonal subspaces S_1, …, S_r where ∑_i=1^rdim(S_i)=n.
Another construction is a block design using the Williamson construction: A matrix
P = [ A B C D; -B A -D C; -C D A -B; -D -C B A; ]
is a weak Hadamard matrix provided X^T Y =Y^T X for X,Y∈{A,B,C,D} and A^TA + B^TB + C^TC+D^TD is tridiagonal, with A,B,C,D having entries in {-1,0,1}.
Paley's method for constructing Hadamard matrices uses the finite field of order q, where q is a power of an odd prime; for simplicity, we take q to be an odd prime, so that this field is ℤ_q, the integers mod q. Define
χ(a) =
0 if a ≡ 0
1 if a ≡ b^2 for some non-zero b ∈ℤ_q
-1 otherwise
Construct the circulant matrix C by setting the (i,j)-th entry to be χ(i-j).
Then the matrix H= [ 0 1^T; 1 C^T; ]
is a weak Hadamard matrix with pairwise orthogonal columns.
Another version of Paley's construction applies when q≡ 1 (mod 4): a Hadamard matrix is produced by replacing all the one entries of a certain matrix with H_2, all the -1 entries with -H_2, and all the zero entries with [ 1 -1; -1 -1 ]. One can use a similar method for constructing weak Hadamard matrices: take any Hadamard matrix and replace all the ones with any weak Hadamard matrix P, and all the -1's with -P. The resulting matrix is a weak Hadamard matrix. However, we note that this construction is equivalent to taking the tensor product of the Hadamard matrix with P, and so reduces to the construction described by Proposition <ref>.
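A short Python sketch of the Paley-type construction above (the function name is ours, and we assume q is an odd prime so that ℤ_q is a field):

\begin{verbatim}
import numpy as np

def paley_weak_hadamard(q):
    """Order q+1 weak Hadamard with pairwise orthogonal columns (q an odd prime)."""
    chi = np.zeros(q, dtype=int)
    residues = {(b * b) % q for b in range(1, q)}
    for a in range(1, q):
        chi[a] = 1 if a in residues else -1
    C = np.array([[chi[(i - j) % q] for j in range(q)] for i in range(q)])
    H = np.zeros((q + 1, q + 1), dtype=int)
    H[0, 1:] = 1
    H[1:, 0] = 1
    H[1:, 1:] = C.T
    return H

H = paley_weak_hadamard(7)
G = H.T @ H
assert np.count_nonzero(G - np.diag(np.diag(G))) == 0   # columns pairwise orthogonal
\end{verbatim}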
Another construction for weak Hadamard matrices is as follows. For n, k∈ℕ with k≤ n, define the set X_2^k of 2^n-dimensional vectors consisting of the 2^n-k vectors {𝐱_j}_j∈ℤ_2^n-k, where the i-th component of the vector 𝐱_j is given by:
𝐱_j(i)=
1 i∈ [2^k j + 1,…,2^k j + 2^k-1]
-1 i∈ [2^k j + 2^k-1 + 1,…,2^kj + 2^k]
0 otherwise
Note that each vector 𝐱_j will have 2^k consecutive non-zero entries, with an equal number of 1's and -1's. It can then be trivially seen that each set X_2^k consists of 2^n-k mutually orthogonal vectors, all of which are orthogonal to 1.
The set X=⋃_1≤ k ≤ nX_2^k combined with 1 gives a collection of 2^n mutually orthogonal vectors.
Consider 𝐱_a,𝐱_b∈ X where 𝐱_a∈ X_2^A and 𝐱_b∈ X_2^B. From the discussion preceding this proposition we may assume, without loss of generality, that A<B. Then by the construction of the X_2^k's we have that either 𝐱_a^T 𝐱_b = ±∑_i 𝐱_a(i)=0, or 𝐱_a and 𝐱_b have no overlapping non-zero entries. Either way, 𝐱_a and 𝐱_b are orthogonal, which completes the proof.
For every n∈ℕ there exists a weak Hadamard matrix H of order 2^n, such that H has pairwise orthogonal columns and contains 1 as a column, but where H is not a Hadamard matrix.
Using the construction in Proposition <ref>, we have the following weak Hadamard matrix of order 2^3 with pairwise orthogonal columns:
[ 1 1 1 0 1 0 0 0; 1 1 1 0 -1 0 0 0; 1 1 -1 0 0 1 0 0; 1 1 -1 0 0 -1 0 0; 1 -1 0 1 0 0 1 0; 1 -1 0 1 0 0 -1 0; 1 -1 0 -1 0 0 0 1; 1 -1 0 -1 0 0 0 -1; ]
The columns of this matrix can be seen to be the eigenvectors of K_8\ e. In general, the construction in Proposition <ref> will yield the matrix of eigenvectors of K_2^n\ e.
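The sets X_2^k are easy to generate directly; the following sketch (0-indexed, with helper names ours) assembles the resulting order-2^n matrix and confirms that it is a weak Hadamard with pairwise orthogonal columns containing 1, but is not a Hadamard matrix.

\begin{verbatim}
import numpy as np

def x_vectors(n, k):
    """The 2^(n-k) vectors of X_{2^k}: 2^k consecutive non-zero entries,
    the first half equal to 1 and the second half equal to -1."""
    vecs = []
    for j in range(2 ** (n - k)):
        x = np.zeros(2 ** n, dtype=int)
        x[2 ** k * j: 2 ** k * j + 2 ** (k - 1)] = 1
        x[2 ** k * j + 2 ** (k - 1): 2 ** k * (j + 1)] = -1
        vecs.append(x)
    return vecs

n = 3
cols = [np.ones(2 ** n, dtype=int)]
for k in range(1, n + 1):
    cols.extend(x_vectors(n, k))
H = np.column_stack(cols)
G = H.T @ H
assert np.count_nonzero(G - np.diag(np.diag(G))) == 0   # 2^n mutually orthogonal columns
assert np.any(H == 0)                                    # not a Hadamard matrix
\end{verbatim}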
Similar to the Williamson construction we have the following lemma.
Let G and H be matrices with entries in {-1, 0,1}. If H^T H + G^T G is tridiagonal and H^T G - G^T H = 0, then
[ H G; G -H ]
is a weak Hadamard matrix.
The proof follows immediately by multiplying out the transpose of the above block matrix with its original.
While the above lemma may be used to construct weak Hadamard matrices of higher order, finding suitable matrices that satisfy the commutation relation may be troublesome. Using the above form as motivation, we obtain a similar construction in the following proposition whose criteria are easier to satisfy.
Let G and H be weak Hadamard matrices of order n such that 𝐱 is the first column of both matrices, and, moreover, that 𝐱 is orthogonal to every other column in G and H. Then the block matrix
P=
[ H X; X -G ],
with X the matrix with first column 𝐱 and all others 0, is a weak Hadamard matrix. Furthermore, if G and H have pairwise orthogonal columns, then so too does P.
We compute
[ H^T X^T; X^T -G^T ][ H X; X -G ]
=
[ H^T H + X^T X H^T X - X^T G; X^T H - G^T X G^T G + X^T X ].
By assumption of G and H being weak Hadamard matrices, we have that H^T H + X^T X and G^T G + X^T X are tridiagonal. Furthermore, as all three matrices have first column equal to 𝐱 and the matrix X has all other columns equal to 0, it follows that
H^T X = X^T G = X^T H = G^T X,
and in particular the above products are simply the matrix with (1,1) entry 𝐱^2 and all other entries equal to 0. The result follows.
It is immediately apparent that by setting 𝐱=1 we may construct normalized weak Hadamard matrices: weak Hadamard matrices having the all-ones vector as their first column. In fact, the weak Hadamard matrix given in Example <ref> above can alternatively be determined by setting
G=H=[ 1 1 1 0; 1 1 -1 0; 1 -1 0 1; 1 -1 0 -1; ],
applying Proposition <ref>, followed by a permutation and negation of certain rows and columns.
§.§ Equivalent Weak Hadamard Matrices
Let P=[x_1,…,x_n] be a weak Hadamard matrix.
If all columns of P are pairwise orthogonal (i.e., P^TP is diagonal), then a permutation of the columns of P yields an equivalent weak Hadamard matrix. The same holds if P^TP is tridiagonal with diagonal blocks at most 2× 2.
Now, suppose consecutive columns x_1,…,x_m_1 of P are not pairwise orthogonal. Let us first examine the case m_1=3, so that P=[x_1,x_2,x_3].
* If x_2 is orthogonal to x_1 but not to x_3, then we obtain a weak Hadamard matrix by treating x_2,x_3 as a block B and permuting B and x_1. We may permute the elements of B, and so this yields four equivalent weak Hadamard matrices. The same holds when x_2 is orthogonal to x_3 but not to x_1.
* If x_2 is not orthogonal to x_1 and x_3, then only the permutation switching x_1 and x_3 would result in a weak Hadamard matrix.
Now, suppose m_1=4 so that P=[x_1, x_2,x_3, x_4] with x_1^T x_3=x_1^Tx_4=0 and x_2^T x_4=0.
* Suppose x_2 is orthogonal to x_1 (so that x_1 is orthogonal to the rest) but not to x_3. We proceed with two subcases. First, suppose x_3 is not orthogonal to x_4 so that x_3 is not orthogonal to both x_2 and x_4. In this case, we obtain a weak Hadamard matrix by treating x_2,x_3, x_4 as a block B and permuting B and x_1. We may permute the first and last elements of B, and so this again yields four equivalent weak Hadamard matrices. Next, suppose x_3 is orthogonal to x_4. In this case, we obtain a weak Hadamard matrix by treating x_2,x_3 as a block B, and permuting B, x_1 and x_4. We may permute the two elements of B, and so this yields twelve equivalent weak Hadamard matrices.
* Suppose x_2 is orthogonal to x_3 but not to x_1. We have two subcases. If x_3 is not orthogonal to x_4, then we may treat x_1,x_2 and x_3,x_4 as blocks, and we obtain a weak Hadamard matrix by permuting these blocks and their elements. This yields eight equivalent weak Hadamard matrices. If x_3 is orthogonal to x_4, then we again get twelve equivalent weak Hadamard matrices.
* Suppose x_2 is orthogonal to x_1 and x_3. Then x_3 is not orthogonal to x_4 because not all consecutive columns are pairwise orthogonal by assumption. A similar argument to the previous case yields twelve equivalent weak Hadamard matrices.
* Suppose x_2 is not orthogonal to x_1 and x_3 and x_4 is orthogonal to x_3. Then we obtain a weak Hadamard matrix by treating x_1,x_2,x_3 as a block and permuting this block and x_4. The only permutation allowed within x_1,x_2,x_3 is switching the first and third column. This yields four permutations.
* Suppose x_2 is not orthogonal to x_1 and x_3 and x_4 is not orthogonal to x_3. Then only one permutation of the columns (reversal of the ordering) yields a weak Hadamard matrix.
As we've seen above, for low-dimensional cases there are only a handful of valid permutations that allow us to permute the columns of P while preserving the weak Hadamard property. It is a simple, albeit somewhat tedious, process to count them all. For higher dimensional cases with significantly more valid permutations, this is no longer a reliable strategy. Our goal then is to develop a formula such that one can input some defining characteristic of a given weak Hadamard and be able to immediately compute the number of valid permutations. To that end, we note that a permutation on the columns of a weak Hadamard matrix P is equivalent to right multiplication on P by a permutation matrix Q. It follows that Q is a valid permutation of P if and only if Q^T P^T P Q is tridiagonal. Therefore, instead of considering permutations on the columns of P we may instead consider permutations on the rows/columns of P^T P. While this may not initially seem easier, we note that in the low dimensional examples above they each devolved into a series of cases assuming which columns of P are orthogonal to each other. This can be viewed as simply assuming a particular zero-nonzero pattern of P^T P, and so it will be useful to consider P^T P directly.
From here we may assume that P^T P has the form of a block diagonal matrix. Letting a_P be the total number of such blocks we set
P^T P = diag(Y_1, Y_2, …, Y_a_P ),
where the dimension of each block Y_i may vary from 1 to n. We note that there are only two types of valid permutations: (i) permuting entire blocks, or (ii) mirroring a block across its antidiagonal. While each of these permutations being valid is rather intuitive, it is not necessarily obvious that these are the only valid permutations. To justify this we return to considering the matrix P. We note that a given block Y_i of dimension m is associated to a sequence of adjacent columns in P. Each column in this sequence is by construction not orthogonal to both of its neighbours, with the exception of the first and last columns in the sequence which are orthogonal to their, respectively, left and right neighbour (if they exist). By the quasi-orthogonality condition on the columns of P, any permutation on P must leave invariant the neighbours of each column in the sequence, again with the exception of the fist and last column. Hence the only valid permutations on P are those which preserve the sequential ordering of the columns associated to the block Y_i, or reverse the ordering. Leaving the permutations enumerated above.
With these as the only permutations to consider, the problem of determining the number of equivalent weak Hadamard matrices one can construct by permuting the columns of P is straightforward. However, before we present the formula, we must first discuss the case where P contains two columns that are precisely the same; a problem not present when considering Hadamard matrices. Due to the quasi-orthogonality condition on weak Hadamard matrices, a weak Hadamard may contain no more than two copies of the same column, and by necessity they must be adjacent. From the requirement that they are adjacent it follows that this duplicated column must be orthogonal to every other column of P, and hence these two duplicated columns would correspond to a 2× 2 block in P^T P.
We now recall, and make the following definitions for a weak Hadamard P: a_P is the number of blocks of any size in the block diagonal form of P^T P, b_P is the number of blocks of size greater than or equal to 2, c_P is the number of pairs of identical columns in P. With these definitions we now have the following theorem, which can be used to verify the 3- and 4-dimensional cases above.
Let P be a weak Hadamard matrix. Then the number of equivalent weak Hadamard matrices attained by permuting of the columns of P, which we denote by d(P), is given by
d(P) = (2^b_P/2^c_P) a_P!
Follows from the discussion preceding the theorem.
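As a sanity check, d(P) can be compared against a brute-force enumeration of the column orderings that preserve the weak Hadamard property; the following sketch (helper names ours) does this by testing whether Q^T P^T P Q is tridiagonal for every permutation matrix Q. Applied to the 3×3 matrix A above that diagonalizes K_3, it returns 4, in agreement with the first three-dimensional case discussed earlier.

\begin{verbatim}
import numpy as np
from itertools import permutations

def is_tridiagonal(M):
    n = M.shape[0]
    return all(M[i, j] == 0 for i in range(n) for j in range(n) if abs(i - j) > 1)

def count_valid_column_orderings(P):
    """Number of column orderings of P that again yield a weak Hadamard matrix."""
    G = P.T @ P
    n = G.shape[0]
    count = 0
    for perm in permutations(range(n)):
        Q = np.eye(n, dtype=int)[:, list(perm)]
        if is_tridiagonal(Q.T @ G @ Q):
            count += 1
    return count

A = np.array([[1, 1, 1], [1, -1, 0], [1, 0, -1]])   # diagonalizes K_3
print(count_valid_column_orderings(A))              # 4
\end{verbatim}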
The theorem above purely considers permutations on the columns of P. However, we may negate any row/column, or permute any row in P, and obtain a weak Hadamard matrix. These operations in isolation are easy to consider when attempting to determine the weak Hadamard matrices equivalent to P. However, combining several different types of operations together poses an issue. For example, suppose the first three columns of P are the standard basis vectors 𝐞_1, 𝐞_2, and 𝐞_3. Swapping the first two columns produces a new weak Hadamard matrix different from P. This operation is identical however to swapping the first two rows of P. Moreover, performing both of these actions in sequence leaves P unchanged. It becomes clear that if one wanted to expand the formula in <ref> to additionally allow for, e.g. permutations on the rows of P, some careful consideration is required. We leave this open to future work.
§ WEAKLY HADAMARD DIAGONALIZABLE GRAPHS
We begin by first recalling some definitions from graph theory, which will be useful in this and following sections. For a simple weighted graph X, the adjacency matrix A(X)∈ℳ_n is a matrix such that the (i,j)-th component a_ij corresponds to the edge weight between vertices i and j (with a_ij=0 if no such edge exists). If the (non-zero) edge weights between every vertex in the graph are all equal to one, then X is said to be an unweighted graph. We will consider only undirected graphs herein, so that a_ij=a_ji for all i,j=1, …, n. The degree matrix D(X)∈ℳ_n is the diagonal matrix whose components d_ii=deg(i), referred to as the degree of the vertex i, correspond to the sum of all weights incident on the vertex i. If the degree of every vertex is equal, then X is said to be regular. The Laplacian L(X) of a weighted graph X is defined as L(X)=D(X)-A(X).
A graph X is Hadamard diagonalizable if its Laplacian is diagonalizable by a Hadamard matrix. Similarly, a graph X is weakly Hadamard diagonalizable (WHD) if its Laplacian is diagonalizable by a weak Hadamard matrix. Note that if a graph is Hadamard diagonalizable or WHD, then it is diagonalizable by a normalized Hadamard or normalized weak Hadamard, respectively. For a WHD graph X, we denote by P_X the normalized weak Hadamard diagonalizing the Laplacian of X.
In this Section, we consider the combinatorial and spectral properties of weakly Hadamard diagonalizable graphs, and provide constructions and examples of graphs that are weakly Hadamard diagonalizable.
§.§ Eigenvalues and eigenvectors of WHD graphs
We say that X is Laplacian integral if the Laplacian eigenvalues of X are all integers. The following is an extension of <cit.> to weighted WHD graphs. We include the proof for completeness.
If X is an integer-weighted WHD graph, then X is Laplacian integral.
Let S=[1,x_2,…,x_n] be the weak Hadamard diagonalizing L(X). Since L(X) has integer entries and each x_j has entries from {0,-1,1}, the number λ_j satisfying L(X)x_j=λ_jx_j must be an integer.
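As a concrete illustration, the following sketch verifies that the weak Hadamard P_1 of Example <ref> diagonalizes L(K_4\ e) (with the labelling chosen, as an assumption, so that the missing edge joins the first two vertices) and that the resulting eigenvalues are integers.

\begin{verbatim}
import numpy as np

L = np.array([[ 2,  0, -1, -1],      # L(K_4 \ e), missing edge between vertices 0 and 1
              [ 0,  2, -1, -1],
              [-1, -1,  3, -1],
              [-1, -1, -1,  3]])
P1 = np.array([[1,  1,  1,  0],
               [1, -1,  1,  0],
               [1,  0, -1,  1],
               [1,  0, -1, -1]])
D = np.linalg.inv(P1) @ L @ P1
assert np.allclose(D, np.diag(np.diagonal(D)))       # P_1 diagonalizes L
print(np.round(np.diagonal(D)).astype(int))          # integer spectrum: [0 2 4 4]
\end{verbatim}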
For our next result, we denote the weight of the edge between vertices u and v
by ω[u,v].
Let X be a weighted WHD graph on n vertices. If 1≤ k≤n/2 and x is an eigenvector of L(X) with corresponding eigenvalue λ such that k of its entries are equal to 1, then the following hold.
* If k=1, then x=e_u-e_v for some pair of distinct vertices u and v in X and λ=deg(u)+ω[u,v].
* If k≥ 2, then x=∑_u∈ Ue_u-∑_v∈ We_v for some disjoint nonempty subsets U and W of V(X) each of size k such that for any two vertices u∈ U and v∈ W, we have
∑_w∉ U∪ Wω[u,w]=∑_w∉ U∪ Wω[v,w],
and for a fixed u∈ U, we have
λ=2∑_v∈ Wω[u,v]+∑_w∉ U∪ Wω[u,w].
In particular, λ is even if and only if one side of (<ref>) is even. Further, if X is unweighted, then λ is even if and only if each u∈ U has an even number of neighbours in V(X)\ (U∪ W).
By assumption, we may relabel the vertices of X so that x=[1,-1,0]^T. Since x is orthogonal to 1, Theorem <ref>(1) implies that k entries of
x are equal to -1. Let U and W be the vertices corresponding to k entries of x equal to 1 and -1, respectively. If X_1, X_2 and X_3 are the subgraphs of X formed by U, W and V(X)\ (U∪ W) respectively, then we obtain
[ [ L(X_1)+R_1 -P_1 -P_2; -P_1 L(X_2)+R_2 -P_3; -P_2^T -P_3^T L(X_3)+R_3 ]][[ 1; -1; 0 ]]=λ[[ 1; -1; 0 ]].
where R_j is diagonal for j=1,2,3, R_11=(P_1+P_2)1, R_21=(P_1+P_3)1 and R_31=(P_2^T+P_3^T)1. Thus, λ1=L(X_1)1+R_11+P_11 =(2P_1+P_2)1 and -λ1=-L(X_2)1-R_21-P_11 =-(2P_1+P_3)1, and so P_21=P_31. This proves (<ref>), which holds for all 1≤ k≤n/2. By noting that u and v are what are known as twin vertices whenever k=1, (1) follows directly from <cit.>. Now, since X is integer-weighted, it follows that λ is even if and only if all entries of P_21 are even. In particular, if n=2k, then X_3 is an empty graph. In this case, the matrices P_2,P_2^T,P_3,P_3^T and L(X_3)+R_3 are absent in L(X), and so we may regard any vertex of X_1 as having 0 neighbours in X_3. The same argument then yields λ1=2P_11, from which it follows that λ is even.
If x in Theorem <ref>(2) has no zero entries, i.e., U∪ W=V(X) so that x satisfies k=n/2, then λ=2∑_v∈ Wω[u,v] for any u∈ U, and hence λ is even. In particular, if X is an integer weighted Hadamard diagonalizable graph, then k=n/2 for any column of P_X, and so Theorem <ref> implies that each eigenvalue of L(X) is even. This generalizes <cit.>.
We end this subsection with a conjecture based on Lemma <ref> and Corollary <ref>:
For any odd dimension, there is no weakly Hadamard diagonalizable graph whose Laplacian eigenspaces have equal algebraic and geometric multiplicities (that is, there is no weak Hadamard matrix with pairwise orthogonal columns diagonalizing the Laplacian matrix of the graph).
§.§ Unions and complements
It is already known that the union of two WHD graphs yields a WHD graph <cit.>. We add to this result by determining the weak Hadamard that diagonalizes a union of WHD graphs. We denote the union of graphs X_1,…,X_k by _j=1^k X_j, which is a graph whose vertex set is _j=1^k V(X_j) and edge set _j=1^k E(X_j). We also denote the matrix M with its j-th column deleted by M[j].
Let X_1,… X_k be weighted graphs on n_1,…,n_k vertices, where each L(X_j) is diagonalized by the matrix S_j whose first column is 1 and all other columns of S_j are orthogonal to 1. Then X=_j=1^k X_j is diagonalized by the matrix
[ 1 v_1 … v_k-1 | ⊕_j=1^k S_j[1] ]
where each v_j is a vector of order ∑_j=1^kn_j given by
v_j=e_j⊗1_{n_j}-e_{j+1}⊗1_{n_{j+1}}.
In particular, if each X_j is weighted WHD, and either k=2 or n_1=…=n_k, then X is also WHD.
Since S_j has 1 as its first column and all other columns are orthogonal to 1, the vectors e_j⊗1_{n_j} for each j∈{1,…,k} are eigenvectors associated to the eigenvalue 0 of L(X). Observe that B={1,v_1,…,v_k-1} is a linearly independent set of eigenvectors associated to the eigenvalue 0 of L(X), and so the matrix in (<ref>) diagonalizes X. Moreover, each vector in B has entries from the set {0,-1,1} and is orthogonal to every column of ⊕_j=1^k S_j[1]. Thus, if each X_j is weighted WHD, and either k=2 or n_1=…=n_k, then the non-consecutive elements in B are orthogonal, and so X is WHD.
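The following sketch illustrates the proposition for k=2 with X_1=K_4\ e and X_2=C_4, where the vertex labellings are chosen (as an assumption) so that both Laplacians are diagonalized by P_1; it assembles the matrix in (<ref>) and checks that it diagonalizes L(X_1⊔ X_2).

\begin{verbatim}
import numpy as np

L1 = np.array([[2, 0, -1, -1], [0, 2, -1, -1], [-1, -1, 3, -1], [-1, -1, -1, 3]])  # K_4 \ e
L2 = np.array([[2, 0, -1, -1], [0, 2, -1, -1], [-1, -1, 2, 0], [-1, -1, 0, 2]])    # C_4
P1 = np.array([[1, 1, 1, 0], [1, -1, 1, 0], [1, 0, -1, 1], [1, 0, -1, -1]])
Z = np.zeros((4, 4), dtype=int)
Z3 = np.zeros((4, 3), dtype=int)

L_union = np.block([[L1, Z], [Z, L2]])
ones = np.ones((8, 1), dtype=int)
v1 = np.array([1, 1, 1, 1, -1, -1, -1, -1]).reshape(8, 1)
tail = np.block([[P1[:, 1:], Z3], [Z3, P1[:, 1:]]])
M = np.hstack([ones, v1, tail])                    # [ 1  v_1 | S_1[1] (+) S_2[1] ]
D = np.linalg.inv(M) @ L_union @ M
assert np.allclose(D, np.diag(np.diagonal(D)))     # M diagonalizes the union's Laplacian
\end{verbatim}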
If one of the X_j's in the above proposition is the graph K_2⊔ O_1, then D=[ 1 0 1; 1 0 -1; 0 1 0 ] diagonalizes L(K_2⊔ O_1), where the first two columns of D are orthogonal eigenvectors associated to the eigenvalue 0. Thus, if v is a vector with entries in {-1,0,1} such that {1,v} spans the eigenspace of L(K_2⊔ O_1) associated to 0, then v cannot be orthogonal to 1. In this case, the conclusion of Proposition <ref> does not apply.
If k=2, then the last statement of Proposition <ref> holds for Hadamard diagonalizable graphs whenever X=Y <cit.>, but could fail whenever X≠ Y. Indeed, Breen et al. showed that the only disconnected graphs on 8k+4 vertices that are Hadamard diagonalizable are K_2k+2⊔ K_2k+2 and O_8k+4 <cit.>. Thus, if X and Y are Hadamard diagonalizable, and either X≠ K_2k+2 or X is not an empty graph, then X⊔ Y is not Hadamard diagonalizable whenever X⊔ Y has 8k+4 vertices.
With the assumption in Proposition <ref>, suppose each X_j is weighted WHD and n_1=…=n_k. If k=2^ℓ and each S_j has pairwise orthogonal columns, then X is WHD and L(X) is diagonalized by
[ Q | ⊕_j=1^k S_j[1] ]
with pairwise orthogonal columns, where Q=[ 1 1; 1 -1 ]⊗1_n_1 if ℓ=1 and P_ℓ-1⊗1_n_1 otherwise, where P_ℓ is the matrix in Example <ref>.
From the proof of Proposition <ref>, the vectors e_j⊗1_n_k for each j∈{1,…,k} are eigenvectors associated to the eigenvalue 0 of L(X). If ℓ=1, then we may choose v_1 such that the first two columns of the matrix in (<ref>) is [ 1 1; 1 -1 ]⊗1_n_1. For ℓ≥ 2, we may choose the v_j's such that first k columns of the matrix in (<ref>) is P_ℓ-1⊗1_n_1, where P_ℓ is the matrix in Example <ref>. In both cases, the matrix in (<ref>) has pairwise orthogonal columns. Since S_j has pairwise orthogonal columns, the result is immediate.
It is known that the complement X^c and the join X∨ X are Hadamard diagonalizable whenever X is <cit.>. For X^c, this is known to extend to WHD graphs under mild conditions <cit.>.
Let X be unweighted. If L(X) is diagonalized by a matrix S whose each column distinct from 1 is orthogonal to 1, then X^c is also diagonalized by S. In particular, if X is WHD, then so is X^c.
The assumption about the matrix S in Proposition <ref> is indeed necessary. For instance, the graph K_2⊔ O_1 is diagonalized by the weak Hadamard matrix D in Remark <ref>. But since this matrix D does not satisfy the assumption in Proposition <ref>, it follows that X^c=P_3 is not WHD. Indeed, one checks that L(P_3) has 3 as a simple eigenvalue with [1,-2,1]^T as an associated eigenvector.
If X is connected and WHD, then Proposition <ref> implies that X^c is WHD. On the other hand, if X=_j=1^kX_j, where the X_j's are WHD graphs on the same number of vertices and each L(X_j) is diagonalized by the matrix S_j whose columns other than 1 are orthogonal to 1, then X^c is also WHD. In particular, if X is a disconnected regular graph whose components are WHD and have equal sizes, then X^c is also WHD by the preceding statement <cit.>. But as the next example shows, there are disconnected non-regular WHD graphs whose components have equal size, where the complement also happens to be WHD.
Amongst all unweighted graphs on four vertices, exactly four non-isomorphic ones are Hadamard diagonalizable, namely K_4, C_4, K_2⊔ K_2 and O_4. Moreover, there are two non-isomorphic WHD graphs on four vertices that are not Hadamard diagonalizable with P_X having pairwise orthogonal columns, namely K_4\ e and O_2⊔ K_2. In fact, the Laplacian matrices of these six graphs are diagonalizable by the matrix P_1 in Example <ref>, which is a weak Hadamard with pairwise orthogonal columns. Let k=2^ℓ and X_(k)=_j=1^kX_j, where X_j∈{K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2}.
* By Corollary <ref>, X_(k) is WHD and P_X_(k)=[Q | ⊕_j=1^k P_1[1] ], where Q=[ 1 1; 1 -1 ]⊗1_n_1 if ℓ=1 and P_ℓ-1⊗1_n_1 otherwise, where P_ℓ is the matrix in Example <ref>.
* Suppose at least two of the X_j's are distinct so that X_(k) is not regular. By Proposition <ref>, X_(k)^c is WHD, and so ℱ={X_(k):k=2^ℓ,ℓ≥ 1} is an infinite family of disconnected non-regular WHD graphs, where the X_j's have equal sizes and each X_(k)^c is WHD. If we restrict each X_j∈{K_4,C_4,K_4\ e}, then each X_(k)∈ℱ has components of equal sizes. Moreover, neither X_(k) nor X_(k)^c in this case are Hadamard diagonalizable, but P_X_(k)=P_X_(k)^c has pairwise orthogonal columns.
§.§ Joins
The following result shows that under mild conditions, X∨ X is WHD whenever X is. It is a special case of <cit.>, however, the proof herein does not rely on recursively balanced partitions, and it explicitly shows the weak Hadamard matrix that diagonalizes X∨ X.
Let X and Y be weighted graphs, and S be a matrix whose first column is 1 and all other columns are orthogonal to 1. If L(X) and L(Y) are diagonalized by S, then L(X⊔ Y) and L(X∨ Y) are diagonalized by [ S S; S -S ] if and only if X=Y. In particular, if Λ=S^-1L(X)S, where Λ=diag(0,λ_2,…,λ_n), then
Λ'=1/2[ [ S^-1 S^-1; S^-1 -S^-1 ]]L(X∨ X)[ [ S S; S -S ]],
where Λ'=diag(0,λ_2+n,…,λ_n+n,2n,λ_2+n,…,λ_n+n). Further, if X is WHD, then so are X⊔ X and X∨ X.
Let S=[1,x_2,…,x_n] be such that Λ_X=S^-1L(X)S and Λ_Y=S^-1L(Y)S, where Λ_X and Λ_Y are diagonal matrices. Since L(X∨ Y)= [ L(X)+nI -J; -J L(Y)+nI ], where 𝐉 is the square all-ones matrix of appropriate size, one checks that
Λ'=1/2[ S^-1 S^-1; S^-1 -S^-1 ]L(X∨ Y)[ S S; S -S ]=[ 1/2(Λ_X+Λ_Y) +nI -S^-1JS Λ_X-Λ_Y; Λ_X-Λ_Y 1/2(Λ_X+Λ_Y) +nI +S^-1JS ]
Note that the (u,v) entry of S^-1JS is given by e_u^TS^-1JSe_v=e_u^TS^-1Jx_v. By assumption of the columns of S being orthogonal to 1, we have Jx_v=0 unless v=1. Now, note that S^-1 has first row equal to 1/n1, and the rest are orthogonal to 1. Thus, if v = 1, then we obtain
e_u^TS^-1Jx_1=ne_u^TS^-11=ne_u^Te_1.
Consequently, the (u,v) entry of S^-1JS is equal to n whenever u=v=1 and 0 otherwise. This implies that Λ' is diagonal if and only if Λ_X=Λ_Y, i.e., X=Y. In particular, if X=Y, then one checks that Λ'=(0,λ_2+n,…,λ_n+n,2n,λ_2+n,…,λ_n+n). The same argument applies to X⊔ Y.
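A quick numerical check of the theorem with X=Y=K_4\ e and S=P_1 (the graph and its labelling are our choices for illustration):

\begin{verbatim}
import numpy as np

L = np.array([[2, 0, -1, -1], [0, 2, -1, -1], [-1, -1, 3, -1], [-1, -1, -1, 3]])  # K_4 \ e
S = np.array([[1, 1, 1, 0], [1, -1, 1, 0], [1, 0, -1, 1], [1, 0, -1, -1]])        # P_1
n = 4
J = np.ones((n, n), dtype=int)
L_join = np.block([[L + n * np.eye(n, dtype=int), -J],
                   [-J, L + n * np.eye(n, dtype=int)]])
T = np.block([[S, S], [S, -S]])
D = np.linalg.inv(T) @ L_join @ T
assert np.allclose(D, np.diag(np.diagonal(D)))      # [S S; S -S] diagonalizes the join
print(np.round(np.diagonal(D)).astype(int))          # [0 6 8 8 8 6 8 8]
\end{verbatim}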
We note that Theorem <ref> applies to both connected and disconnected graphs as long as the hypothesis about the matrix S is satisfied. Moreover, we already know from Proposition <ref> that X⊔ Y is always WHD whenever X and Y are. However, notice that the matrix [ 1 S[1] 1 0; 1 0 -1 S[1] ] in (<ref>) diagonalizing L(X⊔ Y) can be transformed into [ S S; S -S ] via column operations if and only if the eigenvalues of L(X) and L(Y) having the jth column of S[1] as their eigenvector are equal for each j, i.e., X=Y. This shows that [ S S; S -S ] cannot diagonalize L(X⊔ Y) whenever X≠ Y.
For any integer k≥ 2, let Z_k=X∨…∨ X denote the k-fold join of X with itself. More generally, if X is a connected weighted WHD graph, then for all integers k≥ 2, Z_k is also weighted WHD by <cit.>. The next result follows from Theorem <ref> by induction.
With the assumption in Theorem <ref>, we further suppose that X is WHD and P_X has pairwise orthogonal columns. Let T_1=P_X and for ℓ≥ 2, suppose T_ℓ=[ [ T_ℓ-1 T_ℓ-1; T_ℓ-1 -T_ℓ-1 ]]. If k=2^ℓ, then Z_k is WHD and P_Z_k=T_ℓ+1 has pairwise orthogonal columns.
Let k=2^ℓ and suppose X∈{K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2}. Set T_1=P_1. As we know, P_X=T_1 from Example <ref>. Invoking Corollary <ref>, each Z_k is WHD and P_Z_k=T_ℓ+1 has pairwise orthogonal columns. In particular, if X∈{K_4\ e,O_2⊔ K_2}, then {Z_k: k=2^ℓ, ℓ≥ 1} is an infinite family of WHD graphs that are not Hadamard diagonalizable, but P_Z_k has pairwise orthogonal columns.
§.§ Merge
Let X and Y be two weighted graphs, each with n vertices. The merge of X and Y with respect to the integer weights w_1 and w_2, denoted X [_w_1]⊙_w_2 Y, is the graph with Laplacian matrix
[ w_1L(X)+w_2D(Y) -w_2A(Y); -w_2A(Y) w_1L(X)+w_2D(Y) ].
If w_1=w_2=1, then we simply write X [_w_1]⊙_w_2 Y as X ⊙ Y. In this case, if X and Y are graphs on the same vertex set that do not have an edge in common, then X ⊙ Y is a double cover of a graph with Laplacian matrix L(X)+L(Y). In particular, if X is unweighted and Y=X^c, then X ⊙ Y is a double cover of K_n (also called the switching graph of X), while if X=O_n, then X ⊙ Y is called the bipartite double of Y (also called the canonical double cover of Y). The merge operation was introduced by Johnston et al., and was used to produce Hadamard diagonalizable graphs from smaller ones <cit.>. As our next result implies, the same goal is achieved by the merge operation for WHD graphs.
Let X be a weighted graph and S be a matrix whose first column is 1 and all other columns are orthogonal to 1. Suppose L(X) and L(Y) are diagonalized by the same matrix S. Then L(X [_w_1]⊙_w_2 Y) is diagonalized by [ S S; S -S ] if and only if Y is a weighted-regular graph. In particular, if Y is weighted k-regular and we let Λ_1=S^-1L(X)S and Λ_2=S^-1L(Y)S, where Λ_1=diag(0,λ_2,…,λ_n) and Λ_2=diag(0,θ_2,…,θ_n), then
Λ'=1/2[ [ S^-1 S^-1; S^-1 -S^-1 ]]L(X [_w_1]⊙_w_2 Y)[ [ S S; S -S ]],
where Λ' =diag(0,w_1λ_2+w_2θ_2,…,w_1λ_n+w_2θ_n,2w_2k,w_1λ_2+w_2(2k-θ_2),…,w_1λ_n+w_2(2k-θ_n)).
Using (<ref>), one checks that
Λ'=1/2[ S^-1 S^-1; S^-1 -S^-1 ]L(X [_w_1]⊙_w_2 Y)[ S S; S -S ]=[ w_1Λ_1+w_2Λ_2 0; 0 w_1Λ_1 -w_2Λ_2+2w_2S^-1D(Y)S ].
Thus, Λ' is diagonal if and only if S^-1D(Y)S is diagonal. However, S^-1D(Y)S=F for some diagonal matrix F if and only if D(Y)S=SF. If we let D(Y)=diag(d_1,…,d_n) and F=diag(f_1,…,f_n), then D(Y)S=SF if and only if d_iS_ij=f_jS_ij for each i and j. Equivalently, d_i=f_j whenever S_ij≠ 0; since the first column of S is 1, this forces d_1=⋯=d_n, i.e., D(Y) is a scalar multiple of the identity. The rest is straightforward.
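The following sketch illustrates the theorem with w_1=w_2=1, X=K_4\ e and Y=C_4 (so Y is 2-regular); the labelling of C_4, chosen so that P_1 diagonalizes both Laplacians, is our assumption.

\begin{verbatim}
import numpy as np

LX = np.array([[2, 0, -1, -1], [0, 2, -1, -1], [-1, -1, 3, -1], [-1, -1, -1, 3]])  # K_4 \ e
AY = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]])            # C_4
DY = np.diag(AY.sum(axis=1))
P1 = np.array([[1, 1, 1, 0], [1, -1, 1, 0], [1, 0, -1, 1], [1, 0, -1, -1]])

L_merge = np.block([[LX + DY, -AY], [-AY, LX + DY]])          # w_1 = w_2 = 1
T = np.block([[P1, P1], [P1, -P1]])
D = np.linalg.inv(T) @ L_merge @ T
assert np.allclose(D, np.diag(np.diagonal(D)))                # the merge is WHD
print(np.round(np.diagonal(D)).astype(int))
# spectrum 0, lam_j + th_j, 2k, lam_j + (2k - th_j) with k = 2: [0 4 8 6 4 4 4 6]
\end{verbatim}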
The following is immediate from Theorem <ref>.
Let X and Y be weighted WHD graphs such that P_X=P_Y. Then Z=X [_w_1]⊙_w_2 Y is weighted WHD if and only if Y is weighted-regular, in which case P_Z=[ P_X P_X; P_X -P_X ]. The following also hold.
* If we add that Y is weighted Hadamard diagonalizable, then X [_w_1]⊙_w_2 Y is weighted WHD.
* If X is weighted-regular, then X [_w_1]⊙_w_2 X is weighted WHD. Moreover, if X is unweighted and regular, then X [_w_1]⊙_w_2 X^c and the double cover X ⊙ X^c of K_n are weighted WHD graphs.
Since [ S S; S -S ] in Theorem <ref> is a Hadamard matrix whenever S is, we see that X [_w_1]⊙_w_2 Y is Hadamard diagonalizable whenever X and Y are Hadamard diagonalizable. This was first observed in <cit.>, and is generalized to WHD graphs by Corollary <ref>.
Let X and Y be weighted WHD graphs with P_X=P_Y. Unlike X∨ Y, it is possible for X [_w_1]⊙_w_2 Y to be diagonalized by [ P_X P_X; P_X -P_X ] even if X≠ Y. Thus, the merge operation is advantageous in producing bigger weighted WHD graphs from smaller ones.
Let X∈{K_4\ e,O_2⊔ K_2,K_4,C_4,K_2⊔ K_2,O_4} and Y∈{K_4,C_4,K_2⊔ K_2,O_4}. From Example <ref>, X is WHD, Y is Hadamard diagonalizable (and thus, regular), and the Laplacian matrices of both X and Y are diagonalized by the matrix P_1 in Example <ref>, which has pairwise orthogonal columns. By Corollary <ref>(1), we conclude that Z=X [_w_1]⊙_w_2 Y is a weighted WHD graph and P_Z=[ P_1 P_1; P_1 -P_1 ].
For our next example, recall that conference graphs are strongly regular graphs whose number of vertices must be congruent to 1 (mod 4). In <cit.>, an infinite family of conference graphs are shown to be WHD.
Suppose X is isomorphic to either (i) K_n with n≢0 (mod 4), (ii) K_2n minus a perfect matching, where n is odd, (iii) the complete multipartite graph K_n,…,n with m parts and mn≢ 0 (mod 4) vertices, or (iv) a conference graph that is WHD. Then the number of vertices of X is not a multiple of 4, and so X is not Hadamard diagonalizable. However, the graphs in (i), (ii) and (iii) are WHD by Lemma 1.5, Corollary 4.4 and Corollary 4.9 in <cit.>, respectively. Thus, X∨ X is WHD by Theorem <ref>. Moreover, since X is unweighted and regular, Corollary <ref>(2) implies that X [_w_1]⊙_w_2 X and X [_w_1]⊙_w_2 X^c are weighted WHD graphs. Further, the weak Hadamard diagonalizing L(Z), where Z∈{X∨ X,X [_w_1]⊙_w_2 X,X [_w_1]⊙_w_2 X^c}, is [ P_X P_X; P_X -P_X ]. This yields infinite families of WHD graphs that are not Hadamard diagonalizable and for which the columns of P_X are not pairwise orthogonal.
It is also worth mentioning that if X is the Kneser graph K(5,2) (the Petersen graph) or K(6,2), then <cit.> and the same argument used in Example <ref> imply that X∨ X, X [_w_1]⊙_w_2 X and X [_w_1]⊙_w_2 X^c are weighted WHD graphs that are not Hadamard diagonalizable.
Let Y be a weighted-regular WHD graph. Then the bipartite double of Y is WHD, and this graph is connected if and only if Y is non-bipartite.
The bipartite double of a bipartite graph Y is simply Y⊔ Y. Thus, if Y is bipartite, then the bipartite double of Y is WHD by Proposition <ref>. If Y is non-bipartite, then taking w_1=w_2=1 and X as the empty graph in Corollary <ref> yields the desired result.
If Y is one of the graphs in (i)-(iv) in Example <ref> with the additional condition that n≥ 3 in (i) and m≥ 3 in (iii), then Y is a regular non-bipartite weighted graph that is WHD, and so Corollary <ref> implies that the bipartite double of Y is a connected WHD graph that is not Hadamard diagonalizable. On the other hand, if Y∈{C_6,K_n,n}, then Y is regular, bipartite and WHD, and so the bipartite double of Y is a disconnected WHD graph by Corollary <ref>.
For two weighted graphs X and Y, denote by X□ Y, X⊠ Y and X× Y the Cartesian, strong and direct products of X and Y, which are graphs whose adjacency matrices are given by A(X)⊗ I+I⊗ A(Y), A(X)⊗ I+I⊗ A(Y)+A(X)⊗ A(Y) and A(X)⊗ A(Y), respectively. The following result can be viewed as an extension of <cit.>.
Let X and Y be weighted WHD graphs and suppose ⋆∈{□,⊠,×}. The following hold.
* If P_X has pairwise orthogonal columns, then Z=X⋆ Y is WHD with P_Z=P_X⊗ P_Y.
* If P_X=P_Y, then Z=X[_w_1]⊙_w_2Y is WHD with P_Z=[ P_X P_X; P_X -P_X ].
If we add that P_Y has pairwise orthogonal columns, then so does P_Z in both cases.
(1) is immediate from Proposition <ref> and the fact that P_X⊗ P_Y diagonalizes X⋆ Y. (2) follows from Corollary <ref>(1).
The following result, due in part to Adm et al. <cit.>, is immediate from Corollary <ref>.
If X is a weighted Hadamard diagonalizable graph and Y is weighted WHD, then X⋆ Y is WHD for any ⋆∈{□,⊠,×}. If we add that P_X=P_Y, then Z=X[_w_1]⊙_w_2Y is WHD with P_Z having pairwise orthogonal columns.
We end this section with some examples of connected WHD graphs on eight vertices where the Laplacian matrix is diagonalized by a weak Hadamard with pairwise orthogonal columns.
Let X,Y∈{K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2}. Consider R=[ 1 1 P_1[1] 0; 1 -1 0 P_1[1] ] and T=[ P_1 P_1; P_1 -P_1 ], where P_1 is in Example <ref>. Note that R and T are weak Hadamards with pairwise orthogonal columns. Moreover, R can be obtained from the matrix H in Example <ref> by permuting its columns, and hence R and H are equivalent. The following connected unweighted graphs on 8 vertices have their Laplacian matrices diagonalized by a weak Hadamard with pairwise orthogonal columns.
* (X⊔ Y)^c, diagonalized by R
* X∨ X, diagonalized by T.
* X ⊙ Y with X≠ O_4 and Y∈{K_4,C_4,K_2⊔ K_2}, diagonalized by T.
* The bipartite double of K_4 (isomorphic to K_4,4 minus a perfect matching), diagonalized by T.
Since the set {K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2} is closed under complements and X∨ Y=(X^c⊔ Y^c)^c for unweighted graphs X and Y, Example <ref>(1) yields 21 graphs of the form X∨ Y, where X,Y∈{K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2}. This includes the 6 graphs in Example <ref>(2). As K_4\ e∨ K_4\ e and C_4∨ K_4 are isomorphic, 20 amongst these 21 graphs are nonisomorphic.
Since X ⊙ Y and Y ⊙ X are isomorphic, Example <ref>(3) yields 8 nonisomorphic connected graphs, namely (O_2⊔ K_2) ⊙ K_4, (K_2⊔ K_2) ⊙ C_4 (isomorphic to the graph O_4⊙ K_4 in Example <ref>(4), which is the hypercube on 8 vertices), (K_2⊔ K_2) ⊙ K_4 (isomorphic to K_2□ K_4, which is the complement of the hypercube on 8 vertices), K_4 ⊙ K_4 (isomorphic to C_4∨ C_4), C_4 ⊙ K_4 (isomorphic to (K_2⊔ K_2)∨ (K_2⊔ K_2)), C_4 ⊙ C_4 (isomorphic to O_4∨ O_4), K_4\ e ⊙ K_4 (isomorphic to (K_2⊔ K_2)∨ C_4), K_4\ e⊙ C_4 (isomorphic to (K_2⊔ K_2)∨ C_4).
Consequently, there are at least 23 nonisomorphic connected WHD graphs on 8 vertices whose Laplacian matrices are diagonalized by a weak Hadamard matrix with pairwise orthogonal columns. We display them in Table <ref>. Note that 6 graphs of these 23 graphs are Hadamard diagonalizable (see <cit.>), namely K_4∨ K_4≅ K_8, C_4∨ C_4≅ K_2,2,2,2, (K_2⊔ K_2)∨ (K_2⊔ K_2), O_4∨ O_4≅ K_4,4, O_2⊙ K_4 (the hypercube on 8 vertices), and K_2□ K_4≅ (O_2⊙ K_4)^c. The other 17 graphs are not Hadamard diagonalizable because they are not regular. Further, the three graphs (O_2⊔ K_2) ⊙ K_4, O_4⊙ K_4, and K_2□ K_4 are diagonalized by the matrix T in Example <ref>, while the rest are diagonalized by the matrix R in Example <ref>. This is done through a careful reordering of the Laplacian spectrum of each graph.
§ STATE TRANSFER IN WEAKLY HADAMARD DIAGONALIZABLE GRAPHS
Motivated by quantum information theory, we are interested in whether a given graph X exhibits different types of quantum state transfer.
The transition matrix of the graph X is the time-dependent unitary matrix U(t)=e^itL(X). We say that perfect state transfer (PST) occurs if there exists a time t such that a standard basis vector 𝐞_u, called the initial state, evolves to a different state 𝐞_v up to a phase factor. More formally, the graph X exhibits PST between vertices u and v if there exists a time t such that
U(t)𝐞_u=γ𝐞_v,
where γ∈ℂ. Since U(t) is unitary, γ satisfies |γ|^2=1, and so PST may be equivalently expressed as
|𝐞_u^T U(t) 𝐞_v|^2=1.
If u=v in the above, then vertex u is said to be periodic. Periodicity and strong cospectrality are necessary conditions for PST (see Section 3 and Lemma 14.1 in <cit.>, respectively).
From Theorem <ref>, integer-weighted WHD graphs are Laplacian integral. Using a characterization of periodic vertices due to Godsil and Coutinho <cit.>, we conclude that such graphs are periodic, and are therefore good candidates for PST. This prompts us to characterize perfect state transfer
in WHD graphs under mild conditions. But in order to do this, we first need to characterize strong cospectrality in WHD graphs. Henceforth, we assume all graphs are connected.
§.§ Strong cospectrality
Let u and v be vertices in X. The eigenvalue support σ_u(M) of u with respect to M=M(X) is the set
σ_u(M)={λ_j:E_je_u≠0},
where E_j is the orthogonal projection matrix onto the eigenspace associated with λ_j, for each j.
With respect to M, we say that u and v are cospectral if (E_j)_u,u=(E_j)_v,v for each j. We say that u and v are strongly cospectral if E_je_u=± E_je_v for each j, in which case we define the sets
σ_uv^+(M)={λ_j:E_je_u=E_je_v} and σ_uv^-(M)={λ_j:E_je_u=-E_je_v}.
Cospectral vertices have the same eigenvalue supports. Moreover, strongly cospectral vertices are cospectral, but the converse is not true. For more about strong cospectrality, see Godsil and Smith <cit.>.
The following result will be useful in characterizing strong cospectrality in WHD graphs. For each eigenvalue λ of M, we denote an orthogonal basis of eigenvectors associated to λ by W(λ). Some simple Python code that checks strong cospectrality according to Lemma <ref> is available for download at <cit.>.
Vertices u and v are strongly cospectral if and only if for each λ∈σ_u(M), either x(u)=x(v) for all x∈ W(λ) or x(u)=-x(v) for all x∈ W(λ). Moreover, if u and v are strongly cospectral, then
σ_uv^+(M)={λ:x(u)=x(v)≠ 0 for all x∈ W_λ} and σ_uv^-(M)={λ:x(u)=-x(v)≠ 0 for all x∈ W_λ}.
Since W_λ forms a basis for the eigenspace corresponding to the eigenvalue λ, then the property that 𝐱(u)=𝐱(v) (or similarly 𝐱(u)=-𝐱(v)) for all 𝐱∈ W_λ would have to extend to every eigenvector in the eigenspace of λ. Hence while the preceding lemma requires analysing eigenvectors which form orthogonal bases to prove strong cospectrality, in principle, one could rule out strong cospectrality between two vertices u and v by finding a single eigenvector that does not have its u and v-th components being equal up to a sign, or by finding two distinct eigenvectors associated to the same eigenvalue where the u- and v-th components are equal in one but are opposite in signs in the other.
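In the spirit of the code referenced above, the following minimal Python sketch (function names ours) implements the check in Lemma <ref>, taking an orthogonal eigenbasis from numpy's eigh; rounding the eigenvalues is safe here because the Laplacian spectra we consider are integral.

\begin{verbatim}
import numpy as np

def eigen_basis_by_eigenvalue(L):
    """Group an orthogonal eigenvector basis of the symmetric matrix L by eigenvalue."""
    vals, vecs = np.linalg.eigh(L)
    groups = {}
    for lam, x in zip(np.round(vals, 6), vecs.T):
        groups.setdefault(lam, []).append(x)
    return groups

def strongly_cospectral(L, u, v, tol=1e-8):
    """For each eigenvalue: x(u) = x(v) for all basis vectors, or x(u) = -x(v) for all."""
    for basis in eigen_basis_by_eigenvalue(L).values():
        same = all(abs(x[u] - x[v]) < tol for x in basis)
        opposite = all(abs(x[u] + x[v]) < tol for x in basis)
        if not (same or opposite):
            return False
    return True

L = np.array([[2, 0, -1, -1], [0, 2, -1, -1], [-1, -1, 3, -1], [-1, -1, -1, 3]])  # K_4 \ e
print(strongly_cospectral(L, 0, 1))   # True: the two non-adjacent vertices
print(strongly_cospectral(L, 2, 3))   # False
\end{verbatim}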
The following result characterizes Laplacian (strong) cospectrality in WHD graphs where the weak Hadamard has pairwise orthogonal columns.
Let X be a weighted WHD graph. For every eigenvalue λ of L(X), let W(λ) be the set of columns of P_X that are eigenvectors corresponding to λ.
* The eigenvalue support of u is σ_u(L)={λ:x(u)≠ 0 for some x∈ W(λ)}.
* Assume P_X has pairwise orthogonal columns. Vertices u and v are cospectral if and only if for every λ∈σ_u(L), either W_u(λ)=W_v(λ) or
∑_x∈ W_v(λ)\ W_u(λ)1/x^2=∑_x∈ W_u(λ)\ W_v(λ)1/x^2,
where W_u(λ)={x∈ W(λ):x(u)=0} and W_v(λ)={x∈ W(λ):x(v)=0}.
* Assume P_X has pairwise orthogonal columns. Vertices u and v are strongly cospectral if and only if for each λ∈σ_u(L), we have
(i) W_u(λ)=W_v(λ) and (ii) only one of {x∈ W(λ):x(u)x(v)=-1} and {x∈ W(λ):x(u)x(v)=1} is empty. Moreover, if u and v are strongly cospectral, then
σ_uv^+(L)={λ:x(u)x(v)=1 for all x∈ W(λ)}
and
σ_uv^-(L)={λ:x(u)x(v)=-1 for all x∈ W(λ)}.
Let λ be an eigenvalue of L(X) with W(λ)={x_1,…,x_m}. Applying the Gram–Schmidt process to W(λ) yields an orthogonal basis {y_1,…,y_m} of the eigenspace associated with λ, where each y_j lies in the span of x_1,…,x_j, and so E_λ=∑_j=1^m1/y_j^2y_jy_j^T. This gives us
(E_λ)_u,u=∑_j=1^my_j(u)^2/y_j^2.
As E_λe_u≠ 0 if and only if (E_λ)_u,u>0, (<ref>) implies that λ∈σ_u(M) if and only if y_j(u)≠ 0 for some j. If x_j(u)=0 for every j, then y_j(u)=0 for every j as well. Conversely, if x_ℓ(u)≠ 0 for some ℓ, and ℓ is the smallest such index, then x_j(u)=0, and hence y_j(u)=0, for each j∈{1,…,ℓ-1}, so that y_ℓ(u)=x_ℓ(u)≠ 0. Thus λ∈σ_u(M) if and only if x_j(u)≠ 0 for some j, and so (1) holds.
The assumption in (2) implies that W(λ) is now an orthogonal basis of eigenvectors for the eigenspace associated with λ. Since x(u)^2=1 whenever x∉W_u(λ) and x(v)^2=1 whenever x∉W_v(λ), we get
(E_λ)_u,u-(E_λ)_v,v (*)=∑_x∈ W_v(λ)\ W_u(λ)x(u)^2/x^2-∑_x∈ W_u(λ)\ W_v(λ)x(v)^2/x^2+∑_x∉ W_u(λ)∪ W_v(λ)(x(u)^2-x(v)^2)/x^2 (**)=∑_x∈ W_v(λ)\ W_u(λ)1/x^2 -∑_x∈ W_u(λ)\ W_v(λ)1/x^2.
If W_u(λ)=W_v(λ), then (*) above yields (E_λ)_u,u-(E_λ)_v,v=∑_x∉ W_u(λ)∪ W_v(λ)x(u)^2-x(v)^2/x^2=0, which results in cospectrality between u and v. Otherwise, u and v are cospectral if and only if the right hand side of (**) is zero. This proves (2). Finally, one can check that the conditions in (3) are equivalent to those in Lemma <ref>, which guarantees strong cospectrality between u and v.
If X is a Hadamard diagonalizable graph, i.e., P_X is a Hadamard matrix, then, for each eigenvalue λ of L(X), we have W_1(λ)=W_2(λ)=∅ because Hadamard matrices do not have zero entries. Invoking Lemma <ref>, we get that any two vertices in a Hadamard diagonalizable graph are cospectral.
Lemma <ref>(2) can be used to determine cospectral vertices that are not strongly cospectral. Indeed, any pair of cospectral vertices u and v for which W_u(λ)≠ W_v(λ) for some eigenvalue λ of L(X) (so that (<ref>) holds non-trivially) are not strongly cospectral. Moreover, as cospectral vertices have the same eigenvalue supports, if X has λ as a simple eigenvalue with associated eigenvector x, then u and v are not cospectral whenever x(u)≠ 0 and x(v)=0.
The following example concretely illustrates Lemma <ref>(2-3).
Consider the graphs (a) (O_2⊔ K_2)∨ (O_2⊔ K_2), (b) (O_2⊔ K_2)∨ C_4, (c) (K_2⊔ K_2)∨ C_4 in Figure <ref> and (d) K_8\ e with e=[1,2]. These are not Hadamard diagonalizable, but their Laplacians are diagonalized by the weak Hadamard matrix R=[x_1 x_2 x_3 x_4 x_5 x_6 x_7 x_8] in Example <ref>. From rows 19, 11, 12, and 5 of Table <ref>, the associated eigenvalues for columns x_1,…,x_8 of R are 0,8,4,4,6,4,4,6 for (a), 0,8,4,4,6,6,8,6 for (b), 0,8,6,4,6,6,8,6 for (c) and 0,8,6,8,8,8,8,8 for (d), respectively. In (a), x_3 and x_4 are eigenvectors associated to the eigenvalue 4 satisfying x_3(1)=-x_3(2) and x_4(1)=x_4(2). Since this violates Lemma <ref>(3ii), vertices 1 and 2 in (a) are not strongly cospectral. The same argument can be used to check that vertices 7 and 8 in (a), and vertices 1 and 2 in (b), and any pair of vertices in {3,…,8} in (d) are not strongly cospectral. One can then check by inspection that for (a), vertices 3 and 4 (shaded squares), and 5 and 6 (shaded stars) are strongly cospectral with σ_j,j+1^+(L)={0,4,8} and σ_j,j+1^-(L)={6} for each j∈{3,5}. For (b), vertices 3 and 4 (shaded squares), 5 and 6 (shaded stars) and 7 and 8 (shaded clouds) are strongly cospectral with σ_3,4^+(L)={0,4,8} and σ_3,4^-(L)={6}, and σ_j,j+1^+(L)={0,8} and σ_j,j+1^-(L)={6} for j∈{5,7}. For (c), vertices 1 and 2 (shaded circles), 3 and 4 (shaded squares), 5 and 6 (shaded stars) and 7 and 8 (shaded clouds) are strongly cospectral with σ_j,j+1^+(L)={0,4,8} and σ_j,j+1^-(L)={6} for j∈{1,3}, and σ_j,j+1^+(L)={0,8} and σ_j,j+1^-(L)={6} for j∈{5,7}. Lastly, for (d), vertices 1 and 2 are strongly cospectral with σ_1,2^+(L)={0,8} and σ_1,2^-(L)={6}.
In (d), if u≠ 1,2, then σ_u(L)={0,8}. Since W_3(8)=W_4(8)={x_6,x_7,x_8}, Lemma <ref>(2) implies that vertices 3 and 4 are cospectral. Furthermore, W_3(8)\ W_5(8)={x_4,x_5} and W_5(8)\ W_3(8)={x_6,x_7}, and since x_4=x_7 and x_5=x_6, Equation (<ref>) holds, and so vertices 3 and 5 are cospectral by Lemma <ref>(2). The same argument shows that 3 is cospectral with 6, 7 and 8. Thus, while no pair of vertices in (d) are strongly cospectral except for 1 and 2, any two vertices in {3,…,8} in (d)
are cospectral.
The next example yields an infinite family of WHD graphs that are not Hadamard diagonalizable containing strongly cospectral vertices.
While K_n\ e is WHD for all n≥ 4 <cit.>, any weak Hadamard matrix diagonalizing L(K_n\ e) need not have pairwise orthogonal columns whenever n≢0 (mod 4). Nonetheless, the non-adjacent vertices of K_n\ e are strongly cospectral <cit.>, and these two vertices are the only strongly cospectral pair in K_n\ e for all n≥ 3.
§.§ Perfect state transfer
We now characterize Laplacian PST in WHD graphs with P_X having pairwise orthogonal columns. Some simple Python code implementing Theorem <ref> is available for download at <cit.>.
Let X be a weighted graph. Suppose S=[x_1,…,x_n] has pairwise orthogonal columns and L(X)=SDS^-1, where D=diag(λ_1,…,λ_n). Then perfect state transfer occurs between two vertices u and v in X at time τ if and only if, for each j∈{1,…,n},
e^iτλ_jx_j(u)=x_j(v).
If we add that X is WHD, then we may write (<ref>) as
e^iτλ_j=x_j(u)x_j(v).
Since L=SDS^-1, we have U_L(t)=Se^it DS^-1. Recall that PST occurs between u and v at time τ if and only if U_L(τ)e_u=γe_v. Equivalently, e^iτ DS^-1e_u=γ S^-1e_v for some unit γ∈ℂ. Since 0 is an eigenvalue of L(X) with eigenvector 1, we have 0∈σ_uv^+(L). Now, since e^iτλ is a phase factor for PST for all λ∈σ_uv^+(L), we conclude that γ=1, and so
e^iτ DS^-1e_u=S^-1e_v.
Now, let Q=diag(x_1^2,…,x_n^2). Since S has pairwise orthogonal columns, a direct application of Proposition <ref> yields S^-1=Q^-1S^T, and so S^-1e_u=Q^-1S^Te_u. Multiplying (<ref>) on the left by e_j^T then yields e^iτλ_j/x_j^2x_j(u)=1/x_j^2x_j(v), which completes the proof.
One important consequence of Theorem <ref> is that it can be used to construct weighted WHD graphs having PST at some specified time τ>0. Indeed, if we fix τ>0 and a normalized weak Hadamard S with pairwise orthogonal columns, then by choosing the eigenvalues in D to satisfy (<ref>), L = SDS^-1 will be the Laplacian matrix of some rational-weighted graph with PST at time τ. One can then scale L if one wishes to obtain an integer-weighted graph.
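A small sketch of this recipe (our choices of S, τ and the target pair): fix S=P_1 and τ=π/2, and choose the eigenvalues so that (<ref>) holds for the pair formed by the first two vertices. The resulting matrix is exactly L(K_4\ e), and the evolution confirms perfect state transfer between its non-adjacent vertices at time π/2.

\begin{verbatim}
import numpy as np

S = np.array([[1, 1, 1, 0], [1, -1, 1, 0], [1, 0, -1, 1], [1, 0, -1, -1]])   # P_1
tau, u, v = np.pi / 2, 0, 1
# Columns with x_j(u) = x_j(v) get eigenvalues with e^{i tau lambda} = 1 (we take 0 and 4),
# the column with x_j(u) = -x_j(v) gets e^{i tau lambda} = -1 (we take 2), and the last
# column vanishes at u and v, so its eigenvalue is unconstrained (we take 4).
D = np.diag([0, 2, 4, 4])
L = S @ D @ np.linalg.inv(S)
assert np.allclose(L, L.T) and np.allclose(L.sum(axis=1), 0)   # a genuine Laplacian
U_tau = S @ np.diag(np.exp(1j * tau * np.diag(D))) @ np.linalg.inv(S)   # e^{i tau L}
assert np.isclose(abs(U_tau[v, u]), 1.0)   # perfect state transfer from u to v
print(np.round(L).astype(int))             # recovers L(K_4 \ e)
\end{verbatim}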
For a non-zero integer z, we denote by ν_2(z) the exponent of the largest power of two that divides z, so that z can be written uniquely as z=2^ν_2(z)a, where a is odd. Using Coutinho's characterization of PST <cit.>, one can show that Theorem <ref> can be restated in the context of WHD graphs as follows.
Let X be an integer-weighted WHD graph, where P_X has pairwise orthogonal columns. Then perfect state transfer occurs between vertices u and v if and only if these two conditions hold.
* Vertices u and v are strongly cospectral with
σ_uv^+(L)={λ:x(u)x(v)=1 for all x∈ W(λ)} and σ_uv^-(L)={λ:x(u)x(v)=-1 for all x∈ W(λ)}.
* ν_2(λ_j)>ν_2(λ_k)=ν_2(λ_ℓ) for all λ_k,λ_ℓ∈σ_uv^-(L) and for all λ_j∈σ_uv^+(L) with λ_j>0.
Moreover, if PST occurs between u and v, then the minimum time it occurs is π/g, where g=gcd(λ_1,…,λ_n).
Whether X is WHD or not, condition (2) and the statement about the minimum PST time in Corollary <ref> hold as long as X is Laplacian integral.
Our calculations indicate that condition 1 of Corollary <ref> is generally the condition that fails when there is no PST in a WHD graph. We also remark that every PST time is an odd multiple of π/g.
For Laplacian integral graphs, it is clear from Corollary <ref>(2) and Remark <ref> that PST does not occur whenever all non-zero Laplacian eigenvalues are odd. In particular, vertex u is not involved in PST if all eigenvalues in σ_u(L) are odd. Furthermore, PST does not occur between strongly cospectral vertices u and v whenever some λ_j∈σ_uv^+(L) is odd or ν_2(λ_j)≤ν_2(λ_k) for some λ_j∈σ_uv^+(L) and λ_k∈σ_uv^-(L).
Now, suppose X is a Hadamard diagonalizable integer-weighted graph and u is a vertex of X. Then all eigenvalues of L are even integers <cit.>. Thus, if u exhibits PST in X with minimum PST time τ=π/2, then Corollary <ref> implies that all elements in σ_uv^+(L) are integers λ≡ 0 (mod 4), while all elements in σ_uv^-(L) are integers λ≡ 2 (mod 4). This yields the number-theoretic condition required of the eigenvalues in the support of a vertex in a graph to exhibit PST, which was first established by Johnston et al. <cit.>. In fact, Theorem <ref> generalizes <cit.> in two ways. First, it allows for arbitrary PST time instead of just π/2, and second, it extends the result to WHD graphs, instead of just Hadamard diagonalizable graphs.
Consider the graphs (O_2⊔ K_2)∨ (O_2⊔ K_2), (O_2⊔ K_2)∨ C_4, (K_2⊔ K_2)∨ C_4 and K_8\ e with e=[1,2] in Example <ref> whose Laplacians are diagonalized by the matrix R. Since R has pairwise orthogonal columns, a direct application of Corollary <ref> to these graphs shows that they all exhibit PST between strongly cospectral pairs of vertices with minimum time τ=π/2. In (O_2⊔ K_2)∨ (O_2⊔ K_2), these are vertices j and j+1 for each j∈{3,5}; in (O_2⊔ K_2)∨ C_4, these are vertices j and j+1 for each j∈{3,5,7}; and in (K_2⊔ K_2)∨ C_4, these are vertices j and j+1 for each j∈{1,3,5,7} (see graphs (a), (b) and (c) in Figure <ref>). Moreover, in K_8\ e, these are vertices 1 and 2. It is important to note that (O_2⊔ K_2)∨ (O_2⊔ K_2), (O_2⊔ K_2)∨ C_4, and K_8\ e are not regular and therefore not Hadamard diagonalizable, thus providing examples of bona fide weakly Hadamard diagonalizable graphs having PST.
From Example <ref>, we observe that O_4∨ K_4, K_8\ e, (O_2⊔ K_2)∨ (O_2⊔ K_2), (O_2⊔ K_2)∨ C_4, and (K_2⊔ K_2)∨ C_4 are WHD graphs that are not Hadamard diagonalizable having exactly zero, one, two, three, and four pairs of vertices that exhibit PST, respectively. This intriguing observation leads us to suspect that some WHD graphs are excellent sources of PST. As an initial investigation, we examined PST in WHD graphs on eight vertices whose diagonalizing weak Hadamard matrix has pairwise orthogonal columns (see the last column of Table <ref> for a summary). As it turns out, there are only 3 amongst 23 such graphs that do not exhibit PST, namely K_8, K_4,4 and O_4∨ K_4. This suggests that in general, the subclass of WHD graphs with the property that the diagonalizing weak Hadamard matrix has pairwise orthogonal columns provides excellent sources of PST. Moreover, amongst the 20 graphs that exhibit PST, the vertices that pair up to exhibit PST are the same pairs that exhibit strong cospectrality.
§.§ Complements and joins
The following result provides a sufficient condition for PST to occur in the complement of a Laplacian integral connected graph.
Let X be a connected unweighted graph such that L(X) is diagonalized by a matrix S.
* Vertices u and v in X are strongly cospectral in X if and only if they are strongly cospectral in X^c.
* Suppose X is a Laplacian integral graph and S has pairwise orthogonal columns. If perfect state transfer occurs between u and v in X at τ=π/g, where g is given in Corollary <ref>, then perfect state transfer occurs between u and v in X^c at time τ if and only if n/g is even.
Invoking Proposition <ref> yields (1). To show (2), let S=[1,x_2,…,x_n], and λ_1=0,λ_2,…,λ_n be the eigenvalues of L(X) with corresponding eigenvectors 1,x_2,…,x_n. The eigenvalues of L(X^c) are 0,n-λ_2,…,n-λ_n, where n-λ_j and λ_j have the same eigenvectors for each j≥ 2. Since PST occurs between u and v at time τ, Theorem <ref> guarantees that e^iτλ_jx_j(u)=x_j(v) for all j≥ 2. Now, observe that e^iτ(n-λ_j)x_j(u)=x_j(v) holds for all j≥ 2 if and only if e^iτ n=1. Since X is Laplacian integral, τ=π/g from Remark <ref>. Thus, e^iτ n=1 if and only if n/g is even.
To illustrate Proposition <ref>(2), consider X=K_1⊔ K_2, which is a disconnected WHD graph with P_X having pairwise orthogonal columns. Note that K_2 exhibits PST at π/2. Since the Laplacian eigenvalues of X are 0 and 2, it follows that g=2, and so n/g=3/2 is not even. Consequently, (K_1⊔ K_2)^c=P_3 does not exhibit PST between end vertices, which is a well-known result in quantum state transfer.
Consider again Z_k=X∨…∨ X, which is the k-fold join of X with itself. Denote by (u,j) the copy of u∈ V(X) in the jth copy of X in Z_k. Our next result provides a sufficient condition for strong cospectrality and PST to occur in Z_k.
Let X be a weighted graph.
* If u and v are strongly cospectral in X, then (u,j) and (v,j) are strongly cospectral in Z_k with
σ_(u,j),(v,j)^+(L(Z_k))={λ+(k-1)n:0<λ∈σ_uv^+(L(X))}∪{0,kn}
and
σ_(u,j),(v,j)^-(L(Z_k))={λ+(k-1)n:λ∈σ_uv^-(L(X))}.
* Let X be a Laplacian integral graph such that L(X) is diagonalized by a matrix S with pairwise orthogonal columns. If perfect state transfer occurs between vertices u and v in X, then it occurs between vertices (u,j) and (v,j) in Z_k if and only if n/g is even, where g is given in Corollary <ref>.
Invoking <cit.> yields (1). To prove (2), let 0,λ_2,…,λ_n be the eigenvalues of L(X). Then by Theorem <ref>, 0,λ_2+(k-1)n,…,λ_n+(k-1)n,kn are the eigenvalues of L(Z_k). Since kn∈σ_(u,j),(v,j)^+(L(Z_k)), applying the same argument as in the proof of Proposition <ref> yields the desired result.
Propositions <ref> and <ref> both apply to WHD graphs with P_X having orthogonal columns. In particular, if X is unweighted Laplacian integral and either (i) n and g are powers of two or (ii) n is even and g=2, then n/g is even, and so PST in X guarantees PST in X∨…∨ X and X^c.
We end this section by providing infinite families of WHD graphs that exhibit PST. Recall that both C_4 and K_2⊔ K_2 exhibit PST with minimum time π/2 between two pairs of vertices, both K_4\ e and O_2⊔ K_2 exhibit PST with minimum time π/2 between a pair of vertices, and K_4 and O_4 do not exhibit PST.
Let k=2^ℓ and X_(k)=_j=1^kX_j, where X_j∈{K_4,C_4,K_2⊔ K_2,O_4,K_4\ e,O_2⊔ K_2}. By Example <ref>, ℱ={X_(k):k=2^ℓ,ℓ≥ 1} is an infinite family of disconnected WHD graphs, where X_(k)^c is also WHD with P_X_(k)=P_X_(k)^c having pairwise orthogonal columns. The following hold.
* If X_j∈{C_4,K_2⊔ K_2,K_4\ e,O_2⊔ K_2} for at least one j, then X_(k) exhibits PST with minimum time π/2. Since g=2 and n is even, Proposition <ref> implies that X_(k)^c exhibits PST with minimum time π/2 between the same pairs of vertices. In particular, the number of pairs of vertices in X_(k) that exhibit PST is 2a+b, where a is the number of copies of C_4 and K_2⊔ K_2 in X_(k) and b is the number of copies of K_4\ e and O_2⊔ K_2 in X_(k).
* If X_j∈{K_4,O_4} for each j, then X_(k) does not exhibit PST.
We also note that neither X_(k) nor X_(k)^c are Hadamard diagonalizable whenever at least two X_j's are distinct. Consequently, if each X_(k)∈ℱ has the property that X_j∈{C_4,K_2⊔ K_2,K_4\ e,O_2⊔ K_2} for at least one j and at least two X_j's are distinct, then {X_(k)^c:k=2^ℓ,ℓ≥ 1} is an infinite family of WHD graphs that exhibit PST whereby each X_(k)^c is not Hadamard diagonalizable but P_X_(k)^c has pairwise orthogonal columns.
Let k=2^ℓ and suppose X∈{C_4,K_2⊔ K_2,K_4\ e,O_2⊔ K_2}. From Example <ref>, each Z_k is WHD and P_Z_k has pairwise orthogonal columns. Since n=4 and g=2, a direct application of Proposition <ref>(2) yields PST in each Z_k at time π/2. Moreover, the number of pairs of vertices that exhibit PST in Z_k is 2k whenever X∈{C_4,K_2⊔ K_2}, while it is k whenever X∈{K_4\ e,O_2⊔ K_2}. Further, if X∈{K_4\ e,O_2⊔ K_2}, then {Z_k:k=2^ℓ,ℓ≥ 1} is an infinite family of WHD graphs that exhibit PST whereby each Z_k is not Hadamard diagonalizable but P_Z_k has pairwise orthogonal columns.
Denote the transition matrix of a graph Z by U_Z(t). From <cit.>, it is known that U_X□ Y(t)=U_X(t)⊗ U_Y(t). Consequently, PST occurs between (u,w) and (v,x) in X□ Y at time τ whenever PST occurs between u and v in X and w and x in Y at time τ.
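As an illustration (ours, not from <cit.>), this factorization of the transition matrix over a Cartesian product can be verified numerically for small graphs, using L(X□Y)=L(X)⊗I+I⊗L(Y):

import numpy as np
from scipy.linalg import expm

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

A_X = np.array([[0, 1], [1, 0]], dtype=float)                 # K_2
A_Y = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)     # C_4
L_X, L_Y = laplacian(A_X), laplacian(A_Y)
L_prod = np.kron(L_X, np.eye(4)) + np.kron(np.eye(2), L_Y)    # Laplacian of K_2 box C_4

t = np.pi / 2
U_prod = expm(-1j * t * L_prod)
U_kron = np.kron(expm(-1j * t * L_X), expm(-1j * t * L_Y))
print(np.allclose(U_prod, U_kron))                            # True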
Let □_j=1^kX_j:=X_1□⋯□ X_k. For each k≥ 1, let Y_k=□_j=1^kX_j, where each X_j∈{C_4,K_2⊔ K_2,K_4\ e,O_2⊔ K_2} and X_j∈{K_4\ e,O_2⊔ K_2} for at least one j. Since each X_j exhibits PST, say between u_j and v_j, we get that PST occurs in Y_k between (u_1,…,u_k) and (v_1,…,v_k). As P_X_j has pairwise orthogonal columns for each j, Theorem <ref> implies that {Y_k:k≥ 1} is an infinite family of WHD graphs that exhibit PST whereby each Y_k is not Hadamard diagonalizable but P_Y_k has pairwise orthogonal columns.
§ CONCLUSION AND FUTURE WORK
This work uses the initial study of weak Hadamard matrices in <cit.> as a stepping stone. First, we investigated algebraic and combinatorial properties of weak Hadamard matrices, providing numerous methods of constructing such matrices. We explored the idea of equivalent weak Hadamard matrices. We then turned our attention to weakly Hadamard diagonalizable graphs, providing numerous results for graph unions, complements, joins, and merges. Lastly, we explored quantum state transfer in WHD graphs in great detail, focusing on strong cospectrality and perfect state transfer, providing infinite families of WHD graphs exhibiting PST, and providing numerous examples illustrating the power of our results through graph complements and joins. We provide Python code for some of the technical computations in <cit.>.
The study of weak Hadamard matrices and WHD graphs is still very much in its infancy. We identify several open problems, the answer to which would help propel the study of these concepts forward.
With respect to weak Hadamard matrices: Further to Section <ref>, it is an open problem to characterize the number of equivalent weak Hadamard matrices attained through any operation (permutation of columns and/or rows, as well as negation of columns and/or rows). It would be of interest to find, for reasonably small n, the exact number of non-equivalent weak Hadamard matrices, the exact number of non-equivalent weak Hadamard matrices with pairwise orthogonal columns, and the corresponding number of non-isomorphic graphs that these matrices diagonalize (a lower bound for such graphs for n≤ 9 can be inferred from <cit.>).
An open problem pertinent to our work was recently posed <cit.>: if 1 together with n-1 non-zero vectors with entries in {-1,0,1} are mutually orthogonal, does it follow that n∈{1,2}∪{4k : k∈ℕ}? M. Alekseyev has shown that the answer is in the affirmative for small n (when n≤ 12), and I. Bogdanov has shown that the answer is in the affirmative when n is an odd prime, but the general case remains elusive. If a graph can be diagonalized by a weak Hadamard matrix, then it can be diagonalized by a normalized weak Hadamard matrix. So, this open problem expands on Conjecture <ref> to include any dimension n≢0 mod 4, with n>2.
For WHD graphs, it would be of interest to compare, for reasonably small n, the number of Hadamard diagonalizable graphs (this is known from <cit.> for n≤ 36), the number of weakly Hadamard diagonalizable graphs, and, in particular, the number of weakly Hadamard diagonalizable graphs with P_X having pairwise orthogonal columns. Although we have provided infinite families of WHD graphs, and methods of constructing WHD graphs, it would be useful to see just how prevalent these graphs are in comparison to Hadamard diagonalizable graphs.
Finally, given the connection to quantum state transfer, it would be of interest to find the number of connected graphs that are Hadamard diagonalizable, weakly Hadamard diagonalizable, and weakly Hadamard diagonalizable with P_X having pairwise orthogonal columns, that exhibit perfect state transfer, for relatively small n.
|
http://arxiv.org/abs/2307.03049v1 | 20230706151502 | Can baryon asymmetry be explained by a large initial value before inflation? | [
"Kai Murai",
"Fuminobu Takahashi",
"Masaki Yamada",
"Wen Yin"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph"
] |
TU-1202
Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan
Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan
Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan
FRIS, Tohoku University, Sendai, Miyagi 980-8578, Japan
Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan
We show that the baryon asymmetry of the Universe cannot be explained by a large initial value before inflation because it inevitably predicts correlated baryon isocurvature perturbations that are already excluded by cosmic microwave background observations.
Similar arguments can generally be applied to some models of dark matter.
Can baryon asymmetry be explained by a large initial value
before inflation?
Wen Yin
August 1, 2023
=============================================================================
§ INTRODUCTION
The origin of the baryon asymmetry of the Universe is a longstanding mystery in cosmology. It is commonly believed that any pre-existing asymmetries are diluted by inflation, and therefore baryon asymmetry should be produced after inflation.
Various models of baryo/leptogenesis have been studied in the literature (see, e.g., Refs. <cit.> for reviews).
However, one can still imagine the possibility that
a sufficiently large amount of baryon asymmetry is generated before inflation via, e.g., the dynamics of a complex scalar field with B-L charge <cit.>, so that the amount of baryon asymmetry is consistent with the observed value after the significant dilution by inflation.
In this letter, we show that the scenario of pre-existing baryon asymmetry before inflation predicts correlated baryon isocurvature perturbations and is therefore robustly excluded by cosmic microwave background (CMB) observations.
This is because the curvature perturbation is generated independently by the inflaton fluctuation after the baryon asymmetry is generated.
Then, we extract the essence and generalize our discussion to show that some
dark matter scenarios are excluded as well.
§ CORRELATED BARYON ISOCURVATURE PERTURBATION
As mentioned above, we assume that the baryon asymmetry is generated before inflation.
[More precisely speaking, we assume that the amount of baryon asymmetry in the later universe is determined before inflation. Prior to inflation, it could take any form, such as a large lepton asymmetry or any other type of asymmetry, anything that determines the baryon asymmetry at a later time. Our arguments similarly apply to these cases.]
After that, the baryon number density, n_B, is diluted by the cosmic expansion as
n_B ∝ a^-3 ,
where a is the scale factor.
Although the baryon asymmetry may have initial fluctuations inherent in the generation mechanism, we ignore this because it does not affect our argument and, in fact, only makes it more robust. Below we consider another source of fluctuations.
For clarity, we assume the standard scenario where the curvature perturbation is generated by the fluctuations of the inflaton.
We will discuss other possibilities later.
The curvature perturbation ℛ is represented by the fluctuation of the e-folding number N between the flat slicing during inflation and the uniform density slicing during the radiation-dominated era as
ℛ = δ N .
Here the e-folding number is defined by
N(t, x) = ∫_t_i^t + δ t(x)
H(t') dt',
where t_i is an initial time before the CMB scales exited the horizon, H is the Hubble parameter, and δ t is the fluctuation of the time on the uniform density slicing.
Note that δ t can arise from the fluctuation of the duration of both the inflationary and radiation-dominated eras.
Considering n_B ∝ a^-3∝ e^-3N, we obtain the fluctuations of the baryon number density on the uniform density slicing as
δ n_B = - 3 δ N n̅_B .
Here and hereafter, we denote unperturbed quantities with bars.
After the QCD phase transition, baryons become nucleons such as protons and neutrons.
Then, the fluctuation of the non-relativistic baryon energy density, ρ_B, is evaluated as
δρ_B/ρ̅_B = - 3 ℛ .
This fluctuation corresponds to the baryon isocurvature mode given by
𝒮_B ≡ δρ_B/ρ̅_B - (3/4) δρ_γ/ρ̅_γ ≃ - 3 ℛ ,
where we used δρ_γ≃ 0 on the uniform density slicing during the radiation-dominated era.
Note that 𝒮_B is fully anti-correlated with the curvature perturbation.
The isocurvature perturbation is often parameterized by
β_iso(k) ≡ ⟨ |𝒮_eff(k)|^2 ⟩ / ( ⟨ |ℛ(k)|^2 ⟩ + ⟨ |𝒮_eff(k)|^2 ⟩ ) ,
where the quantities with a comoving wavenumber k represent the Fourier modes.
Here, 𝒮_eff is an effective matter density isocurvature perturbation translated to the cold dark matter (CDM) perturbations defined by
𝒮_eff ≡ 𝒮_CDM + (Ω_B/Ω_CDM) 𝒮_B ,
where 𝒮_CDM is the CDM isocurvature perturbation similarly defined as Eq. (<ref>), and Ω_B and Ω_CDM are the density parameter of the baryon and CDM, respectively.
The matter density isocurvature perturbation that is fully anti-correlated and shares the same spectral tilt with the curvature perturbation is constrained as <cit.>
β_iso(k) < 10^-3 ,
at k = 0.002 Mpc^-1, 0.05 Mpc^-1, and 0.1 Mpc^-1.
Now we consider the baryon isocurvature mode. Then, we obtain the constraint on this mode as
|𝒮_B(k)| < (Ω_CDM/Ω_B) √(10^-3/(1 - 10^-3)) ℛ(k) ≃ 0.17 ℛ(k) ,
where we used Ω_B h^2 = 0.022 and Ω_CDM h^2 = 0.12 <cit.> with h being the reduced Hubble constant.
From the combination of the CMB and large-scale structure observations, a similar constraint is obtained <cit.>.
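For concreteness, the prefactor quoted above can be reproduced with a one-line computation (our own check, using only the Planck values cited in the text):

import math

beta_iso_max = 1e-3
omega_b, omega_cdm = 0.022, 0.12          # Omega_B h^2 and Omega_CDM h^2
prefactor = (omega_cdm / omega_b) * math.sqrt(beta_iso_max / (1.0 - beta_iso_max))
print(f"|S_B| < {prefactor:.2f} * R")     # ~0.17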
Thus, scenarios that generate large baryon asymmetry prior to inflation are excluded from observations, as they result in fully anti-correlated baryon isocurvature fluctuations given in Eq. (<ref>).
[
If the CDM has fully correlated isocurvature perturbations that cancel the baryon isocurvature perturbations in Eq. (<ref>), our argument can be evaded.
Such CDM isocurvature perturbations
can be generated by the mechanism discussed in Sec. <ref> if (∂lnρ_ CDM / ∂ X_*) (Ẋ_* / H_*) ≃ 3 Ω_ B / Ω_ CDM (≃ 3/5) (see Eq. (<ref>)).
A scenario with a similar cancellation has been discussed in Ref. <cit.> in a different context for baryogenesis after inflation.
]
If the baryon asymmetry had the initial fluctuations, it would lead to independent baryon isocurvature perturbations, further strengthening the argument.
§ CORRELATED DM ISOCURVATURE PERTURBATION
Here we extend the discussion to the case with other pre-existing components in the Universe based on the δ N formalism <cit.>.
For concreteness, we specifically consider CDM.
Let us consider the dark matter energy density, ρ_CDM, at a time when the density is already fixed after inflation, and the CMB scale is still superhorizon.
Suppose that ρ_CDM is a function of some quantity X other than inflaton field value ϕ, especially at the time when the CMB scale exits the horizon during inflation.
In the following, we focus on the fluctuations on the CMB scale and denote quantities at the horizon exit of the CMB scale with a subscript *.
We thus assume ρ_ CDM = ρ_ CDM (ϕ_*, X_*).
The parameter X_* depends on the model and is not specified in our argument below. It may be identified as a field value of light boson DM, as we will discuss shortly.
The CDM isocurvature can be written as
𝒮_CDM = 3(ζ_CDM - ζ_γ) ,
where ζ_CDM and ζ_γ are curvature perturbations on the uniform density slicing with respect to CDM and photon, respectively.
Since the fluctuation of the photon energy density originates from the inflaton fluctuation, ζ_γ is given by
ζ_γ = (d N_γ/dϕ_*) δϕ_* = - H_* δϕ_*/ϕ̇_* = ℛ .
Here, the e-folding number N_γ≃ N is evaluated between the flat slicing during inflation and the uniform density slicing for photon during the radiation-dominated era.
On the other hand, since we assume that the CDM density depends on the parameter X_* in addition to the inflaton field value ϕ_*, the uniform density slicing for CDM is different from that for photon, and ζ_CDM receives other contributions.
Since ζ_CDM is the curvature perturbation when ϕ_* and X_* receive fluctuations, it is given by
ζ_CDM = N_c(ϕ̅_* + δϕ_*, X̅_*(ϕ̅_*) + δ X_*) - N_c(ϕ̅_*, X̅_*(ϕ̅_*)) ,
where the e-folding number N_c is now evaluated between the flat slicing during inflation and the uniform density slicing for ρ_ CDM(ϕ_*, X_*).
We explicitly write the time dependence of X̅_* via ϕ̅_*-dependence by regarding ϕ̅_* as a timer field.
Thus, ζ_CDM is written in terms of derivatives as
ζ_CDM = (∂ N_c/∂ϕ_*) δϕ_* + (∂ N_c/∂ X_*) δ X_* = (d N_c/dϕ_*) δϕ_* - (∂ N_c/∂ X_*) Ẋ_* δϕ_*/ϕ̇_* + (∂ N_c/∂ X_*) δ X_* ,
where δϕ_* and δ X_* are evaluated on the flat slicing.
Since the inflaton can be regarded as a timer field, we can consider the first term in the second line to denote the fluctuation of the time, and therefore this term should be identified with the ordinary curvature perturbation, ζ_γ or ℛ.
The second term comes from the difference between the total derivative and partial derivative, which is a new source of isocurvature perturbations for pre-existing DM.
The last term expresses the isocurvature perturbation due to fluctuations of the parameter itself that arise independently of the inflaton fluctuations, which has been discussed widely and we do not focus on in this letter.
The second term on the right-hand side of Eq. (<ref>) gives an isocurvature perturbation,
𝒮_CDM≃∂lnρ_ CDM/∂ X_*Ẋ_*/H_*ℛ ,
where we use ∂ N_c / ∂ X_* = (1/3) ∂lnρ_ CDM / ∂ X_*.
We thus obtain a constraint,
|(∂lnρ_ CDM/∂ X_*) (Ẋ_*/H_*)| ≲ 0.032.
If this is not satisfied, pre-existing dark matter is excluded by the CMB observations in a similar way to the pre-existing baryon asymmetry.
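The algebra behind the expression for 𝒮_CDM above, together with the numerical bound, can be checked symbolically. This is our own sketch (symbol names are ours), assuming sympy is available:

import sympy as sp

Hs, phidot, dphi, Xdot, dlnrho = sp.symbols('Hs phidot dphi Xdot dlnrho')

R = -Hs * dphi / phidot                      # curvature perturbation R = -H_* dphi_*/phidot_*
dNc_dX = sp.Rational(1, 3) * dlnrho          # dN_c/dX_* = (1/3) dln(rho_CDM)/dX_*
second_term = -dNc_dX * Xdot * dphi / phidot # second term of zeta_CDM
S_CDM = 3 * second_term                      # S_CDM = 3 (zeta_CDM - zeta_gamma), keeping this term only
print(sp.simplify(S_CDM - dlnrho * (Xdot / Hs) * R))                        # 0, as claimed

# bound from beta_iso < 10^-3 for a pure CDM isocurvature mode
print(sp.sqrt(sp.Rational(1, 1000) / (1 - sp.Rational(1, 1000))).evalf(3))  # ~0.032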
One example of excluded scenarios is a light scalar dark matter moving before the CMB scale exits the horizon <cit.>, where X_* is identified as the scalar field value.
Another example is the misalignment production of hidden photon dark matter with an exponentially large initial field value, where X_* is identified as the amplitude of the hidden photon, A_i ∝ a^-1. (See Refs. <cit.> for model-building efforts to evade this suppression.)
We note that the standard misalignment mechanism <cit.> for a scalar field does not suffer from this type of isocurvature perturbations if its mass is much smaller than H_*.
In this case, one may identify X_* as the field value of the scalar field, which is almost constant during inflation, i.e., Ẋ_* ≃ 0
and the second term of Eq. (<ref>) is negligibly small.
Then the last term of Eq. (<ref>) can be important
because it gives δ X_* ≃ H_* / (2π).
This contribution has been extensively discussed in the literature <cit.>.
A similar discussion can be applied to baryon number density <cit.>.
The argument in this section can be applied to baryon isocurvature perturbations to reproduce Eq. (<ref>), where one can identify X_* as the baryon number density.
This formulation clarifies a possible loophole in our argument.
If the baryon asymmetry is stored by the inflaton itself and is given by a function solely of the inflaton,
the second and third terms of Eq. (<ref>) are absent, and the isocurvature perturbations are not generated.
§ DISCUSSIONS
We have shown that if the baryon asymmetry is generated before inflation, the fluctuation of the duration of inflation induces baryon isocurvature perturbations proportional to the curvature perturbation at the end of inflation.
As a result, we conclude that the baryon asymmetry of the Universe cannot be explained by large initial values before inflation.
It is worth noting that our argument is unlikely to be avoided by the anthropic argument because galaxies will form even in universes with sizable isocurvature perturbations.
Note that our result does not exclude baryogenesis during inflation if baryon asymmetry is generated much after the CMB scales exit the horizon.
Since the inflaton can be identified as a timer field, one can consider a scenario in which baryogenesis is triggered
at a certain field value of the inflaton
and baryon density is uniformly generated on the comoving slice.
Then, the generated baryon asymmetry has isocurvature perturbations only on smaller scales than the horizon scale at baryogenesis.
Although the baryon isocurvature is also constrained by the inhomogeneous big bang nucleosynthesis on smaller scales than the CMB scales <cit.>, this constraint does not exclude the baryon isocurvature perturbations of the same order as the curvature perturbations unless the curvature perturbation is significantly enhanced on small scales.
We, therefore, conclude that baryogenesis must take place
after the CMB scales leave the horizon during inflation.
We emphasize that our discussion can be applied to a component other than the baryon asymmetry such as CDM as long as it exists before the CMB scales exit the horizon during inflation and evolves in time during inflation so that the duration of inflation affects its density in the later universe.
The magnitude of the isocurvature perturbations depends on how the time evolution during inflation affects the density in the later universe.
Lastly, we would like to mention the similarity between our argument and the generation of baryon asymmetry and/or dark matter in scenarios such as the curvaton scenario and similar ones. It is widely recognized that unless the baryon and/or dark matter is generated after the adiabatic density perturbation is formed by the curvaton, correlated isocurvature perturbations are produced <cit.>. However, to the best of our knowledge, it has not been recognized that the same argument applies to the standard inflationary scenario as well. The purpose of this letter is to clarify this point and to show definitively that preparing a large initial baryon asymmetry before inflation to account for the observed baryon asymmetry in our Universe is already observationally excluded.
§ ACKNOWLEDGMENTS
The present work is supported by JSPS KAKENHI Grant Numbers 20H01894 (F.T.), 20H05851 (F.T., M.Y., and W.Y.), 21K20364 (W.Y.), 22H01215 (W.Y.), 22K14029 (W.Y.), 23K13092 (M.Y.), and 23KJ0088 (K.M.), and JSPS Core-to-Core Program (grant number: JPJSCCA20200002) (F.T.).
MY was supported by MEXT Leading Initiative for Excellent Young Researchers.
This article is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology).
|
http://arxiv.org/abs/2307.00537v1 | 20230702103416 | Inflationary origin of gravitational waves with \textit{Miracle-less WIMP} dark matter in the light of recent PTA results | [
"Debasish Borah",
"Suruj Jyoti Das",
"Rome Samanta"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
[email protected] of Physics, Indian Institute of Technology Guwahati, Assam 781039, India
[email protected] of Physics, Indian Institute of Technology Guwahati, Assam 781039, India
[email protected], Institute of Physics of the Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic
Motivated by the recent release of new results from five different pulsar timing array (PTA) experiments claiming to have found compelling evidence for primordial gravitational waves (GW) at nano-Hz frequencies, we consider the prospects of generating such a signal from an inflationary blue-tilted tensor power spectrum in a specific dark matter (DM) scenario dubbed Miracle-less WIMP. While the Miracle-less WIMP gets thermally overproduced due to its insufficient interaction rate with the standard bath, inflationary blue-tilted gravitational waves (BGW) lead to conflict with cosmological observations if solely responsible for the PTA events. Both of these problems are circumvented with late entropy dilution, bringing the DM abundance within limits while creating a doubly peaked feature in the BGW. The blue-tilted part of one of these peaks can fit the NANOGrav 15 yr data at the 1σ level. The particle physics setup used here for illustration, namely the gauged U(1)_B-L model, naturally leads to a Miracle-less WIMP and a long-lived diluter for entropy dilution while also having GW complementarity due to cosmic strings.
Inflationary origin of gravitational waves with Miracle-less WIMP dark matter in the light of recent PTA results
Rome Samanta
August 1, 2023
================================================================================================================
Introduction: Recently, four different pulsar timing array (PTA) experiments namely NANOGrav <cit.>, European Pulsar Timing Array (EPTA) together with the first data release from Indian Pulsar Timing Array (InPTA) <cit.>, PPTA <cit.>, all part of the consortium called International Pulsar Timing Array (IPTA), have released their latest findings hinting at significant evidence for stochastic gravitational waves (GW) background at nano-Hz frequencies supported by Hellings-Downs inter-pulsar correlations. Similar evidence with larger statistical significance has also been reported by the Chinese Pulsar Timing Array (CPTA) collaboration <cit.>. While supermassive black hole binary (SMBHB) mergers can, in principle, generate such a signal though with a mild tension in the present data, plenty of scopes exist for exotic new physics to chip in <cit.>. Several follow-up papers have also studied the possible origin or implications of this observation from the point of view of dark matter <cit.>, axions or axion-like particles <cit.>, SMBHB <cit.>, first order phase transition <cit.>, primordial black holes <cit.>, primordial magnetic field <cit.>, domain walls <cit.>, inflation <cit.>, cosmic strings <cit.>, astrophysical neutrino oscillation <cit.> and QCD crossover <cit.>.
In this work, we revisit the recently proposed GW probe of a specific dark matter (DM) scenario known as Miracle-less WIMP<cit.> together with GW generated from inflationary blue-tilted tensor power spectrum. While weakly interacting massive particle (WIMP), the popular DM paradigm, has not shown up at direct search experiments yet, it may also be indicative of the fact that DM perhaps interact with the standard model (SM) bath more weakly. In the ballpark of Miracle-less WIMP, DM-SM interaction rates fall short of the required WIMP DM criteria, but large enough to produce it in thermal equilibrium. While typical WIMP DM mass is restricted to be within a few GeV <cit.> to few hundred TeV <cit.>, Miracle-less WIMP can have a much wider range of masses. One natural way to achieve such a weaker cross-section is to consider a heavy mediator in the form of a U(1) gauge boson. The heavy gauge boson mediator may arise from spontaneous U(1) breaking which also leads to the formation of cosmic strings (CS) <cit.>. These CS can generate stochastic GW with a characteristic spectrum which can be within the reach of near future GW detectors if the scale of symmetry breaking is sufficiently high <cit.>. While 12.5 yr data from NANOGrav <cit.> could be explained with stable cosmic string as the source of GW <cit.>, the 2023 data can not be fitted well with stable CS due to the preferred slope <cit.> which does not arise in the flat GW spectrum of CS.
While the particle physics setup we use for illustration naturally exhibits GW complementarity due to stable CS, we consider inflationary fluctuations to be the primary source of GW in order to explain the recent PTA data. However, similar to Miracle-less WIMP DM overclosing the universe due to thermal overproduction, GW from inflationary blue-tilted tensor power spectrum can violate the bounds from big bang nucleosynthesis (BBN) as well as cosmic microwave background (CMB) on effective relativistic degrees of freedom N_ eff. Both of these issues can be tackled by a source of entropy dilution in the early universe which not only gives rise to consistency with observations but also leads to a GW spectrum that can explain the 2023 PTA data while being verifiable in future GW experiments at higher frequencies due to the unique spectral shape. The entropy dilution required to satisfy the correct relic of DM also leads to a doubly peaked feature in the blue-tilted GW (BGW) from inflationary tensor perturbations with blue-tilted part of the low frequency peak fitting NANOGrav 15 yr data at 1σ level. This also leads to a unique correlation between DM mass and peak frequencies of the GW spectrum. While typical slow-roll inflation models cannot predict such blue tilt, many models beyond slow-roll predict tensor blue tilt, e.g.,<cit.>. Because the PTA experiments such as NANOGrav continue to prefer a positive slope of the GW spectrum, GW with tensor blue tilt are one of the most favorable candidates <cit.> for nano-Hz frequency GW with a positive slope. Additionally, such GW not only exhibit testable characteristic spectral features at high frequencies but even for GW detection below nano-Hz frequencies <cit.>, they are among only a few candidates of strong amplitude primordial GW.
Miracle-less WIMP Dark Matter:
In order to show a realistic scenario, we consider the example of gauged U(1)_B-L model <cit.>. As studied earlier <cit.>, a singlet Dirac fermion χ with non-trivial B-L charge q_χ can be the Miracle-less WIMP DM candidate in this model, stabilised by a remnant Z_2 symmetry. DM overproduction and subsequent entropy dilution from heavy right handed neutrino (RHN) which are part of this model naturally, are dictated by the U(1)_B-L parameters. However, our generic conclusion remains valid in any other setup as long as DM overproduction and required entropy dilution due to early matter dominated phase realised from a long-lived diluter are naturally realised.
The Lagrangian involving DM is given by
ℒ_ DM = i χD(q_χ) χ-m_χχχ.
where
D(q_χ) χ =
γ^μ(∂_μ
+ i g_BL q_χ Z_BL_μ) χ.
One can write similar Lagrangian for RHN as well. RHNs and U(1)_B-L gauge boson Z_BL acquire non-zero masses due to spontaneous symmetry breaking induced by vacuum expectation value v_B-L of a singlet scalar with B-L charge 2. Now, for sufficiently heavy Z_BL, DM around or below the TeV ballpark may freeze-out from the bath while being relativistic. The DM relic density, for relativistic freeze-out, is given by <cit.>
Ω_χ h^2 =2.745× 10^8× Y_∞ m_χ,
where Y_∞=0.278/g_*s(x_f)×3 g_χ/4 is the asymptotic comoving DM density, with g_χ and g_*s(x_f) being the DM internal degrees of freedom (dof) and the entropy dof of the universe at the DM freeze-out temperature (T_f = m_χ/x_f), respectively. We also consider g_*s(x_f)=106.75 for the standard model (SM) entropy dof, as such freeze-out occurs well above the electroweak scale. If χ leads to overabundance, the required entropy dilution factor (facilitated by the decay of the lightest RHN N_1), S = Ω_χ h^2/0.12, can be approximated as <cit.>,
S≃[2.95×(2π^2 g̃_*(T_N_1)/45)^1/3(r M_N_1)^4/3/(Γ_N_1 M_P)^2/3]^3/4,
where g̃_*(T_N_1) is the number of relativistic dof during N_1 decay at T=T_N_1. The parameter r is the freeze-out number density of N_1.
r=(g_N_1/2)× 135 ζ(3)/(4π^4 g_*^ fo).
Assuming instantaneous decay of N_1 (Γ_N_1 M_P=1.66 √(g̃_*(T_N_1)) T_N_1^2) and considering relativistic freeze-out for N_1, we find,
T_N_1≃ 3.104× 10^-10(M_N_1/m_χ) GeV.
Miracle-less WIMP and inflationary blue-tilted gravitational waves:
The following perturbed FLRW line element describes gravitational waves:
ds^2=a(τ)^2[-dτ^2+(δ_ij+h_ij)dx^idx^j],
where τ and a(τ) are the conformal time and scale factor. The transverse and traceless (∂_ih^ij=0, δ^ijh_ij=0) part of h_ij represents the gravitational waves. After the Fourier space decomposition of h_ij and solving the GW propagation equation in Fourier space, the energy density of the GW is computed as <cit.> ρ_ GW=1/(32π G)∫(dk/k)(k/a)^2T_T^2(τ, k)P_T(k),
where T_T^2(τ, k)=|h_k(τ)|^2/|h_k(τ_i)|^2 is a transfer function with τ_i as the initial conformal time, and k=2π f with f being the present frequency. The quantity P_T(k)=k^3/π^2|h_k(τ_i)|^2 characterizes the primordial power spectrum and relates to the inflation models with specific forms, which, generally, is parametrized as a power-law:
P_T(k)=r A_s(k_*)(k/k_*)^n_T,
where r≲ 0.06<cit.> is the tensor-to-scalar-ratio, A_s ≃ 2× 10^-9 is the scalar perturbation amplitude determined at the pivot scale k_*=0.01 Mpc^-1. We shall treat the tensor-spectral index n_T as constant plus blue-tilted (n_T>0). Recall that the single field slow-roll inflation models correspond to the consistency relation: n_T=-r/8<cit.>, i.e., the spectral index is mildly red-tilted (n_T≲ 0). The GW energy density pertinent to detection purposes is expressed as
Ω_ GW(k)=k/ρ_cdρ_ GW/dk,
where ρ_c=3H_0^2/8π G with H_0≃ 2.2 × 10^-4 Mpc^-1 being the Hubble constant. From Eq.(<ref>), the Ω_ GW(k) can be derived as
Ω_ GW(k)=1/12H_0^2(k/a_0)^2T_T^2(τ_0,k)P_T(k),
where τ_0=1.4× 10^4 Mpc.
The transfer function has been computed very accurately in literature <cit.>. In presence of an intermediate matter domination T_T^2(τ_0,k) is calculated as <cit.>
T_T^2(τ_0,k)=F(k)T_1^2(ζ_ eq)T_2^2(ζ_N_1)T_3^2(ζ_N_1 R)T_2^2(ζ_R),
where F(k) is given by
F(k)=Ω_m^2( g_*(T_k, in)/g_*0)(g_*s0/g_*s(T_k, in))^4/3(3j_1(kτ_0)/kτ_0)^2.
In Eq.(<ref>), j_1(kτ_0) is the spherical Bessel function, Ω_m=0.31, g_*0=3.36, g_*0s=3.91 and an approximate form of the scale-dependent g_* can be found in <cit.>
The individual transfer functions read
T_1^2(ζ)=1+1.57ζ+ 3.42 ζ^2,
T_2^2(ζ)=(1-0.22ζ^1.5+0.65ζ^2 )^-1,
T_3^2(ζ)=1+0.59ζ+0.65 ζ^2,
where ζ_i≡ k/k_i, with the modes k_i's in the units of Mpc^-1 given by
k_ eq=7.1× 10^-2Ω_m h^2
k_N_1=1.7× 10^14(g_*s(T_N_1)/106.75)^1/6(T_N_1/10^7 GeV),
k_N_1 R=1.7× 10^14 S^2/3(g_*s(T_N_1)/106.75)^1/6(T_N_1/10^7 GeV)
and
k_R=1.7× 10^14S^-1/3(g_*s(T_ RH)/106.75)^1/6(T_ RH/10^7 GeV)
that cross the horizon at standard matter-radiation equality temperature T_ eq, at T_N_1 when N_1 decays, at T_N_1R when N_1 starts to dominate the energy density and at T_ RH when the universe first re-heat after inflation, respectively. Two major constraints on blue-tilted GW arise from the effective number of neutrino species and LIGO bound on stochastic GW. The BBN constraint is given by <cit.>∫_f_ low^f_ high f^-1df Ω_ GW(f)h^2≲ 5.6× 10^-6Δ N_ eff,
with Δ N_ eff≲ 0.2. The frequency f_ low corresponds to the mode entering the horizon at the BBN epoch, which can be taken as f_ low≃ 10^-10 Hz. On the other hand, we take f_ high≃ 10^5 Hz, which is sufficient for numerical computation as the spectrum falls and the integration saturates at higher frequencies. We consider the LIGO bound in a much more simple way. We discard GW with amplitude more than 2.2× 10^-9 at f_ LIGO=25 Hz <cit.>. Given the above equations and constraints, we now compute the gravitational wave spectrum for a few benchmark values.
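To illustrate how quickly a blue-tilted spectrum saturates the BBN bound in the absence of any suppression, one can integrate a pure power-law Ω_GW(f)h^2 over the stated frequency range. This is our own rough sketch; the amplitude and tilt below are toy values, not the benchmark points of this work:

import numpy as np

f_yr = 1.0 / (365.25 * 24 * 3600)              # Hz
f = np.logspace(-10, 5, 4000)                  # integration range quoted in the text

def omega_gw_h2(f, amp_yr_h2=1e-9, n_T=0.8):   # toy blue-tilted power law
    return amp_yr_h2 * (f / f_yr) ** n_T

lhs = np.trapz(omega_gw_h2(f), np.log(f))      # integral of Omega_GW h^2 d(ln f)
print(lhs, "must stay below", 5.6e-6 * 0.2)    # badly violated without entropy dilution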
First, notice that barring n_T and r, the key quantities to evaluate the spectrum are T_N_1, S and T_ RH. By construction, in this model, T_ RH should be large–at least 𝒪(v_B-L) (because we need heavy Z_BL with mass ∼ v_B-L for weaker DM interaction cross-section and in addition, the dark matter plus the N_1 number densities are computed in first radiation domination after the universe reheats at T_ RH). However, blue-tilted GW with large n_T are incompatible with high T_ RH (the amplitude saturates BBN and LIGO bounds) unless there is large entropy production. The Miracle-less WIMP scenario naturally exhibits intermediate matter domination by N_1, leading to large entropy production, which brings overproduced dark matter density to the observed value. Such large entropy production also suppresses the overall GW spectrum, plus depending on T_N_1, it creates another peak in the overall spectrum. Note from Eq.(<ref>) and Eq.(<ref>) that two free parameters of the model M_N_1 and m_χ enter in the computation of GW through S and T_N_1 and determine its spectral features. In Fig.<ref> (left panel), we show the corresponding spectrum for the benchmarks in Table.<ref> for T_ RH=10^ 11 GeV (BP1: blue, BP2: green, BP3: red). The benchmarks are chosen to fit the recent NANOGrav results to some extent, as we will discuss shortly. In principle, this model allows higher T_ RH, bu t one needs substantial entropy production to surpass the LIGO bound. However, in that case, the low-frequency GW amplitudes also get suppressed. Therefore, even though the spectral index is compatible with NANOGrav, the overall amplitude falls below the reported range.
We fit the NANOGrav-2023 data with a power-law signal represented by the characteristic strain
h_c(f)=A(f/f_ yr)^(3-γ)/2,
where A and γ are the strain amplitude and the timing-residual cross-power spectral index (γ = 13/3 for super-massive black hole mergers), and f_ yr= yr^-1, respectively. The normalised GW energy density is expressed in terms of strain as
Ω_ GW(f)=2π^2/3 H_0^2f^2h_c(f)^2=Ω_yr(f/f_ yr)^5-γ,
where Ω_ yr=2π^2/3 H_0^2A^2f_ yr^2. We fit Eq.(<ref>) to Eq.(<ref>) within the frequency range f∈[2× 10^-9, 6× 10^-8] for the chosen benchmarks, extract A and γ from the fit and project it on NANOGrav 95% and 68% contours as shown in Fig.<ref> (right panel). The fit-points lie close to the edge of the 95% contour because of the spectral index γ≃ 5-n_T. Therefore, for n_T∼ 1, the benchmark lies close to γ∼ 4. In principle, a large n_T and small r can provide a better fit (As shown with a black dashed curve for r=10^-6, n_T=1.5, and T_ RH=10^9). The fit improves for larger values of n_T. We nonetheless note that n_T, as large as, e.g., 2, is extremely difficult to obtain while being consistent with the constraints on other inflationary observables <cit.>. We, therefore, conclude that although recent NANOGrav data is poorly fitted with inflationary gravitational waves even with a spectral index as large as 1.12, weakly coupled WIMP dark matter with mass in the MeV-GeV ballpark remains a viable option to bring amplitudes of the inflationary GW at the level of PTAs despite a large T_ RH. In addition, the model predicts another high-frequency peak that can be constrained by the interferometers such as LIGO, making it the rarest dark matter scenario that can be tested with PTA-LIGO complementarity.
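The conversion from a strain power law (A, γ) to Ω_GW used in this fit can be sketched as follows. This is our own illustration; the amplitude and H_0 value below are placeholders, not the benchmark values of the paper:

import numpy as np

H0 = 67.4 * 1.0e3 / 3.086e22           # Hubble constant in 1/s (assuming 67.4 km/s/Mpc)
f_yr = 1.0 / (365.25 * 24 * 3600)      # Hz

def omega_gw(f, A=2e-15, gamma=13/3):
    h_c = A * (f / f_yr) ** ((3 - gamma) / 2)      # characteristic strain
    return 2 * np.pi**2 / (3 * H0**2) * f**2 * h_c**2

print(omega_gw(f_yr))                  # Omega_yr for these illustrative values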
We conclude with the following remarks:
∙ To suppress inflationary GW and bring them to the level of PTAs, generally one needs heavy N_1 (M_N_1∼ 10^10 GeV) so that one produces large entropy according to Eq.(<ref>). Therefore, in the Miracle-less WIMP scenario, dark matter cannot be arbitrarily heavy. Otherwise, it would require an extremely late time decay of N_1 (cf.Eq.(<ref>)), which might contradict BBN predictions.
∙ The Miracle-less WIMP scenario also predicts cosmic strings that radiate GW. Therefore one expects further spectral distortion as in <cit.>. However, for v_B-L∼ T_ RH∼ 10^11 GeV, the amplitude of the cosmic string radiated GW would be much smaller (max. Ω_ GW^CS∼ 10^-13) than the inflationary one, if the tensor tilt n_T∼ 1. The overall spectrum, nonetheless, can exhibit the features of cosmic string-radiated GW (Ω_ GW^ CS(f_ dip)>Ω_ GW^ BGW(f_ dip)–a plateau in the middle) for small values of n_T. Although the scenario then is disfavoured by the current PTA data.
Summary: Several pulsar timing array (PTA) experiments, NANOGrav, EPTA+InPTA, PPTA as well as CPTA have reported strong evidence of a stochastic common spectrum process with Hellings-Downs inter-pulsar correlations, suggesting a possible breakthrough towards the detection of a stochastic gravitational waves (GW) background at nano-Hz frequencies. While supermassive black hole binaries naturally generate such GW, another viable possibility could be a cosmological origin of such a background, which indeed provides an excellent fit to the recent data, e.g., to the NANOGrav 15 yr data. A blue-tilted inflationary gravitational wave spectrum is one of them. Inflationary GW with large blue-tilt not only generate a strong signal at nano-Hz frequencies; they offer the luring possibility to test post-inflationary cosmology with characteristic spectral features at higher frequencies. Generally, because of the large blue tilt, such GW saturate BBN bound on the effective number of neutrino species, disallowing high T_ RH temperature. However, if an entropy production epoch follows the standard reheating, blue-tilted GW evade the BBN constraints even though T_ RH is high. Besides, such a post-inflationary scenario also creates unique spectral features testable at multiple detectors spanning a wide range of frequencies. We show that a recently proposed dark matter model dubbed the Miracle-less WIMP model <cit.> naturally creates a dark matter mass-dependent matter epoch leading to entropy production prior to the BBN, and imprints blue-tilted GW. Dark matter mass in the MeV-GeV ballpark makes inflationary GW compatible with NANOGrav and generates another peak testable with the next LIGO runs. Because of their weak interaction cross-section, Miracle-less WIMPs naturally explain null results in dark matter direct detection, and standouts as one of the few, perhaps, the only dark matter candidate so far, offering a PTA-LIGO complementarity.
The work of D.B. is supported by the science and engineering research board (SERB), Government of India grant MTR/2022/000575. R. S. is supported by the MSCA-IF IV FZU - CZ.02.2.69/0.0/0.0/20 079/0017754 project
and acknowledges European Structural and Investment Fund and the Czech Ministry of
Education, Youth and Sports.
apsrev
|
http://arxiv.org/abs/2307.02472v2 | 20230705174548 | Deductive Additivity for Planning of Natural Language Proofs | [
"Zayne Sprague",
"Kaj Bostrom",
"Swarat Chaudhuri",
"Greg Durrett"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Current natural language systems designed for multi-step claim validation typically operate in two phases: retrieve a set of relevant premise statements using heuristics (planning), then generate novel conclusions from those statements using a large language model (deduction). The planning step often requires expensive Transformer operations and does not scale to arbitrary numbers of premise statements. In this paper, we investigate whether an efficient planning heuristic is possible via embedding spaces compatible with deductive reasoning. Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises. We explore multiple sources of off-the-shelf dense embeddings in addition to fine-tuned embeddings from GPT3 and sparse embeddings from BM25. We study embedding models both intrinsically, evaluating whether the property of deductive additivity holds, and extrinsically, using them to assist planning in natural language proof generation. Lastly, we create a dataset, Single-Step Reasoning Contrast (SSRC), to further probe performance on various reasoning types. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective heuristics and lack the ability to model certain categories of reasoning.
§ INTRODUCTION
One way to justify the truth of a statement is to give an explanation building logically towards that statement based on deduction from shared premises. The ways facts can be combined through reasoning are numerous, including many different modes of deduction like syllogism or modus tollens. This process can be automated with natural language processing, using systems to generate natural language proofs that use evidence to derive a claim through a structured argument. Large language models (LLMs) like GPT4 <cit.> have exhibited impressive performance in reasoning tasks. However, these models can still make unsound inferences <cit.>.
One reason for these errors is that models may fail to plan reasoning effectively. LLMs do not have explicit planning capabilities: they generate conclusions in a way that conflates lexical choice and decisions of what content to generate, and no alternatives are materialized in typical greedy or sampling-based LLM inference. A recent line of work <cit.> explores how to decouple these stages. However, what is still missing is a scalable method for doing planning in these kinds of natural language reasoning settings: past work involves early-fusion invocation of pre-trained LMs <cit.> and does not scale to thousands of premises.
This work explores the feasibility of planning the reasoning process directly in a vector space, where combining statements and retrieving similar statements can be efficiently implemented as addition and cosine similarity, respectively. We introduce deductive additivity (DA), a property of an embedding space necessary to enable this planning. A visualization of an embedding space with the deductive additivity property is shown in Figure <ref>. Each piece of evidence is embedded into a fixed-size vector, and the combined embeddings of two facts should be close to embeddings of statements that are entailed from those two facts via deduction. This property can help us plan when we are trying to derive a goal statement based on premise statements. New facts that bring us closer to that goal should be explored in the deductive reasoning process, so this vector space provides a natural heuristic: we want to find fact embeddings that, when summed, achieve the highest dot product with the encoding of our goal. Crucially, the vector-based nature of this heuristic facilitates rapid retrieval through efficient search algorithms.
Our experiments test both off-the-shelf embeddings (e.g., SimCSE <cit.>) as well as embeddings that are explicitly tuned for deductive additivity. First, we conduct intrinsic evaluations to see whether embeddings of standard encoders exhibit deductive additivity. We then test how well the method performs as a search heuristic on the natural language proof generation datasets EntailmentBank <cit.> and Everyday Norms: Why Not <cit.>. Finally, we create the Single-Step Reasoning Contrast (SSRC) dataset to benchmark each method on how well they model different reasoning categories, like syllogism or modus tollens, and how robust they are to common errors in reasoning, like negation[Code and data publicly available at <https://github.com/Zayne-sprague/Deductive_Additivity_for_Planning_of_Natural_Language_Proofs>].
Our main contributions are threefold: (1) We propose a novel method for planning reasoning steps over a collection of facts purely based on vector arithmetic. (2) We show that several embedding methods have promise for deductive additivity but do not fully meet the properties required for planning in natural language deduction scenarios even when explicitly fine-tuned for it. (3) We present a new dataset meant to help diagnose and identify areas where deduction planning methods are underperforming across a range of different reasoning categories.
§ PROBLEM DESCRIPTION AND MOTIVATION
Here we introduce the problem of proof generation, the system we use to generate proofs and deductive additivity.
§.§ Problem Setup
We explore the process of proving a goal statement (or claim) g by generating an entailment tree T, given a set of general-purpose facts X = x_1, ... x_n and a collection of instance-specific facts F = f_1, ... f_m. Instance-specific facts typically pertain to the context or background of a particular scenario, while general-purpose facts can be applied more broadly. An example can be seen in Figure <ref>, where F consists of two statements, “Joe is an animal” and “Joe is in outer space”, and all other facts belong to X. T is a binary-branching tree with its leaves being members of X and F while its non-leaf nodes (which we also call intermediates) are new statements generated via deductive reasoning. The root of T must logically entail g. We use the entailment models from past work <cit.>, which are based on WaNLI <cit.> to make this judgment.
The EntailmentBank dataset <cit.> formalizes three variants of this problem setting. The first setting, denoted as Task 1 (T1), provides only the general-purpose facts relevant to the construction of the gold entailment tree, making it the easiest setting as it eliminates the need to sift through irrelevant facts. Task 2 (T2) includes both the relevant facts and lexically similar distractor facts. Task 3 (T3) <cit.> includes all facts from a large corpus like Wikipedia as the general-purpose fact set X. In all these settings, the task involves iteratively building the entailment tree through deductions until the original goal g is entailed. Our experiments will focus on the T2 setting. [While the T3 setting offers a large-scale stress test for retrieval-based approaches like ours, we found in practice that a first-stage retrieval (i.e., converting T3 to T2) with BM25 worked well for all datasets considered in this work. Nevertheless, models that scale to large X sets will be useful for future systems tackling more sophisticated problems like automatic fact-checking.]
§.§ Proof Generation
We follow past work on these tasks <cit.> where the intermediate nodes of the entailment tree are generated from a pre-trained language model. Details on the model are in Appendix <ref>. Specifically, given two premise statements p_a and p_b, we assume access to a model P(d_ab| p_a, p_b) that places a distribution over valid deductions d given the two premises. If the two premises do not combine to yield any meaningful new conclusions, the behavior of this system is not well-defined.
To produce an entailment tree T, we follow the proof generation algorithm from <cit.>; we outline it here and detail all modules of the search algorithm in Appendix <ref>. We begin with our collection of premises P = {X ⋃ F}. In EntailmentBank and ENWN, the set P is given per dataset example. From P, a heuristic M ranks pairs of premises as to how useful their deduction will be in proving the claim g (also given per example). We denote a single ranked premise pair as a step in the search, and we term the current collection of steps at any moment in the search as the search fringe.
A deductive step model, S, pops the highest-ranked step (according to M) from the fringe and generates a set of deductions.[To thoroughly explore the space of all plausible deductions, we sample k generations each time (k = 5 in all our experiments).] These deductions are validated and added back to the pool of premises P, where the heuristic will rank all potential pairs of the new set of deductions with all other previous premises to create new steps in the search fringe. This process is repeated until the maxSteps limit is reached or the fringe has been exhausted.
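A schematic rendering of this search loop is given below. This is our own heavily simplified sketch: M, S, and entails stand in for the premise-ranking heuristic, the deductive step model, and the entailment model described above.

from itertools import combinations

def prove(premises, goal, M, S, entails, max_steps=10, k=5):
    pool = list(premises)
    fringe = [(M(a, b, goal), (a, b)) for a, b in combinations(pool, 2)]
    for _ in range(max_steps):
        if not fringe:
            break
        fringe.sort(key=lambda step: step[0], reverse=True)
        _, (a, b) = fringe.pop(0)                        # pop the highest-ranked step
        for deduction in S(a, b, num_samples=k):         # sample k generations per step
            if entails(deduction, goal):
                return deduction                         # goal proven
            # pair the new deduction with everything in the pool and re-rank
            fringe.extend((M(deduction, p, goal), (deduction, p)) for p in pool)
            pool.append(deduction)
    return None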
Our work focuses on investigating if the heuristics used during the search can leverage embedding spaces that exhibit deductive additivity.
§.§ Deductive Additivity
Recall that d_ab represents a valid conclusion from a pair of premises p_a and p_b. Our heuristics are based on an embedding function E: Σ^* →ℝ^n, embedding a sentence into n-dimensional space. We represent the sum of the embedded premises as the deductive trajectory embedding 𝐞^'_a+b = E(p_a) + E(p_b), where 𝐞^' signifies embeddings produced through arithmetic operations rather than the encoder E. An encoder E generates an embedding space exhibiting the property of deductive additivity if the deductive trajectory embedding has a higher cosine similarity with their embedded conclusion than any other statement, x, not entailed by the premises via deduction, denoted as p_a, p_b ↛ x. That is, we want
cos(𝐞^'_a+b, E(d_ab)) > cos(𝐞^'_a+b, E(x))
When the condition in Equation <ref> holds, the embedding space is capable of representing logical relationships strictly in their vectors and can be expressed through simple arithmetic operations such as addition.
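Concretely, the property can be probed with a few lines of code. In this sketch of ours, encode_fn stands in for any sentence encoder E, and distractors are statements not entailed by the premise pair:

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def exhibits_additivity(encode_fn, p_a, p_b, conclusion, distractors):
    target = encode_fn(p_a) + encode_fn(p_b)            # summed premise embeddings
    score_gold = cosine(target, encode_fn(conclusion))
    score_best_distractor = max(cosine(target, encode_fn(x)) for x in distractors)
    return score_gold > score_best_distractor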
§.§ Tuning for Deductive Additivity
Any sentence embedding method can be evaluated for whether or not it exhibits deductive additivity. However, we additionally describe a method for fine-tuning an embedding model to have this property.
We use EntailmentBank to obtain a collection of premise deduction triplets D = {p_a, p_b, d_ab}. Subsequently, we use a loss function to push the encoded representations of the premises closer to that of the deduction <cit.>.
l_ab = - logexp(e^'_a+b· E(d_ab)/τ)/∑^N_i=1exp(e^'_a+b· E(d_i) / τ)
where N represents the batch size. Most deductions d_i will not entail the deduction d_ab, so they serve as suitable negatives from the perspective of Equation <ref>.
For training, we employ temperature scaling in the contrastive loss in Equation <ref>. Previous work has found that contrastive learning benefits from having large batch sizes, more in-batch negatives, and hard negatives <cit.>. To take advantage of hard in-batch negatives, we leverage the tree structures in our training data (EntailmentBank). Specifically, each batch in our training loop contains all the intermediate labeled steps for an entailment tree in EntailmentBank, covering multiple trees. We discover that triplets from the same tree serve as suitable proxies for hard negatives in our contrastive learning process, allowing us to bypass the need for hard negative mining. Our batches include 100 trees, as many as we could fit onto our GPU, which equates to 200-300 triplets in a batch. We found that increasing the batch size led to better performance. We implement our method with the PyTorch Metric learning library <cit.>.
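One way to realize this objective (our reading; hyperparameters such as the temperature below are illustrative, not the values used in the paper) is the standard in-batch cross-entropy form:

import torch
import torch.nn.functional as F

def deductive_contrastive_loss(prem_a, prem_b, deductions, tau=0.05):
    # prem_a, prem_b, deductions: [N, d] embeddings for a batch of triplets
    targets = prem_a + prem_b                           # summed premise embeddings
    logits = targets @ deductions.T / tau               # [N, N] similarity matrix
    labels = torch.arange(targets.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)              # -log softmax on the diagonal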
Following each epoch of training, we assess the encoder's performance by our second intrinsic evaluation, Ranking Gold Steps. We use the EntailmentBank T2 development set for checking when to stop training the encoder.
§.§ Caching
Certain heuristics used in proof generation algorithms, such as the one we construct using deductive additivity, can cache the encodings of the initial evidence pool X. This offers significant time savings in completing the first step of a search procedure (where a non-cached method would need to set up and rank the pairs for the initial set). However, any subsequent deductions will need to be encoded since they cannot be precomputed and cached. We also found the time savings to be relatively limited in the T1 and T2 settings since n is relatively small, so we do not expand on this capability further.
§ HEURISTICS AND DATASETS
To measure the performance of using deductive additivity as a proof generation heuristic, we explore five heuristics and three datasets.
§.§ Baseline Heuristics
We consider two baseline heuristics for ranking and retrieving relevant statements: BM25, a sparse retrieval method, and the original heuristic from previous work, SCSearch, which employs an early-fusion premise ranker model.
BM25 BM25 <cit.> matches items in an index with a query via sparse vector representations, capturing lexical overlap but not deeper semantic similarity. In the proof generation search procedure, we index all concatenations of strings in each step (two premises, generated deductions, or one of both), then retrieve the best step based on the goal.
SCSearch Past work <cit.> has used heuristics with a substantially different structure. These heuristics use language models like DeBERTa to score premise pairs conditioned on a claim. Specifically, these models are of the form 𝐰^⊤ E(p_1,p_2,g); they encode p_1, p_2, and g jointly with an encoder model. A linear layer 𝐰 is then used to predict a logit value used for ranking. These models are trained as binary classifiers on EntailmentBank by selecting positive examples of premise pairs that eventually lead to g and negative examples of unrelated premise pairs. This allows the language model to determine if the immediate deduction would be beneficial towards deducing the claim that it is conditioning on. It also allows the language model to see the claim and premise pairs in context and model interactions between them. Because these methods use Transformers to score the premise pair and can model nonlinear interactions between the premises, these models are strictly more expressive than vector-based heuristics.
§.§ Embedding-based Heuristics
To test if embeddings with deductive additivity can be useful in proof generation, we employ three different heuristics that all use deductive additivity but with different encoders to compare different embedding spaces. A deductive additivity heuristic will, for each step, encode any new deductions from the previous step and then sum all the pairs to create deductive representations 𝐞^'_d for hypothetical deduced pairs. We then compute the cosine similarity of each 𝐞^'_d with 𝐞_g (the goal embedding), which is used as a score to select the next step S_i = dargmax cos(𝐞^'_d, 𝐞_g).
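In code, each of these heuristics differs only in the encoder used inside the following ranking routine (our own sketch; function and variable names are ours):

import numpy as np
from itertools import combinations

def rank_steps(embeddings, goal_embedding):
    # embeddings: dict mapping statement -> vector; returns candidate pairs, best first
    g = goal_embedding / np.linalg.norm(goal_embedding)
    scored = []
    for (s1, e1), (s2, e2) in combinations(embeddings.items(), 2):
        e = e1 + e2                                     # deductive trajectory embedding
        scored.append((float(e @ g / np.linalg.norm(e)), (s1, s2)))
    return sorted(scored, reverse=True)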
We consider the deductive additivity heuristic under three different encoders: SimCSE and GPT3 are used to test off-the-shelf sentence encoders for deductive additivity, and finally, we fine-tune GPT3 explicitly for deductive additivity.
SimCSE SimCSE <cit.> is an encoder that produces sentence embeddings optimized using a contrastive objective.[Note that this contrastive objective is different from ours. Training for SimCSE was performed on natural language inference (NLI) examples from MNLI and SNLI datasets. From the perspective of data assumptions, we place it in the “fine-tuned” category; although it hasn't been trained on EntailmentBank data explicitly, it uses related entailment data.] We test to see if this encoder produces an embedding space where deductive additivity holds.
GPT3 We use OpenAI's embedding endpoint to create sentence embeddings using the Ada model <cit.>. We test to see if this encoder produces an embedding space where deductive additivity holds as well.
GPT3-tuned We combine OpenAI's embedding endpoint with three additional dense layers using the GLU activation function with residual connections between each layer. We then fine-tune these three layers using the EntailmentBank T1 dataset as described in Section <ref>.
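One plausible implementation of the described head is sketched below. This is our reading of the text; the 1536-dimensional width (matching Ada embeddings) and other details are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUProjectionHead(nn.Module):
    def __init__(self, dim=1536, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, 2 * dim) for _ in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = x + F.glu(layer(x), dim=-1)   # GLU activation with a residual connection
        return x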
§.§ Datasets
EntailmentBank (EB) This dataset comprises annotated entailment trees for textbook-based science facts <cit.>. We used this dataset for training the majority of our models in a T1 setting. We evaluate the models on the test slice of entailment trees for the T2 task setting.
Each example in EB contains a set of premises, P, and a claim g that we are trying to prove given P. To prove g, the system has to produce a series of deductions by combining two premises from the set P, then combining intermediate deductions and the premises in P until the claim is proven. Whether it is proven is determined via an entailment model scoring g above a certain threshold from some generated conclusion following previous work <cit.> and detailed further in Appendix <ref>. Planning heuristics must determine which premise-premise or premise-deduction pairs are most likely to help in proving the claim, as the set of pairwise premises and intermediate deductions can be large.
In the T2 setting, the number of premises n is fairly small; n < 30 for most examples. There are usually only 3 to 5 deductions involved to produce the annotated entailment tree. We allow for a total of 10 steps (maxStep), and for each step, we allow for five generations to be sampled (k).
Everyday Norms: Why Not (ENWN) ENWN <cit.> contains annotated entailment trees for common everyday scenarios. Structurally, ENWN resembles EntailmentBank but with a different domain of reasoning and a larger number of required deductive steps on average (4.71 vs. 4.26). ENWN aims to combine common social rules deductively to determine whether a person should perform a particular action (usually something they should not do). ENWN currently does not have a T2 or T3 setting.
§.§ Single-Step Reasoning Contrast Dataset
Both EntailmentBank and ENWN test a subset of logical inference types but do not necessarily have broad coverage. For example, EntailmentBank has very few examples involving negation, despite this being a very important phenomenon to model in practice. We want to test whether our embedding methods can handle a wider range of cases.
We construct a new dataset that examines common forms of logical reasoning[We initially employed ChatGPT for annotating examples in EntailmentBank and ENWN. However, it did not yield consistent labels, signaling an opportunity for further exploration in future research. Instead, we adopted a different approach, generating a selection of widely-used labels that we subsequently employed as the reasoning categories within the SSRC dataset.] via synthesized examples. We consider fourteen categories: Analogy, Categorical Syllogism, Causal reasoning, Classification, Comparison, Composition, Division, Modus Ponens, Modus Tollens, Definition, Temporal Logic, Propositional Logic, Quantificational Logic, and Spatial Relationship. For each category, we use GPT-3.5 to generate ten examples of deductions given two premises using the corresponding reasoning category.
For every example deduction, we prompt GPT 3.5 further to perturb the premises in four ways creating additional examples of incorrect deductions. For each perturbation, we create three examples where one or both premises have been negated, three examples where one or both premises are a false premise, fifteen examples where one or both premises are an irrelevant fact, and three examples where one or both premises have an incorrect quantifier (usually meaning that “some”, “all”, or “none” has been prepended to the premise). Examples from the dataset from different reasoning categories and perturbation types are shown in Section <ref> of the Appendix in Table <ref>. Prompts to create examples and perturb the examples can be found in Appendix <ref>.
§ EXPERIMENTS
§.§ Intrinsic Evaluation
We perform two intrinsic evaluations to test if encoders exhibit the deductive additivity property: do they rank gold premise pairs in the proof generation task above incorrect pairs?
Comparing Deduction Embedding Representations
In our first intrinsic evaluation, we measure the cosine similarity distributions of premise pairs and a deduction in three settings to test for deductive additivity. The first setting uses a deduction d_ab and measures the cosine similarity of its embedding E(d_ab) with a random premise pair P_r = {p_x, p_y} where p_x and p_y are drawn randomly from the set of premises, U(P). The next setting looks at partially random premise pairs, P_p = {p_a, p_y} where p_a is one of the gold premises P_g = {p_a, p_b} that yield the deduction d_ab. Finally, we measure the distribution of scores for the gold premise pair P_g and the following deduction from those premises d_ab. These three settings correspond to Random, Partial, and Gold, respectively, in Figure <ref>.
Additionally, we also compared the gold premise pair P_g = {p_a, p_b} with model-generated deductions S_d(p_a, p_b) = d'_ab and measured their cosine similarity cos(𝐞^'_a+b, E(d'_ab)). Finally, we measured the cosine similarity scores of the annotated deductions and the generated deductions cos(E(d_ab), E(d'_ab)); this is a sort of sanity check to see if the deductive additivity property holds for proof generation. This experiment checks whether the step model introduces significant deviation in embedding similarity compared to using the gold steps. These settings correspond to Model and G. to S. respectively in Figure <ref>, all settings have their averages reported in Table <ref> in Section <ref> of the Appendix as well.
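For concreteness, the basic additivity comparison used in these settings can be sketched as follows; the encoder is treated as a black-box function returning fixed-width vectors, and the variable names and the stand-in encoder are ours.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def additivity_score(encode, premise_a, premise_b, deduction):
    """cos(E(p_a) + E(p_b), E(d_ab)); values near 1 indicate that the summed
    premise embeddings point in the direction of the deduction embedding."""
    return cosine(encode(premise_a) + encode(premise_b), encode(deduction))

# Stand-in encoder for illustration only; a real encoder (e.g. SimCSE) returns
# a fixed-width vector per sentence.
rng = np.random.default_rng(0)
fake_encode = lambda s: rng.standard_normal(768)
print(additivity_score(fake_encode, "p_a", "p_b", "d_ab"))
```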
Embedding Representations Results
Figure <ref> shows a slight overlap between the cosine similarity score distributions of random and gold pairs, aligning with expectations and showing that Equation <ref> roughly holds for all three encoders. However, the partial pairs have much more overlap with the distribution of gold pairs for each encoder. Concerningly, the partial pairs are much more numerous because these pair one of the ground truth statements with an irrelevant statement, forming a pair we do not want the heuristic to surface. We will see the performance ramifications of this in the end-to-end evaluation. On a positive note, we also see high agreement between the gold premise pair and the generated deduction, indicating that deductions generated by the step model are similar to the annotated deductions.
Ranking Gold Steps
The second intrinsic evaluation measures the rankings of premise pairs, P_pairs, conditioned on a deduction embedding, E(d_ab), where one pair is the gold premise pair P_g = {p_a, p_b} which yield the deduction. All other pairs are either random P_r = {p_x, p_y}, where p_x and p_y are sampled uniformly from the set of premises U(P), or are partially random P_p = {p_a, p_y}. The full list of premise pairs is the union of all these sets P_pairs = P_g ∪ P_p ∪ P_r.
We calculate scores for each pair according to how each heuristic scores premise pairs, scores = {heuristic(P_s, d_ab) | P_s ∈ P_pairs}. For the heuristics using deductive additivity (DA), the scores are cosine similarities, scores = {cos(𝐞^'_n+m, E(d_ab)) |{p_n, p_m}∈ P_pairs}. Finally, we sort scores and find the rank of the gold premise pair.
We calculate the mean reciprocal rank (MRR) using the ranks of the gold premise pairs across all examples in the EntailmentBank T2 and Everyday Norms: Why Not datasets. We also repeat this process for EntailmentBank T2 where we make the target of the search the claim g instead of the immediate deduction d_ab. Because the claim g is often a product of multiple deductions in the premise set P, we expect the MRR scores to be lower than the scores on the immediate deductions d_ab. ENWN does not have a T2 setting, so we do not show the claim-conditioned scores because every premise would be related to the claim g, making nearly all pairs valid. These are shown in Table <ref>. A number closer to 1.0 indicates that the gold premise pair was consistently ranked higher than partial and random premise pairs.
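A sketch of this ranking procedure is given below; it assumes pre-computed embeddings, scores each candidate pair by the cosine between the summed pair embedding and the target embedding (as in the DA heuristics), and feeds the gold pair's rank into the MRR. The function and variable names are ours.

```python
import numpy as np
from itertools import combinations

def _cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_of_gold(embeddings, premises, gold_pair, target_emb):
    """Rank (1 = best) of the gold premise pair among all premise pairs,
    where each pair is scored by cos(e_i + e_j, e_target)."""
    scored = [(_cos(embeddings[a] + embeddings[b], target_emb), {a, b})
              for a, b in combinations(premises, 2)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return 1 + next(i for i, (_, pair) in enumerate(scored) if pair == set(gold_pair))

def mean_reciprocal_rank(ranks):
    return float(np.mean([1.0 / r for r in ranks]))
```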
Gold Steps MRR Results
Table <ref> shows the BM25 MRR scores as being quite competitive with the methods using deductive additivity, SimCSE, GPT3, and GPT3-tuned, all of which are within 0.1 of each other. BM25's high performance indicates that the datasets EB T2 and ENWN have many examples where lexical overlap is enough to determine the gold premise pair P_g. GPT3 does outperform the BM25 baseline, however, and in nearly every case, the SimCSE heuristic does as well (except for ENWN). GPT3-tuned does slightly worse in both EB T2 and ENWN, showing that fine-tuning the embeddings to produce the deductive additivity property is not trivial. The degradation in performance is surprising given that the model was fine-tuned on a task very similar to the intrinsic evaluation being reported in Table <ref>. SCSearch still outperforms all leading methods. There is a significant drop across all methods between ranking premise pairs with the immediate deduction and the goal. Although this was expected, the drop is quite significant, and how it could be mitigated is worth exploring in future work.
§.§ Extrinsic Evaluation: Generating Proofs
Next, we explore how well heuristics employing deductive additivity can perform on proof generation datasets detailed in Section <ref>.
Results
We report the percentage of proofs that entailed the goal, g, as well as the average number of steps to prove the claim across all planning heuristics in Table <ref>. GPT3 (DA), GPT3-tuned (DA), and SimCSE (DA) are all able to produce slightly more proofs than BM25 on the EB T2 dataset but fail to outperform BM25 on ENWN. Because BM25 is a limited heuristic that only employs lexical overlap, this result shows that nearly 50% of examples in these datasets can have proofs generated using simple heuristics that use no deeper semantic representations. However, deeper reasoning does help, as shown by the fact that SCSearch is able to generate far more proofs than the other methods across both datasets by as much as 36%. This finding is also supported by the MRR results of the second intrinsic evaluation, shown in Table <ref>. Disappointingly, deductive additivity does not seem to be able to capture the same sort of benefits in the heuristic it provides.
§.§ Single-Step Reasoning Contrast Dataset
To best understand where the vector-based methods are lacking in performance and pinpoint where improvements can be made, we test each method across a variety of types of reasoning and common failure cases in the Single-Step Reasoning Contrast (SSRC) dataset. In this experiment, we perform the same evaluation as our second intrinsic evaluation, Ranking Gold Steps. Here we use examples from the SSRC dataset, which have been curated and labeled so that MRR can be reported separately for each type of deduction and error case.
Results
Table <ref> shows the averaged MRR scores across all methods. GPT3 (DA) outperforms SCSearch slightly overall, but to better understand the performance, we plot the average MRR across the fourteen reasoning categories and perturbation types for each method compared to SCSearch in Figure <ref>. GPT3 (DA) can outperform both BM25 and SimCSE (DA) consistently across nearly every reasoning category and all perturbation types. Furthermore, we see that GPT3 (DA) is capable of beating or matching SCSearch on half of the reasoning categories and perturbation types, contradicting the earlier results and indicating that those datasets might be skewed toward areas in which SCSearch excels.
GPT3-Tuned (DA) performs worse in 9 categories than GPT3 (DA) and better in only 3. This could be from the skewed reasoning categories in EntailmentBank, but it could also be that enforcing the condition in Equation <ref> directly is counterproductive. Averaged scores for each reasoning category and perturbation type can be found in Appendix <ref>, in Tables <ref> and <ref> respectively.
§ DISCUSSION
Vector-based methods are not sufficient to capture all information for planning deductions.
We've found that vector-based methods can represent complex reasoning but fall short in planning reasoning steps when compared to early-fusion premise rankers like SCSearch. Our results suggest that more complex and structured approaches may be necessary for step-by-step systems.
Skewed datasets provide optimistic benchmarks for weaker models.
Our results focused on the T2 setting because we discovered that a BM25 + SCSearch pipeline did quite well and scaled to large numbers of premises. However, we believe this is an optimistic result and may not scale to production settings where claims may require more complex deductions that are less sensitive to lexical overlap. Developing datasets with more complex reasoning and benchmarking in real production settings is a focus for future work.
Training for Deductive Additivity can harm performance.
We found that training deductive additivity directly improves categories of reasoning prevalent in the training dataset while harming other categories. Both larger and more diverse datasets may be a solution for this problem, but GPT3 embeddings already show deductive additivity without explicitly training for it. Developing different training objectives that result in embeddings with deductive additivity is another focus for future work.
§ RELATED WORK
Our work follows from models and methods done in the Question Answering domain where models are required to generate an answer or select evidence that leads to the answer through “multi-hop” reasoning <cit.>. Although these end-to-end methods can be used in proof generation, understanding the underlying reasoning of the decisions being made is impactful for understanding the affordances of the model <cit.>.
Step-by-step methods have been looked at for proof generation, detangling planning and reasoning into separate subsystems that work together as a whole when proving a claim <cit.>. There has also been work on using similar modular systems in answering questions with a knowledge base and different types of embeddings <cit.>. Our work extends from this literature, focusing on exploring alternative heuristics for natural language deduction planning entirely in embedding space by tapping into the property of deductive additivity.
We also follow work being done in retrieval, which focuses on finding evidence from a large corpus that would help answer a query. State-of-the-art retrieval methods involve encoding the corpus into vector indexes that can be used to calculate the cosine similarity of an encoded query <cit.>. Sparse encoders, like BM25, have also been used to help reduce the search space for relevant passages <cit.>. However, none of the methods tap into the deductive additivity property in their embedding spaces and instead encode the query to find relevant passages and then re-encode the query with the appended passages to find additional relevant passages. We consider this to be similar to early-fusion premise rankers in the proof generation task.
Another line of relevant work deals with understanding reasoning errors from language models, like the detection of logical fallacies in text <cit.>. We further this line of work with the SSRC dataset, building a contrast set <cit.> for reasoning targeting certain types of deductions and common reasoning errors.
§ CONCLUSION
In this work, we have explored the property of deductive additivity in sentence embedding spaces. Results show that off-the-shelf sentence encoders exhibit the property somewhat; however, when used as heuristics in natural language proof generation, they are only slightly more successful than BM25. Furthermore, we see that fine-tuning for deductive additivity does not lead to better reasoning capabilities of the embedding space, and we posit that a large contributor to this could be skewed datasets. We introduced the Single-Step Reasoning Contrast dataset, which shows that these same skewed datasets provide over-optimistic results for inferior methods harming our ability to benchmark systems for their use in production settings. Lastly, we've shown that early-fusion premise rankers like SCSearch still outperform vector-based approaches. However, their ability to scale to more diverse reasoning datasets that are less sensitive to lexical overlap is still an open question for future work.
§ ACKNOWLEDGMENTS
This work was partially supported by NSF CAREER Award IIS-2145280, the NSF AI Institute for Foundations of Machine Learning (IFML), a gift from Salesforce, Inc., a gift from Adobe, and a grant from Open Philanthropy. Thanks to the anonymous reviewers for their helpful comments.
§ EMBEDDING RECONSTRUCTION RESULTS
Table <ref> shows the averaged cosine similarity of the random, partially random, and gold pairs, as well as the cosine similarities for the gold pairs with the step model generations. This provides complementary information to Figure <ref>.
§ SSRC DATASET EXAMPLES
Table <ref> shows four examples from the SSRC dataset that have been sampled from different reasoning categories and show different perturbation types for the premises.
§ SSRC DATASET RESULTS
We report the raw scores for both the reasoning categories and perturbation types in Tables <ref> and <ref> respectively.
§ PROOF GENERATION MODULES
We outline in more detail the proof generation search algorithm we use in our experiments following work from <cit.> and <cit.>.
§.§ Deductive Step Model
The deductive step model is trained using the EntailmentBank dataset following <cit.>. We transform the annotated entailment trees into individual steps T_i = (x_1, x_2 → c) and fine-tune a pre-trained language model to generate the deduction given a set of premises. We do not use data from <cit.>.
§.§ Reasoning Validation
To ensure that the search space generates well-reasoned deductions, we implement a set of validators that examine both the types of steps being taken and the generations produced by the step models following <cit.>. Firstly, we employ a Consanguinity Threshold step to ensure that the search procedure does not permit steps to consist of the same premise or premises that result in immediate deductions. For instance, if p_a and p_b create the deduction d_ab, we disallow a new step to be (p_a, d_ab). This approach effectively promotes diversity in the types of steps being taken. We also enforce that no generation from a step model is an exact duplicate of one of the inputs.
Furthermore, to avoid identifying high-ranking pairs of premises that result in illogical deductions due to hallucination, we devise a new validation method to ensure consistency. The Deduction Agreement validator compares the embedding of the added premises e_d' with the embedding of the generated deduction e_d. If the cosine similarity falls below a threshold t_da, the step is filtered out. A running average of all cos(e_d', e_d) scores for previous deductions is maintained. If a branch in the entailment tree generates too many deductions that have low cosine similarity with their summed premises, it will be filtered out.
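A minimal sketch of such a validator is shown below; the threshold value, the class structure, and the exact pruning rule for the running average are our assumptions rather than the paper's implementation.

```python
import numpy as np

class DeductionAgreementValidator:
    """Rejects steps whose generated deduction drifts too far from the sum of
    its premise embeddings; one instance is kept per proof branch so that a
    running average of agreement scores can prune the whole branch."""
    def __init__(self, threshold: float = 0.5):  # threshold value is illustrative
        self.threshold = threshold
        self.scores = []

    @staticmethod
    def _cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def accept(self, summed_premises: np.ndarray, generated_deduction: np.ndarray) -> bool:
        score = self._cos(summed_premises, generated_deduction)
        self.scores.append(score)
        branch_ok = float(np.mean(self.scores)) >= self.threshold
        return score >= self.threshold and branch_ok
```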
§.§ Entailment Scores
We employ a DeBERTa model, fine-tuned on the MNLI and WaNLI tasks, to assess the entailment of each generated natural language deduction. If a deduction achieves a score above a predefined threshold, t_g, it is considered to have recovered the goal g. Once a deduction has successfully recovered the goal, we can trace back the steps used to create that specific deduction, resulting in a minimal proof tree that contains only the essential steps required to prove the goal.
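The goal check could be implemented roughly as follows with the HuggingFace transformers library; the checkpoint name, the entailment label index, and the threshold are placeholders, since they depend on the specific fine-tuned model that is actually used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; an MNLI/WaNLI fine-tuned DeBERTa would be loaded here.
MODEL = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

def recovers_goal(deduction: str, goal: str, t_g: float = 0.9, entail_idx: int = 2) -> bool:
    """True if the deduction entails the goal with probability above t_g.
    The entailment label index depends on the checkpoint's label ordering."""
    inputs = tokenizer(deduction, goal, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    return probs[0, entail_idx].item() >= t_g
```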
§ SSRC PROMPTING
We use ChatGPT to prompt GPT3.5 and create the SSRC dataset. We followed the same template for all reasoning categories and then used a simple Python script to parse out the examples generated. Below is an example of how we prompted ChatGPT for the reasoning category Classification. All prompts are given to ChatGPT one after another.
§ EXAMPLES OF GPT RANKING SSRC PREMISE PAIRS
Here we show three examples from the SSRC dataset and place the premise pairs in order of how GPT3 ranked them. The Category indicates which reasoning category the example belongs to, Perturbation indicates which perturbation type the example is exhibiting, Target is the claim g, Gold Premises are the correct premises that yield the claim from a deduction, Rank is the Rank GPT3 gave the gold premises (1 being the best). We also include all premise pairs and their ranks below the Rank of the gold premises, and we mark the pair (G) for the gold premise pair.
|
http://arxiv.org/abs/2307.08679v1 | 20230703113325 | Externally validating the IoTDevID device identification methodology using the CIC IoT 2022 Dataset | [
"Kahraman Kostas",
"Mike Just",
"Michael A. Lones"
] | cs.NI | [
"cs.NI",
"cs.CR"
] |
Externally validating the IoTDevID methodology using the CIC IoT 2022 dataset
K. Kostas et al.
Department of Computer Science, Heriot-Watt University, Edinburgh EH14 4AS, UK
{kk97,m.just,m.lones}@hw.ac.uk
Externally validating the IoTDevID device identification methodology using the CIC IoT 2022 Dataset [Kahraman Kostas is supported by the Republic of Turkey - Ministry of National Education]
Kahraman Kostas (ORCID 0000-0002-4696-1857), Mike Just (ORCID 0000-0002-9669-5067), Michael A. Lones (ORCID 0000-0002-2745-9896)
August 1, 2023
===================================================================================================================================================================================
In the era of rapid IoT device proliferation, recognizing, diagnosing, and securing these devices are crucial tasks. The IoTDevID method (IEEE Internet of Things ’22) proposes a machine learning approach for device identification using network packet features.
In this article we present a validation study of the IoTDevID method by testing core components, namely its feature set and its aggregation algorithm, on a new dataset.
The new dataset (CIC-IoT-2022) offers several advantages over earlier datasets, including a larger number of devices, multiple instances of the same device, both IP and non-IP device data, normal (benign) usage data, and diverse usage profiles, such as active and idle states. Using this independent dataset, we explore the validity of IoTDevID's core components, and also examine the impacts of the new data on model performance.
Our results indicate that data diversity is important to model performance. For example, models trained with active usage data outperformed those trained with idle usage data, and combining data from multiple usage sessions similarly improved performance.
Results for IoTDevID were strong with a 92.50 F1 score for 31 IP-only device classes, similar to our results on previous datasets. In all cases, the IoTDevID aggregation algorithm improved model performance. For non-IP devices we obtained a 78.80 F1 score for 40 device classes, though with much less data, confirming that data quantity is also important to model performance.
§ INTRODUCTION
An internet of things (IoT) device can be defined as any kind of physical device with processing capability that can be connected to the internet or other devices <cit.>.
Today, the number of IoT devices has exceeded 10 billion and is expected to reach 27 billion by 2025<cit.>. In a rapidly growing market, a variety of devices have been developed by many companies for many purposes in a short time.
Due to their various uses and physical requirements, these devices have very different hardware and software characteristics.
The heterogeneity of these devices, along with inherent vulnerabilities introduced by manufacturers and the presence of unfamiliar device interfaces, renders them susceptible to potential security risks.
Research indicates that
an IoT device connected to the internet is attacked within 5 minutes and becomes the target of a specialised attack within 24 hours <cit.>.
To cope with these attacks, it is essential to keep the devices up-to-date, identify the vulnerabilities they carry and find solutions for them. These devices may need to be updated, restricted or isolated from other devices depending on their vulnerabilities. In any measure to be taken, the first step will be to identify the device.
However, the heterogeneous structure of IoT devices makes the device identification process challenging. In this regard, many researchers are applying machine learning-based identification for more efficient solutions.
While several such studies exist, they often suffer from methodological issues that affect the reliability of their results, including data leakage, feature overfitting and selective device testing.
We previously
created IoTDevID <cit.> to address the device identification problem, while following sound methodological principles.
IoTDevID works at the individual packet level to identify IoT devices, whether IP or non-IP (such as Z-Wave, ZigBee, or Bluetooth). In doing so, it provides a high detection rate thanks to its incorporated aggregation algorithm,
which
combines similarly-modelled packets and
improves identification success over using individual packets.
In the multi-layer feature selection process, device and session-based identifying features that cause overfitting are discarded, and the most appropriate feature set is created by using a genetic algorithm. We further performed training and testing on isolated datasets in order to eliminate data leakage issues. In this context, IoTDevID claimed to provide generalisable and robust models.
In this study, we validate our IoTDevID solution by applying it to a new dataset, the https://www.unb.ca/cic/datasets/iotdataset-2022.htmlCIC IoT Dataset 2022 (CIC-IoT-22).
This dataset provides an opportunity to
test the robustness and generalisability of core components of our solution, namely its feature set and aggregation algorithm. CIC-IoT-22
contains more devices than other prominent datasets used in our original evaluation of IoTDevID: Aalto <cit.> and UNSW <cit.>.
It also has non-IP devices (which UNSW does not) and data collected during use (which Aalto does not). It also contains additional contextual data, such as whether a device is idle or active, as well as different data usage scenarios. This richer dataset will allow us to further test the generalisability of IoTDevID, and it also allows us to provide some insight into the usefulness of such data for creating more generalisable and robust models. In order to enhance transparency and ensure reproducibility, we have publicly shared our dataset, and scripts[Complete feature list:https://github.com/kahramankostas/IoTDevID-CIC/github.com/kahramankostas/IoTDevID-CIC] .
§ RELATED WORK ON DEVICE IDENTIFICATION
Device identification aims to classify devices by using feature sets (fingerprints) obtained from network data as input. These features are usually derived from individual packet headers or payloads <cit.>, but some studies have also used flow features <cit.>. Although much work has been done in the area of device identification, a number of problems are apparent,
including
data leakage, overly-specific features, selective device testing, and
insufficiently transparent experimental methodology.
As in many security <cit.> and machine learning studies <cit.>, reproducibility is a serious problem in device identification. The major factor causing the reproducibility problem in device identification is data leakage. This is often caused by an improper separation of testing and training data.
For example, in Chowdhury et al.'s study <cit.>, during feature extraction, features that could uniquely identify sessions (e.g. port numbers, TCP sequence, and TCP acknowledgement) were used.
Since data sessions were not considered when splitting training and testing data, data leakage from the training data would very likely cause an overestimation of a model's performance on the test dataset.
Similarly, in the IoTSentinel <cit.> study, models are trained with the IP address count feature which is dependent on the number of device communications in the network.
However, this is primarily determined by the network to which the device is connected rather than the device itself. Consequently, this feature is not generalisable as it will change when the device or model is moved to another network.
Additionally, Hamad et al. <cit.> used 67 features consisting of network statistics derived from 20-21 consecutive individual packet features. However, these statistics are specific to the network in which they are produced. If the same device or model is moved to another network, these network statistics will change and the model will no longer function.
As a further example, Sivanathan et al.<cit.>, include similarly network-dependent flow-based features
to create their models.
A further
problem is that
several studies suffer from a
lack of transparency, which is important for experiment validation and repeatability.
For example, IoTSense <cit.> discarded four out of 14 devices during the evaluation step. Aksoy et al. <cit.> used only 23 devices of the Aalto dataset, which has 27 devices. Sivanathan et al.<cit.> similarly did not include the four devices in their dataset in their results.
Partial device use, especially when done without adequate motivation, undermines the reliability of results.
In a similar way, the IoTsense <cit.> dataset has not been shared and, as far as we are aware, no code from any of the above studies <cit.> has been made publicly available. In such cases, full study validation and repeatability are not possible.
Another issue is the transfer problem (see Fig. <ref>), which impacts studies that combine individual packets or use flows. Even though many studies use individual packets, they combine these packets using features such as MAC or IP addresses. Unfortunately, they cannot solve the case where MAC/IP addresses represent more than one device. For example, they mistakenly assume that separate devices with the same IP gateway address are the same device. Among the studies using the Aalto dataset, IoTsentinel <cit.> suffers from transfer problems because it uses MAC addresses and <cit.> uses IP addresses. On the other hand, UNSW and IoTSense datasets do not have non-IP devices, so they do not have this problem, but their feature extraction method does not incorporate solutions for the transfer problem.
We offered a solution to the transfer problem as part of our aggregation algorithm in IoTDevID, which we describe below.
§.§ IoTDevID
Fig. <ref> shows the steps of the IoTDevID <cit.> study. With reference to this figure, IoTDevID can be summarised as follows: network data are isolated from each other by being separated into training and testing sets (1). Individual packet features are extracted from the isolated pcap files (2). Identifying features that could cause overfitting are flagged and discarded (3).
Using a voting method based on feature importance scores, unimportant features are eliminated (4). From the remaining features, a genetic algorithm is used to find the best feature combination (5). Different machine learning algorithms are tested to find the most appropriate algorithm (6). The optimal size for the aggregation algorithm is determined (7). In the last step, the final results are obtained by using the ML model and the aggregation algorithm (8).
In the IoTDevID study, we aimed for a transparent, repeatable and generalisable approach, avoiding the pitfalls found in previous studies, such as data leakage, use of identifying features, selective device testing, and non-transparent experiments. We used features derived from packet headers as used in many other studies <cit.>. However, device identification with individual packets is very difficult due to the high noise. This noise is caused by the fact that some “empty” packets have multiple device characteristics. An example of this is the TCP 3-way handshake. For this handshake, only empty packets carrying the TCP handshake flags (SYN, SYN-ACK, and ACK)
are sent. These packets are quite simple and stable/static. It is very difficult to tell from a single packet which device it came from once identifying data such as IP/MAC addresses is removed. Therefore mislabelling of the fingerprint from these packets is quite common. To combat this, some researchers<cit.> have constructed more descriptive fingerprints by combining features from successive packets.
The problem with this approach is that since the combination process uses identifying features such as MAC/IP, it does not work in networks where there are transfer problems or non-IP devices.
Since we aim to identify devices using any protocol, be it WiFi, Bluetooth, ZigBee or Z-Wave, we only use individual packets in the identification step,
and use an aggregation algorithm when identifying features such as MAC addresses are available. Thus, the identifying features are not used to create the models, but rather to better group similarly labelled packets to improve overall model performance.
The aggregation algorithm consists of two steps (see Fig. <ref>), using as input the MAC address and the predicted label. In the first step, it groups the MAC addresses according to the labels assigned to them and then finds the predominant MAC address for each label. If a MAC address is selected as dominant for more than one label, this MAC address is added to the exception list (likely a transfer problem with one MAC address being used for more than one device).
In the second step, the predicted labels are gathered together in groups according to their MAC addresses. The most repetitive label among these groups is applied to the whole group to obtain aggregated labels. This procedure is not applied for MAC addresses that have entered the exception list; only the individual results are used for them. The device identification process assumes a benign network environment. Although the aggregation algorithm is effective for benign data, there is a risk of grouping together malicious packets that imitate legitimate IP/MAC addresses and display similar behaviour to benign packets with the same IP/MAC address. So, when applying the aggregation algorithm to networks with malicious data, caution is advised.
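A compact sketch of this two-step aggregation is given below; it operates on parallel lists of source MAC addresses and per-packet predicted labels, and the function name and data structures are ours.

```python
from collections import Counter, defaultdict

def aggregate_labels(mac_addresses, predicted_labels):
    """Two-step aggregation: (1) find the dominant MAC per predicted label and
    build an exception list of MACs that are dominant for more than one label
    (a likely transfer problem); (2) replace each packet's label with the
    majority label of its MAC group, except for MACs on the exception list."""
    # Step 1: dominant MAC address for each predicted label.
    macs_per_label = defaultdict(Counter)
    for mac, label in zip(mac_addresses, predicted_labels):
        macs_per_label[label][mac] += 1
    dominant = [counter.most_common(1)[0][0] for counter in macs_per_label.values()]
    exceptions = {mac for mac, n in Counter(dominant).items() if n > 1}

    # Step 2: majority vote of labels within each MAC group.
    labels_per_mac = defaultdict(Counter)
    for mac, label in zip(mac_addresses, predicted_labels):
        labels_per_mac[mac][label] += 1
    majority = {mac: counter.most_common(1)[0][0] for mac, counter in labels_per_mac.items()}

    return [label if mac in exceptions else majority[mac]
            for mac, label in zip(mac_addresses, predicted_labels)]
```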
In IoTDevID <cit.>, we used two datasets,
from Aalto University <cit.> and UNSW <cit.>, which were produced for device identification studies. We used the Aalto dataset to develop our method and the UNSW dataset to validate our results.
With the Aalto dataset (27 devices) IoTDevID achieved a 86.10% F1 score, with 93.70% for the UNSW dataset (32 devices).
Both datasets contain data generated from real device behaviour and have been used by most previous studies on device identification.
However, they have limitations. The Aalto dataset consists only of packets captured during device setup,
not
actual usage.
The UNSW dataset contains data from different sessions but lacks information about the nature of device use (active or idle). Additionally, it does not support the ability to aggregate devices under the same label (such as two different devices of the same brand and model) or observe non-IP devices (because the entire dataset consists only of IP devices), which limited the analysis of the transfer problem.
In 2022, a new dataset, CIC-IoT-22 <cit.>, was made public.
It contains more devices than both Aalto and UNSW and its traffic was recorded whilst devices were operating in a wider range of activity states (e.g., active and idle). Hence it addresses some of the previous dataset issues.
It also retains the advantageous properties of the earlier datasets.
For example, like the Aalto dataset, it has multiple instances of some devices and non-IP devices. Like the UNSW dataset, it contains long-term usage data. In Section <ref>,
we will validate the methodology used in the IoTDevID study by analyzing the CIC-IoT-22 dataset.
§ CASE STUDY
In this section, the CIC-IoT-22 dataset is examined and its features are analysed in depth. The individual packets and aggregation methods used for classification are explained. Finally, how feature extraction and labelling are performed is described.
§.§ CIC-IoT-22 dataset
Data was collected during 6 different device states. These states can be summarised as follows <cit.>.
In the Power state, each device is isolated from other devices and rebooted and the network packets related to this device are collected.
In the Interactions state, the device is interacted with by buttons, applications or voice commands and the network packets generated during this process are collected.
In Scenarios, the network data of these devices are collected in scenarios such as entering the house, leaving the house, unauthorised entry to the house at night and day or user error.
In the Attack state, data is collected by applying Flood attacks and RTSP Brute Force attacks to the devices.
The idle state consists of recording every 8-hour period for 30 days in the evening hours when the devices are working but not actively used.
The Active state contains the data of the devices being used during the day for 30 days. This data is generated by people entering the lab and using the devices.
Some important points about the dataset:
In this study, the most important sections for benign device behaviour are idle and active, as these states cover most normal usage and provide a wide range of data about all devices. Although it is stated in the paper <cit.> that 60 devices were used in this process, according to our experiments and the information provided in the dataset[http://205.174.165.80/IOTDataset/CIC_IOT_Dataset2022/http://205.174.165.80/IOTDataset/CIC_IOT_Dataset2022], the data for these states covers 40 devices. These 40 devices are only LAN/wired or WiFi devices; they do not include Zigbee and Z-Wave devices. Zigbee and Z-Wave devices have data isolated from other devices in the power and interaction stages. However, these data are both very limited and do not contain normal usage data. Also, the data of the Z-Wave devices is not in pcap format.
§.§ Feature Extraction and Labelling
https://www.python.org/Python, https://scapy.net/Scapy, and https://www.wireshark.org/Wireshark were used for feature extraction from packet capture (pcap) files. Only individual packet-based features are used for feature extraction. Many of these features are derived from packet headers, but there are also payload-based features such as payload entropy and payload bytes. Although the feature extraction system created about 100 features[Complete feature list:https://github.com/kahramankostas/IoTDevID-CIC/blob/main/featurelist.mdgithub.com/kahramankostas/IoTDevID-CIC/blob/main/featurelist.md] in total,
only the features[Selected features are: pck_size, Ether_type, LLC_ctrl, EAPOL_version, EAPOL_type, IP_ihl, IP_tos, IP_len, IP_flags, IP_DF, IP_ttl, IP_options, ICMP_code, TCP_dataofs, TCP_FIN, TCP_ACK, TCP_window, UDP_len, DHCP_options, BOOTP_hlen, BOOTP_flags, BOOTP_sname, BOOTP_file, BOOTP_options, DNS_qr, DNS_rd, DNS_qdcount, dport_class, payload_bytes, entropy] selected during the feature selection phase of the IoTDevID study were used in these experiments.
Labelling was performed using the list of device names/MAC address pairs in the dataset. In each fingerprint (feature set representing a packet) extracted, the source MAC address part was replaced with the given name. The MAC addresses not given in this list (5 MAC addresses that we believe belong to the hub, switch or the computer where the data is collected) were ignored.
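A simplified sketch of this per-packet feature extraction and MAC-based labelling is shown below; it covers only a handful of the roughly 100 features and omits the protocol-specific handling (DNS, DHCP, EAPOL, etc.) of the full pipeline, so the field choices, defaults, file name, and MAC map are illustrative.

```python
import math
from collections import Counter
from scapy.all import rdpcap, Ether, IP, TCP, UDP

def payload_entropy(payload: bytes) -> float:
    if not payload:
        return 0.0
    counts = Counter(payload)
    return -sum((c / len(payload)) * math.log2(c / len(payload)) for c in counts.values())

def packet_fingerprint(pkt, mac_to_device):
    """A handful of the header/payload features, labelled via the source MAC
    address; packets from unlisted MACs are skipped (returns None)."""
    src_mac = pkt[Ether].src if Ether in pkt else None
    if src_mac not in mac_to_device:
        return None
    payload = bytes(pkt[TCP].payload) if TCP in pkt else b""  # TCP payload only, for brevity
    return {
        "pck_size": len(pkt),
        "IP_ttl": pkt[IP].ttl if IP in pkt else 0,
        "TCP_window": pkt[TCP].window if TCP in pkt else 0,
        "UDP_len": pkt[UDP].len if UDP in pkt else 0,
        "entropy": payload_entropy(payload),
        "label": mac_to_device[src_mac],
    }

# Example usage over one capture (file name and MAC map are placeholders):
# rows = [f for p in rdpcap("A211124.pcap")
#         if (f := packet_fingerprint(p, {"aa:bb:cc:dd:ee:ff": "Teckin Plug"})) is not None]
```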
In the CIC-IoT-22 dataset, each of the pcap files we use for feature extraction contains network traffic recorded on a day, and is named with the date it was recorded. For example, data recorded on 24.11.2021 is labelled A211124 if active and I211124 if idle. In this context, 30 idle and 24 [Although the paper<cit.> states 30 active sessions, there are only 24 sessions in the data set.] active sessions were recorded. As a preliminary study, we aimed to test the performance of models trained on data from each session by comparing them with each other. In order to compare the sessions with each other, they should contain the same devices. Unfortunately, data was not collected from every device in every session, and in some sessions, some devices did not generate any data at all. Table <ref> shows how much data was generated by each device in each session in terms of network packets. Therefore, we only compare sessions that contain the same devices with each other. For this comparison, we create a session ID. In this ID, each device is represented by a binary digit. If the session has that device, it is indicated with 1, if not, it is indicated with 0. For example, if Session1 contains devices A and C, but not device B, then the ID number is 101(ABC). Session1 can be compared to other sessions with the same ID number without any problem. In this context, we have created a 32-digit ID for each session according to 32 device classes in total. There are 40 physical devices; however, some of these devices are in the same label because they are identical in brand and model.
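The session ID construction described above can be expressed in a few lines; the device names and ordering are illustrative.

```python
def session_id(devices_in_session, all_device_classes):
    """One binary digit per device class: 1 if the session contains traffic
    from that class, 0 otherwise."""
    return "".join("1" if d in devices_in_session else "0" for d in all_device_classes)

# Sessions can be compared only when their IDs match.
print(session_id({"A", "C"}, ["A", "B", "C"]))  # -> "101"
```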
§ RESULTS
We first consider data quality, in terms of its ability to support the training of device identification models, by training and testing ML models using different sessions within the CIC-IoT-2022 dataset. Based on the findings of this analysis, appropriate sessions are then merged to produce a dataset that is both representative of the data diversity and which can be reliably used to train device identification models.
§.§ Analysis of Data Quality
We begin by training models using data from each session and then testing them on other sessions with the same ID. 1036 session pairs were created for this purpose, with the first session used for training and the second for testing. These pairs were divided into active and idle categories, resulting in four training vs testing possibilities: active vs active (AA), active vs idle (AI), idle vs active (IA), and idle vs idle (II).
We utilize the F1 score as a primary metric for reporting results for two key reasons. First, unlike accuracy, the F1 score offers reliable performance evaluation on unbalanced datasets, which is often the case with IoT-generated data, including this study. Second, the F1 score provides insights not only into overall performance but also class-specific performance, enabling detailed analysis. However, for the sake of comprehensive assessment, we also include accuracy as a comparative measure, given its prevalent (if sometimes inappropriate) usage in the literature.
Table <ref> presents the average results for 1036 session pairs categorized into four conditions. When we focus on individual results, the highest F1 score is achieved in condition II (72%), closely followed by AI (71.4%). The lowest scores are observed in the AA (70.2%) and IA (67.9%) conditions, respectively. When we apply the packet aggregation algorithm, it can be seen that the results reflect those from individual packets, but with an improvement of approximately 5-7 points in each case.
Notably, the utilization of idle data for testing purposes yields higher performance compared to the use of active data. This can be attributed to the broader range of data available in active scenarios, while idle data lacks this diversity. Consequently, a model trained with active data exhibits higher success when tested on idle data, whereas a model trained with idle data shows lower performance when tested on active data.
To gain a more comprehensive understanding of the data, it is important to analyze individual cases. Fig. <ref> shows a heatmap displaying the F1 scores obtained from session pairs. The vertical axis represents the training data, while the horizontal axis represents the test data. The F1 score ranges from 51% to 93% in pairwise session comparisons. It is important to note that this is a multiple-classification task with approximately 32 classes. In contrast to binary classification, where results above 50% are considered significant, a randomly assigned multiple-classification model would achieve an accuracy of approximately 3.1% (100 divided by 32). Therefore, even an F1 score of 51% represents a substantial improvement over chance/random success.
In Figs. <ref> and <ref>, the session IDs that allow the broadest comparison, containing 28 and 17 sessions respectively, are shown. Fig. <ref> predominantly consists of idle examples, showing higher success rates when comparing consecutive dates.
The heatmap exhibits a somewhat symmetrical structure, albeit imperfect, particularly due to minimal user intervention during idle collection. In Fig. <ref>, both active and idle sessions are mixed. Similarly, success rates are higher for consecutive sessions. However, the involvement of users introduces a more distorted symmetry, especially in active sessions. Significant performance drops are observed when using data collected on specific dates coinciding with a national holiday, such as
2021-12-23, 2021-12-25, and 2021-12-28. Additionally, the lowest performances occur when using idle as training data and active as test data. We believe that active sessions offer a broader representation, because active use actually includes idle use as well, while it is not possible to say the opposite. However, the inherent differences in data collection during active sessions change the model's performance. So it is not possible to speak of perfect patterns when human factors are involved.
The subsequent section explores whether
increasing the diversity by
combining data from different sessions improves representation and model performance.
We believe that focusing on class/device-based results will give more information. By analysing the device-based results for each session, we want to focus on the problematic devices. In this context, a device that is unsuccessful in any of the sessions, with a class-based F1 score of less than 50%, is added to our list if it repeats this behaviour more than 21 times in all comparisons (21 corresponds to 2% of all session comparisons). Fig. <ref> shows the number of times the device class has failed and the distribution of these failures according to the session comparing types.
Examining Fig. <ref>,
we can see that, with some minor exceptions, the overall distribution of the pie chart remains the same. This shows that there is no significant difference between idle and active in terms of low-performance devices. On the other hand, if we focus on some devices with low performance (problematic), we can easily understand why they are included in the list. The devices with the highest number of failures are those with more than one example in the session versus session experimental set, such as Amazon Alexa Echo Dot, Gosund Plug, Gosund Socket, Teckin Plug, and Yutron Plug. Since these devices are separate physical instances of the same device model (same brand and model), they should be grouped under one label (e.g. Teckin Plug 1 and Teckin Plug 2 -> Teckin Plug). We believe that the success level of most of the other devices can be improved by increasing the sample diversity.
§.§ Dataset Construction
We aimed to enhance sample diversity and improve model performance by sampling from multiple sessions. The data already included idle and active sessions, which we further split into training and testing subsets. This resulted in four subsets: idle-training, idle-testing, active-training, and active-testing, derived from a total of 54 sessions. Refer to Table <ref> (Appendix) for the specific assignment of sessions to each subset. The dataset creation process is illustrated in Fig. <ref>.
However, due to some deficiencies in the dataset, we have made minor changes to the data. We copied the data of the D-Link Water Sensor, a device not included in the active sessions, from the idle sessions to the active session data. Another change was related to the LG Smart TV device. The data for this device is only present in three of the 54 sessions. Furthermore, the data for this device is so unbalanced that the data in these three sessions account for about 9% of the total number of packets in all 40 devices. For these reasons, we removed this device from the dataset.
To ensure a balanced dataset that represents session diversity without excessive size, we reduced the number of packets in each of the four datasets to 10% of the total number of packets per dataset by random sampling. This random sampling was employed during this process to maintain consistent packet rates for each device, preserving the natural distribution of the dataset.
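With the per-packet fingerprints collected in a dataframe, this per-device 10% downsampling could be done as follows; the column name is an assumption.

```python
import pandas as pd

def downsample_per_device(df: pd.DataFrame, frac: float = 0.10, seed: int = 0) -> pd.DataFrame:
    """Randomly keep `frac` of the packets of each device class, so the natural
    per-device packet distribution of the dataset is preserved."""
    return (df.groupby("label", group_keys=False)
              .sample(frac=frac, random_state=seed)
              .reset_index(drop=True))

# Example: active_train_small = downsample_per_device(active_train_full)
```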
§.§ IoTDevID Evaluation
Next, we evaluate the performance of the IoTDevID methodology on the datasets described above. Table <ref> summarises the test performances, both when using individual packets and when using the aggregation algorithm. It can be seen that very good results are obtained in all cases, with all F1 scores being above 81%. When individual packets are used, it is seen that the most successful model is AI with 90.50%, followed by AA with 84.20%, while the results of cases IA and II are very close to each other with a score of around 81%.
As in the original study, a significant further improvement is seen when the aggregation algorithm is used. The AA and IA cases improve by about eight points, and the AI and II cases improve by about 10 points. For AI, the models achieve almost perfect discrimination.
Upon comparing these results with Table <ref>, it is evident that all scores have exhibited significant improvements. Analyzing individual results, the F1 score has risen from 70.2 to 84.2 for AA, from 71.4 to 90.4 for AI, from 67.9 to 81.8 for IA, and from 72 to 81.4 for II. Notably, the aggregation results demonstrate an even greater increase. The choice of data used exerts a substantial influence on the model's performance, underscoring the importance of constructing a more representative dataset through data combination. This enhanced dataset has substantially bolstered the success of models trained using it.
Returning to Table <ref>, the active state has a broader representation as it includes network data both when the devices are used and not used. Idle includes only passive states and does not include the states when the devices are used. The AI case, which employs the idle dataset as testing, exhibits an exceptionally high performance that may not reflect practical conditions, as the uniform data distribution of the idle dataset creates an “easier” testing environment. In this context, using the active state for both training and testing gives more realistic results.
Further analysis at the class level allows for a deeper understanding. In this context, Table <ref> shows the class-based F1 scores of all devices and Fig. <ref> shows the confusion matrix for the AA case. Focusing on the aggregated results in column AA, it is evident that 22 out of 31 devices achieve near-perfect classification with an F1 score above
99%. Six devices (Globe Lamp, Gosund Plug, HeimVision Lamp, Teckin Plug, Yutron Plug) achieve lower performances, although still above 90%. These devices are lamps or plugs serving similar functions. The dataset is also rich in cameras and speakers, forming another group of devices with similar tasks. However, the classification of these devices does not pose similar challenges. This can be attributed to the fact that devices such as lamps and sockets have simpler structures compared to speakers and cameras, leading to similar data outputs that are more difficult to discriminate. Similar difficulties were encountered in previous experiments with sensors, plugs and switches in the IoTDevID study using the Aalto dataset<cit.>.
Noteworthy are the devices Ring Base Station, Amazon AE Spot and Smart Board, which exhibit significantly lower F1 scores than the other devices. Packets from Ring Base Station devices are often misclassified as speakers (Amazon Alexa family, Sonos Speaker, etc.), likely due to their role as a link between alarm systems in smart homes and management systems like Alexa or other speakers.
Analyzing the results for the Amazon AE Spot device, although the recall is high, the precision is exceptionally low (refer to Table <ref> in Appendix).
The Smart Board device poses a challenge as a majority of packets are mislabeled as Amazon AE Spot. Examining the data distribution of the Smart Board, an outlier is observed in the A211126 data, which was added to the active test dataset (see Table <ref> in Appendix). On this specific day, the data collected for this device is twice the combined amount of the other 53 days. Moreover, 78.6% of this unusually large data volume, which is 100 times greater than other sessions,
comprises uniformly empty packets that are challenging to classify (TCP packets with the ACK flag set and no payload). Although our study does not analyze the causes of these outliers, it is important to note that the imbalanced data distribution resulting from this outlier greatly complicates the identification of this device during the test phase.
§.§ Evaluation by including non-IP devices
The CIC-IoT-2022 dataset also includes non-IP device (Zigbee and Z-Wave) data, specifically in the Power and Interactions states. Only Zigbee devices have records in raw network data format (pcap); Z-Wave device data is not available in this format, limiting the opportunity for feature extraction. Additionally, the non-IP devices lack normal usage data, and the amount of data collected in the Power and Interactions states is relatively small in comparison to IP devices. Table <ref> shows the number of packets collected in both cases. Despite these limitations, we found the non-IP device data in the experimental suite to be interesting for analysis.
Although the collection method of Zigbee device data does not perfectly align with the transfer problem case, it exhibits similarities due to the shared fixed MAC address (00:00:00:00:00:00) assigned to all devices. This provides an opportunity to explore the exception part of the aggregation algorithm. We incorporated the Zigbee data into the AA state by adding the Zigbee devices from the Interactions state (which had more data) to the training set and using the data from the Power state for testing. The class-based results can be found in Table <ref>.
Table <ref> reveals that non-IP devices have minimal impact on the results of IP devices, showing almost identical performance to the previous evaluation. The only exception is the Sonos One Speaker, which exhibits a noticeable drop in F1 score, unrelated to the non-IP devices. This can be attributed to the above-mentioned anomaly in the Smart Board device.
On the other hand, the aggregation algorithm significantly improves the results for IP devices, while it has no effect on non-IP devices. This is due to the shared MAC address among Zigbee devices, which causes the aggregation algorithm to add them to the exception list and bypass aggregation for these devices.
Most of the non-IP devices show relatively low F1 scores. Nevertheless, the primary reason for these low results is likely the insufficient amount of data available for these devices. This performance issue was not encountered in the Aalto dataset, which contains more non-IP device data, with an amount of data similar to that of the IP devices. Table <ref> indicates a negative correlation between the number of packets and F1 scores, further supporting the impact of insufficient data on performance. Additionally, it should be noted that the training and test data represent distinct states rather than being collected under normal conditions.
§ CONCLUSIONS
This study validates the feature set and aggregation algorithm of the IoTDevID method using a new dataset. The dataset offers several advantages, including the presence of non-IP devices, multiple instances of the same device, normal usage data, and diverse usage profiles. The results demonstrate that models trained with active usage data outperform those trained with idle usage data, emphasizing the importance of data diversity in achieving better model performance.
The study showed successful external validation. It achieved impressive results with IP-only devices, achieving an F1 score of 92.50 for 31 device classes when evaluated in the more realistic scenario of devices undergoing active use. While non-IP devices faced challenges due to limited data availability, significant success was observed for devices with available data, showcasing the potential of the aggregation algorithm in accurately detecting non-IP devices.
In the original IoTDevID study, the UNSW dataset (32 IoT devices) achieved an F1 score of 93.70%, similar to the results obtained here. The CIC-IoT-22 and UNSW datasets have similar usage data and device counts, with some structural differences. The Aalto dataset, with a lower number of devices, achieved a lower F1 score of 86%. We attributed this to the dataset's abundance of devices performing similar tasks. Furthermore, the lack of usage data in the Aalto dataset may have contributed to the lower performance, as observed in our experiments on data quality.
This study highlights the importance of validation studies in assessing the robustness and generalizability of machine learning methods. The findings further contribute to the field of IoT device identification and provide insights into the impact of data diversity on model success.
Future research should focus on addressing the limitations related to insufficient data for non-IP devices, as well as exploring methods to enhance model performance in various scenarios. Additionally, investigating the scalability of the IoTDevID method to larger datasets and evaluating its applicability in real-world IoT environments would be valuable for practical implementation. Overall, this study serves as a foundation for further advancements in IoT device identification and security.
|
http://arxiv.org/abs/2307.03262v1 | 20230706194754 | Projected Data Assimilation using Sliding Window Proper Orthogonal Decomposition | [
"Aishah Albarakati",
"Marko Budisic",
"Erik Van Vleck"
] | physics.data-an | [
"physics.data-an",
"math.DS",
"physics.comp-ph",
"physics.geo-ph"
] |
Aishah Albarakati ([email protected]), Department of Mathematics, University of Jeddah, Jeddah 23218, Saudi Arabia
Marko Budišić ([email protected]), Department of Mathematics, Clarkson University, Potsdam, NY 13676, USA
Erik S. Van Vleck ([email protected]), Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA
Prediction of the state evolution of complex high-dimensional nonlinear systems is challenging due to the nonlinear sensitivity of the evolution to small inaccuracies in the model.
Data assimilation (DA) techniques improve state estimates by combining model simulations with real-time data.
Few DA techniques can simultaneously handle nonlinear evolution, non-Gaussian uncertainty, and the high dimension of the state.
We recently proposed addressing these challenges using a proper orthogonal decomposition (POD) technique that projects the physical and data models into a reduced-dimensional subspace.
POD is a tool to extract spatiotemporal patterns (modes) that dominate the observed data.
We combined the POD-based projection operator, computed in an offline fashion, with a DA scheme that models non-Gaussian uncertainty in lower dimensional subspace.
If the model parameters change significantly during time evolution, the offline computation of the projection operators ceases to be useful.
We address this challenge using a sliding window POD (SWPOD), which recomputes the projection operator based on a sliding subset of snapshots from the entire evolution.
The physical model projection is updated dynamically in terms of modes and number of modes, and the data model projection is also chosen to promote a sparse approximation.
We test the efficacy of this technique on a modified Lorenz '96 (L96) model with time-varying forcing and compare it with the time-invariant offline projected algorithm.
In particular, dynamically determined physical and data model projections decrease the root mean squared error (RMSE) and the resampling rate.
data assimilation, particle filters, order reduction, proper orthogonal decomposition, Lorenz'96 model.
Projected Data Assimilation using Sliding Window Proper Orthogonal Decomposition
August 1, 2023
================================================================================
§ INTRODUCTION
Significant challenges in developing DA techniques include nonlinearity, high-dimensional physical and data models, and non-Gaussian posterior distributions.
Few DA techniques are capable of simultaneously addressing these problems. In this manuscript, we focus on the particle filter (PF) class of algorithms.
Despite particle filters' remarkable ability to handle nonlinearities and non-Gaussian distributions, they suffer from several associated issues.
The PF often performs poorly on high-dimensional problems due to what is known as `filter degeneracy': one of the particle weights approaches one and all others approach zero, so that the posterior distribution is effectively approximated by a single particle.
Filter degeneracy is linked to a phenomenon known as the curse of dimensionality, which affects all sampling algorithms whose efficiency quickly decreases with the increasing dimension of the state space <cit.>.
High-dimensional spaces do not allow adequate resampling to prevent degeneracy, and the number of particles must be extremely large to be considered a good estimate of the posterior <cit.>, which reduces the computing efficiency of the algorithm.
Optimal proposal particle filter (OP-PF) methods <cit.> have been developed to reduce the sample size necessary to counter particle degeneracy in high dimensions.
Nevertheless, studies analyzing the performance of the OP-PF show that the ensemble size required to generate the optimal proposal grows exponentially with the variance of the log-likelihoods of the observations conditioned on the previous state; otherwise, the filter degenerates <cit.>.
Various approaches have been developed to reduce the dimension of the physical state model <cit.>, data model <cit.> or both physical and data models <cit.> to prevent degeneracy.
Our motivation is based on the development of recent techniques, the projected optimal proposal particle filter (Proj-OP-PF) <cit.>, that combine a proper orthogonal decomposition (POD)-based projection operator, computed in an offline fashion, with the OP-PF.
The POD can be used to determine the dominant energy modes of medium to high-dimensional models and exploit a possible low-dimensional structure of the model space by reducing it to its corresponding subspace for use in the nonlinear filtering problem <cit.>.
Assimilation of projected data involves using the projection operator based on reduced-order models and data assimilation techniques.
In order to calculate the projection operator, we produce a single time-invariant model used throughout the simulation.
However, the offline computation of the projection operators is no longer useful if the model parameters change, or if the simulation passes through significantly different dynamical regimes over time.
In this paper, we extend <cit.> by employing a sliding window POD (SWPOD) to compute projection operators tailored to a moving window of data.
This method can be applied to the analysis of data where specific events occur or change over a relatively short period <cit.>.
The SWPOD has been used to extract the linear and nonlinear modes <cit.> and also used in data assimilation with machine learning to improve the efficiency of high-dimensional 4D-Var in <cit.>.
The SWPOD works by splitting the time domain into subdomains (windows) so that short-lived events or transients affect only a subset of all projections employed.
Furthermore, recomputing the projections allows active tuning of the parameters of the projection, e.g., the order of the reduced-order model (ROM), depending on the quality of the model reduction. For a recent comprehensive review of ROM techniques see <cit.>, and see also <cit.> for specific techniques that are applicable in the framework developed here.
We employ SWPOD to enhance the OP-PF assimilation by determining the model and data projections in both offline and online fashion, dynamically selecting optimal modes and dimensions and promoting sparsity in the case of the data-based projections.
The resulting methods adapt to transients in the data and produce effective assimilation with the Proj-OP-PF for a nonlinear, high-dimensional L96 model with variable forcing.
In this paper, we present the background about data assimilation techniques in <ref>.
We formulate the projected data assimilation using abstract orthogonal projections in <ref> and their use in the context of the projected optimal proposal particle filter (Proj-OP-PF) in <ref>.
<Ref> presents the basics of POD and the training of modes using SWPOD.
The offline fixed dimension scheme is presented in <ref>, the offline adaptivity in <ref> and the online adaptivity in <ref>.
<Ref>, contains the numerical results obtained using the SWPOD algorithms developed to take advantage of these projected physical and data models from a sliding subset of snapshots.
The methods are applied to the nonlinear L96 with time-varying forcing in <ref>.
The numerical results in <ref> show the efficacy of the SWPOD technique compared to the baseline POD method without the sliding window.
§ BACKGROUND ON PARTICLE FILTERS FOR REDUCED-ORDER MODELS
In studying large-scale geophysical systems such as the climate, ocean, and atmosphere, data assimilation (DA) is commonly used to provide accurate estimates of the state.
DA calculates estimates of the state x_k of a physical system at a given time by combining observations y_k with a dynamical physical model in an optimal way.
A physical state model and a data model are used to formulate the data assimilation problem.
Consider a state vector x_k ∈ ℝ^M that evolves according to the discrete-time stochastic model:

x_k = f(x_{k-1}) + η_k, (Physical model)

where f: ℝ^M → ℝ^M is a deterministic function of the state x_{k-1} and η_k is the state noise vector that models the uncaptured physics, or randomness inherent in the physical process.
We assume η_k is Gaussian, η_k ∼ 𝒩(0, Q_k), with covariance matrix Q_k.
The data model relates the observation vector y_k ∈ ℝ^d to the state by

y_k = H x_k + ε_k. (Data model)

We will assume a linear observation operator H: ℝ^M → ℝ^d, d ≤ M, and that the observation noise vector ε_k ∼ 𝒩(0, R_k) is normally distributed with covariance matrix R_k.
The initial state x_0 and the noise vectors η_k and ε_k at each step are all assumed to be independent of each other.
The data model (<ref>) implies that the data y_k is generated from the true (unknown) state by y_k = H x_k^truth + ε_k, and provides the framework that converts estimates of the state x_k into `data space' via H x_k.
Since we are interested in DA problems in which the model (<ref>) is nonlinear, particle filters are among the most common DA methods that work with nonlinear systems, as they are capable of reproducing the true target state in the case of large numbers of particles <cit.>.
The PF approach <cit.> is based on Monte Carlo sampling of the uncertainty distribution.
The PF uses a set of state vectors (called particles) x_{k-1}^ℓ and associated non-negative weights w_{k-1}^ℓ, with ∑_ℓ w_{k-1}^ℓ = 1, as a discrete probability model for the uncertainty in the state estimate.
The density of the uncertainty distribution model for the state at time t_k is

p_k(x) = ∑_{ℓ=1}^{L} w_k^ℓ δ(x − x_k^ℓ),

with the corresponding ensemble mean

x̄_k ≔ ∫ x p_k(x) dx = ∑_{ℓ=1}^{L} w_k^ℓ x_k^ℓ.
Though the PF algorithm is suitable for assimilation of nonlinear systems, it suffers from several associated issues, including filter degeneracy.
The PF models the distribution of uncertainty by running several copies (particles) of the model in parallel, with a weight coefficient that is progressively increased if the particle agrees with incoming measurements.
The ensemble estimate (<ref>) is computed as a weighted average of particle states.
If all except one of the particles acquire zero-weight, known as filter degeneracy, PF cannot continue effectively capturing the uncertainty spread.
At that moment, the particles are resampled to restore their weights, although this affects the continuity of data assimilation and potentially leads to a decrease in the quality of the ensemble estimate.
To detect degeneracy, one monitors the effective sample size (ESS): for non-negative weights normalized such that ∑_{ℓ=1}^{L} w_k^ℓ = 1, the ESS is defined as

ESS ≔ [ ∑_{ℓ=1}^{L} (w_k^ℓ)^2 ]^{-1},

which always satisfies 1 ≤ ESS ≤ L.
Ideally, ESS = L; if the ESS drops below a chosen threshold, the particles are resampled using one of the standard algorithms (see, for example, <cit.>).
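The following minimal sketch (Python/numpy) illustrates the ESS computation and a threshold-triggered multinomial resampling consistent with the description above; the function names, the ensemble size, and the L/2 threshold are illustrative assumptions, not the authors' code.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum_l (w_l)^2 for non-negative, normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # enforce normalization
    return 1.0 / np.sum(w**2)

rng = np.random.default_rng(0)
L = 20
weights = rng.dirichlet(np.ones(L))          # example weight vector
if effective_sample_size(weights) < L / 2:   # degeneracy detected
    idx = rng.choice(L, size=L, p=weights)   # multinomial resampling
    weights = np.full(L, 1.0 / L)            # reset to uniform weights
```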
Filter degeneracy can be mitigated by increasing the number of particles <cit.>, although the required number scales exponentially with the dimension of the space <cit.>.
Alternatively, our previous work <cit.> has shown that employing model reduction techniques to the physical model or the data model is efficacious in preventing degeneracy.
§.§ Reduction of state and data models via orthogonal projections
Both the physical model (<ref>) and the data model (<ref>) can be projected to reduce their dimension.
We start with the reduction of the dimension of the state x_k ∈ ℝ^M.
Consider a matrix V_k ∈ ℝ^{M×r} whose columns form an orthonormal basis (V_k^⊤ V_k ≡ 𝕀) for a time-dependent subspace 𝒱_k onto which we project the models.
The map ℝ^M → ℝ^M, x ↦ V_k V_k^⊤ x, is the orthogonal projection onto 𝒱_k, which can be interpreted as a composition of a reduction and a reconstruction map.
The reduction V_k^⊤: ℝ^M → ℝ^r creates a vector of inner products between the input and the orthonormal basis of the target subspace,

x̂ = V_k^⊤ x, x̂ ∈ ℝ^r.

The reconstruction V_k: ℝ^r → ℝ^M generates the reconstructed state x^r as a linear combination of the basis vectors, with coefficients taken from the input vector,

x^r ≔ V_k x̂.

The output x^r is an element of the full state space ℝ^M, restricted to 𝒱_k.
Computing the reduction of a reconstruction recovers the reduced state, V_k^⊤ (V_k x̂) = x̂, due to orthogonality (V_k^⊤ V_k = 𝕀).
Computing the reconstruction of a reduction projects the state onto the subspace, V_k V_k^⊤ x ∈ 𝒱_k.
Unless the state x was initially in 𝒱_k, this map does not reconstruct the input state exactly.
To evolve the reduced states x̂_k using the physical model, we first reconstruct the state to form V_{k-1} x̂_{k-1}, apply the evolution map (<ref>) to it, and then reduce the output using V_k^⊤:

x̂_k = f̂(x̂_{k-1}) + η̂_k,

where f̂(x̂_{k-1}) = V_k^⊤ f(V_{k-1} x̂_{k-1}) and η̂_k ∼ 𝒩(0, Q̂_k), with Q̂_k ≔ V_k^⊤ Q_k V_k.
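As a concrete illustration, the reduced-model propagation step can be sketched as follows (Python/numpy); the names V_prev, V_curr (the reduction matrices at consecutive times) and f (the full model map) are our placeholders and assumed to be available.

```python
import numpy as np

def propagate_reduced(x_hat, V_prev, V_curr, f, Q, rng):
    """One step of the reduced physical model: reconstruct, evolve with the
    full map f, reduce, and add noise with the reduced covariance V^T Q V."""
    x_full = V_prev @ x_hat                      # reconstruction: R^r -> R^M
    x_pred = V_curr.T @ f(x_full)                # evolve and reduce: R^M -> R^r
    Q_hat = V_curr.T @ Q @ V_curr                # reduced model-noise covariance
    eta_hat = rng.multivariate_normal(np.zeros(x_pred.size), Q_hat)
    return x_pred + eta_hat
```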
We can similarly reduce the data to another q-dimensional subspace, spanned by the columns of a matrix U_k with orthonormal columns.
Writing H^+ for the pseudo-inverse of H and ŷ_k ≔ U_k^⊤ H^+ y_k for the reduced data, the reduced data model is given by:

ŷ_k = Ĥ_k x̂_k + ε̂_k, ε̂_k ∼ 𝒩(0, R̂_k),

where the projected observation operator Ĥ_k and the corresponding observation covariance R̂_k are given by

Ĥ_k = U_k^⊤ H^+ H V_k, R̂_k = U_k^⊤ H^+ R_k (H^+)^⊤ U_k.

The orthonormal bases V_k and U_k can be obtained from different dimension reduction techniques, in particular POD, described in more detail in <ref>.
§.§ Projected Optimal Proposal Particle Filter (Proj-OP-PF)
We now summarize the Proj-OP-PF developed in <cit.>, which relies on the orthogonal projections of the model equations described above.
Let x̂_k^ℓ ≔ V_k^⊤ x_k^ℓ, for ℓ = 1, …, L, denote the ℓ-th projected particle at time t_k, where L is the total number of particles.
We use the projected physical model together with the projected data model (in state-model space), their corresponding covariance matrices Q̂_k = V_k^⊤ Q_k V_k and R̂_k, and the projected observation operator Ĥ_k, as defined above.
Projected particle update:
apply the optimal proposal particle update using the projected physical model and the original data model:

x̂_k^ℓ = f̂(x̂_{k-1}^ℓ) + ξ_k^ℓ + K ( y_k − H V_k f̂(x̂_{k-1}^ℓ) ),

where K = Q̂_p (H V_k)^⊤ R_k^{-1}, Q̂_p^{-1} = (Q̂_k)^{-1} + (H V_k)^⊤ R_k^{-1} (H V_k), and ξ_k^ℓ ∼ 𝒩(0, Q̂_p).
Projected weight update:

w_k^ℓ ∝ exp[ -1/2 (e_k^ℓ)^⊤ (Z_k)^{-1} e_k^ℓ ] w_{k-1}^ℓ, ℓ = 1, …, L,

where Z_k ≔ Ĥ_k Q̂_k Ĥ_k^⊤ + R̂_k and e_k^ℓ = ŷ_k − Ĥ_k f̂(x̂_{k-1}^ℓ).
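A schematic one-step implementation of the two updates, in Python/numpy, could look as follows. The function and variable names, the use of the pseudo-inverse H^+ (Hp) to form the projected data quantities, and the overall organization are our assumptions based on the notation reconstructed above, not the authors' implementation.

```python
import numpy as np

def proj_oppf_step(Xhat, w, y, f_hat, H, V, U, Q_hat, R, Hp, rng):
    """One analysis step of the projected optimal-proposal particle filter (sketch).
    Xhat : (L, r) reduced particles, w : (L,) weights, y : (d,) observation."""
    HV = H @ V                                    # observation operator on reduced states
    Qp = np.linalg.inv(np.linalg.inv(Q_hat) + HV.T @ np.linalg.solve(R, HV))
    K = Qp @ HV.T @ np.linalg.inv(R)              # optimal-proposal gain

    # Projected data quantities, used only in the weight update.
    Hhat = U.T @ Hp @ HV                          # = U^T H^+ H V
    Rhat = U.T @ Hp @ R @ Hp.T @ U
    yhat = U.T @ Hp @ y
    Z = Hhat @ Q_hat @ Hhat.T + Rhat

    for ell in range(Xhat.shape[0]):
        fx = f_hat(Xhat[ell])                     # reduced forecast of particle ell
        xi = rng.multivariate_normal(np.zeros(fx.size), Qp)
        Xhat[ell] = fx + xi + K @ (y - HV @ fx)   # particle update (original data)
        e = yhat - Hhat @ fx                      # innovation in projected data space
        w[ell] *= np.exp(-0.5 * e @ np.linalg.solve(Z, e))
    w /= w.sum()
    return Xhat, w
```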
The unprojected OP-PF is recovered by setting the projections to identity matrices, V_k ≡ U_k ≡ 𝕀.
We refer to the unprojected OP-PF as I-Proj when it is used as the reference for the projected filter.
§.§ Resampling scheme
The ESS is given by (<ref>).
Resampling, by an extension of the resampling scheme given in <cit.>, occurs when the ESS falls below a threshold (e.g., ESS < L/2).
Resampled particles are added by regenerating an unweighted particle ensemble. We then diffuse these particles by adding noise. The noise is generated by sampling Gaussian random vectors ζ ∼ 𝒩(0, ω𝕀) and transforming them as

V_k^⊤ [ α U_k U_k^⊤ + (1 − α) 𝕀 ] ζ.

The parameter 0 < α < 1 is the proportion of the resampling variance inside the subspace of the reduced data model, span(U_k), and ω ≥ 0 is the (tuneable) total resampling variance.
§ TIME-VARYING MODEL REDUCTION USING SLIDING-WINDOW POD
§.§ Proper Orthogonal Decomposition (POD)
POD refers to the calculation of orthogonal coordinates for the subspace in which collected data evolve.
It is omnipresent in applied mathematics and is known as principal component analysis (PCA), Karhunen–Loéve decomposition, and empirical orthogonal function (EOF) decomposition in other contexts.
An excellent short review of the main features can be found in <cit.>.
Consider state vectors (called snapshots) x_j ∈ ℝ^M, where j = 1, …, J, collected into the snapshot matrix

X ≔ [ x_1 x_2 … x_J ].

A decomposition of the state evolution into a separation-of-variables ansatz can be written as

x_j ≈ ∑_m σ_m φ_m ψ_{j,m},

where the unit-norm vectors φ_m and ψ_m represent, respectively, the “spatial” and temporal profiles associated with mode m, while the σ_m are the linear combination coefficients.
Although there are many possible separation-of-variables decompositions, POD is characterized by the requirement that both {φ_m} and {ψ_m} should be orthogonal sets.
POD can be computed by the singular value decomposition (SVD) of the matrix X, writing the factorization in (<ref>) as a product of three matrices

X = [ φ_1 φ_2 … ] diag(σ_1, σ_2, …) [ ψ_1 ψ_2 … ]^⊤ = Φ Σ Ψ^⊤,

where the entries σ_m ≥ 0 of the diagonal matrix Σ are the singular values, while the left- and right-singular vectors, φ_m and ψ_m, are orthonormal bases for the column and row spaces, respectively.
The singular values σ_m are commonly ordered in decreasing order, and the σ_m and vectors φ_m with a low index are called dominant.
In particular, we work with the “economy” version of the SVD that omits the singular values that are equal to zero and their associated singular vectors.
The remaining vectors φ_m and ψ_m, respectively, span the range and the cokernel of X.
To reduce the dimension of X, we form the reduction matrix

V = [ φ_1 ⋯ φ_r ],

retaining the dominant r < M basis vectors.
According to the Eckart–Young theorem <cit.>, the projected snapshot matrix V V^⊤ X is the best approximation of X among all matrices of rank r, as measured by either the induced 2-norm or the Frobenius norm; for more see <cit.>.
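In practice, the truncated POD basis and the quality of the resulting rank-r approximation can be computed in a few lines of numpy; the snapshot matrix below is a random placeholder standing in for actual model output, and the function name is ours.

```python
import numpy as np

def pod_basis(X, r):
    """Economy SVD of the snapshot matrix X (state dim x snapshots);
    returns the r dominant spatial modes and all singular values."""
    Phi, sigma, PsiT = np.linalg.svd(X, full_matrices=False)
    return Phi[:, :r], sigma

X = np.random.default_rng(1).standard_normal((400, 100))   # placeholder snapshots
V, sigma = pod_basis(X, r=20)
X_proj = V @ (V.T @ X)                                      # best rank-r approximation
rel_err = np.linalg.norm(X - X_proj) / np.linalg.norm(X)    # Frobenius-norm relative error
```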
§.§ Computing model reduction and data reduction matrices using POD
The model reduction matrix V is computed by applying POD to the snapshot matrix X, which is equivalent to computing the economy singular value decomposition X = Φ Σ Ψ^⊤ and retaining the first r < M columns, setting V = Φ_{[1:r]}.
The data reduction matrix U can be computed in two non-equivalent ways, as the resulting subspace needs to be in the span of the dominant POD vectors and in the cokernel (input subspace) of the data operator H.
Since the projections onto the dominant POD vectors and onto the cokernel of H, P_H ≔ H^+ H, do not commute in general, the two choices correspond to the order in which these operations are computed.
The first option (A) is to re-use the POD of the snapshot matrix X and choose vectors from it, whether by their singular values or by performing an additional optimization as in <Ref>.
The chosen columns of Φ are then additionally projected onto the cokernel of H by setting U_A = P_H Φ_{[1:q]}, the H-projection of the first q columns of the X-based POD modes.
The second option (B) is to apply POD to the projected snapshot matrix P_H X, compute the singular value decomposition P_H X = Φ̂ Σ̂ Ψ̂^⊤, and then set U_B = Φ̂_{[1:q]}, choosing the first q columns of the POD modes of the H-projection of the snapshots.
The difference between using U_A (the projected POD vectors) vs. U_B (POD vectors of the projected snapshots) is subtle.
Comparing the projections onto the spans of U_A and U_B shows that option (A) is equivalent to the composition P_H P_q P_H, while the projection in option (B) is equivalent to P_q P_H, where in both cases P_q is a projection onto a q-dimensional subspace of the span of the snapshots.
If there is a significant intersection between the cokernel of H and the span of the snapshots, the distinction between (A) and (B) is minor.
However, if the column rank of H is significantly smaller, or if a significant part of its range is orthogonal to the dominant subspace of the snapshots, then option (A) may overly restrict the size of the subspace in which the assimilation is performed.
A more detailed discussion of this issue can be found in <cit.>.
As the observation operators chosen below contain all variables, or a large subset of them, we do not expect the distinction between U_A and U_B to play a major role.
For the remainder of this paper, we choose option (B) for the computation of the data reduction matrix and drop the subscript to simplify the notation.
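The two constructions can be sketched as follows (Python/numpy). The QR step that re-orthonormalizes the projected columns in option (A) is our addition so that both branches of the sketch return orthonormal bases; the function name is a placeholder.

```python
import numpy as np

def data_reduction_options(X, H, q):
    """Build the two candidate data reduction matrices discussed above.
    P_H = H^+ H is the orthogonal projection onto the cokernel of H."""
    P_H = np.linalg.pinv(H) @ H
    Phi, _, _ = np.linalg.svd(X, full_matrices=False)
    U_A, _ = np.linalg.qr(P_H @ Phi[:, :q])          # (A): project dominant POD modes of X
    Phi_hat, _, _ = np.linalg.svd(P_H @ X, full_matrices=False)
    U_B = Phi_hat[:, :q]                             # (B): POD modes of projected snapshots
    return U_A, U_B
```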
Both dimensions r and q can be chosen based on additional knowledge about the model or data space, e.g., if the model analysis suggests that the dynamics collapses to a subspace of a known dimension.
Alternatively, the dimensions can be chosen in a data-driven way, by selecting the number of retained POD vectors so that the resulting approximations of the matrices X and P_H X are within a certain distance (measured in the norm ‖·‖_F) of the original matrices.
We explore several versions of the data-driven approach in this work.
§.§ Time-varying projection using a sliding window
The SWPOD computes a time-dependent POD projection operator based on a sliding subset of snapshots from the entire evolution.
The process starts by splitting the time interval into sub-intervals (windows) W_i = [t_i, t_i + T_W], where t_i ∈ {0, T_S, 2T_S, …}, with T_W the window size and T_S the window shift.
Throughout this paper, we choose the window size to be twice the window shift, T_W = 2 T_S, so that a generic time point belongs to two consecutive windows W_i, W_{i+1}.
The reduction matrices V_{W_i} = [ φ_1 ⋯ φ_r ] are computed using the POD of the windowed snapshot matrices X_{W_i}, formed from the columns of X belonging to the window W_i.
The choice of the dimension of the reduced space r, viz. the rank of V_{W_i}, is essential for the quality of the assimilation.
Below, we first present results where r is common to all windows.
Alternatively, the number of retained vectors can be chosen adaptively for each window, resulting in time-varying choices r(t) and q(t), as demonstrated in <ref>.
<Ref> illustrates the SWPOD with windows containing only four snapshots, X_W = [ x_1 x_2 x_3 x_4 ].
Three windows are shown in the graph, with a window size of four snapshots and window shift T_S = T_W/2.
Two new snapshots are added at time t_{i+1}, and the oldest snapshots are dropped.
The POD is computed for the windowed snapshot matrix X_{[t, t+T_W]}; it is updated every T_S steps, and the singular values σ_m and corresponding modes φ_m are computed.
During the online phase of the assimilation, at every time instant t, the algorithm chooses between the reduction matrices V_{W_i} and V_{W_{i+1}} of the two consecutive windows that both contain t.
In all computations presented here, we choose the later window, with index i(t) = ⌊ t / T_S ⌋.
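A minimal sliding-window POD sketch in Python/numpy is given below; window lengths are counted in snapshots, the rule of choosing the later of the two windows containing an index mirrors the choice above, and the function names are ours.

```python
import numpy as np

def sliding_window_pod(X, window, shift, r):
    """One truncated POD basis per sliding window of snapshot columns of X."""
    starts = list(range(0, X.shape[1] - window + 1, shift))
    bases = []
    for s in starts:
        Phi, _, _ = np.linalg.svd(X[:, s:s + window], full_matrices=False)
        bases.append(Phi[:, :r])
    return starts, bases

def basis_at(k, bases, shift):
    """Pick the basis of the later window containing snapshot index k."""
    return bases[min(k // shift, len(bases) - 1)]
```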
§.§ Strategies for choosing parameters of the reduction
We introduced the SWPOD reductions to react to regime changes in the model dynamics that may happen during the assimilation period.
The first strategy we present here is the offline fixed dimension where the parameters of model reduction and data reduction are set to a fixed value per window assimilation.
Offline Fixed Dimension.
The offline fixed dimension strategy selects a fixed size of the reduced physical model dimension and data model dimension based on the offline-computed digital twin simulation, i.e., before the assimilation/analysis loop begins.
The offline techniques have mode selection at fixed times which we assume here to be equally spaced.
This treats mode selection as a relatively scarce (or expensive) resource that cannot be used on demand but only at scheduled times.
This can clearly cause difficulties if there are changes in the dynamics but the modes are not updated.
The techniques developed here also apply when mode selection times are found adaptively.
In the case of a significant change in the dimension of the subspace in which the dynamics is concentrated, the dimensions r and q can be adapted in time as well.
The adaptive mode selection strategy aims to improve the assimilation and avoid overfitting and underfitting that might happen with fixed, reduced dimensions.
Using the sliding window POD, the number of retained vectors can be tailored for every window, resulting in time-varying choices for the number of retained vectors.
We present here the strategies for determining r(t) and q(t) adaptively.
Offline Adaptive Dimension.
The offline strategy computes both r(t) and q(t) based on the offline-computed digital twin simulation, i.e., before the assimilation/analysis loop begins.
The number of modes is determined by retaining a sufficient fraction of POD modes needed to compress the block of snapshots belonging to the window to a pre-determined tolerance.
This produces a piecewise-constant change in both reduction dimensions.
The reduction matrices V_k and U_k are constructed by choosing the most dominant r(t) and q(t) POD vectors corresponding to the selected window i(t).
Online Sparse Subspace.
The model reduction dimension r(t) is computed as above.
In contrast to the above strategies, the data reduction operator is not constructed from the most dominant POD vectors.
Instead, this strategy employs sparse selection (ℓ^1 minimization) to choose a subset of POD vectors to compress the incoming measurement within a specified tolerance.
§.§.§ Offline adaptivity
Recall that, to achieve a model reduction of the windowed snapshot matrix X_{[t, t+T_W]}, we form the reduction matrix

V_{[t, t+T_W]} = [ φ_1 ⋯ φ_r ]

as in (<ref>), where we choose r ≤ M (and hopefully r ≪ M), and then project onto the span of the POD basis [ φ_1 ⋯ φ_r ].
The choice of the rank of the projection (number of retained POD basis elements) can be crucial.
With the benefit of the sliding window POD, the choice of the number of retained vectors can be performed for each different window, resulting in time-varying choices r(t) and q(t).
Since performing such a choice a priori is unfeasible, we choose the dimensions automatically based on a tolerance parameter τ.
Assuming that the singular values are ordered in decreasing order, σ_i ≥ σ_j for i < j, the order

r = min{ k : ∑_{m=1}^{k} σ_m^2 ≥ τ ∑_{m=1}^{M} σ_m^2 }

is the smallest number of modes required to retain the fraction τ ∈ [0,1] of the total ‖·‖_F-norm of the matrix.
We employ two tolerance parameters, τ_m and τ_d, in place of τ in (<ref>), controlling the degree of reduction in the model and data spaces respectively, and apply the selection procedure separately to the POD decompositions of the windowed snapshot matrix X_{[t, t+T_W]} and of its restriction P_H X_{[t, t+T_W]} to the cokernel of the observation operator H.
As a result of this process, V_k, r(t), U_k, and q(t) are piecewise-constant functions of time, changing only when the choice of the time window is changed, and can be fully determined before the analysis loop begins.
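The per-window, tolerance-based choice of r(t) and q(t) can be sketched as follows (Python/numpy); tol_model and tol_data play the roles of τ_m and τ_d, and the function names are placeholders.

```python
import numpy as np

def rank_for_tolerance(sigma, tol):
    """Smallest number of modes whose squared singular values retain the
    fraction `tol` of the total sum of squared singular values."""
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    k = int(np.searchsorted(energy, tol))
    return min(k + 1, sigma.size)

def offline_adaptive_dimensions(X_window, H, tol_model, tol_data):
    """Per-window choice of the model and data reduction dimensions."""
    _, s_model, _ = np.linalg.svd(X_window, full_matrices=False)
    P_H = np.linalg.pinv(H) @ H
    _, s_data, _ = np.linalg.svd(P_H @ X_window, full_matrices=False)
    return rank_for_tolerance(s_model, tol_model), rank_for_tolerance(s_data, tol_data)
```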
§.§.§ Online Adaptivity
The online adaptive method aims to determine q(t) ≤ r(t) for each time step based on the just-observed measurement.
The matrix V_k and the dimension r(t) are determined as in <ref>; then U_k is chosen as the subset of columns of V_k that best approximates the lifted measurement H^+ y_k.
The data reduction dimension and operators are used only in the weight-update step of the OP-PF, so we can compute U_k and choose the size q(t) online for each observation.
Reducing q significantly helps stave off particle degeneracy, which would otherwise decrease the effectiveness of the particle filter assimilation strategy.
The proposed rule chooses the columns of U_k as a subset of the columns of V_k by performing a sparsity-promoting regression (lasso <cit.>) of the incoming measurement onto the POD basis.
The regression coefficients c⃗^∗ are computed by the minimization

c⃗^∗ = argmin_{c⃗ ∈ ℝ^r} [ ‖ V_k c⃗ − H^+ y_k ‖_2 + λ_D ‖ c⃗ ‖_1 ],

where the term ‖ c⃗ ‖_1 is responsible for promoting sparsity of the coefficient vector c⃗.
The indices of the non-zero coefficients of c⃗^∗ then correspond to those columns of V_k that are used to construct U_k.
The regression target H^+ y_k promotes retaining only POD modes that are in the cokernel of H, i.e., those POD modes whose magnitude is observable under the action of H.
The lasso optimization is a convex relaxation of selecting the best subset of columns of V_k to approximate H^+ y_k.
More on this can be found in standard references on statistical inference, for example <cit.>.
The parameter λ_D ∈ [0,1) is tunable.
Setting λ_D = 0 results in a “standard”, i.e., non-sparse, ℓ^2 regression, which generically sets U_k = V_k (and consequently q = r).
Increasing λ_D increases the importance of the sparsity-promoting term ‖ c⃗ ‖_1, which reduces q below r.
Computationally, most numerical packages have algorithms that efficiently solve this regression and the impact on the runtime of the analysis procedure is minimal.
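A sketch of this online selection using scikit-learn's Lasso is given below. Note that sklearn's regularization parameter alpha is not identical to λ_D above (sklearn minimizes a rescaled squared-error objective), so the value passed here is purely illustrative, as are the function name and the fallback to the most dominant mode when no coefficient survives.

```python
import numpy as np
from sklearn.linear_model import Lasso

def online_data_modes(V, H, y, lam):
    """Sparsely select columns of the model POD basis V to represent the
    lifted measurement H^+ y, and return the resulting data reduction matrix."""
    target = np.linalg.pinv(H) @ y                    # lift the data to state space
    reg = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    reg.fit(V, target)                                # regress target onto the POD basis
    selected = np.flatnonzero(reg.coef_)              # indices of non-zero coefficients
    if selected.size == 0:
        selected = np.array([0])                      # fallback: keep the dominant mode
    return V[:, selected], selected
```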
Both the non-regularized (λ_D = 0) and the ℓ^1-regularized (λ_D > 0) regressions can be given a Bayesian interpretation.
The Bayesian regression equivalent to the given procedure consists in assuming a prior distribution for each coefficient being tuned, computing a posterior distribution based on the data, that is, the term H^+ y_k in (<ref>), and then taking the mode of the posterior as the value of each parameter.
The non-regularized case corresponds to taking the uniform distribution as the prior for each parameter; the regularized case instead uses the bi-exponential (Laplace) distribution, with density ∝ exp(−λ_D |c_i|), as the prior <cit.>.
Replacing the ℓ^1 norm with the ℓ^2 norm in (<ref>) would correspond to the normal distribution being used as a prior.
Since the Laplace distribution is sharply peaked at zero, it is more likely to yield zero-valued coefficients than either normal or uniform distributions, resulting in overall sparser coefficient vectors c⃗.
This procedure results in a piecewise-constant V_k that is determined offline (before the analysis loop) and changes only when the window changes.
The matrix U_k varies at each observation step, as it is computed from the incoming data y_k.
§ NUMERICAL EVALUATION OF THE ASSIMILATION
We evaluate the described assimilation procedure on the often-used Lorenz '96 (L96) model <cit.>.
It is a system of autonomous, nonlinear, ordinary differential equations parametrized by the dimension of the system and the value of the forcing parameter, first developed as a simplified model of global horizontal circulation of the atmosphere.
Depending on the combination of these values, the system can exhibit behaviors ranging from steady states and regular traveling waves to fully-developed chaos.
In the first phase of evaluation, we demonstrate the effect that the recomputation of basis has on the assimilation in both regular and chaotic regimes, even when the reduction dimension is not tailored.
In the second phase, we evaluate the adaptive strategies from <ref> that can be used to change how many basis elements and which basis elements are used in the reduction scheme.
Overall, we demonstrate that the sliding window with the adaptive tuning of the reduction operator brings a significant improvement in efficacy as compared to the time-invariant reduction operator.
§.§ Lorenz '96 model
The state of the model is a vector u = (u_i)_{i=1}^{M} of an arbitrary dimension M,
evolving according to the ordinary differential equation

u̇_i = u_{i-1}(u_{i+1} − u_{i-2}) − u_i + F, i = 1, …, M,

with the periodic boundary condition u_i ≡ u_{i+M}.
The parameter F determines whether the evolution will be qualitatively regular or chaotic.
The discrete-time evolution map used to formulate the assimilation scheme in (<ref>) is computed by solving (<ref>) using an adaptive Runge–Kutta scheme (the Dormand–Prince pair used by MATLAB's ode45), with the solution resampled at multiples of the observation time step.
To produce a regime change in the model, the forcing parameter F is replaced by a time-varying function.
In all our simulations, F changes discontinuously between F = 3 and F = 8 at one or more switching times.
<Ref> illustrates the space-time behavior of a typical solutions of L96 in a described configuration.
In all cases, the initial condition is u_m(0) = cos(2 π m/M), m=1,…, M.
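The following Python sketch (using scipy in place of the MATLAB integrator mentioned above) reproduces this setup: the L96 right-hand side with periodic boundaries, the cosine initial condition, and a discontinuous switch of the forcing. The integration horizon, sampling step, and switching time used here are illustrative placeholders, not the values used in the paper's experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 400                                   # state dimension

def forcing(t, t_switch=10.0):
    """Discontinuous forcing: chaotic (F=8) before the switch, regular (F=3) after."""
    return 8.0 if t < t_switch else 3.0

def l96_rhs(t, u):
    """Lorenz '96: du_i/dt = u_{i-1}(u_{i+1} - u_{i-2}) - u_i + F, periodic in i."""
    return np.roll(u, 1) * (np.roll(u, -1) - np.roll(u, 2)) - u + forcing(t)

u0 = np.cos(2 * np.pi * np.arange(1, M + 1) / M)          # u_m(0) = cos(2*pi*m/M)
sol = solve_ivp(l96_rhs, (0.0, 20.0), u0, method="RK45",
                t_eval=np.arange(0.0, 20.0, 0.1), rtol=1e-6, atol=1e-9)
snapshots = sol.y                          # columns are snapshots for the POD bases
```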
The model-reduction projections are in this work generated using singular vectors of POD for a single simulation of model equations.
To demonstrate the degree to which the range of the projection conforms to alternative realizations of the model evolution, we perform the following calculations.
First, we generate the reference trajectory x_k^truth by integrating the (deterministic) model equations (<ref>) for 400 steps, where we change the value of the forcing, F = 8 → 3, during the simulation, therefore triggering the regime change.
This evolution is then used to generate a projection matrix V of order r = 390, as explained in <Ref>.
Since V is generated by retaining the dominant POD vectors, we can use the singular values to estimate the time-integrated error ‖(𝕀 − VV^⊤) X‖_p / ‖X‖_p, for the induced norm p = 2 or the Frobenius norm p = F.
However, we are here additionally interested in how the snapshot error E_k^truth ≔ ‖(𝕀 − VV^⊤) x_k^truth‖_2 / ‖x_k^truth‖_2 evolves in time.
Second, starting from the same initial condition x_0 = x_0^truth, we integrate the model equations with the stochastic error (<ref>) to produce the evolution x_k.
We then compute the relative snapshot error E_k ≔ ‖(𝕀 − VV^⊤) x_k‖_2 / ‖x_k‖_2, using the same projection matrix V computed from the “noiseless” evolution x_k^truth.
Finally, we apply the Proj-OP-PF <cit.> to assimilate observations generated using x_k and to produce the ensemble estimate x̄_k as the weighted ensemble mean (<ref>).
The same projection is used in the algorithm, with the ensemble size L = 20.
Again, we compute the relative snapshot error E_k^ens ≔ ‖(𝕀 − VV^⊤) x̄_k‖_2 / ‖x̄_k‖_2.
<Ref> shows the time traces of the three errors E_k^truth, E_k, and E_k^ens, as well as the pointwise difference between the projected and unprojected solutions.
The relative and pointwise errors for x_k^truth are small, but not negligible, representing the truncation error from retaining only the dominant r < M basis vectors.
The noisy evolution x_k has slightly larger errors, as neither the noise nor the nonlinear evolution function is constrained to be in the range of V.
Since the ensemble estimate x̄_k is computed in the reduced space and reconstructed using the orthonormal system spanning the range of V, x̄_k is constrained to be in the range of V.
As a result, the relative and pointwise difference errors for x̄_k are on the order of machine precision (∼ 10^{-15}).
<Ref> demonstrates that when a time-invariant projection is derived from a solution with variable spatial complexity, such as x_k^truth, even that same solution can be significantly (mis)aligned with the range of V.
This leads us to consider time-varying projections, computed by applying POD to segments of the trajectory (sliding data windows), expecting that regime change points would be restricted to only a small proportion of such windows, leading to a more stable and predictable fit between the solution and the used projection.
To demonstrate that the degree of complexity of the solutions changes within the two regimes, we compute the POD decomposition within each regime.
Generally speaking, order reduction techniques are effective when a relatively small number of modes are capable of reproducing the dynamics.
For POD, this is indicated by the magnitude of singular values σ_m, ordered in descending order.
The singular vectors (modes) associated with small σ_m are thought of as less important for reconstructing the solution, with accuracy measured by the ℓ_2-norm.
Therefore, regimes in which σ_m quickly drop off are interpreted as simpler, and we expect order reduction techniques based on POD to perform better.
<Ref> shows the setup of the SWPOD for the L96 model with model and data dimensions both equal to 400, with the forcing parameter changing from F = 8 to F = 3 at the midpoint of the simulation (time 2500), indicated by a vertical dashed line.
The time axis is distributed between time windows as in the rest of the manuscript: subsequent windows always overlap by half (T_S = T_W/2).
<Ref> shows that the singular values computed for the SWPOD drop off much more slowly in the first half of the evolution (the chaotic regime, F = 8) than in the second half (the regular regime, F = 3).
As a result, we expect that during the regular regime the order reduction techniques will perform well even when r ≪ M.
§.§ Experimental setup
The experiments in which we evaluate the data assimilation share the following setup.
We use the L96 model with a state space of dimension M = 400.
The particles are initialized with equal weights (1/L) and a fixed number of particles, L = 20.
The projected ESS is calculated via (<ref>), and projected resampling, using the scheme described in <Ref>, is performed with the spread of particles governed by <ref>, where the proportion of the resampling variance inside the projection subspace is always taken to be α = 0.99, using the multinomial resampling scheme (see, e.g., <cit.>).
The total resampling variance is ω = 10^{-6}, applied when the ESS falls below half the ensemble size.
The physical and data model error covariances are fixed to Q = 1 𝕀 and R = 0.01 𝕀, respectively.
The standard deviation of the observation error, 0.1, is included for comparison in the figures when reporting the RMSE.
The observations are computed at every step, yielding an effective assimilation time step of 1.
The assimilation is performed over 5000 observation times.
The “truth” x_k^truth is determined by running a single simulation of the model.
The noisy observations of the evolution computed using the data model (<ref>) are fed into the assimilation process.
We refer to the calculation of the POD without employing the sliding window as NOSW and use it as the baseline case for comparison with the various described variants of SWPOD.
For each assimilation algorithm, the success of the assimilation can be measured by how closely the ensemble estimate x̄_k matches the truth x_k^truth.
Numerically, we quantify this match by several measures.
* The pointwise difference Δ_k ∈ ℝ^M is used to show the spatial distribution of the error,

Δ_k ≔ x̄_k − x_k^truth.

* The root mean squared error (RMSE), a scalar measure of the difference between the state estimate and the true state,

RMSE_k ≔ ‖Δ_k‖_2 / √(M),

where M denotes the model dimension, expresses the quality of the estimate using a scalar.
* The effective sample size,

ESS = [ ∑_{ℓ=1}^{L} (w_k^ℓ)^2 ]^{-1},

measures the spread of the weights across the particles and is an indicator of particle filter collapse, as described in <Ref>; resampling occurs when it falls below a threshold.
* The resampling count (RESAMP) measures the number of times the particle population had to be resampled.
We report the moving mean of twenty consecutive RESAMP values.
Better performance is implied by higher values of ESS and lower values of all other indicators.
We want the ESS to stay above the threshold of half the ensemble size to avoid frequent resampling.
In some experiments, we calculate the moving minimum over time of ESS where each minimum is calculated over a moving window of length 20.
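For reference, the three per-step diagnostics can be computed as follows (a short Python/numpy sketch with an assumed function name; the weights are assumed normalized):

```python
import numpy as np

def assimilation_metrics(x_est, x_truth, weights):
    """Pointwise difference, RMSE, and ESS for a single assimilation step."""
    delta = x_est - x_truth                        # pointwise difference
    rmse = np.linalg.norm(delta) / np.sqrt(delta.size)
    ess = 1.0 / np.sum(np.asarray(weights)**2)
    return delta, rmse, ess
```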
We compare some of the results to the unprojected particle filter (I-Proj), or equivalently to the use of identity matrices, V = 𝕀 and U = 𝕀.
The experiments (1 – 4) evaluate two types of adaptive techniques described in <ref> and <ref>.
The experiments' parametrization details are given in <ref>.
§.§ Summary of experimental results
We present experiments to show the robustness of mode selection using SWPOD.
We first experimented with fixed dimensions of the reduced models,
with the modes updated whenever the data window is moved.
In general, assimilation with order reduction is challenging in the chaotic regime, and using SWPOD with a fixed dimension does not improve it.
We only see an improvement using SWPOD when we increase the size of the reduced physical model to very high dimensions, r = 390 or even r = 400, which results in a high error in the regular regime; see <ref>.
This also demonstrates that when the dynamics changes regime but the order reduction is not recomputed (as in NOSW), the performance of the assimilation depends heavily on the duration of each regime.
Additionally, we found that shrinking the window size results in a faster adaptation of the estimate to the new dynamical regime, as long as the smaller window is efficacious at representing the dynamics, as is the case in the regular regime of L96 (F=3).
Next, we changed the ROM dimensions dynamically with every change in the data window by using a fixed fraction of the squared singular values and a sparse selection technique developed for the projected data model (see <ref> for the description).
The great advantage of using the SWPOD is its fast adaptation to the change of the forcing parameter F.
This allows for a change in rank of the projections to mimic the local Kaplan-Yorke dimension (see, e.g., <cit.>).
Overall, the offline fixed dimension SWPOD is much more effective in a regular regime than the NOSW.
Still, the assimilation with order reduction is challenging in the chaotic regime (F=8) with or without using the SWPOD.
To overcome some of the inherent difficulties in having mode selection at fixed times, especially in the chaotic regime, the offline adaptivity technique (described in <ref>) determines the subspace dimension of SVD condition to some tolerance value.
In earlier development of our techniques <cit.> projected methods successfully lowered the dimension of the observation space.
We further augment the model selection for the observations by an online sparse selection technique (described in <ref>).
While still restricted to the larger sets of modes we have as candidates, the modes used for the projected data model are potentially sparsely selected.
In experiments (1-4), we present the result of the SWPOD using offline and online adaptivity.
While the impact of the interaction between mode selection times and parameter switching is evident, the performance is still good in terms of both RMSE and resampling.
In all experiments (1-4), good RMSE has been achieved by tuning the adaptation parameters, with the added benefit of the adaptive online method (2-4), which results in the lowest resampling rate and the highest size reduction.
§.§ Experiment 1: the effect of adaptive offline tolerance
<Ref> shows the results of SWPOD using the offline-adaptive mode selection scheme described in <ref>.
The offline adaptive mode selection method allows us to choose
the size of the reduced physical and data models for each window based on some tolerance value (i.e., the percentage of total information captured by the reduced space, <ref>).
We consider three cases in which the data tolerance is fixed at τ_d = 0.90 and the physical model tolerance varies, τ_m = 0.90, 0.99, and 0.999, and another case in which the data tolerance is increased to τ_d = 0.99.
The color in <ref> changes from dark to light to indicate the passage of time; we are looking for a low RMSE and an above-threshold minimum ESS.
Based on the RMSE in <ref>, the offline adaptive method converges closer to the observation error as the model tolerance increases from τ_m = 0.90 to τ_m = 0.999 when the system is chaotic (F = 8). In contrast, we do not observe a significant improvement in the RMSE when the data tolerance is increased to τ_d = 0.99. Overall, for the offline adaptivity experiment, τ_d = 0.90 appears to be sufficient: we do not benefit from larger values of the data tolerance but do benefit from larger values of the model tolerance τ_m.
Regarding the sizes of the reduced model and data dimensions in <ref>, the offline adaptive mode selection achieves the smallest reduced dimensions in both the model and the data when the dynamics is more regular (F = 3), while maintaining a good RMSE. In the chaotic regime (F = 8), the sizes of both r and q remain large (e.g., r = 360, q = 200).
This is consistent with the fixed mode selection needing r ≈ 400 in the F = 8 region to obtain competitive RMSE values, as illustrated in <ref>.
However, we do not see an improvement of the offline adaptive mode selection scheme over the fixed mode selection scheme in terms of resampling in <ref> and ESS in <ref>: the RESAMP for both methods is still significant, with a below-threshold ESS in the chaotic regime.
To conclude, the offline adaptive mode selection scheme has the advantage of choosing the sizes of the reduced physical model and data model dimensions dynamically, so that we do not have to deal with overfitting or underfitting the reduced sizes. However, resampling remains frequent, and the achievable reduction of the model and data dimensions remains limited, in the chaotic regime.
§.§ Experiment 2: the effect of adaptive online tolerance
In this experiment, we compare the SWPOD using the online-adaptive mode selection scheme, with physical model tolerances τ_m ∈ {0.9, 0.99, 0.999} (<ref>) and tunable online tolerance λ_D = 0.9 (<ref>), to the offline mode selection scheme (<ref>) with physical model tolerance τ_m = 0.9 and data model tolerance τ_d = 0.9.
<Ref> shows the indices of the non-zero coefficients of c⃗^∗, which correspond to the columns of V_k used to construct U_k, for selected time windows W_i, i ∈ {3, 4, 8, 9}, with tunable online data tolerance λ_D = 0.9 and physical model tolerance τ_m = 0.999.
We can see that only a small number of modes is selected, indicated by the yellow color in <ref>.
Therefore, we expect a smaller overall dimension, without the need to order the modes as in the offline adaptivity, further reducing the reduced data dimension q.
The numerical results in <ref> are obtained by averaging over five trials.
In terms of the RMSE in <ref>, the offline and online methods perform similarly, as can be seen in the first two cases in the legend.
Although both methods, offline (<ref>) and online (<ref>), show the RMSE converging to and below the observation error as the offline tolerance increases, it is evident that the online mode selection scheme has superior performance in terms of low resampling (<ref>) and an ESS above the threshold (<ref>) when the system is chaotic (F = 8).
The online adaptive mode selection scheme provides the lowest reduced data dimension (i.e., q = 1 or 2, uniformly in both regimes F = 8 and F = 3).
Similarly to the offline adaptivity experiment in <ref>, the online data adaptivity experiment does not benefit from larger values of the data tolerance but does benefit from larger values of the model tolerance τ_m.
Overall, the online mode selection scheme with a high offline model tolerance and online data tolerance (i.e., τ_m = 0.999 and λ_D = 0.9) is the most effective method in comparison to all the other (offline fixed-dimension and offline adaptive-dimension) methods.
§.§ Experiment 3: the effect of varying the observation time step of sparse observations
In this experiment, <ref> shows the effect of varying the observation time step (0.1, 0.05, 0.025, 0.01) for sparse observations, in which the observation operator is the canonical projection onto every other state variable, using the online adaptivity scheme.
The online-adaptive mode selection scheme is used with an offline physical model tolerance of τ_m = 0.999 (<ref>) and a tunable online tolerance of λ_D = 0.9 (<ref>).
The RMSE in <ref> converges to and below the observation error in both regimes (chaotic, F = 8, and regular, F = 3) for the sparse observations with a smaller time step (e.g., 0.025).
The ESS for the smaller time steps (0.025 and 0.01) of the sparse observations stays above the threshold, as shown in <ref>, which indicates less resampling, as shown in <ref>.
The sizes of the reduced model dimension r, which are calculated adaptively offline, decrease as the observation time step gets smaller, as shown in the top panel of <ref>.
The sizes of the reduced data dimension q, in contrast, are calculated adaptively online using the lasso regression (<ref>; see <ref> for the description).
As a result, the sizes of the reduced data dimension are very small and stable regardless of how small the observation time step is, as can be seen in the lower panel of <ref>.
§.§ Experiment 4: the effect of time-dependent forcing parameter of online adaptivity
In this experiment, we compare the offline adaptivity (<ref>) and online adaptivity (<ref>) to see how the time-dependent forcing parameter F affects the simulation.
The forcing term F in (<ref>) determines whether the evolution will be qualitatively regular or chaotic.
This experiment uses the offline adaptive mode selection scheme with offline tolerances τ_m = 0.999 for the model and τ_d = 0.90 for the data (<ref>).
The online-adaptive mode selection scheme uses the offline physical model tolerance τ_m = 0.999 (<ref>) and the tunable online tolerance λ_D = 0.9 (<ref>).
The effects of changing the dynamics from regular to chaotic, and vice versa, at different switching times are shown in <ref>.
Both the adaptive online and offline methods provide an RMSE converging below the observation error, whether the system is regular or chaotic, as shown in <ref>.
The adaptive online method wins by offering the most significant reduction in the data dimension in <ref>, with a maximum reduced data dimension of q = 2, and the lowest percentage of resampling in <ref>, where the highest total number of resampling events occurs for the offline method.
Overall, the adaptive online method provides the best results, with the lowest RMSE, RESAMP, and reduced data dimension q.
§ DISCUSSION AND CONCLUSION
In this paper, we show the efficacy of using SWPOD with the Proj-OP-PF developed in <cit.>, <cit.>.
Generally, the Proj-OP-PF is designed to perform well if the physical model, the observational data, or both have a smaller effective dimension.
A low resampling percentage and RMSE can be achieved using lower-dimensional projected models if the required dimensions are sufficiently small.
The SWPOD with fixed and adaptive mode selection methods show promising results with lower RMSE and RESAMP, smaller error differences, and higher ESS than NOSW.
In addition, SWPOD reacts faster to the time-varying forcing parameter F= 8→ 3 of L96, where we saw a quick drop in RMSE in all experiments in the same window as F changes.
The SWPOD with the offline adaptive mode selection method described in <ref> shows successful results with a high model tolerance value, τ_m = 0.999.
The SWPOD with the adaptive online method described in <ref> performs the best out of all the other methods (i.e., fixed offline and adaptive offline) in terms of low RMSE, less resampling (RESAMP), and high ESS.
It also offers the greatest reduction in the data dimension, as we saw in the experiments above.
The techniques developed are effective through a combination of very low adaptive data and model dimensions that are optimized dynamically, depending on the underlying complexity of the behavior of the physical model.
As future work, we are exploring the use of a sliding window with dynamic mode decomposition (DMD) applied to the two-layer coupled Lorenz '96 model with changing coupling parameters.
Another avenue is the development of adaptive in-time techniques to determine when to update modes based on monitoring the representation error in projecting onto the current set of modes.
|
http://arxiv.org/abs/2307.01249v1 | 20230703180000 | An inflationary disk phase to explain extended protoplanetary dust disks | [
"Raphael Marschall",
"Alessandro Morbidelli"
] | astro-ph.EP | [
"astro-ph.EP",
"physics.space-ph"
] |
1]Raphael Marschall
1]Alessandro Morbidelli
[1]CNRS, Observatoire de la Côte d'Azur, Laboratoire J.-L. Lagrange, CS 34229, 06304 Nice Cedex 4, France
An inflationary disk phase to explain extended protoplanetary dust disks
August 1, 2023
========================================================================
Context: Understanding planetesimal formation is an essential first step to understanding planet formation.
The distribution of these first solid bodies will drive the locations where planetary embryos can grow, eventually leading to fully-fledged planets.
Aim: We seek to understand the parameter space of possible protoplanetary disk formation and evolution models of our Solar System.
A good protoplanetary disk scenario for the Solar System must meet at least the following three criteria: 1) it must produce an extended gas and dust disk (e.g., 45 au for the dust); 2) within the disk, the local dust-to-gas ratio must increase sufficiently in at least two distinct locations to explain the early formation of the parent bodies of non-carbonaceous and carbonaceous iron meteorites; and 3) dust particles that condensed at high temperatures (i.e., calcium–aluminium-rich inclusions, CAIs) must be transported to the outer disk.
Though able to satisfy some combination of these three criteria, current protoplanetary disk models have not been successful in recreating all three features simultaneously.
We aim to find scenarios that satisfy all three criteria.
Methods: In this study, we use a 1D disk model that tracks the evolution of the gas and dust disks.
Planetesimals are formed within the disk at locations where the streaming instability can be triggered.
We explore a large parameter space to study the effect of the disk viscosity, the timescale of infall of material into the disk, the distance within which material is deposited into the disk, and the fragmentation threshold of dust particles.
Results: We find that scenarios with a large initial disk viscosity (α>0.05), a relatively short infall timescale (T_infall<100-200 kyr), and a small centrifugal radius (R_C∼0.4 au; the distance within which material falls into the disk) result in disks that satisfy all three criteria for a good protoplanetary disk of the Solar System.
The large initial viscosity and short infall timescale result in a rapid initial expansion of the disk, which we dub the inflationary phase of the disk.
Furthermore, a temperature-dependent fragmentation threshold, which mimics the fact that cold icy particles break more easily, results in larger and more massive disks.
This in turn, results in more “icy” than “rocky” planetesimals.
Such scenarios are also better in line with our Solar System, which has small terrestrial planets and massive giant planet cores.
Finally, we find that scenarios with large R_C cannot transport CAIs to the outer disk and do not produce planetesimals at two locations within the disk.
§ INTRODUCTION
Understanding planetesimal formation within protoplanetary disks is an important first step to understanding planet formation.
The distribution of these first solid bodies will drive the locations where planetary embryos can grow, eventually leading to fully fledged planets <cit.>.
Observations of protoplanetary dust disks show two distinct properties: they are large and long-lasting.
Their sizes range from 10-500 au with typical sizes ∼30 au <cit.>, and have lifetimes of millions of years <cit.>.
Because the disk formation occurs on much shorter timescales (of the order of 100 thousand years), dust is not continuously supplied to the system.
It, therefore, needs to be preserved at large heliocentric distances for millions of years after disk formation.
The Solar System provides a set of additional constraints on the properties and evolution of the protosolar disk.
However, it is unknown a priori whether these were common to most protoplanetary disks or specific to our own.
The existence and the properties of comets suggest that the protosolar disk was typical in terms of radial extension and lifetime. In fact,
comets are thought to have formed at distances between 20 and 40 au <cit.>. Furthermore, cold classical Kuiper belt objects are thought to have formed in-situ up to a distance of 45 au <cit.>. Additionally, comets have likely formed late <cit.>, i.e., several million years after the formation of the first solids, the so-called calcium–aluminium-rich inclusion <cit.>. A late formation is needed to avoid any significant radiogenic heating, which would result in the loss of highly volatile ices such as CO_2 and CO <cit.>. The presence of these highly volatile species also in very large comets (∼ 100 km) such as Hale-Bopp or Bernardinelli–Bernstein <cit.> confirms that comets remained cold not because of their small sizes but rather because of they formed late, at a time when most short-lived radioactive elements (e.g. ^26Al) had already decayed. Also, radioactive heating would have increased the bulk density of large objects to a degree inconsistent with the low density of icy bodies such as Trojans and Kuiper-belt objects (between 300 and 1500 km/m^-3; ), further supporting late formation.
We have additional evidence for a long-lasting protosolar disk.
The meteoritic record contains both samples from differentiated and un-differentiated parent bodies.
The latter formed significantly later – up to 5 million years after CAI formation <cit.>.
Therefore, ample evidence suggests that our Solar System formed from an extended and long-lived protoplanetary disk.
Because we will focus in this work on the first generation of planetesimal, and the problem of long-lasting disks is an issue in itself, our first requirement for a good model of the Solar System disk is its large size.
Focusing on the first generation of planetesimals, the differentiated parent bodies of iron meteorites, we find that these can be divided into two isotopically distinct groups akin to carbonaceous chondrites (CC) and non-carbonaceous chondrites (NC) <cit.>.
Thus, they are usually referred to as CC- and NC-iron meteorites, respectively.
Both groups of iron meteorites formed essentially simultaneously in the disk <cit.>.
Because they formed simultaneously, they must have formed at distinctly different locations in the disk, where the disk composition can differ.
Therefore, our second requirement for a good model of the Solar System disk is that it produces planetesimals at two distinct locations in the disk.
Finally, the oldest Solar System solids, CAIs, are thought to have formed as high temperature condensates very close (few tenths of an au) to the proto-Sun <cit.>.
The age of CAIs sets what is usually considered time zero of Solar System formation <cit.>.
Their age is 4,567.30 ± 0.16 million years according to Pb-Pb dating <cit.>.
Recent work argues for a revised age for CAIs of 4,568.7 Myr <cit.>.
The duration of CAI formation appears to be very short, from ∼ 100 kyr <cit.> to just ∼ 10 kyr <cit.>.
Importantly, the abundance of CAIs is significantly higher in CCs than NCs <cit.>, the latter of which are thought to have formed closer to the Sun than the former <cit.>.
Furthermore, CAIs have even been found in comets <cit.>, which descend from planetesimals formed the farthest away from the Sun.
Therefore, even though CAIs were formed close to the Sun, the planetesimals formed the furthest away are more enriched with them.
This implies that these high-temperature condensates have been transported efficiently to the outer disk, so that the latter became enriched with CAIs while the inner disk remained depleted in CAIs.
The fact that the isotopic compositions of differentiated/early and undifferentiated/late planetesimals overlap within the CC and NC reservoirs, respectively <cit.> indicates that this division of a CAI-rich outer and CAI-depleted inner disk was present already at the time when the parent bodies of the iron meteorites formed.
It has been proposed that CAIs were transported ballistically to the outer disk via magnetised winds <cit.>.
But modern simulations reveal that only particles much smaller than observed CAIs can be efficiently transported this way <cit.>.
Thus, the radial transport of CAIs during the outward spreading of the disk <cit.> remains the best option.
In summary, for our Solar System, a disk formation and evolution scenario must satisfy at least the following three properties:
* it must develop an extended disk of gas and dust (up to 45 au for the dust);
* in at least two distinct locations in the disk, the dust/gas ratio must be able to increase sufficiently to produce planetesimals and explain the early formation of NC- and CC-iron meteorite parent bodies;
* particles which condensed at high temperatures (i.e., CAIs) must be able to reach large heliocentric distances, i.e., be transported from the star's proximity to large distances.
In this work, we try to build such a scenario. In section <ref>, we describe the key processes in the formation of the disk, the evolution of its gas and dust components and planetesimal formation. Then we describe the disk model we use (Sec. <ref>) before discussing the model setup (Sec. <ref>).
In particular, we will describe four assumptions' influence on satisfying the Solar System constraints.
These are i) the centrifugal radius, R_C; ii) the initial viscosity of the disk, α_0; iii) the infall timescale of material onto the disk, T_infall; and iv) the effect of a temperature-dependent fragmentation threshold for icy particles.
Our results are presented in Sec. <ref>.
We will show that an initial rapid expansion – forming an inflationary disk stage – can result in large dust disks, forming planetesimals at two locations in the disk and transporting CAIs to the outer disk.
We will also show that disks forming from clouds with large angular momentum, which readily solve the problem of dust-disk sizes by delivering material directly at large distances, are unable to form planetesimals at two distinct locations and do not allow the transport of CAIs into the outer disk.
§ KEY PROCESSES IN DISK EVOLUTION AND PLANETESIMAL FORMATION
As anticipated in the introduction, we start discussing key processes in the formation and evolution of the disk and planetesimal accretion, focusing on the unknowns we will parametrise and test in our models.
§.§ Accretion of material into a protoplanetary disk
Whether protoplanetary disks are “born” big (i.e., form from the outside in) or “grow up” to be big (i.e., grow from the inside out) depends on the angular momentum of the infalling material.
Thus, the angular momentum of the pre-stellar cloud determines where material falls into the disk.
The larger the angular momentum of the material, the larger the distance at which it falls into the disk.
The radius in the disk where the angular momentum of the infalling material is equal to the angular momentum of the Keplerian disk is called the centrifugal radius, R_C.
If, e.g., the pre-stellar cloud has a constant angular speed throughout, then shells of material closer to the centre collapse first and, having a small specific angular momentum, will fall very close to the proto-star.
More distant shells fall into the disk later and, having larger specific angular momenta, fall farther away from the star.
Therefore, R_C increases with time for a pre-stellar cloud with a constant angular frequency.
Depending on the pre-stellar cloud, the centrifugal radius can be as large as 100 au <cit.>.
However, it is also possible the material falls continuously close to the star because of magnetic braking, which removes a significant amount of the angular momentum of the infalling material <cit.>.
The formation of such small disks is observed in some magnetohydrodynamics (MHD) simulations of the gravitational collapse of pre-stellar clouds <cit.>. These disks can then spread radially due to viscous evolution.
Current cloud collapse simulations do not yet provide a firm prescription on how a disk forms and where it collects the material falling from the molecular cloud.
Thus, in the following, we will test different idealised recipes to identify which best fits the constraints enumerated in the introduction.
Observations suggest that the timescale of accretion of material into the disk is of the order of 10^5 y, with large uncertainties <cit.>, so that the infall timescale can be considered a free parameter within an order of magnitude.
Late accretion through streamers is sometimes observed <cit.> but, given the stochastic nature of this process, we don't include it in our investigations.
The viscosity plays a key role in the evolution of the disk and its spreading away from R_C.
There is a big discussion in the literature on the actual viscosity of protoplanetary disks, but it concerns isolated accretion disks.
As long as the disk is accreting material from the molecular cloud, it is expected to suffer strong Reynolds stresses that act as an effective viscosity <cit.>.
Thus, it seems legitimate to assume that a disk which is still accreting mass has a viscosity proportional to the mass infall rate, but the proportionality factor is poorly constrained, and therefore we will consider different values in our study.
§.§ Motion of dust particles within the disk
For disks forming with a small R_C where, e.g., the material never falls outside of 10 au, dust particles must be efficiently transported from the vicinity to distances far away from the star in order to build the large observed dust disks.
In such cases, the disk (dust and gas) forms from the inside out.
The outward motion of the dust is induced through the radial aerodynamic drag of the radially expanding gas <cit.>.
Gas within R_C has a negative radial velocity (towards the star), but the gas close to and beyond R_C viscously spreads outwards.
Eventually, the entire gas disk becomes an accretion disk with a negative radial velocity throughout the disk.
The radial motion of the dust depends on its size.
The important parameter for dust dynamics is not the particle size but its Stokes number, defined as:
St = π a ρ_d/(4 Σ_g) ,
where a is the diameter of the dust particle, ρ_d is the particle solid density, and Σ_g is the gas surface density.
The radial dust velocity, v_r^d, can then be written as
v_r^d = 2St/(1+St^2) v_t^g + 1/(1+St^2) v_r^g ,
where v_t^g and v_r^g are the tangential and radial velocities of the gas relative to a circular Keplerian orbit, respectively.
When there is no dust feedback onto the gas, v_t^g=η v_K is the difference between the azimuthal gas speed and the Keplerian speed due to the partial pressure support of the gas.
The radial velocity of the gas is due to viscosity.
For small dust, when St≪ 1, the radial dust speed is dominated by the radial gas speed (v_r^d ∝ v_r^g, Eq. <ref>).
Thus, when the dust is small, it initially expands outwards from R_C with the gas.
Once the dust has grown sufficiently (i.e., St∼ 1), the tangential speed of the gas can become the dominant factor in Eq. <ref>.
Because the gas is sub-keplerian v_t^g < 0, the radial dust speed can also become negative once the dust has grown large enough, even if the gas is still in radial expansion.
This reflects the fact that dust particles that are large enough feel the headwind of the gas – the dust moves at the keplerian speed while the gas moves at a sub-keplerian speed.
Thus, while the gas can further expand outwards viscously, large dust particles will begin to drift back towards the star.
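To make this size dependence concrete, Eqs. (<ref>) and (<ref>) can be evaluated with a few lines of Python; the following sketch uses illustrative, assumed values (not simulation output) for the particle and gas properties.

import numpy as np

def stokes_number(a, rho_d, sigma_g):
    # Stokes number for a particle of diameter a [cm] and solid density rho_d
    # [g/cm^3] in gas of surface density sigma_g [g/cm^2].
    return np.pi * a * rho_d / (4.0 * sigma_g)

def dust_radial_velocity(St, v_t_g, v_r_g):
    # Radial dust velocity: drag by the (sub-keplerian) tangential gas motion
    # plus advection with the radial gas flow.
    return 2.0 * St / (1.0 + St**2) * v_t_g + 1.0 / (1.0 + St**2) * v_r_g

# Illustrative example: a 1 mm grain in a 100 g/cm^2 disk, gas spreading
# outwards at +10 cm/s against a tangential headwind of -3000 cm/s.
St = stokes_number(a=0.1, rho_d=1.0, sigma_g=100.0)
print(St, dust_radial_velocity(St, v_t_g=-3.0e3, v_r_g=10.0))
# For small St the dust follows the gas outwards; as St grows towards unity,
# the negative tangential term dominates and the dust drifts inwards.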
§.§ Dust growth
Particles grow on a timescale 1/(ZΩ), where Z=Σ_d/Σ_g is the local column-integrated dust-to-gas ratio, but their growth is limited by the so-called fragmentation barrier <cit.>. When particles grow, they start to partially decouple from the gas.
The turbulence in the disk and the radial drift of particles in the disk then enhance the relative speeds among dust particles and when the latter is larger than the fragmentation velocity v_frag, dust particles cannot coagulate further but rather break upon collisions.
The largest Stokes number that particles can acquire by coagulation is estimated to be <cit.> the minimum between:
St_frag = 0.37 v_frag^2/(3 α c_s^2) ,
and
St_ddf = 0.37 v_frag/(2 |η v_K|) ,
where α is the gas viscosity parameter, following the assumption that the viscosity ν=α c_s^2/Ω <cit.>, Sc is the Schmidt number relating viscous angular momentum transfer to turbulent diffusion, and c_s is the local sound speed. Eq. (<ref>) comes from the velocity dispersion due to turbulence in the disk, and Eq. (<ref>) comes from the differential radial speed of particles of different Stokes numbers.
The fragmentation velocity v_frag depends on the material properties.
Following the results of laboratory experiments <cit.>, it is typically assumed that v_frag=100 cm/s for refractory and silicate particles whereas v_frag=1,000 cm/s for icy particles beyond the water snowline.
Yet, recent laboratory experiments have shown that ice particles are only 'sticky' close to the sublimation temperature and more brittle when the ice is cold <cit.>.
Therefore, we will explore an additional fragmentation threshold prescription for icy particles, which is temperature dependent.
Similarly, it may be possible that silicate particles become more sticky when their temperature is close to sublimation <cit.> but, awaiting experimental confirmations, we don't yet consider this possibility in our model.
§.§ Planetesimal formation
The currently favoured mechanism for planetesimal formation is through the streaming instability (SI) <cit.> and subsequent gravitational collapse to form large – the preferred size of 100 km – planetesimals <cit.>.
The SI is triggered once sufficient dust collects within a certain region of the disk and causes the local dust-to-gas ratio to reach some threshold value <cit.>.
At that point, clouds of dust particles collapse under their own gravity to form planetesimals <cit.>.
Previous models exploring the formation of planetesimals within a disk have focused on static disks, i.e., snapshots of a given disk phase.
Such models have been successful in showing that planetesimal formation is particularly favoured in the vicinity of sublimation lines, in particular, the water snowline <cit.>.
More recently, these static models were extended to include the temporal evolution of the gas and dust disks and confirm that planetesimal formation at the snowline remains the dominant location for forming a first generation of planetesimals <cit.>.
Such evolving disk models capture the expansion phase of the disk and therefore do not rely on a prescribed disk profile, e.g., the surface density of gas and dust.
The addition of the silicate condensation line, in conjunction with a small centrifugal radius, was shown by <cit.> to result in planetesimals forming at the silicate line in addition to those forming at the snow line.
Yet, these newer, explicitly time-dependent inside-out formation models exhibit the problem that they cannot satisfy at least two of our requirements.
These disks typically don't result in extended disks (requirement 1), and by extension, will also struggle to bring CAIs to the outer disk (requirement 3). This shows that a more in-depth investigation is needed, which motivates the present paper.
The reason why the published models fail on requirements 1 and 3 is that the resulting dust disk sizes are merely slightly larger than the location of the water snowline (∼5 au).
This is because particles beyond the snowline rapidly grow and drift back towards the proto-star on much shorter time scales due to aerodynamic drag in the tangential direction <cit.>.
Thus, the underlying problem is one of the particle sizes and their associated dynamical timescales.
Indeed, equation <ref> tells us that when the dust growth timescale is much shorter than the timescale for particles to be dragged outwards by the gas, dust will be lost into the star efficiently.
Therefore, to prevent dust particles from drifting towards the star, we must prevent them from growing to large sizes too fast.
§ MODEL
We use the previously presented protoplanetary disk model of <cit.>, which includes dust and gas evolution.
Here we summarise the model's main features and refer the reader to the methods section of <cit.> for a detailed model description.
We only detail the improvements made for this work.
We typically initiate the model with an empty disk and a proto-star with an initial mass of 0.5M_⊙.
This is consistent with a Class-0 protostar.
Subsequently, the disk is populated through an infall function describing the amount of mass added to the star-disk system as a function of time and distance to the star.
The mass added to the disk is assumed to decay over time as exp(-t/T_infall), where t is time and T_infall is the infall timescale, a free parameter of the model.
The time-integrated mass of the infall is scaled to result in a star-disk system with one solar mass.
The green line in Figure <ref> shows an example of the disk mass infall function for T_infall=100 kyr.
The maximum distance within which material falls into the disk is the centrifugal radius, R_C.
As recalled in section <ref>, the classic recipe for the evolution of R_C over time is derived from the assumption of a rigidly rotating sphere of material <cit.> and is <cit.>:
R_C(t) ≃ 53 ( ω/10^-14s^-1)^2 ( T/10K)^-4( M(t)/1 M_⊙)^3 au ,
where ω is the angular speed of the cloud, T is the cloud temperature, and M(t) is the total mass of the star-disk system.
For ω=9×10^-15 s^-1 and T=15 K, R_C never exceeds 10 au (orange line in Fig. <ref>).
For a larger angular speed of, e.g., ω=3.1×10^-14 s^-1 the centrifugal radius will grow to 100 au.
Therefore, depending on the angular speed of the molecular cloud the centrifugal radius can become very large.
As a reference, <cit.> used ω=1×10^-14 s^-1 and T=15 K for their study, bringing R_C to 10.5 au.
<cit.> suggested that the alternative scenario, where R_C remains small throughout the infall process due to magnetic braking of the infalling material, should be appropriate, at least for our Solar System, to aid the formation of planetesimals at two locations within the disk.
We thus adopt the prescription of <cit.> of
R_C(t) = 0.35/√(M_⋆(t)) au ,
where M_⋆ is the mass of the proto-star in solar masses, M_⊙.
We stress that the crucial assumption of Eq. <ref> is not its exact form but that R_C remains small, particularly that it remains smaller than the condensation line of silicates and refractories.
There is an ongoing debate over the scale at which this disk forms <cit.> and we thus don't constrain ourselves to only exploring scenarios using Eq. <ref>. Thus, although we mainly present results using that prescription from <cit.> we will also examine the effects of using the more traditional “Shu recipe” (see results in Sec. <ref>). In particular, we will show results where the R_C grows to 10 and 100 au, respectively.
The prescription of R_C forms our first main assumption in the model.
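For illustration, the two centrifugal-radius prescriptions can be compared directly; the sketch below evaluates Eq. (<ref>) and Eq. (<ref>) for the reference cloud parameters quoted above, purely as an example.

import numpy as np

def r_c_shu(omega, T_cloud, M_total):
    # Centrifugal radius [au] for a rigidly rotating cloud (Shu-type recipe):
    # omega [s^-1], cloud temperature T_cloud [K], star+disk mass M_total [M_sun].
    return 53.0 * (omega / 1e-14)**2 * (T_cloud / 10.0)**-4 * M_total**3

def r_c_braked(M_star):
    # Small centrifugal radius [au] when magnetic braking removes most of the
    # angular momentum of the infalling material; M_star in M_sun.
    return 0.35 / np.sqrt(M_star)

# Reference values quoted in the text: omega = 9e-15 s^-1, T = 15 K.
for M in (0.5, 0.75, 1.0):
    print(M, r_c_shu(9e-15, 15.0, M), r_c_braked(M))
# The Shu recipe grows steeply as mass is accreted, while the braked
# prescription stays well inside the silicate and refractory condensation lines.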
Material falling closer than 0.05 au (the inner edge of our simulation domain) is assumed to be directly accreted onto the star.
The gas disk evolves under viscous heating and spreading.
We use the usual definition of the viscosity ν=α H^2 Ω (or, equivalently, ν=α c_s^2/Ω), where Ω is the keplerian frequency and H=√(R T r^3/(μ G M_⋆ M_⊙)) is the scale height, with R the gas constant, μ the mean molecular weight of the gas, and G the gravitational constant.
The scale height is computed self-consistently at each distance, r, of the disk by measuring the temperature, T.
The viscosity parameter, α, is a free parameter and varies in time and with radial distance.
As discussed in Sect. <ref>, it is reasonable to assume that α decays over time in a manner proportional to the disk infall function (two examples are shown in Fig. <ref>).
However, the initial value of α – denoted α_0 – is considered a free parameter.
A minimum value of α is set at 5×10^-5, the order of magnitude of the effective turbulence generated by hydrodynamical mechanisms such as the vertical shear instability <cit.>.
In addition, at locations in the disk where it is gravitationally unstable or close to instability, the
disk develops clumps and waves that also generate an effective viscosity.
We take this into account by increasing α in those locations locally <cit.>.
Of the infalling mass, 1% is considered dust and the rest gas (hydrogen), corresponding to the solar metallicity <cit.>.
The dust is further split up into three sub-species: 1) all refractory species with a sublimation temperature above 1,400 K, 2) silicates with a sublimation temperature of 1,000 K, 3) water/ice with a sublimation temperature of 170 K.
In reality, the sublimation temperature for silicates depends on the disk pressure and global chemistry (e.g. the C/O ratio).
For instance, <cit.> showed that the silicate sublimation temperature could be 1,060 K for P=10^-4 bar and C/O=1.0.
For simplicity, we have kept the sublimation temperature of silicates at 1,000 K.
The species are assumed to have a relative abundance of 0.35/0.35/0.3.
When the local disk temperature is above one of these sublimation temperatures, the corresponding dust species is considered to be in the gaseous form and thus evolves in the same way the overall gas does.
In the part of the disk where a dust species is in solid form, we track the size of dust particles, or rather their Stokes number, St.
The model has only one dust size at each radial distance, as in most codes.
For dust size distributions that are dominated by the largest size, this is a good approximation and is indeed the result of dust growth models <cit.>.
Because of the Eulerian nature of our code, we don't just consider the limiting Stokes number given by the fragmentation barriers (<ref>) and (<ref>), where we assume Sc=0.1 <cit.>, but also need to consider that particles cannot be so large that they immediately drift out of a given cell.
This drift boundary is defined as
St_drift = 0.055 (Σ_d/Σ_g) rΩ/(η v_K) ,
where r is the radial distance to the star.
The barriers (<ref>) and (<ref>) are additions to the model compared to the one published in <cit.>, which only considered (<ref>).
The final particle size is determined through the minimum among St_growth, given by the growth algorithm with timescale 1/(ZΩ), and St_frag, St_ddf and St_drift.
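Schematically, this size-limiting step amounts to taking a minimum over the different barriers; the sketch below is an illustrative rendering of that bookkeeping in cgs units, with all input quantities passed explicitly as assumed example values.

def limiting_stokes(St_growth, v_frag, alpha, c_s, eta_vK, Z, r, Omega):
    # Cap the Stokes number by the fragmentation, differential-drift and
    # drift barriers; Z = Sigma_d / Sigma_g, eta_vK = |eta v_K|.
    St_frag = 0.37 * v_frag**2 / (3.0 * alpha * c_s**2)
    St_ddf = 0.37 * v_frag / (2.0 * abs(eta_vK))
    St_drift = 0.055 * Z * r * Omega / abs(eta_vK)
    return min(St_growth, St_frag, St_ddf, St_drift)

# Illustrative values (assumed): icy particles (v_frag = 1000 cm/s) at 5 au,
# alpha = 0.05, c_s = 6e4 cm/s, |eta v_K| = 3e3 cm/s, dust-to-gas ratio 1%.
au, yr = 1.496e13, 3.156e7
print(limiting_stokes(St_growth=1.0, v_frag=1.0e3, alpha=0.05, c_s=6.0e4,
                      eta_vK=3.0e3, Z=0.01, r=5.0 * au,
                      Omega=2.0 * 3.1416 / (11.2 * yr)))
# Here the turbulent fragmentation barrier is the most restrictive limit.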
We have also improved the dust advection treatment in the code.
For each cell, we now calculate the flux of particles out of the current cell to the lower/upper neighbouring cell based on the respective dust speed at the edge of the cell.
Additionally, we compute the flux of particles from the lower/upper cell to the current cell.
Taking into account all four possible loss/gain contributions is important, in particular, at the water snow line, because there the dust size can significantly change from one cell to the next.
The particles beyond the snow line may drift towards the star, while those within the snow line may still drift away from the star.
The dust surface density is evolved, taking into account advection and diffusion.
The back-reaction from the dust onto the gas is accounted for.
At each timestep, the midplane volume density of the dust and gas is calculated.
When the ratio of the two exceeds 0.5, we assume that planetesimal formation can occur via the streaming instability in that ring, removing the dust in excess <cit.>.
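A minimal sketch of this planetesimal-formation step is given below; interpreting "removing the dust in excess" as resetting the midplane dust-to-gas ratio to the threshold value is a simplification made here for illustration only.

import numpy as np

def form_planetesimals(rho_d_mid, rho_g_mid, sigma_dust, sigma_plts, threshold=0.5):
    # Streaming-instability trigger: where the midplane dust/gas density ratio
    # exceeds the threshold, move the excess dust surface density of that ring
    # into the planetesimal reservoir. Arrays are per radial ring.
    ratio = rho_d_mid / rho_g_mid
    excess_fraction = np.where(ratio > threshold, 1.0 - threshold / ratio, 0.0)
    transferred = excess_fraction * sigma_dust
    return sigma_dust - transferred, sigma_plts + transferred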
§ MODEL SETUPS AND CONSTRAINTS
As described in the introduction, the underlying problem that prevents dust from forming a large disk which extends far beyond the water snowline is that it grows too fast.
We will explore two ways to prevent dust from growing to a size large enough to make it drift towards the star during the expansion phase of the gas disk.
§.§ Expansion speed of the disk
First, a more rapid expansion of the gas disk – which in turn drags the dust particles in the radial direction when the Stokes number is small (Eq. <ref>) – can transport dust into more distant regions of the disk before the dust has a chance to grow significantly.
Faster expansion of the gas disk should manifest when the gas viscosity (α) is higher or the infall timescale (T_infall) is short.
To explore the effect of these two parameters of our model, we have varied them.
For the viscosity, we have one free parameter, the initial value of α at the beginning of the simulation, denoted α_0.
Once α_0 is set, it decreases as described in Sec. <ref> proportional to the mass added to the disk.
Because the mass added to the disk decays over time, so will α.
We have chosen to vary α_0 between 0.01 and 0.1 and steps of 0.01.
The lower limit is consistent with the nominal case presented in <cit.>.
The upper limit might be considered quite high, but <cit.> showed that for cases where the mass that is added to the disk is a large fraction of the disk mass itself, the disk wide α can reach large values (see their Fig. 8).
In particular, when the infalling mass is on the same order as the disk mass, α reaches values of 0.1.
Such a mass ratio is reached early in our simulations.
Therefore, we believe such a high value of α_0 is plausible for a brief period at the beginning of the simulation. Remember that we let our α decay over time at the same rate as the infalling material decays (Fig. <ref>).
An increased viscosity has the added benefit of increasing the relative velocities between the dust particles and, therefore, their collision speeds.
This results in more fragmentation and, thus, smaller particles, making it easier for the gas to transport the dust to large distances.
Regarding the infall timescale, we have tested nine values of T_infall between 15 kyr and 630 kyr.
A logarithmic spacing between cases was used.
In combination with the ten different α_0, we arrive at 90 simulations.
§.§ Fragmentation threshold of the dust
The second way to ensure particles reach larger distances in the disk is more straightforward.
In our nominal cases, we follow the assumptions of <cit.> and impose a fragmentation threshold of v_frag=100 cm/s for refractory and silicate particles and v_frag=1,000 cm/s for icy particles beyond the water snowline.
However, we also test a temperature-dependent fragmentation threshold prescription for icy particles:
v_frag(T) = v_0 + v_C Γ(T)^(5/6) ,
where T is the temperature, v_0=100 cm/s, v_C=1,600 cm/s, and
Γ(T) = Γ_C + Γ_d0 tanh(β(T-T_0)) ,
where Γ_C=Γ_d0=0.25, β=0.105, and T_0=150 K.
These parameters (v_0, v_C, Γ_C, Γ_d0, β, and T_0) were chosen to match the experimental data presented in <cit.>.
Figure <ref> shows both the data from <cit.> (orange crosses; shifted to account for the different sublimation temperatures between the laboratory and the real disk) and Eq. <ref> (light blue line).
The fragmentation threshold decreases from 1,000 cm/s to 100 cm/s between disk temperatures of 170 K and 120 K.
The new prescription makes icy particles easier to break in cold regions of the disk.
This limits their size and should help transport them to larger distances from the star.
For locations in the disk above the sublimation temperature of 170 K, i.e., for dry particles, we retain a fragmentation threshold of 100 cm/s whereas for locations with temperature below 170 K we use Eq. <ref>.
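For reference, the temperature-dependent threshold is easy to evaluate from the fit parameters quoted above; the short check at the end reproduces the two limiting values mentioned in the text.

import numpy as np

def v_frag_icy(T):
    # Temperature-dependent fragmentation threshold [cm/s] for icy particles,
    # using the fit parameters quoted in the text.
    v0, vC = 100.0, 1600.0
    Gamma_C, Gamma_d0, beta, T0 = 0.25, 0.25, 0.105, 150.0
    Gamma = Gamma_C + Gamma_d0 * np.tanh(beta * (T - T0))
    return v0 + vC * Gamma**(5.0 / 6.0)

def v_frag(T):
    # Dry particles (T > 170 K) fragment at 100 cm/s; icy particles follow
    # the temperature-dependent prescription.
    return 100.0 if T > 170.0 else v_frag_icy(T)

print(v_frag_icy(170.0), v_frag_icy(120.0))  # ~1000 cm/s and ~100 cm/s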
We have run two sets of 90 simulations (the variations in α_0 and T_infall) for the nominal fragmentation threshold and the new temperature-dependent fragmentation threshold.
The effects from rapidly expanding disks are expected to compound when also applying the new fragmentation threshold.
§.§ Summary of assumptions
To summarise, there are four main assumptions that we will explore in this work:
* The centrifugal radius, R_C: either according to Eq. <ref> (`Shu recipe') growing to 10 and 100 au respectively or Eq. <ref> where R_C remains small. Our nominal simulations are performed with Eq. <ref>.
* Variation of the initial disk viscosity, α_0, between 0.01 and 0.1.
* Variation of the infall timescale T_infall between 15 kyr and 630 kyr.
* The fragmentation threshold for icy particles: either constant at 1,000 cm/s (nominal case) or temperature-dependent according to Eq. <ref>.
§ RESULTS
§.§ Temperature independent fragmentation threshold
First, we present the results from the cases where the nominal fragmentation threshold for dust particles and the small R_C according to Eq. <ref> was used.
In these cases, particles within the water snowline fragment at 100 cm/s while those outside at 1,000 cm/s.
As discussed in the introduction, the main factor limiting dust transport to large distances is the fast growth and subsequent inward drift of particles once they have crossed the snowline.
Already very early on, e.g., after only 1,000 years, the dust particles just outside the snowline grow to the centimetre scale and effectively stop their outward radial motion.
This is shown in panel a_1 of Figure <ref>, which depicts the results of the case where we have a small viscosity of α_0=0.01 and T_infall=100 kyr <cit.>.
Particles just outside of the water snowline (dashed yellow line) have a size between 0.1 and 1 cm (Fig. <ref>a_2) and consequently have almost zero radial velocity (Fig. <ref>a_3).
Because the gas continues to spread outwards, the dust and gas disks “decouple”, i.e., the dust expansion lags behind that of the gas.
Therefore, even at this very early time, the dust disk is already smaller than the gas disk (fine black dashed line in Fig. <ref>).
In contrast, when the initial viscosity is much higher, e.g., α_0=0.1 (Fig. <ref>b), the dust particles beyond the snowline are roughly an order of magnitude smaller (Fig. <ref>b_2) and thus retain a positive/outward motion (Fig. <ref>b_3).
The dust expansion keeps up with the gas expansion, and therefore the two disks retain the same size (Fig. <ref>b_1).
As expected, disks with larger viscosity expand faster.
After 1,000 years of expansion, the gas disk with α_0=0.01 has expanded to roughly 4 au (measured where the gas surface density is 1 g/cm^2).
In contrast, the disk with α_0=0.1 has reached 10 au and is, therefore, more than double the size of the other (Fig. <ref>b).
For a given T_infall, the time a disk takes to reach 100 au, denoted T_100 au, decreases as the initial viscosity increases (Fig. <ref>).
To measure the size of the disk, we have used the location where the gas surface density takes a value of 1 g/cm^2.
For the dust, we have adopted a value 100 times smaller than for the gas, i.e., 0.01 g/cm^2, because the metallicity of our infalling material is 1%.
We are aware that this choice is somewhat arbitrary but have found it to be the definition that leads to the easiest and most reliable measure of the disk size, particularly for the dust.
Other definitions, e.g., using the distance containing a certain fraction of the total mass, have proven unstable for the dust.
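Operationally, this measure amounts to reading off the outermost radius at which the surface-density profile still exceeds the chosen threshold; a possible implementation, with the radial grid and profile as assumed inputs, is:

import numpy as np

def disk_size(r_au, sigma, threshold):
    # Outermost radius [au] at which the surface density profile `sigma`
    # (on the radial grid `r_au`) still exceeds `threshold` [g/cm^2].
    above = np.where(sigma >= threshold)[0]
    return r_au[above[-1]] if above.size else np.nan

# Gas disk size: threshold = 1 g/cm^2; dust disk size: threshold = 0.01 g/cm^2.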
Figure <ref> also shows that there is a transition of the expansion regime.
For each value of α_0, the orange star on the corresponding curve indicates the viscous timescale, T_visc, of the disk, to be read on the horizontal axis.
T_visc represents the average viscous timescale within 10 au at t=0 for a disk with an aspect ratio of 6%.
When the infall timescale is shorter than the viscous timescale (on the left side of the orange line), the expansion of the disk slows as the infall timescale decreases.
In the extreme case where the infall timescale is much shorter than the viscous timescale, the disk's ability to spread viscously is limited.
Thus, the expansion timescale reaches a plateau.
This can be clearly seen in the case of the lowest viscosity case.
In contrast, when the infall timescale is larger than the viscous timescale, the expansion of the disk slows with increasing infall timescale.
This means the expansion is limited by the amount of material resupplied by the infall.
In the most extreme cases T_100 au∼400 kyr (when α_0 and T_infall are minimal) and T_100 au∼10 kyr (when α_0 is maximal and T_infall is minimal).
We baptise such a rapid expansion, reaching 100 au in just a few tens of thousands of years, the inflationary phase of the disk.
Because T_infall in these tests varies by more than one order of magnitude, we might better measure T_100 au in units of T_infall.
Indeed, the right panel of Fig. <ref> shows the expansion time as a fraction of the infall timescale.
In this view, we can recognise that for a given α_0, the expansion time as a fraction of T_infall always decreases with increasing T_infall.
It is remarkable that if T_infall=T_visc, the value T_100 au/T_infall is independent of viscosity (i.e. the orange stars fall on a horizontal line).
§.§.§ Mass and size of the dust disk
We have measured the maximum dust mass a given disk holds beyond 1 au outside the snowline.
To make sure the measurement was not contaminated by the dynamics around the snowline, we chose to exclude the dust mass just outside the snowline.
We will refer to this part of the disk as the `outer disk'.
These masses and sizes are illustrated in Fig. <ref>.
Disks with an initial small viscosity result in small disks that contain little to no dust beyond the snowline (Fig. <ref>a_1).
In these cases, the disks can be as small as 5 au.
The most massive disks are formed with the highest viscosity and reach 60 M_⊕ and sizes between 30 and 50 au.
For a given viscosity, the infall timescale plays a crucial role in determining the dust mass in the outer disk.
The shorter the infall timescale, T_infall, is, the more massive the outer disk is (Fig. <ref>a_2).
Therefore, short T_infall and large α_0 produced the largest and most massive outer disks.
These disks thus satisfy our first criteria for good protoplanetary disks of the Solar System.
§.§.§ Planetesimal formation
To address our second criterion for good protoplanetary disks of the Solar System, we evaluate whether planetesimals form and at how many locations in the disk.
Figure <ref> summarises the mass of planetesimals formed in each of the disks.
Because planetesimals typically form at up to two locations in the disk (Fig. <ref>, right panel), we have split the results into “rocky” planetesimals (forming at the silicate condensation line) and “icy” planetesimals forming at/outside of the water snowline.
First, we observe that for most cases with T_infall>100 kyr, no “rocky” planetesimals are formed.
Second, for “rocky” planetesimals, there is an optimal viscosity given a T_infall.
This is most clearly visible for T_infall=39 kyr (the third line from the bottom).
For this infall timescale, the optimum viscosity to produce “rocky” planetesimals is α_0=0.05.
The planetesimal mass decreases for higher and lower values of α_0.
When the viscosity is too low, the amount of mass transported to the planetesimal forming region is too small because of the lower radial velocity of the gas, and when the viscosity is too high, the dust cannot settle sufficiently in the midplane to trigger the SI.
Third, the mass of “icy” planetesimals is maximised the larger the viscosity and the shorter the infall timescale.
This comes from the fact that those disks are also the most massive beyond the snowline (Fig.<ref>a).
Fourth, a small part of our parameter space (high viscosity and long infall timescales) does not form any planetesimals at any location in the disk.
Fifth, the reservoirs of “rocky” and “icy” planetesimals have a similar order of magnitude in mass.
§.§.§ CAI transport to the outer disk
For the third criterion for good protoplanetary disks of the Solar System, we track high-temperature condensates.
For this purpose, we introduce dust tracers, one for refractory particles that condense at the refractory line, and a second for refractories that never sublimated.
A fraction of the high-temperature condensates will be CAIs, but in our model, we will just refer to such particles as potential CAIs because we do not track the full condensation sequence of refractories but rather just treat all refractories as one species of dust.
Nevertheless, this lets us determine the locations in the disk that will be enriched or depleted in CAIs.
The ability of the disk to transport CAIs to the outer disk and retain them there depends again on the viscosity of the disk and the infall time scale.
In particular, the transport of CAIs is promoted when the centrifugal radius is smaller than the refractory condensation line.
If the infall timescale is too long (larger than ∼ 200 kyr for α_0=0.05), the disk is rather cold from the beginning, and therefore the refractory condensation line (defined as T=1,400 K) is located inside R_C, and no CAIs are transported to the outer disk (Fig. <ref>).
In contrast, when the infall timescale is short (less than ∼ 100 kyr), CAIs are efficiently transported to the outer disk, but then drift back into the inner disk due to the fast evolution of the disk, which transitions to a fully accreting disk within 3-4 T_infall.
While we show these results for α_0=0.05 they are qualitatively the same for other initial viscosities.
For larger initial viscosities, the infall timescale where the disk is too cold to create CAIs is shorter (e.g., at T_infall∼ 150 kyr for α_0=0.1).
Conversely, this transition happens at larger infall timescales when the viscosity is smaller (e.g., at T_infall > 400 kyr for α_0=0.01).
But in all cases, neither very short nor long T_infall are favoured for the transport of CAIs to the outer disk.
The smaller the initial viscosity is, the larger the fraction of the disk that is populated by CAIs.
For example, when α_0 < 0.05 for T_infall=100 kyr, the inner disk becomes similarly enriched with CAIs as the outer disk (Fig. <ref>).
When in addition to a low initial viscosity, the infall timescale is also short, then the entire disk is populated by potential CAIs.
Such disks would clearly not match the observations.
Yet, the larger the initial viscosity, the clearer the divide is between a CAI-enriched outer and CAI-depleted inner disk.
The presence of CAIs in outer planetesimals thus suggests a high initial viscosity with the associated rapid expansion phase of the disk.
This appears to be consistent with large, kinetic, Si isotopic variations observed in refractory inclusions, which suggest a turbulent environment during condensation <cit.>.
In all of our simulations, we have kept the Schmidt number at Sc=0.1.
A higher Schmidt number of, e.g., Sc=1 would aid the transport of CAIs to the outer disk.
However, the larger Sc is, the more difficulty the dust has settling in the midplane, which tends to make planetesimal formation more difficult.
§.§ Temperature dependent fragmentation threshold
In the case where we impose the temperature-dependent fragmentation threshold beyond the snowline (see Sec. <ref> and Fig. <ref>), we expect that dust fragments more easily and that, therefore, the outer disk gets populated with more mass.
Indeed, all disks now have at least 10 M_⊕ in the outer disk (Fig. <ref>).
Though the disks are, in general, not significantly more massive (10-70 M_⊕ compared to 0-60 M_⊕), the disks with the temperature-dependent fragmentation threshold are much larger (30-80 au instead of 5-50 au).
Thus there is, as expected, a general shift to more massive and larger outer disks.
This shift of dust mass from the inner to the outer disk has clear consequences.
We now have significantly more “icy” planetesimals than “rocky” ones (Fig. <ref>).
For some combination of parameters α_0 and T_infall (e.g., 0.07 ≤α_0 ≤ 0.1 and 40 kyr≤ T_infall≤ 100 kyr), a couple of Earth masses of “rocky” planetesimals form together with a couple of tens of Earth masses of “icy” planetesimals.
This is in very good agreement with the structure of the Solar System, with massive giant planets' cores and small terrestrial planets.
Similarly to the temperature-independent fragmentation threshold, there are little to no planetesimals when T_infall>100 kyr.
The delineation is even a bit clearer.
Nevertheless, the part of parameter space with two planetesimal rings is roughly equally large irrespective of the fragmentation threshold.
Concerning CAI transport, the overall behaviour is similar to the case with the nominal fragmentation threshold.
But, because particles are more easily transported to the outer disk CAIs also reach much larger distances.
§.§ Shu infall
Because our prescription of the infall is somewhat unconventional, i.e., the centrifugal radius, R_C∼0.35 au (Eq <ref>), we have also tested the more common assumption according to <cit.>.
In the “Shu-case”, R_C rapidly grows from 1 au to 8 au (Fig. <ref>, Eq. <ref> with ω=9×10^-15 s^-1 and T=15 K).
This is because the molecular cloud is assumed to be a rigidly rotating body and angular momentum is conserved (i.e. no magnetic braking).
Therefore, gas with small angular momentum collapses into the disk first, close to the star.
Later, outer shells with larger specific angular momenta fall at larger distances.
This behaviour is in contrast to our preferred cases described above, where magnetic braking reduces the angular momentum of the infalling gas to roughly a fixed value independently of the initial angular momentum of the gas in the molecular cloud.
A major consequence of the “Shu-type” infall is connected to the radial gas speed.
The disk within R_C is an accretion disk, i.e., the radial gas velocity, v_r,g, is negative (Fig. <ref>).
Therefore, dust within R_C will also always have a negative radial velocity (v_r,d<0).
Outside of R_C, the disk can spread viscously outwards (v_r^g>0; Fig. <ref>), and therefore small dust particles will also have a positive radial motion as long as they do not grow large enough to feel the headwind of the gas and start drifting back towards the star.
We have tested two different angular velocities, ω, of the molecular cloud.
Once with ω=10^-14 s^-1 resulting in a maximum R_C of roughly 10 au as shown in Fig. <ref> and once with ω=3.1 × 10^-14 s^-1 resulting in a maximum R_C of roughly 100 au.
The temperature of the molecular clouds is assumed to be 15 K in both cases.
We use here the evolution of R_C according to Eq. 3 of <cit.>.
The prescription of the T_infall and α_0 remain the same as above.
In all cases studied, the “Shu-type” infall has no difficulty producing large and massive disks (Fig. <ref>).
When R_C grows to 10 au, and we use the nominal temperature-independent fragmentation threshold, the disks are between 10 and 100 au and have masses between 2 and 200 M_⊕ (Fig. <ref>a_1).
For the same molecular cloud angular velocity but with the temperature-dependent fragmentation threshold, the disks are overall larger and more massive in particular for the cases with small α_0.
The sizes and masses are also confined to 80-150 au and 30-300 M_⊕ (Fig. <ref>b_1).
When R_C grows to 100 au the disks are even larger and more massive.
For the temperature-independent fragmentation threshold, the disks are between 40 and 100 au and have masses between 30 and 600 M_⊕ (Fig. <ref>a_2).
For the temperature-dependent fragmentation threshold, the disk sizes and masses are only weakly dependent on α_0 and T_infall.
These disks are between 150 and 400 au and have masses between 300 and 700 M_⊕ (Fig. <ref>b_2), and therefore very massive and large.
When we prescribe the “Shu-infall”, particles in the inner disk (within the water snowline) drift rapidly towards the star (Fig. <ref>).
This does not allow them to pile up at the silicate sublimation line, and therefore no “rocky” planetesimals are formed in any of the cases (left panels in Fig. <ref>).
Additionally, even at the water snowline, we observe only sparse formation of planetesimals (centre panel in Fig. <ref>).
This result is largely independent of which angular velocity of the molecular cloud we used and which fragmentation threshold is applied.
Our results differ from the results found by <cit.>.
We do not find any planetesimal formation during the phase when the snow line moves outwards.
This might be caused by the different assumptions of the disk infall prescription.
We assume that the mass added to the disk decays over time while a constant function with a sudden cut-off is assumed in <cit.>.
Additionally, we find much fewer planetesimals at the snow line.
We believe that <cit.> overestimated the amount of water vapour in their disks due to a difference in the treatment of the inner disk boundary condition for water vapour relative to that of hydrogen.
Such an excess of water vapour supports planetesimal formation at the snow line.
Finally, we have also studied the transport of CAIs in such disks.
As expected no CAIs are able to reach the outer disk, or even the terrestrial planet region (Fig. <ref>).
The example shown in the left panel of Fig. <ref> assumes α_0=0.05, T_infall=100 kyr, the temperature-dependent fragmentation threshold, and R_C growing to roughly 10 au but is representative of almost all combinations of α_0 and T_infall.
The only exception is for α_0=0.01 and T_infall<25 kyr (right panel of Fig. <ref>).
In this case, some potential CAIs are produced and transported to the outskirts of the disk (at roughly 100 au).
For cases where R_C grows to roughly 100 au, the situation is even worse because in none of the cases are there any potential CAIs in the disk.
This behaviour is not surprising.
The inward motion of the gas prevents any CAIs from being transported to the terrestrial planet region or outer disk.
Our results are broadly consistent with those of <cit.> in that the fraction of CAIs is largest in the outermost part of the disk (towards the edge of the disk itself).
<cit.> assume a constant function for the infall of material into the disk, whereas we assume a decaying function.
Assuming a constant source function results in R_C growing much slower than in our cases.
This in turn extends the period during which R_C is smaller than the refractory condensation line.
Therefore, CAIs can be produced for longer and transported into more distant regions of the disk.
In this way, the disk can generally become more enriched with CAIs than in our cases.
§ DISCUSSION AND CONCLUSION
Infall of material into protoplanetary disks occurs more or less close to the star (typically at distances much smaller than the observed disk sizes).
The disks, therefore, undergo an initial phase of viscous spreading <cit.>.
The dust on the one hand is entrained in the outward motion of the gas, and on the other hand is slowed down by the sub-keplerian motion of the gas (see Eq. <ref>) which causes its inward drift.
Whether the radial outward entrainment or sub-keplerian drag dominates the dust motion depends on the particle size.
A key parameter in any protoplanetary disk model is the so-called centrifugal radius, R_C.
This is the radius in the disk where the angular momentum is the same as that of the infalling material.
If e.g., the pre-stellar cloud rotates as a rigid sphere <cit.>, then shells of material closer to the centre collapse first and, having a small specific angular momentum, fall very close to the proto-star.
Outer shells, with larger angular momentum, will fall at larger distances and in a later stage in disk formation <cit.>.
In such scenarios, R_C grows with time and we refer to them as “Shu-type” infall models.
Contrary to this, magnetic braking can remove angular momentum from the infalling material.
This can cause the material to fall close to the star irrespective of the initial angular momentum of the material.
In the introduction we have described that a disk formation and evolution scenario for the Solar System must satisfy at least the following three requirements:
* it must develop an extended disk of gas and dust (up to 45 au for the dust);
* in at least two distinct locations in the disk, the dust/gas ratio must be able to increase sufficiently to produce planetesimals and explain the early formation of NC- and CC-iron meteorite parent bodies;
* particles which condensed at high temperatures (i.e., CAIs) must be able to reach large heliocentric distances, i.e., be transported from the star's proximity to large distances.
We found that scenarios using a “Shu-type” infall model with an associated large R_C are very successful in achieving requirement 1, as they easily result in large and massive disks.
Yet they fail to produce planetesimals at two locations in the disk (requirement 2) and transport CAIs to the outer disk (requirement 3).
Therefore, these scenarios are bad candidates for the Solar System protoplanetary disk.
On the other hand, we show that a disk fed by material with a small R_C can satisfy all three requirements, in particular when the initial viscosity is large and the infall timescale is of order 100 kyr or smaller.
The main results from our nominal disks with a small centrifugal radius, R_C, can be summarised as follows.
* The larger the initial viscosity, α_0, the larger the outer dust disk.
* The shorter the infall timescale, T_infall, the more massive the outer dust disk.
* Therefore, an initial inflationary expansion phase is needed to produce large, massive dust disks. The disk can reach a size of 100 au within a few tens of thousands of years.
* A temperature-dependent fragmentation threshold is more realistic and results in significantly larger and slightly more massive dust disks because particles are more fragile and therefore remain smaller at cold temperatures.
* No “rocky” and very few “icy” planetesimals form when T_infall>100 kyr.
* The largest mass of “icy” planetesimals forms when α_0>0.05.
* There is an optimum α_0 that maximises the mass of “rocky” planetesimals. For example, for T_infall=39 kyr it is α_0=0.05.
* The temperature-dependent fragmentation threshold results in roughly 10 times more “icy” than “rocky” planetesimals, whereas in the conventional case the two are of the same order. This is a direct consequence of the temperature-dependent fragmentation threshold resulting in more massive outer disks.
Although our disks with a small R_C can satisfy the three requirements we had put forth at the beginning, there are two additional related requirements that will need to be met eventually but cannot at this point.
Observations show that protoplanetary disks are long-lived, i.e., 3-4 million years <cit.>.
All dust in our models (even in the “Shu-type” infall models) drifts into the star on a timescale of a few hundred thousand years.
Therefore, the entire dust disk is lost on that timescale.
Not only does this prevent us from explaining long-lived disks, but our disks are also not able to produce a generation of planetesimals late enough to avoid differentiation, because no dust is available at these later times.
The retention of a large disk and the production of a population of planetesimals that forms late are two additional requirements for a good protoplanetary disk of the Solar System.
Clearly, our model lacks some additional disk processes that can prevent the loss of dust from the disk.
For example, once the disk viscosity is sufficiently small, magneto-hydrodynamic (MHD) effects might become dominant and structures (rings and gaps) might appear, impeding dust drift <cit.>.
This will be the object of future work.
§ ACKNOWLEDGMENTS
We acknowledge the funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 101019380).
Additionally, we acknowledge support from programme ANR-20-CE49-0006 (ANR DISKBUILD).
We thank Sebastien Charnoz, Yves Marrocchi and Francesco Lovascio for reading the manuscript and providing helpful comments.
We thank the anonymous reviewer for their constructive and useful comments that helped us improve the paper.
|
http://arxiv.org/abs/2307.02846v1 | 20230706082255 | Probability Metrics for Tropical Spaces of Different Dimensions | [
"Roan Talbut",
"Daniele Tramontano",
"Yueqi Cao",
"Mathias Drton",
"Anthea Monod"
] | math.MG | [
"math.MG",
"math.CO",
"math.ST",
"q-bio.PE",
"stat.TH"
] |
Probability Metrics for Tropical Spaces of Different Dimensions
Roan Talbut^1,†, Daniele Tramontano^2, Yueqi Cao^1, Mathias Drton^2, and Anthea Monod^1
1 Department of Mathematics, Imperial College London, UK
2 School of Computation, Information and Technology, Department of Mathematics, Technical University of Munich, Germany
† Corresponding e-mail: [email protected]
§ ABSTRACT
The problem of comparing probability distributions is at the heart of many tasks in statistics and machine learning and the most classical comparison methods assume that the distributions occur in spaces of the same dimension. Recently, a new geometric solution has been proposed to address this problem when the measures live in Euclidean spaces of differing dimensions. Here, we study the same problem of comparing probability distributions of different dimensions in the tropical geometric setting, which is becoming increasingly relevant in computations and applications involving complex, geometric data structures. Specifically, we construct a Wasserstein distance between measures on different tropical projective tori—the focal metric spaces in both theory and applications of tropical geometry—via tropical mappings between probability measures. We prove equivalence of the directionality of the maps, whether starting from the lower dimensional space and mapping to the higher dimensional space or vice versa. As an important practical implication, our work provides a framework for comparing probability distributions on the spaces of phylogenetic trees with different leaf sets.
Keywords: Simple projection; tropical matrix maps; tropical projective torus; Wasserstein metrics.
§ INTRODUCTION
Some of the most immediate and important tasks in statistics and machine learning—such as clustering <cit.> and density estimation <cit.>—require measuring a distance between probability distributions. This is usually done with a notion of distance or divergence between measures; some of the most well-known distances and divergences between probability distributions are the Kullback–Leibler divergence, total variation distance, and Hellinger distance. These notions fall within the class of maps known in probability theory as f-divergences. More recently, the Wasserstein distance between measures has become increasingly relevant in statistics as it metrises weak convergence of measures and reflects the geometry of the underlying state space <cit.>.
An underlying assumption behind computing f-divergences and Wasserstein distances, however, is that the measures live in the same space. Nevertheless, there exist many application settings where the measures to be compared occur in distinct spaces of different dimensions. For example, phylogenetic trees are used to capture the progression of cancer in a biomedical application or the evolutionary patterns of the spread of disease in a public health application. In such applications, the numbers of leaves in the trees can differ before and after intervention such as treatment or public inoculation, and therefore such datasets will lie on spaces of different dimensions. Existing work by <cit.> and <cit.> study the problem of comparing sets of trees with differing numbers of leaves; here, we consider probability measures rather than discrete data sets.
There is an inherent connection between the space of phylogenetic trees and tropical geometry <cit.>, which has recently been exploited to develop many statistical and machine learning tools for sets of phylogenetic trees <cit.>. The ambient space of phylogenetic trees as well as Gröbner complexes—central objects in tropical geometry—is the tropical projective torus, which makes it a fundamental space in tropical geometry for both theory and applications. In this work, we therefore study Wasserstein distances between probability distributions on tropical projective tori of different dimensions.
Our strategy is to leverage recent work in the Euclidean setting by <cit.>, who construct a Wasserstein distance between Euclidean spaces of different dimensions. The optimal transport problem and thus Wasserstein distances are well defined and have been previously studied on the tropical projective torus by <cit.>. In this work, we build on these existing results and relevant foundations to construct a Wasserstein distance between measures on tropical projective tori of different dimensions.
We prove that there need not be a choice of whether to map from the lower dimensional torus to the higher dimensional one or vice versa, because the directional mappings are equivalent.
The remainder of this manuscript is structured as follows. We begin with an outline of the relevant background and concepts in <Ref>. We then turn to constructing tropical equivalents of the necessary tools for our task in <Ref>, which lead to our main theorem presented in <Ref>, where we also discuss the practical implications relating to our application of interest involving phylogenetic trees. We close with a discussion of ongoing and future work in <Ref>.
§ BACKGROUND AND PRELIMINARIES
In this section, we present the setting of our work as well as overview the approach to map between probability distributions of different dimensions in the Euclidean case, which we will closely follow in our work.
Notation. Throughout this manuscript, we use the notation [n] := {1,2,…,n}.
§.§ Essentials of Tropical Algebra and Geometry
We begin by outlining the basic concepts of tropical algebra and tropical geometry needed for our work; a complete discussion can be found in <cit.>.
The tropical algebra is the semiring 𝕋 = ℝ∪{- ∞} with the addition and multiplication operators—tropical addition and tropical multiplication, respectively—given by
a ⊞ b = max{ a,b }, a ⊙ b = a+b.
The additive identity is -∞ and the multiplicative identity is 0. Tropical subtraction is not defined; tropical division is given by classical subtraction.
Note that <cit.> uses the min-plus convention where tropical addition is given by the minimum between two elements, rather than the maximum as above. While they are equivalent, the max-plus convention has been used more frequently in recent applications <cit.>.
Using tropical algebra, we can define tropical parallels to most algebraic objects. For example, matrix multiplication is given by
(M x⃗)_i = ⊞_j=1^n M_ij⊙ x_j = max_j ≤ n{ M_ij+x_j }.
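Tropical matrix-vector multiplication is straightforward to evaluate numerically; the following sketch, written in plain NumPy with -np.inf standing in for the tropical additive identity, is only an illustration of the operation.

import numpy as np

def trop_matvec(M, x):
    # Tropical matrix-vector product: (M x)_i = max_j (M_ij + x_j).
    return np.max(M + x[np.newaxis, :], axis=1)

M = np.array([[0.0, -np.inf],
              [1.0, 2.0]])
x = np.array([3.0, 1.0])
print(trop_matvec(M, x))  # [3., 4.]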
Similarly, we can define tropical polynomials as the maximum of finitely many linear maps with integer coefficients. While classical polynomials are uniquely defined up to scaling by their roots, tropical polynomials are uniquely defined up to tropical scaling by their nonlinear points. That is, the points where the maximum is achieved by two or more linear maps. These nonlinear points are referred to as the tropical hypersurface of a polynomial. Evaluating functions or other mathematical expressions using the tropical algebra is referred to as tropicalisation.
The metric space in which we work is the following.
The n-dimensional tropical projective torus is a quotient space constructed by endowing ℝ^n+1 with the equivalence relation
𝐱∼𝐲⇔𝐱 = a ⊙𝐲 for some a ∈ℝ;
it is denoted by ℝ^n+1/ℝ𝟙.
The generalised Hilbert projective metric, also referred to as the tropical metric, is given by
d_tr(x⃗,y⃗) = max_i { x_i-y_i } - min_i { x_i - y_i } = max_i,j{ x_i - y_i - x_j + y_j}.
As in classical projective spaces, we can normalise the first coordinate to embed ℝ^n+1/ℝ𝟙 in ℝ^n:
φ: ℝ^n+1/ℝ𝟙→ℝ^n, (x_0, x_1, …, x_n) ↦ (x_1-x_0, …, x_n-x_0).
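Both the tropical metric and the normalisation map φ are simple to compute; the sketch below is a direct transcription of the formulas above, included only for illustration.

import numpy as np

def trop_dist(x, y):
    # Generalised Hilbert projective (tropical) metric.
    d = x - y
    return np.max(d) - np.min(d)

def normalise(x):
    # Embed a representative of the tropical projective torus into R^n
    # by subtracting the first coordinate.
    return x[1:] - x[0]

x = np.array([0.0, 2.0, 5.0])
y = np.array([1.0, 1.0, 1.0])
print(trop_dist(x, y), trop_dist(x, y - 7.0))  # equal: invariant under tropical scaling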
Phylogenetic Trees and Gröbner Complexes.
The space of phylogenetic trees with n leaves, which are metric n-trees with nonnegative lengths on all edges, have an important connection to tropical geometry. <cit.> proved that the tropical Grassmannian—the projective variety obtained by tropicalising the Grassmannian—is homeomorphic to the space of phylogenetic trees, and therefore the tropical projective torus is the ambient space of phylogenetic trees. Phylogenetic trees are a fundamental data structure in biology that model evolutionary processes in many settings, such as the evolution of species and disease, as well as the spread of pathogens.
Computing Gröbner bases is one of the most important techniques to solve systems of polynomial equations, which is a focal task in algebraic geometry. Gröbner bases give rise to Gröbner complexes, which then relate to tropical bases; see <cit.> for more details on the relationship. The Gröbner complex is therefore an important structure in tropical geometry and the ambient space of Gröbner complexes is the tropical projective torus.
The tropical projective torus, therefore, is an important space in tropical geometry as the ambient space of important theoretical and applied structures.
Two other tropical geometric objects that will come up in this work are given in the following definition.
For any two points a⃗, b⃗∈ℝ^n/ℝ𝟙, the tropical line segment between a⃗ and b⃗ is the set
γ_a⃗b⃗ = {α⊙a⃗⊞β⊙b⃗|α, β∈ℝ}
with tropical addition taken coordinate-wise.
For a finite subset X = {x⃗_1,…, x⃗_n}⊂ℝ^n/ℝ𝟙, the tropical convex hull of X is the smallest tropically convex set containing X, i.e., the smallest set that contains X together with the tropical line segment between any two of its points; it is the set of all tropical linear combinations of X,
tconv(X) = {α_1 ⊙x⃗_1 ⊞α_2 ⊙x⃗_2 ⊞⋯⊞α_n ⊙x⃗_n |α_1, …, α_n ∈ℝ}.
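For intuition, points of a tropical line segment can be generated by sweeping the tropical combination; the parametrisation below, which normalises β to 0 so that a single real parameter traces the segment, is an illustrative choice and not a construction used later in the paper.

import numpy as np

def trop_segment_point(a, b, lam):
    # A point of the tropical line segment between a and b: the tropical
    # combination (lam ⊙ a) ⊞ (0 ⊙ b) = max(lam + a_i, b_i) coordinate-wise.
    return np.maximum(lam + a, b)

a = np.array([0.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 5.0])
for lam in (-10.0, 2.0, 3.0, 10.0):
    print(trop_segment_point(a, b, lam))
# As lam runs from very negative to very positive values, the output moves
# from (a representative of) b to a along the piecewise-linear tropical segment.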
§.§ Wasserstein Distances Between Probability Distributions
Wasserstein distances measure distances between probability distributions. They arise as a certain class of optimal transport problem, with applications in PDEs, geometry, optimisation and statistics. As opposed to f-divergences, they preserve geometric structure of the state space over which they are defined and metrise weak convergence results such as the central limit theorem <cit.>.
The Wasserstein distance is a solution to a particular case of the optimal transport problem, first introduced by <cit.>. The optimal transport problem is the search for an optimal mapping to transport a set of resources from their sources to sinks which minimises the transport cost. When the cost function is given by a distance between source and sink, the solution to the optimal transport problem yields the Wasserstein distances. The space on which the source and sink locations are located is referred to as the state space, while the physical distance that the resources need to be transported is measured by the ground metric.
The optimal transport problem was then relaxed to a probabilistic framework <cit.>. The question is then to find the optimal coupling of two random variables which minimises the expectation of a cost function; when the cost function is the ground metric on the state space, the solution to the problem gives the Wasserstein distance between probability measures.
Let (Ω,d) be a Polish metric space and p ∈ [1, ∞). Let μ, ν∈ P(Ω) and let Π(μ,ν) be the set of all couplings of μ and ν. The p-Wasserstein distance W_p on P(Ω) is defined by
W_p(μ,ν)^p := inf_π∈Π(μ,ν)𝔼_(X,Y) ∼π[ d(X,Y)^p ].
The p-Wasserstein distance is not necessarily finite on all of P(Ω), but will be finite on measures with finite p-moments, P_p(Ω) <cit.>.
The tropical projective torus is a Polish space <cit.>, while the tropical p-Wasserstein distance is well-defined and has been previously studied by <cit.> using the tropical projective torus as the state space and the tropical metric as the ground metric.
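As a concrete illustration (ours, not the authors'), the sketch below evaluates the tropical metric d_tr(x,y) = max_i(x_i - y_i) - min_i(x_i - y_i) on representatives and solves the Kantorovich linear program for two finitely supported measures with the tropical metric as ground metric; the linear-programming formulation of discrete optimal transport is standard, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

def d_tr(x, y):
    """Tropical (projective) metric between representatives x and y."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.max() - diff.min()

def trop_wasserstein(xs, ys, a, b, p=2):
    """p-Wasserstein distance between sum_i a_i delta_{xs[i]} and sum_j b_j delta_{ys[j]}
    with the tropical ground metric, via the discrete transport linear program."""
    n, m = len(xs), len(ys)
    cost = np.array([[d_tr(x, y) ** p for y in ys] for x in xs]).ravel()
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0          # row marginals equal a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                   # column marginals equal b
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
    return res.fun ** (1.0 / p)

xs = [np.array([0.0, 1.0, 3.0]), np.array([0.0, -1.0, 2.0])]
ys = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 2.0, 4.0])]
print(trop_wasserstein(xs, ys, a=np.array([0.5, 0.5]), b=np.array([0.5, 0.5])))
```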
§.§ Wasserstein Distances Between Probability Distributions in Euclidean Spaces of Different Dimensions
<cit.> constructed a Wasserstein pseudometric to measure differences between probability distributions supported on Euclidean spaces of different dimensions; it is a pseudometric in that it is positive up to an isometry equivalence relation. They note this pseudometric reflects well-understood geometric properties of certain families of measures, as well as demonstrating computational advantages over the Gromov–Wasserstein distance (<cit.>).
We will leverage their approach in our work and provide an outline of their strategy here.
<cit.> consider two measures, μ, ν in P_p(^m) and P_p(^n) respectively, with m <n. Both measures are required to be on the same state space to take a Wasserstein distance, so <cit.> define their projections of interest as those composed of a semi-orthogonal matrix and a translation:
ℳ = {ϕ: ^n →^m : ϕ(x)=Ax⃗+b, A ∈ O(m,n), b ∈^m }.
Then the sets of projection and embedding measures are defined as, respectively,
Φ^-(ν,m) {β∈ P(^m) : β = ϕ(ν) for some ϕ∈ℳ},
Φ^+(μ,n) {α∈ P(^n) : μ = ϕ(α) for some ϕ∈ℳ},
From these projected and embedded measures, the projection and embedding Wasserstein distances are defined respectively by
W_p^-(μ,ν) = inf_β∈Φ^-(ν,m) W_p(μ, β),
W_p^+(μ,ν) = inf_α∈Φ^+(μ,n) W_p(α, ν).
While both the projection and embedding distances offer a geometrically intuitive measure for comparing μ and ν, it is not obvious which would be more meaningful in practice, to map from the lower dimensional space to the higher dimensional one, or vice versa. The main result by <cit.> tells us we need not make an arbitrary choice; the two distances are equal.
<cit.>
For m ≤ n, p ∈ [1, ∞], let μ∈ P_p(^m), ν∈ P_p(^n). Then
W_p^-(μ,ν) = W_p^+(μ,ν)
where W_p^- and W_p^+ are defined by (<ref>) and (<ref>).
The focal contribution of this work is a tropical parallel to the framework of <cit.>, allowing us to map between probability measures in tropical projective tori of different dimensions and establish the same metric equivalence in the more complex tropical setting.
§ BUILDING TROPICAL TOOLS
To build up to the tropical equivalent of <Ref>, we first require tropical counterparts of central objects to the construction by <cit.>. Specifically, we require a semi-orthogonal map in the tropical setting. Notice that in our setting, the equivalence relation (<ref>) defining the tropical projective torus must always be preserved in mappings.
§.§ Tropical Matrix Maps
Standard linear maps fail to preserve the equivalence relation (<ref>) that characterises the tropical projective torus, making them unsuitable for our aim of mapping between tropical projective tori. Instead, we consider tropical linear maps. We now review and formalise the properties of general tropical matrices, highlighting their irregularities; namely, that tropical matrices can have bounded images and fibres of different dimensions.
We study tropical matrices M∈^m× n with at least one real entry per row—a non-degeneracy condition which ensures M maps into m. This condition is assumed through the rest of the manuscript.
Images of Tropical Matrices. We study the image of tropical matrices as they are used to define embedded measures via pullbacks. Let M ∈ℝ^m × n be a tropical matrix acting on n. The image of M in m is the tropical span of the columns of M; when M has all real entries, its image in m is the tropical convex hull of its columns <cit.>.
Let M := [ 2 0 0; -2 2 1; 1 3 -1 ]. Its image is given in <Ref>.
The following result characterises surjectivity for maps defined by tropical matrices.
A tropical matrix map M is surjective on m if and only if, for each row i, there is at least one column c(i) such that
M_r,c(i)∈ℝ if r = i, and M_r,c(i) = -∞ if r≠ i.
Suppose M is surjective. Then let x⃗ be such that Mx⃗ = [(0, …, K, …, 0)], where only the i-th entry is K, and K is a positive real number such that
K>max{M_i_0j_0-M_i_1j_1: M_i_0j_0,M_i_1j_1∈ℝ}.
Let j ∈ [n] be some coordinate such that (Mx⃗)_i = M_ij + x_j = K. If M_lj is real for any other coordinate, then
0 = (M x⃗)_l ≥ M_lj + x_j = M_lj-M_ij+M_ij+ x_j = M_lj - M_ij + K > 0,
a contradiction. Notice that the final equality comes from the definition of x⃗, while the strict inequality is a consequence of the definition of K.
To prove the other direction, let us consider y⃗=(y_1,…,y_m)∈m. For each i∈[m], we can pick some column c(i) that satisfies (<ref>). Let I := {c(i) : i ∈ [m] }. Then we define K as before and define x⃗=(x_1,…,x_n)∈n by
x_j :=
y_i-M_i,c(i) if j=c(i),
min_i ∈ [m] { x_c(i)-K } if j ∉ I.
By direct computation, we can check that the maximum for each row i is achieved at c(i) and that the value is exactly y_i.
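The following sketch (an illustration of ours; the helper names are hypothetical) applies a tropical matrix with -∞ entries to a point and checks the column criterion of the proposition above.

```python
import numpy as np

def trop_apply(M, x):
    """Max-plus matrix-vector product: (Mx)_i = max_j (M_ij + x_j)."""
    return np.max(np.asarray(M) + np.asarray(x)[None, :], axis=1)

def satisfies_surjectivity_criterion(M):
    """For every row i, look for a column whose only real entry lies in row i."""
    M = np.asarray(M)
    m, n = M.shape
    for i in range(m):
        witnesses = [j for j in range(n)
                     if np.isfinite(M[i, j]) and not np.isfinite(np.delete(M[:, j], i)).any()]
        if not witnesses:
            return False
    return True

M = np.array([[0.0, -np.inf, 1.0],
              [-np.inf, 0.0, 2.0]])
print(trop_apply(M, np.array([0.0, 0.0, 0.0])))       # a representative of the image point
print(satisfies_surjectivity_criterion(M))            # True: columns 0 and 1 are witnesses
```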
Fibres of Tropical Matrices.
The fibres of M determine the dimensionality of a projected measure and are therefore central to our work.
The fibre of M at a point y⃗∈m is given by F_y⃗ = {x⃗∈n : M x⃗ = y⃗}.
To study the fibres of tropical matrix maps, we first partition n by type.
Let M∈^m× n. The type of a point x⃗∈n with respect to M is the n-tuple (S_1,…,S_n), where
S_j = { i ∈ [m]: (M x⃗)_i=M_i·⊙x⃗ attains its maximum at M_ij+x_j }.
The type of a point is denoted by (x⃗).
Note that these sets cover [m]. This definition is given by <cit.> for real-valued matrices; we note it is also valid for matrices with -∞ entries.
Intuitively, S_j is the set of coordinates i of Mx⃗ that depend on the jth coordinate of x⃗; in neighbourhoods where S_j = ∅ then Mx⃗ is invariant under changes to x_j, and in neighbourhoods where S_j = [m] for some j then Mx⃗ takes the form x_j ⊙ M_· j and is therefore constant.
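The sketch below (ours) computes the type of a point with respect to a tropical matrix, reusing the example matrix from above; a small tolerance decides when the row maximum is attained.

```python
import numpy as np

def point_type(M, x, tol=1e-9):
    """Return the type (S_1, ..., S_n): S_j collects the rows i whose maximum in
    (Mx)_i = max_j (M_ij + x_j) is attained at column j."""
    vals = np.asarray(M) + np.asarray(x)[None, :]     # vals[i, j] = M_ij + x_j
    row_max = vals.max(axis=1, keepdims=True)
    attained = np.abs(vals - row_max) < tol
    return [set(np.flatnonzero(attained[:, j])) for j in range(vals.shape[1])]

M = np.array([[2.0, 0.0, 0.0],
              [-2.0, 2.0, 1.0],
              [1.0, 3.0, -1.0]])
print(point_type(M, np.array([0.0, 0.0, 0.0])))       # e.g. [{0}, {1, 2}, set()]
```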
We can then divide n into convex polyhedra according to type, where we denote one polyhedron by
X_S := {x⃗∈n : S⊂(x⃗) }.
As discussed for real matrices in <cit.>, the collection of all type cells X_S over all types form a polyhedral complex where X_S ≤ X_T if, and only if, ∀ j, T_j ⊆ S_j. For any two types S and T, X_S∩ X_T=X_S∪ T.
Let M:= [ a b c; d e f ]. The fibres of M are illustrated in <Ref>. M partitions 3 by type; points of the same colour must map to the same point.
The following lemma characterises the fibres for general tropical matrices.
The fibres F_y⃗ = {x⃗∈n: M(x⃗)=y⃗} of a general tropical matrix M are (unbounded) polyhedral complexes.
For a given type S, we define
B^S_y⃗ := ⋂_i ∈ [m]{x⃗∈n : x_j - x_k = M_1k-M_ij+y_i-y_1| 1∈ S_k, i∈ S_j}.
Then we define C^S_y⃗:=X_S∩ B^S_y⃗ and let 𝒞_y⃗:={C^S_y⃗ : S is a type}. We first prove that 𝒞_y⃗ is a polyhedral complex in n. Since for every type S, both X_S and B^S_y⃗ are polyhedra in n, C^S_y⃗ is a well-defined polyhedron in n as well.
We now prove 𝒞_y⃗ is closed under intersections. Consider two types S and T; we can write C^S_y⃗∩ C^T_y⃗ as
(X_S∩ X_T)∩(B^S_y⃗∩ B^T_y⃗)=X_S∪ T∩(B^S_y⃗∩ B^T_y⃗)
and we will prove this last term is equal to X_S∪ T∩ B^S∪ T_y⃗. By definition, B^S∪ T_y⃗ is equal to:
⋂_i ∈ [m]{x⃗∈n : x_j - x_k = M_1k-M_ij+y_i-y_1| 1∈ S_k∪ T_k, i∈ S_j∪ T_j}.
We see that B^S∪ T_y⃗⊂ B^S_y⃗∩ B^T_y⃗, so it remains to show that for any x⃗∈ X_S∪ T∩(B^S_y⃗∩ B^T_y⃗), x⃗ belongs to B^S∪ T_y⃗ as well. This amounts to proving that:
x_j - x_k = M_1k-M_ij+y_i-y_1, ∀ i∈[m], 1∈ S_k∪ T_k, i∈ S_j∪ T_j.
Let i,j,k be such that 1∈ S_k and i∈ S_j. In this case, the equation holds trivially based on the definition of B^S_y⃗. Similarly, if 1∈ T_k and i∈ T_j we are done. So, the remaining case is the one in which 1∈ S_k and i∈ T_j. We consider j_0 such that i∈ S_j_0, notice that this exists since S covers [m]. Since x⃗∈ X_S∪ T the following equation holds:
M_ij+x_j=M_ij_0+x_j_0=y_i,
implying that x_j=M_ij_0-M_ij+x_j_0. Now we can write x_j-x_k in the following form:
M_ij_0-M_ij+x_j_0-x_k=M_ij_0-M_ij+M_1k-M_ij_0+y_i-y_1=M_1k-M_ij+y_i-y_1,
where for the first equality we used that x⃗∈ B^S_y⃗. Notice that we don't need to prove the case in which 1∈ T_k and i∈ S_j, as the roles of S and T are equivalent and so the same proof applies after swapping their roles. So we proved that 𝒞_y⃗ is closed under intersection.
We prove now that the faces of elements of 𝒞_y⃗ are contained in it. Since by definition B^S_y⃗ is an affine space for any type S, the faces of C^S_y⃗ are of the form X_T ∩ B^S_y⃗ where X_T is one of the faces of X_S; that is, S ≤ T. It suffices to show now that X_T ∩ B^S_y⃗ = X_T ∩ B^T_y⃗.
Consider x⃗∈ X_T ∩ B^S_y⃗. Then for any i^*∈[m] and a^*,b^*∈[n] such that i^*∈ T_a^*∩ T_b^*, we have
x_a^*-x_b^*=M_i^*b^*-M_i^*a^*.
Now consider i∈[m], k_0,k_1,j_0,j_1∈[n] such that 1∈ S_k_0∩ T_k_1 and i∈ S_j_0∩ T_j_1; notice that these always exist since S,T both cover [m]. Moreover, S≤ T implies S_j⊂ T_j for all choices of j∈[n], which means that (<ref>) holds for i^*=1 and (a^*,b^*)=(k_0,k_1), or i^*=i and (a^*,b^*)=(j_0,j_1). This gives us the following:
x_j_1-x_k_1 =(x_j_1-x_j_0)+(x_j_0-x_k_0)+(x_k_0-x_k_1)
= (M_ij_0-M_ij_1)+(M_1k_0-M_ij_0+y_i-y_1)+(M_1k_1-M_1k_0)
=M_1k_1-M_ij_1+y_i-y_1,
proving that x⃗∈ B^T_y⃗.
In order to complete the proof, it only remains to show that F_y⃗=∪_S C^S_y⃗.
We know that n=∪_S X_S, which implies that F_y⃗=∪_S(X_S∩ F_y⃗). So it is enough to prove that X_S∩ F_y⃗=X_S∩ B^S_y⃗.
We start by proving X_S∩ F_y⃗⊆ X_S∩ B^S_y⃗. If x⃗∈ F_y⃗, then ∀ i^*∈[m]:
(Mx⃗)_i^*-(Mx⃗)_1=y_i^*-y_1.
If x⃗∈ X_S, then for every j^* such that i^*∈ S_j^*,
(Mx⃗)_i^*=M_i^*,j^*+x_j^*.
Now consider i∈[m] and k,j∈[n], such that 1∈ S_k and i∈ S_j. We can write x_j-x_k as
x_j-x_k =M_1k-M_ij+(x_j+M_ij)-(M_1k+x_k)
=M_1k-M_ij+(Mx⃗)_i-(Mx⃗)_1
=M_1k-M_ij+y_i-y_1,
where in (<ref>) we made use of (<ref>) twice, with i^*=1,i respectively, while for the last equality we used (<ref>) with i^*=i.
Finally, we show X_S ∩ B_y⃗^S ⊆ F_y⃗ by direct computation. For i,j,k such that 1 ∈ S_k and i ∈ S_j:
(M x⃗)_i - (M x⃗)_1 = M_ij + x_j - M_1k - x_k = y_i - y_1
Therefore the fibre F_y⃗ = ∪_S C_S is a polyhedral complex, as desired.
From our proof of <Ref>, we note that the poset of fibre cells C_S is given by a reverse type inclusion as in the case of type cells. Therefore, the maximal fibre cells C_S (and maximal type cells) are exactly those whose type is given by a partition of [m].
Metric Geometry Under Tropical Linear Maps.
To relate Wasserstein distances over tropical linear maps, we must understand their metric geometry. Currently, there are limited results on the behaviour of the tropical metric under the action of tropical linear maps. Here we prove a useful property of tropical linear maps, namely that they are always non-expansive with respect to the tropical metric.
For any tropical matrix M:
∀ x⃗, y⃗∈n: d_tr(M x⃗,M y⃗) ≤ d_tr( x⃗, y⃗)
There exist coordinates r,s such that
d_tr(Mx⃗, My⃗) = max_i {M_ri + x_i} - max_j {M_rj + y_j} - max_k{M_sk + x_k} + max_ℓ{M_sℓ + y_ℓ}.
Fix i,j,k,ℓ as the maximal arguments for the terms above.
Then
d_tr(M x⃗,M y⃗) = M_ri + x_i - (M_rj + y_j) - (M_sk + x_k) + M_sℓ + y_ℓ
≤ M_ri + x_i - (M_ri + y_i) - (M_sℓ + x_ℓ) + M_sℓ + y_ℓ
≤ x_i - y_i - x_ℓ + y_ℓ≤ d_tr(x⃗, y⃗).
§.§ Simple Projections
In addition to the assumption that the tropical matrices we study have at least one real entry per row as specified above, the following class of tropical matrices—simple projections—allows us to derive additional geometric implications of the mapping that will be useful for our task of constructing our tropical Wasserstein distance.
A tropical matrix M ∈^m × n, where n>m, is called a simple projection when each column has at most one real entry.
For M, we define J_i := { j : M_ij∈ℝ}. These sets are disjoint and nonempty, but do not necessarily cover [n]. Then
(M x⃗)_i = max_j ∈ J_i{ M_ij + x_j }.
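As a small illustration of ours, the sketch below builds a simple projection from an assignment of columns to rows (unassigned columns are entirely -∞) and applies it; the assertion enforces the standing non-degeneracy condition that every row has at least one real entry.

```python
import numpy as np

def simple_projection(m, n, col_to_row, offsets=None):
    """Simple projection in R^{m x n}: column j has at most one real entry,
    placed in row col_to_row[j] (None leaves the column all -inf)."""
    M = np.full((m, n), -np.inf)
    offsets = np.zeros(n) if offsets is None else np.asarray(offsets, dtype=float)
    for j, i in enumerate(col_to_row):
        if i is not None:
            M[i, j] = offsets[j]
    assert {i for i in col_to_row if i is not None} == set(range(m)), "every row needs a real entry"
    return M

def trop_apply(M, x):
    return np.max(M + np.asarray(x)[None, :], axis=1)

M = simple_projection(3, 5, col_to_row=[0, 1, 2, 0, None],
                      offsets=[0.0, 0.0, 0.0, -1.0, 0.0])
print(trop_apply(M, np.array([1.0, 2.0, 3.0, 4.0, 0.0])))   # -> [3. 2. 3.]
```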
The following relationship establishes when a tropical matrix is a simple projection in terms of the dimensionality of its fibres.
M is a simple projection if and only if, for all y⃗∈m, every maximal cell of the fibre F_y⃗ has dimension n-m.
By <Ref>, a cell of type X_S is maximal if and only if S is a partition of [m]. For such an S, we define the classical matrix M^S∈^m× n and b^S∈^m as follows
M^S_i,j := 1 if i∈ S_j, and M^S_i,j := 0 if i∉ S_j; b^S_i := M_i,j(i),
where j(i) is the only j such that i∈ S_j. From this, we directly notice that the restriction of M to X_S is given by the classically affine operator mapping x∈^n to M^S· x+b^S∈^m, where here, matrix–vector multiplication is computed in the usual (i.e., non-tropical) manner. The dimension of a maximal fibre cell C_S in X_S is then given by the dimension of the kernel of M^S. Since this is a 0/1 matrix with one nonzero entry per row, we may use a standard linear algebra argument to derive that
dim ker M^S=|{j∈[n] : M^S_i,j=0 ∀ i∈[m]}|=|{j∈[n] : S_j=∅}|≥ n-m.
We then deduce that every maximal type cell has exactly m nonempty S_j if and only if, ∀ y⃗∈m, every maximal cell in F_y⃗ has dimension n-m. It therefore suffices to show that the type of every maximal type consists of m singletons if and only if M is a simple projection.
Suppose for contradiction that M is not a simple projection. Then for some j ∈ [n] and i_0 ≠ i_1 ∈ [m], both M_i_0j and M_i_1j are real. We set
B_j := min{min(M_i_0j - M_i_0k, M_i_1j - M_i_1k) : k ∈[n]}
U_j := {x⃗∈n: x_k - x_j ≤ B_j for k∈[n]∖{j}}.
U_j is a full dimensional polyhedron in n, so there is some X_S such that dim(X_S∩ U_j)=dim(X_S). By construction, for all x⃗∈ U_j and all k ∈ [n],
M_i_0k + x_k ≤ M_i_0j + x_j,
M_i_1k + x_k ≤ M_i_1j + x_j,
and hence i_0,i_1 ∈ S_j on U_j. As S does not consist of singletons, the maximal fibre cell C_S containing x⃗∈ U_j ∩ X_S is not n-m dimensional.
To show the converse, suppose that M is a simple projection. Then for all x⃗ in a maximal type cell X_S, i ∈ S_j for some j ∈ J_i. As the J_i are disjoint, each S_j is either ∅ or a singleton.
We can verify that simple projections satisfy the condition of <Ref> by their definition, and hence are surjective. By <Ref>, they also have fibres of dimension n-m. We conclude that simple projections are the most general tropical matrices satisfying these properties.
In <Ref> and <Ref> we now use these properties of simple projections to construct a homeomorphism f_M between n and m× F_0⃗ which is crucial for our main theorem.
Let M be a simple projection from n to m and let F_0⃗ be the fibre at 0⃗∈m. We define a map f_M by mapping x⃗∈n to (Mx⃗,x⃗-z⃗^x⃗)∈m×n, where z⃗^x⃗ is defined as follows:
z^x⃗_j :=
(Mx⃗)_i - max_k (Mx⃗)_k if j ∈ J_i,
0 if j ∉∪_i∈[m] J_i.
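For intuition, the following sketch (ours) evaluates f_M for a simple projection, following the definition of z^x above; the second component is a representative of a point of the fibre over the origin.

```python
import numpy as np

def f_M(M, x):
    """Return (Mx, x - z^x) for a simple projection M."""
    M, x = np.asarray(M), np.asarray(x, dtype=float)
    Mx = np.max(M + x[None, :], axis=1)
    top = Mx.max()
    z = np.zeros_like(x)
    for j in range(M.shape[1]):
        rows = np.flatnonzero(np.isfinite(M[:, j]))
        if rows.size == 1:                     # column j belongs to J_i for i = rows[0]
            z[j] = Mx[rows[0]] - top
    return Mx, x - z

M = np.array([[0.0, -np.inf, -np.inf],
              [-np.inf, 0.0, -np.inf]])
y, u = f_M(M, np.array([1.0, 3.0, 0.0]))
print(y, u)                                    # u satisfies M u ~ 0, i.e. u lies in the fibre F_0
```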
The map f_M is a well-defined map between n and m× F_0⃗.
We must prove that this map respects the equivalence class defining n. This amounts to proving that for all c∈ and all x⃗∈n, f_M(x⃗)∼ f_M(c⊙x⃗). Let us start by considering z⃗^c⊙x⃗, which, from (<ref>), is given by
z^c ⊙x⃗_j =
(M(c⊙x⃗))_i-max_k(M(c⊙x⃗))_k = (Mx⃗)_i-max_k(Mx⃗)_k if j ∈ J_i,
0 if j ∉∪_i J_i,
so it is equal to z⃗^x⃗. Therefore we can write f_M(c⊙x⃗) as
(c ⊙ Mx⃗, c ⊙x⃗ -z⃗^x⃗)=(c ⊙ Mx⃗, c ⊙( x⃗ -z⃗^x⃗))∼ (Mx⃗, x⃗- z⃗^x⃗) = f_M(x⃗),
therefore proving that f_M is well defined.
We now prove that the image of f_M is contained in m× F_0⃗. It suffices to show x⃗-z⃗^x⃗∈ F_0⃗, which can be done by writing M(x⃗-z⃗^x⃗)_i for each i∈[m] as
max_j ∈ J_i{ M_ij + x_j - (Mx⃗)_i + max_k (Mx⃗)_k }= max_j ∈ J_i{ x_j +M_ij} - (Mx⃗)_i + max_k (Mx⃗)_k
= max_k (Mx⃗)_k,
which is independent of i, and hence M(x⃗-z⃗^x⃗)∼0⃗. Notice that for the first equality, we may take (Mx⃗)_i and max_k (Mx⃗)_k out of the maximum since they are independent of j, while for the second equality we used (<ref>).
The map f_M we introduced in <Ref> defines a homeomorphism between n and m× F_0⃗.
We begin by showing the surjectivity of f_M. Consider (y⃗,u⃗) in m× F_0⃗ and define
w_j :=
y_i - max_k y_k if j ∈ J_i,
0 if j ∉∪_i J_i.
We will show f_M(w⃗+u⃗) ∼ (y⃗,u⃗). Let us start by considering the entries of M(w⃗+u⃗), which we can write in the following way:
M(w⃗+u⃗)_i = max_j ∈ J_i{M_ij + y_i - max_k y_k+u_j }= max_j ∈ J_i{M_ij+ u_j } + y_i - max_k y_k= y_i - max_k y_k,
implying that M(w⃗+u⃗)=y⃗ and w⃗ =z⃗^w⃗+u⃗. This then implies that:
f_M(w⃗+u⃗) = (y⃗, w⃗+u⃗-w⃗) = (y⃗,u⃗),
hence f_M is surjective.
In order to show injectivity, consider x⃗^1,x⃗^2∈n such that f_M(x⃗^1)∼ f_M(x⃗^2). This means that there are c_1,c_2∈ where:
Mx⃗^2 = c_1 ⊙ Mx⃗^1,
x⃗^1 - z⃗^x⃗^1 = c_2⊙ (x⃗^2 - z⃗^x⃗^2).
From (<ref>) we see that (<ref>) implies
z⃗^x⃗^1 = z⃗^x⃗^2.
We can rewrite (<ref>) as
x^1_j=z^x⃗^1_j+c_2+x^2_j-z^x⃗^2_j=z^x⃗^2_j+c_2+x^2_j-z^x⃗^2_j=c_2+x^2_j ∀ j∈[n]
from which we deduce x⃗^1∼x⃗^2, proving f_M is a bijection.
Finally, we prove that f_M is a homeomorphism, i.e., f_M and f_M^-1 are both continuous. We show this via the following metric equivalence, where d_tr^m and d_tr^n denote the tropical metric over the tropical projective tori of dimensions m and n, respectively:
1/2 d_tr^m(Mx⃗^1, Mx⃗^2) + 1/4 d_tr^n(x⃗^1-z⃗^x⃗^1, x⃗^2-z⃗^x⃗^2) ≤ d_tr^n(x⃗^1, x⃗^2)
≤ d_tr^m(Mx⃗^1, Mx⃗^2) + d_tr^n(x⃗^1-z⃗^x⃗^1, x⃗^2-z⃗^x⃗^2)
for all x⃗^1,x⃗^2∈n.
First, note that
z^x⃗^1_j-z^x⃗^2_j =
(Mx⃗^1)_i - max_k (Mx⃗^1)_k - (Mx⃗^2)_i + max_k (Mx⃗^2)_k j ∈ J_i
0 j ∉∪_i J_i,
so we consider r_1,r_2∈[m] such that
(Mx⃗^1)_r_1 = max_k (Mx⃗^1)_k and (Mx⃗^2)_r_2 = max_k (Mx⃗^2)_k. We then have
z^x⃗^1_r_1-z^x⃗^2_r_1≥ 0,
z^x⃗^1_r_2-z^x⃗^2_r_2≤ 0,
hence,
^n(z⃗^x⃗^1,z⃗^x⃗^2) =z⃗^x⃗^1 - z⃗^x⃗^2_tr = max_a,b∈[n]{ z^x⃗^1_a-z^x⃗^2_a - z^x⃗^1_b + z^x⃗^2_b }=max_a,b∈∪_iJ_i{z^x⃗^1_a-z^x⃗^2_a - z^x⃗^1_b + z^x⃗^2_b }
= max_r,s∈[m]{ (Mx⃗^1)_r - (Mx⃗^2)_r - (Mx⃗^1)_s + (Mx⃗^2)_s}= ^m(Mx⃗^1,Mx⃗^2).
Notice that (<ref>) allows us to restrict the maximum to ∪_iJ_i. Indeed, if we consider a^*∉∪_iJ_i then
max_b∈[n]{ z^x⃗^1_a^*-z^x⃗^2_a^* - z^x⃗^1_b + z^x⃗^2_b } =max_b∈[n]{- z^x⃗^1_b + z^x⃗^2_b }≤ z^x⃗^1_r_1-z^x⃗^2_r_1+max_b∈[n]{- z^x⃗^1_b + z^x⃗^2_b }
=max_b∈[n]{z^x⃗^1_r_1-z^x⃗^2_r_1- z^x⃗^1_b + z^x⃗^2_b }≤max_a,b∈[n]{z^x⃗^1_a-z^x⃗^2_a- z^x⃗^1_b + z^x⃗^2_b }
and the same argument holds when considering b^*∉∪_iJ_i.
We now prove (<ref>):
1/2^m(Mx⃗^1,Mx⃗^2) + 1/4^n(x⃗^1-z⃗^x⃗^1,x⃗^2-z⃗^x⃗^2) ≤1/2^n(x⃗^1,x⃗^2) + 1/4x⃗^1-z⃗^x⃗^1-x⃗^2+z⃗^x⃗^2_tr
≤1/2^n(x⃗^1,x⃗^2) + 1/4 ( x⃗^1-x⃗^2_tr + z⃗^x⃗^2-z⃗^x⃗^1_tr )
= 1/2^n(x⃗^1,x⃗^2) + 1/4 (^n(x⃗^1,x⃗^2) + ^m(Mx⃗^1,Mx⃗^2))
≤^n(x⃗^1,x⃗^2),
where we made use of <Ref> and (<ref>). We now prove (<ref>):
^n(x⃗^1,x⃗^2) = x⃗^1-z⃗^x⃗^1+z⃗^x⃗^1-z⃗^x⃗^2+z⃗^x⃗^2-x⃗^2_tr = x⃗^1-z⃗^x⃗^1-x⃗^2+z⃗^x⃗^2+z⃗^x⃗^1-z⃗^x⃗^2_tr
≤x⃗^1-z⃗^x⃗^1-x⃗^2+z⃗^x⃗^2_tr+z⃗^x⃗^1-z⃗^x⃗^2_tr
=^n(x⃗^1-z⃗^x⃗^1, x⃗^2-z⃗^x⃗^2) + ^m(Mx⃗^1, Mx⃗^2),
where we used (<ref>) for the inequality.
This shows strong equivalence of the metrics, proving topological equivalence of the spaces.
This product space structure of n will allow us to prove that when optimising over simple projections, the projective and embedding Wasserstein distances on tropical projective tori are the same.
§ TROPICAL WASSERSTEIN MAPPINGS FOR DIFFERENT DIMENSIONS
Given the tropical tools and results established above, we are now equipped to present our main theoretical result; when we use simple projections as our inter-dimensional tropical maps, the projection and embedding Wasserstein distances coincide as in the Euclidean case.
Following the presentation of our main result in this section, we discuss implications of our work in the setting of phylogenetic trees.
§.§ Equivalent Tropical Wasserstein Distances
We define our set of simple projections from n to m by
ℳ_tr{ϕ_M: n→m : ϕ_M(x) = Mx where M is a simple projection}.
We note each ϕ∈ℳ_tr is measurable as it is continuous.
The sets of projected and embedded measures Φ_tr^- and Φ_tr^+ are now given by
Φ_tr^-(ν,m) {β∈ P(m) : β = ϕ(ν) for some ϕ∈ℳ_tr},
Φ_tr^+(μ,n) {α∈ P(n) : μ = ϕ(α) for some ϕ∈ℳ_tr},
while the tropical projection and embedding Wasserstein distances are given by
W_tr,p^-(μ,ν) = inf_β∈Φ_tr^-(ν,m) W_tr,p(μ, β),
W_tr,p^+(μ,ν) = inf_α∈Φ_tr^+(μ,n) W_tr,p(α, ν).
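To illustrate these definitions (this is a crude sketch of ours, not the numerical implementation announced in the discussion), the following code upper-bounds W_tr,p^- for two empirical measures with equally many atoms by minimising the empirical tropical Wasserstein distance over randomly sampled simple projections; with uniform weights and equal support sizes the transport problem reduces to an assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def d_tr(x, y):
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.max() - diff.min()

def empirical_trop_wasserstein(X, Y, p=1):
    """W_{tr,p} between uniform empirical measures with equally many atoms."""
    C = np.array([[d_tr(x, y) ** p for y in Y] for x in X])
    r, c = linear_sum_assignment(C)
    return C[r, c].mean() ** (1.0 / p)

def random_simple_projection(m, n, rng):
    M = np.full((m, n), -np.inf)
    cols = rng.choice(n, size=m, replace=False)   # one real entry per row, distinct columns
    M[np.arange(m), cols] = rng.normal(size=m)
    return M

def projection_distance_upper_bound(mu_pts, nu_pts, n_trials=200, p=1, seed=0):
    """Monte-Carlo upper bound on W^-_{tr,p}(mu, nu) over random simple projections."""
    rng = np.random.default_rng(seed)
    m, n = mu_pts.shape[1], nu_pts.shape[1]
    best = np.inf
    for _ in range(n_trials):
        M = random_simple_projection(m, n, rng)
        proj = np.max(M[None, :, :] + nu_pts[:, None, :], axis=2)   # push nu forward
        best = min(best, empirical_trop_wasserstein(mu_pts, proj, p=p))
    return best

rng = np.random.default_rng(1)
mu_pts = rng.normal(size=(20, 3))    # sample of a measure on the 3-dimensional torus
nu_pts = rng.normal(size=(20, 5))    # sample of a measure on the 5-dimensional torus
print(projection_distance_upper_bound(mu_pts, nu_pts))
```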
We begin by establishing the following lemma, which verifies that simple projections are non-expansive on P_p(n) as well as on n.
For all p ∈ [1,∞), ϕ∈ℳ_tr and α, ν∈ P_p(n), we have
W_tr,p(ϕ(α), ϕ(ν)) ≤ W_tr,p(α, ν).
This proof follows the same approach as the proof of Lemma 2.1 by <cit.>.
Let π∈ P((n)^2) be the optimal transport coupling for α and ν. Then define π_m ∈ P((m)^2) as the pushforward of π through ϕ×ϕ:
π_m(A × B) = π( ϕ^-1(A) ×ϕ^-1(B)).
Checking the marginals of π_m, we have
π_m(A,m) = π(ϕ^-1(A) ×n) = α(ϕ^-1(A)) = ϕ(α)(A),
π_m(m,B) = π(n×ϕ^-1(B)) = ν(ϕ^-1(B)) = ϕ(ν)(B).
Therefore, π_m is a coupling of ϕ(α) and ϕ(ν). We can then bound the Wasserstein distance between ϕ(α) and ϕ(ν) by:
W_tr,p(ϕ(α), ϕ(ν))^p
≤ E_(X,Y) ∼π_m[d_tr(X,Y)^p]
= E_(U,V) ∼π[d_tr(ϕ(U),ϕ(V))^p]
≤ E_(U,V) ∼π[d_tr(U,V)^p]= W_tr,p(α,ν)^p,
where for the last equality, we used <Ref>.
Notice that although the result is stated for simple projections, the proof never uses this assumption, so it holds for any tropical linear map.
As well as relating Wasserstein distances across spaces, <Ref> gives us the following corollary ensuring finite moments.
For any ν∈ P_p(n) and any ϕ∈ℳ_tr, the pushforward ϕ(ν) has finite pth moment; that is, ϕ(ν) ∈ P_p(m).
In order to prove that ϕ(ν) has finite pth moment, we only need to show that there exists a measure with finite pth moment at finite p-Wasserstein distance from ϕ(ν). Consider the Dirac measure on m concentrated at 0⃗. This can be seen as the pushforward of a Dirac measure on n supported at any point x⃗_0∈ F_0⃗. Let δ_x⃗ be the Dirac measure concentrated at x⃗. Then we can write
W_tr,p(δ_0⃗,ϕ(ν))=W_tr,p(ϕ(δ_x⃗_0),ϕ(ν))≤ W_tr,p(δ_x⃗_0,ν).
Here, notice the last term is finite since ν has finite moments by definition.
We now state and prove our main theorem: the equivalence of W_tr,p^- and W_tr,p^+.
Let p ∈ [1,∞). For all μ∈ P_p(m) and ν∈ P_p(n),
W_tr,p^-(μ,ν) = W_tr,p^+(μ,ν) < ∞.
The various results in <Ref> enable us to follow the same approach as in Theorem 2.2 of <cit.>.
We begin by proving W_tr,p^- ≤ W_tr,p^+. By <Ref>,
W_tr,p^+(μ,ν) = inf_ϕ∈ℳ_tr{ W_tr,p(α, ν) : μ = ϕ(α)}≥inf_ϕ∈ℳ_tr{ W_tr,p(μ, ϕ(ν))}= W_tr,p^-(μ,ν).
Note that each W_tr,p(μ, ϕ(ν)) is finite by <Ref>. Hence, W_tr,p^+(μ,ν) is finite.
In order to show that W_tr,p^+(μ,ν) ≤ W_tr,p^-(μ,ν), we prove that for all ϵ > 0 and any β_ϵ∈Φ_tr^-(ν, m) such that W_tr,p(μ, β_ϵ) ≤ W_tr,p^-(μ,ν) + ϵ, there exists α_ϵ∈Φ_tr^+(μ,n) such that W_tr,p(α_ϵ,ν) ≤ W_tr,p(μ,β_ϵ). Indeed, if our claim is true, then we have ∀ ϵ>0:
W_tr,p^+(μ,ν) ≤ W_tr,p(α_ϵ, ν) ≤ W_tr,p(μ,β_ϵ) ≤ W_tr,p^-(μ,ν) + ϵ.
Letting ϵ tend to 0 gives us the desired result.
Now let ϕ_M be the simple projection sending ν to β_ϵ, i.e., β_ϵ=ϕ_M(ν), and let π_m be an optimal coupling of β_ϵ and μ on (m)^2. Using the homeomorphism f_M as defined in <Ref>, we define the complementary projection to ϕ_M as ϕ_M^F_0⃗: n→ F_0⃗ given by proj_2 ∘ f_M. To define a coupling on n×n, we first consider m× F_0⃗×m and look to apply the gluing lemma to f_M(ν) and π_m.
<cit.>
Let (𝒳_i, μ_i), i=1,2,3 be Polish spaces. Let (X_1,X_2) be a coupling of (μ_1,μ_2) and (X_2,X_3) be a coupling of (μ_2,μ_3). Then there exists a coupling of random variables (Z_1,Z_2,Z_3) such that (Z_1,Z_2) ∼ (X_1,X_2) and (Z_2,Z_3) ∼ (X_2,X_3).
The space m is Polish, while F_0⃗ is a closed subset of a Polish space and is therefore also Polish. Hence there exist random vectors (X,Y,Z) on m× F_0⃗×m such that (X,Y) ∼ f_M(ν) and (X,Z) ∼π_m. We denote the distribution of (X,Y,Z) by π̃.
We now define ρ: m× F_0⃗×m→ (n)^2 given by
ρ(x⃗,y⃗,z⃗) → (f_M^-1(x⃗,y⃗), f_M^-1(z⃗, y⃗)).
This is a measurable map as f_M^-1 is continuous.
We then define a coupling π_n on (n)^2 and measure α_ϵ by
π_n = ρ(π̃) and α_ϵ = proj_2(π_n).
It remains to show that π_n is a coupling of ν,α_ϵ, which will then be used to bound W_tr,p(α_ϵ,ν), and that α_ϵ∈Φ^+(μ,n).
Since α_ϵ is the second marginal of π_n by definition, it suffices to compute the first marginal:
π_n(A, n) = π̃(ρ^-1(A,n))= π̃(f_M(A),m) = ν(A).
We now want to bound W_tr,p(α_ϵ,ν) using π_n. We begin by showing that for any (x⃗,y⃗)∈m× F_0⃗ and z⃗∈m, we have
^n(ρ(x⃗,y⃗,z⃗)) = ^m(x⃗, z⃗).
We obtain the lower bound on ^n(ρ(x⃗,y⃗,z⃗)) through the following argument:
^n(ρ(x⃗,y⃗,z⃗)) =^n(f_M^-1(x⃗,y⃗), f_M^-1(z⃗, y⃗)) ≥^m(ϕ_M(f_M^-1(x⃗, y⃗)), ϕ_M(f_M^-1(z⃗, y⃗))) = ^m(x⃗, z⃗),
in which we used <Ref> for the inequality, while the equalities come from the definitions of ρ and f_M, respectively.
Using the upper bound of (<ref>) gives the upper bound:
^n(ρ(x⃗,y⃗,z⃗)) =^n(f_M^-1(x⃗,y⃗), f_M^-1(z⃗, y⃗)) ≤^m(x⃗, z⃗) + ^n(ϕ_M^F_0⃗(f_M^-1(x⃗,y⃗)), ϕ_M^F_0⃗(f_M^-1(z⃗,y⃗)))
= ^m(x⃗, z⃗) + ^n(y⃗, y⃗)= ^m(x⃗, z⃗).
This gives us the following upper bound for W_tr,p(α_ϵ,ν):
W_tr,p(α_ϵ,ν)^p ≤_U,V∼π_n[^n(U,V)^p] = _X,Y,Z ∼π̃[^n(ρ(X,Y,Z))^p]= _X,Y,Z ∼π̃[^m(X,Z)^p]
= _X,Z ∼π_m[^m(X,Z)^p]= W_tr,p(μ,β_ϵ)^p.
Finally, it only remains to show that α_ϵ∈Φ^+(μ,n), i.e., ϕ_M(α_ϵ) = μ:
ϕ_M(α_ϵ)(A) = π_n(n, ϕ_M^-1(A))= π̃(ρ^-1(n, ϕ_M^-1(A)))= π̃(m, F_0⃗, A) = π_m(m,A) = μ(A).
Given this equivalence, from now on we denote W_tr,p^- and W_tr,p^+ by W_tr,p^±.
The distance W_tr,p^± is 1-Lipschitz continuous on (P_p(m), W_tr,p) × (P_p(n), W_tr,p).
|W_tr,p^±(μ_1,ν_1) - W_tr,p^±(μ_2,ν_2)| = |inf_ϕ∈ℳ_tr W_tr,p(μ_1,ϕ(ν_1)) - inf_ϕ∈ℳ_tr W_tr,p(μ_2,ϕ(ν_2))|
≤ |inf_ϕ∈ℳ_tr[ W_tr,p(μ_1,μ_2) + W_tr,p(μ_2,ϕ(ν_2)) + W_tr,p(ϕ(ν_2),ϕ(ν_1)) ] - inf_ϕ∈ℳ_tr W_tr,p(μ_2,ϕ(ν_2))|
≤ |W_tr,p(μ_1,μ_2) + W_tr,p(ϕ(ν_2),ϕ(ν_1)) + inf_ϕ∈ℳ_tr[W_tr,p(μ_2,ϕ(ν_2)) ] - inf_ϕ∈ℳ_tr W_tr,p(μ_2,ϕ(ν_2))|
≤ |W_tr,p(μ_1,μ_2) + W_tr,p(ν_2,ν_1)|
≤ W_tr,p(μ_1,μ_2) + W_tr,p(ν_2,ν_1).
This in turn gives the following implication for estimation of W_tr,p^±(μ,ν).
Let X={x⃗_1,…,x⃗_r}⊂m and Y={y⃗_1,…,y⃗_s}⊂n be sampled independently from μ∈ P_p(m) and ν∈ P_p(n), respectively. Consider μ̂_r and ν̂_s, the empirical measures of X and Y. Then Ŵ_tr,p^± := W_tr,p^±(μ̂_r, ν̂_s) is a consistent estimator for W_tr,p^±(μ,ν).
By <Ref> and the fact that Wasserstein distances respect weak convergence of measures <cit.>, we only require the weak convergence of μ̂_r to μ and of ν̂_s to ν as r and s tend to infinity. This holds by the law of large numbers, as n is a Polish space <cit.>.
We stress that this proof only relies on ϕ∈ℳ_tr being a simple projection, and not on ℳ_tr exhibiting any closure. We can therefore define Φ_tr^±, W_tr,p^± with respect to any subset of maps ℳ_tr' ⊂ℳ_tr and reach the same result.
However, it is important to note that the quantity W_tr,p^±(μ,ν) is not a rigorous metric. Indeed, we can have W_tr,p^±(μ,ν)=0 for μ≠ν if, say, there exists ϕ∈ℳ_tr such that μ=ϕ(ν).
§.§ Practical Implications: Phylogenetic Trees
We now return to the application of interest which motivated our work, which involved the tropical geometric interpretation of phylogenetic tree space. We have constructed Wasserstein distances between probability measures in tropical projective tori of different dimensions and proved that the projection and embedding distances are equivalent. We would like to be able to use this tool to compare probability distributions over phylogenetic tree spaces corresponding to different leaf sets, and hence living within different tropical projective tori. There are, however, important considerations specific to phylogenetic trees to take into account.
An immediate concern when studying phylogenetic tree space within the tropical projective torus is the fact that it is tropically non-convex <cit.>. This poses problems when studying the optimal transport problem within the same state space <cit.> as geodesics within the tropical projective torus do not necessarily remain within tree space which can lead to interpretive difficulties and limits the applicability of some optimal transport theory. However, the Wasserstein distances proposed here considers measures on state spaces of different dimensions; we do not expect continuous approaches such as the use of flows in <cit.> to apply, as it is not clear that deforming dimensionality could be a smooth process.
Therefore when looking to apply general optimal transport methods to tree data, special care should be taken to consider the implications of non-convexity and discrete dimensionality.
The concern of non-convexity may be dropped if we restrict to the case of ultrametric trees. These are special cases of phylogenetic trees that are equidistant: they are rooted phylogenetic trees where the distance from every leaf to its root is a constant. Their space forms a proper subspace of phylogenetic tree space <cit.>. The space of ultrametric trees is known to be tropically convex <cit.>, unlike the space of phylogenetic trees; this geometry has been recently used to characterise the combinatorial behaviour of tree topologies in the tropical setting <cit.>. A reasonable strategy to search for couplings between tree spaces is to start with this comparatively simpler geometry of ultrametric tree space.
Equidistant trees, despite their more restricted definition, are important in practice. Cancer evolution is known to be coalescent, which means that it is captured by an equidistant tree <cit.>. Therefore, in our motivating example of studying the distributional behaviour of sets of phylogenetic trees capturing cancer evolution before and after treatment, our tropical Wasserstein construction may be applied over the space of ultrametric trees.
§ DISCUSSION
In this manuscript, we studied the problem of comparing probability distributions over tropical projective tori with different dimensions and we proposed the first mapping to do this as a Wasserstein distance. Our construction was largely inspired by a recent solution to the same problem in Euclidean spaces. Our approach was to identify the key components of the Euclidean mapping, find tropical equivalents and establish properties of these tropical equivalents to be able to arrive at the same conclusion for tropical projective tori. Specifically, we studied tropical matrix maps and simple projections; interestingly, it turns out that only basic properties of such maps were required, such as surjectivity, in order to achieve the same result as in the Euclidean setting, even though the tropical projective torus can be seen as comparatively more restrictive (e.g., it is not a Hilbert space under the tropical norm <cit.>).
A numerical implementation validating the practical feasibility of our theoretical contribution is still missing from this manuscript; it is ongoing work and will be appended in the near future.
§ ACKNOWLEDGMENTS
The authors wish to thank Yue Ren and Felipe Rincón for helpful discussions.
Y.C. is funded by a President's PhD Scholarship at Imperial College London. D.T.'s PhD scholarship is funded by the IGSSE/TUM-GS via a Technical University of Munich–Imperial College London Joint Academy of Doctoral Studies (JADS) award (2021 cohort, PIs Drton/Monod), from which R.T. also receives partial support.
|
http://arxiv.org/abs/2307.02245v2 | 20230705123958 | Set Learning for Accurate and Calibrated Models | [
"Lukas Muttenthaler",
"Robert A. Vandermeulen",
"Qiuyi Zhang",
"Thomas Unterthiner",
"Klaus-Robert Müller"
] | cs.LG | [
"cs.LG",
"cs.CV",
"cs.IT",
"math.IT"
] |
Set Learning for Accurate and Calibrated Models
Lukas Muttenthaler Robert A. Vandermeulen Qiuyi Zhang Thomas Unterthiner Klaus-Robert Müller
August 1, 2023
===================================================================================
Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization. In this work, we propose a novel method to alleviate these problems that we call odd-k-out learning (OKO), which minimizes the cross-entropy error for sets rather than for single examples. This naturally allows the model to capture correlations across data examples and achieves both better accuracy and calibration, especially in limited training data and class-imbalanced regimes. Perhaps surprisingly, OKO often yields better calibration even when training with hard labels and dropping any additional calibration parameter tuning, such as temperature scaling. We provide theoretical justification, establishing that OKO naturally yields better calibration, and provide extensive experimental analyses that corroborate our theoretical findings. We emphasize that OKO is a general framework that can be easily adapted to many settings and the trained model can be applied to single examples at inference time, without introducing significant run-time overhead or architecture changes.
§ INTRODUCTION
In machine learning, a classifier is typically trained to minimize cross-entropy on individual examples rather than on sets of examples. By construction, this paradigm ignores information that may be found in correlations between sets of data. Therefore, we present odd-k-out learning (OKO), a new training framework based on learning from sets. It draws inspiration from the odd-one-out task which is commonly used in the cognitive sciences to infer notions of object similarity from human decision-making processes <cit.>. The odd-one-out task is a similarity task where subjects choose the most similar pair in a set of objects. We use an adapted version of that task to learn better model parameters while not making any changes to the architecture (see Fig. <ref>; a).
Standard classification training often yields overconfident classifiers that are not well-calibrated <cit.>. Classically, calibration has been treated as an orthogonal problem to accuracy. Miscalibration has been observed to severely worsen while accuracy improves, an interesting phenomenon attributed to over-parametrization, reduced regularization, and biased loss functions <cit.>. Even log-likelihood — a proper scoring rule — was accused of biasing network weights to better classification accuracy at the expense of well-calibrated probabilities <cit.>. Other scoring rules were proposed that are differentiable versions of calibrative measures but these approximations can be crude <cit.>. Thus, calibration methods are often treated as an afterthought, comprised of ad-hoc post-processing procedures that require an additional hold-out dataset and monotonically transform the output probabilities, usually without affecting the learned model parameters or accuracy.
Since calibration is inherently a performance metric on sets of data, we posit that a classifier has to be trained on sets of examples rather than on single examples to find model parameters that yield accurate calibration without the necessity for ad-hoc post-processing methods. We consider this particularly pressing in limited training data and class-imbalanced settings for which there exists surprisingly little work on calibration <cit.>.
Various techniques have been proposed to improve accuracy for imbalanced datasets <cit.>, which are typically based on non-uniform class sampling or reweighting of the loss function. However, neural nets can still easily overfit to the few training examples for the rare classes <cit.>. There is growing interest in the development of new techniques for handling class imbalance <cit.>. Such techniques are adapted variants of non-uniform sampling, often focusing exclusively on accuracy, and ignoring model calibration. However, techniques for mitigating the effects of imbalance on classification accuracy do not improve calibration for minority instances and standard calibration procedures tend to systematically underestimate the probabilities for minority class instances <cit.>. Moreover, it is widely known that direct undersampling of overrepresented classes modifies the training set distribution and introduces probabilistic biases <cit.>. Bayesian prior readjustments were introduced to manipulate posterior probabilities for ameliorating that issue <cit.>.
It is known that hard labels tend to induce extreme logit values and therefore cause overconfidence in model predictions <cit.>. Label smoothing has been proposed to improve model calibration by changing the cross-entropy targets rather than scaling the logits after training <cit.>. Label smoothing, in combination with batch balancing, achieves promising results on heavy-tail classification benchmarks, i.e. datasets that contain many classes with few samples and a few classes with many samples <cit.>. Yet, all these methods ignore the need for accuracy on the underrepresented classes, generally lack rigorous theoretical grounding, and require fine-tuned parameters for good empirical performance, such as the noise parameter for label smoothing, or the scaling parameter for temperature scaling for which additional held-out data is required.
In contrast to the popular philosophy of training for accuracy and then calibrating, we pose our main question: Can we provide a theoretically grounded training framework to learn network parameters that simultaneously obtain better accuracy and calibration, especially with class imbalance?
Contributions.
Indeed, we find that OKO achieves better calibration and uncertainty estimates than standard cross-entropy training. The benefits of OKO over vanilla cross-entropy are even more pronounced in limited training data settings and with heavy-tailed class distributions.[An implementation of OKO is publicly available on GitHub: <https://github.com/LukasMut/OKO>]
Empirical. First, through extensive experiments, we show that OKO often achieves better accuracy while being better or equally well calibrated than other methods for improving calibration, especially in low data regimes and for heavy-tailed class distribution settings (see Fig. <ref>; b). Second, OKO is a principled approach that changes the learning objective by presenting a model with sets of examples instead of individual examples, as calibration is inherently a metric on sets. As such, OKO does not introduce additional hyperparameters for post-training tuning or require careful warping of the label distribution via a noise parameter as in label smoothing (see Fig. <ref>). Third, surprisingly, this differently posed set learning problem results in smoothed logits that yield accurate calibration, although models are trained using hard labels. Fourth, we emphasize that OKO is extremely easy to plug into any model architecture, as it provides a general training framework that does not modify the model architecture and can therefore be applied to single examples at test time exactly like any network trained via single-example learning (see Fig. <ref>; a). The training complexity scales linearly in O(|𝒮|) where |𝒮| denotes the number of examples in a set and hence introduces little computational overhead during training. Last, in few-shot settings, OKO achieves compellingly low calibration and classification errors (see Fig. <ref>; b). Notably, OKO improves test accuracy for 10-shot MNIST by 8.59% over the best previously reported results <cit.>.
Theoretical. Through mathematical analyses, we show that OKO yields logit values that are not as strongly encouraged to diverge as in standard cross-entropy training. To provably demonstrate improved calibration, we develop a new scoring rule that measures a notion of excess confidence on a per-datapoint level. This scoring rule compares the predictive entropies and cross-entropies and for calibrated predictors, we show that our measure is consistent in that the average excess confidence is 0. By using this new scoring rule we demonstrate that OKO implicitly performs a form of entropic regularization, giving insight into how it prevents excess confidence in certain low entropy regions.
§ RELATED WORK
The odd-one-out task has been widely used in the cognitive sciences to infer notions of object similarity from human participants <cit.>, and first uses are slowly percolating into machine learning: <cit.> trained a self-supervised video understanding network by predicting which one out of three sequences was in reverse time order, <cit.> used comparisons between samples as weak supervision target. <cit.> use human odd-one-out choices to improve pretrained representations for few-shot learning and anomaly detection tasks.
However, none of these works investigated calibration or provided any theory for grounding learning with odd-one-out sets.
Improving calibration is of practical interest for many applications. However, deep neural networks often appear badly calibrated <cit.>. Even though this depends on the concrete architecture used, scaling up a model usually increases accuracy at the cost of calibration <cit.>. Many post-hoc approaches to increase calibration have been proposed, such as temperature scaling <cit.>, isotonic regression <cit.>, and Bayesian binning <cit.>, while improving calibration during training is a less explored topic.
Most related to our approach are techniques that use data augmentations that blend different inputs together <cit.> or use ensembles to combine representations <cit.>. However, none of these works examined calibration for sets of data.
The task of classifying sets of instances is known as multiple instance learning <cit.>. It is desirable to leverage the set structure, instead of simply using a concatenated representation of the examples in each set. A common approach is to pool representations, either by mean pooling, which is akin to OKO, or max pooling <cit.>. Other approaches include the use of permutation invariant networks <cit.> or attention mechanisms <cit.>. We are unaware of work that leverage set learning for improving calibration of standard cross-entropy training.
Learning from imbalanced data has a long history in machine learning <cit.>. Approaches usually center around resampling the training data <cit.> or modifying the loss function <cit.>, or combinations thereof <cit.>. Transfer learning <cit.>, self-supervised learning <cit.>, or ensembles of experts <cit.> can also be helpful in classifying rare classes.
Our method is a novel way to improve performance on imbalanced data while maintaining excellent calibration.
§ METHOD
Here we present the core contribution of this work, odd-k-out training (OKO). In OKO a model is simultaneously presented with multiple data points. At least two of these data points are from the same class, while the remaining k data points are each from a different class, i.e., the odd-k-outs, or odd classes. The objective is to predict the pair class. This forces a model to consider correlations between sets of examples that would otherwise be ignored in standard, single-example learning.
Notation More formally, we are interested in the classification setting on a training set 𝒟 = {(x_1,y_1), …, (x_n,y_n)}⊂ℝ^d× [C] of inputs x_i and labels y_i from C classes. The number of odd classes k is chosen such that k +1 ≤ C. We construct an OKO training example as follows:
Let 𝒳_c be the set of all training inputs, x_i, such that y_i = c. One first randomly selects a label y'∈ [C] and sets y_1'=y_2'=y' as the pair class. Next y'_3,…,y'_k+2 are sampled uniformly without replacement from [C]∖{y'} as the odd classes. Finally x'_1,…,x'_k+2 are selected uniformly at random from 𝒳_y'_1,…,𝒳_y'_k+2, while enforcing x'_1≠ x'_2. So x'_1 and x'_2 have the same class label, y', and x'_3,…,x'_k+2 all have unique class labels not equal to y'. A training example is then 𝒮 = ((x'_1, y_1'),…, (x'_k+2,y'_k+2) ). Let 𝒮_x := (x'_1,…,x'_k+2) and 𝒮_y := (y'_1,…,y'_k+2). Alg. <ref> describes the sampling process. The distribution of 𝒮 according to Alg. <ref> is 𝒜.
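As an illustration of this sampling step (a minimal sketch of ours; Alg. <ref> itself is not reproduced here), the following draws one OKO set with two examples from a pair class and one example from each of k distinct odd classes.

```python
import numpy as np

def sample_oko_set(X, y, k, rng):
    """Draw one OKO set: two examples of a pair class y' and one example from
    each of k distinct odd classes."""
    classes = np.unique(y)
    pair = rng.choice(classes)
    odd = rng.choice(classes[classes != pair], size=k, replace=False)
    pair_idx = rng.choice(np.flatnonzero(y == pair), size=2, replace=False)
    odd_idx = np.array([rng.choice(np.flatnonzero(y == c)) for c in odd])
    idx = np.concatenate([pair_idx, odd_idx])
    return X[idx], y[idx], pair          # (S_x, S_y, pair class y')

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 5, size=100)
S_x, S_y, pair = sample_oko_set(X, y, k=1, rng=rng)
print(S_y, "pair class:", pair)
```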
OKO objective For an ordered tuple of vectors, 𝒮_x := (x'_1,…,x'_k+2), we define
f_θ(𝒮_x) := ∑_i=1^k+2 f_θ(x'_i).
We define the following soft loss for a fixed set 𝒮:
ℓ_oko^soft(𝒮_y,f_θ(𝒮_x) ) := -((k+2)^-1∑_i=1^k+2 e_y'_i)^⊤log[ σ(f_θ(𝒮_x))],
where e_a∈ℝ^C is the indicator vector at index a and σ denotes the softmax function. The soft loss encourages a model to learn the distribution of all labels in the set 𝒮. One may also consider the case where the network should learn to only identify the most common class y', yielding the hard loss:
ℓ_oko^hard(𝒮_y, f_θ(𝒮_x) ) := -e_y'^⊤log[ σ(f_θ(𝒮_x))].
In preliminary experiments, we found the hard loss to always outperform the soft loss and have thus chosen not to include experimental results for the soft loss.
For OKO set sampling, 𝒮 = (𝒮_x, 𝒮_y) ∼𝒜, the empirical risk is
E_𝒮∼𝒜[ℓ_oko(𝒮_y,f_θ(𝒮_x) ) ].
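The hard OKO loss is cheap to implement on top of any classifier: logits are summed over the set members and compared against the pair class with ordinary cross-entropy. The sketch below (ours; the linear "network" is only a placeholder) computes the loss for a batch of sets.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def oko_hard_loss(member_logits, pair_labels):
    """member_logits: (batch, k+2, C) logits of the set members;
    pair_labels: (batch,) index of the pair class y'."""
    set_logits = member_logits.sum(axis=1)             # f_theta(S_x) = sum of member logits
    logp = log_softmax(set_logits)
    return -logp[np.arange(len(pair_labels)), pair_labels].mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))                            # toy linear classifier, C = 5 classes
S_x = rng.normal(size=(4, 3, 8))                       # batch of 4 sets, k = 1, 8 features
pair_labels = rng.integers(0, 5, size=4)
print(oko_hard_loss(S_x @ W, pair_labels))
```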
§ PROPERTIES OF OKO
Here, we first theoretically demonstrate that the OKO objective function encourages logit outputs to diverge less strongly than cross entropy. Second, we develop an understanding of calibration beyond the standard set-based notion and provide a measure of over- or under-confidence of each data point.
§.§ OKO is smoothing the loss landscape
It has been repeatedly observed that well-trained neural networks typically “overfit” the training data.
We assume that when a model has memorized its training data, its outputs will be identical for each input corresponding to the same class. In other words, its output on training data only depends on the training label: f_θ(x_i)≈ F(y_i):=F_y_i. Here we will consider a matrix F of logit outputs, such that F_i,j denotes the logit for class j when the true label is i.
One issue with standard cross entropy is that it strongly encourages the entries of F to diverge: for the cross-entropy risk ℛ(F) = E[-e_y^⊤ log(σ(f(x)) )] = E[-e_y^⊤ log(σ(F_y) )], we have ∂ℛ(F)/∂ F_i,i<0 for all F and i, and ∂ℛ(F)/∂ F_i,j>0 for all i≠ j. In other words: standard cross-entropy loss always encourages the logits of the true class to move towards ∞ and the logits of the wrong classes towards -∞. As a result, neural networks tend to be overconfident. In particular, if f_θ(x) predicts class ŷ then P(ŷ| x)<[σ(f_θ(x))]_ŷ.
A natural way to counteract this issue is via weight decay. Indeed, weight decay has been shown to improve calibration. However, this improvement comes at a cost of generalization, and modern networks therefore typically utilize little to no weight decay <cit.>. Thus, there is a desire to find ways to calibrate networks without using weight decay. Label smoothing is one potential solution to this since it encourages exp F_y to be proportional to e_y(1 - α) + α / C <cit.>. We include label smoothing as a competitor for our method in the experimental section.
Before deriving the risk functions for the hard and soft OKO losses, we must introduce further notation. Let 𝒴_i = {S | S∈ 2^[C], |S| = k, i∉ S}. 𝒴_i represents a potential set of odd labels when y'=i, and for Y ∈ 𝒴_i we write Y' for the label tuple obtained by prepending the pair class i twice to Y. For OKO we have
ℛ_soft(F) = -∑_i=1^C ∑_Y ∈𝒴_i∑_j=1^k+2log[ σ(∑_ℓ=1^k+2F_Y'_ℓ)]_Y'_j, ℛ_hard(F) = -∑_i=1^C ∑_Y ∈𝒴_i log[ σ(∑_ℓ=1^k+2F_Y'_ℓ) ]_i.
See Appx. <ref> for a derivation of these. The following proposition demonstrates that the OKO risks naturally discourage overfitting by constraining each entry of F when viewed individually.
For a fixed i,j, both ℛ_soft(F) and ℛ_hard(F) are convex and each admits a unique global minimum with respect to F_i,j.
This acts as an implicit form of smoothing and helps to keep the logits from diverging during training. Unlike label smoothing <cit.>, however, F is not encouraged to converge to a unique minimum. Optimizing the (hard) OKO risk still allows F to diverge if that's advantageous — as we demonstrate in the following proposition. For detailed proofs of Proposition <ref> and Proposition <ref> see Appx. <ref>.
There exists an initial value, F^(0), such that the sequence of points given by minimizing ℛ_hard with fixed step size gradient descent, F^(0),F^(1), F^(2),…, satisfies lim_a →∞F^(a)_i,i = ∞ and lim_a →∞F^(a)_i,j = -∞, for all i and j≠ i.
Hence, the OKO risk strikes a balance between the excessive overconfidence caused by standard cross-entropy and the inflexible calibration of fixed minima in label smoothing <cit.>.
§.§ OKO is well-calibrated due to entropic regularization
To begin with, we introduce an entropy-based measure of datapoint calibration and demonstrate empirically that it is a useful measure of calibration, along with theoretical justification for its utility. In a sense, our measure is inspired by the loglikelihood score and is a normalized scoring rule that provides localized probabilistic insight (for full motivation and details, see Appx. <ref>).
Let the relative cross entropy of distributions P, Q be RC(P, Q) = H(P, Q) - H(Q).
Since RC can be computed for each (y, ŷ) datapoint, it is a scoring rule. The relative cross entropy is very similar to KL divergence but with a different entropy term. However, unlike the KL divergence, it is not always non-negative. In fact, note that if an incorrect prediction is overconfident, then RC(y, ŷ) becomes arbitrarily large and positive, implying that RC captures some measure of excess confidence. Specifically, we can show that when the predictions are inaccurate, we have a provable deviation.
For hard labels y, if e_y^⊤ŷ≤ 1/ |C|, then RC(y, ŷ) ≥ 0.
Furthermore, we show that RC captures some notion of calibration when averaged across all data points. Specifically, when a predictor is perfectly calibrated, its average RC, a measure of excess confidence, should be 0. Note that RC is no longer proper due to this zero mean.
If ŷ is a predictor that is perfectly calibrated across the data distribution 𝒟, then the average excess confidence, as measured by relative cross entropy, is E_(x, y) ∼𝒟[RC(y, ŷ(x)) ] = 0.
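A minimal sketch (ours) of the excess-confidence score: RC(P, Q) = H(P, Q) - H(Q), evaluated per data point for hard labels; for a perfectly calibrated predictor the average of these scores over the data is approximately zero.

```python
import numpy as np

def relative_cross_entropy(y_onehot, q, eps=1e-12):
    """RC(P, Q) = H(P, Q) - H(Q) per data point; positive values indicate excess confidence."""
    q = np.clip(q, eps, 1.0)
    cross = -(y_onehot * np.log(q)).sum(axis=-1)       # H(P, Q)
    entropy = -(q * np.log(q)).sum(axis=-1)            # H(Q)
    return cross - entropy

y = np.array([[1.0, 0.0, 0.0]])                        # true class 0
q = np.array([[0.01, 0.98, 0.01]])                     # overconfident, wrong prediction
print(relative_cross_entropy(y, q))                    # large and positive
```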
We exploit our entropic framework of calibration to demonstrate that OKO implicitly performs regularization by preventing models from overfitting to low entropy regions, thereby inducing higher entropy in the predictions. To see this, imagine a dataset where the majority of the data is noisy. Specifically, most of the data share the same feature vector but have different class labels — one-feature-to-many-classes, and each class has 0<ϵ≪ 1 fraction of data points in a low entropy region in which the data points are clustered together by one class label — many-features-to-one-class.
In such a high-noise dataset
it is likely that the low entropy regions are mislabeled. If f_θ has high capacity and was fitted via non-regularized, standard regression, it would overfit to low entropy regions by classifying them with high certainty since those examples are well-separated from the noise. As mentioned in the previous section, this follows from the datapoint independence and even with label smoothing, such overfitting is only slightly alleviated <cit.>.
Let x_c be the common feature vector for class c, and for simplicity, we assume that every class has 1-ϵ proportion of its data points as x_c. That is, almost all data are noisy. When sampling sets of data, a (1 - ϵ)^k+2 portion of the sets will be (repeats of) the singleton set {x_c}. This implies that after training, f_θ(x_c) must be near-uniform for sufficiently small ϵ. Surprisingly, with OKO the learned model parameters extend the uncertainty in the noisy majority to all data points in the classes. We consider the first order terms: after ignoring all singleton sets {x_c}, the most common sets would be pair sets of the form {x_c, x_i}, where x_i ≠ x_c belongs to some class C_i. Since all classes have the same proportions, all pair sets are equally likely and our loss, restricted to these sets, is by symmetry
-∑_j ∈ [C] e_j^⊤∑_x_ilog(σ((k+1)f_θ(x_c) + f_θ(x_i))).
Now for a fixed x_i, the loss will try to minimize -∑_j e_j^⊤log(σ((k+1)f_θ(x_c) + f_θ(x_i))), implying that f_θ(x_i) will also have high entropy as j is summed through every label.
§ EXPERIMENTAL RESULTS
In this section, we present experimental results for both generalization performance and model calibration. In general, model calibration and generalization performance are orthogonal quantities that are difficult to optimize jointly. A classifier can show strong generalization performance while being poorly calibrated, and, vice versa, a classifier can be well-calibrated although its generalization performance is weak. Here, we are equally interested in both quantities.
Experimental details. For every experiment we present in this section, we use a simple randomly-initialized CNN for MNIST and FashionMNIST and ResNet18 and ResNet34 architectures <cit.> for CIFAR-10 and CIFAR-100 respectively. We use standard SGD with momentum and schedule the learning rate via cosine annealing. We select hyperparameters and train every model until convergence on a held-out validation set. To examine generalization performance and model calibration in low data regimes, we vary the number of training data points while holding the number of test data points fixed.
We report accuracy for the official test sets of MNIST, FashionMNIST, CIFAR-10, and CIFAR-100. We are specifically interested in heavy-tailed class distributions. Since heavy-tailed class distributions are a special rather than a standard classification setting, we report experimental results for both uniform and heavy-tailed class distributions during training. We consider heavy-tailed class distributions with probability mass p=0.9 distributed uniformly across three overrepresented classes and (1-p)=0.1 distributed across the remaining 7 or 97 underrepresented classes respectively. In ablation experiments, we have seen that although odd class examples are crucial OKO is not sensitive to the choice of k (see App. <ref>). Therefore, we set k in OKO to 1 for all experiments. Note that k=1 results in the computationally least expensive version of OKO. Since preliminary experiments have shown that generalization performance can be boosted by predicting the odd class using an additional classification head, in the following we report results for a version of OKO with k=1 where in addition to the pair class prediction (see Eq. <ref>) a model is trained to classify the odd class with a second classification head that is discarded at inference time.
For simplicity and fairness of comparing against single example methods, we set the maximum number of randomly sampled sets to the total number of training data points n_train in every setting. This is guaranteed to yield the same number of gradient updates as standard cross-entropy training.
Training methods. Alongside OKO, we consider five different baseline methods for comparing generalization performance and six different methods for investigating model calibration: 1.) Standard maximum-likelihood estimation (see Eq. <ref> in Appx. <ref>), 2.) Vanilla + label smoothing (LS), 3.) Cross-entropy error reweighting (see Eq. <ref> in Appx. <ref>), 4.) Batch-balancing (BB; see Alg. <ref> in Appx. <ref>), 5.) BB + LS, 6.) BB + temperature scaling (TS; τ=2.0). We consider label smoothing because it yields significantly better calibration than using hard labels for training neural nets and equivalent model calibration to temperature scaling <cit.>. We deliberately ignore temperature scaling for generalization performance analyses because it does not change the argmax of a classifier's predicted probability distribution after training and therefore yields the same test accuracy as BB.
Generalization. For both uniform and heavy-tailed class distribution settings, OKO either outperforms or performs on par with the best baseline approaches considered in our analyses across all four datasets (see Fig. <ref>).
We observe the most substantial improvements over the baseline approaches for both balanced and heavy-tailed MNIST, heavy-tailed FashionMNIST, and balanced CIFAR-10 and CIFAR-100.
For 10-shot MNIST OKO achieves an average test set accuracy of 87.62%, with the best random seed achieving 90.14%. This improves upon the previously reported best accuracy by 8.59% <cit.>. For 20-shot and 50-shot MNIST, OKO improves upon the previously reported best test set accuracies by 2.85% and 1.81% respectively <cit.>. OKO achieves the strongest average generalization performance across all datasets and class distribution settings (see Tab. <ref>). Improvements over the other training methods are most substantial for the heavy-tailed class distribution settings.
Calibration. We present different qualitative and quantitative results for model calibration. Although model calibration is an orthogonal quantity to generalization performance, it is equally important for the deployment of machine learning models.
Reliability. The reliability of a model can be measured by looking at a model's accuracy as a function of its confidence. An optimally calibrated classifier is a model whose predicted class is correct with probability p̂_θ(x), where p̂_θ(x) is the confidence of a model's prediction, i.e., optimal calibration occurs along the diagonal of a reliability diagram (see Fig. <ref>). OKO's reliability lies along the diagonal substantially more often than to any competing method. This is quantified by lower Expected Calibration Errors (see Fig. <ref>; <ref>) of OKO compared to the other methods. Its calibration is on par with BB + LS or BB + TS in some settings. In Fig. <ref>, we show reliability diagrams for MNIST, FashionMNIST, CIFAR-10, and CIFAR-100 averaged over all training settings using a uniform class distribution. Reliability diagrams for the heavy-tail training settings can be found in Appx. <ref>.
Uncertainty. Entropy is a measure of uncertainty and therefore can be used to quantify the confidence of a classifier's prediction. Here, we examine the distribution of entropies of the predicted probability distributions for individual test data points as a function of (in-)correct predictions.
[Figure: Reliability diagrams for balanced datasets. Confidence and accuracy scores were averaged over random seeds and the number of training data points. Dashed diagonal lines indicate perfect calibration.]
An optimally calibrated classifier has much density at entropy close to log(1) and little density at entropy close to log(C) for correct predictions, and, vice versa, small density at entropy close to log(1) and much density at entropy close to log(C) for incorrect predictions, irrespective of whether classes were in the tail or the mode of the training class distribution. In Fig. <ref>, we show the distribution of entropies of the models' probabilistic outputs partitioned into correct and incorrect predictions respectively for MNIST and FashionMNIST across all training settings with heavy-tailed class distributions. We observe that label smoothing does alleviate the overconfidence problem to some extent, but is worse calibrated than OKO. More entropy visualizations can be found in Appx. <ref>.
[Figure: Distribution of entropies of the predicted probability distributions for individual test data points across all heavy-tailed training settings, partitioned into correct and incorrect predictions.]
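For concreteness, the entropy analysis can be sketched in a few lines of Python (this is an illustration, not the exact evaluation code used for our experiments): we compute H(Q) for every test prediction and split the values by prediction correctness.

import numpy as np

def prediction_entropies(probas: np.ndarray, labels: np.ndarray, eps: float = 1e-12):
    """Return entropies of the predicted distributions, split into correct/incorrect subsets."""
    entropies = -np.sum(probas * np.log(probas + eps), axis=1)  # H(Q) per datapoint
    correct = probas.argmax(axis=1) == labels
    return entropies[correct], entropies[~correct]

# toy usage with random predictions over C = 10 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probas = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
h_correct, h_incorrect = prediction_entropies(probas, labels)
print(h_correct.mean(), h_incorrect.mean())  # both close to log(10) for random predictions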
ECE. ECE is a widely used scoring rule to measure a classifier's calibration. It is complementary to reliability diagrams (see Fig. <ref>) in that it quantifies the reliability of a model's confidence with a single score, whereas reliability diagrams qualitatively demonstrate model calibration. A high ECE indicates poor calibration, whereas a classifier that achieves a low ECE is generally well-calibrated. Aside from CIFAR-100, where batch-balancing in combination with label smoothing shows slightly lower ECEs than OKO, OKO achieves lower ECE scores than any other method across training settings (see Fig. <ref> in <ref> and Fig. <ref> in Appx. <ref>).
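A minimal sketch of the standard binned ECE estimate is shown below; the choice of 15 equal-width confidence bins is an assumption for illustration and may differ from the binning used to produce our figures.

import numpy as np

def expected_calibration_error(probas: np.ndarray, labels: np.ndarray, n_bins: int = 15) -> float:
    confidences = probas.max(axis=1)
    predictions = probas.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |avg confidence - avg accuracy| weighted by the bin's relative size
            ece += in_bin.mean() * abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
    return ece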
Relative cross entropy. Here, we demonstrate empirically that our novel entropy-based measure of datapoint calibration is a useful measure of calibration. Following Def. <ref> and Lemma <ref> in <ref>, we quantify the average excess confidence RC(y, ŷ(x)) by measuring the mean absolute difference (MAE) between H̅(P, Q) and H̅(Q) for the different number of training data point settings (see Fig. <ref>). We find that OKO achieves the lowest MAE for all balanced training settings and is among the top-2 or top-3 training methods with the lowest MAE for the heavy-tailed training settings (see Tab. <ref>).
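The following short sketch illustrates how the gap between H̅(P, Q) and H̅(Q) can be computed for a single setting (illustrative only; the reported MAE averages this quantity over the different numbers of training data points).

import numpy as np

def mean_ce_minus_entropy(probas: np.ndarray, labels: np.ndarray, eps: float = 1e-12) -> float:
    ce = -np.log(probas[np.arange(len(labels)), labels] + eps).mean()   # H̅(P, Q)
    entropy = -(probas * np.log(probas + eps)).sum(axis=1).mean()       # H̅(Q)
    return abs(ce - entropy)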
§ CONCLUSION
In standard empirical risk minimization, a classifier minimizes the risk on individual examples, thereby ignoring more complex correlations that may emerge when considering sets of data. Our proposed odd-k-out (OKO) framework addresses this caveat, inspired by the odd-one-out task used in the cognitive sciences <cit.>. Specifically, in OKO, a classifier learns from sets of data, leveraging the odd-one-out task rather than single example classification (see Fig. <ref>). We find that OKO yields well-calibrated model predictions, being better calibrated than or as well-calibrated as models that are either trained with label smoothing or whose logits are scaled with a temperature parameter found via grid search after training (see <ref>). This alleviates the ubiquitous calibration problem in ML in a more principled manner. In addition to being well-calibrated, OKO achieves better test set accuracy than all other training approaches considered in our analyses (see Tab. <ref>). Improvements are particularly pronounced for the heavy-tailed class distribution settings.
OKO is a theoretically grounded learning algorithm that modifies the training objective into a classification problem for sets of data. We show various consistency proofs and theoretical analyses proving that OKO yields smoother logits than standard cross-entropy, corroborated by empirical results. OKO does not require any grid search over an additional hyperparameter. While OKO is trained on sets, at test time it can be applied to single examples exactly like any model trained via a standard single example loss. The training complexity scales linearly in O(|𝒮|) where |𝒮| denotes the number of examples in a set and hence introduces little computational overhead during training.
One caveat of OKO is that classes are treated as semantically equally distant — similar to standard cross-entropy training. An objective function that better reflects global similarity structure may alleviate this limitation. In addition, we remark that we have developed OKO only for supervised learning with labeled data. It may thus be interesting to extend OKO to self-supervised learning.
We expect OKO to benefit areas that are in need of reliable aleatoric uncertainty estimates but suffer from a lack of training data — such as medicine, physics, or chemistry, where data collection is costly and class distributions are often heavy-tailed.
§ ACKNOWLEDGEMENTS
LM, RV, and KRM acknowledge funding from the German Federal Ministry of Education and Research (BMBF) for the grants BIFOLD22B and BIFOLD23B. LM acknowledges support through the Google Research Collabs Programme. We thank Rodolphe Jenatton for helpful comments on an earlier version of the manuscript.
§ OKO CLASSIFICATION HEAD
Here, we demonstrate how easily OKO can be applied to any neural network model in practice, irrespective of its architecture. OKO does not require an additional set of parameters. OKO essentially is just a sum over the logits in a set of inputs. Below we provide JAX code for the classification head that is used for OKO. During training, logits obtained from the classification head are summed across the inputs in a set. At inference time, the classification head is applied to single inputs just as any standard classification head.
Listing: OKO classification head implemented in JAX.
import flax.linen as nn
import jax.numpy as jnp
from einops import rearrange
from jax import vmap

Array = jnp.ndarray

class OKOHead(nn.Module):
    num_classes: int  # number of classes in the data
    k: int  # number of odd classes in a set

    def setup(self) -> None:
        self.clf = nn.Dense(self.num_classes)
        self.scard = self.k + 2  # set cardinality is number of odd classes + 2

    def set_sum(self, x: Array) -> Array:
        """Aggregate the logits across all examples in a set."""
        x = rearrange(x, "(b scard) d -> b scard d", scard=self.scard)
        dots = vmap(self.clf, in_axes=1, out_axes=1)(x)
        set_logits = dots.sum(axis=1)  # set sum
        return set_logits

    @nn.compact
    def __call__(self, x: Array, train: bool) -> Array:
        # x has shape [b * (k+2), d] during training and [b, d] at inference time
        if train:
            logits = self.set_sum(x)
        else:
            logits = self.clf(x)
        return logits
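For illustration, the snippet below sketches how the summed set logits can enter the OKO objective, assuming the soft-label variant in which the target is the average of the one-hot labels of all k+2 set members. The random set labels here do not enforce the pair/odd structure of real OKO sets, and the snippet is not the exact training code.

import jax
import jax.numpy as jnp

num_classes, k, batch, dim = 10, 1, 4, 32
key_x, key_y, key_w = jax.random.split(jax.random.PRNGKey(0), 3)

# a toy batch of `batch` sets, each holding k+2 examples of dimensionality `dim`
x = jax.random.normal(key_x, (batch * (k + 2), dim))
# random labels per set member; a real OKO set has two pair-class and k odd-class members
set_labels = jax.random.randint(key_y, (batch, k + 2), 0, num_classes)

# stand-in for OKOHead.set_sum: per-example logits summed within each set
w = jax.random.normal(key_w, (dim, num_classes))
set_logits = (x @ w).reshape(batch, k + 2, num_classes).sum(axis=1)

# soft target: (k+2)^-1 times the sum of the one-hot labels of the set members
soft_targets = jax.nn.one_hot(set_labels, num_classes).mean(axis=1)
loss = -(soft_targets * jax.nn.log_softmax(set_logits, axis=-1)).sum(axis=-1).mean()
print(loss)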
§ BACKGROUND
In the heavy-tailed class distribution setting, we are interested in the classification setting where a classifier has access to training data 𝒟 = {(x_1,y_1), …, (x_n,y_n)}⊂ℝ^d× [C] consisting of inputs x_i and labels y_i from C classes. For c∈[C], 𝒟_c⊂𝒟 will denote those samples with label c, so 𝒟 = ⋃_c=1^C𝒟_c. Class imbalance occurs when |𝒟_c| ≫ |𝒟_c'| for some c and c'. Let e_a∈ℝ^C be the indicator vector at index a and σ be the softmax function. Let μ_n be the uniform empirical measure of 𝒟.
Cross-entropy. The cross-entropy error between the original and the predicted labels is the following risk function when averaged over all n data points in the dataset 𝒟,
ℒ_vanilla(𝒟,θ) ≜ 𝔼_(X,Y)∼μ_n[ℓ(e_Y, σ(f_θ(X)))] = -1/n ∑_i=1^n e_y_i^T log[σ(f_θ(x_i))].
In the class imbalanced setting optimizing Eq. <ref> is known to produce classifiers that strongly favor common classes and are therefore likely to incorrectly label rare classes during test time. Here we describe a few methods designed to counteract this phenomenon, which we will use as competitors in our experiments.
§.§ Error re-weighting
To counteract the fact that there are fewer terms in the summation in Eq. <ref> for rare classes, one may simply weight those terms more greatly. Let μ_n,Y(·) ≜ μ_n(ℝ^d ×·) be the empirical distribution over the class labels. In error re-weighting, the terms in Eq. <ref> are weighted inversely to their class frequency, such that the contribution of a sample in class c to the error decreases with its number of unique examples n_c,
ℒ_re-weighted(𝒟,θ) ≜ 𝔼_(X,Y)∼μ_n[ℓ(e_Y, σ(f_θ(X)))/μ_n,Y(Y)] ∝ -1/n ∑_i=1^n a/n_y_i e_y_i^T log[σ(f_θ(x_i))],
where a ∈ℝ is a constant that brings the error term back onto the correct scale; since n_c ≫ 1 for all c ∈{1,…,C}, omitting it would lead to vanishing gradients or require an unusually large learning rate η.
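A short sketch contrasting the vanilla and inverse-frequency re-weighted cross-entropy errors is given below; the choice a = n/C for the rescaling constant is an assumption for illustration.

import numpy as np

def cross_entropy(probas, labels, class_weights=None, eps=1e-12):
    nll = -np.log(probas[np.arange(len(labels)), labels] + eps)
    if class_weights is not None:
        nll = nll * class_weights[labels]  # weight each term inversely to its class frequency
    return nll.mean()

rng = np.random.default_rng(0)
C, n = 10, 512
probas = rng.dirichlet(np.ones(C), size=n)
labels = rng.integers(0, C, size=n)
counts = np.bincount(labels, minlength=C)
a = n / C
weights = a / np.maximum(counts, 1)
print(cross_entropy(probas, labels), cross_entropy(probas, labels, class_weights=weights))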
§.§ Batch-Balancing
In normal batch construction, a sample for a batch is selected uniformly at random from the entire dataset 𝒟. Compensating for this by including additional copies of samples from rare classes in the training dataset, or during batch construction, is known as resampling or oversampling <cit.>.
The prototypical version of this selects batch samples by first selecting a class c∈ [C], uniformly at random, and then selecting a sample from 𝒟_c, uniformly at random. This causes a batch to contain an equal number of samples from each class on average. We term this batch-balancing. The works <cit.> use a slight modification of this where the stochasticity of the labels for each batch is removed so each batch contains an equal number of samples from each class.
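The prototypical sampler can be sketched as follows (an illustration of the sampling scheme, not the exact implementation behind Alg. <ref>).

import numpy as np

def balanced_batch(labels: np.ndarray, batch_size: int, rng: np.random.Generator) -> np.ndarray:
    """Return indices of a batch whose class distribution is uniform in expectation."""
    classes = np.unique(labels)
    per_class = {c: np.flatnonzero(labels == c) for c in classes}
    drawn_classes = rng.choice(classes, size=batch_size, replace=True)   # class first
    return np.array([rng.choice(per_class[c]) for c in drawn_classes])  # then a sample from it

rng = np.random.default_rng(0)
labels = rng.choice(10, size=1000, p=np.r_[np.full(3, 0.3), np.full(7, 0.1 / 7)])  # heavy-tailed
batch = balanced_batch(labels, batch_size=64, rng=rng)
print(np.bincount(labels[batch], minlength=10))  # roughly uniform class counts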
The pseudo-code for batch-balancing is described in Alg. <ref> in Appx. <ref>. Error re-weighting and batch-balancing are not specific to the cross entropy loss and may be applied to any loss that is the empirical expectation of some loss function. They represent two different ways of remedying class imbalance: one can weigh the rare examples more heavily or one can present the rare examples more often. The following proposition shows that the two methods are equivalent in expectation, although we find that batch-balancing always works better in practice (see <ref>). The sampling distribution for batch-balancing is denoted by μ̃_n.
Let ℬ be a batch selected uniformly at random and ℬ' be a batch selected using batch-balancing. Then there exists λ >0 such that λ 𝔼_ℬ[ℒ_re-weighted(ℬ,θ)] = 𝔼_ℬ'[ℒ_vanilla(ℬ',θ)] for all θ.
Let q ∼𝒰([n]). For the empirical class distribution, μ_n,Y(y_q) = |{i | y_i = y_q}|/n. So now we have
𝔼_(X,Y) ∼μ_n[ℓ(Y,f_θ(X))/μ_n,Y(Y)]
= n 𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q))/|{i | y_i = y_q}|]
= n ∑_j=1^C 𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q))/|{i | y_i = y_q}| | y_q=j] P_q∼𝒰([n])(y_q = j)
= n ∑_j=1^C 𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q))/|{i | y_i = j}| | y_q=j] · |{i | y_i = j}|/n
= ∑_j=1^C 𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q)) | y_q=j]
= C ∑_j=1^C 𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q)) | y_q=j] C^-1.
Let ℬ be the distribution over [n] according to batch-balancing. We know that 𝒰 and ℬ select uniformly, conditioned on class label, so
𝔼_q∼𝒰([n])[ℓ(y_q,f_θ(x_q)) | y_q=j] = 𝔼_q∼ℬ[ℓ(y_q,f_θ(x_q)) | y_q=j],
and that ℬ selects class labels uniformly,
P_q∼ℬ(y_q = j) = C^-1
for all j∈[C], so Eq. <ref> is equal to
C ∑_j=1^C 𝔼_q∼ℬ[ℓ(y_q,f_θ(x_q)) | y_q=j] P_q∼ℬ(y_q = j)
= C 𝔼_q∼ℬ[ℓ(y_q,f_θ(x_q))]
= C 𝔼_(X,Y)∼μ̃_n[ℓ(Y,f_θ(X))].
§ OKO OVERFIT RISK DERIVATION
Here we derive the expressions in Eq. <ref> in the main text. We have that
_∼𝒜[ℓ_oko^(_y,f_θ(_x) ) ] = ∑_i=1^k P(y' = i )_∼𝒜[ℓ_oko^(_y,f_θ(_x) )| y'=i ]
= ∑_i=1^k P(y' = i )
∑_Y ∈_i (P ({y'_3,..,y'_k+2}=Y| y'=i).
. ⋯×_∼𝒜[ℓ_oko^(_y,f_θ(_x) )| y'=i, {y'_3,..,y'_k+2} =Y ]),
noting that the parenthesis after the second summation in Eq. <ref> extends to the end of Eq. <ref>. Since y' is chosen uniformly at random P(y' = i ) are equal for all i and similarly (y'_3,…,y'_k+2) are chosen uniformly at random given y' and |_i| are all equal so P({y'_3,…,y'_k+2} =Y| y'=i) are equal for all Y and i, thus we can ignore these terms when optimizing over F. Letting Y' = (i,i,Y_1,…,Y_k) in the summation we have, that _∼𝒜[ℓ_oko^(_y,f_θ(_x) ) ] is proportional to,
∑_i=1^k ∑_Y ∈_i _∼𝒜[ℓ_oko^(_y,f_θ(_x) )| y'=i, {y'_3,…,y'_k+2} =Y ]
=∑_i=1^k ∑_Y ∈_i _∼𝒜[((k+2)^-1∑_i=1^k+2_y'_i)^T log[ (f_θ(_x))]| y'=i, {y'_3,…,y'_k+2} =Y ]
∝∑_i=1^k ∑_Y ∈_i _∼𝒜[(∑_i=1^k+2_Y'_i)^T log[ (∑_i=1^k+2F_Y'_i)] ]
= ∑_i=1^k ∑_Y ∈_i (∑_i=1^k+2_Y'_i)^T log[ (∑_i=1^k+2F_Y'_i)]
= ∑_i=1^k ∑_Y ∈_i∑_j=1^k+2log[ (∑_ℓ=1^k+2F_Y'_ℓ)]_Y'_j.
The derivation of the hard risk is similar, however the third summation in Eq. <ref> only contains the i term, for the single hard label.
§ PROOFS FROM <REF>
Before proving Proposition <ref> from the main text we will first introduce the following support lemma, which will be proven later.
Let N, N' ∈ℕ be positive. Let q_i, q'_i ∈ℝ^C, with q_i,1 = a_i x + b_i, q'_i,1 = a'_i x + b'_i (the remaining entries of q_i and q'_i are fixed and do not depend on x), with a_i > 0 and a'_i > 0, and let n_i be a sequence of N' elements in [C]∖{1}. Then
f(x) = ℒ + ℛ,  where  ℒ = -∑_i=1^N log(σ(q_i))_1  and  ℛ = -∑_i=1^N' log(σ(q'_i))_n_i,
is strictly convex and admits a unique minimizer (σ denotes the softmax).
This proof will be proven using surrogate indices i',j' in place of i,j in the proposition for F_i,j; it will be useful to be able to use i and j to refer to indices in other expressions.
Observe that simply relabeling the network outputs does not affect the risk so, for a permutation σ and G defined by G_i',j' = F_σ(i'),σ(j') we have that ℛ(F) = ℛ(G). Because of this we will simply let j'=1 for concreteness. We will begin with the case that assuming i'=j'. We will use LHS and RHS to denote left and right hand sides of equations and MT to refer to “main text,” for equations in the main text.
Case i'=j'=1:Note that each summand in MT LHS Eq. <ref> and MT RHS Eq. <ref> is either a constant with respect to F_1,1 or has the form of the left hand sum, ℒ, or right hand sum, ℛ, of Eq. <ref>, substituting in x← F_1,1. To finish the i'=j' case we will show that both MT LHS Eq. <ref> and MT RHS Eq. <ref> has a summand of the form in ℒ of one summand of the form of ℛ.
* For ℒ: consider i=1 and an arbitrary Y ∈_1 for MT RHS Eq. <ref>; for MT LHS Eq. <ref> we use the same values with j=1 since Y'_1 =1.
* For ℛ: for MT LHS Eq. <ref> we can let i=1, Y ∈_1 arbitrary, and j = 3 since Y'_3 ≠ 1 = j' in that case. For MT RHS Eq. <ref> we need only consider i=2 and some Y= _2 that contains an entry with 1.
From Lemma <ref> it follows that both ℛ_soft(F) and ℛ_hard(F) are strictly convex and contain a unique minimum when optimizing over F_i',i'.
Case i'≠ j'=1: This case proceeds in a similar fashion to the last.
* For ℒ: MT RHS Eq. <ref> we have i=1 and some Y ∈_1, so that Y'_3 =i'; for MT LHS Eq. <ref> we add j=1 so Y'_1=1.
* For ℛ: MT RHS Eq. <ref> we have i=2, Y∈_2 such that i' is in Y. We can use the same values for i and Y for MT LHS Eq. <ref>, with j=1 since Y_1 =2.
To show strict convexity, we will show that d^2f/dx^2 is strictly positive. First, we have that
df/dx = -∑_i=1^N a_i (1-(q_i )_1) - ∑_i=1^N' a_i' (-(q'_i )_1)
= ∑_i=1^N a_i ((q_i )_1 - 1) + ∑_i=1^N' a_i' ((q'_i )_1)
and thus
d^2f/dx^2 = ∑_i=1^N a_i^2 ( 1 -(q_i )_1)(q_i )_1 + ∑_i=1^N' a_i'^2 ((q'_i )_1)(1-(q'_i ))
which is clearly positive for all x.
To demonstrate the existence of a minimizer we will show that df/dx attains both positive and negative values as a function of x and, by the intermediate value theorem, df/dx must equal 0 somewhere. To see this observe that
lim_x →∞∑_i=1^N a_i ((q_i )_1 - 1) + ∑_i=1^N' a_i' ((q'_i )_1) = ∑_i=1^N' a_i' >0
lim_x → -∞∑_i=1^N a_i ((q_i )_1 - 1) + ∑_i=1^N' a_i' ((q'_i )_1) = ∑_i=1^N -a_i<0
,
which completes the proof.
Let ℱ⊂^C× C be the set of matrices F where F_i,i = a for all i and F_i,j = b for all i≠ j. Let F(a,b)∈ℱ be the matrix which contains a in the diagonal entries and b in all other entries. The chain rule tells us that
∂/∂ aℛ_hard(F(a,b) ) = ∑_i=1^C .∂/∂ F_i,i(F )|_F= F(a,b),
the sum of all the partial derivatives along the diagonal, and
∂/∂ b(F(a,b)) = ∑_i≠ j.∂/∂ F_i,j(F )|_F= F(a,b),
the sum of all partial derivatives for the entries off the diagonal.
Due to the invariance with respect to labeling, as in the proof of Proposition <ref>, for any F∈ℱ it follows that ∂/∂ F_i,i(F) = ∂/∂ F_i',i'(F) for all i and i', and ∂/∂ F_i,j(F) = ∂/∂ F_i',j'(F) for all i≠ j, i'≠ j'. Because of this ∇ℛ_hard(F(a,b)) will always lie in ℱ and thus a path following gradient descent starting in ℱ will always remain in ℱ.
Considering Eq. <ref> and Eq. <ref>, if we show that -∂/∂ aℛ_hard(F(a,b) )>0 and -∂/∂ bℛ_hard(F(a,b) )<0, for any a,b, it would follow that, for F∈ℱ, -∇(F) = F(a',b') for some a'>0 and b'<0. This would imply that gradient descent starting from an element of ℱ will diverge to F(∞, -∞), which would complete the proof. We will now proceed proving -∂/∂ aℛ_hard(F(a,b) )>0 and -∂/∂ bℛ_hard(F(a,b) )<0.
The risk expression applied to F(a,b) is equal to
ℛ_hard(F(a,b)) = -∑_i=1^k ∑_Y ∈_i log[ (∑_ℓ=1^k+2F(a,b)_Y'_ℓ) ]_i.
For concreteness we will consider the summand with i=1 and Y = [2,…,k+1 ] fixed, which implies Y' = [1,1,2,…,k+1 ] is also fixed. In this case we have that
∑_ℓ=1^k+2 F(a,b)_Y'_ℓ = (2a + kb,  a + (k+1)b, …, a + (k+1)b,  (k+2)b, …, (k+2)b)^⊤,
with k entries containing a + (k+1)b and C-(k+1) entries containing (k+2)b (note that C-(k+1) is nonnegative). Continuing with fixed i and Y' have that
log[σ(∑_ℓ=1^k+2 F(a,b)_Y'_ℓ)]_i = log(exp(2a+bk) / (exp(2a+bk) + k·exp(a+b(k+1)) + (C-k-1)·exp((k+2)b))).
We will define R(a,b) to be equal to Eq. <ref>.
Note that every summand in Eq. <ref> is equal to Eq. <ref>. Because of this we need only show that ∂/∂ a R(a,b)>0 and ∂/∂ b R(a,b)< 0 to finish the proof.
Differentiating with respect to a gives us
∂/∂ a R(a,b) =2 - 2exp(2a+bk) + kexp(a+b(k+1))/exp(2a+bk) + kexp(a+b(k+1)) + (C-k-1) exp((k+2)b).
Letting
Q(a,b):=exp(2a+bk) + kexp(a+b(k+1)) + (C-k-1) exp((k+2)b)
it follows that
∂/∂ a R(a,b) = 2 -exp(2a+bk) + kexp(a+b(k+1))/Q(a,b) - exp(2a+bk) /Q(a,b).
We have that
exp(2a+bk) + kexp(a+b(k+1))/Q(a,b)≤ 1
and exp(2a+bk) /Q(a,b) <1
so ∂/∂ a R(a,b)>0.
Differentiating with respect to b we get
∂/∂ b R(a,b)
= k -kexp(2a+bk) + k(k+1)exp(a+b(k+1)) +(k+2)(C-k-1) exp((k+2)b)/exp(2a+bk) + kexp(a+b(k+1)) + (C-k-1) exp((k+2)b).
Observe that
k< kexp(2a+bk) + k(k+1)exp(a+b(k+1)) +(k+2)(C-k-1) exp((k+2)b)/exp(2a+bk) + kexp(a+b(k+1)) + (C-k-1) exp((k+2)b)
so ∂/∂ b R(a,b)<0, which completes the proof.
§ CALIBRATION
Here, we provide more intuition about our new entropy-based measure of datapoint calibration. In a sense, our measure is a normalized scoring rule that provides localized probabilistic insight.
Scoring Rules and Likelihood. A scoring rule provides a local measure of fit or calibration given a predictive distribution and its corresponding labels. Specifically, for a datapoint (x_i,y_i), let ŷ_i be the predictive distribution of a model. Then, a scoring rule is any function that outputs a scalar evaluation of the goodness of fit: S(y_i,ŷ_i). Such a scoring rule is called proper if 𝔼_y∼Q[S(y,ŷ)] is maximized when ŷ = Q, meaning that when the predictive distribution is perfectly calibrated and equal to the true label distribution, the score is maximal.
A common proper scoring rule is the log likelihood. Recall that for distributions p and q over [C], the cross entropy is H(p, q) = -∑_i p_i log(q_i) and the entropy is H(p) = H(p,p) = -∑_i p_i log(p_i). In the classification setting, the negative log likelihood becomes the negative cross entropy S(y, ŷ) = -H(y, ŷ) = e_y^⊤ log(ŷ). Indeed, we see that for any predictive distribution ŷ and label distribution Q, 𝔼_y∼Q[S(y, ŷ)] = -H(Q, ŷ). If ŷ = Q and the label distribution is uniform, then this is equal to the negative of the maximum entropy, -log(|C|). When ŷ is perfectly calibrated, the average negative cross entropy will be equal to the negative entropy:
𝔼_y∼ŷ[e_y^⊤ log(ŷ)] = ∑_i ŷ_i log(ŷ_i) = -H(ŷ). It is easy to see that this is a proper rule since for any predictive distribution ŷ and label distribution Q, 𝔼_y∼Q[S(y, Q)] - 𝔼_y∼Q[S(y, ŷ)] = -H(Q) + H(Q, ŷ) = KL(Q || ŷ) ≥ 0, where the relative entropy or KL divergence is non-negative.
The definition of a proper scoring rule works well when the label distribution is static and continuous; when Q is a point mass, which is the case with hard labels, the KL divergence becomes H(Q, ŷ). However, in most settings, y is dependent on x and Q = 𝔼_(x,y)∼μ[e_y | x] and H(Q) is not known for each x. Furthermore, this gets even trickier when the predicted classes are non-uniform, but under-represented classes require better calibration.
Cross Entropy vs Entropy. Due to the limitations of proper scoring rules, such calibrative measures are not meaningful measures of over-confidence on a per-datapoint level. Specifically, we consider the question of whether each datapoint is calibrated and note that the scoring rule of H(y,ŷ) does not give an inherent notion of calibration. Instead, we motivate the following definition of relative cross entropy by noting that if y ∼ŷ, then H(y, ŷ) - H(ŷ) is a random variable with expectation 0.
We define relative cross entropy of distributions P, Q to be RC(P, Q) = H(P, Q) - H(Q).
Since RC can be computed for each (y, ŷ) datapoint, it is a scoring rule. The relative cross entropy is very similar to KL divergence but with a different entropy term. However, unlike the KL divergence, it is not always non-negative. In fact, note that if an incorrect prediction is overconfident, then RC(y, ŷ) →∞ is extremely positive, implying that RC captures some measure of excess confidence. Specifically, we can show when the predictions are inaccurate, we have a provable deviation.
For hard labels y, if e_y^⊤ŷ ≤ 1/|C|, then RC(y, ŷ) ≥ 0.
Let i be the index corresponding to y. Then, by definitions of corresponding entropy measures,
RC(y, ŷ) = -log (ŷ_i) - (∑_j -ŷ_jlog(ŷ_j))
= (ŷ_i- 1)log (ŷ_i) - (∑_j ≠ i -ŷ_jlog(ŷ_j))
≥ (ŷ_i - 1)log(ŷ_i) + (1 - ŷ_i) log((1-ŷ_i)/(|C|-1))
= (1 - ŷ_i)[ log((1-ŷ_i)/(|C|-1)) - log(ŷ_i) ].
The third line uses the principle that entropy is maximized when uniform. To see this let α = ∑_j≠ iŷ_j= 1 - ŷ_i and observe that the maximizing of the left hand side of the following equation admits the same argument as the subtrahend in Eq. <ref>,
α^-1∑_j ≠ i -ŷ_j(log(ŷ_j)- logα) = ∑_j ≠ i -(ŷ_j/α)log(ŷ_j/α),
and the right hand side is maximized with a uniform distribution.
The lemma follows since both factors in Eq. <ref> are nonnegative.
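As a quick empirical sanity check of the lemma (an illustration, not part of the proof), one can draw random predictive distributions and verify that RC is never negative whenever the true class receives probability at most 1/|C|:

import numpy as np

def relative_cross_entropy(y: int, y_hat: np.ndarray, eps: float = 1e-12) -> float:
    cross_entropy = -np.log(y_hat[y] + eps)
    entropy = -(y_hat * np.log(y_hat + eps)).sum()
    return cross_entropy - entropy  # RC(y, y_hat) = H(y, y_hat) - H(y_hat)

rng = np.random.default_rng(0)
C, violations = 10, 0
for _ in range(10_000):
    y_hat = rng.dirichlet(np.ones(C))
    y = int(rng.integers(C))
    if y_hat[y] <= 1.0 / C and relative_cross_entropy(y, y_hat) < -1e-9:
        violations += 1
print(violations)  # expected: 0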
Furthermore, we show that RC captures some notion of calibration when averaged across all datapoints. Specifically, when a predictor is perfectly calibrated, its average RC, a measure of excess confidence, should be 0. Note that RC is no longer proper due to this zero mean.
If ŷ is a predictor that is perfectly calibrated across the data distribution μ, then the average excess confidence, as measured by relative cross entropy, is 𝔼_(x,y)∼μ[RC(y, ŷ(x))] = 0.
This follows since, if ŷ is a perfectly calibrated predictor, then
𝔼_y∼ŷ[RC(y, ŷ)] = 𝔼_y∼ŷ[H(y, ŷ)] - H(ŷ) = 0.
§ ADDITIONAL EXPERIMENTAL RESULTS
In this section, we expand upon the results that we presented in <ref> and show additional experimental results and visualizations for model calibration. We start by presenting reliability diagrams for the heavy-tailed class distribution settings and continue with uncertainty and expected calibration error analyses.
§.§ Reliability
In Figure <ref>, we present reliability diagrams for heavy-tailed MNIST, FashionMNIST, CIFAR-10 and CIFAR-100. Both confidence and accuracy scores are averaged over five random seeds and across the number of training data points, similarly to Figure <ref> in <ref>. Note that optimal calibration occurs along the diagonal of a reliability diagram, highlighted by the blue dashed lines. For training regimes with heavy-tailed class distributions, either OKO, batch-balancing in combination with label smoothing, or batch-balancing in combination with posthoc temperature scaling achieves the best calibration on the held-out test set.
§.§ Uncertainty
In Figure <ref> we show the distribution of entropies of the predicted probability distributions for individual test data points across all heavy-tailed training settings for CIFAR-10 and CIFAR-100 respectively. In addition to the distribution of entropies of the predicted probability distribution for heavy-tailed training settings, in Figure <ref> we show similar distributions for individual test data points across all balanced training settings for all four datasets. We find OKO to be very certain — H(Q) is close to log(1) – for the majority of correct predictions and to be highly uncertain — H(Q) is close to log(C) – for the majority of incorrect predictions across all datasets. Batch-balancing in combination with either label smoothing or temperature scaling shows a similar distribution of entropies for the incorrect predictions, but is often too uncertain for the correct predictions, indicating random guesses rather than certain predictions for a significant number of predictions (see Fig. <ref>).
§.§ Expected Calibration Error (ECE)
Here we present ECE as a function of the number of data points used during training for both uniform and heavy-tailed class distributions for all four datasets considered in our analyses. We remark that for every method the ECE was computed on the official test set. For balanced MNIST, FashionMNIST, and CIFAR-10 as well as for heavy-tailed MNIST OKO achieves a lower ECE than any other training method. For the other training settings, OKO is either on par with label smoothing or achieves a slightly larger ECE compared to label smoothing. This suggests that OKO is either better calibrated than or equally well-calibrated as label smoothing. The results are most striking in the low data settings.
§.§ What's the matter with k?
In this section, we compare different values of k for generalization performance and calibration. Recall that k determines the number of examples coming from the odd classes — the classes that are different from the pair class — in a set 𝒮.
Odd class examples are crucial. Removing any odd class examples from the training sets — i.e., setting k to zero — decreases generalization performance and worsens calibration across almost all training settings (see Fig. <ref>, Fig. <ref>, and Fig. <ref>). Although odd class examples are ignored in the training labels, they are crucial for OKO's superior classification and calibration performance. Odd class examples appear to be particularly important for heavy-tailed training settings.
The value of k does not really matter. Concerning test set accuracy, we find that although odd class examples are crucial, OKO is fairly insensitive to the particular value of k apart from balanced CIFAR-10 and CIFAR-100 where k=1 achieves stronger generalization performance than larger values of k (see Fig. <ref>). However, this may be due to the additional classification head that we used for predicting the odd class in a set rather than a special advantage of k=1 over larger values of k.
We find larger values of k to result in worse ECEs for training settings with a uniform class distribution and similarly low or slightly lower ECEs for heavy-tailed class distribution settings (see Fig. <ref>). Similarly, we find the mean absolute difference (MAE) between the average cross-entropy errors, H̅(P, Q), and the average entropies, H̅(Q), on the test sets for different numbers of training data points to be slightly lower for k=1 than for larger values of k for uniform class distribution settings and equally low or slightly larger for k=1 compared to larger values of k for heavy-tailed class distribution settings (see Fig. <ref> for a visualization of this relationship and Tab. <ref> for a quantification thereof). Across all training settings the MAE between H̅(P, Q) and H̅(Q) is the largest and therefore the worst for k=0.
§ COMPUTE
We used a compute time of approximately 50 hours on a single Nvidia A100 GPU with 40GB VRAM for all CIFAR-10 and CIFAR-100 experiments using a ResNet-18 or ResNet-34 respectively and approximately 100 CPU-hours of 2.90GHz Intel Xeon Gold 6326 CPUs for MNIST and FashionMNIST experiments using the custom convolutional neural network architecture. The computations were performed on a standard, large-scale academic SLURM cluster.
Optimal Information Encoding in Chemical Reaction Networks
Austin Luchsinger, David Doty, David Soloveichik
http://arxiv.org/abs/2307.01939v1
Discrete chemical reaction networks formalize the interactions of molecular species in a well-mixed solution as stochastic events. Given their basic mathematical and physical role,
the computational power of chemical reaction networks has been widely studied in the molecular programming and distributed computing communities.
While for Turing-universal systems there is a universal measure of optimal information encoding based on Kolmogorov complexity,
chemical reaction networks are not Turing universal unless error and unbounded molecular counts are permitted.
Nonetheless, here we show that the optimal number of reactions to generate a specific count x ∈ℕ with probability 1 is asymptotically equal to a “space-aware” version of the Kolmogorov complexity of x,
defined as (x) = min_p{p / logp + log(((p))) : (p) = x }, where p is a program for universal Turing machine .
This version of Kolmogorov complexity incorporates not just the length of the shortest program for generating x, but also the space usage of that program.
Probability 1 computation is captured by the standard notion of stable computation from distributed computing, but we limit our consideration to chemical reaction networks
obeying a stronger constraint:
they “know when they are done” in the sense that they produce a special species to indicate completion.
As part of our results, we develop a module for encoding and unpacking any b bits of information via O(b/logb) reactions, which is information-theoretically optimal for incompressible information.
Our work provides one answer to the question of how succinctly chemical self-organization can be encoded—in the sense of generating precise molecular counts of species as the desired state.
§ INTRODUCTION
In potential biochemical, nanotechnological, or medical applications, synthetic chemical computation could allow for the re-programming of biological regulatory networks and the insertion of control modules where traditional electronic controllers are not feasible.
Understanding the design principles of chemical information processing may also shed light on the complex information processing that occurs in biological chemical interactions.
Discrete chemical reaction networks, also called stochastic chemical reaction networks, is a formal model of chemical kinetics in a well-mixed solution.
While in continuous chemical kinetics, continuous concentrations change in time governed by ordinary differential equations,
here the state consists of non-negative integer molecular counts of the species, and reaction events occur stochastically as a continuous time Markov process.
Closely related models include population protocols in distributed computing <cit.>, as well as models without stochastic kinetics such as Petri nets <cit.>, vector addition systems <cit.> and commutative semigroups <cit.>.
The model is particularly relevant when some species are present in small molecular counts,
which are not well-approximated by continuous concentrations <cit.>; this regime is germane for small volumes such as that of a cell, natural or artificial.
For the rest of this paper, the acronym CRNs (Chemical Reaction Networks) refers to the discrete model.
Typically the ensuing sequence of reactions can be predicted only stochastically since multiple reactions compete with each other.
Nonetheless certain behaviors are independent of the order in which reactions happen to occur.
Such probability 1 behavior is formalized using the notion of stable computation.
For example the reactions X_1 → 2Y and X_2 + Y →∅ compute the function f(x_1,x_2) = max(2x_1 - x_2, 0) regardless of the order in which reactions happen.
Below when we say that a CRN computes something, we mean it in the sense of stable computation.
It is known that stably computing CRNs are not Turing-universal <cit.>, but instead are limited to computing semilinear predicates and functions <cit.>.
However, the scaling of the computational power of CRNs with the number of reactions and species still lacks a tight and general characterization.
Prior approaches to answering the question of reaction or species complexity—in the equivalent language of population protocols—have focused largely on predicate computation and can be divided into two groups.
(We should point out that the literature makes the important distinction between population protocols with and without a “leader,” which is equivalent to starting with a single copy of a distinguished species in the initial state.
The prior results described here as well as our work correspond to protocols with a leader.)
The first line of work focuses on specific predicates—with the prototypical choice being the so-called “counting predicates” in which the task is to decide whether the count of the input species is at least some threshold x ∈ℕ <cit.>.
In particular, close upper and lower bounds were developed:
for infinitely many x, the predicate can be stably decided with O(loglog x) species <cit.>, and Ω((loglog x)^1/2 - ϵ) species are required <cit.>.
Other work has focused on the more general characterization of predicate computation.
It is well-known that semilinear predicates can be characterized in terms of Presburger arithmetic, the first-order theory of addition.
It was subsequently shown that a CRN can decide a semilinear predicate with the number of species scaling polynomially with the size of the corresponding Presburger formula <cit.>.
There are also provable tradeoffs between the speed of computation and the number of species (e.g., <cit.>).
We do not consider the time-complexity of CRNs further in this paper.
While the prior work described above involves stably deciding a counting predicate where the system recognizes if the count of some species is at least x, we investigate the problem of generating exactly x copies of a particular species Y, starting from a single copy of another species L.
This idea of generation is natural for engineers of these systems who may wish to prepare a particular configuration to be used in a downstream process, and captures a certain form of chemical self-organization.
(We note the conceptual connection to another type of self-organization: leader-election, in which we want to end up with exactly one molecule of a species, starting from many <cit.>.)
Our constructions can be adapted to deciding the counting predicates with only a constant number of additional reactions, giving a novel upper bound on the number of reactions (see Open Questions).
It is also worth noting that other complexity questions have been investigated for CRNs, such as “the size of the smallest chemical reaction network that approximates a desired distribution” <cit.>.
The goal of this paper is to connect the complexity of the most compact CRN for generating x to the well-known measures of the optimal “description length” of x.
Kolmogorov complexity, a widely recognized concept across various disciplines in computer science and information theory, serves as a universal, broadly accepted measure of description length <cit.>.
This notion quantifies the complexity of an object, such as a string or a number, by the length of the shortest program that produces it.
While the minimal number of species or reactions to generate count x cannot be connected to the canonical Kolmogorov complexity, we provide tight asymptotic bounds to a modification of Kolmogorov complexity (<Ref>).
As this quantity incorporates not only the length of the shortest program to produce x, but also the space (memory) usage of the program, it can be called “space-aware.”
Unlike the canonical Kolmogorov complexity, this space-aware quantity is computable.
Our quantity characterizes the CRN complexity of generating x in the range from Θ(loglog x) for highly “compressible” x to Θ(log x/loglog x) for “incompressible” x.
The module we develop for optimally encoding b bits of information with O(b/log b) reactions via a permutation code may be of independent interest.
The encoded information could be used for other purposes than for generating a desired amount of some species, which justifies a more general interpretation of our work as studying the encoding of information in CRNs.
§ PRELIMINARIES
We use notation from <cit.> and stable computation definitions from <cit.> for (discrete) chemical reaction networks.
Let ℕ denote the nonnegative integers.
For any finite set Λ (of species), we write ℕ^Λ to mean the set of functions f: Λ→ℕ.
Equivalently, ℕ^Λ can be interpreted as the set of vectors indexed by the elements of Λ,
and so c⃗∈ℕ^Λ specifies nonnegative integer counts for all elements of Λ.
For a⃗,b⃗∈ℕ^Λ, we write a⃗≤b⃗ if a⃗(i) ≤b⃗(i), ∀ i.
§.§ Chemical Reaction Networks
A chemical reaction network (CRN) C = (Λ, R) is defined by a finite set Λ of species and a finite set R of reactions, where each reaction is a pair ⟨r⃗,p⃗⟩∈ℕ^Λ×ℕ^Λ that denotes the reactant species consumed by the reaction and the product species generated by the reaction.
For example, given Λ = {A,B,C}, the reaction ⟨ (2,0,0), (0,1,1) ⟩ represents 2A → B + C.
Although the definition allows for more general stoichiometry, in this paper we only consider third-order reactions (with at most three reactants and three products).
For reversible reactions, we will use the notation A + B ⇌ C + D to mean A + B → C + D and C + D → A + B.
We say that the size of a CRN (denoted |C|) is simply the number of reactions in R.[When considering systems with third-order reactions it is clear that |R|^1/6 ≤ |Λ| ≤ 6|R|.]
A configuration c⃗∈ℕ^Λ of a CRN assigns integer counts to every species s ∈Λ.
When convenient, we use the notation {n_1S_1,n_2S_2,…,n_kS_k} to describe a configuration with n_i∈ℕ copies of species S_i, ∀ i ∈[1,k]. When using this notation, any species S_j∈Λ that is not listed is assumed to have a zero count (e.g., given Λ = {A,B,C}, the configuration {3A, 2B} has three copies of species A, two copies of species B, and zero of species C).
For two configurations a⃗,b⃗∈ℕ^Λ, we say b⃗ covers a⃗ if a⃗≤b⃗; in other words, for all species, b⃗ has at least as many copies as a⃗.
A reaction ⟨r⃗,p⃗⟩ is said to be applicable in configuration c⃗ if r⃗≤c⃗.
If the reaction ⟨r⃗,p⃗⟩ is applicable, it results in configuration c⃗' = c⃗ - r⃗ + p⃗ if it occurs, and we write c⃗→c⃗'.
If there exists a finite sequence of configurations such that c⃗→c⃗_1 →…→c⃗_n →d⃗, then we say that d⃗ is reachable from c⃗ and we write c⃗⇒d⃗.
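For illustration, the following short Python sketch (not part of the formal development) implements the configuration and reaction semantics just defined, using the example reactions from the introduction.

from collections import Counter

def applicable(config: Counter, reactants: Counter) -> bool:
    # a reaction <r, p> is applicable in c iff r <= c componentwise
    return all(config[s] >= n for s, n in reactants.items())

def apply_reaction(config: Counter, reactants: Counter, products: Counter) -> Counter:
    assert applicable(config, reactants)
    new_config = Counter(config)
    new_config.subtract(reactants)
    new_config.update(products)
    return +new_config  # drop zero-count species

# example: X1 -> 2Y and X2 + Y -> nothing, started from {2 X1, 1 X2}
config = Counter({"X1": 2, "X2": 1})
reactions = [(Counter({"X1": 1}), Counter({"Y": 2})),
             (Counter({"X2": 1, "Y": 1}), Counter())]
config = apply_reaction(config, *reactions[0])
print(config)  # counts: X1 -> 1, X2 -> 1, Y -> 2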
In keeping with established definitions for stable computation, we specify an output species Y ∈Λ and a leader species L ∈Λ for stable integer computation.[For stable function computation, an ordered subset of input species {X_1,X_2,…,X_n}⊂Λ is also included; however, stable integer computation would be something along the lines of f(1) = x, so a single copy of the leader species serves as the “input” here.]
We start from an initial configuration i⃗ = {1 L}.
A configuration c⃗ is output-stable if ∀d⃗ such that c⃗⇒d⃗, c⃗(Y) = d⃗(Y).
CRN C stably computes integer x if, from any configuration c⃗ that is reachable from input configuration i⃗, there is an output-stable configuration o⃗ reachable from c⃗ with o⃗(Y) = x.
Note that when considering systems with bounded state spaces like those discussed in this paper, stable computation is equivalent to probability 1 computing.
We also consider a much stronger constraint on CRN computation that specifies a special halting species.
A species H ∈Λ is a halting species if ∀c⃗ such that c⃗(H) ≥ 1,
c⃗ is output stable and ∀d⃗ where c⃗⇒d⃗, d⃗(H) ≥ 1.
We say that a CRN C haltingly computes an integer x if (1) C stably computes x and (2) C has a halting species H.
Intuitively, a halting CRN knows when it is done—the halting species can initiate some downstream process that is only meant to occur when the computation is finished.
§.§ Kolmogorov Complexity
A focus of this paper is the “optimal description” of integers.
As such, we often refer to the traditional notion of Kolmogorov complexity which we define here.
Let be a universal Turing machine.
The Kolmogorov complexity for an integer x is the value (x) = min{p : (p) = x}.
In other words, the Kolmogorov complexity of x is the size of the smallest Turing machine program p that outputs x.
This captures the descriptional complexity of x in the sense that a (smaller) description of x can be given to some machine that generates x based on the given description.
We use a “space-aware” variant of this quantity which we later connect to the size of the smallest CRN stably computing x:
(x) = min{p/logp + log(((p))) : (p) = x }.
Note that (x) does not refer to CRNs in any direct way, so the tight asymptotic connection (<Ref>) we establish may be surprising.
(x) is similar to the Kolmogorov complexity variant defined as (x) = min{p + log(((p))) : (p,i) = x[i]} by Allender, Kouckỳ, Ronneburger, and Roy <cit.>
in that it additively mixes program size with the log of the space usage.
There are two differences:
(1) The program size component of is p/logp rather than p.
The intuition is that a single chemical reaction can encode more than one bit of information; thus, a Turing machine program p can be converted to a “CRN program” with a number of reactions that is asymptotically smaller than the number of bits of p.
(2) (x) is defined with respect to programs that,
given index i as input,
output x[i], the i'th bit of x, while our (x) is defined with respect to programs that (taking no input) directly output all of x.
Thus (x) ≥logx, since the Turing machine must at least store the output integer, while (x) may be smaller in principle.
Due to the ability of efficient universal Turing machines to simulate each other efficiently,
(like ) is invariant within multiplicative constants to the choice of universal Turing machine , as long as is space-efficient.
Note that if were not robust to the choice of , it could hardly be a universal measure.
It is worth noting that unlike (x), (x) is computable.
To see this, one can enumerate all programs for universal Turing machine and run them in order from smallest to largest, stopping on the first machine that outputs x.
Since the space usage of (p) is included in , we can terminate executions as soon as they start using too much space.
This ensures that no execution will run forever, and so we are guaranteed to find the smallest p that outputs x.
DS: [For journal version]: Include something about max species count bounded by .
§.§ Overview
Here, we give a high level overview for the constructions and results presented in the subsequent sections of this paper.
Our constructions rely on the ability of CRNs to “efficiently” simulate space-bounded Turing machines (in terms of program size and space usage, not time) by “efficiently” simulating bounded-count register machines.
<Ref> details how to use a combination of previous results to achieve this.
The first half of the section describes how to construct a CRN to faithfully simulate a bounded-count register machine.
The second half of the section shows how to generate a large register machine bound (2^2^n) with very few species/reactions (n).
While the latter result is from previous work <cit.>, we translate their construction from a commutative semigroup presentation into a chemical reaction network.
In <Ref>, we present a method for constructing a CRN C_x which (optimally) haltingly computes n-bit integer x with C_x = (n/log n) by using a permutation code (<Ref>).
The idea of the construction is to generate a specified permutation and convert that permutation to a mapped target integer x.
This construction relies on the “efficient” bounded-count register machine and space-bounded Turing machine simulations.
We then show how to use our permutation construction to achieve an optimal encoding (within global multiplicative constants) for algorithmically compressible integers in <Ref>.
Here, we use our permutation code technique to “unpack” a Turing machine program that that outputs x, resulting in a CRN that haltingly computes x with () reactions (<Ref>).
Afterwards, we use a result from Künnemann et al. <cit.> to show that the size of our constructed CRN is within multiplicative constants of the optimal size of a CRN that stably computes x, denoted (x) (<Ref>).
The results of the paper culminate with us connecting (x) and in <Ref> (our main theorem), which is directly implied by the combination of <Ref> and <Ref>.
Lastly, we present some open questions for future work in <Ref>.
§ EFFICIENT SIMULATION OF BOUNDED REGISTER MACHINES
§.§ Register machines
A register machine is a finite state machine along with a fixed number of registers, each with non-negative integer counts.
The two fundamental instructions for a register machine are increment inc(r_i,s_j) and decrement dec(r_i,s_j,s_k).
The first instruction increments register r_i and transitions the machine to state s_j.
The second instruction decrements register r_i if it is non-zero and transitions the machine to state s_j, otherwise the machine just transitions to state s_k.
We also consider the more advanced instruction of copy(r_i,r_j,s_k), which adds the value of register r_i to register r_j, i.e., it is equivalent to the assignment statement r_j := r_j + r_i (note that the value is preserved in r_i).
It is clear that copy can be constructed with a constant number of register machine states.
In fact, register machines are known to be Turing-universal with three registers <cit.>.[Turing-universality has also been shown for machines with two registers, but only when a nontrivial encoding of the input/output is allowed <cit.>.]
In <cit.>, a simple CRN construction was shown to simulate register machines with some possibility of error (thus not directly compatible with stable computation).
The source of the error is due to the zero-checking in a dec instruction.
For the simulation, the CRN has a finite set of species (one for each register and one for each state of the register machine) and a finite set of reactions (one for each instruction in the register machine program).
Each inc(r_i,s_j) instruction corresponds to the reaction S_j' → R_i + S_j, and each dec(r_i,s_j,s_k) instruction to two reactions S_j' + R_i → S_j and S_j' → S_k.
In the chemical reaction network implementation of a dec instruction, the two reactions are competing for the state species S_j'.
While in general this is an unavoidable problem,
in the special case that the maximum value in our counters is bounded by a constant,
we can remedy this following the idea from <cit.> as follows.
Let's consider bounded registers that can contain a value no greater than b ∈ℕ.
For each register r_i, we can use two species R_i^A and R_i^I as “active” and “inactive” species for register r_i, respectively.
The idea is that the total sum of the counts of species R_i^A and R_i^I is always equal to b:
whenever one is consumed, the other is produced.
Now, an inc(r_i,s_j) instruction could be implemented with the reaction S_j' + R_i^I → R_i^A + S_j, and a dec(r_i,s_j, s_k) instruction could be implemented with the reactions S_j' + R_i^A → R_i^I + S_j and S_j' + bR_i^I → bR_i^I + S_k.
With this approach, register r_i has a zero count exactly when inactive species R_i^I has a count of b, and so we can zero-check without error.
Notice that this approach uses reactions with a large stoichiometric coefficient b.
At this point, there are two issues to be addressed:
(1) how to generate an initial b count of inactive species R_i^I, and
(2) how to transform the reactions into a series of third-order reactions
(avoiding the large stoichiometric coefficient b).
Let's first consider a very simple construction which addresses the above concerns, albeit suboptimally.
Suppose b = 2^n is a power of two.
To handle (1), we can initially produce count b of R_i^I from a single copy of A_1 using O(log b) species with reactions
A_1 → 2A_2
A_2 → 2A_3
⋮
A_n → R_i^I.
To handle (2), we can transform the decrement reactions into a series of n bimolecular reactions by adding reversible versions of the reactions from (1) and “counting down” to some unique zero count indicator species C_1:
R_i^I ⇌ C_n
2 C_n ⇌ C_n-1
⋮
2 C_2 ⇌ C_1.
Then C_1 is producible if and only if R_i^I had count ≥ b, so the reaction S_j + C_1 → S_k + C_1 implements the “jump to state k if r_i=0” portion of the dec(r_i,s_j,s_k) command.
This construction allows an error-free simulation of a register machine with counters with bound b exponential in the number of species.
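The following small Python sketch (illustrative only) simulates the doubling chain exactly as written above; with the final reaction A_n → R_i^I it turns a single A_1 into 2^(n-1) copies of R_i^I (one extra doubling step would give exactly 2^n, and the constant factor is irrelevant for the O(log b) species count).

def doubling_chain_count(n: int) -> int:
    counts = {f"A{i}": 0 for i in range(1, n + 1)}
    counts["R_I"] = 0
    counts["A1"] = 1                      # a single copy of A_1
    for i in range(1, n):                 # A_i -> 2 A_{i+1}
        counts[f"A{i + 1}"] += 2 * counts[f"A{i}"]
        counts[f"A{i}"] = 0
    counts["R_I"] += counts[f"A{n}"]      # A_n -> R_i^I
    counts[f"A{n}"] = 0
    return counts["R_I"]

print([doubling_chain_count(n) for n in range(1, 6)])  # [1, 2, 4, 8, 16]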
Now we discuss a more sophisticated construction, based on previous results <cit.>,
that achieves a counter bound b that is doubly exponential in the number of species.
§.§ Counting to 2^2^n with n species
The CRN constructions in this paper simulate bounded register machines in the manner discussed previously.
Since we are focused on reducing the size of our CRN, we want to do this simulation with as few species (reactions) as possible.
Fortunately, we can rely on established results from prior work to do this.
Lipton provided a construction for which the largest producible amount of a species is a doubly exponential count <cit.>.
However, this amount is only produced non-deterministically and (most) paths produce less.
Cardoza et al. went on to present a fully reversible system that can achieve this doubly exponential count as well <cit.>.
Further, their system is halting in the sense that a new species is produced precisely when the maximum amount is reached.
While Cardoza et al. <cit.> describe their construction in the language of commutative semigroup presentations,
we present a modified construction in <Ref> articulated as a CRN.
In the figure and in the text below, we use the “box” notation to indicate meta-reactions, which correspond to a set of reactions.
Note that in <Ref> we will see that the combined behavior of the reactions in a meta-reaction module faithfully implement the meta-reaction semantics.
By construction, the sets of reactions that meta-reactions expand to overlap, and we include only one copy of any repeated reaction.
Each layer of the construction introduces a constant number of additional reactions and species: 9 reactions ((1)–(9)) and 9 species (S_i^k, H_i^k, X_i^k, T1_i^k, T2_i^k, C1_i^k, C2_i^k, C3_i^k, C4_i^k) for each i ∈{1,2,3,4}.
The idea of the construction is to produce (or consume) a doubly exponential count of species X by recursively producing (or consuming) quadratically more X's than the previous layer.
Each species type performs a different role.
X is the counting
species to be generated or consumed.
S starts the process to generate/consume many molecules of species X.
T transforms different types of X species into one another.
H indicates that the generation/consumption process has completed.
C “cleans up” the H species.
Reaction (5) in the meta-reaction implementation
(which converts X_2^k-1 into X_3^k-1) changes based on i.
If i ∈{1,2}, then X_i appears as a product and is generated by this reaction.
If i ∈{3,4}, the X_i appears as a reactant and is consumed by this reaction.
A high level diagram of a layer-k meta-reaction is shown in <Ref>,
which is helpful in understanding the behavior of the system.
Let c⃗ be a configuration of CRN C given above.
We say c⃗ is well-led if c⃗(S_*^*) + c⃗(H_*^*) + c⃗(T_*^*) = 1 where the notation S_*^* denotes any species with label S, regardless of the subscript or superscript.
In other words, there is only a single leader in the system and it either has the label S, H, or T.
We call species S_*^*, H_*^*, and T_*^* leader species.
Every reaction has exactly one leader species as a reactant, and exactly one leader species as a product.
The following is immediate from <Ref>:
Let c⃗ be a well-led configuration of CRN C given above.
Then any configuration d⃗ such that c⃗d⃗ is also well-led.
In other words, the well-led property is forward invariant.
Informally, the observation above together with the well-led condition implies that we can reason about the meta-reactions in isolation,
without fear of cross-talk—because while one meta-reaction is executing, no reactions outside of it are applicable.
This allows us to inductively prove the main result of this section:
Consider the CRN implementing S_i^k → H_i^k + 2^2^k X_i^k.
For any n ∈,
let s⃗ = {n X_i^k, 1S_i^k} and h⃗ = {(2^2^k + n) X_i^k, 1H_i^k},
and let c⃗ be any configuration reachable from s⃗ or h⃗.
Then:
(a) Both s⃗ and h⃗ are reachable from c⃗.
(b) If c⃗ contains S_i^k then c⃗ = s⃗, and if c⃗ contains H_i^k then c⃗ = h⃗.
Consider the CRN implementing 2^2^k X_i^k + S_i^k → H_i^k.
For any n ∈,
let s⃗ = {(2^2^k + n)X_i^k, 1S_i^k} and h⃗ = {nX_i^k, 1H_i^k},
and let c⃗ be any configuration reachable from s⃗ or h⃗.
Then:
(a) Both s⃗ and h⃗ are reachable from c⃗.
(b) If c⃗ contains S_i^k then c⃗ = s⃗, and if c⃗ contains H_i^k then c⃗ = h⃗.
(Of <Ref> and <Ref>, Sketch)
Both lemmas are proven by induction over the layers of the construction.
The base case (k=1) can be checked by inspection.
Now assume the lemmas are true for k-1 layers, and we want to prove them true for k layers.
First we argue that the construction is correct if the k-1 layer meta-reactions are “atomic” and occur in one step.
As visualized in <Ref>,
the CRN
iterates through a nested loop process.
Each state transition (states 1^k through 6^k) is coupled to a conversion of the leader species;
the well-led condition ensures that the CRN is in exactly one state at any given time.
Each net forward traversal of the outer loop converts a X_1^k-1 to X_4^k-1,
and each forward traversal of the inner loop converts a X_2^k-1 to X_3^k-1.
Step 1^k makes 2^2^k-1 X_1^k-1, bounding the net maximum number of times that the outer loop can happen in the forward direction.
Step 3^k makes 2^2^k-1 X_2^k-1, bounding the net maximum number of times that the inner loop can happen in the forward direction for every net forward traversal of the outer loop.
This implies that reaction (5) can fire at most a net total 2^2^k times (producing at most a net total 2^2^k X_i^k's).
Step 5^k consumes 2^2^k-1 X_3^k-1, requiring the net total number of forward traversals of the inner loop to be at least 2^2^k-1 for every net forward traversal of the outer loop.
Step 6^k consumes 2^2^k-1 X_4^k-1, requiring the net total number of forward traversals of the outer loop to be at least 2^2^k-1.
This implies that reaction (5) must fire at least a net total 2^2^k times (producing at least a net total 2^2^k X_i^k's).
Thus, reaction (5) must be executed exactly 2^2^k times (producing exactly 2^2^k X_i^k's).
Notice that an excess of X_i^k (as allowed by the statement of the lemma) does not affect the net total number of times reaction (5) can fire (forward or backward) since X_2^k-1 and X_3^k-1 are the limiting factors.
Now we need to make sure that this behavior is preserved once the meta-reactions are expanded to their constituent reactions.
Each meta-reaction i in <Ref> expands to some set R_i of reactions.
First we note that for each meta-reaction, R_i overlaps with reactions not in R_i only over species S_i^k-1, H_i^k-1, and X_i^k-1.
We are not worried about cross-talk in species S_i^k-1 and H_i^k-1 because of the well-led property.
We may still be concerned, however, that external consumption of X_i^k-1 might somehow interfere with the meta-reaction.
Luckily, the well-led property and <Ref> enforce that unless we have S_i^k-1 or H_i^k-1 (i.e., we are at the beginning or end of the meta-reaction), it is never the case that a reaction in R_i and a reaction not in R_i are applicable at the same time.
Thus nothing outside the meta-reaction can change X_i^k-1 while the meta-reaction is executing.
Note that although we chose to write <Ref> and <Ref> separately, we could have just one kind of meta-reaction (production or consumption) and obtain the other kind by running the meta-reaction backward switching the roles of S and H.
We include the two different versions because it is conceptually easier to just think about the intended execution being in the forward direction.
§ OPTIMAL ENCODING
§.§ Encoding Information in CRNs
In this section we discuss the encoding of an integer in a chemical reaction network.
In the same sense as Kolmogorov-optimal programs for Turing machines, we consider a similar measure of optimality for chemical reaction networks.
In particular, we ask the question, “what is the smallest chemical reaction network that can produce a desired count of a particular chemical species?”
A simple construction shows that x copies of some species can be produced using (log x) reactions.
The idea is to have a reaction for each bit b_i of the binary expansion of x, and produce a copy of your output species in each reaction where b_i = 1.
More concretely, consider log x reactions of the form X_i → 2X_i+1 and X_i → 2X_i+1 + Y.
For each bit b_i in the binary expansion of x, use the first reaction if b_i = 0 and use the second reaction if b_i = 1.
Each species X_i will have a count equal to 2^i, and species Y will have a count equal to the sum of the powers of two that were chosen (which is x).
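To make the construction concrete, the following Python sketch (our own illustration, not part of the formal model) builds these reactions for a given x and fires them layer by layer, tracking species counts in a dictionary.

def binary_expansion_crn(x):
    """One reaction per bit of x: X_i -> 2 X_{i+1} (+ Y when bit i of x is 1)."""
    bits = bin(x)[2:][::-1]                 # bit i of x, least significant first
    reactions = []
    for i, b in enumerate(bits):
        products = [f"X{i+1}", f"X{i+1}"]
        if b == "1":
            products.append("Y")            # one Y per firing when b_i = 1
        reactions.append((f"X{i}", products))
    return reactions

def run_to_completion(reactions, start):
    """Fire the layered reactions exhaustively, starting from `start`."""
    counts = dict(start)
    for reactant, products in reactions:    # layer i only consumes species X_i
        n = counts.pop(reactant, 0)         # fire the reaction n times
        for p in products:
            counts[p] = counts.get(p, 0) + n
    return counts

x = 45                                      # 101101 in binary
final = run_to_completion(binary_expansion_crn(x), {"X0": 1})
assert final["Y"] == x                      # exactly x copies of Y are produced

For x = 45, six reactions are built and the final configuration contains exactly 45 copies of Y.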
While this simple construction generates x with log x reactions, it is not immediately clear how to improve upon it.
Our first result shows how to construct a CRN that can generate x copies of an output species (from an initial configuration with only a single molecule) yet uses only (log x/loglog x) many reactions.
This matches the lower bound dictated by Kolmogorov complexity (see end of <Ref>),
which suggests that the full power of CRNs is really being used in our construction.
Our construction is achieved through the simulation of (space-bounded) Turing machines via the simulation of (space-bounded) register machines.
A key aspect in this process is the ability of CRNs to use the previously discussed recursive counting technique to count very high with very few species (counting to 2^2^k with k species).
§.§ Our Construction
Now, we present an encoding scheme to produce count x of a particular species with (n/log n) CRN reactions, where n = log x.
In the simple CRN given in <Ref>,
each reaction encodes a single bit of x.
In the optimized construction with k reactions,
each reaction will encode log k bits instead.[Adleman et al. <cit.> provided a clever base conversion trick for tile assembly programs. Here, we employ a permutation encoding trick to yield the same effect.]
A sketch of our construction is as follows:
Sketch:
We start with a CRN in configuration c⃗_1 = {1 L} and create a configuration c⃗_2 = {1 S_i, m_1R_1, m_2R_2, …, m_kR_k} that represents a particular permutation of k distinct elements.
We encode this permutation in the count of a species I, transforming configuration c⃗_2 into a configuration c⃗_3 = {1 S_j, m I}.
The count of species I can be interpreted as the input to a Turing machine, so we simulate a Turing machine that maps the permutation to a unique integer via Lehmer code/factorial number system <cit.> (by choosing the right value of k, we can ensure there are sufficiently many permutations to let us map to x).
This Turing machine simulation transforms configuration c⃗_3 into configuration c⃗_4 = {1 H, x Y}.
For any n ∈ and any n-bit integer x, there exists a chemical reaction network C_x that haltingly computes x from initial configuration {1 L} with C_x = (n/log n).
First, we describe how to construct CRN C_x that haltingly computes x from starting configuration {1 L}, then we describe the size of C_x.
Let k = ⌈ n / log n ⌉.
We will map a permutation of k distinct elements to the integer x, and this value of k ensures there are at least x permutations.
We break the construction into three primary steps.
Step 1: {1 L} → {1 S_i, m_1R_1, m_2R_2, …, m_kR_k}.
We can transform {1 L} into a configuration {1 S_i,m_1R_1, m_2R_2, …, m_kR_k} where (m_1, m_2, …, m_n) is a permutation of the integers 1 through k.
This can be achieved with k registers and 2k register machine states.
For example, to set the permutation (2,4,3,1), use instructions
s_0: inc(r_2,s_1)
s_1: copy(r_2, r_4, s_2)
s_2: inc(r_2,s_3)
s_3: copy(r_2, r_1, s_4)
s_4: inc(r_2,s_5)
s_5: copy(r_2, r_3, s_6)
s_6: inc(r_2,s_7)
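As a sanity check, this straight-line program can be simulated directly. The Python sketch below treats inc and copy as primitive operations, as the text does (in the actual register machine, copy is a macro built from decrements and increments).

def simulate(instructions, registers):
    """Run a straight-line inc/copy program on a register dictionary."""
    for op, *args in instructions:
        if op == "inc":
            registers[args[0]] += 1
        elif op == "copy":                   # copy(src, dst): dst gets src's value
            src, dst = args
            registers[dst] = registers[src]
    return registers

# The program from the text, setting the permutation (2, 4, 3, 1).
program = [("inc", "r2"), ("copy", "r2", "r4"),
           ("inc", "r2"), ("copy", "r2", "r1"),
           ("inc", "r2"), ("copy", "r2", "r3"),
           ("inc", "r2")]
regs = simulate(program, {"r1": 0, "r2": 0, "r3": 0, "r4": 0})
assert [regs[r] for r in ("r1", "r2", "r3", "r4")] == [2, 4, 3, 1]

The final register contents are r_1 = 2, r_2 = 4, r_3 = 3, r_4 = 1, i.e., the permutation (2,4,3,1).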
Step 2: {1 S_i, m_1R_1, m_2R_2, …, m_kR_k} → {1 S_j, m I}.
Now, we can transform configuration {1 S_i, m_1R_1, m_2R_2, …, m_kR_k} into configuration {1 S_j, mI}, encoding the permutation as the integer count m of species I.
For each register r_i for i from 1 to k in order, we can decrement the register to 0.
On each decrement, we double the count of I and then add 1 to it, i.e., appending a 1 to m's binary expansion.
After the register reaches 0, before moving to the next register, we double the count of I again, appending a 0 to m's binary expansion.
For example, if the permutation configuration was {3 R_1, 1 R_2, 2 R_3}, the resulting count of I in binary would be 111 0 1 0 11, where the runs of 1s of lengths 3, 1, and 2 come from registers r_1, r_2, and r_3, respectively.
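A minimal Python sketch of this encoding, assuming the register counts are given directly as a list rather than read off a register machine configuration, is the following.

def encode_permutation(counts):
    """Encode register counts (a permutation of 1..k) into a single integer m:
    append one '1' per unit in each register, with a '0' between registers."""
    m = 0
    for idx, c in enumerate(counts):
        for _ in range(c):
            m = 2 * m + 1                    # double and add 1: append a '1'
        if idx < len(counts) - 1:
            m = 2 * m                        # double: append the '0' separator
    return m

# The example from the text: {3 R_1, 1 R_2, 2 R_3} encodes to 111 0 1 0 11 in binary.
assert encode_permutation([3, 1, 2]) == 0b11101011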
Step 3: {1 S_j, m I} → {1 H, x Y}.
At this point, we can consider the value in register I, expressed as a binary string,
to be the input tape content for a Turing machine that maps the permutation to the integer x using a
standard Lehmer code/factorial number system technique <cit.>.
The output of the Turing machine will be the count of Y in configuration c⃗ at the end of the computation (with c⃗(Y) = x).
Our register machine will have a state species that corresponds to the halted state of the Turing machine—and such a species serves as our halting species H.
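The map computed in Steps 2 and 3 can be sketched in Python as follows. The decoding of m back into register counts and the Lehmer-code ranking are standard; the exact input convention of the simulated Turing machine is our own illustrative assumption.

from math import factorial

def decode_counts(m, k):
    """Invert the Step 2 encoding: recover the k register counts from m."""
    runs = bin(m)[2:].split("0")             # k runs of 1s separated by single 0s
    assert len(runs) == k
    return [len(run) for run in runs]

def lehmer_rank(perm):
    """Rank of a permutation of 1..k in the factorial number system."""
    k = len(perm)
    rank = 0
    for i in range(k):
        smaller_after = sum(1 for j in range(i + 1, k) if perm[j] < perm[i])
        rank += smaller_after * factorial(k - 1 - i)
    return rank

perm = decode_counts(0b11101011, 3)          # the permutation (3, 1, 2) from above
assert perm == [3, 1, 2] and lehmer_rank(perm) == 4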
Now we argue that the size of the CRN satisfies C_x = (n/log n), i.e., that it uses (k) reactions.
The register machine program from Step 1 generates the permutation using k registers and 2k register machine states, which results in (k) CRN reactions.
The register machine program from Step 2 encodes the permutation as a binary number in register I using (k) registers and (k) register machine states, which also results in (k) CRN reactions.
Even a naive algorithm for the Turing machine from Step 3 maps the permutation to an integer using (k^2log k) space ((k^2) bits to store the initial permutation, (klog k) bits to store the Lehmer code, (k^2log k) bits to store factorial bases 1! through k!, and (klog k) bits to store the integer x).
Recall, a Turing machine using space (k^2log k) can be simulated by a register machine with count bound (2^k^2log k) on its registers.
This can in turn be simulated by a CRN via the construction of <Ref> with ( loglog 2^k^2log k ) = (log k) reactions.
Thus (k) reactions suffices to simulate the register machine instructions as well as the bounded counters for our register machine to simulate this Turing machine.
The above construction is optimal for almost all integers x in the following sense.
Any CRN of C reactions, each with (1) reactants and products, can be encoded in a string of length (ClogC).
Given an encoded CRN stably computing an integer x,
a fixed-size program can simulate it and return x.
Thus (x) ≤(ClogC).
The pigeonhole principle argument for Kolmogorov complexity implies that (x) < ⌈log x ⌉ - Δ for at most a (1/2)^Δ fraction of all x <cit.>.
Together these observations imply that there is a c such that for most x there does not exist a CRN C of size smaller than c n/log n that stably computes x.
§ ALGORITHMIC COMPRESSION
The construction in Section <ref> is optimal for incompressible integers (integers x where K(x) ≈log x, which is the case for “most” integers).
Now we extend the construction to be optimal within global multiplicative constants for all integers.
For algorithmically compressible integers x, there exists a program p such that (p) = x and the description of p is shorter than the binary representation of x.
We discuss the construction in Section <ref> and we argue optimality of our construction in Section <ref>.
§.§ Our Construction
We now show how to fully exploit the encoding scheme and doubly exponential counter from Section <ref> to achieve an optimal result for all integers.
A sketch of our construction is as follows:
Sketch: Given a program p for a fixed Universal Turing Machine such that (p) = x, we construct a CRN that simulates running p on the universal machine via a register machine simulation.
The idea is to use (p / logp) reactions to encode p, and to use (log(((p))) reactions for a counter machine simulation of (p).
For any integer x, there exists a CRN C_x that haltingly computes x from initial configuration {1 L} with
C_x = ((x)).
Let p be a program for a fixed Universal Turing Machine such that (p)=x.
We encode p in the manner provided by <Ref> using (p/logp) reactions.
This results in p count of species Y (specifically, configuration {1 H, p Y}).
Since haltingly-computing CRNs are composable via concatenation <cit.>, we can consider {1 H, p Y} to be taken as the input for another system which simulates running (p) via the previously described register machine method with bounded register count (<Ref>).
Again, we need enough species/reactions to ensure our bounded registers can count high enough.
The registers must be able to store an integer that represents the current configuration of the Turing machine being simulated (at most this is 2^((p))).
Since we have doubly exponential counters, an additional log(((p))) species are needed to do this.
So, the total size of our CRN is (p/logp + log(((p)))) and by choosing the program p that minimizes this expression, we see C_x = ((x)).
It is interesting to note the appearance of our “space-aware” version of Kolmogorov complexity.
Importantly, this notion is different from space-bounded Kolmogorov complexity that puts a limit on the space usage of the program that outputs x.
This alternate version allows a trade-off between compact program descriptions and the space required to run those programs, which seems natural for systems like CRNs.
Perhaps it is surprising
that this (computable) measure of complexity shows up here, and at first it may seem like log of this space usage is a bit arbitrary, but we will show that this is indeed optimal (within global multiplicative constants) for CRNs.
§.§ Optimality
Here, we argue that size of CRN C_x from <Ref> is optimal.
We begin by giving a definition for the size of the optimal CRN that haltingly computes an integer x.
For any integer x, define (x) = min{C : CRN C haltingly computes x}. In other words, (x) is the size of the smallest CRN that haltingly computes x.
Our argument relies on a Turing machine that solves the coverability problem for CRNs.
We give the definition for this problem in <Ref> and discuss its space complexity in <Ref>.
Given a CRN C, initial configuration s⃗, and target configuration u⃗, does there exist a configuration t⃗≥u⃗ such that s⃗t⃗?
Using the natural notion of problem size n for the specification of a coverability problem,
Lipton provided a 2^Ω(√(n)) space lower bound for coverability <cit.>, which was later improved to 2^Ω(n) by Mayr and Meyer <cit.>.
As for upper bounds, Rackoff provided an algorithm to decide coverability that uses 2^(nlog n) space <cit.>.
Following this, Koppenhagen and Mayr gave an algorithm that decided coverability in 2^(n) space for reversible systems, closing the gap for this class of systems <cit.>.
A recent result by Künnemann et al. also closes this gap <cit.> for Vector Addition Systems with States.
Our work uses this latest result.
Let CRN C = (, ) be a CRN that haltingly computes x.
Then there exists an algorithm which solves coverability for CRN C for initial configuration {1 L} and target configuration {1 H} which uses 2^(C) space.
This result follows from Theorem 3.3 from the recent work by Künnemann et al. <cit.>.
There, the authors consider the problem of coverability in Vector Addition Systems with States (VASS).
They show that if the answer to coverability is yes, then there is a covering path of length at most n^2^(d), where n is the maximum value change of any transition and d is the dimension of the vectors.
For us, n = 2 since each reaction has at most two reactants/products, and d is the number of species.
While vector addition systems are not capable of “catalytic” transitions, it is known that the same effect can be achieved by decomposing transitions into two vector additions.
So the path length would at most double for our systems.
With this bound on the path length, we can consider an algorithm that non-deterministically explores the state space of C (from starting configuration {1 L}) by simulating reactions on a current configuration of the system until a configuration that covers {1 H} is found.
By Savitch's Theorem <cit.>, this can be converted to a deterministic algorithm using the same space: (log) bits to hold a description of C, 2^() bits for a path length counter, and 2^()·log(S) bits to store the current configuration of C.
All of these values are absorbed under a 2^(C) bound.
With this space bound on the coverability problem established, we can now argue that the size of our constructed CRN from <Ref> is asymptotically equal to the size of the smallest CRN that haltingly computes x.
For all x ∈ℕ,
letting C_x be the CRN from <Ref>,
C_x = Θ((x)).
Clearly C_x = Ω((x)), by definition of (x) and since C_x from <Ref> is an instance of a CRN that haltingly computes x.
Now, we argue that C_x = ((x)).
The big picture is that one of the programs over which (x) is minimized in the construction of C_x is the program solving coverability for the optimal CRN for generating x.
Start with the CRN K = (_K, _K) that haltingly computes x with optimal size K = (x) = n.
Consider program p_K that solves coverability for K with initial configuration {1 L} and target configuration {1 H}, and outputs x.
Now, with that program p_K, build a CRN K' by following our construction for <Ref>.
We know p_K = (nlog n) so our final CRN needs (n) reactions to encode p_K (by <Ref>).
By <Ref>, we know that the space usage of (p_K) is 2^(K),
so our final CRN needs (K) additional reactions to have large enough registers for the simulation of (p_K).
Thus, the total size of our final CRN is (K + K) = ((x)).
The following theorem, which is the main result of our paper, follows immediately from <Ref> and <Ref>.
It characterizes the optimal number of reactions haltingly computing a number x using the space-aware Kolmogorov complexity measure defined in <Ref>.
For all x ∈ℕ,
(x) = Θ((x)).
Although, as mentioned above, CRN stable computation is not Turing universal, the theorem underlines its essential connection to space-bounded Turing machine computation.
§ OPEN QUESTIONS
Our results rely on the fact that we consider CRNs that perform halting computation—the end of the computation is indicated by the production of a designated halting species.
This constraint, intuitively that the systems know when they have finished a computation, is rather strong.
It is known that a much larger class of functions can be stably computed than can be haltingly computed <cit.>.
It remains an open question if lifting this halting requirement (and allowing just stable computation) reduces the reaction complexity.
It is also worth noting that our approach starts with exactly one copy of a special leader species.
Recently, Czerner showed that leaderless protocols are capable of deciding doubly exponential thresholds <cit.>.
While starting in some uniform state and converging to a specific state would be a better expression of “chemical self-organization,” their construction
seems incompatible with our register machine simulation.
Leaderless stable integer computation remains an area for future work.
Making a tight connection between stable integer computation and counting predicate computation commonly studied in population protocols <cit.> also remains open.
We can easily follow the halting generation of a specific amount of x by running the “less-than-or-equal-to” predicate, thereby converting our constructions to compute a counting predicate with only a constant more reactions.
This gives a new general upper bound on the complexity of counting in terms of (x).
However, it is unclear whether counting predicate constructions carry over to the generation problem, leaving it open whether counting may be easier.
Our notion of “space-aware” Kolmogorov complexity is interesting in its own right.
While a similar quantity has been previously studied in the context of computational complexity theory <cit.> (see also <Ref>),
it is not clear which properties proven of that measure carry over to ours.
Although the robustness to the choice of the universal machine carries over,
other properties may not.
For example, it is not obvious whether our results still hold if we consider programs that output a single bit of x at a time (as the previously studied measure does).
A core piece of this work is simulating space-bounded Turing machines, so it is very natural to extend the discussion to Boolean circuits (computing functions ϕ:{0,1}^n →{0,1}).
When attempting to compute Boolean functions with CRNs, one may be tempted to directly implement a Boolean circuit by creating (1) reactions per gate in the circuit.
However, our results imply that reaction complexity can be improved by doing a space-bounded Turing machine simulation instead—when the circuit is algorithmically “compressible.”
An important class of such compressible circuits are uniform circuits, i.e., those constructable by a fixed Turing machine given an input size.
Prior work established a quadratically tight connection between the depth of uniform circuits and Turing machine space <cit.>.
Further investigation into optimal Boolean function computation is warranted.
|
http://arxiv.org/abs/2307.00853v2 | 20230703084712 | Short Flip Sequences to Untangle Segments in the Plane | [
"Guilherme D. da Fonseca",
"Yan Gerard",
"Bastien Rivier"
] | cs.CG | [
"cs.CG"
] |
Short Flip Sequences to Untangle Segments in the Plane
Guilherme D. da Fonseca, Yan Gerard, Bastien Rivier
August 1, 2023
==============================================================================================================
A (multi)set of segments in the plane may form a TSP tour, a matching, a tree, or any multigraph. If two segments cross, then we can reduce the total length with the following flip operation.
We remove a pair of crossing segments, and insert a pair of non-crossing segments, while keeping the same vertex degrees.
The goal of this paper is to devise efficient strategies to flip the segments in order to obtain crossing-free segments after a small number of flips.
Linear and near-linear bounds on the number of flips were only known for segments with endpoints in convex position. We generalize these results, proving linear and near-linear bounds for cases with endpoints that are not in convex position.
Our results are proved in a general setting that applies to multiple problems, using multigraphs and the distinction between removal and insertion choices when performing a flip.
§ INTRODUCTION
The Euclidean Travelling Salesman Problem (TSP) is one of the most studied geometric optimization problems. We are given a set P of points in the plane and the goal is to find a tour S of minimum length. While the optimal solution has no crossing segments, essentially all approximation algorithms, heuristics, and PTASs may produce solutions S with crossings. Given S, the only procedure known to obtain a solution S' without crossings and of shorter length is to perform a flip operation. In our case, a flip consists of removing a pair of crossing segments, and then inserting a pair of non-crossing segments preserving a tour (and consequently reducing its length). Flips are performed in sequence until a crossing-free tour is obtained, in a procedure called untangle.
The same flip operation may be applied in other settings. More precisely, a flip consists of removing a pair of crossing segments s_1,s_2 and inserting a pair of segments s'_1, s'_2 in a way that s_1,s'_1,s_2,s'_2 forms a cycle and a certain graph property is preserved. In the case of TSP tours, the property is being a Hamiltonian cycle. Other properties have also been studied, such as spanning trees, perfect matchings, and multigraphs. Notice that flips preserve the degrees of all vertices and multiple copies of the same edge may appear when we perform a flip on certain graphs.
When the goal is to obtain a crossing-free TSP tour, we are allowed to choose which pair of crossing segments to remove in order to perform fewer flips, which we call removal choice (Figure <ref>(a)). Notice that, in a tour, choosing which pair of crossing edges we remove defines which pair of crossing edges we insert. However, this is not the case for matchings and multigraphs. There, we are also allowed to choose which pair of segments to insert among two possibilities, which we call insertion choice (Figure <ref>(b)).
Using removal or insertion choices to obtain shorter flip sequences has not been explicitly studied before and opens several new questions, while unifying the solution to multiple reconfiguration problems. Next, we describe previous work according to which choices are used. Throughout, P denotes the set of points and n the number of segments.
Using no choice: Van Leeuwen et al. <cit.> showed that the length (i.e. the number of flips) of any untangle sequence for a TSP tour is (n^3) and it is easy to construct Ω(n^2) examples. The same proof has been rediscovered in the context of matchings <cit.> after 35 years. If P is in convex position, then the number of crossings decreases at each flip, which gives a tight bound of Θ(n^2). If all points except the endpoints of t segments are in convex position, then the authors <cit.> recently showed a bound of (tn^2).
Using only insertion choice: Bonnet et al. <cit.> showed that using only insertion choice, it is possible to untangle a matching using (n^2) flips. Let σ be the spread of P, that is, the ratio between the maximum and minimum distances among points in P. Using insertion choice, it is also possible to untangle a matching using (n σ) flips <cit.>.
Using only removal choice: If P is in convex position, then by using (n) flips we can untangle a TSP tour <cit.>, as well as a red-blue matching <cit.>, while the best known bound for trees is (n log n) <cit.>.
If instead of convex position, we have colinear red points in a red-blue matching, then (n^2) flips suffice <cit.>.
Using both removal and insertion choices: If P is in convex position, then by using (n) flips we can untangle a matching <cit.>.
§.§ New Results
Previous results are usually stated for a single graph property. Using choices, we are able to state the results in a more general setting. Proofs that use insertion choice are unlikely to generalize to red-blue matchings, TSP tours, or trees, where insertion choice is not available (still, they may hold for both non-bipartite matchings and multigraphs). In contrast, bounds for multigraphs using only removal choice apply to all these cases.
Previously, we only knew linear or near-linear bounds when the points P are in convex position and removal choice is available.
The goal of the paper is to obtain linear and near-linear bounds to as many cases as possible, considering near-convex configurations as well as removal and insertion choices.
Let P = C ∪ T where C is in convex position and the points of T are outside the convex hull of C, unless otherwise specified. Let S be a multiset of n segments with endpoints P and t be the number of segments with at least one endpoint in T. We prove the following results to untangle S, and some are summarized in Table <ref>.
Using only insertion choice (Section <ref>): If T=∅, then (n log n) flips suffice. If T is separated from C by two parallel lines, then (t n log n) flips suffice.
Using only removal choice (Section <ref>): If |T| ≤ 2 and t = (1), then (n log n) flips suffice. In this case, our results hold with the points T being anywhere with respect to the convex hull of C. As the bounds hold for trees, it is useful to compare them against the (n log n) bound for trees from <cit.> that strongly uses the fact that S forms a tree.
The (log n) factor is not present for the special cases of TSP tours and red-blue matchings.
Using both removal and insertion choices (Section <ref>): If T is separated from C by two parallel lines, then (t n) flips suffice. If T is anywhere outside the convex hull of C and S is a matching, then (t^3 n) flips suffice.
In a matching or TSP tour, we have t = (|T|) and n = (|P|), however in a tree, t can be as high as (|T|^2). In a multigraph t and n can be much larger than |T| and |P|. The theorems describe more precise bounds as functions of all these parameters. For simplicity, the introduction only shows bounds in terms of only n and t.
§.§ Related Reconfiguration Problems
Combinatorial reconfiguration studies the step-by-step transition from one solution to another, for a given combinatorial problem. Many reconfiguration problems are presented in <cit.>. We give a brief overlook of reconfiguration among line segments using alternative flip operations.
The 2OPT flip is not restricted to crossing segments. It removes and inserts pairs of segments (the four segments forming a cycle) as the total length decreases. In contrast to flips among crossing segments, the number of 2OPT flips performed may be exponential <cit.>.
It is possible to relax the flip definition even further to all operations that replace two segments by two others forming a cycle <cit.>. This definition has also been considered for multigraphs <cit.>.
Another type of flip consists of removing a single segment and inserting another one.
Such flips are widely studied for triangulations <cit.>.
They have also been considered for non-crossing trees <cit.> and paths. It is possible to reconfigure any two non-crossing paths if the points are in convex position <cit.> or if there is one point inside the convex hull <cit.>.
§.§ Preliminaries
Throughout, we consider multigraphs (P,S) whose vertices P (called endpoints) are points in the plane and edges S are a multiset of line segments.
We assume that the endpoints are in general position and that the two endpoints of a segment are distinct.
Given two (possibly equal) sets P_1,P_2 of endpoints, we say that a segment is a P_1P_2-segment if one endpoint is in P_1 and the other is in P_2. Similarly, we say that a segment is a P_1-segment if at least one endpoint is in P_1.
We say that two segments cross if they intersect at a single point that is not an endpoint of either segment. We say that a line crosses a segment if they intersect at a single point that is not an endpoint of the segment.
We say that a segment or a line h separates a set of points P if P can be partitioned into two non-empty sets P_1,P_2 such that every segment p_1p_2 with p_1 ∈ P_1, p_2 ∈ P_2 crosses h.
Given a set of segments S, the line potential λ(ℓ) is the number of segments of S crossed by ℓ.
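A direct Python sketch of this quantity (with points as coordinate pairs; the notation is ours) is the following.

def side(line, p):
    """Sign of point p relative to the oriented line through line[0], line[1]."""
    (ax, ay), (bx, by) = line
    return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)

def line_potential(line, segments):
    """lambda(line): segments whose endpoints lie strictly on opposite sides."""
    return sum(1 for (u, v) in segments if side(line, u) * side(line, v) < 0)

# A vertical line through x = 1 crosses exactly one of these two segments.
segs = [((0, 0), (2, 1)), ((2, 2), (3, 0))]
assert line_potential(((1, 0), (1, 1)), segs) == 1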
Several proofs in this paper use the following two lemmas from previous papers.
Given a multiset S of segments and a line ℓ, let λ(ℓ) be the number of segments in S crossing ℓ. Then, λ(ℓ) never increases at a flip.
Consider a partition S=⋃_i S_i of the multiset S of segments and let P_i be the set of endpoints of S_i. If no segment of P_i2 crosses a segment of P_j2 for i ≠ j, then the sequences of flips in each S_i are independent.
We say that a segment s is uncrossable if for any two endpoints p_1,p_2, we have that p_1p_2 do not cross s. Lemma <ref> implies that an uncrossable segment cannot be flipped.
Our bounds often have terms like (tn) and (n log |C|) that would incorrectly become 0 if t or log |C| is 0. In order to avoid this problem, factors in the notation should be made at least 1. For example, the aforementioned bounds should be respectively interpreted as ((1+t) n) and (n log (2+|C|)).
§.§ Techniques
To prove our results, we combine previous and new potential functions with refined strategies and analysis. Van Leeuwen et al. <cit.> as well as Bonnet et al. <cit.> consider λ(ℓ) for a set L of all (|P|^2) lines defined by P. Since there always exists a line in L whose potential decreases at a flip, we obtain the (|P|^2n) = (n^3) classical bound without any choice.
Bonnet et al. <cit.> show that a set L of |P|-1 parallel lines (with one point between two consecutive lines) suffices with insertion choice. Since there is always an insertion choice that makes some λ(ℓ) decrease for ℓ∈ L, the (|P|n) = (n^2) bound follows.
In order to avoid a quadratic dependency in n, new line potentials have to be introduced with careful removal and/or insertion choices.
For example, to prove Theorem <ref>, we have to perform several flips in order to find a line ℓ with λ(ℓ) = (t) before applying the line potential argument. In contrast, to prove Theorem <ref>, we modify the line potential to only count the t T-segments. However, with this change the line potential may increase, which we need to handle properly.
Another key potential, inspired by <cit.> and used for the convex case is the depth potential δ(p_ap_b) of a segment p_ap_b, defined as the number of points between p_a and p_b along the convex hull boundary with a given orientation.
Careful removal and insertion choices as well as adaptations of this potential had to be made in order to guarantee that the potential decreases during most flips and never increases by too much. For example, to prove Theorems <ref> and <ref>, we had to consider the product of the depth, instead of the usual sum. To prove Theorem <ref>, we had to modify the depth potential to only count endpoints of segments that have crossings, which we call the crossing depth δ_×(p_ap_b).
In the convex case, the number of crossings decreases at each flip, which implies the trivial n2 upper bound. However, the number of crossings may increase when the points are not in convex position. An analysis of the number of crossings is used to bound the number of flips in the proof of Theorem <ref>.
Finally, we use the concept of splitting from <cit.>, presented in Lemma <ref>. The difficulty of splitting is to obtain the disjoint sets required by the lemma. For example, in Theorem <ref>, we untangle segments with both endpoints in C last to obtain the desired separation. In Theorem <ref>, we carefully find lines that split the original problem into problems with a smaller value of t that are solved recursively. The special case of uncrossable segments is used in Theorems <ref> and <ref>.
§ INSERTION CHOICE
In this section, we show how to untangle a multigraph using only insertion choice, that is, our strategies do not choose which pair of crossing segments is removed, but only which pair of segments with the same endpoints is subsequently inserted. We start with the convex case, followed by points outside the convex separated by two parallel lines.
§.§ Convex
Let P = C = {p_1,…,p_|C|} be a set of points in convex position sorted in counterclockwise order along the convex hull boundary (Figure <ref>(a)). Given a segment p_ap_b, we define the depth δ(p_ap_b) = |b-a|. This definition resembles but is not the same as the depth used in <cit.>. We use the depth to prove the following theorem.
Every multigraph (C,S) with C in convex position has an untangle sequence of length (n log |C|) = (n log n) using only insertion choice, where n = |S|.
Let the potential function
ϕ(S) = ∏_s ∈ Sδ(s).
As δ(s) ∈{1,…,|C|-1}, we have that ϕ(S) is integer, positive, and at most |C|^n. Next, we show that for any flipped pair of segments p_ap_b,p_cp_d there exists an insertion choice that multiplies ϕ(S) by a factor of at most 3/4, and the theorem follows.
Consider a flip of a segment p_ap_b with a segment p_cp_d and assume without loss of generality that a < c < b < d.
The contribution of the pair of segments p_ap_b,p_cp_d to the potential ϕ(S) is the factor f=δ(p_ap_b)δ(p_cp_d).
Let f' be the factor corresponding to the pair of inserted segments.
Case 1: If δ(p_ap_c) ≤δ(p_cp_b), then we insert the segments p_ap_c and p_bp_d and we get f'=δ(p_ap_c)δ(p_bp_d) (Figure <ref>(b)).
We notice that δ(p_ap_b)=δ(p_ap_c)+δ(p_cp_b). It follows that δ(p_ap_c) ≤δ(p_ap_b)/2, and since δ(p_bp_d) ≤δ(p_cp_d), we get f'≤ f/2.
Case 2: If δ(p_bp_d) ≤δ(p_cp_b), then we insert the same segments p_ap_c and p_bp_d as previously. We have δ(p_ap_c) ≤δ(p_ap_b) and δ(p_bp_d)≤δ(p_cp_d)/2, which gives f'≤ f/2.
Case 3: If (i) δ(p_ap_c) > δ(p_cp_b) and (ii) δ(p_bp_d) > δ(p_cp_b), then we insert the segments p_ap_d and p_cp_b (Figure <ref>(c)).
The contribution of the new pair of segments is f'=δ(p_ap_d)δ(p_cp_b).
We introduce the coefficients x=δ(p_ap_c)/δ(p_cp_b) and y=δ(p_bp_d)/δ(p_cp_b) so that δ(p_ap_c) = xδ(p_cp_b) and δ(p_bp_d) = yδ(p_cp_b). It follows that δ(p_ap_b) = (1+x)δ(p_cp_b), δ(p_cp_d)=(1+y)δ(p_cp_b) and δ(p_ap_d) = (1+x+y)δ(p_cp_b). The ratio f'/f is equal to a function g(x,y) = (1+x+y)/((1+x)(1+y)). Due to (i) and (ii), we have that x≥ 1 and y ≥ 1.
In other words, we can upper bound the ratio f'/f by the maximum of the function g(x,y) over x,y ≥ 1. It is easy to show that the function g(x,y) is decreasing in both x and y. Hence its maximum over this region is attained at x=y=1 and equals 3/4, showing that f'≤ 3f/4.
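The insertion rule of this proof is short enough to state in code. The following Python sketch follows the case analysis above, with indices measured along the convex hull, and brute-force checks the 3/4 factor on small instances.

def depth(i, j):
    return abs(i - j)

def insertion_choice(a, c, b, d):
    """Insertion choice for a flip of p_a p_b with p_c p_d, where a < c < b < d.
    The product of the depths of the returned pair is at most 3/4 of
    depth(a, b) * depth(c, d)."""
    if depth(a, c) <= depth(c, b) or depth(b, d) <= depth(c, b):   # Cases 1 and 2
        return (a, c), (b, d)
    return (a, d), (c, b)                                          # Case 3

# Brute-force check of the 3/4 factor on all index quadruples over 30 points.
N = 30
for a in range(N):
    for c in range(a + 1, N):
        for b in range(c + 1, N):
            for d in range(b + 1, N):
                s, t = insertion_choice(a, c, b, d)
                assert 4 * depth(*s) * depth(*t) <= 3 * depth(a, b) * depth(c, d)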
§.§ Separated by Two Parallel Lines
In this section, we prove the following theorem, which is a generalization of Theorem <ref>.
Consider a multigraph (P,S) with P = C ∪ T_1 ∪ T_2 where C is in convex position and there exist two horizontal lines ℓ_1,ℓ_2, with T_1 above ℓ_1 above C above ℓ_2 above T_2.
Let T = T_1 ∪ T_2, n = |S|, and t be the number of T-segments.
There exists an untangle sequence of length (t |P| log |C| + n log |C|) = (tn log n) using only insertion choice.
We start by describing the insertion choice for flips involving at least one point in T.
Let p_1,…,p_|P| be the points P sorted vertically from top to bottom.
Consider a flip involving the points p_a,p_b,p_c,p_d with a<b<c<d. The insertion choice is to create the segments p_ap_b and p_cp_d. See Figure <ref>(b). As in <cit.>, we define the potential η of a segment p_ip_j as
η(p_ip_j) = |i-j|.
Notice that η is an integer between 1 and |P|-1. We define η_T(S) as the sum of η(p_ip_j) for p_ip_j ∈ S with p_i or p_j in T. Notice that 0 < η_T(S) < t |P|. It is easy to verify that any flip involving a point in T decreases η_T(S) and other flips do not change η_T(S). Hence, the number of flips involving at least one point in T is (t|P|).
For the flips involving only points of C, we use the same choice as in the proof of Theorem <ref>.
The potential function
ϕ(S) = ∏_p_ip_j ∈ S : p_i∈ C and p_j ∈ Cδ(p_ip_j)
is at most |C|^n and decreases by a factor of at most 3/4 at every flip that involves only points of C.
However, ϕ(S) may increase by a factor of (|C|^2) when performing a flip that involves a point in T. As such flips only happen (t|P|) times, the total increase is at most a factor of |C|^(t|P|).
Concluding, the number of flips involving only points in C is at most
log_4/3(|C|^(n) |C|^(t|P|)) = (n log |C| + t|P| log |C|).
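A minimal Python sketch of the insertion rule used for flips involving a point of T, together with a check that the potential η decreases, is given below (ranks are positions in the top-to-bottom order of the points; the representation is ours).

def eta(i, j):
    """Potential of a segment whose endpoints have ranks i and j in the
    top-to-bottom order of all the points."""
    return abs(i - j)

def insert_sorted(seg1, seg2):
    """Insertion choice for a flip involving a point of T: re-pair the four
    endpoint ranks as (1st, 2nd) and (3rd, 4th) in vertical order."""
    a, b, c, d = sorted(seg1 + seg2)
    return (a, b), (c, d)

# For either crossing pairing of four distinct ranks, the sorted re-pairing
# strictly decreases the total potential.
for seg1, seg2 in [((1, 3), (2, 4)), ((1, 4), (2, 3))]:
    new1, new2 = insert_sorted(seg1, seg2)
    assert eta(*new1) + eta(*new2) < eta(*seg1) + eta(*seg2)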
§ REMOVAL CHOICE
In this section, we show how to untangle a multigraph using only removal choice. We start with the convex case, followed by 1 point inside or outside the convex, then 2 points outside the convex, 2 points inside the convex, and 1 point inside and 1 outside the convex. As only removal choice is used, all results also apply to red-blue matchings, TSP tours, and trees.
§.§ Convex
Let P = C = {p_1,…,p_|C|} be a set of points in convex position sorted in counterclockwise order along the convex hull boundary and consider a set of segments S with endpoints P. Given a segment p_ap_b and assuming without loss of generality that a<b, we define the crossing depth δ_×(p_ap_b) as the number of points in p_a+1,…,p_b-1 that are an endpoint of a segment in S that crosses any other segment in S (not necessarily p_ap_b). We use the crossing depth to prove the following theorem, which implies a simpler and more general proof of the (n log n) bound for trees <cit.>.
Every multigraph (C,S) with C in convex position has an untangle sequence of length (n log |C|) = (n log n) using only removal choice, where n = |S|.
We repeat the following procedure until there are no more crossings. Let p_ap_b ∈ S be a segment with crossings (hence, crossing depth at least one) and a<b minimizing δ_×(p_ap_b) (Figure <ref>(a)). Let q_1,…,q_δ_×(p_ap_b) be the points defining δ_×(p_ap_b) in order and let i = δ_×(p_ap_b)/2. Since p_ap_b has minimum crossing depth, the point q_i is the endpoint of segment q_ip_c that crosses p_ap_b. When flipping q_ip_c and p_ap_b, we obtain a segment s (either s=q_ip_a or s=q_ip_b) with δ_×(s) at most half of the original value of δ_×(p_ap_b) (Figure <ref>(b,c)). Hence, this operation always divides the value of the smallest positive crossing depth by at least two. As the crossing depth is an integer smaller than |C|, after performing this operation (log |C|) times, it produces a segment of crossing depth 0. As the segments of crossing depth 0 can no longer participate in a flip, the claimed bound follows.
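For points in convex position, crossings and the crossing depth depend only on the order of the indices along the hull, so the removal choice of this proof can be sketched in a few lines of Python (the representation of segments as index pairs is ours).

def chords_cross(s, t):
    """Chords of a convex polygon (index pairs in counterclockwise order) cross
    iff their endpoints interleave; chords sharing an endpoint do not cross."""
    if set(s) & set(t):
        return False
    a, b = sorted(s)
    return (a < t[0] < b) != (a < t[1] < b)

def crossing_depth(s, segments):
    """delta_x(s): endpoints of segments that have crossings, lying strictly
    between the endpoints of s along the hull boundary."""
    active = {p for u in segments for p in u
              if any(chords_cross(u, v) for v in segments)}
    a, b = sorted(s)
    return sum(1 for p in active if a < p < b)

def removal_choice(segments):
    """A segment with crossings of minimum crossing depth; the proof then flips
    it with a segment incident to its 'median' witness point."""
    with_crossings = [s for s in segments
                      if any(chords_cross(s, t) for t in segments)]
    return min(with_crossings, key=lambda s: crossing_depth(s, segments))

segs = [(1, 5), (2, 7), (3, 6), (8, 9)]
assert removal_choice(segs) == (3, 6) and crossing_depth((3, 6), segs) == 1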
§.§ One Point Inside or Outside a Convex
In this section, we prove Theorem <ref>.
In the case of TSP tours <cit.> and red-blue matchings <cit.>, the preprocessing to untangle CC-segments takes (n) flips. However, in the case of trees <cit.> and in general (Theorem <ref>), the best bound known is (n log n).
We first state a lemma used to prove Theorem <ref>.
Consider a set C of points in convex position, and a multiset S of n crossing-free segments with endpoints in C. Consider the multiset S ∪{s} where s is an extra segment with one endpoint in C and one endpoint q anywhere in the plane.
There exists an untangle sequence for S ∪{s} of length (n) using only removal choice.
Iteratively flip the segment qp_1 with the segment p_2p_3 ∈ S crossing qp_1 the farthest from q.
This flip inserts a CC-segment p_1p_2, which is impossible to flip again, because the line p_1p_2 is crossing free. The flip does not create any crossing between CC-segments.
We are now ready to state and prove the theorem.
Consider a multigraph (P,S) with P = C ∪ T where C is in convex position, and T = {q}, and such that there is no crossing pair of CC-segments (possibly after a preprocessing for the convex case).
Let n = |S| and t be the number of T-segments.
There exists an untangle sequence of length (tn) using only removal choice.
For each segment s with endpoint q with crossing, we apply Lemma <ref> to s and the CC-segments crossing s.
Once a segment s incident to q is crossing free, it is impossible to flip it again as we fall in one of the following cases.
Let ℓ be the line containing s.
Case 1: If ℓ is crossing free, then it splits the multigraph in three partitions: the segments on one side of ℓ, the segments on the other side of ℓ, and the segment s itself.
Case 2: If ℓ is not crossing free and q is outside the convex hull of C, then s is uncrossable.
Case 3: If q is inside the convex hull of C, then introducing a crossing on s would require that q lies in the interior of the convex quadrilateral whose diagonals are the two segments removed by a flip. The procedure excludes this possibility by ensuring that there are no crossing pair of CC-segments, and, therefore, that one of the removed segment already has q as an endpoint.
Therefore, we need at most n flips for each of the t segments incident to q.
§.§ Two Points Outside a Convex
In this section, we prove a theorem with a bound that is exponential in t, which makes it of little interest for large t. Notice, however, that in matchings t ≤ 2, in a TSP tour t ≤ 4, and in a binary tree t ≤ 6. Also notice that the definition of t is different from other theorems (here TT-segments are counted twice). Both definitions are equivalent up to a factor of 2, but since t appears in the exponent, they are not exchangeable.
Consider a multigraph (P,S) with P = C ∪ T where C is in convex position, the points of T are outside the convex hull of C, and |T| ≤ 2.
Let n = |S| and t be the sum of the degrees of the points in T.
There exists an untangle sequence of length (2^t d_conv(n)) using only removal choice, where d_conv(n) is the number of flips to untangle any multiset of at most n segments with endpoints in convex position.
Throughout this proof, we partition the TT-segments (respectively, the CT-segments) into two types: a segment is a TTI-segment (respectively, CTI-segment) if it intersects the interior of the convex hull of C, and a TTO-segment (respectively, CTO-segment) otherwise.
Let f(t) be the number of flips to untangle a multiset S as in the statement of the theorem. The proof proceeds by induction. The base case is t = 0, when f(0) ≤ d_conv(n) by definition of d_conv(n).
Next, we show how to bound f(t) for t > 0, but first we need some definitions. A line ℓ is a T-splitter if ℓ is crossing free and either ℓ contains a T-segment or there are T-segments on both sides of ℓ. We abusively say that a segment s is a T-splitter if the line containing s is a T-splitter. A T-splitter is useful because we can apply Lemma <ref> and solve sub-problems with a lower value of t by induction.
Phase 1: untangle all but one segment by induction. We remove an arbitrary CT-segment or TT-segment s from S. We then use induction to untangle S using f(t-1) flips and insert the segment s back in S afterwards. Notice that all crossings are now on s.
Phase 2.1: apply induction if possible.
If S admits a T-splitter ℓ, then we apply Lemma <ref> to solve each side of ℓ independently using induction.
If S has a crossing-free TTO-segment qq' such that the line qq' is not crossing free, then qq' is uncrossable, and we remove qq' from S and untangle S by induction.
Similarly, in the case where T={q,q'} and where qq' is a TTI-segment, if S has a CTO-segment, say pq, then pq is uncrossable, and we remove pq from S and untangle S by induction.
In all the three cases of Phase 2.1 we get f(t) ≤ f(t-1) + f(t_1) + f(t_2), where t_1+t_2 ≤ t and t_1,t_2 ≥ 1.
Phase 2.2: split after one flip.
If S contains no T-splitter and if s is a TT-segment, then there remains no CT-segment in S (as every CT-segment shares an endpoint with the TT-segment s that contains all crossings), and s crosses a CC-segment s'.
A crossing-free CT-segment would either be a CTI-segment, hence a T-splitter, or a CTO-segment and, hence uncrossable and removed by one of the induction cases of Phase 2.1.
The segment s' becomes a T-splitter after flipping s with s', and we invoke induction.
By Lemma <ref>, we get in this case f(t) ≤ f(t-1) + 1 + f(t_1) + f(t_2), where t_1+t_2 ≤ t and t_1,t_2 ≥ 1.
Phase 2.3: split after (n) flips.
In this case, S contains no T-splitter and s is a CT-segment, say with q as its endpoint in T.
While s', the segment of S that crosses s the farthest away from q, is a CC-segment, we flip s and s' and we set s to be the newly inserted CT-segment incident to q. By Lemma <ref>, at most n flips are performed in this loop.
At the end of the loop, either s is crossing free, or s' is a CT-segment, say with q' as its endpoint in T.
Then, we also flip s and s'.
Insertion case 1:
If two CT-segments are inserted, then, either one of them is uncrossable (this is the case if s' is a CTO-segment), or s' is now a T-splitter (recall that if
qq' is a TTI-segment, then all the CTO-segments have been removed at Phase 2.1.).
Insertion case 2:
If the TT-segment qq' is inserted, then the inserted CC-segment is crossing free (as in the proof of Lemma <ref>), and, if qq' is not already crossing free, we flip qq' with any segment, say pp'.
Next, we split S as follows.
Among the CTI-segments of S which are on the upper (respectively lower) side of the line qq', consider the one whose endpoint p_upper (respectively p_lower) in C is the closest to the line qq'.
The segments of S are either inside or outside the convex quadrilateral qp_lowerq'p_upper, and we know that only the segments inside may have crossings.
By Lemma <ref>, we remove from S all the segments outside qp_lowerq'p_upper.
Recall that, in our case, qq' is a TTI-segment, and all the CTO-segments have been removed at Phase 2.1.
The line pp' is finally a T-splitter.
Again, by Lemma <ref>, we get in this case f(t) ≤ f(t-1) + n+2 + f(t_1) + f(t_2), where t_1+t_2 ≤ t and t_1,t_2 ≥ 1.
The last bound on f(t) dominates the recurrence. Using that f(t_1) + f(t_2) ≤ f(t-1) + f(1) and t<n we get
f(t) ≤ f(t-1) + n+2 + f(t_1) + f(t_2) ≤(n) + 2 f(t-1),
which solves to f(t) = (2^t d_conv(n)) as claimed.
§.§ Two Points inside a Convex
We prove a similar theorem for two points inside the convex hull of C.
Consider a multigraph (P,S) with P = C ∪ T where C is in convex position, the points of T are inside the convex hull of C, and T = {q,q'}.
Let n = |S| and t be the number of T-segments.
There exists an untangle sequence of length (d_conv(n) + tn) using only removal choice, where d_conv(n) is the number of flips to untangle any multiset of at most n segments with endpoints in convex position.
The untangle sequence is decomposed in five phases. At the end of each phase, a new type of crossings is removed, and types of crossings removed in the previous phases are not present, even if they may temporarily appear during the phase.
Phase 1 (𝐂𝐓× 𝐂𝐓). In this phase, we remove all crossings between pairs of CT-segments using (d_conv(t)) = (d_conv(n)) flips. We separately solve two convex sub-problems defined by the CT-segments, one on each side of the line qq'.
Phase 2 (𝐂𝐂× 𝐂𝐂). In this phase, we remove all crossings between pairs of CC-segments using (d_conv(n)) flips. As no CT-segment has been created, there is still no crossing between a pair of CT segments. Throughout, our removal will preserve the invariant that no pair of CC-segments crosses.
Phase 3 (𝐂𝐓 ×𝐧𝐨𝐧-𝐜𝐞𝐧𝐭𝐫𝐚𝐥 𝐂𝐂).
We distinguish between a few types of CC-segments. The central CC-segments cross the segment qq' (regardless of qq' being in S or not), while the non-central do not. The peripheral CC-segments cross the line qq' but not the segment qq', while the outermost CC-segments do not cross either. In this phase, we remove all crossings between CT-segments and non-central CC-segments.
Given a non-central CC-segment pp', let the out-depth δ'(pp') be the number of points of C that are contained inside the halfplane bounded by the line pp' and not containing T. Also, let χ be the number of crossings between the non-central CC-segments and the CT-segments. At the end of each step the two following invariants are preserved. (i) No pair of CC-segments crosses. (ii) No pair of CT-segments crosses.
At each step, we choose to flip the non-central CC-segment pp' of minimum out-depth that crosses a CT-segment. We flip pp' with the CT-segment q”p” (with q”∈{q,q'})
that crosses pp' at the point closest to p (Figure <ref>(a) and Figure <ref>(a)).
One of the possibly inserted pairs may contain a CT-segment s that crosses another CT-segment s', violating the invariant (ii) (Figure <ref>(b) and Figure <ref>(b)). If there are multiple such segments s', then we consider s' to be the segment whose crossing with s is closer to q”. We flip s and s' and obtain either two CT-segments (Figure <ref>(c) and Figure <ref>(c)) or a CC-segment and the segment qq' (Figure <ref>(d) and Figure <ref>(d)). The analysis is divided in two main cases.
If pp' is an outermost CC-segment (see Figure <ref>), then case analysis shows that the two invariants are preserved and χ decreases.
If pp' is a peripheral CC-segment (see Figure <ref>), then a case analysis shows that the two invariants are preserved and χ has the following behavior. If no CC-segment is inserted, then χ decreases. Otherwise a CC-segment and a TT-segment are inserted and χ may increase by (t) (Figure <ref>(d)). Notice that the number of times the TT-segment qq' is inserted is (t), which bounds the total increase by (t^2).
As χ = (tn), the total increase is (t^2), and χ decreases at all but (t) steps, we have that the number of flips in Phase 3 is (tn).
Phase 4 (𝐂𝐓 ×𝐜𝐞𝐧𝐭𝐫𝐚𝐥 𝐂𝐂).
At this point, each crossing involves a central CC-segment and either a CT-segment or the TT-segment qq'.
In this phase, we remove all crossings between CT-segments and central CC-segments, ignoring the TT-segments.
This phase ends with crossings only between qq' and central CC-segments.
Given four endpoints q”∈ T, p,p”∈ C, and x ∈ C ∪ T, we say that a pair of segments p”q”,xp ∈ S crossing at a point c contains an ear pp” if the interior of the triangle pp”c intersects no segment of S (see Figure <ref>(a) and <ref>(b)).
Every set of segments with endpoints in C ∪ T with |T| = 2 that has crossings (not involving the TT-segment) contains an ear (adjacent to the crossing that is farthest from the line qq').
At each step, we flip a pair of segments p”q”,xp that contains an ear pp”, prioritizing pairs where both segments are CT-segments. Notice that, even though initially we did not have crossing pairs of CT-segments, they may be produced in the flip (Figure <ref>(c)).
If the flip inserts a non-central CC-segment which crosses some CT-segments (Figure <ref>(d)), then, we perform the following while loop. Assume without loss of generality that qq' is horizontal and s is closer to q' than to q. While there exists a non-central CC-segment s with crossings, we flip s with the CT-segment s' crossing s that comes first according to the following order. As a first criterion, a segment incident to q comes before a segment incident to q'. As a second tie-breaking criterion, a segment whose crossing point with s that is farther from the line qq' comes before one that is closer.
Let χ = (tn) be the number of crossings between central CC-segments and CT-segments plus the number of crossings between CT-segments.
A case analysis shows that the value of χ decreases at each step. If no non-central CC-segment is inserted, then the corresponding step consists of a single flip. As χ decreases, there are (tn) steps that do not insert a non-central CC-segment.
However, if a non-central CC-segment is inserted, at the end of the step we inserted a CC-segment that can no longer be flipped (Lemma <ref>). As the number of CC-segments is (n), we have that the number of times the while loop is executed is (n). Since each execution of the while loop performs (t) flips, we have a total of (tn) flips in this phase.
Phase 5 (𝐓𝐓 ×𝐜𝐞𝐧𝐭𝐫𝐚𝐥 𝐂𝐂). In this phase, we remove all crossings left, which are between the possibly multiple copies of the TT-segment qq' and central CC-segments. The endpoints of the segments with crossings are in convex position and all other endpoints are outside their convex hull. Hence, by Lemma <ref>, it is possible to obtain a crossing-free multigraph using (d_conv(n)) flips.
§.§ One Point inside and One Point Outside a Convex
Given an endpoint p, let δ(p) denote the degree of p, that is, the number of segments incident to p. The following lemma is used to prove Theorem <ref>.
Consider a multigraph (P,S) with P = C ∪ T where C is in convex position, and T = {q,q'} such that q is outside the convex hull of C and q' is inside the convex hull of C. Consider that q is the endpoint of a single segment s and all crossings are on s.
Let n = |S| and t = (δ(q')) be the number of T-segments.
There exists a flip sequence of length (tn) using only removal choice that ends with all crossings (if any) on the segment qq'.
We proceed as follows while s has crossings. For induction purposes, let f(n') denote the length of the flip sequence in the lemma statement for n' < n segments.
Let s' be the segment that crosses s at the point farthest from q. We flip s and s', arriving at one of the three cases below (Figure <ref>).
Case 1 (𝐂𝐓 × 𝐂𝐂). In this case, the segment s' is a CC-segment. Notice that the line ℓ containing s' becomes crossing free after the flip. There are segments on both sides of ℓ.
If ℓ separates q,q', then we untangle both sides independently (Lemma <ref>) using (n) and (t n) flips (Theorem <ref>). Otherwise, the segments on one side of ℓ are already crossing free (because of the specific choice of s') and we inductively untangle the n' ≤ n-1 segments on the other side of ℓ using f(n') flips.
Case 2 (𝐂𝐓 × 𝐂𝐓 → 𝐂𝐂,𝐓𝐓). If s' is a CT-segment and one of the inserted segments is the TT-segment qq', then the procedure is over as all crossings are on qq'.
Case 3 (𝐂𝐓 × 𝐂𝐓 → 𝐂𝐓,𝐂𝐓). In this case two CT-segments are inserted. Let p ∈ C be an endpoint of s = qp. Since the inserted CT-segment q'p is crossing free, Case 3 only happens (t) times before we arrive at Case 1 or Case 2.
Putting the three cases together, we obtain the recurrence
f(n) ≤(t) + f(n'), with n' ≤ n-1,
which solves to f(n) = (tn), as claimed.
We are now ready to prove the theorem.
Consider a multigraph (P,S) with P = C ∪ T where C is in convex position, and T = {q,q'} such that q is outside the convex hull of C and q' is inside the convex hull of C.
Let n = |S| and t be the number of T-segments.
There exists an untangle sequence of length (d_conv(n) + δ(q)δ(q')n) = (d_conv(n) + t^2n) using only removal choice, where d_conv(n) is the number of flips to untangle any multiset of at most n segments with endpoints in convex position.
The untangle sequence contains four phases.
Phase 1 (𝐂𝐂× 𝐂𝐂). In this phase, we remove all crossings between pairs of CC-segments using d_conv(n) flips. Throughout all the phases, the invariant that no pair of CC-segments crosses is preserved.
Phase 2 (𝐂𝐪' × 𝐂𝐂). In this phase, we remove all crossings between pairs composed of a CC-segment and a CT-segment incident to q' (the point inside the convex hull of C) using (tn) flips by Theorem <ref>.
Phase 3 (𝐂𝐪).
At this point, all crossings involve a segment incident to q. In this phase, we deal with all remaining crossings except the crossings involving the segment qq'. Lemma <ref> allows us to remove the crossings in each CT-segment s incident to q independently, which we do using (δ(q') n) flips using Lemma <ref>.
As there are δ(q) CT-segments adjacent to q, the total number of flips is (δ(q) δ(q') n) = (t^2n).
Phase 4 (𝐂𝐂 × 𝐓𝐓). At this point, all crossings involve the TT-segment qq'. The endpoints in C that are adjacent to segments with crossings, together with q', are all in convex position. Hence, the only endpoint not in convex position is q, and we apply Theorem <ref> using (tn) flips.
After the d_conv(n) flips in Phase 1, the number of flips is dominated by Phase 3 with (δ(q) δ(q') n) = (t^2n) flips.
Notice that, in certain cases (for example in the red-blue case with q,q' having different colors) a flip between two CT-segments never produces two CT-segments. Consequently, Case 3 of the proof of Lemma <ref> never happens, and the bound in Theorem <ref> decreases to (d_conv(n) + tn).
§ REMOVAL AND INSERTION CHOICES
In this section, we show how to untangle a matching or a multigraph using both removal and insertion choices. We start with the case of points outside the convex separated by two parallel lines. Afterwards, we prove an important lemma and apply it to untangle a matching with points outside the convex.
§.§ Separated by Two Parallel Lines
We start with the simpler case in which T is separated from C by two parallel lines. In this case, our bound of (n + t|P|) interpolates between the tight convex bound of (n) from <cit.> and the (t|P|) bound from <cit.> for t arbitrary segments.
Consider a multigraph (P,S) with P = C ∪ T_1 ∪ T_2 where C is in convex position and there exist two horizontal lines ℓ_1,ℓ_2, with T_1 above ℓ_1 above C above ℓ_2 above T_2.
Let n = |S|, T = T_1 ∪ T_2, and t be the number of T-segments.
There exists an untangle sequence of length (n + t|P|) = (tn) using both removal and insertion choices.
The algorithm runs in two phases.
Phase 1. We use removal choice to perform the flips involving a point in T. At the end of the first phase, there can only be crossings among segments with all endpoints in C.
The insertion choice for the first phase is the following.
Let p_1,…,p_|P| be the points P sorted vertically from top to bottom.
Consider a flip involving the points p_a,p_b,p_c,p_d with a<b<c<d. The insertion choice is to create the segments p_ap_b and p_cp_d. As in <cit.>, we define the potential η of a segment p_ip_j as
η(p_ip_j) = |i-j|.
Notice that η is an integer from 1 to |P|-1. We define η(S) as the sum of η(p_ip_j) for p_ip_j ∈ S with p_i or p_j in T. Notice that 0 < η(S) < t |P|. It is easy to verify that any flip involving a point in T decreases η(S). Hence, the number of flips in Phase 1 is (t|P|).
Phase 2.
Since T is outside the convex hull of C, flips between segments with all endpoints in C cannot create crossings with the other segments, which are guaranteed to be crossing free at this point. Hence, it suffices to run an algorithm to untangle a convex set with removal and insertion choice from <cit.>, which performs (n) flips.
§.§ Liberating a Line
In this section, we prove the following key lemma, which we use in the following section. The lemma only applies to matchings and it is easy to find a counter-example for multisets (S consisting of n copies of a single segment that crosses pq).
Consider a matching S of n segments with endpoints C in convex position, and a segment pq separating C. Using (n) flips with removal and insertion choices on the initial set S ∪{pq}, we obtain a set of segments that do not cross the line pq.
For each flip performed in the subroutine described hereafter, at least one of the inserted segments does not cross the line pq and is removed from S (see Figure <ref>).
Preprocessing.
First, we remove from S the segments that do not intersect the line pq, as they are irrelevant.
Second, anytime two segments in S cross, we flip them choosing to insert the pair of segments not crossing the line pq. One such flip removes two segments from S.
Let p_1p_2 (respectively p_{2n-1}p_{2n}) be the segment in S whose intersection point with pq is the closest to p (respectively q).
Without loss of generality, assume that the points p_1 and p_{2n-1} are on the same side of the line pq.
First flip.
Lemma <ref> applied to the segment pq and the triangle p_1p_2p_{2n-1} shows that at least one of the segments among pp_{2n-1}, qp_1, qp_2 intersects all the segments of S.
Without loss of generality, assume that pp_{2n-1} is such a segment, i.e., that pp_{2n-1} crosses all segments of S ∖{p_{2n-1}p_{2n}}.
We choose to remove the segments pq and p_{2n-1}p_{2n}, and we choose to insert the segments pp_{2n-1} and qp_{2n}.
As the segment qp_{2n} does not cross the line pq, we remove it from S.
Second flip. We choose to flip the segments pp_{2n-1} and p_1p_2.
If n is odd, we choose to insert the pair of segments pp_1, p_2p_{2n-1}.
If n is even, we insert the segments pp_2, p_1p_{2n-1}.
By convexity, one of the inserted segments (the one with both endpoints in C) crosses all other n-2 segments.
The other inserted segment (the one with p as one of its endpoints) does not cross the line pq, so we remove it from S.
Note that the condition on the parity of n is there only to ensure that the last segment p_{2n-3}p_{2n-2} is dealt with at the last flip.
Remaining flips.
We describe the third flip. The remaining flips are performed similarly.
Let s be the previously inserted segment.
Let p_3p_4 be the segment in S whose intersection point with pq is the closest to p. Without loss of generality, assume that p_3 is on the same side of the line pq as p_1 and p_{2n-1}.
We choose to flip s with p_3p_4.
If s = p_2p_{2n-1}, we choose to insert the pair of segments p_2p_4, p_3p_{2n-1}.
If s = p_1p_{2n-1}, we choose to insert the pair of segments p_1p_3, p_4p_{2n-1}.
By convexity, one inserted segment (the one with p_{2n-1} as an endpoint) crosses all other n-3 segments.
The other inserted segment does not cross the line pq, so we remove it from S.
Note that the insertion choice described is the only viable one: the alternative choice would insert a segment that crosses the line pq but no other segment, and such a crossing-free segment could never be removed by a later flip.
§.§ Points Outside a Convex
We are now ready to prove the following theorem, which only applies to matchings because it uses Lemma <ref>.
Consider a matching S consisting of n segments with endpoints P = C ∪ T where C is in convex position and T is outside the convex hull of C.
Let t = |T|.
There exists an untangle sequence of length O(t^3 n) using both removal and insertion choices.
Throughout this proof, we partition the TT-segments into two types: TTI-segment if it intersects the interior of the convex hull of C and TTO-segment otherwise.
𝐓𝐓-segments.
At any time during the untangle procedure, if there is a TTI-segment s that crosses more than t segments, we apply Lemma <ref> to liberate s from every CC-segment using O(n) flips.
Let ℓ be the line containing s. Since λ(ℓ) cannot increase (Lemma <ref>), λ(ℓ) < t after Lemma <ref>, and there are O(t^2) different TTI-segments, it follows that Lemma <ref> is applied O(t^2) times, performing a total of O(t^2 n) flips.
As the numbers of times s is inserted and removed differ by at most 1 and λ(ℓ) decreases at each flip that removes s, it follows that s participates in O(t) flips. As there are O(t^2) different TTI-segments, the total number of flips involving TTI-segments is O(t^3).
We define a set L of O(t) lines as follows. For each point q ∈ T, we have two lines ℓ_1, ℓ_2 ∈ L that are the two tangents of the convex hull of C that pass through q. As the lines ℓ∈ L do not separate C, the potential λ(ℓ) = O(t).
When flipping a TTO-segment q_1q_2 with another segment q_3p with q_3 ∈ T (p may be in T or in C), we make the insertion choice of creating a TTO-segment q_1q_3 such that there exists a line ℓ∈ L whose potential λ(ℓ) decreases. It is easy to verify that ℓ always exists (see Lemmas <ref> and <ref> in the Appendix). Hence, the number of flips involving TTO-segments is O(t^2) and the number of flips involving TT-segments in general is O(t^3).
All except pairs of 𝐂𝐂-segments.
We keep flipping pairs of crossing segments that are not both CC-segments, with the following insertion choices.
Whenever we flip two CT-segments, we make the insertion choice of creating a TT-segment. Hence, as the number of flips involving TT-segments is O(t^3), so is the number of flips of two CT-segments.
Whenever we flip a CT-segment p_1q with q ∈ T and a CC-segment p_3p_4, we make the following insertion choice. Let v(q) be a vector such that the dot product v(q) · q < v(q) · p for all p ∈ C, that is, v(q) is orthogonal to a line ℓ separating q from C and points towards C. We define the potential η(p_xq) of a segment with p_x ∈ C and q ∈ T as the number of points p ∈ C such that v(q) · p < v(q) · p_x, that is, the number of points of C before p_x in the direction v(q). We choose to insert the segment p_xq that minimizes η(p_xq) for x ∈ {1,2}. Let η(S) be the sum of η(p_xq) over all CT-segments p_xq in S. It is easy to see that η(S) is O(t|C|) and decreases at each flip involving a CT-segment (not counting the flips inside Lemma <ref>).
There are two situations in which η(S) may increase. One is when Lemma <ref> is applied, which happens O(t^2) times. Another one is when a TT-segment and a CC-segment flip, creating two CT-segments, which happens O(t^3) times. In each of these two situations, η(S) increases by O(|C|). Consequently, the number of flips between a CT-segment and a CC-segment is O(t^3|C|) = O(t^3 n).
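As an illustration of this potential, the following sketch (our own, with a hypothetical configuration of C and q) computes η(p_xq) as the number of points of C coming before p_x in the direction v(q).

```python
import numpy as np

# Hypothetical configuration: C in convex position on a circle, q outside its hull.
C = np.array([[np.cos(a), np.sin(a)]
              for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)])
q = np.array([4.0, 0.0])

# v(q): a direction for which q comes before every point of C
# (here it simply points from q towards the centroid of C).
v = C.mean(axis=0) - q
v /= np.linalg.norm(v)
assert np.all(C @ v > q @ v)

def eta(p_x):
    """Number of points of C strictly before p_x in the direction v(q)."""
    return int(np.sum(C @ v < p_x @ v))

print([eta(p) for p in C])   # values range from 0 to |C| - 1
```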
𝐂𝐂-segments.
By removal choice, we choose to flip the pairs of CC-segments last (except for the ones flipped in Lemma <ref>). As T is outside the convex hull of C, flipping two CC-segments does not create crossings with other segments (Lemma <ref>). Hence, we apply the algorithm from <cit.> to untangle the remaining segments using O(n) flips.
§ AUXILIARY LEMMA OF SECTION <REF>
In this section, we prove Lemma <ref> used in the proof of Lemma <ref>.
Recall that, in the proof of Lemma <ref>, we have a convex quadrilateral p_1p_2p_{2n}p_{2n-1} and a segment pq crossing the segments p_1p_2 and p_{2n}p_{2n-1} in this order when drawn from p to q, and we invoke Lemma <ref> to show that at least one of the segments among pp_{2n-1}, qp_1, qp_2 intersects all the segments of S.
Before proving Lemma <ref>, we detail how to apply it to this context.
Lemma <ref> applied to the segment pq and the triangle p_1p_2p_{2n-1} asserts that at least one of the following pairs of segments crosses: pp_{2n-1}, p_1p_2; or qp_1, p_2p_{2n-1}; or qp_2, p_1p_{2n-1}.
If the segments pp_{2n-1}, p_1p_2 cross, then we are done.
If the segments qp_1, p_2p_{2n-1} cross, then the segments qp_1, p_{2n}p_{2n-1} also cross and we are done.
If the segments qp_2, p_1p_{2n-1} cross, then the segments qp_2, p_{2n}p_{2n-1} also cross and we are done.
Next, we state and prove Lemma <ref>.
For any triangle abc, for any segment pq intersecting the interior of the triangle abc, there exists a segment s ∈{pa,pb,pc,qa,qb,qc} that intersects the interior of the triangle abc.
If all a,b,c,p,q are in convex position, then p and the point among a,b,c that is not adjacent to p on the convex hull boundary define the segment s. Otherwise, since p,q are not adjacent on the convex hull boundary, assume without loss of generality that a is not a convex hull vertex and p,b,q,c are the convex hull vertices in order. Then, either ap or aq intersects bc.
§ AUXILIARY LEMMAS OF SECTION <REF>
In this section, we prove Lemma <ref> and Lemma <ref> used in the proof of Theorem <ref>.
Recall that, in the proof of Theorem <ref>, we define a set L of lines as follows.
For each point q ∈ T, we have two lines ℓ_1, ℓ_2 ∈ L that are the two tangents of the convex hull of C that pass through q.
When flipping a TTO-segment q_1q_2 with another segment q_3p with q_3 ∈ T (p may be in T or in C), we make the insertion choice of creating a TTO-segment q_1q_3 such that there exists a line ℓ∈ L whose potential λ(ℓ) decreases.
We invoke Lemma <ref> and Lemma <ref> to show that such a line ℓ always exists.
Indeed, by Lemma <ref>, it is enough to show that there exists a line ℓ∈ L containing one of the points q_1,q_2,q_3 that crosses one of the segments q_1q_2 or q_3p. This is precisely what Lemma <ref> shows.
Next, we state and prove Lemma <ref> and Lemma <ref>.
Consider two crossing segments p_1p_2,p_3p_4 and a line ℓ containing p_1 and crossing p_3p_4.
Then, one of the two pairs of segments p_1p_3,p_2p_4 or p_1p_4,p_2p_3 does not cross ℓ.
In other words, there exists an insertion choice to flip p_1p_2,p_3p_4 such that the number of segments crossing ℓ decreases.
Straightforward.
Consider a closed convex body B and two crossing segments q_1q_3,q_2q_4 whose endpoints q_1,q_2,q_3 are not in B, and whose endpoint q_4 is not in the interior of B.
If the segment q_1q_3 does not intersect the interior of B, then at least one of the six lines tangent to B and containing one of the endpoints q_1,q_2,q_3 is crossing one of the segments q_1q_3,q_2q_4.
(General position is assumed, meaning that the aforementioned six lines are distinct, i.e., each line does not contain two of the points q_1,q_2,q_3,q_4.)
For all i ∈{1,2,3}, let ℓ_i and ℓ_i' be the two lines containing q_i and tangent to B.
By contraposition, we assume that none of the six lines ℓ_1,ℓ_1',ℓ_2,ℓ_2',ℓ_3,ℓ_3' crosses one of the segments q_1q_3,q_2q_4. In other words, we assume that the six lines are tangent to the convex quadrilateral q_1q_2q_3q_4.
It is well known that, if m ≥ 5, then any arrangement of m lines or more admits at most one face with m edges (see <cit.> for example).
Therefore, B is contained in the same face of the arrangement of the six lines as the quadrilateral q_1q_2q_3q_4.
Let p_1 (respectively p_1') be a contact point between the line ℓ_1 (respectively ℓ_1') and the convex body B.
The segment p_1p_1' crosses the segment q_1q_3 and is contained in B by convexity, concluding the proof by contraposition.
|
http://arxiv.org/abs/2307.01016v1 | 20230703134527 | Monitoring the large-scale magnetic field of AD~Leo with SPIRou, ESPaDOnS and Narval. Toward a magnetic polarity reversal? | [
"S. Bellotti",
"J. Morin",
"L. T. Lehmann",
"C. P. Folsom",
"G. A. J. Hussain",
"P. Petit",
"J. F. Donati",
"A. Lavail",
"A. Carmona",
"E. Martioli",
"B. Romano Zaire",
"E. Alecian",
"C. Moutou",
"P. Fouque",
"S. Alencar",
"E. Artigau",
"I. Boisse",
"F. Bouchy",
"C. Cadieux",
"R. Cloutier",
"N. Cook",
"X. Delfosse",
"R. Doyon",
"G. Hebrard",
"O. Kochukhov",
"G. Wade"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Towards a magnetic polarity reversal?
Near-infrared Zeeman-Doppler imaging of AD Leo with SPIRou
Institut de Recherche en Astrophysique et Planétologie,
Université de Toulouse, CNRS, IRAP/UMR 5277,
14 avenue Edouard Belin, F-31400, Toulouse, France
[email protected]
Science Division, Directorate of Science,
European Space Research and Technology Centre (ESA/ESTEC),
Keplerlaan 1, 2201 AZ, Noordwijk, The Netherlands
Laboratoire Univers et Particules de Montpellier,
Université de Montpellier, CNRS,
F-34095, Montpellier, France
Tartu Observatory,
University of Tartu,
Observatooriumi 1, Tõravere, 61602 Tartumaa, Estonia
Department of Physics and Astronomy,
Uppsala University,
Box 516, SE-75120 Uppsala, Sweden
Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France
Laboratório Nacional de Astrofísica, Rua Estados Unidos 154, 37504-364, Itajubá - MG, Brazil
Institut d'Astrophysique de Paris, CNRS, UMR 7095, Sorbonne Université, 98 bis bd Arago, 75014 Paris, France
Universidade Federal de Minas Gerais, Belo Horizonte, MG, 31270-901, Brazil
Université de Montréal, Département de Physique, IREX,
Montréal, QC H3C 3J7, Canada
Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France
Observatoire de Genève, Université de Genève, Chemin Pegasi, 51, 1290 Sauverny, Switzerland
Observatoire de Haute Provence, St Michel l'Observatoire, France
Department of Physics & Space Science,
Royal Military College of Canada,
PO Box 17000 Station Forces, Kingston, ON, Canada K7K 0C6
Department of Physics & Astronomy,
McMaster University,
1280 Main St West, Hamilton, ON, L8S 4L8, Canada
One clear manifestation of dynamo action on the Sun is the 22-yr magnetic cycle, exhibiting a polarity reversal and a periodic conversion between poloidal and toroidal fields. For M dwarfs, several authors claim evidence of activity cycles from photometry and analyses of spectroscopic indices, but no clear polarity reversal has been identified from spectropolarimetric observations. These stars are excellent laboratories to investigate dynamo-powered magnetic fields under different stellar interior conditions, that is partly or fully convective.
Our aim is to monitor the evolution of the large-scale field of AD Leo, which has shown hints of a secular evolution from past dedicated spectropolarimetric campaigns. This is of central interest to inform distinct dynamo theories, contextualise the evolution of the solar magnetic field, and explain the variety of magnetic field geometries observed in the past.
We analysed near-infrared spectropolarimetric observations of the active M dwarf AD Leo taken with SPIRou between 2019 and 2020 and archival optical data collected with ESPaDOnS and Narval between 2006 and 2019. We searched for long-term variability in the longitudinal field, the width of unpolarised Stokes profiles, the unsigned magnetic flux derived from Zeeman broadening, and the geometry of the large-scale magnetic field using both Zeeman-Doppler imaging and principal component analysis.
We found evidence of a long-term evolution of the magnetic field, featuring a decrease in axisymmetry (from 99% to 60%). This is accompanied by a weakening of the longitudinal field (-300 to -50 G) and a correlated increase in the unsigned magnetic flux (2.8 to 3.6 kG). Likewise, the width of the mean profile computed with selected near-infrared lines manifests a long-term evolution corresponding to field strength changes over the full time series, but does not exhibit modulation with the stellar rotation of AD Leo in individual epochs.
The large-scale magnetic field of AD Leo manifested first hints of a polarity reversal in late 2020 in the form of a substantially increased dipole obliquity, while the topology remained predominantly poloidal and dipolar for 14 yr. This suggests that low-mass M dwarfs with a dipole-dominated magnetic field can undergo magnetic cycles.
Monitoring the large-scale magnetic field of AD Leo with SPIRou, ESPaDOnS, and Narval
S. Bellotti^{1,2} (0000-0002-2558-6920)
J. Morin^{3} (0000-0002-4996-6901)
L. T. Lehmann^{1} (0000-0001-5674-2116)
C. P. Folsom^{4} (0000-0002-9023-7890)
G. A. J. Hussain^{2} (0000-0003-3547-3783)
P. Petit^{1} (0000-0001-7624-9222)
J-F. Donati^{1} (0000-0001-5541-2887)
A. Lavail^{1,5} (0000-0001-8477-5265)
A. Carmona^{6} (0000-0003-2471-1299)
E. Martioli^{7,8} (0000-0002-5084-168X)
B. Romano Zaire^{9} (0000-0002-9328-9530)
E. Alecian^{6} (0000-0001-5260-7179)
C. Moutou^{1} (0000-0002-2842-3924)
P. Fouqué^{1} (0000-0002-1436-7351)
S. Alencar^{9}
E. Artigau^{10} (0000-0003-3506-5667)
I. Boisse^{11} (0000-0002-1024-9841)
F. Bouchy^{12} (0000-0002-7613-393X)
C. Cadieux^{10} (0000-0001-9291-5555)
R. Cloutier^{15} (0000-0001-5383-9393)
N. J. Cook^{10} (0000-0003-4166-4121)
X. Delfosse^{6} (0000-0001-5099-7978)
R. Doyon^{10} (0000-0001-5485-4675)
G. Hébrard^{8,13} (0000-0001-5450-7067)
O. Kochukhov^{5} (0000-0003-3061-4591)
G. A. Wade^{14}
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Studying stellar surface magnetic fields yields relevant insights into the internal structure of stars, as well as their essential role in stellar formation, evolution, and activity <cit.>. For cool stars, monitoring secular changes of the field's configuration provides useful feedback on the dynamo processes operating in the stellar interior and constraints on stellar wind models. The latter is fundamental to understanding atmospheric hydrodynamic escape of embedded planets since magnetic cycles modulate the star's activity level and thus its radiation output <cit.>.
The Sun is an important benchmark in this context: its long-term monitoring revealed a periodic variation in sunspot number, size, and latitude <cit.>, and a polarity reversal of the large-scale magnetic field over a timescale of 11 yr <cit.>. The proposed mechanism to reproduce these phenomena theoretically is the αΩ dynamo <cit.>, namely the combination of differential rotation and cyclonic turbulence at the interface between the radiative and convective zones, known as tachocline. A different model is the Babcock-Leighton mechanism, which describes the conversion from a toroidal to poloidal field via a poleward migration of bipolar magnetic regions <cit.>. However, there is still no model that can account for all the solar magnetic processes <cit.>.
For other stars, magnetic field measurements can be performed with two complementary approaches <cit.>. One is to model the Zeeman splitting in individual unpolarised spectral lines and estimate the total unsigned magnetic field, which is insensitive to polarity cancellation. The other is to apply tomographic techniques that use the polarisation properties of the Zeeman-split components to recover the orientation of the local field. In addition to these well-established methods, <cit.> show that fundamental properties of the large-scale field topology can be derived directly from the circularly polarised Stokes V time series using principal component analysis (PCA), without prior assumptions. This method allows us to qualitatively infer the predominant component of the field topology, as well as its complexity, axisymmetry, and evolution. Altogether, these observational constraints guide dynamo theories to a comprehensive description of the magnetic field generation and dynamic nature in the form of magnetic cycles <cit.>.
Over the last three decades, Zeeman-Doppler imaging (ZDI, ) has been applied to reconstruct the poloidal and toroidal components of stellar magnetic fields, providing evidence of a wide variety of the large-scale magnetic topologies <cit.>. Among rapidly rotating cool stars, the partly convective ones with masses above 0.5 M_⊙ tend to have moderate, predominantly toroidal large-scale fields generally featuring a non-axisymmetric poloidal component <cit.>. Those with masses between 0.2 M_⊙ and 0.5 M_⊙ – close to the fully convective boundary at 0.35 M_⊙ <cit.> – generate stronger large-scale magnetic fields, dominated by a poloidal and axisymmetric component. For fully convective stars with M<0.2 M_⊙, spectropolarimetric analyses have revealed a dichotomy of field geometries: either strong, mostly axisymmetric dipole-dominated or weak, non-axisymmetric multipole-dominated large-scale fields are observed <cit.>. The latter findings could be understood either as a manifestation of dynamo bistability <cit.>, that is two dynamo branches that coexist over a range of stellar rotation periods and masses, or of long magnetic cycles, implying that different topologies correspond to different phases of the cycle <cit.>. Yet, no firm conclusion has been reached. In parallel, studies relying on the analysis of unpolarised spectra have shown that the average (unsigned) surface magnetic field of cool stars follows a classical rotation-activity relation including a non-saturated and a saturated (or quasi-saturated) regime, without a simple relation with the large-scale magnetic geometry <cit.>. Similarly, recent dynamo simulations conducted by, for instance, <cit.> confirm that the influence of rotation on convective motions alone could not explain the observed variety of magnetic geometry. Only in the case of fully convective very fast rotators, <cit.> found that the strongest average fields were measured for stars with large-scale dipole-dominated fields. <cit.> show that the fraction of magnetic energy contained in the large-scale field component is also the highest for these stars.
Cyclic trends for Sun-like stars were found via photometric and chromospheric activity (i.e. Ca II H&K lines) monitoring, and timescales both shorter (e.g. 120 d for τ Boo) and longer (≃ 20 yr for HD 1835) than the solar magnetic cycle were reported <cit.>. Moreover, polarity flips of the large-scale field were detected for a handful of stars based on optical spectropolarimetric observations <cit.>. For M dwarfs, numerous studies relying on photometry and spectroscopic indices claimed evidence of activity cycles <cit.>, and radio observations suggest the occurrence of polarity reversals at the end of the main sequence <cit.>, but no polarity reversal has been directly observed with spectropolarimetry so far. This motivates long-term spectropolarimetric surveys, to reveal secular changes in the field topology and shed more light on the dynamo processes in action.
A well-known active M dwarf is AD Leo (GJ 388), whose mass (0.42 M_⊙) falls at the boundary between the domains where toroidal- and dipole-dominated magnetic topologies have previously been identified, and thus represents an interesting laboratory to study stellar dynamos. <cit.> analysed the large-scale magnetic field from spectropolarimetric data sets collected with Narval at Télescope Bernard-Lyot in 2007 and 2008 and reported a stable, axisymmetric, dipole-dominated geometry. Later, <cit.> examined data collected with ESPaDOnS at Canada-France-Hawaii Telescope (CFHT) from 2012 and 2016, and showed an evolution of the field in the form of a global weakening (about 20%) and small-scale enhancement. The latter was quantitatively expressed by a decrease in the magnetic filling factor (from 13% to 7%), meaning that the field was more intense on local scales. No polarity reversal was reported on AD Leo <cit.>. The large-scale magnetic topology has remained stable since spectropolarimetric observations of AD Leo have been initiated (2007–2016): dominated by a strong axial dipole, the visible pole corresponding to negative radial field (magnetic field vector directed towards the star).
Here, we extend the magnetic analysis of AD Leo using both new optical ESPaDOnS observations collected in 2019 and near-infrared spectropolarimetric time series collected with SPIRou at CFHT in 2019 and 2020 under the SPIRou Legacy Survey (SLS), which adds to the previous optical data sets collected with ESPaDOnS and Narval between 2006 and 2016. The aim is to apply distinct techniques to search for long-term variations that may or may not resemble the solar behaviour.
The paper is structured as follows: in Sec. <ref> we describe the observations performed in the near-infrared and optical domains, in Sec. <ref> we outline the temporal analysis of the longitudinal magnetic field, the Full-Width at Half Maximum (FWHM) of the Stokes I profile, and the total magnetic flux inferred from Zeeman broadening modelling. Then, we describe the magnetic geometry reconstructions by means of ZDI and PCA. In Sec. <ref> we discuss the wavelength dependence of magnetic field measurements and in Sec. <ref> we present our conclusions.
§ OBSERVATIONS
AD Leo is an M3.5 dwarf with V and H band magnitudes of 9.52 and 4.84, respectively <cit.>, at a distance of 4.9651±0.0007 pc <cit.>. Its age was estimated to be between 25 and 300 Myr by <cit.>. AD Leo has a rotation period of 2.23 days <cit.> and an inclination i = 20^∘, implying an almost pole-on view <cit.>. Its high activity level is seen in frequent flares <cit.> and quantified by an X-ray-to-bolometric luminosity ratio (log(L_X/L_bol)) of -3.62 <cit.> and a mean CaII H&K index (logR'_HK) of -4.00 <cit.>.
AD Leo's mass is 0.42 M_⊙ <cit.>, which places it above the theoretical fully convective boundary at 0.35 M_⊙ <cit.>. The latter value is in agreement with observations, as it has been invoked to explain the dearth of stars with M_G∼10.2, known as Gaia magnitude gap <cit.>. However, it is not an absolute limit: age <cit.> and metallicity affect the depth of the convective envelope <cit.>, and the presence of strong magnetic fields quenches convection and could push the theoretical boundary towards later spectral type <cit.>.
§.§ Near-infrared
A total of 77 spectropolarimetric observations in the near-infrared were collected with the SpectroPolarimètre InfraRouge (SPIRou) within the SLS. SPIRou is a stabilised high-resolution near-infrared spectropolarimeter <cit.> mounted on the 3.6 m CFHT atop Maunakea, Hawaii. It provides a full coverage of the near-infrared spectrum from 0.96 to 2μm at a spectral resolving power of R ∼ 70,000. Optimal extraction of SPIRou spectra was carried out with A PipelinE to Reduce Observations (APERO, v0.6.132), a fully automatic reduction package installed at CFHT <cit.>. The same data set was used in <cit.> to perform a velocimetric study and reject the hypothesis of a planetary companion by <cit.> in favour of activity-induced variations, in agreement with <cit.>.
Observations were performed in circular polarisation mode between February 2019 and June 2020, spanning 482 days in total; the journal of observations is available in Table <ref>. The mean airmass is 1.32 and the signal-to-noise ratio (S/N) at 1,650 nm per spectral element ranges from 68 to 218, with an average of 168. We applied least-squares deconvolution (LSD) to atomic spectral lines to derive averaged-line Stokes I (unpolarised) and V (circularly polarised) profiles <cit.>. This numerical technique assumes the spectrum to be the convolution between a mean line profile and a line mask, that is to say a series of Dirac delta functions centred at each absorption line in the stellar spectrum, with corresponding depths and Landé factors (i.e. sensitivities to the Zeeman effect at a given wavelength). The output mean line profile gathers the information of thousands of spectral lines and, because of the consequent high S/N, enables the extraction of polarimetric information from the spectrum. The adopted line mask was generated using the Vienna Atomic Line Database[<http://vald.astro.uu.se/>] <cit.> and a MARCS atmosphere model <cit.> with T_eff=3,500 K, log g= 5.0 [cm s^-2] and v_micro= 1 km s^-1. It contains 1,400 atomic lines between 950–2,600 nm and with known Landé factor (ranging from 0 to 3) and with depth larger than 3 % of the continuum level.
We discarded six observations in February 2019 since one optical component of the instrument was not working nominally, one observation in November 2019 because it was likely affected by a flare (the corresponding radial velocity is >8 sigma lower than the bulk of the measurements), and two observations in 2020 as they led to noisier (by a factor of 10) LSD profiles. Therefore, the data set analysed in this work comprises 68 polarimetric sequences, whose characteristics are reported in Table <ref>.
The near-infrared observations were performed monthly between 2019 and 2020, except for two gaps of approximately two and three months. There is also a gap of 1.5 months between the end of 2019 and the beginning of 2020. We thus split the time series into four epochs to maintain coherency of magnetic activity over short time scales and for clearer visualisation: 2019a (15th April 2019 to 21st June 2019, i.e. 2019.29 to 2019.47), 2019b (16th October 2019 to 12th December 2019, i.e. 2019.79 to 2019.95), 2020a (26th January 2020 to 12th March 2020, i.e. 2020.07 to 2020.19), and 2020b (8th May 2020 to 10th June 2020, i.e. 2020.35 to 2020.44).
The near-infrared domain covered by SPIRou is polluted by strong and wide telluric bands due to Earth's atmospheric absorption. Their contribution to the stellar spectra is corrected using a telluric transmission model (built from observations of standard stars collected since the start of SPIRou operations, and using the Transmissions of the AtmosPhere for AStronomical data (TAPAS) atmospheric model) and a PCA method implemented in the pipeline <cit.>. To account for potential residuals in the telluric correction, we ignored the following intervals of the spectrum when computing the LSD profiles: [950, 979], [1116, 1163], [1331, 1490], [1790, 1985], [1995, 2029], [2250, 2500] nm. These intervals correspond to H_2O absorption regions, with transmission typically smaller than 40%. We assessed whether removing these telluric intervals optimises the quality of the Stokes V profiles. In a first test, we searched for stellar absorption lines deeper than 75% of the continuum level and within ±100 km s^-1 of the telluric lines included in the transmission model. This approach allowed us to identify stellar lines that are contaminated by telluric lines throughout the year. When the telluric-affected spectral lines were removed, no significant improvement was reported in the final LSD profiles, indicating a robust telluric correction as already reported in <cit.>. In a second test, we extended the intervals by 25 and 50 nm or reduced them by 10 nm, and noticed an increase of the noise level in the LSD profiles of up to 20%, so we proceeded with the previous intervals.
Accounting for the ignored telluric intervals, the number of spectral lines used in LSD is 838. We show the LSD Stokes profiles for one example observation in Fig. <ref>. The average noise level in Stokes V for the entire time series is 1.6·10^-4 relative to the unpolarised continuum, similar to the optical domain <cit.>. We also note that the profiles are broader than in the optical by more than 10 km s^-1, owing to a stronger Zeeman effect in the near-infrared domain <cit.>.
§.§ Optical
For most of the analyses presented here, we considered all archival observations collected with ESPaDOnS and Narval, and studied previously in <cit.> and <cit.>. We also included six new observations taken in November 2019 (from 2019.87 to 2019.89) with ESPaDOnS for CFHT programme 19BC06, PI A. Lavail (reported in Table <ref>). They are contemporaneous to the SPIRou ones for the same period, hence enabling us to study the dependence of the measured magnetic field strength on the wavelength domain employed (see Sec. <ref>).
ESPaDOnS is the optical spectropolarimeter on the 3.6 m CFHT located atop Mauna Kea in Hawaii, and Narval is the twin instrument on the 2 m TBL at the Pic du Midi Observatory in France <cit.>. The data reduction was performed with the pipeline <cit.>, and the reduced spectra were retrieved from the PolarBase archive <cit.>.
The LSD profiles were computed similarly to the near-infrared case, but using an optical VALD mask containing 3330 lines in the range 350–1080 nm and with depths larger than 40% of the continuum level, similar to <cit.>. The number of lines used is 3240 and accounts for the removal of the following wavelength intervals, which are affected by telluric lines or in the vicinity of Hα: [627, 632], [655.5, 657], [686, 697], [716, 734], [759, 770], [813, 835], and [895, 986] nm. For the 2019 observations, the average noise in Stokes V is 3·10^-4 relative to the unpolarised continuum.
In the next sections, the near-infrared and optical observations will be phased with the following ephemeris:
HJD = 2458588.7573 + P_rot· n_cyc ,
where we used the first SPIRou observation taken in April 2019 as reference, P_rot=2.23 days is the stellar rotation period <cit.>, and n_cyc corresponds to the rotation cycle (see Table <ref>).
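For illustration, converting an observation date into a rotation cycle and phase with this ephemeris amounts to the short sketch below (the reference HJD and P_rot are the values quoted above).

```python
HJD0 = 2458588.7573    # reference: first SPIRou observation (April 2019)
PROT = 2.23            # stellar rotation period in days

def cycle_and_phase(hjd):
    """Rotation cycle n_cyc and rotational phase in [0, 1) for a given HJD."""
    n_cyc = (hjd - HJD0) / PROT
    return n_cyc, n_cyc % 1.0

print(cycle_and_phase(2458600.0))    # about 5.04 rotation cycles after HJD0
```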
§ MAGNETIC ANALYSIS
§.§ Longitudinal magnetic field
We measured the line-of-sight component of the magnetic field integrated over the stellar disk (B_l) for all the available observations, in optical (2006–2019) and near-infrared (2019–2020). Since B_l traces magnetic features present on the visible hemisphere, its temporal variations are modulated at the stellar rotation period and can be therefore used as a robust magnetic activity proxy <cit.>. Formally, it is computed as the first-order moment of Stokes V <cit.>:
B_l [G] = -2.14·10^11/(λ_0 g_eff c) · ∫ v V(v) dv / ∫ (I_c - I) dv ,
where λ_0 and g_eff are the normalisation wavelength and Landé factor of the LSD profiles, I_c is the continuum level, v is the radial velocity associated to a point in the spectral line profile in the star's rest frame and c the speed of light in vacuum. For the near-infrared and optical Stokes profiles, the normalisation wavelength and Landé factor are 1700 nm and 1.2144, and 700 nm and 1.1420, respectively. In accordance with the fact that near-infrared lines are broader than optical ones, the integration was carried out within ± 50 km s^-1 from line centre in the former case and ± 30 km s^-1 in the latter case, to include the absorption ranges of both Stokes I and V profiles.
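Numerically, Eq. <ref> is a discrete first moment of Stokes V normalised by the equivalent width of Stokes I. A minimal sketch of such a computation is given below (our own illustration, not the authors' code; the trapezoidal integration is an arbitrary choice). For the near-infrared profiles one would use λ_0 = 1700 nm, g_eff = 1.2144 and a ±50 km s^-1 window, as quoted above.

```python
import numpy as np

def longitudinal_field(v, I, V, lambda0_nm, g_eff, Ic=1.0, vmax=50.0):
    """First-moment estimate of B_l (in gauss) from LSD Stokes I/V profiles.

    v          : velocity grid in km/s (in the stellar rest frame)
    I, V       : continuum-normalised LSD Stokes I and V profiles
    lambda0_nm : normalisation wavelength of the LSD profiles (nm)
    g_eff      : normalisation effective Lande factor
    vmax       : half-width of the integration window (km/s)
    """
    c_kms = 2.998e5
    m = np.abs(v) <= vmax
    numerator = np.trapz(v[m] * V[m], v[m])
    denominator = np.trapz(Ic - I[m], v[m])
    return -2.14e11 * numerator / (lambda0_nm * g_eff * c_kms * denominator)
```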
The list of measurements is reported in Table <ref>. The values are of constant sign (negative), which is expected when observing one polarity of a dipole almost aligned with the stellar rotation axis over the entire stellar rotation, especially for a star observed nearly pole-on as AD Leo <cit.>. The near-infrared measurements range between -263 and -46 G, with an average of -179 G and a median error bar of 15 G. The optical measurements range between -297 and -155 G, with an average of -233 G and a median error bar of 10 G. The lower error bar is likely due to the narrower velocity range over which the optical measurements are performed, since less noise is introduced in Eq. <ref>. A discussion about chromatic differences in the longitudinal field measurements is presented in Sec. <ref>.
We plot the temporal evolution of B_l in Fig. <ref>. In general, we note a secular weakening of the field strength over 14 yr, with an oscillation between 2016 and 2019 followed by a rapid decrease in strength (in absolute value). We also note that the intra-epoch dispersion increases for the last two epochs.
By phase-folding the near-infrared data at P_rot, we observe a systematic increase in the rotational modulation towards 2020b, meaning that the axisymmetry level of the field has likely decreased (see Fig. <ref>). For a first quantitative evaluation, we followed <cit.> and <cit.> to model the phase variations of the longitudinal field for a predominantly-dipolar magnetic configuration. Formally,
B_l [G] = 1/20 · (15+ε)/(3-ε) · B_p (cosβcos i + sinβsin i cos(2π p)) ,
with ε the limb darkening coefficient (set to 0.3), p the rotational phase, B_p the polar field strength of the dipole, i the stellar inclination, and β the obliquity between the magnetic and rotation axes. The results are listed in Table <ref>, for both the near-infrared and optical time series for completeness. The six optical observations in November 2019 have poor phase coverage (three of them are clustered around phase 0.9) and lead to a less reliable sine fit. Nevertheless, they are compatible with the 2019b fit curve.
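A possible implementation of this fit (a sketch only, using scipy; i is fixed to 20° and ε to 0.3 as in the text, while B_p and β are left free) could look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

INCL = np.radians(20.0)    # stellar inclination i
EPS = 0.3                  # linear limb-darkening coefficient

def bl_dipole(phase, Bp, beta_deg):
    """Longitudinal field of an inclined dipole as a function of rotational phase."""
    beta = np.radians(beta_deg)
    geometry = (np.cos(beta) * np.cos(INCL)
                + np.sin(beta) * np.sin(INCL) * np.cos(2.0 * np.pi * phase))
    return Bp / 20.0 * (15.0 + EPS) / (3.0 - EPS) * geometry

# With phase, bl and bl_err the phased measurements of one epoch:
# popt, pcov = curve_fit(bl_dipole, phase, bl, sigma=bl_err, p0=[-300.0, 20.0])
```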
These clues clearly indicate that the magnetic field of AD Leo is evolving, in agreement with <cit.>, and demonstrate the interest of long-term spectropolarimetric monitoring of active M dwarf stars.
§.§ The mean line width
The width of near-infrared spectral lines of stars with intense fields and low equatorial velocity such as AD Leo (v_esin(i) = 3 km s^-1) is sensitive to the Zeeman effect, given its proportionality to wavelength, field strength, and Landé factor. The rotationally-modulated line broadening correlates with the azimuthal distribution of the unsigned small-scale magnetic flux, a useful diagnostic for stellar activity contamination of radial velocities, as shown for the Sun by <cit.>. In this context, <cit.> adopted a selection of magnetically sensitive lines for the young star AU Mic and saw a correspondence between the variations of the RV and of the FWHM of the Stokes I profiles at the stellar rotation period. This confirmed the sensitivity of the FWHM to the distortions induced by magnetic regions on the stellar surface.
Here, we proceeded analogously in an attempt to connect modulations of the FWHM with variations of the large-scale field. We applied LSD on the near-infrared data using a mask of 417 lines characterised by g_eff>1.2, following <cit.>. The near-infrared time series was divided in four epochs as in Sec. <ref> for consistency.
In Table <ref>, we compare the phase variations of the FWHM when adopting the default and high-g_eff masks, and we inspect whether they are more compatible with a sine fit or a constant fit equal to the mean of the data set. In all cases, there is no clear rotational modulation of the data points, as the sine fit does not provide a better description (i.e. lower χ^2_r) of the variations than the constant fit. This is confirmed by a quick inspection of the periodogram applied to the FWHM data for each individual epoch.
The change in χ^2_r between the sine and the constant models is not statistically significant. The observed variations are attributable to dispersion, as illustrated in Fig. <ref>. We observed that the FWHM is systematically larger in all epochs for the high-g_eff mask, as expected given the linear dependence of the Zeeman effect on g_eff, and the dispersion is between 1.8 and 3.0 times larger. The lack of rotational modulation prevents us from searching for correlations with other quantities such as RV and B_l as done in <cit.>.
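The comparison described above can be reproduced schematically as follows (a sketch with our own choices: a Gaussian fit as the FWHM estimator and a first-order harmonic as the sine model; the paper does not prescribe these exact implementations).

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm_gaussian(v, I):
    """FWHM (km/s) of an LSD Stokes I profile, estimated from a Gaussian fit."""
    gauss = lambda x, depth, v0, sigma: 1.0 - depth * np.exp(-0.5 * ((x - v0) / sigma) ** 2)
    popt, _ = curve_fit(gauss, v, I, p0=[1.0 - I.min(), v[np.argmin(I)], 5.0])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])

def reduced_chi2(y, yerr, model, n_par):
    return np.sum(((y - model) / yerr) ** 2) / (y.size - n_par)

def compare_models(phase, fwhm, fwhm_err):
    """Reduced chi^2 of a constant model versus a sine model phased at P_rot."""
    const = np.full_like(fwhm, np.average(fwhm, weights=1.0 / fwhm_err**2))
    sine = lambda p, a, b, c: a + b * np.cos(2 * np.pi * p) + c * np.sin(2 * np.pi * p)
    popt, _ = curve_fit(sine, phase, fwhm, sigma=fwhm_err, p0=[fwhm.mean(), 1.0, 1.0])
    return (reduced_chi2(fwhm, fwhm_err, const, 1),
            reduced_chi2(fwhm, fwhm_err, sine(phase, *popt), 3))
```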
From Fig. <ref>, we also noticed an evident long-term evolution of the mean FWHM. Such evolution has a moderate correlation (Pearson R coefficient of 0.5) with the variations of the mean B_l for the same epochs, meaning that the FWHM is a reasonable proxy to trace long-term evolution of the field. This is consistent with the recent <cit.> analysis of AU Mic. When using the default mask, the mean FWHM oscillated from 19 km s^-1 in 2019a to 21 km s^-1 in 2019b and 2020a, and back to 19 km s^-1 in 2020b. As expected, such oscillation is enhanced when considering the magnetically sensitive lines and goes from 23 km s^-1 in 2019a to 27 km s^-1 in 2019b and 2020a, and back to 22 km s^-1 in 2020b. We performed the same analysis with low-Landé factor lines (i.e. g_eff<1.2 and 406 lines) and noticed no appreciable variation of the mean FWHM, since it remained stable at ∼15 km s^-1. A view of the Stokes I profiles computed with the three different line lists can be found in Appendix <ref>.
The FWHM analysis was also carried out on the ESPaDOnS and Narval data between 2006 and 2019. When using low-g_eff lines, the mean width of Stokes I is reasonably stable around 9.7 km s^-1, stressing their potential for precise radial velocity measurements. The full (high-g_eff) mask yields a mean value at 10 km s^-1 (12 km s^-1) between 2006 and 2012, which then increases to 11 km s^-1 (13 km s^-1) in 2016 and 2019. Such long-term evolution is only moderate compared to the one seen in the near-infrared time series. The entire evolution is illustrated in Fig. <ref>.
The difference between the mean FWHM of low-g_eff lines in optical (∼9.5 km s^-1) and near-infrared (∼16 km s^-1) can be attributed to lines that have non-zero Landé factor. Indeed, the quadratic differential broadening between the two domains is 11.4 km s^-1, corresponding to a total magnetic field of 2.5 kG for a line at 1700 nm with g_eff=0.96 (the normalisation values of the low-g_eff mask). Although we assumed that the Zeeman effect for low-g_eff lines is negligible in the optical with this exercise, the inferred value of total magnetic field is reasonably consistent with what is reported in the literature <cit.>, indicating that the magnetic field accounts mostly for the difference in width between optical and near-infrared low-g_eff lines.
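The arithmetic behind the 2.5 kG estimate can be reproduced as follows (a back-of-the-envelope sketch with our own assumptions: the broadenings add quadratically and the extra width is identified with the full σ-component splitting; the input widths are the low-g_eff values quoted above).

```python
import numpy as np

fwhm_nir = 15.0     # km/s, low-g_eff mask in the near-infrared
fwhm_opt = 9.7      # km/s, low-g_eff mask in the optical

# Quadratic differential broadening between the two domains
dv_extra = np.sqrt(fwhm_nir**2 - fwhm_opt**2)      # ~11.4 km/s

# Zeeman splitting of one sigma component in velocity units:
#   delta_v [km/s] = 4.67e-13 * lambda[A] * g_eff * B[G] * c[km/s]
# so the full sigma-sigma separation is 2 * delta_v.
lam_A, g_eff, c_kms = 17000.0, 0.96, 2.998e5
B = dv_extra / (2.0 * 4.67e-13 * lam_A * g_eff * c_kms)

print(f"{dv_extra:.1f} km/s -> B ~ {B/1e3:.1f} kG")   # ~11.4 km/s -> ~2.5 kG
```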
Our analysis confirms that the FWHM is capable of tracing secular changes in the total, unsigned magnetic field, which could be used to better understand stellar activity jitter. Activity-mitigating techniques would benefit from this information even for low-inclination stars such as AD Leo, for which the phase modulation of the radial velocity jitter is more difficult to constrain. At the same time, the analysis highlights the presence of short-term variability producing scatter and that is not rotationally modulated.
§.§ Modelling Zeeman broadening
To further investigate the small-scale magnetic field of AD Leo, we conducted a Zeeman broadening analysis. For this analysis we used the full set of new and archival data, both in the near-infrared from SPIRou and in the optical from ESPaDOnS and Narval. All the data sets require a telluric correction, since telluric lines are present in much of the SPIRou wavelength range and in the red end of the ESPaDOnS and Narval range. For the SPIRou data we relied on the telluric correction from the pipeline (see Sec. <ref> for more details).
For the ESPaDOnS and Narval data, we made a telluric correction using the molecfit[<https://www.eso.org/sci/software/pipelines/>] pipeline, originally designed for handling spectra from ESO instruments <cit.>. molecfit retrieves weather conditions and other relevant information at the time of observation and models the atmosphere in the line of sight. It performs radiative transfer and iteratively models the telluric component in the input spectrum while also fitting the continuum and the wavelength scale of the spectrum. It finally corrects telluric lines and provides a telluric-corrected output spectrum.
After telluric correction the spectra were re-normalised in the regions of interest using a low order polynomial fit through carefully selected continuum regions. A few ESPaDOnS and Narval spectra were affected by fringing effects, hence we adopted a higher-order polynomial fit to normalise to a flatter continuum. Finally, we discarded any observations where the telluric correction left a noticeable residual feature that was blended with the stellar lines of interest.
To characterise the magnetic field, we fitted synthetic spectra to the observed Stokes I spectra, incorporating both the Zeeman broadening and intensification effects. Synthetic spectra were calculated with Zeeman <cit.>, using model atmospheres from marcs <cit.>. Zeeman performs polarised radiative transfer including the Zeeman effect. However, a major limitation for M-dwarfs is that the programme does not currently include molecular lines, which are not typically used in Zeeman broadening analyses. Weak molecular lines are blended with many atomic lines in the spectra of M-dwarfs. With careful attention we identified a set of atomic lines suitable for AD Leo, with no evident distortion in the line shape by molecular blends. Thus the systematic error from this limitation is expected to be negligible, but the inclusion of molecular lines in the future would substantially simplify the selection of lines for Zeeman broadening analyses, as shown in the recent work of <cit.> and previously applied to AD Leo by <cit.>. To check the validity of the analysis presented here, a second analysis of the ESPaDOnS and Narval spectra was carried out with the SYNMAST code <cit.>. The analyses used nearly the same set of Ti i lines, and the results we obtained were consistent within uncertainty.
For the ESPaDOnS and Narval observations, we used the Ti i lines at 9675.54 Å (g_eff=1.35), 9688.87 Å (g_eff=1.50), 9705.66 Å (g_eff=1.26), 9728.40 Å (g_eff=1.00), 9743.61 Å (g_eff=0.00), and 9770.30 Å (g_eff=1.55). These lines have been used extensively for Zeeman broadening analysis <cit.> and have reliable oscillator strengths and Landé factors in VALD. These lines have relatively weak telluric blending, very little molecular blending, and a wide range of effective Landé factors.
For the SPIRou observations, we selected a set of lines using similar criteria, but also avoided lines with large pressure broadened wings, since small errors in the pressure broadening could cause larger errors in the Zeeman broadening estimation. In order to maximise the range of available effective Landé factors we used the Fe i lines at 11422.32 Å (g_eff=1.98), 11593.59 Å (g_eff=2.50), 11607.57 Å (g_eff=1.66), 11638.26 Å (g_eff=1.58), and 11783.26 Å (g_eff=1.14) and the Ti i lines at 11892.88 Å (g_eff=0.75), 12821.67 Å (g_eff=1.26), 12831.44 Å (g_eff=0.67), 12847.03 Å (g_eff=1.08), 22232.84 Å (g_eff=1.66), and 22310.61 Å (g_eff=2.50). This provides multiple lines with both high and low effective Landé factors, but uses lines from two different ions, which we compensated for by using the Ti and Fe abundances as independent free parameters in our analysis.
There are a few other Ti i lines near 22000 Å with large effective Landé factors, but there is a relatively severe blending by many weak molecular lines in this region, hence we did not include these lines.
Line data were extracted from VALD. In these line lists, experimental oscillator strengths for Ti i lines were from <cit.> <cit.>, except for 22232.84 Å from <cit.>, and a theoretical value for 22310.61 Å from the compilation of R. L. Kurucz[<http://kurucz.harvard.edu>]. Oscillator strengths for the Fe i lines were taken from <cit.>.
The total magnetic field was modelled with a grid of field strengths and filling factors for the fraction of the surface area with the corresponding field strength <cit.>. A uniform radial orientation was assumed for the magnetic field, since Stokes I spectra have little sensitivity to magnetic field orientation. This is also a reasonable assumption given the magnetic field maps reconstructed in Sec. <ref>. For the optical spectra, we adopted magnetic fields of 0, 2, 4, 6, 8, and 10 kG, and derived their filling factors. For the SPIRou spectra, we used a finer grid of 1 kG from 0 to 10 kG, since the sensitivity to Zeeman effect is larger at longer wavelengths, and a finer grid is needed to produce smooth line profiles.
To derive the magnetic filling factors we applied an MCMC-based approach, using the emcee package <cit.> integrated with Zeeman. The filling factors for B > 0 were treated as free parameters, with the filling factor for B=0 (f_B=0) calculated from 1 - ∑_B>0 f_B. Proposed steps in the chain where ∑_B>0 f_B > 1 were rejected to ensure that the filling factors sum to unity. The projected rotational velocity v_esin(i) and the abundance of Ti (and Fe for SPIRou) were included as free parameters in the MCMC process.
The modelling used T_eff = 3500 K, log g = 5.0, and a microturbulence of 1 km s^-1.
The chemical abundances may be unreliable since they do not account for elements bound in molecules, making them effectively nuisance parameters in this study. However, this provides the code with flexibility for fitting line strength and width in the absence of a magnetic field, reducing the sensitivity of the results to small errors in non-magnetic parameters.
Example fits resulting from the MCMC-based approach are shown in Figs. <ref> for ESPaDOnS and SPIRou. The shapes of the posterior distributions are generally similar for all observations using the same sets of lines, and are illustrated in Appendix <ref>. There are important anti-correlations between filling factors with adjacent magnetic field strengths, and weak correlations between filling factors spaced by two bins in field strength. Therefore, some caution should be taken in interpreting the uncertainties from this and similar analyses. The filling factor for B=0 and the quantity ∑_i B_i f_i summed over magnetic field bins (abbreviated to ∑ Bf) were calculated from samples in the MCMC chain. The resulting distribution was used to provide the median value from the 50th percentile, with uncertainties from the 16th and 84th percentiles.
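For concreteness, these derived quantities can be obtained from the MCMC samples roughly as follows (a schematic sketch; `chain` stands for the flattened emcee samples of the filling factors for B > 0, and the field grid is the SPIRou one described above).

```python
import numpy as np

B_grid = np.arange(1, 11) * 1000.0      # field-strength bins in gauss (1 to 10 kG)

def summarise(chain):
    """chain: array of shape (n_samples, n_bins) with the filling factors f_B for B > 0."""
    f_zero = 1.0 - chain.sum(axis=1)                 # filling factor of the B = 0 component
    Bf = (chain * B_grid).sum(axis=1)                # sum_i B_i f_i, the unsigned magnetic flux
    lo, med, hi = np.percentile(Bf, [16, 50, 84])
    return {"sum_Bf": (med, med - lo, hi - med), "f_B=0": np.median(f_zero)}
```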
The results for all observations, and averages for each epoch, are presented in Fig. <ref>, and values for each epoch are provided in Tables <ref> and <ref>. The quantity ∑ Bf (sometimes called the magnetic flux, and analogous to a magnetic flux density) ranges between 2.6 kG and 3.7 kG, which is consistent with previous measurements <cit.>. We observe a long-term increase of the average Σ Bf from 2.8 kG in 2007 to 3.6 kG in 2016, followed by a weakening towards 3.4 kG with the latest SPIRou observations. Such behaviour correlates with the long-term decrease (in absolute value) of the longitudinal field (Pearson coefficient R=0.6, excluding the 2006 data point). Likewise, the average Σ Bf time series correlates with the average FWHM of Stokes I, demonstrating its capability at tracing the evolution of the total, unsigned magnetic field <cit.>.
The Σ Bf values for the ESPaDOnS optical data acquired in 2006 fall out from this trend. This could stem from residuals of the telluric correction blending with the lines used in the modelling and/or instrumental effects such as fringing, for which the results are sensitive to the choice of continuum normalisation. Attempts were made to correct for these potential systematic errors: rejecting observations where the telluric correction left residual features in the used portion of the spectrum, and careful continuum normalisation to remove any weak fringing. However, it is possible these attempts were not fully successful, and thus the departure from the general trend of the 2006 result should be treated with caution.
§.§ Magnetic imaging
We applied ZDI to the SPIRou and 2019 ESPaDOnS time series of Stokes V profiles to recover the large-scale magnetic field at the surface of AD Leo. The magnetic geometry is modelled as the sum of a poloidal and a toroidal component, which are both expressed through spherical harmonics decomposition <cit.>. The algorithm compares observed and synthetic Stokes V profiles iteratively, fitting the spherical harmonics coefficients α_ℓ,m, β_ℓ,m, and γ_ℓ,m (with ℓ and m the degree and order of the mode, respectively), until they match within a target reduced χ^2. Because the inversion problem is ill-posed, a maximum-entropy regularisation scheme is applied to obtain the field map compatible with the data and with the lowest information content (for more details see ).
In practice, we used the code described in <cit.>. In its initial version, the code performed tomographic inversion under weak-field approximation, for which Stokes V is proportional to the first derivative of Stokes I over velocity <cit.>. For the present study, we have implemented the Unno-Rachkovsky's solutions to polarised radiative transfer equations in a Milne-Eddington atmosphere <cit.> and incorporated the filling factor formalism outlined in <cit.> and <cit.>. The implementation of Unno-Rachkovsky's solutions was motivated by the need of a more general model for the observed Stokes V profiles. Near-infrared observations of stars with intense magnetic fields are indeed more susceptible to distortions and broadening due to an enhanced Zeeman effect.
As input parameters for ZDI, we assumed i= 20^∘, v_esin(i)= 3 km s^-1, P_rot= 2.23 days, and solid body rotation. We adopted a linear limb darkening coefficient in H band of 0.3 and V band of 0.7 <cit.>. We set the maximum degree of the harmonic expansion ℓ_max= 8 (considering the low v_esin(i)) and allowed an entropy weighting scheme proportional to ℓ during ZDI inversion, to favour simple geometries as in <cit.>. The SPIRou near-infrared time series was split similarly to Sec. <ref>: 2019a (21 observations over 30 cycles), 2019b (21 observations over 26 cycles) 2020a (30 observations over 20 cycles), and 2020b (18 observations over 15 cycles). The Stokes V time series of SPIRou and 2019 ESPaDOnS data are shown in Fig. <ref>.
For the 2019a, 2019b, 2020a, and 2020b epochs, we fitted the Stokes V profiles to a χ^2_r level of 1.2, 1.0, 1.1, and 1.1 from an initial value of 10.3, 15.8, 14.7, and 8.5, respectively. For the ESPaDOnS 2019 time series, we fitted down to χ^2_r=2.5 from an initial value of 156.3. We attempted to merge the 2019b and 2020a epochs and reconstruct a single map, since they are separated by the shortest time gap. The quality of the final model is deteriorated (χ^2_r=1.3) with respect to the two epochs separately, but the corresponding map and magnetic energies are consistently recovered. We therefore kept these two epochs separate.
From Fig. <ref>, it is evident that the near-infrared Stokes V profiles manifest structures and stochastic variability in both lobes. This variability does not extend into the continuum, since the residuals with respect to the mean profile are compatible with the noise level there; it is not rotationally modulated, and it is not exhibited by Stokes N. The presence of such variability was already suggested by the phase-folded variations in B_l (Fig. <ref>), as some data points featured a departure from a pure rotational modulation. Likewise, the residuals of the Stokes I profiles show clear variability, but the application of a 2D periodogram does not reveal any significant periodicity. While our ZDI model is capable of describing the general shape of the Stokes V profiles, it is limited in reproducing these structures and in capturing all the information present. These considerations are also valid for the optical observations in 2019, as the amplitude of Stokes V is not matched exactly by our ZDI model, and they overall translate into an underestimate of the field strength. This motivates further the use of the PCA method described by <cit.>, which is a data-driven approach offering a complementary view on the magnetic field evolution, as outlined in the next section.
We are able to constrain the filling factor f_V following a χ^2 minimisation prescription similar to <cit.>. We found f_V values oscillating between 9% in 2019a, 16% in 2019b and back to ∼11% in the remaining epochs, compatible with <cit.>, and larger by a factor of 1.7 than <cit.>. This would indicate a weakening of the local small-scale field since 2016, on top of a decrease in large-scale field intensity as seen in the reconstructions (Fig. <ref>).
The filling factor f_I was inspected by considering a grid of values between 0% and 100%; for each f_I value, we synthesised a time series of model Stokes I profiles, computed the corresponding time series of χ^2_r with the observations, and phase-folded the χ^2_r curve at P_rot. We then assessed at what value of f_I the χ^2_r curve would start manifesting rotational modulation, because it would indicate that certain model profiles deviate from the observations. We noticed that values above 30% deteriorate the fit of the profile core progressively, yielding variability and rotational modulation of the Stokes I profiles, which is not observed otherwise (see Sec. <ref>). Values of f_I=30% are three times larger than f_V, in agreement with <cit.>. Since the plausible f_I values are consistent with 0%, we adopted f_I=0% in the ZDI modelling.
The five maps of surface magnetic flux (one for each SPIRou epoch, and one for the ESPaDOnS 2019 epoch) are shown in Figure <ref> and their properties are reported in Table <ref>. In all cases, the configuration is predominantly poloidal, storing >95% of the magnetic energy. The main modes are dipolar and quadrupolar, as they account for 70-90% and 15-20% of the magnetic energy, respectively. We report a weakening of the mean field strength (⟨ B ⟩) by factors of 1.5 and 2.4 relative to the optical maps reconstructed by <cit.> and <cit.>, respectively. The most remarkable feature is the reduction of magnetic energy contained in the axisymmetric mode, going from > 99% in 2019a to 60% in 2020b, translating into an increase of the dipole obliquity relative to the rotation axis, from 3^∘ to 38^∘.
We note that the maximum field strength reconstructed with ZDI is between 1.2 and 2.4 times smaller than that obtained via Eq. <ref> <cit.>. Likewise, the magnetic field obliquity is underestimated, as illustrated in Fig. <ref>. On the one hand, this difference stems from the limitation of the Stokes V ZDI model, since it does not encompass the full amplitude of the two lobes for some observations; on the other hand, Eq. <ref> assumes a purely dipolar field, contrary to our reconstructions (the dipole accounts for 70-90% of the energy). Nevertheless, both approaches allow us to observe an evident evolution of the obliquity, featuring a rapid increase in the most recent epochs.
Finally, we merged the 2019b, 2020a and 2020b data sets and attempted a joint rotation period and differential rotation search following <cit.>. The results were inconclusive, likely due to the significant evolution of the surface magnetic field between each epoch.
The summary of the magnetic field's evolution is illustrated in Fig. <ref>. We performed ZDI reconstructions also for the archival ESPaDOnS and Narval data for consistency, finding reasonably compatible results with previous studies <cit.>. We observe a globally simple geometry (i.e. predominantly poloidal and dipolar) over 14 yr, with a decreasing strength. Our latest SPIRou observations revealed a clear evolution of the dipole obliquity in the form of a reduced axisymmetry, suggesting a potential dynamo magnetic cycle. These features are indeed compatible with the variations observed by <cit.> and <cit.> for the solar cycle.
§.§ Diagnosing the large-scale field using PCA
AD Leo is an ideal target for analysing large-scale field evolution with the data-driven PCA method recently presented by <cit.>, given its magnetic field strength and v_e sin(i). Principal component analysis allows us to uncover details about the stellar large-scale field directly from the LSD Stokes V profiles and to trace its magnetic field evolution across the observation run, without prior assumptions. Here, we analyse only the near-infrared time series, because the number of optical 2019 observations is not sufficient.
First, we can get insights about the star's axisymmetric large-scale field by analysing the mean Stokes V profile determined over all Stokes V LSD profiles (see <cit.> for further details). Fig. <ref> displays the mean profile and the decomposition into its antisymmetric and symmetric parts, denoting the poloidal and toroidal axisymmetric components, respectively. We clearly see that the mean profile is antisymmetric, which indicates a poloidal-dominated axisymmetric large-scale field. The amplitude of the symmetric part is comparable to the noise, and likely due to an artefact of uneven phase coverage rather than a true toroidal field signal <cit.>. Compared to the mean-subtracted Stokes V profiles, the amplitude of the mean profile is generally strong, marking a dominant axisymmetric field. However, we observe an increase in the amplitude of the mean-subtracted Stokes V in the last two epochs 2020a and 2020b, which provides a first hint towards a less axisymmetric configuration.
Second, the application of PCA to the mean-subtracted Stokes V profiles yields insights on the non-axisymmetric field <cit.>. To compute the mean-subtracted Stokes V profiles, we used the mean profile computed across all epochs, which allows a direct reflection of the epoch-to-epoch variations in the PCA coefficients (e.g. in amplitude and mean value). If the mean Stokes V profile were computed per epoch, we would miss such information, that is to say, the mean value of the coefficients would be centred for each epoch and the amplitudes could no longer be compared to each other. Fig. <ref> presents the first three eigenvectors and their corresponding coefficients for the mean-subtracted Stokes V profiles, separated by epoch and colour-coded by rotation cycle. The first eigenvector displays an antisymmetric shape proportional to the first derivative of the Stokes I profile and, together with the associated coefficient, scales mainly with the longitudinal magnetic field <cit.>. The second eigenvector shows a more symmetric shape, more closely related to the second derivative of the Stokes I profile, and describes the temporal evolution of the Stokes V profiles between the maxima of the longitudinal field. According to <cit.>, a strongly antisymmetric eigenvector traces the radial component and a symmetric eigenvector the azimuthal component for a dipole-dominated field that is strongly poloidal, which is the case for AD Leo. The third eigenvector features a signal as well (antisymmetric, and related to the third derivative of the Stokes I profile), which is detectable due to the high S/N of the data set, while the further eigenvectors are dominated by noise. Seeing three eigenvectors indicates that, even if the axisymmetric field is likely to be dominant, we are able to detect and analyse the non-axisymmetric field in great detail.
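The operations described above (mean-profile decomposition and PCA of the mean-subtracted profiles) reduce to a few lines of linear algebra. The sketch below is an illustrative numpy implementation, not the code used in the cited works; it assumes the LSD Stokes V time series is stored as a 2D array whose velocity grid is symmetric about the line centre.

```python
import numpy as np

def pca_stokes_v(stokes_v):
    """stokes_v: (n_obs, n_vel) array holding the LSD Stokes V time series on a
    velocity grid symmetric about the line centre.  Returns the mean profile,
    its antisymmetric/symmetric parts (proxies of the poloidal/toroidal
    axisymmetric field), the first PCA eigenvectors and their coefficients."""
    mean_profile = stokes_v.mean(axis=0)
    antisym = 0.5 * (mean_profile - mean_profile[::-1])   # poloidal axisymmetric proxy
    sym = 0.5 * (mean_profile + mean_profile[::-1])       # toroidal axisymmetric proxy
    residuals = stokes_v - mean_profile                    # same mean used for all epochs
    u, s, vt = np.linalg.svd(residuals, full_matrices=False)
    eigenvectors = vt                                      # rows: eigenprofiles in velocity space
    coefficients = u * s                                   # one coefficient per observation and mode
    return mean_profile, antisym, sym, eigenvectors[:3], coefficients[:, :3]
```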
The coefficients of the eigenvectors suggest an evolving large-scale field as their trend changes for every epoch, see Fig. <ref> 2nd-5th row.
In 2019a, the coefficients related to the first eigenvector show only a flat distribution around zero, implying a predominantly axisymmetric field.
For the following epochs, 2019b, 2020a and 2020b, we see sine-like trends of the first two coefficients with rotational phase. The amplitude increases from epoch to epoch, indicating a growing obliquity of the dipole-dominated large-scale field.
For the 2020b epoch, the obliquity becomes so large that the coefficients of the third eigenvector start to show a sine-like trend as well, which translates into a significant non-axisymmetric field.
Furthermore, the extremes of the coefficients associated with the antisymmetric and symmetric profiles (first and second eigenvector) for the same epoch feature an apparent phase shift of ≈ 0.25, which demonstrates that the dipolar component is poloidal dominated with little to no toroidal contribution <cit.>.
The extremes of the coefficients related to the antisymmetric eigenvector locate the pointing phase of the dipole <cit.>. For the last three epochs, the maximum of this coefficient occurs at a pointing phase of ≈ 0.3 for the northern pole of the dipole, and the sign of the eigenvector implies a negative polarity. The extremes of the coefficients occur at the same rotational phase throughout the whole observation run, designating a stable pointing phase of the dipole, in agreement with the B_ℓ measurements (see middle panel of Fig. <ref>).
By applying the PCA method on the time series of Stokes V <cit.>, we confirm that AD Leo features a dipolar large-scale field, whose obliquity increased during the latest epochs (2020a and 2020b). As the large-scale field became more non-axisymmetric, the pointing phase of the dipole remained stable.
§ ACHROMATICITY OF THE MAGNETIC FIELD
The impact of stellar magnetic activity on radial velocity measurements features a chromatic dependence stemming from a combination of magnetic field and spot temperature contrast <cit.>. Indeed, at near-infrared wavelengths the Zeeman broadening is expected to be stronger, while starspots contribute less owing to a lower contrast with the photosphere. The situation is reversed in the optical domain. For AD Leo, recent work by <cit.> demonstrated the strong chromatic behaviour of radial velocity jitter, the latter being significantly weaker in the near-infrared domain than in optical. The combination of these effects becomes increasingly important with the activity level of the star, since the number of spots would be correspondingly larger <cit.>, and it could possibly result in distinct contributions to the magnetic field strength, which can then be used to facilitate the modelling of stellar activity.
Fast-rotating stars are expected to feature high active latitudes and large polar spots <cit.>, because the Coriolis force would overcome the buoyancy force, making the flux tubes ascend parallel to the stellar rotation axis <cit.>. There are some cases, however, in which fast rotation does not correlate with the presence of a polar spot <cit.>. The fact that AD Leo is a moderate rotator observed nearly pole-on makes it an interesting case to investigate whether longitudinal field measurements are chromatic, reflecting the behaviour of an underlying spot.
Previous studies dedicated to the Sun have shown that the magnetic field strength measured in individual lines varies significantly <cit.>, and differences between optical and near-infrared domains have unveiled a dependence of the field strength on atmospheric height: the field increases while going towards deeper internal layers <cit.>. For other stars, <cit.> reported a chromatic difference in magnetic field strength for the moderately active K dwarf ε Eri, but attributed its origin to incomplete modelling of the spectral lines used for the Zeeman broadening analysis. More recently, no wavelength dependence of the field strength was reported, either for ε Eri <cit.> or for T Tauri stars <cit.>. The same conclusion was reached by <cit.> when computing longitudinal field values for the active M dwarf EV Lac using blue (<550 nm) and red (>550 nm) lines of an optical line list.
To investigate the longitudinal field chromaticity, we analyse the contemporaneous observations taken with SPIRou and ESPaDOnS in November 2019. We restrict the LSD computation to successive wavelength bins of the line mask, and evaluate the longitudinal field for each case. Including both optical and near-infrared domains, we considered 11 subsets of lines in the following ranges: [350,390], [390,430], [430,480], [480,550], [550,650], [650,1100], [950,1100], [1100,1400], [1400,1600], [1600,1800], [1800,2500] nm. The [650,1100] and [950,1100] nm ranges represent the red end of ESPaDOnS spectra and the blue end of SPIRou spectra, respectively. We adopt more wavelength regions than those presented in <cit.>, allowing a finer search for chromatic trends. The number of lines used varies between 100 and 1000 in the optical, and between 120 and 300 in the near-infrared (see Fig. <ref>). In addition, we compute LSD using a 50-line mask in the overlapping wavelength region of the ESPaDOnS and SPIRou spectra ([950,1050] nm).
Stokes I and V profiles were computed for the simultaneous SPIRou and ESPaDOnS epochs, namely 2019b and 2019, respectively. To increase the S/N and allow a more precise estimate of B_l, the profiles obtained with a specific line list subset and belonging to the same epoch were co-added. This is reasonable considering the marginal amplitude variation over the epochs examined and the unchanged polarity of Stokes V. The longitudinal field was then computed with Eq. <ref> using the specific normalisation wavelength and Landé factor of each line subset, and adapting the velocity integration range according to the width of the co-added Stokes V profile.
From Fig. <ref>, we observe no clear chromaticity of B_l. The distribution of field strength is flat around -200 G with a total scatter of 20 G. Such dispersion is mainly due to LSD computations with a low number of lines, which make the Stokes profile shapes more sensitive to variations in individual lines, blends, and residuals of the telluric correction. For the same reason, some profiles appear deformed and lead to evident outliers (see Fig. <ref>). For instance, the B_l value obtained from ESPaDOnS data in the spectral region overlapping with SPIRou is 100 G weaker (in absolute value) than the B_l value obtained from SPIRou data in the same wavelength region. This could be due to the low S/N at the very red edge of ESPaDOnS.
The case of [390,430] nm leads to a field value of -750 G, even though the Stokes profiles do not show any particular deformation. We attribute this behaviour to an imprecise continuum normalisation of the spectra, likely due to a challenging identification of the continuum level in the blue part of the spectrum, where M dwarfs feature forests of spectral lines. The effect is a smaller depth (and equivalent width) of the Stokes I profile relative to the other cases, which artificially increases the value of the field (in absolute value). Overall, although the [350,390] and [390,430] nm bins contain more than half of the lines in the optical mask, their weight in the LSD computation is small <cit.>, making their effect in the computation of B_l with the full mask negligible.
We repeated the same exercise for the other SPIRou epochs and found a similar behaviour, the only difference being the 2020b data points shifting upwards because of the global weakening of the field. A possible implication of the lack of a chromatic trend may be the absence of a polar spot on AD Leo. This would be justified considering that other faster-rotating M dwarfs like V374 Peg <cit.> and HK Aqr <cit.> do not show polar spots.
A potential source of chromaticity for B_l values may come from limb darkening. This radial gradient in stellar brightness over the visible disk can be expressed as a linear function of the angle θ between the line of sight and the normal to a surface element:
I/I_0 = 1 - ε(1 - cosθ),
where I_0 is the brightness at disk centre (θ=0^∘) and ε is the limb darkening coefficient. <cit.> show that ε decreases with wavelength, being 0.7 in the V band and 0.3 in the H band. The linear limb darkening law in Eq. <ref> is the one implemented in the ZDI reconstruction <cit.>.
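For reference, the linear law of Eq. <ref> and a simple wavelength interpolation of ε between the quoted V- and H-band values can be written as follows; the anchor wavelengths (≈550 and ≈1650 nm) are indicative assumptions, not values taken from the cited work.

```python
import numpy as np

def limb_darkening(cos_theta, epsilon):
    """Linear law of Eq. <ref>: I/I_0 = 1 - epsilon * (1 - cos(theta))."""
    return 1.0 - epsilon * (1.0 - cos_theta)

def epsilon_at(wavelength_nm):
    """Interpolate the coefficient between the V band (~550 nm, eps ~ 0.7)
    and the H band (~1650 nm, eps ~ 0.3)."""
    return float(np.interp(wavelength_nm, [550.0, 1650.0], [0.7, 0.3]))
```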
Owing to the stronger limb darkening in the optical than in the near-infrared, there is the possibility of additional polarity cancellation in the latter domain, which would lead to weaker field measurements. For the specific case of AD Leo, the low stellar inclination makes the equator appear at the limb, and near-infrared observations would be more sensitive to this region. In particular, the sign of large-scale dipolar magnetic field lines exiting the pole would cancel out more with those at the equator, compared to optical observations.
To verify this, we 1) linearly interpolated the limb darkening coefficients in <cit.> at the wavelengths examined for the thermal contrast test (see Fig. <ref>), 2) synthesised Stokes profiles for the same coefficients assuming an axisymmetric dipole of 1 kG seen pole-on (akin to AD Leo in 2019a) and infinite S/N, and 3) computed the associated field values with Eq. <ref>. The results are illustrated in Fig. <ref>. We observe a small (7%) weakening of the field from optical to near-infrared, which is overwhelmed by noise in real observations.
§ DISCUSSION AND CONCLUSIONS
In this paper, we presented the results of an extended spectropolarimetric monitoring of the active M dwarf AD Leo, using near-infrared observations collected with SPIRou between 2019 and 2020 as part of the SLS survey. They add to the previous optical data obtained with ESPaDOnS and Narval between 2006 and 2019, making the entire time series encompass approximately 14 yr. To carry out our magnetic analysis, we computed the longitudinal magnetic field, tracked the variations of the Stokes I FWHM, modelled Zeeman broadening on individual selected lines, reconstructed the large-scale field topology via ZDI, and assessed axisymmetry variations by means of a novel PCA method.
Initially, <cit.> reported an axisymmetric, dipole-dominated structure that was stable over one year; later, <cit.> pointed out a large-scale weakening and small-scale enhancement of the field but no variation in the geometry. We found strong evidence of a large-scale field evolution, which is summarised as follows:
* The longitudinal magnetic field has weakened between 2006 and 2020, from -300 to -50 G, with a rapid decrease of 100 G in the 2020b epoch. The dipolar longitudinal magnetic field evolved in the same time frame, starting from -850 G in 2006, reaching -560 G in 2016 and restoring back to -900 G in 2020.
* The FWHM of Stokes I profiles does not show rotational modulation, but a dispersion that may partly be due to short-term variability. The epoch-averaged FWHM manifests a long-term variation both in optical and near-infrared, being wider in 2019b and 2020a, and narrower in 2019a and 2020b. The variations are enhanced when the Stokes profiles are computed with magnetically-sensitive lines, as opposed to the insensitive ones. The near-infrared data in particular feature a trend moderately correlated with B_l (in absolute value).
* The magnetic flux estimated from the modelling of Zeeman broadening exhibits a global increase over time, which is also correlated with the long-term trend of the longitudinal magnetic field (in absolute value). Moreover, the epoch-averaged magnetic flux obtained for the near-infrared SPIRou time series oscillates in a similar manner to the FWHM of Stokes I, demonstrating that the latter is capable of tracing secular evolution of the total, unsigned magnetic field.
* Zeeman-Doppler imaging reconstructions confirmed the same kind of topological evolution, with the axisymmetric level decreasing to 60% and the obliquity between the magnetic and rotation axes increasing to 38^∘. This is further supported by the enhanced intermittency of the amplitude of Stokes V profiles in late 2020.
* The PCA method confirmed the predominantly poloidal and dipolar geometry of the large-scale field, as well as a lower axisymmetry in 2020a and 2020b. In addition, the pointing phase of the dipole remained stable during the evolution.
* Measurements of the magnetic field strength are overall achromatic, since they manifest only a marginal wavelength dependence due to limb darkening.
Our results altogether suggest that AD Leo may be entering a polarity reversal phase of a long-term magnetic cycle, analogous to the solar one. The combination of chromospheric activity studies and spectropolarimetric campaigns shows that some Sun-like stars may manifest magnetic cycles and polarity reversals in phase with chromospheric cycles <cit.>, while others have a more complex behaviour where very regular chromospheric oscillations have no straightforward polarimetric counterpart <cit.>.
Predicting when the polarity reversal may occur for AD Leo is not a trivial task, as the B_l data set does not feature a clear minimum or maximum. Recently, <cit.> did not report any evident trends from a long-term campaign of chromospheric indexes, whereas previous studies based on photometric observations reported either two co-existing timescales for cycles, namely 7 yr and 2 yr <cit.>, or an individual one of about 11 yr <cit.>. However, these time scales are not compatible with the variations in B_l observed over 14 yr. The axisymmetric level of the large-scale topology is a more suitable proxy to track the cycle <cit.>, but we recorded its change only in the most recent observations.
A comparison between the magnetic field evolution described here and that of the radial velocity jitter obtained in <cit.> leads to a puzzling situation. <cit.> show that radial velocity variations in optical are essentially due to the presence of a spot and that this signal has changed only slightly (in phase and amplitude, the latter varies from 25.6±0.3 m s^-1 to 23.6±0.5 m s^-1) between 2005 and 2021. Such radial velocity signal is not detected in infrared with SPIRou, corroborating its strong chromaticity and therefore its origin due to stellar activity <cit.>. The fact that the dipolar field evolution is disjointed from a surface brightness evolution is not a surprise: <cit.> show that the mainly-dipolar topology of V374 Peg did not correlate with the complex brightness map reconstructed via Doppler imaging.
These considerations motivate long-term spectropolarimetric and velocimetric campaigns of active M dwarfs. For AD Leo in particular, additional monitoring is required to observe the polarity reversal and the cycle's extremes, to constrain a precise time scale. An extended temporal baseline could also give more insight on the link between topological variations and high-energy flaring events <cit.>. At the same time, we could shed more light on the relation between the evolution of the large-scale magnetic field topology and the stability of the radial velocity jitter.
An additional detail we could infer about AD Leo's magnetic field is the helicity, which quantifies the linkage between poloidal and toroidal field lines and thus describes the complexity of the magnetic topology <cit.>. For the Sun, <cit.> reported a temporal variation of the value correlated to the magnetic cycle. Indeed, helicity maxima and minima occur when the axis of symmetry of the poloidal and toroidal field components are aligned and orthogonal, respectively.
For AD Leo, the toroidal energy is only a negligible fraction of the total one, hence we should exert caution when deriving quantities from it. Over time, we observe that the poloidal axisymmetric (m=0) mode maintains >80% of the magnetic energy and features a drop to 45% in 2020b, while the energy in the toroidal axisymmetric mode decreases from 30% to 6%. As a result, the two components maintained an overall misaligned configuration, but in the most recent epoch, the poloidal component became more aligned with the toroidal one due to the axisymmetry decrease. Following the practical visualisation of <cit.>, this evolution would correspond to an increase in field helicity.
The existence of a magnetic cycle for AD Leo is in agreement with the observational evidence of such phenomena for M dwarfs from radial velocity exoplanet searches <cit.>. In general, studies have shown that magnetic cycles introduce long-term signals in radial velocity data sets that can dominate over planetary signatures <cit.>, as they modulate the appearance and number of heterogeneities on the stellar surface. It is therefore necessary to have an accurate constraint on the temporal variations of the cycle, in order to remove its contamination and allow a more reliable planetary detection and characterisation <cit.>.
Furthermore, activity cycles modulate the stellar radiation output and winds in which close-in planets are immersed <cit.>. This leads to a temporal variation in the planetary atmospheric stripping with consequent alteration of the chemical properties and habitability <cit.>. Details on the occurrence of the cycle extremes can thus inform the most suitable interpretation framework and observing plans for missions dedicated to transmission spectroscopy like Ariel <cit.>. At the same time, periodic variations in the large-scale field geometry need to be considered for an accurate and updated modelling of the low-frequency radio emission discovered for M dwarfs <cit.>, which has been recently proposed to potentially reveal the presence of close-in magnetic planets <cit.>.
Finally, AD Leo may not be an isolated case. To verify this, it is essential to explore the possibility for such cycles over a wider area of the stellar parameter space, namely mass and rotation period.
We acknowledge funding from the French National Research Agency (ANR) under contract number ANR-18-CE31-0019 (SPlaSH). SB acknowledges funding from the European Space Agency (ESA), under the visiting researcher programme. LTL acknowledges funding from the European Research Council under the H2020 research & innovation programme (grant #740651 NewWorlds). XD and AC acknowledge funding in the framework of the Investissements d'Avenir programme (ANR-15-IDEX-02), through the funding of the `Origin of Life' project of the Univ. Grenoble-Alpes. OK acknowledges support by the Swedish Research Council (grant agreement no. 2019-03548), the Swedish National Space Agency, and the Royal Swedish Academy of Sciences. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The observations at the CFHT were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. We gratefully acknowledge the CFHT QSO observers who made this project possible. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna; Astropy, a community-developed core Python package for Astronomy <cit.>; NumPy <cit.>; Matplotlib: Visualization with Python <cit.>; SciPy <cit.>.
§ FWHM OF STOKES I
In this appendix, the Stokes I profiles computed using different line selections are shown. We compare the width of the profiles obtained with the full atomic mask, high-Landé factor (g_eff>1.2) lines and low-Landé factor (g_eff<1.2) lines.
§ CHROMATIC STOKES PROFILES
The various Stokes profiles computed with different wavelength-based line lists for LSD are reported. The wavelength intervals of the line subsets are: [350,390], [390,430], [430,480], [480,550], [550,650], [650,1100], [950,1100], [1100,1400], [1400,1600], [1600,1800], [1800,2500] nm.
§ ZEEMAN BROADENING EXAMPLES
Example plots of the posterior distributions from the Zeeman broadening MCMC analysis are shown in Fig. <ref> for a ESPaDOnS observation and in Fig. <ref> for a SPIRou observation. Summaries of the results of the MCMC analysis for each epoch are provided in Tables <ref> and <ref>.
§ OBSERVING LOG
This appendix contains the journal of observations of AD Leo, for both optical and near-infrared observations. It also includes all measurements of the longitudinal magnetic field and Σ Bf.
List of AD Leo observations collected with SPIRou. The columns are: (1 and 2) date and universal time of the observations, (3) rotational cycle of the observations found using Eq. <ref>, (4) exposure time of a polarimetric sequence, (5) signal-to-noise ratio at 1650 nm per spectral element, (6) RMS noise level of Stokes V signal in units of unpolarised continuum.
Date UT n_cyc t_exp S/N σ_LSD
[hh:mm:ss] [s] [10^-4I_c]
2019
April 15 06:11:02.35 0.00 4x61 151 1.9
April 16 12:03:07.61 0.56 4x61 130 1.8
April 18 09:00:59.77 1.40 4x61 138 1.9
April 19 10:36:14.55 1.88 4x61 143 1.8
April 20 08:56:19.44 2.29 4x61 143 1.8
April 21 05:42:43.43 2.68 4x61 165 1.5
April 22 08:58:10.67 3.19 4x61 147 1.9
April 23 07:01:28.73 3.60 4x61 154 1.9
April 24 06:04:06.74 4.03 4x61 147 1.9
April 25 09:41:45.90 4.55 4x61 152 1.9
April 26 08:13:21.75 4.97 4x61 162 1.9
April 27 08:17:48.22 5.42 4x61 139 2.1
May 01 08:59:11.25 7.23 4x61 153 1.8
May 15 06:11:38.01 13.45 4x61 165 1.6
June 13 06:30:59.49 26.46 4x61 186 2.1
June 14 05:44:29.79 26.89 4x61 193 2.0
June 15 06:04:31.16 27.35 4x61 192 2.1
June 16 05:44:52.21 27.79 4x61 173 1.8
June 17 06:08:46.48 28.25 4x61 150 1.8
June 19 05:47:00.42 29.14 4x61 175 1.5
June 21 06:18:29.14 30.05 4x61 169 1.7
October 16 15:31:29.14 82.69 4x61 180 1.4
October 31 15:32:56.51 89.41 4x61 169 1.6
November 01 15:22:10.08 89.86 4x61 159 1.5
November 02 15:36:52.90 90.31 4x61 170 1.4
November 03 14:53:22.07 90.75 4x61 151 1.5
November 04 15:43:18.06 91.21 4x61 137 1.9
November 05 15:33:34.93 91.66 4x61 164 1.3
November 06 15:37:27.47 92.10 4x61 190 1.5
November 07 15:01:11.73 92.54 4x61 165 1.5
November 09 14:03:02.38 93.42 4x61 116 1.6
November 10 15:51:32.68 93.90 4x61 151 1.5
November 13 14:47:09.89 95.22 4x61 201 1.8
November 14 14:13:58.20 95.66 4x61 197 1.4
December 05 15:12:55.68 105.10 4x61 116 1.4
December 05 15:35:03.10 105.10 4x61 68 1.5
December 07 15:08:39.83 105.99 4x61 133 2.2
December 08 14:29:19.93 106.43 4x61 181 1.8
December 09 14:21:48.74 106.88 4x61 193 1.5
December 10 13:15:39.48 107.31 4x61 194 1.5
December 11 14:58:07.44 107.79 4x61 189 3.0
December 12 14:33:27.45 108.23 4x61 190 3.6
2020
January 26 12:11:48.90 128.36 4x61 218 2.0
January 27 12:07:16.45 128.81 4x61 166 1.5
January 28 12:06:25.46 129.26 4x61 193 1.5
February 05 08:18:51.61 132.78 4x61 174 1.4
February 16 07:39:02.87 137.70 4x61 193 1.8
February 17 09:20:45.46 138.18 4x61 171 1.9
February 18 07:39:13.56 138.59 4x61 191 1.2
February 19 08:37:09.84 139.06 4x61 210 1.5
March 12 09:16:56.32 148.94 4x61 217 2.0
May 08 05:59:01.58 174.43 4x61 194 1.4
May 09 09:42:30.22 174.95 4x61 206 1.5
May 12 09:39:45.19 176.30 4x61 177 1.4
May 13 09:43:17.10 176.75 4x61 204 1.4
May 14 07:46:35.66 177.16 4x61 208 1.5
May 15 09:50:31.27 177.65 4x61 164 1.6
May 31 06:22:44.29 184.76 4x61 215 1.2
June 01 07:29:57.32 185.23 4x61 189 1.3
June 02 06:23:31.70 185.65 4x61 202 1.4
June 03 07:38:43.10 186.13 4x61 196 1.3
June 04 06:32:43.13 186.55 4x61 197 1.3
June 05 06:42:42.88 187.01 4x61 139 1.8
June 06 07:53:54.53 187.48 4x61 167 1.2
June 07 06:58:07.88 187.91 4x61 154 1.6
June 08 06:03:47.85 188.34 4x61 135 1.3
June 08 06:10:09.48 188.34 4x61 143 1.4
June 09 06:31:50.17 188.79 4x61 120 1.3
June 10 06:57:13.44 189.25 4x61 200 1.8
October 31 15:06:44.49 253.75 4x61 205 1.5
November 03 15:29:52.78 255.11 4x61 220 1.5
List of AD Leo observations collected with ESPaDOnS in 2019. The columns are: (1 and 2) date and universal time of the observations, (3) rotational cycle of the observations found using Eq. <ref>, (4) exposure time of a polarimetric sequence, (5) signal-to-noise ratio at 650 nm per spectral element, (6) RMS noise level of Stokes V signal in units of unpolarised continuum.
Date UT n_cyc t_exp S/N σ_LSD
[hh:mm:ss] [s] [10^-4I_c]
November 15 13:24:01.40 96.32 4x300 234 1.7
November 16 14:20:01.30 96.79 4x300 230 1.9
November 19 14:08:04.00 98.13 4x300 151 3.6
November 19 14:33:03.00 98.14 4x300 159 3.2
November 19 14:56:04.00 98.15 4x300 180 2.8
November 21 15:43:01.00 99.06 4x300 265 1.8
List of optical and near-infrared measurements of longitudinal magnetic field and magnetic flux. The columns are: (1) Heliocentric Julian date of the observation, (2) B_l with formal error bar (see Eq. <ref>), and (3) magnetic flux from Zeeman broadening modelling, when a reliable measurement was possible, and (4) the instrument used.
HJD B_l Bf Instrument
[-2450000] [G] [kG]
3747.0876 -269.4 ± 26.4 3.65^+0.07_-0.08 ESPaDOnS
3748.8868 -272.4 ± 15.1 3.52^+0.09_-0.10 ESPaDOnS
3780.0705 -280.7 ± 13.1 3.55^+0.10_-0.10 ESPaDOnS
3895.8047 -291.2 ± 10.7 3.62^+0.08_-0.09 ESPaDOnS
3896.8124 -260.1 ± 7.6 3.61^+0.08_-0.09 ESPaDOnS
3897.8005 -266.7 ± 7.4 3.63^+0.08_-0.09 ESPaDOnS
3898.7785 -294.2 ± 8.1 3.58^+0.08_-0.10 ESPaDOnS
4127.5975 -294.6 ± 16.5 … Narval
4128.6088 -248.6 ± 10.8 2.95^+0.16_-0.16 Narval
4129.5717 -296.2 ± 11.8 2.85^+0.21_-0.22 Narval
4130.6084 -253.8 ± 9.4 2.65^+0.21_-0.20 Narval
4133.6312 -274.8 ± 10.4 … Narval
4134.6112 -271.7 ± 11.4 … Narval
4135.6217 -231.4 ± 10.4 … Narval
4136.5925 -294.9 ± 13.0 … Narval
4276.7715 -261.0 ± 8.2 … ESPaDOnS
4485.5177 -290.1 ± 13.2 3.18^+0.17_-0.20 Narval
4489.5683 -248.6 ± 10.3 3.10^+0.17_-0.18 Narval
4492.5379 -285.3 ± 10.7 … Narval
4493.5486 -204.6 ± 10.6 3.03^+0.18_-0.19 Narval
4495.5611 -227.2 ± 12.6 2.81^+0.19_-0.19 Narval
4499.5675 -256.8 ± 11.1 … Narval
4501.5473 -288.6 ± 12.1 … Narval
4502.5475 -202.8 ± 9.7 … Narval
4506.5576 -218.9 ± 10.0 2.84^+0.19_-0.20 Narval
4508.5516 -265.5 ± 11.0 … Narval
4509.5564 -219.6 ± 12.2 2.90^+0.18_-0.20 Narval
4510.5523 -286.6 ± 15.3 … Narval
4511.5694 -200.2 ± 10.5 2.86^+0.17_-0.17 Narval
4512.5537 -296.7 ± 10.8 2.99^+0.16_-0.16 Narval
5896.7560 -249.3 ± 11.5 3.22^+0.15_-0.16 Narval
5934.6407 -247.0 ± 9.5 3.39^+0.11_-0.12 Narval
5935.6765 -225.9 ± 9.3 3.50^+0.09_-0.11 Narval
5936.6050 -241.7 ± 10.9 3.38^+0.09_-0.10 Narval
5937.7575 -254.6 ± 11.2 3.44^+0.08_-0.10 Narval
5938.6659 -243.9 ± 8.8 3.31^+0.08_-0.10 Narval
5939.6031 -234.2 ± 10.6 3.34^+0.08_-0.10 Narval
5940.6416 -203.8 ± 9.1 3.31^+0.09_-0.09 Narval
5941.6411 -232.7 ± 9.7 3.36^+0.11_-0.13 Narval
5942.6256 -220.5 ± 10.2 3.30^+0.09_-0.10 Narval
7435.7957 -177.0 ± 5.5 3.68^+0.07_-0.08 ESPaDOnS
7436.8831 -165.8 ± 5.3 3.67^+0.08_-0.09 ESPaDOnS
7441.8954 -181.4 ± 5.3 3.57^+0.06_-0.07 ESPaDOnS
7443.0074 -166.1 ± 5.3 3.60^+0.09_-0.10 ESPaDOnS
7447.9103 -160.7 ± 6.4 3.65^+0.14_-0.16 ESPaDOnS
7449.0011 -182.9 ± 7.7 3.66^+0.11_-0.13 ESPaDOnS
7449.9154 -154.6 ± 6.5 3.66^+0.12_-0.15 ESPaDOnS
7450.8328 -190.5 ± 6.9 3.70^+0.09_-0.11 ESPaDOnS
7495.8253 -192.9 ± 8.8 3.66^+0.12_-0.15 ESPaDOnS
7498.7442 -170.6 ± 5.8 3.68^+0.09_-0.09 ESPaDOnS
8588.7573 -194.7±17.2 3.40^+0.08_-0.09 SPIRou
8590.0016 -212.3±19.9 … SPIRou
8592.9410 -206.5±17.4 3.33^+0.09_-0.11 SPIRou
8593.8715 -202.5±16.5 3.37^+0.11_-0.11 SPIRou
8594.7370 -234.3±16.8 3.42^+0.10_-0.11 SPIRou
8595.8726 -211.7±16.4 3.47^+0.10_-0.13 SPIRou
8596.7915 -217.9±16.0 3.37^+0.09_-0.09 SPIRou
8597.7516 -219.1±18.6 3.33^+0.10_-0.09 SPIRou
8598.9026 -240.0±16.9 3.49^+0.09_-0.09 SPIRou
8599.8411 -263.6±17.6 3.36^+0.09_-0.10 SPIRou
8600.8441 -182.9±16.7 3.36^+0.10_-0.11 SPIRou
8604.8725 -190.5±16.3 3.27^+0.11_-0.12 SPIRou
8618.7549 -216.5±15.1 3.27^+0.08_-0.09 SPIRou
8647.7665 -217.0±12.9 3.26^+0.09_-0.09 SPIRou
8648.7341 -221.3±13.1 3.38^+0.09_-0.09 SPIRou
8649.7480 -230.0±15.3 3.23^+0.10_-0.10 SPIRou
8650.7343 -194.7±13.6 3.49^+0.09_-0.10 SPIRou
8651.7509 -209.4±16.0 3.49^+0.08_-0.10 SPIRou
8653.7357 -227.2±13.6 3.38^+0.09_-0.09 SPIRou
8655.7575 -250.3±20.3 3.42^+0.09_-0.10 SPIRou
8773.1472 -223.6±14.4 3.39^+0.10_-0.10 SPIRou
8788.1497 -223.7±13.4 3.32^+0.09_-0.10 SPIRou
8789.1423 -184.8±13.7 3.43^+0.10_-0.09 SPIRou
8790.1526 -245.2±15.4 3.50^+0.09_-0.10 SPIRou
8791.1225 -192.7±13.8 3.50^+0.12_-0.12 SPIRou
8792.1572 -256.3±16.3 3.46^+0.09_-0.10 SPIRou
8793.1506 -204.8±13.6 3.50^+0.09_-0.10 SPIRou
8794.1534 -246.5±13.5 3.53^+0.09_-0.09 SPIRou
8795.1283 -209.1±15.4 3.48^+0.09_-0.09 SPIRou
8797.0881 -189.3±25.7 3.51^+0.10_-0.09 SPIRou
8798.1635 -202.8±17.8 3.63^+0.10_-0.10 SPIRou
8801.1190 -254.3±14.6 3.56^+0.11_-0.12 SPIRou
8802.0961 -186.1±14.1 3.58^+0.08_-0.09 SPIRou
8803.0577 -208.3±7.0 3.67^+0.13_-0.14 ESPaDOnS
8804.0967 -189.1±7.2 3.58^+0.10_-0.11 ESPaDOnS
8807.0890 -178.9±12.9 3.65^+0.07_-0.09 ESPaDOnS
8807.1062 -198.8±11.5 3.61^+0.08_-0.09 ESPaDOnS
8807.1223 -169.3±10.1 3.63^+0.07_-0.07 ESPaDOnS
8809.1547 -161.1±6.4 3.57^+0.08_-0.09 ESPaDOnS
8823.1385 -254.6±30.7 3.48^+0.08_-0.09 SPIRou
8823.1539 -242.9±42.2 3.69^+0.10_-0.09 SPIRou
8825.1357 -193.2±22.1 3.60^+0.08_-0.08 SPIRou
8826.1084 -236.2±14.4 3.41^+0.09_-0.09 SPIRou
8827.1033 -178.9±13.7 3.42^+0.09_-0.09 SPIRou
8828.0574 -249.2±13.9 3.40^+0.11_-0.11 SPIRou
8829.1286 -152.6±14.4 3.44^+0.10_-0.10 SPIRou
8830.1115 -248.1±14.8 3.35^+0.10_-0.09 SPIRou
8875.0136 -176.0±12.2 3.43^+0.09_-0.10 SPIRou
8876.0104 -118.6±13.3 3.54^+0.08_-0.09 SPIRou
8877.0098 -238.7±14.5 3.40^+0.10_-0.11 SPIRou
8884.8515 -122.9±12.6 3.42^+0.10_-0.11 SPIRou
8895.8233 -132.6±11.9 3.40^+0.10_-0.10 SPIRou
8896.8939 -219.3±13.7 3.44^+0.11_-0.10 SPIRou
8897.8233 -157.3±12.7 3.43^+0.09_-0.11 SPIRou
8898.8635 -216.3±13.3 3.44^+0.10_-0.11 SPIRou
8920.8895 -176.4±12.6 3.42^+0.09_-0.10 SPIRou
8977.7467 -177.7±11.8 3.27^+0.09_-0.10 SPIRou
8978.9018 -114.9±11.5 3.33^+0.09_-0.08 SPIRou
8981.8996 -214.6±14.0 3.28^+0.10_-0.11 SPIRou
8982.9020 -81.2±11.5 3.43^+0.11_-0.10 SPIRou
8983.8209 -190.0±11.2 3.27^+0.08_-0.09 SPIRou
8984.9069 -100.5±16.8 3.44^+0.11_-0.11 SPIRou
9000.7614 -50.4±10.5 3.26^+0.12_-0.11 SPIRou
9001.8080 -217.7±17.7 3.32^+0.09_-0.10 SPIRou
9002.7618 -61.7±11.9 3.20^+0.09_-0.11 SPIRou
9003.8140 -199.2±12.7 3.18^+0.10_-0.11 SPIRou
9004.7681 -110.4±11.8 3.26^+0.09_-0.09 SPIRou
9005.7750 -96.4±15.5 3.30^+0.09_-0.09 SPIRou
9006.8244 -119.8±14.9 3.30^+0.09_-0.10 SPIRou
9007.7856 -70.0±15.1 3.30^+0.10_-0.10 SPIRou
9008.7478 -178.4±16.2 3.28^+0.10_-0.10 SPIRou
9008.7522 -185.2±16.6 3.30^+0.09_-0.10 SPIRou
9009.7672 -64.5±18.5 3.35^+0.10_-0.11 SPIRou
9010.7848 -194.7±13.2 3.17^+0.10_-0.09 SPIRou
9154.1315 -62.2±10.1 3.31^+0.07_-0.07 SPIRou
9157.1479 -46.1±9.9 3.28^+0.08_-0.09 SPIRou
|
http://arxiv.org/abs/2307.01716v1 | 20230704133618 | APRIL: Approximating Polygons as Raster Interval Lists | ["Thanasis Georgiadis", "Eleni Tzirita Zacharatou", "Nikos Mamoulis"] | cs.DB | ["cs.DB"] |
University of Ioannina, Greece
Greece
[email protected]
IT University of Copenhagen
Denmark
[email protected]
University of Ioannina, Greece
Greece
[email protected]
The spatial intersection join is an important spatial query
operation, due to its popularity and high complexity.
The spatial join pipeline takes as input
two collections of spatial objects (e.g., polygons).
In the filter step, pairs of
object MBRs that intersect are identified and passed to the refinement
step for verification of the join predicate on the exact object geometries.
The bottleneck of spatial join evaluation is in the refinement step.
We introduce APRIL, a powerful intermediate step in the pipeline, which is
based on raster interval
approximations of object geometries.
Our technique applies a sequence of interval joins on “intervalized”
object approximations to determine whether the objects intersect or
not.
Compared to previous work, APRIL approximations are
simpler, occupy much less
space,
and achieve similar pruning effectiveness at a much higher speed.
Besides intersection joins between polygons,
APRIL can directly
be applied and has high effectiveness
for polygonal range queries, within joins, and polygon-linestring joins.
By applying a lightweight compression technique,
APRIL approximations may occupy even less space than
object MBRs.
Furthermore, APRIL can be customized to apply on partitioned data
and on polygons of varying sizes, rasterized at different granularities.
Our last contribution is a novel algorithm that
computes the APRIL approximation of a polygon without having to
rasterize it in full, which is orders of magnitude faster than
the computation of other raster approximations.
Experiments on real data demonstrate the effectiveness
and efficiency of APRIL; compared to the state-of-the-art intermediate filter,
APRIL occupies 2x-8x less space, is 3.5x-8.5x more time-efficient,
and reduces the end-to-end join cost up to 3 times.
APRIL: Approximating Polygons as Raster Interval Lists
Nikos Mamoulis
August 1, 2023
======================================================
PVLDB Reference Format:
. . PVLDB, (): , .
https://doi.org/doi:
[This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit <https://creativecommons.org/licenses/by-nc-nd/4.0/> to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing mailto:[email protected]@vldb.org. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment.
Proceedings of the VLDB Endowment, Vol. , No.
ISSN 2150-8097.
https://doi.org/doi:
]footnote-1
PVLDB Artifact Availability:
The source code, data, and/or other artifacts have been made available at <>.
§ INTRODUCTION
We study the problem of computing the spatial intersection join
between two spatial object collections R and S, which identifies all pairs of
objects (r,s), r∈ R, s∈ S such that r shares at least one
common point with s.
Besides being a common operation in geographic information systems
(GIS),
the spatial intersection join
finds a wide range of applications in geo-spatial interlinking <cit.>,
GeoSPARQL queries on RDF data stores <cit.>,
interference detection between objects in computer graphics
<cit.>, suggestion of synapses between neurons in
neuroscience models <cit.>.
Recently, there is a growing interest in spatial query evaluation over
complex object geometries (i.e., polygons)
<cit.>.
A naive way to evaluate the join is to run an intersection test
algorithm from computational geometry for each pair (r,s) in
R× S. However, this method is extremely expensive,
since (i) the
number |R× S| of pairs to be tested can be huge and (ii) for each pair the
test takes O(nlog n) time <cit.>.
To mitigate (i), the join is evaluated in two steps.
Provided that the minimum bounding rectangles (MBRs) of the objects
are available (and possibly indexed), in the filter step,
an efficient MBR-join algorithm
<cit.> is used to find the pairs of
objects (r,s)∈ R× S such that MBR(r) intersects with
MBR(s).
In the refinement step, for each pair that passes the filter
step, the expensive intersection test on the exact object geometries
is applied.
To further reduce the number of pairs to be refined, intermediate filters can be added to the pipeline
<cit.>. The main idea is to
use, in addition to the MBR, object approximations that can help
to identify fast whether a candidate pair (r,s) that passes the MBR
filter is (i) a sure result, (ii) a sure non-result, or (iii) an
indecisive pair, for which we still have to apply the geometry
intersection test.
Previously proposed approximations in intermediate filters
include simple convex polygons (5C or convex hull
<cit.>), raster approximations <cit.>,
and “intervalized” raster approximations paired with binary
codes <cit.>.
Each of these approaches has its drawbacks. The convex polygons
proposed in <cit.>, although cheap to store and
relatively fast to compute, can only be used to identify sure
non-results and fail to reduce significantly the number of indecisive
pairs that are sent to the expensive refinement step.
The raster approximation technique of <cit.> occupies too
much space and is not always effective in pruning object pairs.
Finally, the state-of-the-art raster-intervals approach <cit.>, which
improves over <cit.> in terms of space complexity and
pruning effectiveness, has a high preprocessing cost and occupies
significant space.
In this paper, we propose APRIL (Approximating Polygons as Raster Interval Lists), a
technique which significantly improves upon the Raster Intervals (RI)
approach of <cit.>, having the following key
differences.
First, previous rasterization techniques for spatial joins
<cit.> divide the raster cells that
intersect a polygon into three classes: Full cells that are
fully covered by the polygon, Strong cells that are covered by
the polygon by more than 50% but less than 100%, and Weak cells
that are covered by the polygon by at most 50%.
APRIL unites the Strong and Weak cell classes into a single Partial class, which simplifies storage and accelerates the
intermediate filter.
Second, previous work
<cit.> explicitly stores or encodes
cell-class information.
The main novelty of APRIL
is the representation of each object by two lists of
intervals; one list that includes All cells (independently of their
class) and one that includes only Full cells.
The intermediate filter is then applied as a sequence of interval
joins; All-All join (AA-join) filters out all non-results
and then Full-All (FA-join), All-Full (AF-join)
joins filter (i.e., identify) sure results, leaving indecisive pairs
to the refinement step.
Since there are no cell-specific comparisons,
the intermediate filter using is
much faster compared to <cit.> which performs
comparisons at the cell level for each pair of intersecting intervals.
Finally, APRIL applies a compression technique, based on delta
encoding, to greatly reduce the space
required to store the interval lists.
This way, the approximations may require even less storage
compared to object MBRs, making it possible to store and process them
in the main memory.
Moreover, APRIL's compression scheme allows partial, on-demand
decompression of interval lists, which is conducted during interval
join evaluation.
In addition to APRIL itself, the contributions of this paper include:
* We show the generality of APRIL in supporting spatial
selection queries, spatial within joins, and joins between polygons
and linestrings.
* We present a space partitioning approach, which increases the resolution of the raster grid and achieves more refined object approximations compared to <cit.> leading to fewer inconclusive cases and, therefore, faster query evaluation.
* We investigate options for defining and joining approximations
of different polygons at
different granularities based on their geometries.
* We propose a novel, one-step “intervalization” algorithm
that computes the APRIL approximation of a polygon without having to
rasterize it in full.
Our experimental evaluation on real data shows that, compared to
the state-of-the-art intermediate filter (RI), APRIL
(i) is 3.5x-8.5x faster, (ii) occupies 2x-8x
less space, and (iii) has orders of magnitude lower preprocessing cost.
Using APRIL, the cost of the end-to-end spatial join drops by up to 71%,
compared to using RI.
The rest of the paper is structured as follows: Section
<ref> provides the necessary background. Section
<ref> details APRIL's features, construction
and usage.
Section <ref> offers customization
options that help to further tune APRIL to fit the system's or
dataset's needs.
In Section <ref> we
study the efficient construction of
APRIL approximations.
Section <ref> includes our experiments that
verify APRIL's performance. Section <ref> reviews
related work and, finally, Section <ref>
concludes the paper with
suggestions
for future work.
§ BACKGROUND
The intersection join pipeline applies an MBR-join algorithm
<cit.> on (indexed) MBR approximations of
the objects, to identify pairs of object MBRs that intersect; these
form candidate join results.
The direct refinement of each candidate pair using computational
geometry algorithms is very expensive and can easily
take up to 99% of the end-to-end join cost <cit.>.
In view of this, several studies <cit.> suggest the use of an intermediate
filter based on more accurate object approximations than the MBR,
to further reduce the number of object pairs that need to be refined.
Figure <ref> illustrates a general spatial intersection
join pipeline that includes an intermediate filter.
For each candidate join pair (r.id,s.id) produced by the MBR-join,
we perform a look-up of the geometric approximations (GAs) of objects
r and s using their IDs, assuming a fast access method
(e.g., r.id is row-number in a table or vector storing R's GAs).
The GAs are used by the
intermediate filter to identify the pair (r.id,s.id)
as a true negative or a true hit, or forward it
to the refinement step in order for it to access the exact geometries
and make the ultimate decision (at a high computational cost).
Raster Intervals (RI) <cit.> is the
state-of-the-art intermediate filter.
Assuming a global 2^N× 2^N grid superimposed over the data space,
RI approximates each object p by the set of cells in the
grid that overlap p.
Further, these cells are
classified into Full, Strong, and Weak, based on their
coverage percentage with the object's geometry (100%, >50% and
≤ 50%, respectively).
Consider a candidate join pair of two objects r and s whose MBRs overlap.
If there are no common cells in the
approximations of r and s, then the pair is a true negative and
is eliminated.
If the objects have common cells, then it is possible to detect true
hits by examining the types of common cells.
All possible cases are shown in Table <ref>; if
for at least one common cell its types in the two objects lead to a
`yes' case, then the object pair is a
definite join result and the refinement step can be avoided.
Figure <ref> illustrates three cases of two polygonal objects,
whose MBRs intersect.
In Fig. <ref>(a), the two object approximations do not share any
common cells, so the pair is pruned as a true negative.
In Fig. <ref>(b), the two objects are reported as a join
result (true hit); they definitely intersect because there exists a
cell (the one with the bold-line border) which is fully covered by one
of them and strongly covered by the other (see full-strong case
in Table <ref>). In Fig.
<ref>(c) all common cells in the two object
approximations are either weak-weak or weak-strong, so the pair is
determined as inconclusive and passed to the refinement step.
To expedite the comparison of raster approximations of objects,
maps each cell in the 2^N× 2^N space to an
integer in [0,2^2N-1], which is the cell's order in the
Hilbert space-filling curve <cit.>.
Then, the cells in an object approximation
with continuous IDs
are merged into intervals
[start,end), which are sorted to form an intervals list.
To capture the types of cells in each interval, the method uses 3-bit
codes (Table <ref>), which are concatenated to form an
interval's coding.
The creation of Raster Intervals is illustrated in Figure <ref>.
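The cell-ID mapping mentioned above is the standard conversion of grid coordinates to a position along the Hilbert curve; a compact, illustrative version is sketched below (shown in Python for clarity; it is not taken from the cited systems).

```python
def hilbert_index(order, x, y):
    """Map cell coordinates (x, y) on a 2^order x 2^order grid to the cell's
    position along the Hilbert curve (standard xy-to-index conversion)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# e.g. on a 4x4 grid (order 2): hilbert_index(2, 0, 0) == 0, hilbert_index(2, 3, 0) == 15
```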
The intermediate filter for a pair of objects is then implemented as
follows. The sorted interval lists of the two objects are merged (as
in merge-join) to identify pairs of intervals that overlap. For each
such interval pair, the corresponding bit-codes are aligned and bitwise
ANDed; if the result of an AND is non-zero then the object pair is
immediately reported as a true hit; if there are no overlapping
intervals, then the pair is reported as a true negative. If there is
at least one pair of overlapping intervals and for all such pairs the
bitwise AND of their codings is 0, then the pair is passed to the
refinement step as indecisive.
In summary, the intermediate filter
checks all common
cells of two object approximations en masse.
Even though RI offers high refinement
candidate reduction compared to other polygon approximations, it comes
with a number of drawbacks.
First, the construction of RI approximations (i.e.,
pre-processing) is costly,
because for each cell that overlaps with the object, we need to
identify the cell type.
Second, the intermediate filter involves a complex and relatively expensive bitstring
alignment process.
Third, RI approximations may occupy too much space, especially for large
polygons that include long intervals with spacious encodings.
§ METHODOLOGY
We propose APRIL
(Approximating Polygons as Raster Interval Lists), an
enhanced intermediate filtering method for spatial intersection joins,
which is more efficient and less space consuming compared to
previous raster-based techniques <cit.>.
§.§ A- and F-Interval Lists
With APRIL, we reduce the approximation complexity
of RI through two major changes.
First, we unify the Weak and Strong cell types
into a single cell type called Partial.
Partial cells are non-empty cells that overlap with the polygon in less than 100% of their area, i.e., the cells that are intersected by the polygon's edges.
Second, APRIL discards the bit-coding of RI; instead, each polygon is approximated simply by two sorted interval lists:
the A-list and the F-list.
The A-list is formed by intervals that concisely capture all cells that overlap with the polygon, regardless of their type (Full or Partial),
whereas the F-list captures only Full cells.
An interval list having n intervals is stored as a simple sorted integer sequence in which the i-th interval's start,end are located at positions 2i and 2i + 1 respectively, for i ∈ [0,n).
The A-list and F-list for the example polygon of Figure <ref> are shown in Figure <ref>.
The Strong and Weak cell types become Partial, and the representation is simplified compared to RI.
Note that the intervals within each of the A- and F-lists are disjoint.
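To make the layout concrete, the sketch below shows one possible in-memory representation following the description above (flat sorted arrays with the i-th interval at positions 2i and 2i+1); the interval values in the example are made up for illustration.

```python
class IntervalList:
    """Flat, sorted layout: interval i occupies positions 2i and 2i+1 and
    denotes the half-open range [start, end) of cell IDs."""
    def __init__(self, flat):
        assert len(flat) % 2 == 0
        self.flat = flat                       # [s0, e0, s1, e1, ...], disjoint, sorted

    def __len__(self):
        return len(self.flat) // 2

    def interval(self, i):
        return self.flat[2 * i], self.flat[2 * i + 1]

# A- and F-list of one polygon (illustrative values only)
a_list = IntervalList([3, 6, 9, 14, 20, 22])   # All cells (Full and Partial)
f_list = IntervalList([10, 12])                # Full cells only
```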
The new relationship identification table for a cell shared by two polygons, is shown in Table <ref>.
Removing the Strong cell type renders the approximation unable to detect true hits for cells of the Strong-Strong case, as common cells that are both Partial cannot decide definite intersection between the two polygons.
As we have found experimentally (Section <ref>), this has minimal effect on the number of true hits and true negatives that the intermediate filter manages to detect.
This is due to the fact that the only true hits missed are pairs of polygons that intersect each other exclusively in cells typed Strong for both polygons and nowhere else.
Construction
To construct an APRIL approximation,
we first need to identify the cells intersected by the polygon's area in the grid, while also labeling each of them as Partial or Full.
Then, intervalization
derives the F-list by sorting the set of Full cells by ID (i.e., Hilbert order) and merging consecutive cell IDs into intervals.
To derive the A-list, we repeat this for the union of Full and Partial cells.
In Section <ref>, we propose an efficient algorithm that derives the F- and A-list of a polygon without having to label each individual cell that intersects it.
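Assuming the overlapping cells and their labels are already known, intervalization reduces to a simple merge of sorted Hilbert cell IDs. The following is an illustrative sketch of this step only, not the efficient one-step algorithm proposed later in the paper.

```python
def intervalize(cell_ids):
    """Merge sorted Hilbert cell IDs into disjoint half-open intervals,
    returned in the flat [s0, e0, s1, e1, ...] layout."""
    flat = []
    for cid in sorted(set(cell_ids)):
        if flat and cid == flat[-1]:      # extends the last interval
            flat[-1] = cid + 1
        else:                             # starts a new interval
            flat.extend([cid, cid + 1])
    return flat

def build_april(full_cells, partial_cells):
    """A-list from the union of Full and Partial cells, F-list from Full cells only."""
    a_list = intervalize(set(full_cells) | set(partial_cells))
    f_list = intervalize(full_cells)
    return a_list, f_list
```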
§.§ Intermediate Spatial Join Filter
Similar to <cit.>, APRIL is employed by an intermediate filter (Figure <ref>) between the MBR-filter and the refinement phase.
Given a pair (r,s) of objects coming as a result of an MBR-join algorithm <cit.>, APRIL uses the A- and F-lists of r and s to detect fast
whether the polygons (i) are disjoint (true negative), (ii) are guaranteed to intersect (true hit), or (iii) are inconclusive, so they have to be forwarded to the refinement stage to verify their intersection.
Whether r and s are disjoint (i.e., do not intersect) can be determined by checking whether their A-lists have any pair of overlapping intervals.
If they have no overlapping intervals, then r and s do not have any common cell in the grid and thus they cannot intersect. We check this condition by
merge-joining the A-lists and stopping as soon as we detect two overlapping
intervals.
Pairs of polygons that have at least one pair of overlapping intervals in their A-lists are then checked using their F-lists. We perform two more merge-joins: A(r)⋈F(s) and F(r)⋈A(s);
detecting an overlapping interval pair in one of these two joins means that there is a Full cell in one object that is common to a Full or Partial cell of the other object. This guarantees that the two objects intersect and the pair (r,s) is immediately reported as a spatial join result. If A(r)⋈F(s) fails to detect (r,s) as a true hit, then F(r)⋈A(s) is conducted; if the latter also fails, then (r,s) is an inconclusive candidate join pair, which is forwarded to the refinement step.
In summary, the intermediate filter sequence consists of 3 steps: the AA-join, AF-join, and FA-join, as illustrated in Figure <ref> and described by
Algorithm <ref>.
Each step is a simple merge-join between two sorted interval lists. Since each list contains disjoint intervals, each of the three interval joins takes O(n+m) time, where n and m are the lengths of the two interval join input lists. Hence, the total cost of the filter (i.e., Algorithm <ref>) is linear to the total number of intervals in the A- and F-lists of r and s.
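To make the filtering sequence concrete, the following illustrative sketch implements the three early-terminating merge-joins over flat, sorted, disjoint interval lists in the [s0, e0, s1, e1, ...] layout with half-open intervals; it is a minimal sketch, not the authors' implementation.

```python
TRUE_NEGATIVE, TRUE_HIT, INCONCLUSIVE = "true negative", "true hit", "inconclusive"

def intervals_overlap(x, y):
    """Merge-join two flat, sorted, disjoint interval lists; return True as
    soon as any pair of half-open intervals overlaps."""
    i = j = 0
    while i < len(x) and j < len(y):
        xs, xe = x[i], x[i + 1]
        ys, ye = y[j], y[j + 1]
        if xs < ye and ys < xe:           # [xs, xe) and [ys, ye) intersect
            return True
        if xe <= ye:                      # advance the list whose interval ends first
            i += 2
        else:
            j += 2
    return False

def april_filter(a_r, f_r, a_s, f_s):
    """APRIL intermediate filter for a candidate pair (r, s): AA-, AF-, FA-join."""
    if not intervals_overlap(a_r, a_s):
        return TRUE_NEGATIVE              # no common cell at all
    if intervals_overlap(a_r, f_s) or intervals_overlap(f_r, a_s):
        return TRUE_HIT                   # a Full cell of one object is shared with the other
    return INCONCLUSIVE                   # forwarded to the refinement step
```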
Join Order Optimization
The AA-join, AF-join, and FA-join could be applied in any order in
Algorithm <ref>.
For example, if (r,s) is a true hit, it would be more beneficial to perform the AF-join and the FA-join before the AA-join, as this would identify the hit earlier.
On the other hand, if (r,s) is a true negative, conducting the AA-join first avoids the futile AF- and FA-joins.
However, there is no way to know a priori whether (r,s) is a true hit or a true negative. In addition, we experimentally found that changing the join order does not have a high impact on the intermediate filter cost and the overall cost.
For a typical candidate pair (r,s) the common cells are expected to be few compared to the total number of cells covered by either r or s, making AA-join the most reasonable join to start with. This is confirmed by our experiments where the number of candidate pairs identified as true negatives is typically much larger compared to the number of identified true hits.
§.§ Generality
In this section, we demonstrate the generality of APRIL in supporting other queries besides spatial intersection joins between polygon-sets. We first show how we can use it as an intermediate filter in selection (range) queries. Then, we discuss its application in spatial within joins.
Finally, we discuss the potential of using APRIL approximations of polygons and raster approximations of linestrings to filter pairs in polygon-linestring intersection joins.
§.§.§ Selection Queries
Similarly to joins, APRIL can be used in an intermediate filter to reduce the cost of selection queries. Consider a spatial database system which manages polygons and where the user can draw a selection query as an arbitrary polygon QP; the objective is to retrieve the data polygons that intersect with the query polygon QP. Assuming that we have pre-processed all data polygons and computed and stored their APRIL representations, we can process polygonal selection queries as follows. We first pre-process QP to create its APRIL approximation. Then, we use the MBR of QP to find fast the data polygons whose MBR intersects with the MBR of the query (potentially with the help of an index <cit.>).
For each such data polygon r,
we apply the intermediate filter for the (r,QP) pair to find fast whether r is a true negative or a true hit. If r cannot be pruned or confirmed as a query result, we eventually apply the refinement step.
§.§.§ Spatial Within Joins
APRIL can also be applied for spatial joins having a within predicate, where the objective is to find the pairs (r,s), where r ∈ R and s ∈ S and r is within s (i.e., r is completely covered by s). In this case, the intermediate filter performs only 2 of its 3 steps.
The AA-join is applied first to detect whether r and s are disjoint, in which case the pair should be eliminated.
Then, we perform a variant of the AF-join, where the objective is to find if every interval in the A-list of r is contained in one interval in the
F-list of s; if this is true, then (r,s) is guaranteed to be a within join result and it is reported as a true hit. In the opposite case, (r,s) is forwarded to the refinement step.
We do not apply an
FA-join, because this may only detect whether s is within r.
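The modified AF-join step can be implemented as a single forward scan that checks containment of every A-interval of r in some F-interval of s. The sketch below (same flat, half-open layout as before, illustrative only) replaces the AF-join of the intersection filter and is applied after the usual AA-join has ruled out disjoint pairs.

```python
def a_within_f(a_r, f_s):
    """Return True if every interval of r's A-list is fully contained in some
    interval of s's F-list (both in the flat, sorted, disjoint layout)."""
    j = 0
    for i in range(0, len(a_r), 2):
        xs, xe = a_r[i], a_r[i + 1]
        # skip F-intervals that end before the current A-interval does
        while j < len(f_s) and f_s[j + 1] < xe:
            j += 2
        if j >= len(f_s) or not (f_s[j] <= xs and xe <= f_s[j + 1]):
            return False                  # this A-interval is not fully covered
    return True
```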
§.§.§ Linestring to Polygon Joins
Another interesting question is whether APRIL can be useful for intersection joins between other spatial data types, besides polygons.
The direct answer is no, since is designed for voluminous objects.
Still, our method can be useful for the case of joins between polygons and linestrings.
A linestring is a sequence of line segments and it is used to approximate geographic objects such as roads and rivers.
The rasterization of a linestring only gives Partial cells, as linestrings do not have volume and may not cover a cell entirely. In addition, as exemplified in Figure <ref>, linestrings do not really benefit from merging consecutive cells into intervals, as linestrings that follow the Hilbert order (or any other fixed space-filling curve) are rare.
Hence, it is more space-efficient to approximate a linestring as a sorted sequence of cell-IDs (which are guaranteed to be Partial).
Having the linestring approximations, we can evaluate spatial intersection joins between a collection of polygons and a collection of linestrings,
by applying 2 of the 3 steps in the intermediate filter; namely,
(i) a merge-join between the A-list of the polygon and the cell-ID list of the linestring to find out whether the pair is a true negative and (ii)
a merge-join between the F-list of the polygon and the cell-ID list of the linestring to find out whether the pair is a true hit. Algorithm <ref> can easily be adapted for polygon-linestring filtering, by simply changing IntervalJoin(X, Y) to take a sequence of cell-IDs Y and treat them as intervals of duration 1.
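This adaptation can be sketched by wrapping each linestring cell-ID as a unit interval (again assuming the interval_join sketch above; names are our own):

def polygon_linestring_filter(poly_A, poly_F, line_cells):
    """line_cells: sorted list of cell-IDs covered by the linestring (all Partial).
    Each cell-ID c is treated as the unit interval [c, c+1)."""
    line_ivals = [(c, c + 1) for c in line_cells]
    if not interval_join(poly_A, line_ivals):
        return "negative"                  # no common cells at all
    if interval_join(poly_F, line_ivals):
        return "hit"                       # the linestring crosses a Full cell
    return "inconclusive"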
§ CUSTOMIZATION
We have explored a series of optimization and customization options that can potentially reduce 's space complexity and improve its performance in terms of filter effectiveness and speed.
§.§ Compression
Recall that the only information that
stores for each polygon is two interval lists:
the A-list and the F-list.
The interval lists are essentially sorted integer arrays, so we can exploit
delta encoding and more specialized lossless compression schemes to reduce their space requirements.
Since any of the AA-, AF-, and FA-joins that we may apply on the lists may terminate early (as soon as an interval overlap is detected), we should opt for a compression scheme that does not require decompressing a list entirely before processing begins. In other words, we should be able to perform the joins while decompressing the lists, so that we may avoid uncompressing them in their entirety. In view of this, we use delta encoding, where we store the first value of the list precisely and from thereon store the differences (gaps) between consecutive numbers.
There are dozens of different compression schemes for gaps between ordered integers, each with its pros and cons. We chose the Variable Byte (VByte) method <cit.>, a popular technique that, even though it rarely achieves optimal compression, is adequately efficient and very fast <cit.>.
We use the libvbyte <cit.> library that has an option for sorted integer list compression, which matches our case and boosts performance by utilizing delta encoding.
At the same time, we adapt our interval join algorithm to apply decompression and join at the same time, i.e., each time it needs to get the next integer from the list it decompresses its value and adds it to the previous value in the list.
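To illustrate the decompress-while-joining idea, the sketch below uses plain delta coding rather than libvbyte and Python generators rather than the actual C++ code; it is an illustration of the concept only:

def decode_intervals(deltas):
    """Lazily rebuild (start, end) intervals from a delta-encoded endpoint stream.
    The stream stores the first endpoint exactly and gaps thereafter."""
    prev = 0
    endpoints = []
    for d in deltas:
        prev += d
        endpoints.append(prev)
        if len(endpoints) == 2:
            yield endpoints[0], endpoints[1]
            endpoints = []

def interval_join_streaming(stream_a, stream_b):
    """Overlap test over two lazily decoded interval streams; stops at the first hit,
    so the lists are never fully decompressed unless necessary."""
    a = next(stream_a, None)
    b = next(stream_b, None)
    while a is not None and b is not None:
        if a[0] < b[1] and b[0] < a[1]:
            return True
        if a[1] <= b[1]:
            a = next(stream_a, None)
        else:
            b = next(stream_b, None)
    return False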
§.§ Partitioning
The accuracy of as a filter is intertwined with the grid granularity we choose. A more fine-grained grid results in more Full cells, increasing the chance of detecting true hits; similarly, empty cells increase,
enhancing true negative detection.
However, simply raising the order N is not enough to improve performance.
Increasing N beyond 16
means that a single unsigned integer is no longer enough to store a Hilbert curve cell identifier, which ranges in [0, 2^2N - 1].
For N=17 or higher, we would need 8 bytes (i.e., an unsigned long) to store each interval endpoint, exploding the space requirements and the access/processing cost.
In view of this, we introduce a partitioning mechanism for , that divides the data space into disjoint partitions and defines
a dedicated rasterization grid and Hilbert curve of order N=16 to each partition.
This increases the global granularity of the approximation, without using long integers, while giving us the opportunity to define smaller partitions for denser areas of the map for which a finer granularity is more beneficial.
Partitioning is done considering all datasets/layers of the map. That is, the same space partitioning is used for all datasets that are joined together.
The contents of each partition are all objects that intersect it; hence, the raster area of the partition is defined by the MBR of these objects and may be larger than the partition, as shown in the example of Figure <ref>.
approximations are defined based on the raster area of the partition.
The spatial join is then decomposed into multiple joins, one for each spatial partition. Duplicate join results are avoided at the filter step of the join (MBR-join), as shown in <cit.>.
§.§ Different Granularity
If we use the same (fine) grid to rasterize all polygons, the approximations of large polygons may contain too many intervals, slowing down the intermediate filter.
We can create approximations using a different order N of the Hilbert curve for different datasets, based on the average sizes of their contents. There is a trade-off between memory and performance, since an order lower than 16 means fewer intervals and thus lower memory requirements and complexity, but also means reduced accuracy.
When joining two approximations of different order, we need to adjust one of the two interval lists so that it can be joined with the other.
For this, we
scale down the list with the highest order.
Specifically, before comparing two intervals a=[a_start, a_end) and b=[b_start, b_end) at orders N and L respectively, where N>L, the higher-order interval a should be right-shifted by n = 2·(N-L) bits to form a transformed interval a', as follows:
a' = [a_start >> n, (a_end-1) >> n]
Right shifting creates intervals in a more coarse-grained grid and thus, they may represent larger areas than the original. Therefore, this formula works only for A-intervals, since there is no guarantee that a Full interval at order N will also be Full at order L.
For this reason, in Algorithm <ref>, we perform
only one of
the AF- and FA- joins, using the F-list of the coarse approximation (which is not scaled down).
This has a negative effect on the filter's effectiveness, as a trade-off for the coarser (and smaller) approximations that we may use for large polygons.
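A small sketch of the scale-down step (our own illustration; the closed-interval convention follows the formula above):

def scale_down(intervals, N, L):
    """Right-shift A-intervals from order N to the coarser order L (N > L).
    Input intervals are half-open [start, end); outputs are closed [start, end]."""
    n = 2 * (N - L)
    return [(s >> n, (e - 1) >> n) for s, e in intervals]

# Example: the half-open interval [12, 16) at order N=3 becomes the single
# coarse cell [3, 3] at order L=2; the scaled list can then be merge-joined
# against the other polygon's A-list at order L using a closed-interval overlap test.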
§ APPROXIMATION CONSTRUCTION
In this section, we present two methods for the construction of a polygon's approximation.
In Section <ref> we present a rasterization approach that
efficiently finds the cells that intersect an input polygon and their types, based on previous research on polygon rasterization, and then sorts them to construct the A- and F-interval lists. In Section <ref>, we propose a more efficient approach tailored for , which avoids classifying all cells, but directly identifies the intervals and constructs the A- and F-interval lists.
§.§ Efficient Graphics-Inspired Rasterization
Previous raster-based filters <cit.> require the classification of each cell to Full, Strong, Weak, or Empty,
based on the percentage of the cell covered by the original polygonal geometry.
For this, they apply an algorithm that involves numerous polygon clippings and polygonal area computations, at a high cost.
On the other hand, to define a approximation,
we only need to identify the cells which are partially or fully covered by the input polygon's area.
Inspired by rasterization techniques in the graphics community, we propose a polygon rasterization technique which involves two stages. Firstly, we compute the Partial cells, which essentially form the boundary of the polygon in the grid. Next, we compute the Full cells using the previously-computed boundary cells.
Identifying the Partial cells is closely related to the pixel drawing problem in graphics that involves detecting which cells to “turn on” to draw a target line. While Bresenham's algorithm <cit.> is a popular and fast pixel drawing algorithm,
it approximates a line segment by turning on a minimal number of cells and may thus not detect all intersected cells.
In contrast, the Digital Differential Analyzer (DDA) method <cit.> is slower,
but identifies correctly and completely all intersected cells.
To detect the Partial cells, we use an efficient variant of DDA
<cit.> that uses grid traversal.
We execute the grid traversal for each edge of the polygon and store the IDs of the identified Partial cells in a list.
The leftmost grid in Figure <ref> shows the Partial cells detected by the grid traversal algorithm for the polygon drawn in the figure.
Next, to identify the Full cells, a naive approach would be to sweep each row of the grid, starting from the polygon's leftmost Partial cell, and “fill” cells until reaching another Partial cell.
Instead, we use a more efficient technique,
called flood fill <cit.>, which is commonly used to color or “fill” a closed area in an image. The classic flood fill algorithm first selects an unlabeled cell that is guaranteed to be within the polygon, called seed. Then, it traverses all neighboring cells of the seed until it finds the boundaries of the closed area, classifying the encountered cells as fully covered.
We implemented a variant of this algorithm which minimizes the number of point-in-polygon tests required to identify whether a cell is inside or outside the polygon.
Specifically,
we iterate through the cells of the polygon's MBR area.
If a cell c has not been labeled yet (e.g., as Partial), we perform a point-in-polygon check from c's center.
If the cell c is found to be inside the polygon, c is marked as Full and we perform a flood fill using c as the seed, stopping at labeled cells, and label all encountered unchecked cells as Full.
If the cell c is found to be outside the polygon, c is marked as Empty and we perform flood fill to mark Empty cells.
The algorithm repeats as long as there are unchecked cells to flood fill from.
This reduces the number of point-in-polygon tests that need to be performed, as it suffices to perform a single test for each contiguous region in the grid with Full or Empty cells.
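A compact sketch of this labeling loop is given below; point_in_polygon and cell_center are assumed helper functions, and the grid is a plain 2D array rather than the Hilbert-ordered structure used in practice:

from collections import deque

def label_cells(grid, polygon):
    """grid: 2D array with cells pre-labeled 'P' (Partial) or None (unchecked).
    Labels every unchecked cell as 'F' (Full) or 'E' (Empty), using one
    point-in-polygon test per contiguous unchecked region."""
    rows, cols = len(grid), len(grid[0])
    for y in range(rows):
        for x in range(cols):
            if grid[y][x] is not None:
                continue
            # one PiP test decides the label of the whole contiguous region
            label = 'F' if point_in_polygon(polygon, cell_center(x, y)) else 'E'
            grid[y][x] = label
            queue = deque([(x, y)])
            while queue:                      # flood fill, stopping at labeled cells
                cx, cy = queue.popleft()
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if 0 <= nx < cols and 0 <= ny < rows and grid[ny][nx] is None:
                        grid[ny][nx] = label
                        queue.append((nx, ny))
    return grid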
Figure <ref> illustrates the complete flood fill process for an example polygon.
The unchecked cells form three contiguous regions bounded by Partial cells,
two of them being outside the polygon and one inside.
Instead of looking for cells within the polygon to flood fill starting from them, it is faster to fill both the inside and outside of the polygon (marking cells as Full and Empty, respectively), as the number of point-in-polygon tests is minimized.
After all Partial and Full cells have been identified, the algorithm merges
consecutive cell identifiers into intervals to create the A- and F-lists that form the approximation.
§.§ Two-Grid Rasterization
Recall that uses a fine 2^N×2^N grid, where N=16,
to capture a detailed object approximation and take full advantage of the
4-byte unsigned integers that store the cell-IDs (and interval endpoints).
However, with such a fine grid, rasterization becomes expensive for large polygons.
To reduce rasterization cost, we conduct it for large polygons in two grids instead of one:
a coarse 2^L×2^L grid, where L<N, and
the final 2^N×2^N grid.
First, the Partial cells in the final grid are computed using the grid traversal algorithm, as discussed in Section <ref>.
Since both grids are uniform and the coarse grid is aligned with the final grid, we can deduct the Partial cells of the coarse grid by right-bitshifting the cell-IDs of the final grid.
This means, that we can mark the Partial coarse grid cells by performing one pass over the final grid's Partial cells.
Then, we perform flood fill to find the Full and Empty cells in the coarse grid.
When dealing with intervals on the Hilbert curve, a single low-granularity cell in a grid of order L contains complete and uncut intervals of orders higher than L. This is shown in Figure <ref>, where a cell with ID 0011 at order L = 2 can generate intervals for orders M=3,4,5 simply by shifting the cell ID to a new value v. This value is the new interval's start, while its end is v + 2^2· (M-L) - 1. Overall, the following formula computes the interval I=[start,end] of order M for a cell of order L with ID c, where L < M (for the reverse transformation, a simple right shift c >> 2· (M-L) suffices):
v = c << 2· (M-L)
I = [v , v + 2^2· (M-L) - 1]
Based on this, we can directly store Full or Empty intervals in the final/fine granularity N from individual cells in the 2^L×2^L grid. This reduces the total rasterization time for Full and Empty cells, since flood fill is significantly faster in the coarse granularity L.
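In code, converting a coarse cell to its fine-grid interval amounts to two bit shifts (a small sketch, not the actual implementation):

def cell_to_interval(c, L, M):
    """Map a cell with ID c at order L to its cell-ID interval at a finer order M > L."""
    shift = 2 * (M - L)
    v = c << shift
    return v, v + (1 << shift) - 1     # closed interval [start, end]

# e.g. cell 0b0011 at order L=2 mapped to order M=3:
# cell_to_interval(0b0011, 2, 3) -> (12, 15)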
We then return to the final grid, where we now have cells marked as Empty, Full, and Partial (from the grid traversal algorithm applied to the 2^N×2^N grid in the first place).
The only cells that still remain unclassified are the ones that belong to Partial 2^L×2^L cells and they are not Partial in the 2^N×2^N grid.
We perform flood fill from each of these cells to
complete the rasterization at the final 2^N×2^N granularity.
We give an example of the two-grid rasterization in Figure <ref>, where for simplicity we define coarse and final grid granularity as L=2 and N=3 respectively.
After finding the Partial cells in the final granularity N, we use them to identify the partial cells in the coarse granularity L, by shifting the cell IDs 2· (N-L) bits to the right. Then, we perform flood fill for the unlabeled cells in the coarse granularity L to identify Full and Empty cells there.
Then, we go back to the final granularity N, where (i) we “remember” the Partial cells computed in the first stage and (ii)
each Full/Empty in the coarse grid is converted to an interval of cell-IDs in the fine grid.
These cell-ID intervals in the fine grid are used as boundaries in the final flood fill,
from the unlabeled cells after the four stages shown in the figure.
For small polygons,
the overhead of 2-grid rasterization does not pay off, as
such polygons may not even have any Full cells at order N=16 to begin with.
Thus, we have to decide for each polygon whether we are going to rasterize it
directly on the fine (final) grid of order N (small polygon) or with the help of a coarse grid (large polygon).
We propose a fast heuristic for deciding whether a polygon is small or large.
We take into account the size of its MBR in terms of cells it intersects in the grid, compared to a pre-defined threshold of T cells.
Starting from N = 16, we compute (algebraically) how many cells C the MBR intersects in the 2^16×2^16 grid.
If C > T, we find the smallest value K, such that the MBR intersects at least T cells in the
2^K×2^K grid by the following equation:
K = 16 - log_2( C/T) / 2
We have experimentally found that a good value for T is in the order of 1000-5000.
In addition, we found that 2-grid rasterization is only worthwhile if K≤ 11.
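A small sketch of this heuristic follows; the rounding direction and the example numbers are our own choices for illustration:

import math

def coarse_order(C, T=2000):
    """C: number of cells the polygon's MBR intersects in the 2^16 x 2^16 grid.
    Returns the coarse order K (small polygons keep the final order 16)."""
    if C <= T:
        return 16                      # small polygon: no coarse grid needed
    return 16 - math.floor(math.log2(C / T) / 2)

# Example: C = 4,000,000 and T = 2000 give K = 16 - floor(5.48) = 11;
# at order 11 the MBR covers roughly 3,900 cells (>= T), and since K <= 11,
# 2-grid rasterization pays off for this polygon.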
§.§ One-Step Intervalization
The approach described in the previous section identifies the types (Partial, Full, Empty) of all cells that intersect the MBR of the input polygon. For relatively large polygons, whose MBRs define a large raster area, this can be quite expensive.
We propose an alternative approach that identifies the F-intervals of the approximation efficiently and directly uses them to identify the A-intervals that include them in one step, without the need to identify the types of all individual cells in them.
As in Section <ref>, we first apply DDA
<cit.> to detect the Partial cells and sort them in Hilbert order.
An important observation is that “gaps” between nonconsecutive identifiers in the sorted Partial cells list, indicate candidate Full intervals on the Hilbert curve. Figure <ref> illustrates how these gaps are formed for an example polygon. Identifying the first cell c of each candidate interval as Full or Empty, through a point-in-polygon (PiP) test, is enough to label the whole interval as Full or Empty, respectively. In the figure, the first “gap” interval is [7,8) containing just cell 7, which can be marked empty after a PiP test. From all “gap” intervals those marked in bold (i.e., 32-34 = [32,35) and 52-54 = [52,55)) are Full intervals and can be identified as such by a PiP test at their first cell (i.e., 32 and 52, respectively).
Additionally, we can skip some of these PiP tests by checking all adjacent cells (north, south, west, east) of the first cell c with smaller identifiers than c; if any of them is Full or Empty, we can also give the same label to the candidate interval, as it should exist in the same inner/outer area of the raster image. For example, in Figure <ref>, when the algorithm moves to identify the interval [52,55), it can detect that its first cell 52 is adjacent to another Full cell with smaller order (cell 33), that has been previously identified. Thus, the interval [52,55) exists in the same inner area as cell 33 and it inherits its label (Full), without performing another PiP test for it. In this example, a total of 5 PiP tests will be performed, for the intervals that start with the cells 7,13,30,32 and 42, instead of 11 PiP tests that would be performed otherwise, if we did not take into consideration the neighboring cells.
Algorithm <ref> is a pseudocode for the one-step intervalization process, which takes as input the sorted Partial cells list P computed by DDA.
The algorithm creates the A-list, F-list
of the polygon in a single loop through P.
In a nutshell, the algorithm keeps track of the starting point of every A-interval and when an empty gap is identified, the algorithm “closes” the current A-interval and starts the next one from the next Partial cell in the list. On the other hand, Full intervals start with the identifier of the cell that is right after the last Partial cell of a consecutive sequence and end before the next Partial cell in order.
In details, Algorithm <ref>, starting from the first cell p in P, keeps track of the starting cell-ID Astart of the current A-interval; while the next cell p+1 in Hilbert order is also in P (Lines <ref>–<ref>) the current A-interval is expanded. If the next cell c=p+1 is not partial, it is the starting cell of a candidate F-interval. We first apply function CheckNeighbors(c) to find whether there exists an adjacent cell of c which is part of a FULL or EMPTY interval.
Specifically, for cell c and a neighbor n, we first check whether n < c (if not, n is either Partial or unchecked); if yes, we binary-search P to check whether n is a P-cell. If not, we apply a special binary search method on
the current F-list to find out whether n is part of an interval in it.
If we find n as part of an F-interval, then c is definitely a Full cell.
If we do not find n, then c is definitely an Empty cell, because n<c and n is not Partial.
If for all neighbors n of c, either n>c or n is Partial, then we cannot determine the type of c based on the current data, so we perform a PiP test to determine c's type (i.e., Full or Empty).
If c is Full, then we know that the entire interval [c,p) is FULL and append it to the F-list (Line 16).
Otherwise (c is Empty), c is the end of the current A-interval, so the interval is added to the A-list and the start of the next A-interval is set to the next Partial cell p.
The algorithm continues until the list P of partial cells is exhausted and commits the
last A-interval (Line 23).
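The following Python sketch mirrors the described loop, in the spirit of Algorithm <ref>; check_neighbors and cell_is_inside stand in for the neighbor lookup and the PiP test, respectively, and are assumed helpers rather than the paper's actual routines:

def one_step_intervalization(P, polygon):
    """P: sorted, non-empty list of Partial cell-IDs (Hilbert order).
    Returns (A_list, F_list) as lists of half-open intervals."""
    A_list, F_list = [], []
    a_start = P[0]
    i = 0
    while i < len(P):
        # extend the run of consecutive Partial cells
        while i + 1 < len(P) and P[i + 1] == P[i] + 1:
            i += 1
        if i + 1 == len(P):
            break                              # P exhausted
        c, nxt = P[i] + 1, P[i + 1]            # candidate gap interval [c, nxt)
        full = check_neighbors(c, P, F_list)   # True / False / None if undecided
        if full is None:
            full = cell_is_inside(polygon, c)  # fall back to a PiP test
        if full:
            F_list.append((c, nxt))            # whole gap is Full; A-interval continues
        else:
            A_list.append((a_start, c))        # gap is Empty; close the A-interval
            a_start = nxt                      # and restart it at the next Partial cell
        i += 1
    A_list.append((a_start, P[-1] + 1))        # commit the last A-interval
    return A_list, F_list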
Our one-step intervalization approach performs |P|-1 PiP tests in the worst-case, which dominate its cost.
Compared to the FloodFill-based approach of Section <ref>, which explicitly marks and then sorts all Full and Partial cells,
Algorithm <ref> is expected to be much faster for polygons which are large compared to the cell size and include a huge number of Full cells.
On the other hand, flood filling may be a better fit for small polygons with a small MBR and relatively few Full cells.
§ EXPERIMENTAL ANALYSIS
We assess the performance of our proposed method, by
experimentally comparing it with previously proposed polygon
approximations for intermediate filtering of spatial joins.
These include the combined use of 5-Corner and Convex Hull (5C+CH) (as proposed in
<cit.>), Raster Approximation (RA) <cit.>,
and the state-of-the-art Raster
Intervals () <cit.>.
We also included a baseline approach (None), which does not apply an
intermediate filter between the MBR-join and the refinement step.
For RA, we set the grid resolution to K=750 cells, except for a few
datasets where we use K=100, due to memory constraints.
The MBR filter of the spatial join pipeline was implemented using the
algorithm of <cit.>.
The refinement step was implemented using the Boost Geometry library
<cit.> and its functions regarding shape intersection.
All code was written in C++ and compiled with the -O3 flag. The
experiments were run on a machine with a 3.6GHz Intel i9-10850k and
32GB RAM, running Linux.
§.§ Datasets
We used datasets from SpatialHadoop's <cit.>
collection.
T1, T2, and T3 represent
landmark, water and county areas in the United States (conterminous
states only).
We also used two Open Street Maps (OSM) datasets (O5 and O6) that
contain lakes and parks, respectively, from all around the globe.
We grouped objects into continents and created 6 smaller datasets
representing each one:
Africa (O5AF, O6AF), Asia (O5AS, O6AS), Europe (O5EU, O6EU), North
America (O5NA, O6NA), Oceania (O5OC, O6OC) and South America (O5SA,
O6SA).
From all datasets, we removed any non-polygonal objects as well as
multi-polygons and self-intersecting polygons.
The first three rows of Table <ref> show statistics about
the datasets.
We conducted spatial joins only between pairs of datasets that cover
the same area (i.e., T1 T2, T1 T3, O5AF O6AF,
etc.).
§.§ Comparative Study
In the first set of experiments, we compare with other
intermediate filters in terms of space complexity, filter
effectiveness, and filter cost.
For all experiments, we created and
using a single partition
(i.e., the map of the two datasets that are joined in each case),
rasterized on a 2^16× 2^16 grid.
We used a fixed order (AA-, AF-, FA-) for the
interval joins of , as shown in Algorithm <ref>.
§.§.§ Space Complexity
Table <ref> shows the total space requirements of the object
approximations required by each intermediate filter, for each of the datasets
used in our experiments.
and -C refer to the uncompressed and compressed version
of , respectively.
As a basis of comparison we also show the total space required to
store the exact geometries of the objects and their MBRs. In most
cases, has the lowest space requirements compared to all
other filters. Notably, for most datasets, the
compressed approximations have similar space requirements as the object MBRs, meaning that we can keep them in memory and use them in main-memory spatial joins <cit.> directly after the MBR-join step, without incurring any I/O.
§.§.§ Performance in Spatial Intersection Joins
We evaluate (both compressed and uncompressed version),
5C+CH, RA, and , on all join pairs, in
Figure <ref>.
We compare their ability to detect true hits and true
negatives,
their computational costs as filters, and their impact to the
end-to-end cost of the spatial join.
Filter Effectiveness
and have the highest filter effectiveness among all
approximations across the board.
's true hit ratio is slightly smaller compared to that of
RI because fails to detect the (rare) pairs of polygons
which only have Strong-Strong common cells.
However, this only brings a marginal
increase in the refinement step's cost, at the benefit of having a faster and
more space-efficient filter.
In O5AS O6AS and O5OC
O6OC, and have marginally lower true hit ratio compared to RA;
however, in these cases their true negative ratio is much higher than that of RA.
The least effective filter is 5C+CH, mainly due to its inability to detect true hits.
Intermediate Filter cost
5C+CH are simple approximations (a few points each), therefore the
corresponding filter is very fast to apply.
Hence, 5C+CH has the lowest
cost for most joins.
Notably, has a filtering cost very close to that of
5C+CH and sometimes even lower.
This is due to 's ability to model a raster
approximation as two sequences of integers, which are processed
by a sequence of efficient merge-join algorithms.
On the other hand, the application of the filter is more
expensive because, besides the interval join, it requires the
alignment and bitwise ANDing of the interval bit-codes.
As a result, is 3.5-8.5 times faster as an
intermediate filter compared to .
Even though 5C+CH is the fastest filter to apply, it has poor filtering performance,
which negatively affects
the total
join cost (last column), whereas
is very fast and very
effective at the same time.
A comparison between the filter costs of and
-C reveals that decompressing the interval lists while
performing the joins in -C only brings a small overhead,
making compression well worthwhile, considering the significant space savings it
offers (see Table <ref>).
The decompression cost is significant only in T1 T3, because
T3's A-lists and F-lists are quite long. Still, even
in this case, -C is much faster than .
Refinement cost
The refinement cost is intertwined with the
percentage of indecisive pairs.
The detection of fewer candidate pairs as true hits or
true negatives leads to a higher refinement workload; this is why
and RI result in the lowest refinement cost, compared to
the rest of the approximations.
Overall cost
reduces the overall cost of end-to-end spatial
joins up to 3 times compared to the state-of-the-art
intermediate filter,
while also achieving a speedup of 3.23x-25x against the
rest of the approximations. Adding the intermediate
filter between the MBR-filter and the refinement step
reduces the spatial join cost by 7x-28x.
's high filtering effectiveness, low application
cost, and low memory requirements render it a superior
approximation for filtering pairs in spatial intersection join pipelines.
§.§.§ Performance in other queries
Next, we evaluate the performance of
in other queries, besides spatial
intersection joins.
We start with selection
queries of arbitrary shape (see Section <ref>).
For this experiment, we sampled 1000
polygons from T3 and applied them as selection queries
on T1 and T2, simulating queries of the form: find all landmark
areas (T1) or water areas (T2) that intersect with a given US county
(T3).
As Table <ref> shows, compared to ,
achieves a 3.5x-4x speedup in the total query cost.
Next, we compare all methods in spatial within joins, where the objective is to find pairs (r,s) such
that r is within s (see Section <ref>).
As Table <ref> shows, again achieves the
best performance, due to its extremely low filtering cost.
is even faster than 5C+CH, because 5C+CH
performs two polygon-in-polygon tests which are slower compared to a
polygon intersection test.
Finally, we test the effectiveness of in
polygon-linestring joins, as described in Section
<ref>.
For this experiment, we join the polygon sets T1, T2, and T3 with
dataset T8 (from the same collection),
which contains 16.9M linestrings (roads in the United States), each having
20.4 vertices on average.
In this comparison, we do not include and RA, because
Strong cell types cannot be used to detect true hits.
Table <ref> compares
with 5C+CH and the skipping of an intermediate filter
(None). 5C+CH only detects true negatives (in the case where the
5C+CH approximations do not intersect).
outperforms 5C+CH by at least three times in total join
time and by orders of magnitude in T3 T8, where it can
identify the great majority of join results as true hits.
§.§ Optimizations and Customizations
§.§.§ Join Order
So far the interval joins in have
been applied in a fixed order: AA, AF, and FA.
As discussed in Section <ref>, the joins can be
performed in any order.
Table <ref> tests different join orders for
T1 T2 and T1 T3.
T1 T2 (like the majority of tested joins) has a high
percentage of true negatives, so the original order is the most
efficient one (changing the order of AF and FA does not
make a difference). On the other hand, for T1 T3, where the true
hits are more, pushing the AA-join at the end is more
beneficial.
Since knowing the number (or probability) of true negatives and true
hits a priori is impossible and because the join order does not make a
big difference in the efficiency of the filter (especially to the
end-to-end join time), we suggest using the fixed order, which is the
best one in most tested cases.
In the future, we will investigate the use of data statistics and/or object
MBRs to fast guess a good join order on an object pair basis.
§.§.§ Partitioning
Tables <ref> and <ref>
illustrate the effect
of data partitioning
(Section <ref>)
on the effectiveness, query evaluation
time, and space requirements of approximations.
A higher number of partitions means finer-grained grids per partition
and thus, more intervals per polygon (i.e., more space is required).
Even though this reduces the amount of inconclusive cases,
it can slow down the intermediate filter, since more intervals need to
be traversed per candidate pair. For example, T1 T3 has
already a small percentage of inconclusive pairs, so partitioning may
not bring a significant reduction in the total join time.
On the other hand, for joins with high inconclusive percentage, such as
O5AS O6AS, partitioning can greatly reduce the total cost.
In summary, partitioning comes with a time/space tradeoff.
§.§.§ Different Granularity
As discussed in Section <ref>, we can
define and use at lower granularity than N=16 for one or both
datasets, trading filter effectiveness for space savings.
In Table <ref>, we study the effect
of reducing N for T3 in T1 T3.
The size of T3's approximations halves every time we
decrease N by one.
The filter time also decreases, due to the reduced amount
of intervals from T3 in the interval joins.
However, the percentage of indecisive pairs increases, raising the refinement cost.
N=15 is the best value for T3, because it achieves the same
performance as N=16, while cutting the space requirements in half.
§.§ Construction Cost
We now evaluate the
construction techniques that we have proposed
in Section <ref>, comparing them with the
rasterization method
used in previous work <cit.> that employs
polygon clipping and polygon-cell intersection area computations.
Table <ref> shows the
time taken to compute the approximations of all polygons
in each dataset (for N=16), using (i) the
rasterization+intervalization approach of <cit.>,
after unifying Strong and Weak cells, (ii) the
FloodFill approach tailored for presented in Section
<ref>, and (iii) two versions of our novel OneStep
intervalization approach (Section
<ref>): one that performs a
point-in-polygon (PiP) test for each first cell c of a candidate Full
interval and one that checks the Neighbors of c before attempting
the PiP test.
Observe that our OneStep intervalization algorithm employing the
Neighbors check is the fastest approach in most of the cases.
OneStep (Neighbors) applies 40%-70% fewer PiP tests compared to
OneStep (PiPs) that does not apply the
Neighbors check.
Only in a few datasets containing relatively small polygons is OneStep (Neighbors) up to 24% slower than the FloodFill method.
On the other hand, in some datasets containing large polygons (e.g., T3, O6AF,
O6SA)
OneStep is up to one order of magnitude faster than FloodFill.
Both methods proposed in Section <ref> are
orders of magnitude faster compared to previously applied
rasterization techniques <cit.> mainly due to the simplicity
of compared to previous raster-based intermediate filters <cit.>.
Comparison to IDEAL
We also compared OneStep to the rasterization technique used in
IDEAL <cit.>, as implemented in <cit.>. We
modified IDEAL's granularity definition formula accordingly to match
's Hilbert space grid of order N=16. For such high
granularity, IDEAL demanded too much memory for most datasets and
crashed, so we could only run it for three datasets as shown in Table <ref>.
In all these cases, OneStep has 2x-3x lower cost compared to
IDEAL's rasterization approach.
Applicability of OpenGL rasterization
Finally, we have investigated the applicability of
GPU-based rasterization approaches
in the construction of approximations.
For this, we
tested an OpenGL
implementation that uses a GPU (NVIDIA GeForce RTX 3060)
and follows
the approach described in <cit.>
to identify Partial
and Full cells of a polygon on a raster.
OpenGL is an API that supports the graphics pipeline to perform
efficient rasterization and drawing of the raster cells (pixels) into a
frame buffer for visualization.
In addition to rasterization, requires
the retrieval of the cells' Hilbert
curve identifiers and cell type information to create interval lists.
Furthermore, OpenGL's rendering pipeline is designed to work with triangles, and thus we have to triangulate all our input polygons before rendering.
Finally, the resolution of the frame buffer plays a crucial role in
rasterization accuracy.
The frame buffer's resolution must
match the desired granularity (i.e., 2^16×2^16) of
approximations.
However, OpenGL does not allow frame buffers to have resolution
higher than 2^15× 2^15 pixels, so
approximations created using OpenGL are destined to have lower filter
effectiveness than if they were created using our CPU-based
methods (Section <ref>).
In addition, in our experiments, we have found that triangulation,
which is a pre-requisite of using OpenGL's rendering,
takes up 66% - 94% of the total rasterization time.
For example, triangulating the T3 dataset in its entirety takes around
160 seconds, which is already about 6x more expensive than the
end-to-end production of the approximations of all
objects in T3 using our OneStep approach (see Table
<ref>).
Overall, its limitations in setting an appropriate resolution and the
high costs for initializing and postprocessing its rasterization
process, make OpenGL-based construction suboptimal
compared to our CPU-based algorithms.
Table <ref> shows a time breakdown of the
OpenGL rasterization of dataset T3 for different frame buffer
resolutions.
Triangulation, data formatting, and Vertex Buffer Object (VBO) and Element Buffer Object (EBO) creation for efficient data buffering remain unaffected
by the frame buffer resolution increase and thus, were not included in
the table.
However, we have found that triangulation almost always dominates the
total rasterization time (65% to 90% of the total time).
Pixel Retrieval refers to the OpenGL call that copies the individual
pixel information from the frame buffer object to a CPU buffer. Finalization entails iterating over the pixels to create the
final cell lists with the Hilbert curve identifiers that are ready for
intervalization.
Both functions depend on the resolution, with the second one
becoming increasingly expensive as the number of pixels increases.
Although the actual Rasterization for pixel drawing is very fast, the
combined costs of Triangulation and Finalization for high-resolution
frame buffers, which are necessary to maintain intermediate filter
performance, make OpenGL rasterization sub-optimal for constructing
, since our OneStep (Neighbors) method (Section
<ref>) that does not use a GPU
is 6x-9x times faster for the same dataset.
§ RELATED WORK
Most previous works on spatial intersection joins <cit.> focus on the filter step of the join (denoted by MBR-join). They either exploit the pre-existing indexes <cit.> or partition the data on-the-fly and perform the join independently at each partition <cit.>.
Each partition-to-partition MBR-join can be performed in memory with the help of plane-sweep <cit.>.
Intermediate filters
Finer (but more space-consuming) approximations have been proposed to be used in an intermediate filter step that identifies true negatives and/or true positives, as described in Section <ref>. The first work in this direction <cit.> proposed the use of simple convex polygons (convex hull and the minimum bounding 5-corner convex polygon (5C)).
Another approach <cit.>
extends the MBR to capture the empty space around its corners, which may help in the detection of false positives.
Raster approximations of object MBRs have also been suggested, with a classification of the cells therein based on their coverage by the object <cit.>.
Recently, this approach has been improved in <cit.> to (i) apply on a global grid, (ii) represent the cells as intervals with bitcodes of the cell types, (iii) perform the intermediate filter as a specialized interval join, as described in Section <ref>.
A hierarchical raster approximation for window and distance queries was proposed in
<cit.>.
Raster approximations have also been combined with vector approximations in <cit.>. However, neither <cit.> nor <cit.> studied the spatial intersection join, for which the state-of-the-art intermediate filter is <cit.>.
Refinement step
Verifying whether two polygons overlap is CPU-intensive, requiring the application of an
intersection detection algorithm between sets of line segments and two point-in-polygon tests
<cit.>.
To speed it up, Brinkhoff et al. <cit.> suggest decomposing polygons into sets of trapezoids while <cit.> suggests alternative polygon decomposition approaches.
These techniques are orthogonal to , as they aim to speed up the refinement step, while reduces the number of candidate join pairs that require refinement.
Approximate spatial joins
The approximate representation of objects and approximate spatial query evaluation using space-filling curves was first suggested by Orenstein <cit.>.
Recent work explores the use of raster approximations for the approximate evaluation of spatial joins and other operations <cit.>.
<cit.> and our work are the first to approximate polygon rasterizations as intervals for exact spatial query evaluation.
Spatial joins on GPUs The widespread availability of programmable GPUs has inspired several research efforts that leverage GPUs for spatial joins <cit.>.
Sun et al. <cit.> accelerated the join refinement step by incorporating GPU rasterization as an intermediate filter.
This filter identifies only true negatives using a low resolution, and has thus limited pruning effectiveness.
Aghajarian et al. <cit.> proposed a GPU approach to process point-polygon and polygon-polygon joins for datasets that can be accommodated in GPU memory.
Liu et al. <cit.> also proposed GPU-accelerated filters to reduce the number of refinements.
These filters <cit.>, in contrast to , do not identify true hits, but rather focus on finding the intersection points between a candidate pair.
Furthermore, the above approaches <cit.> do not involve rasterization and rely on CUDA, which is exclusive to NVIDIA GPUs.
A recent line of work <cit.> proposes to use the GPU rasterization pipeline as an integral component of spatial query processing.
Doraiswamy et al. <cit.> introduced a spatial data model and algebra that is designed to exploit modern GPUs. Their approach leverages a data representation called canvas, which stores polygons as collections of pixels.
The canvas includes a flag that differentiates between pixels that lie on the boundary of the polygon and those that are entirely covered by it.
Although current-generation GPUs can handle millions of polygons at fast frame rates, the evaluation of spatial queries is still dominated by other costs, such as triangulating polygons and performing I/Os <cit.>.
Scalability in spatial data management
The emergence of cloud computing has led to many efforts to scale out spatial data management <cit.>.
SJMP <cit.> is an adaptation of the PBSM spatial join algorithm <cit.> for MapReduce.
Other spatial data management systems that use MapReduce or Spark and handle spatial joins include Hadoop-GIS <cit.>, SpatialHadoop <cit.>, Magellan <cit.>, SpatialSpark <cit.>, Simba <cit.>, and Apache Sedona <cit.>.
All the aforementioned systems focus only on the filter step of spatial joins.
§ CONCLUSIONS
We propose , an approximation technique for polygons,
to use as an intermediate filter in the spatial intersection join
pipeline.
Compared to previous approaches <cit.>, is (i) lightweight, as it
represents each polygon by two lists of integers that can be
effectively compressed; (ii) effective, as it typically filters the
majority of MBR-join pairs as true negatives or true positives; and
(iii) efficient to apply, as it only requires at most three linear
scans over the interval lists.
is a general approximation for polygons that can also be
used in selection queries, within-joins and joins between polygons and
linestrings.
We propose a compression technique for and customizations that
trade space for filter effectiveness.
Finally, we propose efficient construction techniques for
approximations, which greatly outperform rasterization-based techniques from previous work.
In the future, we plan to explore the
integration of in a spatial database system, investigate
further the problem of interval
join order optimization for for candidate join pairs,
and explore the effectiveness of for queries that
involve 3D objects (e.g., polytopes).
|
http://arxiv.org/abs/2307.00690v1 | 20230703000734 | ROAR: Robust Adaptive Reconstruction of Shapes Using Planar Projections | [
"Amir Barda",
"Yotam Erel",
"Yoni Kasten",
"Amit H. Bermano"
] | cs.GR | [
"cs.GR"
] |
§ INTRODUCTION
The field of machine learning has made significant progress in recent times, leading to remarkable accomplishments in the realm of 3D shape comprehension tasks, encompassing both analysis and synthesis. Since data-driven approaches mostly rely on large datasets to distill information, which tend to grow exponentially larger, the quality and nature of such datasets heavily influence the advancement of the field. Many of the popular and established large 3D asset datasets <cit.> as well as recently released ones <cit.>, consist of shapes with countless topological errors of all types. These include everything from non-manifoldness, through self-intersections, to shapes broken into overlapping parts and inconsistent normals (See <Ref>).
These errors greatly undermine the unlocked potential and usability of a significant part of 3D shapes and datasets available today. We argue that for these reasons, many if not all of the state of the art achievements that use these datasets relating to tasks such as classification and segmentation <cit.> and even generation <cit.> rely on 2D renders of the data instead of directly learning over the 3D shape.
To address these issues, and to further the usability of large (and messy) 3D mesh datasets, a suitable reconstruction operation is required.
Reconstruction techniques seek a different sampling of the same underlying shape, while maximizing faithfulness and balancing out or improving some desirable properties such as triangle count and tessellation quality.
In addition, most 3D processing operations, including modern learning-based techniques, expect a 2-manifold mesh as input <cit.> and hence topological validity is perhaps the most important aspect of reconstruction.
Reconstruction approaches employing regular volumetric sampling (i.e. a voxel grid), excel in providing guarantees on topological validity and naturally support shapes of arbitrary genus, but they also tend to exhibit sampling artifacts and fail to express fine details or large but thin parts due to resolution constraints (<cit.>). Surface-based approaches, on the other hand, that evolve an initial shape or directly manipulate the target, are more expressive in this sense <cit.>. However, as it turns out, such approaches require careful selection of hyper parameters per target, and in practice tend to be too slow to operate over large scale datasets due to global optimization steps. Recent appearance based approaches <cit.>, on the other hand, have shown that using differentiable 2D renderings of the source and target shapes is a powerful tool for shape reconstruction, offering better convergence. The 2D nature of this approach, however, can yield undesirable artifacts if not properly accounted for as demonstrated by <cit.>. Furthermore, optimizing fine features on the surface using the render based loss relies on vertex normals, which are of poor quality or do not exist at all in many practical cases. In addition, features with sub-pixel resolution (such as corners) are disregarded in the optimization (<Ref>). Lastly, many such techniques are prone to get stuck in local minima, critically damaging tessellation quality, especially if using loss terms involving global distances (e.g. Chamfer distance, see <Ref>). All these drawbacks are the reason such methods have not been used for the purpose of reconstructing and repairing large datasets as of today.
In this paper, we present ROAR — a practical and robust reconstruction solution accurate and reliable enough for complete dataset repair. Our proposed technique successfully retains topological correctness, produces high-quality triangulation while preserving faithfulness, and is implemented completely using an auto-diff library (PyTorch <cit.>) on the GPU.
Following recent literature <cit.>, our iterative approach evolves an initial shape by constantly correcting the mesh topology after every geometry update. This ensures mesh validity, self-intersection minimization, and efficient triangle allocation. The main challenge of taking this approach is given a triangle count budget to both assign enough resolution to detailed regions and to remove problematic triangles, such as those that cause local self-intersections.
After a pre-processing step, we first extract an initial mesh using an off-the-shelf reconstruction approach. Then, our method refines the proposed solution by alternating between geometric changes and topological corrections. The geometry is governed by both a 3D planar projection term <cit.> used as a novel loss function that better preserves geometric details, and a 2D image loss term that regularizes convergence both globally and locally. Each geometric iteration is followed by an edge collapse operation that prevents self-intersections from persisting and evolving. We then continue to add resolution to the mesh adaptively, using a novel expansion of a rapid self-intersection estimator <cit.>.
We implement a prototype of ROAR on the GPU, making computation an order of magnitude faster than on the CPU, its operations feasible, and the algorithm simple to integrate. We evaluate ROAR on reconstruction tasks, showing better reconstruction performance on a large-scale triangle soup dataset <cit.>. This includes consistent topological validity, better triangle quality, and better shape preservation compared to the state of the art. In addition, we demonstrate reconstructing a mesh directly from an implicit neural SDF (signed distance function), and show that our approach can express the same level of detail that uniform 3D sampling-based approaches (e.g., marching cubes) achieve, with significantly fewer sampling operations.
Lastly, we present ShapeROAR, a topologically valid yet still geometrically accurate version of the ShapeNet dataset <cit.>, rendering learning over shapes a simpler challenge.
In summary, the core contributions of this paper are:
* A novel reconstruction pipeline, adaptively allocating triangle resolution in required regions while maintaining topological validity and tessellation quality.
* ShapeROAR, a topologically valid reconstruction of ShapeNet.
* A planar projection operation formulated as a novel loss term, substituting shading gradients when unavailable (e.g. triangle soups).
* A novel face score criterion for rapidly detecting local shape variation in-situ.
An open sourced implementation and the cleaned dataset ShapeROAR will be made publicly available.
§ RELATED WORK
We chose to focus on specific methods that relate to ROAR by employing similar techniques or intending to solve a similar problem, while having an open-access implementation that can be used for comparisons. It is important to note, however, that despite providing proofs of unconditional robustness (output is 2-manifold, with no self-intersections), in practice all of the official implementations we found do not achieve this (<Ref>), emphasizing the gap that exists between theory and practice. Additionally, some prior studies cannot be directly applied to large-scale datasets due to the sheer amount of compute time taken per mesh. We discuss this issue in <Ref>.
§.§ Surface based
A common goal for reconstruction techniques is to resample the input domain to obtain a topologically valid triangulation while approximating the input shape as much as possible. The output of such techniques is useful for many downstream tasks such as simulations and UV texture unwrapping.
<cit.> iteratively construct a subcomplex of a 3D Delaunay triangulation, starting from a simple 3D Delaunay triangulation enclosing the input and iteratively removing eligible tetrahedra that lie on the boundary of the complex. The result is oriented, 2-manifold, and without self-intersections. More recently, <cit.> presents a surface-based technique to repair triangle soups that operates directly on the input, by first locally repairing patches using visual cues and later globally optimizing for a manifold mesh, formulated as an ILP. The main disadvantage is the extremely long running time, which precludes reconstruction of large-scale datasets, and the rather low triangulation quality of the results, as triangle quality is left unattended and remains similar to the input.
Our reconstruction pipeline also yields an oriented 2-manifold surface, and despite not having guarantees on self-intersections their amount is low in practice due to the criterion we impose (<Ref>). Our results better approximate the target for a given triangle budget, and can be obtained within a reasonable running time.
§.§ Volume based
Some techniques employ a change in representation approach, where the input is intersected with a voxel grid <cit.> and reconstructed using topological guarantees such as manifoldness and other desirable qualities. In Tet-Wild <cit.>, the authors explicitly deal with "messy" inputs by considering the input fundamentally imprecise, and performing tetrahedralization using a binary space partitioning tree of planes that wraps the input model, and later on optimizing its quality while preserving the validity. Despite not being directly used for surface reconstruction but rather volumetric mesh reconstruction, the explicit treatment of input as flawed makes this approach competitive in this task. To extract the surface, one may select the volumetric output boundary surface (i.e. faces incident to a single tetrahedron). <cit.> extend this method by performing incremental reconstruction, and achieves better performance times while using a floating point representation. Reconstructing a surface in this manner was determined to be disadvantageous by our experiments, especially considering the computational resources associated with it (as opposed to evolving a surface mesh directly).
§.§ Render based
In a fairly recent line of work, differentiable rendering based techniques showed interesting new directions in evolving shapes using appearance. In <cit.>, a robust and scalable neural rasterizer was developed and its application in shape reconstruction is shown over both synthetic and real data. In <cit.>, the authors develop a correction term for the render loss that allows convergence into an input shape from a sphere, with very little loss of high frequency detail that is usually associated with the usage of regularizer terms in the loss. These terms usually exist to prevent the optimized shape from folding onto itself and preventing self-intersections, due to the large and sparse gradients associated with the silhouette of the object. The authors show their geometry updates can be used with an offline reconstruction algorithm to evolve a sphere into a target mesh with intricate details. In <cit.>, a full algorithm for reconstruction addressing topological connectivity is presented, leveraging the render loss to augment the geometry, interleaved with topological steps that assure triangles are created when necessary. Despite not using the correction term from <cit.>, results are of high quality, but they require a large number of triangles to complete the shapes and there are no topological guarantees on the output. We leverage these past findings and improve upon the quality and topological properties of the output as well as being more efficient with triangle budget by allocating triangles adaptively to surface details. Additionally, in both works, it was observed that the gradients received from the differentiable renderer can be classified into to two groups: The aforementioned sparse and strong silhouette gradients, and the dense but weaker gradients that originate from the shading, called "shading gradients", which strongly depend on well oriented normals. Both methods rely on high quality shading gradients to function well, which makes them impractical for the reconstruction of meshes with topological defects.
§.§ Point Cloud based
Another approach using a different intermediate representation for reconstruction is to convert the input into a pointcloud by sampling, and reconstructing the surface. Discarding connectivity information is a lossy operation, and indeed reconstructing an already connected surface is not the bread-and-butter of such applications. However, it is useful to consider them as a baseline and to draw conclusions on how effective can surface reconstruction get with versus without connectivity. One traditional technique introduced by <cit.> casts the reconstruction task as a spatial Poisson problem and solves it using a linear sparse solver. This technique saw many improvements over the years, in particular, <cit.> incorporate the points as interpolation constraints and show improved reconstruction quality. <cit.> expand Poisson surface reconstruction by formulating it as a differentiable layer, enabling end-to-end optimization of watertight manifold surfaces. <cit.> attempt to solve the same problem using a deep neural network prior and a Beam Gap loss.
§ METHOD
As a reminder, ROAR leverages an iterative surface evolving approach. This choice was made deliberately to allow us to exploit recent advancements with 2D differentiable rendering techniques (allowing to preserve appearance), and to also ensure topological validity (2-manifold) by choice of initial shape and operations performed on it during evolution. The main challenges associated with such an approach are efficient allocation of triangles, and maintaining local and global validity (self-intersections). Our introduced pipeline is shown in <Ref>. The inputs to our method are the raw target shape to be reconstructed, an initial source mesh that will evolve, and a face budget to constrain the number of triangles in the output. After a preprocess step, we evolve the state of the source mesh iteratively, where each iteration consists of three blocks.
In the preprocessing step (<Ref>), we create the initial mesh by applying MANIFOLD <cit.> on the original target mesh. Additionally, we generate an oriented and cleaned point cloud of the target mesh required for our loss terms.
After preprocessing, we start evolving the initial mesh:
First, a geometry update step is performed (<Ref>), where vertex locations are adjusted according to the loss function. To handle triangle soups, we supplement the rendering loss with a novel added term, the Planar Projection Loss (<Ref>), offering a robust alternative to the common Chamfer Distance loss, and acting as a replacement for shading gradients (see <Ref>). We found it crucial for preserving sharp features and being robust to errors on the target. Additionally, we inhibit the creation of self-intersections by attenuating the amount of displacement a vertex can undergo according to the lengths of the edges emanating from it.
The second step, Face Collapse (<Ref>), identifies and removes self-intersections as they begin to form. Self-intersections that evolve too quickly can potentially elude this process and remain uncleaned. The aforementioned attenuation factor discourages this behavior.
The next step, Face Split (<Ref>), dictates where resolution should be added. To do this, we introduce the Face Score — an extension of the self-intersection estimator (<Ref>), that encodes local geometry changes - deciding whether triangles should be split according to the local shape of the target region they correspond to.
In the following, we start with preliminary work and continue describing the operation of each block in depth. Detailed descriptions of the hyper-parameters used are given in the supplementary material.
§.§ Preliminary
In <cit.>, an analysis of rendering-based gradients (of image pixels with respect to vertex locations) revealed two distinct "types" of gradients that contribute to vertex movements: shading gradients, which are smooth, dense, and occur from pixels inside shape primitives, and silhouette gradients, which are sparse, strong, and occur from pixels overlapping the border of the shape. Silhouette gradients were shown to interfere greatly with shape reconstruction, specifically because they induce local self-intersections by neighboring faces folding onto themselves.
<cit.> seek to evolve an initial sphere into a target shape using iterative geometric and connectivity updates driven by a differentiable render loss. One important insight is to continuously attempt to correct self-intersections that occur due to sparse silhouette gradients. An estimator for local self-intersections for a face f was used:
n_N· n_f < 0
where n_N is the mean of the face's vertex normals and n_f is the face normal. This estimator was also discussed in previous studies <cit.>, though not in the context of differentiable rendering. The advantage of using such an estimator is performance, and unlike other approaches which perform complete re-triangulation of the mesh when self-intersections occur <cit.>, this estimator can be used to prevent self-intersections in every vertex position optimization iteration. This allows us to continue operating on the same evolving surface. Another mechanism used by <cit.> to prevent local self-intersections is an attenuation factor for the vertex position updates. Namely, for a vertex position 𝐱, being optimized with the rotation-invariant ADAM formulation <cit.>, the update step becomes:
Δ𝐱 = α·ν· l_ref
where α is the learning rate, ν is the ADAM update rule (computed from the gradients of the loss with respect to 𝐱) and l_ref is a coefficient that is (initially) set to the length of the edges incident to 𝐱 and gets updated using the average ν. In a nutshell, this further restricts the movement of the vertex, effectively reducing the number of generated self-intersections. In addition, this term was argued to be an indicator of where new triangles are required, because ν, or the amount of change a vertex should undergo during the current iteration, is (indirectly) proportional to the local reconstruction error (as larger distances warrant larger gradients). In our experiments, we found this term to be too noisy, and propose a more geometrically motivated approach for adding resolution to the mesh.
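For concreteness, the folded-face estimator above can be evaluated in a few lines. The following NumPy sketch assumes a vertex array V of shape (n, 3) and a face array F of shape (m, 3), and takes vertex normals as normalized sums of incident area-weighted face normals; this normal weighting is one possible choice, not prescribed by the text.

import numpy as np

def folded_face_mask(V, F):
    # Face normals (unnormalized cross products; length equals twice the face area).
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    fn = np.cross(e1, e2)
    # Vertex normals: sum of incident (area-weighted) face normals, then normalize.
    vn = np.zeros_like(V)
    for c in range(3):
        np.add.at(vn, F[:, c], fn)
    vn /= np.linalg.norm(vn, axis=1, keepdims=True) + 1e-12
    # n_N: mean of the three vertex normals of each face; n_f: unit face normal.
    nN = vn[F].mean(axis=1)
    nf = fn / (np.linalg.norm(fn, axis=1, keepdims=True) + 1e-12)
    # A face is estimated to be part of a local self-intersection when n_N . n_f < 0.
    return np.einsum('ij,ij->i', nN, nf) < 0.0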
§.§ Move Vertices
Every iteration of the source mesh shape evolution starts by updating its geometry (vertex positions). The vertex positions are updated using a gradient-based optimizer (rotation-invariant ADAM), and we employ the following loss terms to drive them towards a good solution:
L(v) = L_Im(v_Source, v_Target) + λ_1 · L_Proj(v_Source, v_Target) + λ_2 · L_Proj(v_Target, v_Source)
where L_Im is a 2D image loss computed over different views of the source and target mesh using a differentiable rasterizer <cit.>. We use a flat-shaded, back-face-culled rendering of the geometry with color encoding indicating face normal directions (see supplementary for examples). These appearance properties yield strong signals that sharply change with surface geometry, and are less ambiguous than shading schemes that depend on a light source, a texture or vertex normals. L_Proj is our novel planar projection loss term that drives the geometry to better fit the target, especially in areas with pixel or sub-pixel resolution features. The projection operation driving this loss term is also used as a part of our face splitting process, and is described in further detail in <Ref>. λ_1 and λ_2 are coefficients to balance between the terms. Additionally, we wish to constrain vertex movements to avoid self-intersections. We note that the purpose of using the attenuation factor l_ref (<Ref>) is to prevent the formation of folded faces too quickly for them to be resolved. Instead, we propose a more geometrically oriented attenuation factor, which we call l_att, for the vertex updates:
l_att(v) = min_{e: v ∈ e} |e|
where e are the incident edges. This limits the maximum displacement a vertex can undergo in a single iteration, such that faces are not allowed to flip over, making sure that self-intersections are identified. Note that l_att must be calculated at every iteration, for all vertices. We do this by constructing a topological sparse matrix for the mesh and performing sparse-dense multiplication with the vertex tensor. Since these operations are all performed on the GPU, the run time overhead is acceptable, even when this term is calculated at every iteration. We find that our attenuation factor achieves better results (see <Ref>). After the update by the optimizer, to promote isotropic triangles, a global tangential smoothing step is performed <cit.>.
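The per-vertex attenuation factor can be gathered with a scatter-min over the edge list; the sketch below uses NumPy for brevity, whereas on the GPU the same reduction is expressed as the sparse-dense product mentioned above.

import numpy as np

def attenuation_factor(V, F):
    # Collect the three (undirected) edges of every face and their lengths.
    E = np.concatenate([F[:, [0, 1]], F[:, [1, 2]], F[:, [2, 0]]], axis=0)
    lengths = np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1)
    l_att = np.full(len(V), np.inf)
    # Scatter-min: each edge bounds the displacement of both of its end points.
    np.minimum.at(l_att, E[:, 0], lengths)
    np.minimum.at(l_att, E[:, 1], lengths)
    return l_att

# Attenuated update per vertex, Delta_x = alpha * nu * l_att(v):
# V = V + alpha * nu * attenuation_factor(V, F)[:, None]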
§.§ Face Collapse
During the evolution of the mesh, self-intersections may develop. We identify local self-intersections using the folded-face estimator posed in <Ref>. We resolve the found folded faces by collapsing them: we first calculate the Qslim score <cit.> for all edges of the folded faces. Intuitively, a high Qslim score means that the edge encodes salient geometry. Our Qslim formulation also includes checks for face quality and normal flipping after collapse. We then insert these Qslim scores into a priority queue, and collapse the edges according to manifold-preserving rules <cit.>. For simplicity, we collapse the edges using the subset strategy given in <cit.>, meaning we can only collapse an edge towards one of its endpoints. We end this step with a round of edge flips to balance the vertex valences; we threshold the edge flips on the dihedral angle of the edge, in order to avoid damaging reconstruction fidelity.
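A condensed sketch of the quadric (Qslim) cost used to rank edges is given below; it implements only the basic Garland-Heckbert error with the subset (endpoint) strategy, and omits the face-quality and normal-flip checks mentioned above.

import numpy as np

def vertex_quadrics(V, F):
    # Fundamental error quadric per vertex: sum of plane quadrics p p^T of the
    # incident faces, with p = (n, -n.q) for a face through point q with unit normal n.
    Q = np.zeros((len(V), 4, 4))
    n = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    d = -np.einsum('ij,ij->i', n, V[F[:, 0]])
    p = np.concatenate([n, d[:, None]], axis=1)            # (m, 4)
    P = p[:, :, None] * p[:, None, :]                      # (m, 4, 4)
    for c in range(3):
        np.add.at(Q, F[:, c], P)
    return Q

def subset_collapse_cost(Q, V, i, j):
    # Subset strategy: the edge (i, j) may only collapse onto one of its end points.
    Qe = Q[i] + Q[j]
    costs = []
    for v in (V[i], V[j]):
        vh = np.append(v, 1.0)
        costs.append(vh @ Qe @ vh)
    return min(costs)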
§.§ Face Split
The Face Split block is responsible for splitting triangles in areas of interest. There is a great degree of freedom in how and where new triangles should be added. To this end, we pursue a subdivision scheme that allows us to locally increase resolution in a parallel manner (for performance), and a score function for indicating where it should be done. The subdivision strategy we found most suitable for our purposes is the √(3) subdivision <cit.>, which inserts a vertex at the barycenter of a triangle and connects it to all its vertices. This scheme allows us to split an arbitrary subset of the triangles and maintain a valid 2-manifold triangle mesh. The ADAM parameters of the newly created barycentric vertices are simply the average of their parent vertices' ADAM parameters. In fact, this plays well with general machine learning applications, where if any learnable signal is carried on the vertices of a triangle that is split, the new vertex can simply inherit the weighted average of those signals naturally. As for the determination of which faces to split, clearly, a good score function must rely both on the source mesh and the target shape. However, since no correspondence exists between them, and the target is noisy, we propose a scoring function that relies on a novel extension of the self-intersection criterion <Ref> imposed on a face:
C(f) = 1 - n_N · n_f if n_N · n_f > 0, and C(f) = 0 otherwise.
Under this definition, when C(f) nears 1, the face f encodes a surface with more locally varying geometry. Note that when n_N · n_f < 0 the face is estimated to be part of a self-intersection.
As a reminder, our goal is to score the source mesh faces for splitting. Applying the score function to the source mesh faces directly is an option, but this has two downsides: first, it assumes the geometry is as perfect as it can be, i.e., that the source mesh vertices have reached some steady-state local minimum in relation to the target shape. This assumption is simply untrue in the general case (and especially initially).
Second, the target mesh, which could be used as a guiding prior, is not taken into account at all. Thus, a more elaborate application of the score function is necessary. To this end, we apply a regular face super-sampling operation on the source mesh <cit.>, which creates smaller triangles, all having the same area, and allows fast computation of integral quantities over the parent face.
We then follow by the projection of the smaller triangles' vertices to the target mesh. By examining the resulting score of the projected super-sampled faces, we can infer the areas where more resolution is necessary on the source mesh (see <Ref>). This takes into account both the source mesh current state, as well as the target shape.
The super-sampled faces' scores are then pooled to their parent faces:
FS(f) = ∑_iC(Φ(f_i))
where Φ is the projection function described below.
In the final stage of this block, we promote a better average vertex valence by a global manifold-preserving edge flip operation.
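The barycentric split applied to an arbitrary subset of faces can be sketched as follows; the inheritance of ADAM moments by the new vertices and the subsequent edge flips are omitted, and the function name is illustrative.

import numpy as np

def split_faces(V, F, mask):
    # `mask` is a boolean array over faces. Insert a vertex at the barycenter of
    # every selected face and connect it to the face's three corners; unselected
    # faces are kept unchanged. The result remains a valid 2-manifold mesh.
    sel = F[mask]
    centers = V[sel].mean(axis=1)
    new_idx = np.arange(len(V), len(V) + len(sel))
    V_out = np.concatenate([V, centers], axis=0)
    tri = []
    for (a, b, c), m in zip(sel, new_idx):
        tri += [[a, b, m], [b, c, m], [c, a, m]]
    tri = np.asarray(tri, dtype=F.dtype).reshape(-1, 3)
    F_out = np.concatenate([F[~mask], tri], axis=0)
    return V_out, F_out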
§.§.§ Planar Projection
As mentioned, the super sampled faces' vertices are projected onto the target to approximate local geometry changes.
The natural option for performing such a projection is to find the nearest point on the mesh for every vertex of the super sampled faces. This operation however, is computationally intensive, with no real benefit over the discrete approximation counterpart (see supplementary). A common fast approximation is to fairly sample the target surface, and project a query point to its nearest sample (or the average of several nearest samples) during operation.
Unfortunately, due to the injective nature of this approximation, it causes degenerate artifacts and acute overlaps, especially when used as a loss term <cit.>, which leads to inferior surface estimation (see <Ref>).
Instead, we propose two augmentations to the process. In order to preserve triangulation quality and discourage overlaps and the local minima typical of Chamfer-based projections, we project the source points only along their normal directions. This means source vertices generally cannot be projected exactly onto the nearest sampled target points (or their average). We propose projecting to the plane each sampled target point locally approximates (see <Ref>), instead of to the point itself:
given a point v on the source mesh with normal n̂_v, we first find its K nearest neighbors on the target (and their normals n̂_k) to compute a support s_k which is the projection of v to the plane defined by each such neighbor, in the direction n̂_v:
d = ((k_i - v) · n̂_k_i) / (n̂_v · n̂_k_i),
s_k_i = v + d · n̂_v
The final projection point is computed as an average of all supports, weighted by their distance to v:
w_i = ||s_k_i - v||_2
P(v) = ( ∑_i w_i · s_k_i ) / ( ∑_i w_i )
We call this operation Planar Projection, and found it crucial in preserving triangulation quality, adhering well to sharp features, and being robust to errors on the target (see <Ref>).
Additionally, note the steps to compute P(v) are completely differentiable with respect to v. We take advantage of this in the Move Vertices step (<Ref>), by defining a loss term ||v - P(v)||_2 which snaps vertices into corners in areas with sub-pixel resolution features. In those areas, the render loss roughly performs random perturbations of the vertices (i.e. gradients are small, and random), but given dense enough sampling, the Planar Projection term is minimized when vertices are exactly on corners or borders (an intersection of two or more planes). A good analogy for this is quicksand: the more jiggling action occurs, the more the vertex sinks into corners. Note that the Planar Projection Loss by itself can never achieve this as P(v) is always in the direction of n̂_v, which rarely points towards the corner. See <Ref> for a more visual explanation.
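A direct NumPy sketch of the planar projection is given below; it assumes the target samples and their unit normals are given as arrays, uses scipy's cKDTree for the K-nearest-neighbor query, and adds a small guard against near-parallel planes (an implementation detail not specified above).

import numpy as np
from scipy.spatial import cKDTree

def planar_projection(v, n_v, target_pts, target_nrm, K=4):
    # K nearest sampled target points and their unit normals.
    _, idx = cKDTree(target_pts).query(v, k=K)
    k_pts, k_nrm = target_pts[idx], target_nrm[idx]
    # Project v along its own normal n_v onto each neighbor's tangent plane.
    denom = k_nrm @ n_v
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)    # guard near-parallel planes
    d = np.einsum('ij,ij->i', k_pts - v, k_nrm) / denom
    supports = v + d[:, None] * n_v                        # the supports s_{k_i}
    # Distance-weighted average of the supports gives P(v).
    w = np.linalg.norm(supports - v, axis=1) + 1e-12
    return (w[:, None] * supports).sum(axis=0) / w.sum()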
§.§ Mesh Preprocessing
The purpose of the preprocessing step is to create an initial mesh to begin optimization and an oriented point cloud used for our projection scheme.
§.§.§ Rendering
In real data, it is not uncommon to have meshes that are ill-oriented. This usually occurs due to the nature of the data formation (e.g. a designer who did not care for orientation, a reconstruction algorithm, or a result from an actual scan). Despite the fact many reconstruction algorithms treat face orientation as a feature of the input <cit.>,
this poses a problem when rendering the mesh, as the renderer can wrongly render back-facing triangles, or not render front-facing triangles, causing holes and overlap artifacts to appear in the rendering. To solve this, we first remove all duplicate faces (i.e. permutations of the same vertices). We then duplicate every face but with flipped orientation - these two steps guarantee every face has exactly one twin face facing the other direction, with the visible face being rendered, and the other culled by the renderer. Additionally, we normalize the mesh by moving the average vertices' location to the origin and scaling them to a unit sphere.
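The rendering preprocessing reduces to a few array operations; the following sketch removes duplicate faces (any permutation of the same vertex triple), appends flipped twins, and normalizes the vertices, under the assumption that V and F are NumPy arrays.

import numpy as np

def prepare_for_rendering(V, F):
    # Remove duplicate faces (any permutation of the same three vertices).
    key = np.sort(F, axis=1)
    _, keep = np.unique(key, axis=0, return_index=True)
    F = F[np.sort(keep)]
    # Duplicate every face with flipped orientation, so each face has exactly one
    # back-facing twin; back-face culling then never opens holes in the rendering.
    F = np.concatenate([F, F[:, ::-1]], axis=0)
    # Normalize: move the mean vertex location to the origin, scale into the unit sphere.
    V = V - V.mean(axis=0)
    V = V / (np.linalg.norm(V, axis=1).max() + 1e-12)
    return V, F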
§.§.§ Mesh Initialization
We use MANIFOLD <cit.> on the normalized target mesh, to get an initial mesh for optimization which is 2-manifold by construction. Unfortunately, MANIFOLD's output occasionally contains non-manifold vertices. We use MeshLab's <cit.> non-manifold vertices cleaning filter to fix this by splitting these vertices. After this, the initial mesh is guaranteed to be 2-manifold.
§.§.§ Sampling
Our planar projection loss (<Ref>) utilizes a point-cloud-to-mesh projection scheme which relies on sampling the target mesh. These samples include both the point location and its parent face normal, which means ill-oriented faces pose a problem to this projection. To fix this, we start from the preprocessed mesh created for rendering (<Ref>), and pass it through an orientation procedure resembling <cit.>: we render the mesh from 36 different viewpoints evenly spaced on a sphere, and count the number of pixels each face was visible from. We remove a face from the mesh if it doesn't meet a certain count threshold (see supplementary for further details), but not if the count is zero. Note that this means we don't keep obviously non-visible faces, but do preserve very small faces and internal structures. Then, we sample points uniformly at random on this mesh. To further clean the sampled point cloud, we use the initial mesh (<Ref>). Our insight here is that since MANIFOLD outputs adhere to the general outline of the shape but are "inflated", we can use them to decide if a sampled point should be pruned or not.
We first project each sample point i to the initial mesh, obtaining p(i), and denote the projection vector length as l_i, the sampled point's parent face normal as n_i and the projected point's normal as n_p(i). We then perform the following procedure:
We raycast a beam of N rays (see supplementary for hyper-parameters settings used) from each sampled point in the direction of its projected point and check for intersections with the target mesh. If more than half the rays cast from a sampled point i have a length larger than l_i, then we keep the sampled point (<Ref>).
Additionally, to keep orientation consistent, we flip the sampled point normal for each point i for which n_i· n_p(i) < 0.1.
Finally, we project points from the initial mesh to the target mesh, in order to better capture sharp edges on the target mesh; see the supplementary for full details.
§ RESULTS
§.§ Reconstruction in-the-wild
In order to show our method's ability to reconstruct realistic 3D mesh data, we experiment with reconstructing the ShapeNet dataset <cit.>. A statistical breakdown of some metrics can be seen in <Ref>. Additionally, a randomly selected subset of 10 meshes per class was used for quantitative comparisons (as many of the other methods were too slow to process a larger portion of the data). Results can be seen in <Ref> and <Ref>.
The triangle meshes in the ShapeNet subset are particularly hard to process due to their low quality: despite the seemingly coherent appearance of the models, they suffer from acute connectivity problems, including disconnected components, self-intersections, inconsistent normals, duplicated and degenerate faces, unreferenced vertices and highly irregular vertex neighborhoods. To deal with these issues, we pass all the meshes through a cleaning pre-processing step described in <Ref>. We allow other methods to benefit from this cleaning scheme as well if it was observed to improve their results, and for point-cloud-based methods we input the cleaned and oriented point cloud produced by our method. For fairness, the same point cloud is used for our projection step in the Face Split block and the planar projection loss (<Ref>). The initial source mesh used in our method for optimization is created using the procedure described in <Ref>. We observed better results when setting our triangle budget to a high value (50k faces) and decimating it <cit.> as a post-process step to meet a certain budget (see supplementary for additional detail). Other methods were also decimated using the same method if it proved beneficial for their results rather than setting a target budget. Some methods were allowed to run with an unconstrained triangle budget if they do not support setting a budget directly and decimation was observed to significantly harm their performance. We found the volumetric-based methods to be competitive, but they fail to deliver on their promise of 2-manifold results. Other methods do not capture the surface geometry as well or take a large amount of time (∼ 10 hours) to process a mesh <cit.>.
Range Scan Reconstruction. In addition, we have experimented with range scans from the Stanford Scan Repository <cit.> (see supplementary for qualitative results). These scans are typically quite noisy, and lack regions where the scanning operation failed, leaving holes in the scans' resulting geometry. Our preliminary experiments showed promising results, reconstructing the shape reasonably well. This suggests our method is quite robust to rendering artifacts.
§.§ Neural SDF Reconstruction
Since our method relies on a render loss and a valid projection operation, it can be easily extended to work with signed distance functions, as their zero level set can also be rendered with normals encoded as color, and the projection of a point to a level set consists of computing the SDF and normal: v_proj = v - (∇SDF(v)/||∇SDF(v)||) · SDF(v). We used a set of volSDF <cit.> models, trained over a subset of the BlendedMVS <cit.> dataset, and compared our method's results to using a (double) marching cubes approach as implemented by the original authors for a given triangle budget. Results can be seen in <Ref> and <Ref>. The double MC approach entails performing a coarse MC reconstruction to eliminate disconnected components and to shrink the bounding box around the object of interest, followed by a fine-resolution MC reconstruction yielding the final result. It was decimated to meet the triangle budget using Qslim <cit.>. We used the same camera viewpoint parameters as provided in the dataset. To render our target from different views, we implemented a simple sphere tracer and used its renders as the target images for the L_Im loss term in <Ref>. The planar projection term L_Proj was replaced with the mean distance between all vertices and their projection to the zero level set. Notice that while our results are on par in terms of Chamfer distance, our method performs roughly an order of magnitude fewer SDF evaluations (due to rendering the SDF and to projection) with no optimizations, compared to the 512^3 grid used for MC. In other words, our method is more efficient in extracting roughly the same information. Note that the Robot scene had a large piece of its wing missing in the ground-truth mesh. The neural SDF did capture it and we reconstructed the wing fully (yet got penalized for it), as opposed to marching cubes, which failed to reconstruct it and achieved a higher score.
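The level-set projection used in this setting can be sketched with automatic differentiation; below, an analytic sphere SDF stands in for a trained volSDF network, and the gradient is obtained with torch autograd.

import torch

def sdf_sphere(x, radius=1.0):
    # Stand-in for a trained neural SDF; any callable returning SDF values works.
    return x.norm(dim=-1) - radius

def project_to_level_set(v, sdf):
    # v_proj = v - SDF(v) * grad SDF(v) / ||grad SDF(v)||
    v = v.clone().requires_grad_(True)
    phi = sdf(v).sum()
    (g,) = torch.autograd.grad(phi, v)
    g = g / (g.norm(dim=-1, keepdim=True) + 1e-12)
    return (v - sdf(v)[..., None] * g).detach()

pts = torch.randn(8, 3)
print(sdf_sphere(project_to_level_set(pts, sdf_sphere)))   # ~0 for all points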
§.§ Ablations
A quantitative ablation over 11 high-resolution watertight meshes was conducted (see <Ref>), where the target face budget was set to 10k. We chose these meshes because they allow for better quantitative evaluation of shape reconstruction rather than using a triangle soup as input, where the underlying shape is ill-defined. Because the meshes were clean and of a relatively organic nature (see supplementary), the planar projection loss term was not used for optimization and its effects were tested separately (<Ref>). We froze all hyperparameters and computed the image loss (16 novel viewpoints) and Chamfer distance (200k samples) from the ground truth in the following settings: Full - our full pipeline engaged, Silhouette - the rendering is performed with binary silhouettes only instead of normals encoded as colors, Half Views - we render only 18 views instead of 36, No T. Smooth - we do not perform tangential smoothing nor edge flips, No FC - we do not run the Face Collapse block, No l_att - we set l_att=1, effectively removing it, l_att CLC - we set l_att = l_ref (<Ref>), Max Dist - we replace the criterion in <Ref> with the maximum distance of super-samples to the target samples and use that as face scores, which proved to be a much poorer estimator of local geometry changes.
§.§.§ GPU speedup
To demonstrate that our system is optimized for GPU usage, we ran several examples both on the GPU (GeForce RTX 2080 Ti) and CPU (Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz) and quantified the run-time difference. We selected 5 random ShapeNet meshes from different classes, and ran our pipeline for 5 different face budgets. The results are shown in <Ref>.
§ CONCLUSIONS
ROAR is the first approach to offer a full GPU-based mesh evolution process that is topologically error-free in practice. The differentiable nature of our system makes it possible to use it in conjunction with advanced solvers such as ADAM.
Our main result, ShapeROAR, is both important on its own for the field of geometric deep learning, and demonstrates the unprecedented robustness and feasibility of our approach. As shown by our experiments, our carefully designed steps fit together like puzzle pieces to yield the right balance between topological correctness, reconstruction quality, and triangulation quality.
We believe this robust approach of mixed 2D and 3D considerations, and a mechanism to control and correct the triangulation during mesh evolution, is mature and stable enough to facilitate meaningful 3D learning. In light of the largely available yet broken shape datasets, we anticipate a comeback for the triangular mesh to the forefront of geometric deep learning, fully unlocking the potential of this representation.
§.§ Limitations and Future Work
As is typical for all render-based techniques, inner structures are overall less accurate than visible ones (<Ref>, Top). We partially mitigate this problem by using a 3D loss term, but it works best when combined with a render loss, and thus the final inner structures are not as accurate. This could possibly be improved by cleverly rendering from within enclosed spaces or by depth peeling.
Additionally, leveraging a volume-based initialization has one significant drawback: very thin structures are skipped. Since our method does not alter the topology of the initial source mesh, this leads to artifacts in the reconstruction (<Ref>, Bottom). This may be improved by leveraging a different initialization, but warrants further investigation into schemes that lend themselves well to render-based optimization.
Lastly, since we used PyTorch <cit.> to implement every step of our pipeline, topological operations are differentiable with respect to the vertex locations. This means back-propagating through them is possible, as demonstrated by our use of the projection both as a geometric predicate and as a loss. Ultimately, this property can be used to decide where faces should be split using a learning method. We hope to leverage this for learning over topological operations jointly with geometry optimization, as we believe this is key to improve deep learning applications over meshes.
§ ACKNOWLEDGMENTS
This work was partially supported by Len Blavatnik and the Blavatnik family foundation, the Yandex Initiative in Machine Learning, ISF (number 1337/22), and BSF (number 2020280)
|
http://arxiv.org/abs/2307.02872v1 | 20230706091605 | Spin and orbital Edelstein effect in a bilayer system with Rashba interaction | [
"Sergio Leiva M.",
"Jürgen Henk",
"Ingrid Mertig",
"Annika Johansson"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
[email protected]
Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, D-06099 Halle (Saale), Germany
Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, D-06099 Halle (Saale), Germany
Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, D-06099 Halle (Saale), Germany
Max Planck Institute of Microstructure Physics, Halle, Germany
The spin Edelstein effect has proven to be a promising phenomenon to generate spin polarization from a charge current in systems without inversion symmetry. In recent years, a current-induced orbital magnetization, called orbital Edelstein effect, has been predicted for various systems with broken inversion symmetry, using the atom-centered approximation and the modern theory of orbital magnetization. In this work, we study the current-induced spin and orbital magnetization for a bilayer system with Rashba interaction, using the modern theory of orbital magnetization and Boltzmann transport theory in relaxation-time approximation. We found that the orbital effect can be significantly larger than the spin effect, depending on the model parameters. Furthermore, the Edelstein response can be enhanced, suppressed, and even reversed, depending on the relation of the effective Rashba parameters of each layer. A sign change of the orbital polarization is related to an interchange of the corresponding layer localization of the states.
Spin and orbital Edelstein effect in a bilayer system with Rashba interaction
Annika Johansson
August 1, 2023
=============================================================================
§ INTRODUCTION
In recent years, spintronics has made remarkable progress in developing novel non-volatile, energy-efficient devices that exploit the charge and spin degrees of freedom of an electron <cit.>. This progress is attributed, in part, to the fact that charge and spin can be interconverted. The most common and intensively studied examples of these conversion effects are the spin Hall effect (SHE) <cit.> and the Edelstein effect (EE) <cit.>. In the SHE, a charge current generates a transverse spin current due to spin-orbit interaction, and vice versa, a spin current generates a transverse charge current via the inverse spin Hall effect. Conversely, via the EE or inverse spin-galvanic effect <cit.>, a charge current causes a non-equilibrium spin density in systems with broken inversion symmetry, such as surfaces or interfaces, also known as current-induced spin polarization. The first and most common systems for which the EE has been predicted are systems with Rashba spin-orbit coupling <cit.>, where the spin polarization typically arises perpendicular to the current direction. This effect can also occur in Weyl semimetals <cit.>, chiral materials <cit.>, oxide interfaces <cit.>, topological insulators <cit.>, and other quantum materials <cit.>. Similarly, an injected spin current generates a net charge current in these systems via the Onsager reciprocal of the EE, the inverse Edelstein effect (IEE) <cit.>. The importance of these effects for spintronics is due to the ability to create and control spin currents and spin polarization in a non-magnetic material solely from an applied charge current and vice versa.
The magnetization induced via the EE comprises contributions from spin and orbital moments <cit.>. For many years, the orbital contribution was considered negligible, since for many ferromagnets – such as Fe, Co, and Ni – the orbital magnetization is less than 10% of the spin magnetization <cit.>. However, recent studies have found that the orbital moments' contribution to transport effects, such as the orbital Hall effect (OHE) <cit.> and the orbital Edelstein effect (OEE) <cit.>, can be comparable to or even larger than their analog spin effects, SHE and spin Edelstein effect (SEE), respectively. However, the calculation of the orbital magnetization (OM) is non-trivial in translationally invariant systems, since the position operator is not well-defined <cit.>. In order to avoid this problem, the angular-momentum operator is evaluated in disjunct spheres around the atoms. This standard method, known as the atomic-center approximation (ACA), provides accurate and computationally efficient results for a wide range of materials. A more precise and complete alternative including the nonlocal contributions is the so-called modern theory of orbital magnetization <cit.>, proposed for translationally invariant materials <cit.>.
The modern theory of OM has been implemented in several density-functional theory codes <cit.> and tight-binding models <cit.>, primarily to study bulk ferromagnetic materials and heterostructures. However, the need for translational invariance of the modern theory implies a problem for interfaces and, generally, two-dimensional (2D) systems. Recently, the modern theory has been extended to treat the OEE in polar metals, insulator surfaces, and semi-infinite systems <cit.>.
In this work, we apply this refined theory to a two-dimensional electron gas (2DEG) with Rashba spin-orbit coupling in a bilayer system. Due to the asymmetry between the layers and the interlayer interaction, the motion of the electrons can be regarded as closed loops of an electrical current that allow for an in-plane OM. Although the Rashba 2DEG has been the first system for which the SEE has been predicted, its OEE has not been discussed yet, particularly not within the modern theory of OM. By extending this paradigm Edelstein system to two coupled layers and applying the modern theory of OM, we examine the OEE and the SEE with respect to their dependence on the model parameters, and we reveal the role of layer localization of the electronic states.
This Paper is organized as follows. In Section <ref>, we set the expressions and overall framework for the spin and orbital contributions to the current-induced magnetization using a semiclassical Boltzmann approach in a two-dimensional system. In Section <ref>, we calculate the spin and orbital moments for a bilayer system with Rashba interaction. In Section <ref>, we discuss the spin and orbital Edelstein effects, their dependence on the model parameters, and potential applications in real materials. Finally, we conclude in Section <ref>.
§ CURRENT-INDUCED SPIN AND ORBITAL MAGNETIZATION IN A 2D ELECTRON GAS
At zero temperature, the magnetic moment per unit cell m⃗ in terms of spin and orbital contributions is given by
m⃗ = - (μ_B/ħ) (A_0/A_s) ∑_{n,k} f_{n,k} ( g_s ⟨s⟩_{n,k} + g_l ⟨l⟩_{n,k} ),
where A_0 is the area of the unit cell and A_s is the area of the entire system. μ_B is the Bohr magneton, ħ is the reduced Planck constant, and f_{n,k} is the non-equilibrium distribution function. g_s and g_l are the spin and orbital g-factors, respectively. ⟨s⟩_{n,k} and ⟨l⟩_{n,k} are the expectation values of the spin and orbital angular momentum, respectively, and n and k indicate the band index and crystal momentum.
Solving the linearized Boltzmann equation within the constant relaxation-time approximation, the distribution function in the presence of an external electric field E is f_{n,k} = f^0_{n,k} + e τ (df/dϵ)|_{ϵ = ϵ_{n,k}} v_{n,k} · E, where f^0_{n,k} is the Fermi-Dirac distribution function, v_{n,k} is the group velocity, e is the absolute value of the electron's charge, and τ is the constant relaxation time.
The expectation value of the spin moment is
⟨s⟩_{n,k} = ⟨Ψ_{n,k} | ŝ | Ψ_{n,k}⟩,
where ŝ is the spin operator and |Ψ_{n,k}⟩ is an eigenstate of the Hamiltonian. Within the modern theory of orbital magnetization <cit.>, the expectation value of the orbital moment is defined as <cit.>
⟨l⟩_{n,k} = (i e / (2 μ_B g_l)) ⟨ ∂u_{n,k}/∂k | × (ℰ_k - Ĥ_0) | ∂u_{n,k}/∂k ⟩,
where ℰ_k is the band energy, Ĥ_0 is the Hamiltonian of the system, and |u_n⟩ is the lattice-periodic part of the Bloch function. The derivative of the eigenvectors in Eq. (<ref>) is avoided in
⟨l⟩_{n,k} = (i e / (2 μ_B g_l)) ∑_{m (≠ n)} [ ⟨u_n | ∂Ĥ_0/∂k | u_m⟩ × ⟨u_m | ∂Ĥ_0/∂k | u_n⟩ ] / (ϵ_n - ϵ_m)
(n and m band indices) which does not yield all components of the OAM in 2D systems since the out-of-plane component of k is not defined. This problem is avoided by replacing <cit.>
⟨ u_n | ∂ H_0/∂ k_z | u_m⟩ = i(ϵ_n - ϵ_m) ⟨ u_n | z | u_m⟩ .
Here, the system is assumed finite in the z-direction. In the following, k is a 2D vector, and z is the out-of-plane component of the position operator.
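As an illustration, the sum-over-states expression above, together with the k_z-derivative replacement, can be evaluated numerically at a single k point. The sketch below takes the Bloch Hamiltonian H, its in-plane derivatives dHx and dHy, and the position operator Z as plain matrices (to be supplied by a concrete model such as the bilayer of the next section), and sets e = μ_B = g_l = 1 for brevity.

import numpy as np

def orbital_moments(H, dHx, dHy, Z, e=1.0, muB=1.0, gl=1.0):
    # Eigen-decomposition of the Bloch Hamiltonian at one k point.
    eps, U = np.linalg.eigh(H)
    Ax = U.conj().T @ dHx @ U          # <u_n| dH/dk_x |u_m>
    Ay = U.conj().T @ dHy @ U          # <u_n| dH/dk_y |u_m>
    Zm = U.conj().T @ Z @ U            # <u_n| z |u_m>
    n_b = len(eps)
    l = np.zeros((n_b, 3))
    for n in range(n_b):
        acc = np.zeros(3, dtype=complex)
        for m in range(n_b):
            if m == n or abs(eps[n] - eps[m]) < 1e-12:
                continue
            # k_z derivative replaced by i (eps_n - eps_m) <u_n| z |u_m>, as above.
            Az_nm = 1j * (eps[n] - eps[m]) * Zm[n, m]
            Az_mn = 1j * (eps[m] - eps[n]) * Zm[m, n]
            A = np.array([Ax[n, m], Ay[n, m], Az_nm])
            B = np.array([Ax[m, n], Ay[m, n], Az_mn])
            acc += np.cross(A, B) / (eps[n] - eps[m])
        l[n] = (1j * e / (2 * muB * gl) * acc).real
    return eps, l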
Finally, we define the Edelstein susceptibility tensor in the linear-response regime as <cit.>
m⃗ = (χ^s + χ^l) E⃗ = χE⃗,
where χ^s, χ^l and χ are the spin (s), orbital (l) and total Edelstein susceptibilities, respectively; E is the applied electric field.
§ MODEL
We consider a semi-infinite system with two Rashba layers at its surface (or interface to a substrate), labeled A and B. Each layer is described by a 2D Rashba Hamiltonian and coupled with a spin-independent interaction. The corresponding Hamiltonian of the two-layer system is
H = [ H_A T; T H_B ]
where
H_l = ħ^2 k^2/2m_l + α_l (z×k) ·σ, l = A, B,
are the Rashba Hamiltonians <cit.>, with m_l and α_l being the effective mass and Rashba parameter of layer l = A, B, respectively. z is the unit vector along the surface normal, σ = (σ_x, σ_y, σ_z) are the Pauli matrices, so the spin operator in Eq. (<ref>) is ŝ = (ħ/2) σ ⊗ 𝕀_{2×2}. The interaction between the layers is modeled via the hopping matrix T = t 𝕀_{2×2}, with t the interlayer hopping. The out-of-plane operator is defined as z = (c/2) diag(1, -1) ⊗ 𝕀_{2×2}, with c the distance between the layers.
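For reference, the bilayer Hamiltonian, the spin operator and the out-of-plane position operator can be assembled as small matrices in the basis (|A↑⟩, |A↓⟩, |B↑⟩, |B↓⟩); the parameter values in the sketch below are placeholders, not the ones used in the figures.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bilayer_rashba(kx, ky, mA=1.0, mB=1.0, aA=1.0, aB=0.5, t=0.1, hbar=1.0):
    # Single-layer Rashba blocks coupled by a spin-independent hopping t.
    rashba = -ky * sx + kx * sy                  # (z x k) . sigma
    k2 = kx**2 + ky**2
    HA = hbar**2 * k2 / (2 * mA) * s0 + aA * rashba
    HB = hbar**2 * k2 / (2 * mB) * s0 + aB * rashba
    T = t * s0
    return np.block([[HA, T], [T, HB]])

def spin_and_z_operators(c=1.0, hbar=1.0):
    # Spin: identity in layer space times (hbar/2) sigma; z: +c/2 on layer A, -c/2 on layer B.
    S = [hbar / 2 * np.kron(s0, s) for s in (sx, sy, sz)]
    Z = c / 2 * np.kron(np.diag([1.0, -1.0]).astype(complex), s0)
    return S, Z

print(np.linalg.eigvalsh(bilayer_rashba(0.3, 0.0)))   # four Rashba-split bands at this k point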
The band structure shows two pairs of Rashba-type bands (Fig. <ref>), split by 2t at k = 0. The magnitude of the spin and orbital moments is constant along iso-energy lines, with their orientation locked perpendicular to k. The spin moment presents a k-independent magnitude (see color in Fig. <ref>a) with a fixed sense of rotation per band. The texture of the orbital moment shows a more complex k-dependent magnitude and a reversal in the sense of rotation as a function of energy (see color in Fig. <ref>b).
The Rashba parameter of a layer can be associated with a potential gradient perpendicular to the interface,
α_R ∝∫ |Φ(z)|^2 ∂ V(z)/∂ z d^3r,
with Φ(z) the z-dependent part of an eigenstate <cit.>, but has been shown to be affected by other factors as well, e.g. by an in-plane potential gradient <cit.>. Therefore, by having different Rashba parameters per layer we can simulate a layer-dependent interface potential gradient, whereas to study a heterostructure different effective masses and Rashba parameters are needed.
In general, Equation (<ref>) cannot be diagonalized analytically, except for particular parameter combinations. In the following, we will focus on two particular cases: firstly, equal effective mass but different Rashba parameters (m_A = m_B, α_A ≠ α_B), and secondly different effective masses but equal Rashba parameters (m_A ≠ m_B, α_A = α_B).
§.§.§ Equal effective masses
Assuming m_A = m_B ≡ m the dispersion relation yields
ℰ^n_1, n_2 (k) = ħ^2 k^2/2m + n_1/2 |α_+| k + n_2/2√(α_-^2 k^2 + 4t^2)
with n_1, n_2 = ± 1 and α_± = α_A ±α_B. n_1 indicates the shape of the band, either a V-shape for the inner band (n_1 = 1) or a W-shape for the outer band (n_1 = -1), similar to a monolayer Rashba system.
The expectation value of the spin moment reads
⟨s⟩ = n_1 (ħ/2) ê_ϕ,
with ê_ϕ the azimuthal unitary vector in cylindrical coordinates, therefore, the absolute value of the spin moment is constant (eq. (<ref>) includes a factor of α_+ / |α_+|, which is neglected here since we consider positive values of the Rashba parameters). The orientation of the expectation value of the spin moment depends on the azimuth of k and the band shape (W or V), as for the monolayer Rashba system <cit.>; see Fig. <ref>.
The expectation value of the orbital moment
⟨l⟩ = - n_1 (e c t^2 α_-) / ( μ_B g_l ( α_-^2 k^2 + 4 t^2 ) ) ê_ϕ
decays with the magnitude of k and, as for ⟨s⟩, also includes a factor α_+ / |α_+| in the general case. It is important to note that the band's shape (W or V) determines the sense of rotation for both spin and orbital moments. In addition, the orbital moment also depends on the difference in the Rashba parameters, α_-, leading to zero orbital moments for a system of two equivalent layers.
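The analytic dispersion for the equal-mass case can be checked against a direct numerical diagonalization of the 4×4 Hamiltonian; the short sketch below rebuilds the model for this purpose, with illustrative parameter values.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H_equal_mass(kx, ky, m, aA, aB, t, hbar=1.0):
    rashba = -ky * sx + kx * sy                  # (z x k) . sigma
    kin = hbar**2 * (kx**2 + ky**2) / (2 * m) * s0
    return np.block([[kin + aA * rashba, t * s0],
                     [t * s0, kin + aB * rashba]])

def E_analytic(k, m, aA, aB, t, hbar=1.0):
    ap, am = aA + aB, aA - aB
    return np.sort([hbar**2 * k**2 / (2 * m)
                    + n1 / 2 * abs(ap) * k
                    + n2 / 2 * np.sqrt(am**2 * k**2 + 4 * t**2)
                    for n1 in (+1, -1) for n2 in (+1, -1)])

k = 0.7
num = np.linalg.eigvalsh(H_equal_mass(k, 0.0, m=1.0, aA=1.0, aB=0.4, t=0.1))
print(np.allclose(num, E_analytic(k, m=1.0, aA=1.0, aB=0.4, t=0.1)))   # True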
§.§.§ Equal Rashba parameters
The dispersion relation for a general combination of effective masses but equal Rashba parameters (α_A = α_B ≡α) is given by
ℰ^{n_1, n_2}(k) = (ħ^2 k^2/4) M_+ + n_1 |α| k + n_2 √( (ħ^4 k^4/16) M_-^2 + t^2 ),
with M_± = 1/m_A±1/m_B. The spin expectation value is identical to that in the former case, Eq. (<ref>), but the orbital moment
⟨l⟩ = - (e c ħ^2 t^2 M_- k) / ( 4 μ_B g_l ( (ħ^4 k^4/16) M_-^2 + t^2 ) ) ê_ϕ
now depends on k and depends neither on the band index n_1 nor on the sign of α_+.
§ RESULTS AND DISCUSSION
Due to the symmetries of the system introduced in Sec. <ref>, particularly rotational and mirror symmetries, the only nonzero tensor elements of the Edelstein susceptibility are χ^s/l_xy = -χ^s/l_yx.
The spin and orbital moments discussed above (Eqs. (<ref>), (<ref>), and (<ref>)), as well as the specific band structure, lead to the characteristic shape of the energy-dependent Edelstein susceptibilities shown in Figs. <ref> and <ref>. First, the Edelstein effect in a system with equal effective masses in both layers, shown in Fig. <ref>, is discussed. Increasing ℰ_ F, starting from the band edge of the lowest "W"-shaped band, increases the absolute value of both the spin Edelstein effect (χ_xy^s) and the orbital Edelstein effect (χ_xy^l) due to the increasing number of states contributing to transport. The opposite signs originate from the opposite orientation of spin and orbital moments. When the second, "V"-shaped, band is occupied, χ_xy^s approximately saturates due to partial compensation of the spin Edelstein effect originating from both bands, like in a monolayer Rashba system. Recall that both "W" and "V" shaped bands have spin textures with opposite sense of rotation and contribute oppositely to the SEE. Such partial compensation is also visible in the OEE signal (b). However, no saturation is visible here due to the k-dependence of the absolute value of the orbital moments, see Eq.(<ref>), leading to an orbital susceptibility approximately two times larger than the spin susceptibility. This energy-dependence is repeated qualitatively when the third and fourth bands are occupied.
In a bilayer system with equal Rashba parameters in both layers (Fig. <ref>), the energy-dependent SEE qualitatively behaves as in the previously discussed case of equal masses. However, the OEE exhibits qualitatively different behavior due to the band-independent sense of rotation of the orbital moments (Eq. (<ref>)). Therefore, the contribution to the orbital moment of all bands has the same sign, resulting in an orbital susceptibility approximately ten times larger than the spin susceptibility. Here, spin and orbital moment are not aligned oppositely. Hence the signs of χ_xy^s and χ_xy^l are equal in the whole energy range. Further, the contributions of the "W" and "V" shaped bands do not compensate, but add up due to the equal sense of rotation of the orbital textures.
§.§ Parameter dependence
Figures <ref> and <ref> show the spin, orbital, and total Edelstein susceptibilities as a function of the difference between the Rashba parameters (α_B - α_A) and the effective masses (m_B - m_A), respectively. Both calculations are performed for a fixed Fermi energy (ℰ_F = 1), varying the value of the corresponding parameter of layer B while keeping the remaining parameters as used for Figures <ref> and <ref>, respectively. As shown in Figures <ref>a and <ref>a, the SEE is enhanced by increasing either α or m in one of the layers. The approximately constant increase of the SEE is related to an increasing size of the Fermi surfaces. However, the SEE can present the opposite sign for a system with a negative sum of the Rashba parameters, i.e. α_A + α_B < 0, although that configuration is not studied in the present work.
Figure <ref>b shows that the sign of the OEE is controlled by the difference of the Rashba parameters, leading to a sign change for the case of equivalent layers (m_A = m_B and α_A = α_B), related to a symmetry of the system discussed in the following subsection, and a second parameter-dependent case around α_B - α_A ≈ 0.28. The OEE for equal Rashba parameters, shown in Fig. <ref>b, only shows a sign change for the case of the equivalent layer. The sign change of α_B - α_A and m_B-m_A, respectively, means a reversed orbital moment at each point, hence a reversed sense of rotation of the orbital moment along the iso-energy lines and a sign change of the OEE. One important difference between both cases, equal m and equal α, is the band independence of Equation (<ref>), since for the equal Rashba case, all bands exhibit the same sense of rotation of the orbital moments, giving the same sign of OEE in Figure <ref>b.
Figures <ref>c and <ref>c show the total Edelstein susceptibility combining spin and orbital contributions. In both cases, the OEE is larger than the SEE for a wide range of the corresponding parameters, which is in agreement with former work <cit.>. Both Figures highlight in green the regions where the absolute value of SEE is larger than that of OEE.
§.§ Layer dependence
From Eqs. (<ref>) and (<ref>), it is clear that a sign change of α_- and M_-, respectively, induces a sign change of the k-dependent orbital moment per band, ⟨l⟩_{n,k}. A physical interpretation of the origin of the sign change in the OEE can be obtained by analyzing the localization of the states per layer. In contrast to the spin moment, the OEE is tied to the out-of-plane position of the layers (z), giving relevance to the spatial order of the layers relative to each other. Figure <ref>a shows the projection of the eigenstates to the layers with equal effective masses but different Rashba parameters. First, when t=0 and α_B = 0, the states are fully localized in a degenerate free-electron band for layer B and a simple Rashba band structure for layer A. At k = 0 the state is four-fold degenerate. However, when we include interlayer hopping (t ≠ 0), the degeneracies are lifted. At k = 0, we observe two two-fold degenerate bands with a band gap of 2t. The states are weakly localized in both layers around k = 0. For this case, even with α_B = 0, the states localized in layer B show an energy splitting close to k = 0. Nevertheless, this band splitting becomes negligible for higher energies (ℰ_F ≫ t). In contrast, the states localized in layer A show a Rashba-like structure with the same band gap of 2t at k = 0. Therefore, the interlayer hopping induces Rashba interaction from layer A into layer B, even when α_B = 0.
For the case of equivalent layers, each eigenstate is equally localized in both layers, which can be interpreted as a total compensation of the layer contributions to the orbital moment, see Eqs. (<ref>) and (<ref>). This compensation is better seen when we compare two arrangements of the Rashba parameters. The first arrangement is when the Rashba parameter on layer A is larger than on layer B (α_A > α_B), while the second is the interchanged relation (α_A < α_B). Both arrangements are highlighted with grey boxes in Fig. <ref>a. In addition, Fig. <ref>b sketches spin and orbital textures along iso-energy lines, projected to the layers, for these two arrangements. Comparing these two cases proves helpful since the band structure is equivalent, but the localization of the eigenstates is opposite. Here, the states of the outermost and innermost bands (1 and 4) are localized in the layer with the larger Rashba parameter, while the states of the middle bands (2 and 3) are localized in the layer with the smaller Rashba parameter. This interchange of the localization does not affect the sign of the spin expectation value (see Figure <ref>b), since the spin texture is conserved. Even though the contribution per layer changes when the localization of the eigenstates is reversed, the total spin moment remains the same. However, for the orbital moment the sense of the texture's rotation per band is changed by the reversion of the eigenstates' localization, which is also evident from Eq. (<ref>) due to the sign change of α_-.
To quantify a layer's contribution to the orbital moments, we decompose the eigenstates as |u_nk⟩ = |A, n⟩ + |B, n⟩, with
|A, n⟩ = (1/N) ( u^A_{↑,n}, u^A_{↓,n}, 0, 0 )^T
and analogously for |B, n⟩, with 1/N the normalization factor. With this decomposition, the orbital moment (<ref>) is a sum of four terms,
⟨l⟩_{n,k} = l_{n,k}^{AA} + l_{n,k}^{AB} + l_{n,k}^{BA} + l_{n,k}^{BB}, with
l_{n,k}^{XY} = (i e / (2 μ_B g_l)) ∑_{m ≠ n} [ ⟨X, n| ∂H/∂k |X, m⟩ × ⟨Y, m| ∂H/∂k |Y, n⟩ ] / (ϵ_n - ϵ_m),
(X, Y = A, B). These contributions read
l_{n,k}^{AA} = - (e c t^2) / ( 2 μ_B g_l (k^2 α_-^2 + 4 t^2) ) ( ħ^2 k/m + n_1 α_A ) ê_ϕ,
l_{n,k}^{AB} = - (e c t^2) / ( 2 μ_B g_l (k^2 α_-^2 + 4 t^2) ) (n_1 α_-/2) ê_ϕ,
l_{n,k}^{BB} = (e c t^2) / ( 2 μ_B g_l (k^2 α_-^2 + 4 t^2) ) ( ħ^2 k/m + n_1 α_B ) ê_ϕ
for a system with m_A = m_B; confer Eq. (<ref>). The mixed or interlayer terms are equal (l_{n,k}^{AB} = l_{n,k}^{BA}), but the intralayer terms l_{n,k}^{AA} and l_{n,k}^{BB} have opposite sign and differ according to the respective Rashba parameters, or analogously according to the effective masses for a system with α_A = α_B.
Applying the above decomposition to the spin, Eq. (<ref>), shows that only the intralayer terms contribute to the SEE, both with the same sign. Therefore, the physical origin of the nonzero k-dependent orbital moment can be attributed to the asymmetry in the layer-wise contributions, since electrons flowing between the layers acquire an orbital motion in out-of-plane trajectories, which, in analogy to a loop of electrical current, generates an in-plane orbital moment.
§.§ Materials proposal
Materials showing a Rashba effect are widely used for spin-charge interconversion. Especially at oxide interfaces <cit.> and polar semiconductors <cit.>, where 2DEGs with a thickness of several unit cells can exhibit more than one band splitting related to the Rashba effect. However, those bands are required to be energetically close to induce a sizable nonlocal contribution to the OEE.
Polar semiconductors <cit.> are suitable candidates for showing a double Rashba band structure similar to the one shown in Fig. <ref>a. Al_2O_3 covered by a monolayer of a heavy metal has been reported to host similar double Rashba band structures. In this substrate, Al atoms are located at a slightly different height than the O atoms due to surface relaxation. Therefore, the monolayer of the heavy metal (Pb, Bi, Sb, and their ordered alloys <cit.>) is expected to form a buckled adlayer <cit.>.
Recent works have suggested a sizeable orbital contribution to the Edelstein effect compared to the spin contribution for oxide interfaces. Particularly, recent publications on SrTiO_3 (STO) <cit.> and KTaO_3 <cit.> based interfaces have shown a significant orbital magnetization using the ACA approach. These materials present energetically close bands from different layers, hinting at a relevant nonlocal contribution from the modern theory of OM. Especially for oxide interfaces in which the 2DEG is extended to several layers <cit.>, a double or multi-layer approach, which is crucial for the application of the modern theory of orbital magnetization, is appropriate. Other oxide-based materials reported to have significant EE are BaSnO_3 and ZnO <cit.>.
A surface polarization due to a slight spatial displacement between the atoms at the surface is key in obtaining a double (or multiple) Rashba structure from an inhomogeneous potential gradient. Therefore, the ferroelectric Rashba semiconductors (FERSC) have been proposed for purely electrical control of the Rashba interaction, even reaching a switchable configuration <cit.>. Other systems with switchable Rashba SOC have been reported from the perovskite family <cit.>. Therefore, the enhancement and reversion of the orbital contribution explored in this paper could further contribute to the overall electrical control of the total conversion efficiency.
§ CONCLUSIONS
This paper introduces an effective model for a bilayer system with Rashba interaction to describe the current-induced spin and orbital Edelstein effect. Because of a sizeable interlayer hopping, electrons can perform out-of-plane motion, which allows for an in-plane OM. Here, we see that the asymmetry of the parameters of those layers is fundamental for an orbital moment. Two cases, namely equal effective masses and equal Rashba parameters, are discussed in detail. For any parameter combination, spin and orbital moments are locked perpendicular to the momentum. The spin expectation values are constant, but the orbital moments' absolute values decay with k.
We explore the model parameter dependence of the current-induced magnetization. For constant energy, the SEE is enhanced by increasing the value of either effective mass or Rashba parameter regardless of the ratio of the corresponding parameters of both layers. However, the sign of the OEE can be tuned according to the difference between the parameters, with the OEE vanishing if the layers are equivalent. The sign change of the OEE is accompanied by a change of the layers localization of the eigenstates. Tuning the ratio α_B/α_A (or m_B/m_A) from <1 to >1 and vice versa, and assuming m_B=m_A (α_B= α_A), the layer localization of the individual states is reversed. For the orbital moment, the sense of rotation along an iso-energy line is also reversed, whereas the spin's sense of rotation is preserved. Considering the intra- and inter-layer contributions to the orbital moment, we find that both layers contribute oppositely to the total orbital moment, and hence the difference between the respective parameters (α_A - α_B and m_A-m_B, respectively) determines the sign of the total OEE.
The approach expressed in this work shows that the orbital moment is relevant even for systems where the expectation value of the orbital angular momentum operator is zero within the atom-centered approximation, implying that the modern theory of orbital magnetization is essential for interfaces.
This project has received funding from the European Union's 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 955671. The authors thank N.G.J. for patience throughout this work.
|
http://arxiv.org/abs/2307.02279v1 | 20230705132617 | From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks | [
"Cristina Cipriani",
"Massimo Fornasier",
"Alessandro Scagliotti"
] | math.OC | [
"math.OC",
"cs.LG",
"cs.SY",
"eess.SY"
] |
From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks
Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti
===============================================================================================
In our work, we build upon the established connection between Residual Neural Networks (ResNets) and continuous-time control systems known as NeurODEs. By construction, NeurODEs have been limited to constant-width layers, making them unsuitable for modeling deep learning architectures with width-varying layers.
In this paper, we propose a continuous-time Autoencoder, which we call AutoencODE, and we extend to this case the mean-field control framework already developed for usual NeurODEs.
In this setting, we tackle the case of low Tikhonov regularization, resulting in potentially non-convex cost landscapes. While the global results obtained for high Tikhonov regularization may not hold globally, we show that many of them can be recovered in regions where the loss function is locally convex.
Inspired by our theoretical findings, we develop a training method tailored to this specific type of Autoencoders with residual connections, and we validate our approach through numerical experiments conducted on various examples.
§ INTRODUCTION
In recent years, the field of artificial intelligence has witnessed remarkable progress across diverse domains, including computer vision and natural language processing. In particular, neural networks have emerged as a prominent tool, revolutionizing numerous machine learning tasks. Consequently, there is an urgent demand for a robust mathematical framework to analyze their intricate characteristics.
A deep neural network can be seen as a map ϕ: ℝ^{d_in} → ℝ^{d_out}, obtained as the composition of L ≫ 1 applications ϕ = ϕ_L ∘ … ∘ ϕ_1, where, for every n = 1, …, L, the function ϕ_n: ℝ^{d_n} → ℝ^{d_{n+1}} (also referred to as the n-th layer of the network) depends on a trainable parameter θ_n ∈ ℝ^{m_n}. The crucial process of choosing the values of the parameters θ_1, …, θ_L is known as the training of the network. For a complete survey on the topic, we recommend the textbook <cit.>.
Recent advancements have explored the link between dynamical systems, optimal control, and deep learning, proposing a compelling perspective. In the groundbreaking work <cit.>, it was highlighted how the problem of training very deep networks can be alleviated by the introduction of a new type of layer called “Residual Block”. This consists in using the identity map as a skip connection, and after-addition activations.
In other words, every layer has the following form:
X_{n+1} = ϕ_n(X_n) = X_n + ℱ(X_n, θ_n),
where X_n+1 and X_n are, respectively, the output and the input of the n-th layer.
This kind of architecture is called Residual Neural Network (or ResNet). It is important to observe that, in order to give sense to the sum in (<ref>), in each layer the dimension of the input should coincide with the dimension of the output.
In the practice of Deep Learning, this novel kind of layer has turned out to be highly beneficial, since it is effective in avoiding the “vanishing of the gradients" during the training <cit.>, or the saturation of the network's accuracy <cit.>. Indeed, before <cit.>, these two phenomena had long limited the large-scale application of deep architectures.
Although the original arguments in support of residual blocks were based on empirical considerations, their introduction nevertheless revealed a more rigorous mathematical bridge between deep residual networks and controlled dynamical systems. Indeed, what makes Residual Neural Networks particularly intriguing is that they can be viewed as discretized versions of continuous-time dynamical systems. This dynamical approach was proposed independently in <cit.> and <cit.>, and it was greatly popularized in the machine learning community under the name of NeurODEs by <cit.>.
This connection with dynamical systems relies on reinterpreting the iteration (<ref>) as a step of the forward-Euler approximation of the following dynamical system:
Ẋ(t) = ℱ(X(t), θ(t)),
where t↦θ(t) is the map that, instant by instant, specifies the value of the parameter θ.
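The identification of the residual iteration with an explicit Euler step can be made concrete in a few lines of PyTorch. In the sketch below the step size h = T/L is written out explicitly (in the residual block above it is absorbed into ℱ), and the specific one-hidden-layer form chosen for ℱ is only an illustrative assumption, not prescribed by the text.

import torch
import torch.nn as nn

class EulerResidualBlock(nn.Module):
    # One layer: X_{n+1} = X_n + h * F(X_n, theta_n), with theta_n the layer weights.
    def __init__(self, d, h=0.05):
        super().__init__()
        self.h = h
        self.lin = nn.Linear(d, d)
    def forward(self, x):
        return x + self.h * torch.tanh(self.lin(x))

class NeurODENet(nn.Module):
    # L residual blocks = explicit Euler discretization of dX/dt = F(X, theta(t)) on [0, T].
    def __init__(self, d, L=20, T=1.0):
        super().__init__()
        self.blocks = nn.ModuleList([EulerResidualBlock(d, h=T / L) for _ in range(L)])
    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return x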
Moreover, the training of these neural networks, typically formulated as empirical risk minimization, can be reinterpreted as an optimal control problem. Given a labelled dataset {(X^i_0, Y^i_0)}_{i=1}^N of size N ≥ 1, the depth of the time-continuous neural network (<ref>) is denoted by T > 0.
Then, training this network amounts to learning the control signals θ ∈ L^2([0,T]; ℝ^m) in such a way that the terminal output X^i_T of (<ref>) is close to its corresponding label Y^i_0 for all i = 1, …, N, with respect to some distortion measure ℓ(·,·) ∈ C^1. A typical choice is ℓ(x, y) := |x - y|^2, which is often referred to as the squared loss function in the machine learning literature. Therefore, it is possible to formulate the following optimal control problem:
inf_{θ ∈ L^2([0,T]; ℝ^m)} J^N(θ) := 1/N ∑_{i=1}^N ℓ(X^i(T), Y^i(T)) + λ ∫_0^T |θ(t)|^2 dt,
s.t. Ẋ^i(t) = ℱ(t, X^i(t), θ(t)), Ẏ^i(t) = 0,
(X^i(t), Y^i(t))|_{t=0} = (X_0^i, Y_0^i), i ∈ {1, …, N}.
Notice that the objective function also comprises a Tikhonov regularization term, tuned by the parameter λ, which plays a crucial role in the analysis of this control problem.
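For a piecewise-constant control θ, the cost J^N(θ) is approximated by replacing the integral of |θ(t)|^2 with a Riemann sum over the layers. The following sketch evaluates this discretized objective with the squared loss as distortion measure, for any Euler-discretized residual network such as the one sketched above; the optimizer lines at the end are only an illustrative usage note.

import torch

def empirical_cost(net, X0, Y0, lam, h):
    # J^N(theta): mean squared terminal loss + lambda * (h * sum_n |theta_n|^2),
    # where the Riemann sum approximates the integral of |theta(t)|^2 over [0, T].
    XT = net(X0)
    data_term = ((XT - Y0) ** 2).sum(dim=1).mean()
    reg_term = h * sum((p ** 2).sum() for p in net.parameters())
    return data_term + lam * reg_term

# One full-batch gradient step on J^N (illustrative hyper-parameters):
# opt = torch.optim.SGD(net.parameters(), lr=1e-2)
# loss = empirical_cost(net, X0, Y0, lam=1e-3, h=1.0 / 20)
# loss.backward(); opt.step(); opt.zero_grad()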
The benefit of interpreting the training process in this manner lies in the possibility of exploiting established results from mathematical control theory to better understand this process.
A key component of optimal control theory is a set of necessary conditions, known as Pontryagin Maximum Principle (PMP), that must be satisfied by any (local) minimizer θ. These conditions were introduced in <cit.> and have served as inspiration for the development of innovative algorithms <cit.> and network structures <cit.> within the machine learning community.
This work specifically addresses a variant of the optimal control problem presented above, in which the focus is on the case of an infinitely large dataset.
This formulation gives rise to what is commonly known as a mean-field optimal control problem, where the term “mean-field” emphasizes the description of a multiparticle system through its averaged effect. In this context, the focus is on capturing the collective behavior of the system rather than individual particle-level dynamics, by considering the population as a whole. As a consequence, the parameter θ is shared by the entire population of input-target pairs, and the optimal control is required to depend on the initial distribution μ_0(x,y)∈𝒫(^d×^d) of the input-target pairs. Therefore, the optimal control problem needs to be defined over spaces of probability measures, and it is formulated as follows:
inf_θ∈ L^2([0,T];^m) J(θ) :=
{ ∫_^2dℓ(x,y) dμ_T(x,y)+λ∫_0^T|θ(t)|^2 dt ,
s.t. { ∂_tμ_t(x,y)+∇_x· (ℱ(t,x,θ_t)μ_t(x,y))=0 t∈[0,T],
μ_t|_t=0(x,y)=μ_0(x,y),
.
.
This area of study has gained attention in recent years, and researchers have derived the corresponding Pontryagin Maximum Principle in various works, such as <cit.> and <cit.>. It is worth mentioning that there are other types of mean-field analyses of neural networks, such as the well-known work <cit.>, which focuses on the mean-field limit at the parameter level, where the number of parameters is assumed to be infinitely large. However, our approach in this work takes a different viewpoint, specifically focusing on the control perspective in the case of an infinitely large dataset.
One of the contributions of this paper is providing a more accessible derivation of the necessary conditions for optimality, such as the well-known Pontryagin Maximum Principle. Namely, we characterize the stationary points of the cost functional, and we are able to recover the PMP that was deduced in <cit.> under the assumption of large values of the regularization parameter λ, and whose proof relied on an infinite-dimensional version of the Lagrange multiplier rule.
This alternative perspective offers a clearer and more intuitive understanding of the PMP, making it easier to grasp and apply it in practical scenarios.
In addition, we aim at generalizing the applicability of the results presented in <cit.> by considering a possibly non-convex regime, corresponding to small values of the parameter λ>0. As mentioned earlier, the regularization coefficient λ plays a crucial role in determining the nature of the cost function. Indeed, when λ is sufficiently large, the cost function is convex on the sub-level sets, and it is possible to prove the existence and uniqueness of the solution of the optimal control problem that arises from training NeurODEs.
Additionally, in this highly-regularized scenario, desirable properties of the solution, such as its continuous dependence on the initial data and a bound on the generalization capabilities of the networks, have been derived in <cit.>.
However, in practical applications, a large regularization parameter may cause a poor performance of the trained NeurODE on the task. In other words, in the highly-regularized case, the cost functional is unbalanced towards the L^2-penalization, at the expense of the term that promotes driving each datum X^i_0 as close as possible to the corresponding target Y^i_0.
This motivated us to investigate the case of low Tikhonov regularization. While we cannot globally recover the same results as in the highly-regularized regime, we find interesting results concerning local minimizers.
Moreover, we also show that the (mean field) optimal control problem related to the training of the NeurODE induces a gradient flow in the space of admissible controls.
The perspective of the gradient flow leads us to consider the well-known minimizing movement scheme, and to introduce a proximal stabilization term to the cost function in numerical experiments.
This approach effectively addresses the well-known instability issues (see <cit.>) that arise when solving numerically optimal control problems (or when training NeurODEs) with iterative methods based on the PMP.
It is important to note that our stabilization technique differs from previous methods, such as the one introduced in <cit.>.
From NeurODEs to AutoencODEs.
Despite their huge success, it should be noted that NeurODEs (as well as ResNets, their discrete-time counterparts) in their original form face a limitation in capturing one of the key aspects of modern machine learning architectures, namely the discrepancy in dimensionality between consecutive layers.
As observed above, the use of skip connections with identity mappings requires a “rectangular” shape of the network, where the widths of all the layers are identical and equal to the dimension of the input. This restriction poses a challenge when dealing with architectures that involve layers with varying dimensions, which are common in many state-of-the-art models.
Indeed, the inclusion of layers with different widths can enhance the network's capacity to represent complex functions and to capture intricate patterns within the data.
In this framework, Autoencoders have emerged as a fundamental class of models specifically designed to learn efficient representations of input data by capturing meaningful features through an encoder-decoder framework. More precisely, the encoder compresses the input data into a lower-dimensional latent space, while the decoder reconstructs the original input from the compressed representation.
The concept of Autoencoders was first introduced in the 1980s in <cit.>, and since then, it has been studied extensively in various works, such as <cit.>, among others.
Nowadays, Autoencoders have found numerous applications, including data compression, dimensionality reduction, anomaly detection, and generative modeling. Their ability to extract salient features and capture underlying patterns in an unsupervised manner makes them valuable tools in scenarios where labeled training data is limited or unavailable.
To the best of our knowledge, no attempts have been made to extend the NeurODEs model and the control-theoretic analysis to Autoencoders, or to more general width-varying neural networks. Additionally, there is currently a lack of established theory regarding the performance guarantees of these models.
To address this limitation, we propose an extension based on a novel design of the vector field that drives the dynamics, which allows us to develop a continuous-time model capable of accommodating various types of width-varying neural networks.
It is worth noting that, in principle, there could be different ways for modeling width-varying neural networks with dynamical systems, as, e.g., forcing some structure on the control variables, or formulating a viability problem. In this last case, a possibility could be to require admissible trajectories to visit some lower-dimensional subsets during the evolution. For an introduction to viability theory, we recommend the monograph <cit.>.
However, in our approach, we choose to impose a structure at the level of the controlled field that drives the dynamics, in order to leverage the insights and results obtained from our previous work <cit.>.
On the other hand, since we aim at capturing width-varying neural networks, we need to extend the previous control-theoretical framework to a more general scenario. This is done in Subsection <ref>, where we introduce a discontinuous-in-time dynamics that can describe a wider range of neural network architectures.
By doing so, we enable the study of Autoencoders (and, potentially, of other width-varying architectures) from a control-theoretic point of view, with the perspective of getting valuable insights into their behavior.
Furthermore, we also generalize the types of activation functions that can be employed in the network. The previous work <cit.> primarily focused on sigmoid functions, which do not cover the full range of activations commonly employed in practice.
Our objective is to allow for unbounded activation functions, which are often necessary for effectively solving certain tasks. By considering a broader set of activation functions, we aim at enhancing the versatility and applicability of our model.
The structure of the paper is the following: Section <ref> discusses the dynamical model of NeurODEs and extends it to the case of width-varying neural networks, including Autoencoders, which we refer to as AutoencODEs.
In Section <ref>, we present our mean-field analysis, focusing on the scenario of an infinitely large dataset. We formulate the mean-field optimal control problem, we derive a set of necessary optimality conditions, and we provide a convergence result for the finite-particles approximation. At the end of this section, we compare our findings with the ones previously obtained in <cit.>.
Section <ref> covers the implementation and the description of the training procedure, and we compare it with other methods for NeurODEs existing in the literature.
Finally, in Section <ref>, we present the results of our numerical experiments, highlighting interesting properties of the AutoencODEs that we observe.
§.§ Measure-theoretic preliminaries
Given a metric space (X,d_X), we denote by ℳ(X) the space of signed Borel measures in X with finite total variation, and by 𝒫(X) the space of probability measures, while 𝒫_c(X) ⊂𝒫(X) represents the set of probability measures with compact support. Furthermore, 𝒫_c^N(X) ⊂𝒫_c(X) denotes the subset of empirical or atomic probability measures.
Given μ∈𝒫(X) and f: X → Y , with f μ-measurable, we denote with f_#μ∈𝒫(Y) the push-forward measure defined by f_#μ(B) = μ(f^-1(B)) for any Borel set B⊂ Y. Moreover, we recall the change-of-variables formula
∫_Y g(y) d(f_#μ )(y) = ∫_X g ∘ f(x) dμ(x)
whenever either one of the integrals makes sense.
We now focus on the case X=^d and briefly recall the definition of the Wasserstein metrics of optimal transport in the following definition, and refer to <cit.> for more details.
Let 1≤ p < ∞ and 𝒫_p(^d) be the space of Borel probability measures on ^d with finite p-moment. In the sequel, we endow the latter with the p-Wasserstein metric
W_p^p(μ, ν):=inf{∫_^2d |z-ẑ|^p dπ(z,ẑ) | π∈Π(μ, ν)} ,
where Π(μ, ν) denotes the set of transport plan between μ and ν, that is the collection of all Borel probability measures on ^d×^d with marginals μ and ν in the first and second component respectively.
It is a well-known result in optimal transport theory that when p =1 and μ,ν∈𝒫_c(^d), then the following alternative representation holds for the Wasserstein distance
W_1(μ,ν)=sup{∫_^dφ(x) d (μ-ν)(x) | φ∈(^d), (φ)≤ 1} ,
by Kantorovich's duality <cit.>. Here, (^d) stands for the space of real-valued Lipschitz continuous functions on ^d, and (φ) is the Lipschitz constant of a mapping φ, defined as
Lip(φ) := sup_x,y ∈^d, x ≠ y |φ(x)-φ(y)|/|x-y|.
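For empirical measures with the same number of equally weighted atoms (the only case needed in the finite-particles discussion below), W_1 reduces to an optimal assignment problem, since the optimum of the linear program over transport plans is attained at a permutation. A minimal Python sketch of this computation (the helper name and the toy data are our own choices) is the following:

import numpy as np
from scipy.optimize import linear_sum_assignment

def w1_empirical(X, Y):
    # W_1 between (1/N) sum_i delta_{X_i} and (1/N) sum_j delta_{Y_j}:
    # with equal numbers of uniform atoms, an optimal plan can be chosen
    # to be a permutation, so the distance reduces to an assignment problem.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
Y = rng.standard_normal((100, 2)) + 1.0
print(w1_empirical(X, Y))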
§ DYNAMICAL MODEL OF NEURODES
§.§ Notation and basic facts
In this paper, we consider controlled dynamical systems in ^d, where the velocity field is prescribed by a function ℱ: [0,T] ×^d ×^m →^d that satisfies the following basic assumptions.
The vector field : [0,T] ×^d ×^m →^d satisfies the following:
(i) For every x ∈^d and every θ∈^m, the map t ↦(t,x,θ) is measurable in t.
(ii) For every R>0 there exists a constant L_R >0 such that, for every θ∈^m, it holds
|ℱ(t,x_1,θ)-ℱ(t,x_2,θ)|≤ L_R(1+|θ|) |x_1-x_2| for a.e. t∈ [0,T] and every x_1, x_2 ∈ B_R(0),
from which it follows that |(t,x,θ)| ≤ L_R(1 + |x|)(1+ |θ|) for a.e. t∈ [0,T].
(iii) For every R>0 there exists a constant L_R >0 such that, for every θ_1, θ_2 ∈^m, it holds
|ℱ(t,x,θ_1)-ℱ(t,x,θ_2)| ≤ L_R(1 + |θ_1| + |θ_2|)|θ_1-θ_2| for a.e. t∈ [0,T] and every x ∈ B_R(0).
The control system that we are going to study is
ẋ(t) = ℱ(t,x(t), θ(t)), t ∈ [0,T],
x(0) = x_0,
where θ∈ L^2([0,T], ^m) is the control that drives the dynamics.
Owing to Assumption <ref>, the classical Carathéodory Theorem (see <cit.>)
guarantees that, for every θ∈ L^2([0,T], ^m) and for every x_0 ∈^d, the Cauchy problem (<ref>) has a unique solution x : [0,T] →^d. Hence, for every (t, θ) ∈ [0,T] × L^2([0,T], ^m), we introduce the flow map Φ^θ_(0,t): ^d →^d defined as
Φ^θ_(0,t)(x_0) := x(t),
where t ↦ x(t) is the absolutely continuous curve that solves (<ref>), with Cauchy datum x(0)=x_0 and corresponding to the admissible control t ↦θ(t).
Similarly, given 0≤ s< t≤ T, we write Φ^θ_(s,t): ^d →^d to denote the flow map obtained by prescribing the Cauchy datum at the more general instant s≥ 0.
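In practice, the flow map can only be evaluated numerically. The sketch below (the field, the control and the tolerances are illustrative assumptions, not part of the theory) approximates Φ^θ_(0,t)(x_0) with an off-the-shelf ODE solver:

import numpy as np
from scipy.integrate import solve_ivp

def flow(F, theta_of_t, x0, t0=0.0, t1=1.0):
    # Numerical evaluation of the flow map Phi^theta_{(t0,t1)}(x0) of
    # x'(t) = F(t, x(t), theta(t)) via a standard Runge-Kutta integrator.
    sol = solve_ivp(lambda t, x: F(t, x, theta_of_t(t)),
                    (t0, t1), np.asarray(x0, dtype=float),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# Illustrative field of the form sigma(W x + b(t)) with a time-varying shift.
W = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda t, x, th: np.tanh(W @ x + th)
theta_of_t = lambda t: np.array([np.sin(t), 0.0])
print(flow(F, theta_of_t, [1.0, 0.0], 0.0, 2.0))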
We now present the properties of the flow map defined in (<ref>) that describes the evolution of the system: we show that it is well-posed, and we report some classical properties.
For every t ∈ [0,T] and for every θ∈ L^2([0,T], ^m), let satisfy Assumption <ref>. Then, the flow Φ^θ_(0,t): ^d →^d is well-defined for any x_0 ∈^d and it satisfies the following properties.
* For every R>0 and ρ>0, there exists a constant R̅>0 such that
|Φ^θ_(0,t)(x)| ≤R̅
for every x ∈ B_R(0) and every θ∈ L^2([0,T],^m) such that ||θ ||_L^2≤ρ.
* For every R>0 and ρ>0, there exists a constant L̅>0 such that, for every t ∈ [0,T], it holds
|Φ^θ_(0,t)(x_1)- Φ^θ_(0,t)(x_2)| ≤L̅|x_1-x_2|
for every x_1, x_2 ∈ B_R(0) and every θ∈ L^2([0,T],^m) such that ||θ ||_L^2≤ρ.
* For every R>0 and ρ>0, there exists a constant L̅>0 such that, for every t_1, t_2∈ [0,T], it holds
|Φ^θ_(0,t_2)(x)- Φ^θ_(0,t_1)(x)| ≤L̅|t_2-t_1|^1/2
for every x ∈ B_R(0) and every θ∈ L^2([0,T],^m) such that ||θ ||_L^2≤ρ.
* For every R>0 and ρ>0, there exists a constant L̅>0 such that, for every t ∈ [0,T], it holds
|Φ^θ_1_(0,t)(x)- Φ^θ_2_(0,t)(x)|_2 ≤L̅θ_1-θ_2_L^2
for every x ∈ B_R(0) and every θ_1, θ_2 ∈ L^2([0,T],^m) such that θ_1 _L^2, θ_2 _L^2≤ρ.
The proof is postponed to the Appendix (see Lemmata <ref>, <ref>, <ref>, <ref>).
Even though the framework introduced in Assumption <ref> is rather general, in this paper we specifically have in mind the case where the mapping : [0, T] ×ℝ^d ×ℝ^m →ℝ^d represents the feed-forward dynamics associated to residual neural networks.
In this scenario, the parameter θ∈ℝ^m encodes the weights and shifts of the network, i.e., θ = (W, b), where W ∈ℝ^d × d and b ∈ℝ^d. Moreover, the mapping has the form:
ℱ(t, x, θ) = σ(W x + b),
where σ: ℝ^d →ℝ^d is a nonlinear function acting component-wise, often called in literature activation function. In this work, we consider sigmoidal-type activation functions, such as the hyperbolic tangent function:
σ(x) = tanh(x),
as well as smooth approximations of the Rectified Linear Unit (ReLU) function, which is defined as:
σ(x) = max{0, x}.
We emphasize the need to consider smoothed versions of the ReLU function due to additional differentiability requirements on ℱ, which will be further clarified in Assumption <ref>. Another useful activation function covered by Assumption <ref> is the Leaky Rectified Linear Unit (Leaky ReLU) function:
σ(x) = max{0,x} - max{ -α x,0 }
where α∈ [0, 1] is a predetermined parameter that allows the output of the function to have negative values. The smooth approximations of (<ref>) and (<ref>) that we consider will be presented in Section <ref>.
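As a purely numerical illustration (the specific smoothing below, a softplus-type surrogate with parameter β, is our own placeholder and not the approximation adopted later in the paper), these activations can be coded as follows:

import numpy as np

def tanh_act(x):
    # Hyperbolic tangent activation, already smooth.
    return np.tanh(x)

def smooth_relu(x, beta=10.0):
    # One possible C^1 surrogate of max{0, x}: (1/beta) * log(1 + exp(beta x)),
    # written with logaddexp for numerical stability; it tends to ReLU as beta grows.
    return np.logaddexp(0.0, beta * np.asarray(x, dtype=float)) / beta

def smooth_leaky_relu(x, alpha=0.1, beta=10.0):
    # Smoothed version of max{0, x} - max{-alpha x, 0}.
    return smooth_relu(x, beta) - alpha * smooth_relu(-np.asarray(x, dtype=float), beta)

print(smooth_relu(np.array([-2.0, 0.0, 2.0])), smooth_leaky_relu(np.array([-2.0, 0.0, 2.0])))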
§.§ From NeurODEs to AutoencODEs
As explained in the Introduction, NeurODEs and ResNets –their discrete-time counterparts– face the limitation of a “rectangular” shape of the network because of formulas (<ref>) and (<ref>), respectively.
To overcome this fact, we aim at designing a continuous-time model capable of describing width-varying neural networks, with a particular focus on Autoencoders, as they represent the prototype of neural networks whose layers operate between spaces of different dimensions.
Indeed, Autoencoders consist of an encoding phase, where the layers' dimensions progressively decrease until reaching the “latent dimension” of the network. Subsequently, in the decoding phase, the layers' widths are increased until the same dimensionality as the input data is restored. For this reason, Autoencoders are prominent examples of width-varying neural networks, since the changes in layers' dimensions lie at the core of their functioning.
Sketches of encoders and Autoencoders are presented in Figure <ref>.
Finally, we insist on the fact that our model can encompass as well other types of architectures. In this regard, in Remark <ref> we discuss how our approach can be extended to U-nets.
Encoder Our goal is to first model the case of a network which sequentially reduces the dimensionality of the layers' outputs.
For this purpose, we artificially force some of the components not to evolve anymore, while we let the others be an active part of the dynamics. More precisely, given an input variable x_0 ∈^d, we denote with (ℐ_j)_j=0,…,r an increasing filtration, where each element ℐ_j contains the indices of the components that are inactive, i.e., they are constant and do not contribute to the dynamics. Clearly, since the layers' width will decrease sequentially, the filtration of inactive components ℐ_j will increase, i.e.
∅ =: ℐ_0 ⊊ ℐ_1 ⊊ ... ⊊ ℐ_r ⊊ {1,…,d}, r < d, j=0,…, r.
On the other hand, the sets of indices of active components define a decreasing filtration 𝒜_j := {1, …,d}∖ℐ_j for j=0,…,r. As opposed to before, the sets of active components (𝒜_j)_j=0,…,r satisfy
{1,…, d} =: 𝒜_0 ⊋ 𝒜_1 ⊋ ... ⊋ 𝒜_r ⊋ ∅, r < d, j=0,…, r.
We observe that, for every j=0,…,r, the sets ℐ_j and 𝒜_j provide a partition of {1,…,d}. A visual representation of this model for encoders is presented on the left side of Figure <ref>.
Now, in the time interval [0,T], let us consider r+1 nodes 0 = t_0 < t_1 <... < t_r<t_r+1=T. For j=0,…,r, we denote with [t_j,t_j+1] the j-th sub-interval and, for every x ∈^d, we use the notation x_𝒜_j:=(x_i)_i∈𝒜_j and x_ℐ_j:=(x_i)_i∈ℐ_j to access the components of x belonging to 𝒜_j and ℐ_j, respectively. Hence, the controlled dynamics for any t ∈ [t_j, t_j+1] can be described by
ẋ_ℐ_j(t) = 0,
ẋ_𝒜_j(t) = ℱ_j(t, x_𝒜_j(t), θ(t)),
where ℱ_j: [ t_j, t_j+1 ] ×^|𝒜_j|×^m →^|𝒜_j|, for j= 0,…,r, and x(0) = x_𝒜_0(0) = x_0.
Furthermore, the dynamical system describing the encoding part is
ẋ(t) = ℱ(t,x(t), θ(t)), t ∈ [0,T],
x(0) = x_0
where, for t ∈ [t_j, t_j+1], we define the discontinuous vector field ℱ componentwise as follows
(ℱ(t,x,θ))_k = (ℱ_j(t, x_𝒜_j,θ) )_k, k ∈ 𝒜_j,
0, k ∈ ℐ_j.
Notice that θ(t) ∈^m for every t ∈ [0,T],
according to the model that we have just described. However, it is natural to expect that, since x has varying active components, in a similar way the controlled dynamics (t,x,θ) shall not explicitly depend at every t ∈ [0,T] on every component of θ.
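The following Python sketch (the function names and the toy layer are our own choices) illustrates how the embedded field acts on a single sub-interval: the components in 𝒜_j evolve through ℱ_j, while those in ℐ_j are frozen.

import numpy as np

def masked_field(F_j, active_idx):
    # Embeds a layer F_j acting on R^{|A_j|} into a field on R^d:
    # active components follow F_j, inactive components have zero velocity.
    def G(t, x, theta):
        dx = np.zeros_like(x)
        dx[active_idx] = F_j(t, x[active_idx], theta)
        return dx
    return G

# Toy layer acting only on the first two of d = 4 components.
F_j = lambda t, xa, th: np.tanh(th[0] @ xa + th[1])
G = masked_field(F_j, active_idx=np.array([0, 1]))
theta = (np.eye(2), np.zeros(2))
print(G(0.0, np.array([1.0, -1.0, 0.5, 0.5]), theta))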
Autoencoder We now extend the previous model to the case of networks which not only decrease the dimensionality of the layers, but they are also able to increase the layers' width in order to restore the original dimension of the input data.
Here we denote by z_0 ∈^d̃ the input variable, and we fictitiously augment the input's dimension, so that we consider the initial datum x_0 = (z_0, 0)∈^d = ^d̃×^d̃, where 0∈^d̃. We make use of the following notation for every x ∈^d:
x = ( (z_i)_i=1,…,d̃, (z^H_i)_i= 1,…,d̃),
where z^H is the augmented (or shadow) part of the vector x.
In this model, the time horizon [0,T] is splitted using the following time-nodes:
0=t_0 ≤ t_1 ≤ ... ≤ t_r ≤ ... ≤ t_2r≤ t_2r+1:=T
where t_r, which was the end of the encoder in the previous model, is now the instant corresponding to the bottleneck of the autoencoder.
Similarly as before, we introduce two families of partitions of {1,…, d̃} modeling the active and non-active components of, respectively, z and z^H. The first filtrations are relative to the encoding phase and they involve the components of z:
ℐ_j-1 ⊊ ℐ_j if 1 ≤ j ≤ r, ℐ_j = ℐ_j-1 if j>r,
𝒜_j-1 ⊋ 𝒜_j if 1 ≤ j ≤ r, 𝒜_j = 𝒜_j-1 if j>r,
where ℐ_0 := ∅, ℐ_r ⊊ {1,…, d̃} and 𝒜_0 = { 1, …, d̃}, 𝒜_r ⊋ ∅. The second filtrations, that aim at modeling the decoder, act on the shadow part of x, i.e., they involve the components of z^H:
ℐ^H_j-1 = {1,…, d̃} if 1 ≤ j ≤ r, ℐ^H_j ⊊ ℐ^H_j-1 if r < j ≤ 2r,
𝒜^H_j-1 = ∅ if 1 ≤ j ≤ r, 𝒜^H_j ⊋ 𝒜^H_j-1 if r < j ≤ 2r.
While the encoder structure acting on the input data z_0 is the same as before, in the decoding phase we aim at activating the components that have been previously turned off during the encoding.
However, since the information contained in the original input z_0 should be first compressed and then decompressed, we should not make use of the values of the components that we have turned off in the encoding and hence, we cannot re-activate them.
Therefore, in our model the dimension is restored by activating components of z^H, the shadow part of x, which we recall was initialized equal to 0∈^d̃.
This is the reason why we introduce sets of active and inactive components also for the shadow part of the state variable. A sketch of this type of model is presented on the right of Figure <ref>.
Moreover, in order to be consistent with the classical structure of an autoencoder, the following identities must be satisfied:
* 𝒜_j ∩ 𝒜_j^H = ∅ for every j=1,…,2r,
* 𝒜_2r ∪ 𝒜^H_2r = {1,…, d̃}.
The first identity formalizes the constraint that the active components of z and those of z^H cannot overlap and must be distinct, while the second identity imposes that, at the end of the evolution, the active components of z and z^H together recover exactly the whole index set {1,…,d̃}.
Furthermore, from the first identity we derive that 𝒜_j ⊆ (𝒜_j^H)^C = ℐ_j^H and, similarly, 𝒜_j^H ⊆ ℐ_j for every j=1,…,2r. Moreover, 𝒜_r satisfies the inclusion 𝒜_r ⊆ 𝒜_j for every j=1,…,2r, which is consistent with the fact that the layer with the smallest width is located at the bottleneck, i.e., in the interval [t_r,t_r+1]. Finally, from the first and the second identity, we obtain that 𝒜_2r^H = ℐ_2r, i.e., the final active components of z^H coincide with the inactive components of z, and, similarly, ℐ_2r^H = 𝒜_2r.
Finally, to access the active components of x = (z, z^H), we make use of the following notation:
x_𝒜_j = (z_k)_k ∈ 𝒜_j, x_𝒜^H_j = (z^H_k)_k ∈ 𝒜^H_j and x_𝒜_j, 𝒜^H_j = (z_𝒜_j, z^H_𝒜^H_j),
and we do the same for the inactive components:
x_ℐ_j = (z_k)_k ∈ ℐ_j, x_ℐ^H_j = (z^H_k)_k ∈ ℐ^H_j and x_ℐ_j, ℐ^H_j = (z_ℐ_j, z^H_ℐ^H_j).
We are now in position to write the controlled dynamics in the interval t_j ≤ t ≤ t_j+1:
ẋ_ℐ_j, ℐ^H_j(t) = 0,
ẋ_𝒜_j, 𝒜^H_j(t) = ℱ_j(t, x_𝒜_j, 𝒜^H_j(t), θ(t)),
where ℱ_j: [ t_j, t_j+1]×^|𝒜_j| + |𝒜_j^H|×^m→^|𝒜_j| + |𝒜_j^H|, for j=0,…, 2r, and the Cauchy datum is x(0) = x_0 = (z_0, 0), i.e., z(0) = z_0 and z^H(0) = 0. As before, we define the discontinuous vector field ℱ for t ∈ [t_j, t_j+1] as follows
(ℱ(t,x,θ))_k = (ℱ_j(t, x_𝒜_j, 𝒜^H_j,θ) )_k, k ∈ 𝒜_j ∪ 𝒜_j^H,
0, k ∈ ℐ_j ∪ ℐ^H_j.
Hence, we are now able to describe any type of width-varying neural network through a continuous-time model depicted by the following dynamical system
ẋ(t) = ℱ(t,x(t), θ(t)), t ∈ [0,T],
x(0) = x_0.
It is essential to highlight the key difference between the previous NeurODE model in (2.6) and the current model: the vector field now explicitly depends on the time variable t to account for sudden dimensionality drops, where certain components are forced to remain constant. As a matter of fact, the resulting dynamics exhibit high discontinuity in the variable t. To the best of our knowledge, this is the first attempt to consider such discontinuous dynamics in NeurODEs. Previous works, such as <cit.>, typically do not include an explicit dependence on the time variable in the right-hand side of NeurODEs, or they assume a continuous dependency on time, as in <cit.>. Furthermore, it is worth noting that the vector field introduced to model autoencoders satisfies the general assumptions outlined in Assumption 1 at the beginning of this section.
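For concreteness, a minimal sketch of the bookkeeping behind an AutoencODE (the toy dimensions, the schedule of active sets and the helper names are our own choices) reads:

import numpy as np

def augment(z0):
    # Embeds the input z0 in R^d_tilde into the augmented state x0 = (z0, 0),
    # whose shadow part is switched on only during the decoding phase.
    z0 = np.asarray(z0, dtype=float)
    return np.concatenate([z0, np.zeros_like(z0)])

def active_mask(j, A, A_H, d_tilde):
    # Boolean mask of the active components of x = (z, z^H) on [t_j, t_{j+1}]:
    # A[j] are the active indices of z, A_H[j] those of the shadow part z^H.
    mask = np.zeros(2 * d_tilde, dtype=bool)
    mask[np.asarray(A[j], dtype=int)] = True
    mask[d_tilde + np.asarray(A_H[j], dtype=int)] = True
    return mask

# Toy schedule with d_tilde = 4 and r = 2: z-components shut down while encoding,
# shadow components switch on while decoding, and A[2r], A_H[2r] partition {0,...,3}.
A   = [[0, 1, 2, 3], [0, 1, 2], [0, 1], [0, 1], [0, 1]]
A_H = [[], [], [], [2], [2, 3]]
x0 = augment([1.0, -2.0, 0.5, 3.0])
masks = [active_mask(j, A, A_H, 4) for j in range(5)]
print(x0, masks[-1])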
The presented model, initially designed for Autoencoders, can be easily extended to accommodate various types of width-varying neural networks, including architectures with long skip-connections such as U-nets <cit.>. While the specific details of U-nets are not discussed in detail, their general structure is outlined in Figure <ref>. U-nets consist of two main components: the contracting path (encoder) and the expansive path (decoder). These paths are symmetric, with skip connections between corresponding layers in each part. Within each path, the input passes through a series of convolutional layers, followed by a non-linear activation function (often ReLU), and other operations (e.g., max pooling) which are not encompassed by our model.
The long skip-connections that characterize U-nets require some modifications to the model of autoencoder described above.
If we denote with d̃_i for i=0, …, r the dimensionality of each layer in the contracting path, we have that d̃_2r-i=d̃_i for every i=0, …, r.
Then, given an initial condition z_0 ∈ℝ^d̃_0, we embed it into the augmented state variable
x_0 = (z_0, 0), 0∈ℝ^d̃_1 + … + d̃_r.
As done in the previous model for autoencoder, we consider time-nodes 0=t_0<…<t_2r=T, and in each sub-interval we introduce a controlled dynamics with the scheme of active/inactive components depicted in Figure <ref>.
§ MEAN-FIELD ANALYSIS
In this section, we extend the dynamical model introduced in Section 2 to its mean-field limit, which corresponds to the scenario of an infinitely large dataset. Within this framework, we formulate the training of NeurODEs and AutoencODEs as a mean-field optimal control problem and provide the associated necessary optimality conditions. It is worth noting that our analysis covers both the high-regularized regime, as studied in previous work <cit.>, as well as the low-regularized regime, which has not been extensively addressed before. In this regard, we dedicate a subsection to a detailed comparison with the results obtained in <cit.>.
Additionally, we investigate the case of finite-particles approximation and we establish a quantitative bound on the generalization capabilities of these networks.
§.§ Mean-field dynamical model
In this section, we employ the same viewpoint as in <cit.>, and we consider the case of a dataset with an infinite number of observations. In our framework, each datum is modeled as a point x_0 ∈^d, and it comes associated with its corresponding label y_0 ∈^d. Notice that, in principle, in Machine Learning applications the label (or target) datum y_0 may have dimension different from d.
However, the labels' dimension is just a matter of notation and does not represent a limitation of our model. Following <cit.>, we consider the curve t ↦ (x(t),y(t)) which satisfies
ẋ(t)=ℱ(t,x(t),θ(t)) and ẏ(t)=0
for a.e. t∈ [0,T], and (x(0),y(0))=(x_0,y_0).
We observe that the variable y corresponding to the labels is not changing, nor it is affecting the evolution of the variable x.
We recall that the flow associated to the dynamics of the variable x is denoted by Φ_(0,t)^θ:^d →^d for every t∈ [0,T], and it has been defined in (<ref>).
Moreover, in regards to the full dynamics prescribed by (<ref>), for every admissible control θ∈ L^2([0,T],^m) we introduce the extended flow _(0,t)^θ:^d×^d→^d×^d, which reads
_(0,t)^θ(x_0,y_0) = (Φ^θ_(0,t)(x_0), y_0)
for every t∈[0,T] and for every (x_0,y_0)∈^d×^d.
We now consider the case of an infinite number of labeled data (X_0^i,Y_0^i)_i∈ I, where I is an infinite set of indexes. In our mathematical model, we understand this data distribution as a compactly-supported probability measure μ_0 ∈𝒫_c(^d×^d).
Moreover, for every t ∈ [0,T], we denote by t↦μ_t the curve of probability measures in 𝒫_c(^d×^d) that models the evolution of the solutions of (<ref>) corresponding to the Cauchy initial conditions (X_0^i,Y_0^i)_i∈ I. In other words, the curve t↦μ_t satisfies the following continuity equation:
∂_tμ_t(x,y) + ∇_x·((t,x,θ_t)μ_t(x,y) )=0, μ_t|_t=0(x,y)=μ_0(x,y),
understood in the sense of distributions, i.e.
For any given T>0 and θ∈ L^2([0,T],^m), we say that μ∈C([0,T],P_c(^2d)) is a weak solution of (<ref>) on the time interval [0,T] if
∫_0^T∫_^2d( ∂_t ψ(t,x,y) + ∇_x ψ(t,x,y) ·(t,x,θ_t) ) dμ_t(x,y) dt = 0,
for every test function ψ∈_c^1((0,T)×^2d).
Let us now discuss the existence and the characterisation of the solution.
Under Assumptions <ref>, for every μ_0 ∈P_c(^2d) we have that (<ref>) admits a unique solution t ↦μ_t in the sense of Definition <ref>. Moreover, we have that for every t ∈ [0,T]
μ_t = _(0,t) #^θμ_0.
Existence and uniqueness of the measure solution of (<ref>) follow from <cit.>.
From the characterisation of the solution of (<ref>) provided in (<ref>), it follows that the curve t ↦μ_t inherits the properties of the flow map Φ^θ described in Proposition <ref>. These facts are collected in the next result.
Let us fix T > 0 and μ_0 ∈𝒫_c(^2d),
and let us consider : [0, T ]×^d ×^m →^d satisfying Assumption <ref>. Let θ∈ L^2([0,T], ^m) be an admissible control, and let t ↦μ_t be the corresponding solution of (<ref>). Then, the curve t ↦μ_t satisfies the properties listed below.
* For every R>0 and ρ>0, there exists R̅>0 such that, for every t ∈ [0,T], it holds that
( μ_t) ⊂ B_R̅(0)
for every θ∈ L^2([0,T],^m) such that θ_L^2≤ρ, and for every μ_0 such that (μ_0)⊂ B_R(0).
* For every R>0 and ρ>0, there exists L̅>0 such that, for every t ∈ [0,T], it holds that
W_1(μ_t, ν_t) ≤L̅ W_1(μ_0,ν_0)
for every θ∈ L^2([0,T],^m) such that θ_L^2≤ρ, and for every initial conditions μ_0,ν_0 such that the supports satisfy (μ_0),(ν_0)⊂ B_R(0),
where μ_t = ^θ_(0,t)#μ_0 and ν_t = ^θ_(0,t)#ν_0.
* For every R>0 and ρ>0, there exists L̅>0 such that, for every t_1,t_2 ∈ [0,T], it holds that
W_1(μ_t_1, μ_t_2) ≤L̅· |t_1-t_2|^1/2
for every θ∈ L^2([0,T],^m) such that θ_L^2≤ρ, and for every μ_0 such that (μ_0)⊂ B_R(0).
* For every R>0 and ρ>0, there exists L̅>0 such that, for every t ∈ [0,T], it holds that
W_1(μ_t, ν_t) ≤L̅θ_1- θ_2_L^2
for every θ_1,θ_2∈ L^2([0,T],^m) such that θ_L^2 , θ_2_L^2≤ρ, and for every initial condition μ_0 such that (μ_0)⊂ B_R(0),
where μ_t = ^θ_1_(0,t)#μ_0 and ν_t = ^θ_2_(0,t)#μ_0.
All the results follow from Proposition <ref> and from the properties of the flow map presented in Proposition <ref>, combined with the Kantorovich duality (<ref>) for the distance W_1, and the change-of-variables formula (<ref>). Since the argument is essentially the same for all the properties, we detail the computations only for the second point, i.e., the Lipschitz-continuous dependence on the initial distribution.
Owing to (<ref>),
for any t ∈ [0,T], for any φ∈Lip(^2d) such that its Lipschitz constant Lip(φ)≤1, it holds that
W_1(μ_t, ν_t) ≤∫_^2dφ(x,y) d(μ_t - ν_t)(x,y)
= ∫_^2dφ(Φ^θ_(0,t)(x),y) d(μ_0-ν_0)(x,y)
≤L̅ W_1(μ_0,ν_0),
where the equality follows from the definition of push-forward and from (<ref>), while the constant L̅ in the second inequality descends from the local Lipschitz estimate of Φ^θ_(0,t) established in Proposition <ref>.
§.§ Mean-field optimal control
Using the transport equation (<ref>), we can now formulate the mean-field optimal control problem that we aim to address. To this end, we introduce the functional J:L^2([0,T],^m)→, defined as follows:
J(θ) =
{ ∫_^2dℓ(x,y) dμ_T(x,y)+λ∫_0^T|θ(t)|^2 dt ,
s.t. { ∂_tμ_t(x,y)+∇_x· (ℱ(t,x,θ_t)μ_t(x,y))=0 t∈[0,T],
μ_t|_t=0(x,y)=μ_0(x,y),
.
.
for every admissible control θ∈ L^2([0,T],^m).
The objective is to find the optimal control θ^* that minimizes J(θ^*), subject to the PDE constraint (<ref>) being satisfied by the curve t↦μ_t. The term "mean-field" emphasizes that θ is shared by an entire population of input-target pairs, and the optimal control must depend on the distribution of the initial data. We observe that when the initial measure μ_0 is empirical, i.e.
μ_0 := μ_0^N = 1/N∑_i=1^N δ_(X_0^i,Y_0^i),
then minimization of (<ref>) reduces to a classical finite particle optimal control problem with ODE constraints.
We now state the further regularity hypotheses that we require, in addition to the one contained in Assumption <ref>.
For any given T>0, the vector field satisfies the following.
(iv) For every R>0 there exists a constant L_R >0 such that, for every x_1,x_2 ∈ B_R(0), it holds
|∇_x ℱ(t,x_1,θ)-∇_x ℱ(t,x_2,θ)| ≤ L_R(1 + |θ|^2)|x_1-x_2| for a.e. t∈ [0,T] and every θ∈^m.
(v) There exists another constant L_R >0 such that, for every θ_1,θ_2 ∈^m, it holds
|∇_θ ℱ(t,x,θ_1)-∇_θ ℱ(t,x,θ_2)| ≤ L_R|θ_1 - θ_2| for a.e. t∈ [0,T] and every x ∈ B_R(0).
From this, it follows that
|∇_θ(t,x,θ)| ≤ L_R(1+|θ|) for every x∈ B_R(0) and for every θ∈^m.
(vi) There exists another constant L_R >0 such that, for every θ_1,θ_2 ∈^m, it holds
|∇_x ℱ(t,x,θ_1)-∇_x ℱ(t,x,θ_2)| ≤ L_R(1 + |θ_1| + |θ_2|)|θ_1-θ_2| for a.e. t∈ [0,T] and every x ∈ B_R(0).
From this, it follows that
|∇_x (t,x,θ)| ≤ L_R(1+|θ|^2) for every x∈ B_R(0) and for every θ∈^m.
(vii) There exists another constant L_R >0 such that
|∇_θ ℱ(t,x_1,θ)-∇_θ ℱ(t,x_2,θ)| ≤ L_R(1 + |θ|)|x_1-x_2| for a.e. t∈ [0,T] and every x_1,x_2 ∈ B_R(0).
Additionally, it is necessary to specify the assumptions on the function ℓ that quantifies the discrepancy between the output of the network and its corresponding label.
The function ℓ: ^d×^d ↦_+ is C^1-regular and non-negative. Moreover, for every R>0, there exists a constant L_R >0 such that, for every x_1,x_2 ∈ B_R(0), it holds
|∇_xℓ(x_1,y_1)-∇_x ℓ(x_2,y_2)| ≤ L_R ( |x_1-x_2| + |y_1-y_2|).
Let us begin by establishing a regularity result for the reduced final cost, which refers to the cost function without the regularization term.
Let T,R > 0 and μ_0 ∈𝒫_c(^2d) be such that (μ_0) ⊂ B_R(0), and let us consider :[0,T]×^d×^m→^d and ℓ:^d×^d→ that satisfy, respectively, Assumptions <ref>-<ref> and Assumption <ref>. Then, the reduced final cost
J_ℓ : θ∈ L^2([0,T];^m) ↦
{ ∫_^2dℓ(x,y) dμ_T^θ (x,y),
s.t. { ∂_t μ_t^θ(x,y) + ∇_x ( (t,x,θ_t) μ_t^θ(x,y) ) = 0,
μ_t^θ|_t=0(x,y) = μ_0(x,y),
.
.
is Fréchet-differentiable. Moreover, using the standard Hilbert space structure of L^2([0,T],^m), we can represent the differential of J_ℓ at the point θ_0 as the function:
∇_θ J_ℓ(θ) : t ↦∫_^2d∇_θ ^ ⊤( t , Φ^θ_0_(0,t)(x),θ(t) ) ·ℛ^θ_(t,T)(x)^⊤·∇_x ℓ ^⊤(Φ^θ_(0,T)(x), y ) dμ_0(x,y)
for a.e. t∈[0,T].
Before proving the statement, we need to introduce the linear operator ℛ^θ_τ,s(x):^d →^d with τ,s∈ [0,T], that is related to the linearization along a trajectory of the dynamics of the control system (<ref>), and that appears in (<ref>).
Given an admissible control θ∈ L^2([0,T],^m), let us consider the corresponding trajectory curve t↦Φ_(0,t)^θ(x) for t∈[0,T], i.e., the solution of (<ref>) starting at the point x∈^d at the initial instant t=0. Given any τ∈ [0,T], we consider the following linear ODE in the phase space ^d× d:
d/dsℛ^θ_(τ,s)(x) = ∇_x (s, Φ_(0,s)^θ(x),θ(s))
·ℛ_(τ,s)^θ(x) s∈[0,T],
ℛ_(τ,τ)^θ(x) =
Id.
We insist on the fact that, when we write ℛ_(τ,s)^θ(x), x denotes the starting point of the trajectory along which the dynamics has been linearized.
We observe that, using Assumption <ref>-(iv)-(vi) and Caratheodory Theorem, it follows that (<ref>) admits a unique solution, for every x∈^d and for every τ∈[0,T]. Since it is an elementary object in Control Theory, the properties of ℛ^θ are discussed in the Appendix (see Proposition <ref>). We just recall here that the following relation is satisfied:
ℛ^θ_τ,s(x) =
∇_x Φ^θ _(τ, s)|_Φ^θ _(0, τ)(x)
for every τ,s ∈ [0,T] and for every x∈^d (see, e.g., <cit.>). Moreover, for every τ,s ∈ [0,T] the following identity holds:
ℛ^θ_τ,s(x)·ℛ^θ_s,τ(x) = Id,
i.e., the matrices ℛ^θ_τ,s(x), ℛ^θ_s,τ(x) are one the inverse of the other. From this fact, it is possible to deduce that
∂/∂τℛ^θ_τ,s(x)
= -
ℛ^θ_τ,s(x) ·∇_x (τ,Φ_(0,τ)^θ(x),θ(τ))
for almost every τ,s∈ [0,T] (see, e.g., <cit.> for the details).
Let us fix an admissible control θ∈ L^2([0,T];^m) and let μ^θ_·∈𝒞^0([0,T];𝒫_c(^2d)) be the unique solution of the continuity equation (<ref>), corresponding to the control θ and satisfying μ^θ|_t=0=μ_0. According to Proposition <ref>, this curve can be expressed as μ_t^θ = ^θ_(0,t)#μ_0 for every t∈[0,T], where the map ^θ_(0,t)=(Φ_(0,t)^θ,Id):^2d→^2d has been introduced in (<ref>) as the flow of the extended control system (<ref>).
In particular, we can rewrite the terminal cost J_ℓ defined in (<ref>) as
J_ℓ(θ) = ∫_^2dℓ( Φ^θ_(0,T)(x),y )dμ_0(x,y).
In order to compute the gradient ∇_θ J_ℓ, we preliminarily need to focus on the differentiability with respect to θ of the mapping θ↦ℓ( Φ_(0,T)^θ(x),y), when (x,y) is fixed. Indeed, given another control ϑ∈ L^2([0,T];^m) and ε > 0, from Proposition <ref> it descends that
Φ^θ+εϑ_(0,T)(x) = Φ^θ_(0,T)(x) + εξ^θ(T) + o_θ(ε)
= Φ^θ_(0,T)(x) + ε∫_0^Tℛ^θ_(s,T)(x) ∇_θ ( s , Φ^θ_(0,s)(x) , θ(s) ) ϑ(s) ds + o_θ(ε)
ε→ 0,
where o_θ(ε) is uniform for every x ∈ B_R(0)⊂^d, and as ϑ varies in the unit ball of L^2.
Owing to Assumption <ref>, for every x,y,v ∈ B_R(0) we observe that
|ℓ(x+ε v +o(ε),y) -ℓ(x,y) -ε∇_x ℓ(x,y)· v| ≤ |∇_x ℓ(x,y)| o(ε) + 1/2L_R|ε v + o(ε)|^2 ε→ 0.
Therefore, combining (<ref>) and (<ref>), we obtain that
ℓ(Φ^θ + εϑ_(0,T)(x),y) - ℓ(Φ^θ_(0,T)(x),y) =
ε∫_0^T( ∇_x ℓ(Φ^θ_(0,T)(x),y) ·ℛ^θ_(s,T)(x)
·∇_θ(s, Φ^θ_(0,s)(x),θ(s)) ) ·ϑ(s) ds + o_θ(ε).
Since the previous expression is uniform for x,y∈ B_R(0), then if we integrate both sides of the last identity with respect to μ_0, we have that
J_ℓ(θ +εϑ)
- J_ℓ(θ) =
ε∫_^2d∫_0^T( ∇_x ℓ(Φ^θ_(0,T)(x),y) ·ℛ^θ_(s,T)(x)
·∇_θ(s, Φ^θ_(0,s)(x),θ(s)) ) ·ϑ(s) ds dμ_0(x,y) + o_θ(ε).
This proves the Fréchet differentiability of the functional J_ℓ at the point θ.
We observe that, from Proposition <ref>, Proposition <ref> and Assumption <ref>, it follows that the function s↦∇_x ℓ(Φ^θ_(0,T)(x),y) ·ℛ^θ_(s,T)(x)·∇_θ(s, Φ^θ_(0,s)(x),θ(s)) is uniformly bounded in L^2, as x,y vary in B_R(0)⊂^d. Then, using Fubini Theorem, the first term of the expansion (<ref>) can be rewritten as
∫_0^T ( ∫_^2d∇_x ℓ(Φ^θ_(0,T)(x),y) ·ℛ^θ_(s,T)(x)
·∇_θ(s, Φ^θ_(0,s)(x),θ(s)) dμ_0(x,y) ) ·ϑ(s) ds.
Hence, from the previous asymptotic expansion and from Riesz Representation Theorem, we deduce (<ref>).
We now prove the most important result of this subsection, concerning the Lipschitz regularity of the gradient ∇_θ J_ℓ.
Under the same assumptions and notations as in Lemma <ref>, we have that the gradient ∇_θ J_ℓ:L^2([0,T],^m)→ L^2([0,T],^m) is Lipschitz-continuous on every bounded set of L^2.
More precisely, given θ_1,θ_2 ∈ L^2([0,T];^m), there exists a constant ℒ(T,R,θ_1_L^2, θ_2_L^2) > 0 such that
∇_θ J_ℓ(θ_1) - ∇_θ J_ℓ(θ_2) _L^2≤ℒ(T,R,θ_1_L^2, θ_2_L^2) θ_1 - θ_2 _L^2.
Let us consider two admissible controls θ_1,θ_2 ∈ L^2([0,T],^m) such that θ_1 _L^2, θ_1 _L^2≤ C. In order to simplify the notations, given x∈ B_R(0)⊂^d, we define the curves x_1:[0,T]→^d and x_2:[0,T]→^d as
x_1(t) := Φ_(0,t)^θ_1(x),
x_2(t) := Φ_(0,t)^θ_2(x)
for every t∈[0,T], where the flows Φ^θ_1,Φ^θ_2 were introduced in (<ref>). We recall that, in virtue of Proposition <ref>, x_1(t),x_2(t)∈ B_R̅(0) for every t∈[0,T].
Then, for every y∈ B_R(0), we observe that
|
∇_θ ^ ⊤( t , x_1(t), θ_1(t) ) ℛ^θ_1_(t,T)(x)^⊤∇_x ℓ ^⊤(x_1(T), y ) -
∇_θ ^ ⊤( t , x_1(t),θ_2(t) ) ℛ^θ_2_(t,T)(x)^⊤∇_x ℓ ^⊤(x_2(T), y )
|
≤|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) ) |
|
ℛ^θ_1_(t,T)(x)^⊤|
|
∇_x ℓ ^⊤(x_1(T), y ) - ∇_x ℓ ^⊤(x_2(T), y )
|
+
|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) )
|
|
ℛ^θ_1_(t,T)(x)^⊤ - ℛ^θ_2_(t,T)(x)^⊤|
|
∇_x ℓ ^⊤(x_2(T), y )
|
+
|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) ) -
∇_θ ^ ⊤( t , x_2(t),θ_2(t) )
|
|
ℛ^θ_2_(t,T)(x)^⊤|
|
∇_x ℓ ^⊤(x_2(T), y )
|
for a.e. t∈ [0,T]. We bound separately the three terms at the right-hand side of (<ref>). As regards the first addend, from Assumption <ref>-(v), Assumption <ref>, Proposition <ref> and Lemma <ref>, we deduce that there exists a positive constant C_1>0 such that
|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) ) |
|
ℛ^θ_1_(t,T)(x)^⊤| |
∇_x ℓ ^⊤(x_1(T), y ) - ∇_x ℓ ^⊤(x_2(T), y )
|
≤
C_1 ( 1 + |θ_1(t)| )
θ_1 -θ_2 _L^2
for a.e. t∈ [0,T]. Similarly, using again Assumption <ref>-(v), Assumption <ref>, and Proposition <ref> on the second addend at the right-hand side of (<ref>), we obtain that there exists C_2>0 such that
|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) ) |
|
ℛ^θ_1_(t,T)(x)^⊤ - ℛ^θ_2_(t,T)(x)^⊤|
| ∇_x ℓ ^⊤(x_2(T), y )
|
≤
C_2 ( 1 + |θ_1(t)| )
θ_1 -θ_2 _L^2
for a.e. t∈ [0,T].
Moreover, the third term can be bounded as follows:
|
∇_θ ^ ⊤( t , x_1(t),θ_1(t) ) - ∇_θ ^ ⊤( t , x_2(t),θ_2(t) ) |
|
ℛ^θ_2_(t,T)(x)^⊤|
| ∇_x ℓ ^⊤(x_2(T), y )
|
≤ C_3 [ (1 + |θ_1(t)|) θ_1 -θ_2_L^2 + |θ_1(t) - θ_2(t)| ]
for a.e. t∈ [0,T], where we used Assumption <ref>-(v)-(vii), Proposition <ref> and Lemma <ref>. Therefore, combining (<ref>)-(<ref>), we deduce that
|
∇_θ ^ ⊤( t , x_1(t), θ_1(t) ) ℛ^θ_1_(t,T)(x)^⊤∇_x ℓ ^⊤(x_1(T), y ) -
∇_θ ^ ⊤( t , x_1(t),θ_2(t) ) ℛ^θ_2_(t,T)(x)^⊤∇_x ℓ ^⊤(x_2(T), y )
|
≤C̅[ (1 + |θ_1(t)|) θ_1 -θ_2_L^2 + |θ_1(t) - θ_2(t)| ]
for a.e. t∈ [0,T]. We observe that the last inequality holds for every x,y∈ B_R(0). Therefore, if we integrate both sides of (<ref>) with respect to the probability measure μ_0, recalling the expression of the gradient of J_ℓ reported in (<ref>), we have that
|∇_θ J_ℓ(θ_1)[t] - ∇_θ J_ℓ(θ_1)[t]|
≤C̅[ (1 + |θ_1(t)|) θ_1 -θ_2_L^2 + |θ_1(t) - θ_2(t)| ]
for a.e. t∈ [0,T], and this concludes the proof.
From the previous result we can deduce that the terminal cost J_ℓ:L^2([0,T],^m)→ is locally semi-convex.
Under the same assumptions and notations as in Lemma <ref>, let us consider a bounded subset Γ⊂ L^2([0,T];^m). Then, ∇_θ J:L^2([0,T])→ L^2([0,T]) is Lipschitz continuous on Γ. Moreover, there exists a constant ℒ(T,R,Γ) > 0 such that the cost functional J:L^2([0,T],^m)→ defined in (<ref>) satisfies the following semiconvexity estimate:
J ((1-ζ)θ_1 + ζθ_2 ) ≤ (1-ζ) J(θ_1) + ζ J(θ_2) - (2λ - ℒ(T,R,Γ)) ζ(1-ζ)/2 ‖θ_1 - θ_2‖_L^2^2
for every θ_1,θ_2 ∈Γ and for every ζ∈ [0,1]. In particular, if λ > 12ℒ(T,R,Γ), the cost functional J is strictly convex over Γ.
We recall that J(θ) = J_ℓ(θ) + λθ_L^2^2, where J_ℓ has been introduced in (<ref>).
Owing to Proposition <ref>, it follows that ∇_θ J_ℓ is Lipschitz continuous on Γ with constant ℒ(T,R,Γ). This implies that J is Lipschitz continuous as well on Γ. Moreover, it descends that
J_ℓ( (1-ζ)θ_1 + ζθ_2 ) ≤ (1-ζ) J_ℓ(θ_1) + ζ J_ℓ(θ_2) + ℒ(T,R,Γ) ζ(1-ζ)/2 ‖θ_1-θ_2‖_L^2^2
for every θ_1,θ_2∈Γ and for every ζ∈[0,1].
On the other hand, recalling that
‖(1-ζ) θ_1 + ζθ_2‖_L^2^2 = (1-ζ)‖θ_1‖_L^2^2 + ζ‖θ_2‖_L^2^2 - ζ(1-ζ) ‖θ_1 - θ_2‖_L^2^2
for every θ_1,θ_2∈ L^2, we immediately deduce (<ref>).
When the parameter λ>0 that tunes the L^2-regularization is large enough, we can show that the functional J defined by (<ref>) admits a unique global minimizer.
Indeed, since the control identically 0 is an admissible competitor, we have that
inf_θ∈ L^2 J(θ) ≤ J(0) = J_ℓ(0),
where we observe that the right-hand side is not affected by the value of λ. Hence, recalling that J(θ) = J_ℓ(θ)+λθ_L^2^2, we have that the sublevel set {θ: J(θ) ≤ J_ℓ(0) } is included in the ball B_λ :={θ: θ_L^2^2 ≤1/λJ_ℓ(0) }. Since these balls are decreasing as λ increases, owing to Corollary <ref>, we deduce that there exists a parameter λ̅>0 such that the cost functional J is strongly convex when restricted to B_λ̅.
Then, Lemma <ref> guarantees that the functional J:L^2([0,T],^m)→ introduced in (<ref>) is continuous with respect to the strong topology of L^2, while the convexity implies that it is weakly lower semi-continuous as well.
Being the ball B_λ̅ weakly compact, we deduce that the restriction to B_λ̅ of the functional J admits a unique minimizer θ^*. However, since B_λ̅ includes the sublevel set {θ: J(θ) ≤ J_ℓ(0) }, it follows that θ^* is actually the unique global minimizer.
It is interesting to observe that, even though λ is chosen large enough to ensure existence (and uniqueness) of the global minimizer, it is not possible to conclude that the functional J is globally convex. This is essentially due to the fact that Corollary <ref> holds only on bounded subsets of L^2.
Taking advantage of the representation of the gradient of the terminal cost J_ℓ provided by (<ref>), we can formulate the necessary optimality conditions for the cost J introduced in (<ref>).
In order to do that, we introduce the function p:[0,T]×^d×^d→^d as follows:
p_t(x,y) :=
∇_x ℓ (Φ_(0,T)^θ(x),y)
·ℛ^θ_(t,T)(x),
where ℛ^θ_(t,T)(x) is defined according to (<ref>). We observe that p (as well as ∇_x ℓ) should be understood as a row vector.
Moreover, using (<ref>), we deduce that, for every x,y∈^d, the t↦ p_t(x,y) is solving the following backward Cauchy problem:
∂/∂ t p_t(x,y) =
-p_t(x,y)·∇_x
(t,Φ_(0,t)^θ(x),θ(t))
,
p_T(x,y) = ∇_x ℓ (Φ_(0,T)^θ(x),y).
Hence, we can equivalently rewrite ∇_θ J_ℓ using p:
∇_θ J_ℓ(θ)[t]
= ∫_^2d∇_θ ^ ⊤( t , Φ^θ_(0,t)(x),θ(t) ) · p^⊤_t(
x,y) dμ_0(x,y)
for almost every t∈ [0,T].
Therefore, recalling that J(θ) = J_ℓ(θ) + λθ_L^2^2, we deduce that the stationary condition ∇_θ J(θ^*) = 0 can be rephrased as
{ ∂_t μ^*_t(x,y) + ∇_x·(ℱ(t,x ,θ^*(t))μ_t^*(x,y))=0, μ_t^*|_t=0(x,y)=μ_0(x,y),
∂_t p^*_t(x,y) =
-p^*_t(x,y)·∇_x
(t,Φ_(0,t)^θ^*(x),θ^*(t)),
p^*_t|_t=T(x,y) = ∇_x ℓ (Φ_(0,T)^θ^*(x),y),
θ^*(t) = -1/2λ∫_^2d∇_θ ^ ⊤( t , Φ^θ^*_(0,t)(x),θ^*(t) ) · p^* ⊤_t(x,y) dμ_0(x,y). .
The computation of p through the backward integration of (<ref>) can be interpreted as the control theoretic equivalent of the “back-propagation of the gradients”. We observe that, in order to check whether (<ref>) is satisfied, it is sufficient to evaluate p^* only on (μ_0). Moreover, the evaluation of p^* on different points (x_1,y_1),(x_2,y_2)∈(μ_0) involves the resolution of two uncoupled backward ODEs. This means that, when dealing with a measure μ_0 that charges only finitely many points, we can solve the equation (<ref>) in parallel for every point in (μ_0).
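In an implementation with an empirical measure μ_0^N, this remark translates into a forward pass for the trajectories, a backward pass for the adjoint variables p, and an average over the samples. The sketch below is only a schematic realization of this procedure with explicit Euler steps and a piecewise-constant control; all function names, array shapes and the toy example are our own choices.

import numpy as np

def grad_J(theta, X0, Y0, F, dxF, dthF, dxl, lam, T=1.0):
    # theta: (K, m) piecewise-constant control on a uniform grid of [0, T].
    # F, dxF, dthF: the field and its Jacobians in x (d x d) and theta (d x m).
    # dxl: gradient of the loss ell in its first argument.
    K, m = theta.shape
    N, d = X0.shape
    h = T / K
    # Forward pass: explicit Euler for every sample (trajectories are stored).
    X = np.empty((K + 1, N, d))
    X[0] = X0
    for k in range(K):
        for i in range(N):
            X[k + 1, i] = X[k, i] + h * F(k * h, X[k, i], theta[k])
    # Backward pass: p' = -p * dxF with p_T = grad_x ell(X_T, Y_0), plus gradient assembly.
    P = np.empty((K + 1, N, d))
    for i in range(N):
        P[K, i] = dxl(X[K, i], Y0[i])
    grad = np.zeros_like(theta)
    for k in range(K - 1, -1, -1):
        for i in range(N):
            P[k, i] = P[k + 1, i] + h * P[k + 1, i] @ dxF(k * h, X[k, i], theta[k])
            grad[k] += dthF(k * h, X[k, i], theta[k]).T @ P[k, i] / N
        grad[k] += 2.0 * lam * theta[k]   # contribution of the Tikhonov term
    return grad

# Tiny run with d = m = 1, F(t, x, th) = tanh(th * x) and squared loss.
F    = lambda t, x, th: np.tanh(th * x)
dxF  = lambda t, x, th: np.array([[th[0] * (1.0 - np.tanh(th[0] * x[0]) ** 2)]])
dthF = lambda t, x, th: np.array([[x[0] * (1.0 - np.tanh(th[0] * x[0]) ** 2)]])
dxl  = lambda x, y: 2.0 * (x - y)
X0, Y0 = np.array([[1.0], [2.0]]), np.array([[0.5], [1.0]])
print(grad_J(np.zeros((20, 1)), X0, Y0, F, dxF, dthF, dxl, lam=0.1))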
In virtue of Proposition <ref>, we can study the gradient flow induced by the cost functional J:L^2([0,T],^m)→ on its domain. More precisely, given an admissible control θ_0∈ L^2([0,T],^m), we consider the gradient flow equation:
θ̇(ω) = -∇_θ J(θ(ω)) ω≥ 0,
θ(0) =θ_0.
In the next result we show that the gradient flow equation (<ref>) is well-posed and that the solution is defined for every ω≥ 0. In the particular case of linear-control systems, the properties of the gradient flow trajectories has been investigated in <cit.>.
Let T,R > 0 and μ_0 ∈𝒫_c(^2d) be a probability measure such that (μ_0) ⊂ B_R(0), and let us consider :[0,T]×^d×^m→^d and ℓ:^d×^d→ that satisfy, respectively, Assumptions <ref>-<ref> and Assumption <ref>.
Then, for every θ_0∈ L^2([0,T],^m), the gradient flow equation (<ref>) admits a unique solution ω↦θ(ω) of class C^1 that is defined for every ω∈ [0,+∞).
Let us consider θ_0 ∈ L^2([0,T],^m), and let us introduce the sub-level set
Γ:={θ∈ L^2([0,T],^m): J(θ)≤ J(θ_0) },
where J is the functional introduced in (<ref>) defining the mean-field optimal control problem. Using the fact that the end-point cost ℓ:^d×^d →_+ is non-negative, we deduce that Γ⊂{θ∈ L^2([0,T],^m): θ_L^2^2 ≤1/λJ(θ_0) }. Hence, from Proposition <ref> it follows that the gradient field ∇_θ J is Lipschitz (and bounded) on Γ. Hence, using a classical result on ODE in Banach spaces (see, e.g., <cit.>), it follows that the initial value problem (<ref>) admits a unique small-time solution ω↦θ(ω) of class C^1 defined for ω∈[-δ,δ], with δ>0.
Moreover, we observe that
d/dω J(θ(ω)) = ⟨∇_θ J(θ(ω)), θ̇(ω) ⟩ = -‖∇_θ J(θ(ω))‖_L^2^2 ≤ 0,
and this implies that θ(ω)∈Γ for every ω∈ [0,δ]. Hence, it is possible to recursively extend the solution to every interval of the form [0, M], with M>0.
We observe that, under the current working assumptions, we cannot provide any convergence result for the gradient flow trajectories. This is not surprising since, when the regularization parameter λ>0 is small, it is not even possible to prove that the functional J admits minimizers. Indeed, the argument presented in Remark <ref> requires the regularization parameter λ to be sufficiently large.
We conclude the discussion with an observation on a possible discretization of (<ref>). If we fix a sufficiently small parameter τ > 0, given an initial guess θ_0, we can consider the sequence of controls (θ^τ_k)_k≥0⊂ L^2([0,T],^m) defined through the Minimizing Movement Scheme:
θ^τ_0 = θ_0, θ^τ_k+1 ∈ arg min_θ [ J(θ) + 1/(2τ) ‖θ-θ^τ_k‖^2_L^2 ], k≥0.
We observe that the minimization problems in (<ref>) are well-posed as soon as the functionals θ↦ J^τ_θ_k(θ):= J(θ) + 1/(2τ)‖θ-θ^τ_k‖^2_L^2 are strictly convex on the bounded sublevel set K_θ_0 := {θ: J(θ)≤ J(θ^τ_0) }, for every k≥0. Hence, the parameter τ>0 can be calibrated by means of the estimates provided by Corollary <ref>, considering the bounded set K_θ_0.
Then, using an inductive argument, it follows that, for every k≥0, the functional J^τ_θ_k:L^2([0,T],^m)→ admits a unique global minimizer θ^τ_k+1.
Also for J^τ_θ_k we can derive the necessary conditions for optimality satisfied by θ_k+1, that are analogous to the ones formulated in (<ref>), and that descend as well from the identity ∇_θ J^τ_θ_k(θ^τ_k+1)=0:
{ ∂_t μ_t(x,y) + ∇_x· (ℱ(t,x ,θ^τ_k+1(t))μ_t(x,y) )=0, μ_t|_t=0(x,y)=μ_0(x,y),
∂_t p_t(x,y) =
-p_t(x,y)·∇_x
(t,Φ_(0,t)^θ^τ_k+1(x),θ^τ_k+1(t)),
p_t|_t=T(x,y) = ∇_x ℓ (Φ_(0,T)^θ^τ_k+1(x),y),
θ^τ_k+1(t) = 1/(1+ 2λτ)(
θ^τ_k(t) -τ∫_^2d∇_θ ^ ⊤( t , Φ^θ^τ_k+1_(0,t)(x),θ^τ_k+1(t) ) · p^⊤_t(x,y) dμ_0(x,y)). .
Finally, we observe that the mapping Λ^τ:L^2([0,T],^m )→ L^2([0,T],^m ) defined for a.e. t∈ [0,T] as
Λ_θ^τ_k^τ(θ)[t] := 1/(1+ 2λτ)(
θ^τ_k(t) -τ∫_^2d∇_θ ^ ⊤( t , Φ^θ_(0,t)(x),θ(t) ) · p^⊤_t(x,y) dμ_0(x,y)
)
is a contraction on K_θ_0 as soon as
τ/(1+ 2λτ) · Lip(∇_θ J_ℓ|_K_θ_0) < 1.
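A schematic implementation of the resulting training loop (our own sketch: the routine names are illustrative, and grad_Jl stands for any procedure returning ∇_θ J_ℓ, e.g. the forward-backward sweep sketched earlier) is:

import numpy as np

def proximal_step(theta_k, grad_Jl, lam, tau, n_inner=50):
    # Solves the minimization in the Minimizing Movement Scheme by iterating
    # the fixed-point map Lambda^tau above, which is a contraction for tau small.
    theta = theta_k.copy()
    for _ in range(n_inner):
        theta = (theta_k - tau * grad_Jl(theta)) / (1.0 + 2.0 * lam * tau)
    return theta

def minimizing_movement(theta0, grad_Jl, lam, tau, n_outer=100):
    # Outer loop: a proximally stabilized gradient descent on the control.
    theta = theta0.copy()
    for _ in range(n_outer):
        theta = proximal_step(theta, grad_Jl, lam, tau)
    return theta

# Toy quadratic example: J_ell(theta) = 0.5 * ||theta - 1||^2, so grad_Jl = theta - 1.
theta_opt = minimizing_movement(np.zeros(5), lambda th: th - 1.0, lam=0.05, tau=0.5)
print(theta_opt)   # close to 1/(1 + 2*lam), the minimizer of J_ell + lam*||theta||^2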
For every τ>0 such that the sequence (θ_k^τ)_k≥0 is defined, we denote with θ̃^τ:[0,+∞)→ L^2([0,T],^m) the piecewise affine interpolation obtained as
θ̃^τ(ω) = θ_k^τ + (θ^τ_k+1 - θ^τ_k)/τ (ω - kτ), ω∈ [kτ, (k+1)τ].
We finally report a classical result concerning the convergence of the piecewise affine interpolation θ̃^τ to the gradient flow trajectory solving (<ref>).
Under the same assumptions and notations as in Lemma <ref>, let us consider an initial point θ_0∈ L^2([0,T],^m) and a sequence (τ_j)_j∈ℕ such that τ_j→ 0 as j→∞, and let (θ̃^τ_j )_j∈ℕ be the sequence of piecewise affine curves defined by (<ref>).
Then, for every Ω>0, there exists a subsequence (θ̃^τ_j_k )_k∈ℕ converging uniformly on the interval [0,Ω] to the solution of (<ref>) starting from θ_0.
The proof follows directly from <cit.>.
§.§ Finite particles approximation
In this section, we study the stability of the mean-field optimal control problem (<ref>) with respect to finite-sample distributions. More precisely, assume that we are given a sample {(X_0^i,Y_0^i)}_i=1^N of size N ≥ 1, drawn independently and identically distributed according to μ_0 ∈𝒫_c(^2d), and consider the empirical loss minimization problem
inf_θ∈ L^2([0,T];^m) J^N(θ) :=
{ 1/N∑_i=1^N ℓ(X^i(T),Y^i(T) ) + λ∫_0^T|θ(t)|^2 dt
s.t. { Ẋ^i(t) = (t,X^i(t),θ(t) ), Ẏ^i(t) =0,
(X^i(t),Y^i(t)) |_t=0 = (X_0^i,Y_0^i), i ∈{1,…,N}.
.
.
By introducing the empirical measure μ_0^N ∈𝒫_c^N(^2d), defined as
μ_0^N := 1/N∑_i=1^N δ_(X_0^i,Y_0^i),
the cost function in (<ref>) can be rewritten as
J^N(θ) = ∫_^2dℓ(Φ_(0,T)^θ(x),y) dμ_0^N(x,y) + λθ_L^2^2
for every θ∈ L^2([0,T],^m), and the empirical loss minimization problem in (<ref>) can be recast as a mean-field optimal control problem with initial datum μ^N_0.
In this section we are interested in studying the asymptotic behavior of the functional J^N as N tends to infinity. More precisely, we consider a sequence of probability measures (μ_0^N)_N≥1 such that μ_0^N charges uniformly N points, and such that
W_1(μ_0^N,μ_0) N → +∞⟶ 0.
Then, in Proposition <ref> we study the uniform convergence of J^N and of ∇_θ J^N to J and ∇_θ J, respectively, where J:L^2([0,T],^m)→ is the functional defined in (<ref>) and corresponding to the limiting measure μ_0.
Moreover, in Theorem <ref>, assuming the existence of a region where the functionals J^N are uniformly strongly convex, we provide an estimate of the so-called generalization error in terms of the distance W_1(μ_0^N,μ_0).
Let us consider a probability measure μ_0 ∈𝒫_c(^2d) and a sequence (μ_0^N)_N≥1 such that μ_0^N∈𝒫^N_c(^2d) for every N≥1. Let us further assume that W_1(μ_0^N,μ_0)→ 0 as N→∞, and that there exists R>0 such that (μ_0), (μ^N_0) ⊂ B_R(0) for every N≥1.
Given T>0, let :[0,T]×^d ×^m →^d and ℓ:^d ×^d → satisfy, respectively, Assumptions <ref>-<ref> and Assumption <ref>, and let J,J^N:L^2([0,T],^m)→ be the cost functionals defined in (<ref>) and (<ref>), respectively.
Then, for every bounded subset Γ⊂ L^2([0,T],^m), we have that
lim_N→∞ sup_θ∈Γ
|J^N(θ)-J(θ)| =0
and
lim_N→∞ sup_θ∈Γ∇_θ J^N(θ) - ∇_θ J(θ) _L^2=0,
where J was introduced in (<ref>), and J^N is defined as in (<ref>).
Since we have that J(θ) = J_ℓ(θ) + λθ_L^2^2 and J^N(θ) = J_ℓ^N(θ) + λθ_L^2^2, it is sufficient to prove (<ref>)-(<ref>) for J_ℓ and J_ℓ^N, where we set
J^N_ℓ(θ) := ∫_^2dℓ(Φ_(0,T)^θ(x),y) dμ_0^N(x,y)
for every θ∈ L^2 and for every N≥1.
We first observe that, for every θ∈ L^2([0,T],^m) such that θ_L^2≤ρ, from Proposition <ref> it follows that
(μ_0), (μ^N_0) ⊂ B_R̅(0), for some R̅>0. Then, denoting with t↦μ_t^N and t↦μ_t the solutions of the continuity equation (<ref>) driven by the control θ and with initial datum, respectively, μ_0^N and μ_0, we compute
|J^N_ℓ(θ)-J_ℓ(θ)| = |∫_^2dℓ(Φ_(0,T)^θ(x),y) (dμ_0^N-dμ_0)(x,y) |
=
|∫_^2dℓ(x,y) (dμ_T^N-dμ_T)(x,y) |
≤L̅_1 L̅_2 W_1(μ_0^N,μ_0),
where we have used (<ref>) and Proposition <ref> in the second identity, and we have indicated with L̅_1 the Lipschitz constant of ℓ on B_R̅(0), while L̅_2 descends from the continuous dependence of solutions of (<ref>) on the initial datum (see Proposition <ref>). We insist on the fact that both L̅_1,L̅_2 depend on ρ, i.e., the upper bound on the L^2-norm of the controls.
We now address the uniform converge of ∇_θ J^N_ℓ to ∇_θ J_ℓ on bounded sets of L^2. As before, let us consider an admissible control θ such that θ_L^2≤ρ.
Hence, using the representation provided in (<ref>), for a.e. t ∈ [0,T] we have:
|∇_θ J^N_ℓ(θ)[t] - ∇_θ J_ℓ(θ)[t] | =
|∫_^2d∇_θ ^ ⊤( t , Φ^θ_0_(0,t)(x),θ_0(t) ) ·ℛ^θ_0_(t,T)(x)^⊤·∇_x ℓ ^⊤(Φ^θ_0_(0,T)(x), y ) (dμ^N_0-dμ_0 )(x,y) |,
In order to prove uniform convergence in L^2 norm, we have to show that the integrand is Lipschitz-continuous in (x,y) for a.e. t∈[0,T], where the Lipschitz constant has to be L^2-integrable as a function of the t variable.
First of all, combining Assumption <ref>-(v) and Lemma <ref>, we can prove that there exists constants C_1,L̅_3>0 (depending on ρ) such that
| ∇_θ(t, Φ^θ_(0,t)(x), θ(t))| ≤ C_1 (1+ |θ(t)|),
|∇_θ(t, Φ^θ_(0,t)(x_1), θ(t)) - ∇_θ(t, Φ^θ_(0,t)(x_2), θ(t))| ≤ L̅_3 L̅_2 (1+|θ(t)|)|x_1-x_2|
for a.e. t∈[0,T]. We recall that the quantity L̅_2>0 (that already appeared in (<ref>)) represents the Lipschitz constant of the flow Φ_(0,t) with respect to the initial datum.
Moreover, from Proposition <ref>, it descends that
|ℛ^θ_(t,T)(x)| ≤ C_2,
|ℛ^θ_(t,T)(x_1)-ℛ^θ_(t,T)(x_2)|≤ L̅_4 |x_1-x_2|
for every t∈[0,T], where the constants C_2, L̅_4 both depend on ρ.
Finally, owing to Assumption <ref> and Proposition <ref>, we deduce
| ∇_x ℓ(Φ^θ_(0,T)(x),y)| ≤ C_3,
|∇_x ℓ(Φ^θ_(0,T)(x_1),y_1)-∇_x ℓ(Φ^θ_(0,T)(x_2),y_2)| ≤ L̅_5(L̅_2 |x_1-x_2|+|y_1-y_2|)
for every x,y ∈ B_R(0),
where the constants C_3, L̅_2 and the Lipschitz constant L̅_5 of ∇_x ℓ depend, once again, on ρ.
Combining (<ref>), (<ref>) and (<ref>), we obtain that there exists a constant L̃_ρ>0 such that
|∇_θ J^N[t]-∇_θ J[t]| ≤L̃_ρ (1 + |θ(t)|) W_1(μ_0^N, μ_0),
for a.e. t∈[0,T].
Observing that the right-hand side is L^2-integrable in t, the previous inequality yields
∇_θ J^N-∇_θ J_L^2≤L̃_ρ (1+ρ) W_1(μ_0^N, μ_0),
and this concludes the proof.
In the next result we provide an estimate of the generalization error in terms of the distance W_1(μ_0^N,μ_0). In this case, the important assumption is that there exists a sequence (θ^*,N)_N≥1 of local minimizers of the functionals (J^N)_N≥1, and that it is contained in a region where (J^N)_N≥1 are uniformly strongly convex.
Under the same notations and hypotheses as in Proposition <ref>, let us further assume that the functional J admits a local minimizer θ^* and, similarly, that, for every N≥1, θ^*,N is a local minimizer for J^N. Moreover, we require that there exist a radius ρ >0 and an index N̅∈ℕ such that, for every N ≥N̅, θ^*,N∈ B_ρ(θ^*) and the functional J^N is η-strongly convex in B_ρ(θ^*), with η>0.
Then, there exists a constant C>0 such that, for every N≥N̅, we have
|∫_ℝ^2dℓ(x,y) dμ^θ^*,N_T(x,y)- ∫_ℝ^2dℓ(x,y) dμ^θ^*_T(x,y) | ≤ C ( W_1(μ_0^N,μ_0)
+ 1/√(η)√(W_1(μ_0^N,μ_0))).
According to our assumptions, the control θ^*,N∈ B_ρ(θ^*) is a local minimizer for J^N, and, being J^N strongly convex on B_ρ(θ^*) for N≥N̅, we deduce that {θ^*,N} = arg min_B_ρ(θ^*) J^N.
Furthermore, from the η-strong convexity of J^N, it follows that for every θ_1, θ_2 ∈ B_ρ(θ^*), it holds
⟨∇_θ J^N(θ_1) - ∇_θ J^N(θ_2), θ_1-θ_2 ⟩≥η ||θ_1-θ_2||^2_L^2.
According to Proposition <ref>, we can pass to the limit in the latter and deduce that
⟨∇_θ J(θ_1) -∇_θ J(θ_2), θ_1-θ_2 ⟩≥η ||θ_1-θ_2||^2_L^2
for every θ_1,θ_2∈ B_ρ(θ^*).
Hence, J is η-strongly convex in B_ρ(θ^*) as well, and {θ^*} = arg min_B_ρ(θ^*)J.
Therefore, from the η-strong convexity of J^N and J, we obtain
J^N(θ^*)-J^N(θ^*,N) ≥η/2||θ^*,N- θ^*||_L^2^2
J(θ^*,N)-J(θ^*) ≥η/2||θ^*,N- θ^*||_L^2^2.
Summing the last two inequalities, we deduce that
η ||θ^*,N- θ^*||_L^2^2 ≤ (J^N(θ^*)-J(θ^*) ) + (J(θ^*,N)-J^N(θ^*,N) ) ≤ 2C_1 W_1(μ_0^N, μ_0),
where the second inequality follows from the local uniform convergence of Proposition <ref>. We are now in a position to derive a bound on the generalization error:
| ∫_ℝ^2dℓ(x,y) (dμ^θ^*,N_T - dμ^θ^*_T )(x,y) | = | ∫_ℝ^2dℓ(Φ_(0,T)^θ^*,N(x),y) dμ^N_0(x,y) - ∫_ℝ^2dℓ(Φ_(0,T)^θ^*(x),y) dμ_0(x,y) |
≤∫_^2d|ℓ(Φ^θ^*,N_(0,T)(x),y) -ℓ(Φ^θ^*_(0,T)(x),y)| dμ_0^N(x,y)
+ |∫_^2dℓ(Φ^θ^*_(0,T)(x),y) ( dμ_0^N(x,y)-dμ_0(x,y) ) |
≤L̅sup_x ∈(μ_0^N)|Φ^θ^*,N_(0,T)(x)- Φ^θ^*_(0,T)(x)| + L̅_R W_1(μ_0^N, μ_0),
where L̅ and L̅_R are constants coming from Assumption <ref> and Proposition <ref>. Then, we combine Proposition <ref> with the estimate in (<ref>), in order to obtain
sup_x ∈(μ_0^N)|Φ^θ^*,N_(0,T)(x)- Φ^θ^*_(0,T)(x)| ≤ C_2 θ^*,N-θ^*_L^2≤ C_2 √(2C_1/η W_1(μ_0^N,μ_0)).
Finally, from the last inequality and (<ref>), we deduce (<ref>).
Since the functional J:L^2([0,T],^m)→ defined in (<ref>) is continuous (and, in particular, lower semi-continuous) with respect to the strong topology of L^2, the locally uniform convergence of the functionals J^N to J (see Proposition <ref>) implies that J^N is Γ-converging to J with respect to the strong topology of L^2. However, this fact is of little use, since the functionals J,J^N are not strongly coercive.
On the other hand, if we equip L^2 with the weak topology, in general the functional J is not lower semi-continuous. In our framework, the only circumstance where one can hope for Γ-convergence with respect to the weak topology corresponds to the highly-regularized scenario, i.e., when the parameter λ>0 is sufficiently large. Therefore, in the situations of practical interest when λ is small, we cannot rely on this tool, and the crucial aspect is that the dynamics (<ref>) is nonlinear with respect to the control variable. Indeed, in the case of affine-control systems considered in <cit.>, it is possible to establish Γ-convergence results in the L^2-weak topology (see <cit.> for an application to diffeomorphisms approximation).
Finally, we report that in <cit.>, in order to obtain the L^2-strong equi-coercivity of the functionals, the authors introduced in the cost the H^1-seminorm of the controls.
§.§ Convex regime and previous result
In order to conclude our mean-field analysis, we now compare our results with the ones obtained in the similar framework of <cit.>, where the regularization parameter λ was assumed to be sufficiently large, leading to a convex regime in the sublevel sets (see Remark <ref>). We recall below the main results presented in <cit.>.
Given T, R, R_T>0, and an initial datum μ_0 ∈𝒫_c(^2d) with (μ_0) ⊂ B_R(0), let us consider a terminal condition ψ_T:^d×^d→ such that (ψ_T)⊂ B_R_T(0) and ψ_T(x,y) = ℓ(x,y) ∀ x,y ∈ B_R(0). Let the controlled dynamics satisfy <cit.> and ℓ∈ C^2(^d ×^d,). Assume further that λ>0 is large enough. Then, there exists a triple (μ^*, θ^*, ψ^*) ∈𝒞([0, T ], 𝒫_c(^2d)) × Lip([0, T ], ^m) ×𝒞^1([0, T ], 𝒞_c^2(^2d)) solution of
{ ∂_tμ_t^*(x,y) + ∇_x· (ℱ(t,x,θ^*(t))μ_t^*(x,y))=0, μ_t^*|_t=0(x,y)=μ_0(x,y),
∂_tψ_t^*(x,y) + ∇_xψ_t^*(x,y)·ℱ(t,x,θ^∗(t))=0, ψ_t^*|_t=T(x,y)=ℓ(x,y) ,
θ^*⊤(t) = -1/2λ∫_^2d∇_xψ_t^*(x,y)·∇_θℱ(t,x,θ^*(t)) dμ_t^*(x,y), .
where ψ^* ∈𝒞^1([0,T],𝒞^2_c(^2d)) is in characteristic form. Moreover, the control solution θ^* is unique in a ball Γ_C ⊂ L^2([0,T], ^m) and continuously dependent on the initial datum μ_0.
We observe that the condition that λ >0 be large enough is crucial to obtain local convexity of the cost functional and, consequently, existence and uniqueness of the solution. However, in the present paper we have not made assumptions on the magnitude of λ; hence, as already noticed in Remark <ref>, we might end up in a non-convex regime. Nevertheless, in Proposition <ref> we show that, in the case of λ sufficiently large, the previous approach and the current one are “equivalent".
Under the same hypotheses as in Theorem <ref>, let J:L^2([0,T])→ be the functional defined in (<ref>). Then, θ^* satisfies (<ref>) if and only if it is a critical point for J.
According to Lemma <ref>, the gradient of the functional J at θ∈ L^2([0,T],^m) is defined for a.e. t ∈ [0,T] as
∇_θ J(θ)[t] = ∫_^2d∇_θ ^ ⊤( t , Φ^θ_(0,t)(x),θ(t) ) ·ℛ^θ_(t,T)(x)^⊤·∇_x ℓ ^⊤(Φ^θ_(0,T)(x), y ) dμ_0(x,y) + 2λθ(t).
Hence, if we set the previous expression equal to zero, we obtain the characterization of the critical point
θ(t) = -1/2λ∫_^2d∇_θ ^ ⊤( t , Φ^θ_(0,t)(x),θ(t) ) ·ℛ^θ_(t,T)(x)^⊤·∇_x ℓ ^⊤(Φ^θ_(0,T)(x), y ) dμ_0(x,y)
for a.e. t∈[0,T].
On the other hand, according to Theorem <ref>, the optimal θ satisfies for a.e. t ∈ [0,T] the following
θ(t) = -1/2λ∫_^2d( ∇_xψ_t(x,y) ·∇_θ(t,x, θ(t)) )^⊤ dμ_t(x,y)
= -1/2λ∫_^2d∇_θ^⊤(t,Φ^θ_(0,t)(x), θ(t)) ·∇_xψ^⊤_t(Φ^θ_(0,t)(x),y) dμ_0(x,y).
Hence, to conclude that ∇_θ J=0 is equivalent to the condition stated in Theorem <ref>, we are left to show that
ℛ^θ_(t,T)(x)^⊤·∇_x ℓ ^⊤(Φ^θ_(0,T)(x), y ) = ∇_x ψ^⊤_t( Φ^θ_(0,t)(x),y),
where the operator ℛ^θ_(t,T)(x) is defined as the solution of (<ref>).
First of all, we recall that (t,x,y)↦ψ(t, Φ^θ_(0,t)(x),y) is defined as the characteristic solution of the second equation in (<ref>) and, as such, it satisfies
ψ_t(x,y) = ℓ (Φ_(t,T)^θ(x),y),
for every t∈[0,T] and for every x,y∈ B_R_T(0). By taking the gradient with respect to x, we obtain that
∇_x ψ_t(x,y) = ∇_x ℓ(Φ_(t,T)^θ(x),y) ·∇_x Φ_(t,T)^θ|_x,
for all x,y∈ B_R_T(0). Hence, using (<ref>), we deduce that
∇_x ψ_t(Φ_(0,t)^θ(x),y) = ∇_x ℓ(Φ_(t,T)^θ∘Φ_(0,t)^θ(x),y) ·∇_x Φ_(t,T)^θ|_Φ_(0,t)^θ(x) = ∇_x ℓ(Φ_(0,T)^θ(x),y) ·ℛ^θ_(t,T)(x),
which proves (<ref>).
§ ALGORITHM
In this section, we present our training procedure, which is derived from the necessary optimality conditions related to the minimizing movement scheme (see (<ref>)).
Since the mean-field optimal control problem as presented in (<ref>) is numerically intractable (especially in high dimension), in practice we always consider the functional corresponding to the finite-particle approximation (see (<ref>)).
For its resolution, we employ an algorithm belonging to the family of shooting methods, which consists in the forward evolution of the trajectories, in the backward evolution of the adjoint variables, and in the update of the controls.
Variants of this method have already been employed in different works, e.g. <cit.>, with the name of method of successive approximations, and they have been proven to be an alternative way of performing the training of NeurODEs for a range of tasks, including high-dimensional problems.
In our case, we start with a random guess for the control parameter θ_0∈ L^2([0,T],^m). Subsequently, we solve the necessary optimality conditions specified in equation (<ref>) for a suitable τ>0 to obtain an updated control parameter θ_1. More precisely, since the last identity in (<ref>) has the form θ_1=Λ_θ_0^τ(θ_1), the computation of θ_1 is performed via fixed-point iterations of the mapping Λ_θ_0^τ, which is defined as in (<ref>). In this regard, we recall that Λ_θ_0^τ is a contraction if τ is small enough. The scheme that we implemented is presented in Algorithm <ref>.
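To make the procedure concrete, the following Python sketch shows one possible implementation of this outer iteration. It assumes that the fixed-point map takes the proximal form Λ_θ_prev^τ(θ) = θ_prev - τ∇_θ J^N(θ), which is consistent with the stationarity condition of a minimizing-movement step, although the precise form of Λ used in (<ref>) is the one given there; the function grad_J, the step size tau and the iteration counts are illustrative placeholders rather than part of the formal scheme.

```python
import numpy as np

def minimizing_movement_step(theta_prev, grad_J, tau, n_fp=50, tol=1e-8):
    """Solve theta = Lambda(theta) := theta_prev - tau * grad_J(theta)
    by fixed-point iteration; Lambda is a contraction when tau is small enough."""
    theta = theta_prev.copy()
    for _ in range(n_fp):
        theta_next = theta_prev - tau * grad_J(theta)
        if np.linalg.norm(theta_next - theta) < tol:
            return theta_next
        theta = theta_next
    return theta

def train(theta0, grad_J, tau=0.05, n_outer=100):
    """Outer loop of the shooting method: repeat the minimizing-movement update."""
    theta = theta0
    for _ in range(n_outer):
        theta = minimizing_movement_step(theta, grad_J, tau)
    return theta
```

Here grad_J stands for the gradient ∇_θ J^N computed via the forward and backward sweeps described below.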
It is interesting to observe that, in the highly-regularized regime considered in <cit.>, the authors managed to obtain a contractive map directly from the necessary conditions for optimality, and they did not need to consider the minimizing movements scheme. This is rather natural since, when the parameter λ>0 that tunes the L^2-penalization is large enough, the functional associated to the optimal control problem is strongly convex in the sublevel set corresponding to the control θ≡ 0, as discussed in Remark <ref>.
However, as reported in <cit.>, determining the appropriate value for λ in each application can be challenging. On the other hand, from the practitioners' perspective, dealing with high regularization is not always desirable, since the machine learning task that the system should learn is encoded in the final-time cost. The authors highlighted the complexity involved in selecting a regularization parameter that is large enough to achieve contractivity, while ensuring that the resulting controls are not excessively small (due to high regularization) and of little use.
These considerations motivated us to consider a scenario where the regularization parameter does not need to be set sufficiently large.
From a numerical perspective, the parameter τ in Equation (<ref>) (coming from the minimizing movement scheme) plays the role of the learning rate, and it provides the missing amount of convexity, addressing the stability issues related to the resolution of optimal control problems in the non-convex regime.
These kinds of instabilities were already known in the Soviet literature of numerical optimal control (see the review paper <cit.>), and various solutions have been proposed to address them.
For example, in <cit.> the authors proposed an iterative method based on the Maximum Principle and on an augmented Hamiltonian, with an approach that is somehow reminiscent of minimizing movements.
More recently, in the framework of NeurODEs, in <cit.> it was proposed another stabilization strategy, which is different from ours since it enforces similarity between the evolution of state and co-state variables after the control update.
Implicitly, the approach of <cit.>
leads to a penalization of significant changes in the controls. On the other hand, in our approach this penalization is more explicit, and it is enforced via the memory term of the minimizing movement scheme. To the best of our knowledge, this is the first instance where a regularization based on the minimizing movement scheme is employed for training NeurODEs.
Although we formulate and analyze theoretically our problem within the mean-field framework, it is not advantageous to numerically solve the forward equation as a partial differential equation. In <cit.>, various numerical methods for solving PDEs were employed and compared. However, these methods encounter limitations when applied to high-dimensional data, which is often the case in Machine Learning scenarios. Therefore, in this study, we employ a particle method to solve both the forward partial differential equation and the backward dynamics. This particle-based approach involves reformulating the PDE as a system of ordinary differential equations in which particles represent mathematical collocation points that discretize the continuous fields. By employing this particle method, we address the challenges associated with high-dimensional data, enabling efficient numerical solutions for the forward and backward dynamics.
To conclude this section, we briefly present the forward and the backward systems that are solved during the execution of the method. For the sake of simplicity, we will focus on the case of an encoder. The objective is to minimize the following function:
J(θ) = 1/N∑_i=1^Nℓ(X^i__r(T), Y^i(0)) + λ/2θ^2_2,
where _r denotes the active indices in the bottleneck, i.e. at t_r=T, of the state-vector X^i(T). The latter denotes the encoded output at time T for the i-th particle, while Y^i(0) represents the corresponding target at time 0 (which we recall is the same at time T, being Ẏ^i ≡ 0 for every i=1,…,N). For each i-th particle and every t such that t_j ≤ t ≤ t_j+1, the forward dynamics can be described as follows:
Ẋ^i__j(t) = 0,
Ẋ^i__j(t) = _j(t, X^i__j(t), θ(t) ),
subject to the initial condition X^i(0) = X__0^i(0) = X_0^i ∈^d. In the same interval t_j ≤ t ≤ t_j+1, the backward dynamics reads
Ṗ^i__j(t) = 0,
Ṗ^i__j(t) = -P^i__j(t) ·∇_x__j_j (t, X^i__j(t), θ(t) ),
where the k-th component of the final co-state is
P^i_k(T) = - ∂_k ℓ(X^i__r(T),Y^i(0)) if k ∈_r, and P^i_k(T) = 0 if k ∉_r.
We notice that, for t_j ≤ t ≤ t_j+1 and every i ∈{0, …, N }, we have
(t,X^i(t),θ(t)) = (t, (X^i__r,X^i__j)(t), θ(t)) =
[ _j(t, X^i__j(t),θ(t)); 0 ],
and, consequently, we deduce that
∇_x (t,X^i(t),θ(t)) = [ ∇_x__j_j(t,X^i__j(t),θ(t)) 0; 0 0 ],
where the null blocks are due to the fact that, for t_j ≤ t ≤ t_j+1, ∇_x_k (t,x,θ)=0 if k∈_j, and ∇_x__j_j (t,x,θ)=0.
In the case of an Autoencoder, the structure of the forward and backward dynamics is analogous.
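As an illustration of these sweeps, the following sketch uses an explicit Euler discretization, a residual tanh vector field standing in for the controlled dynamics, and a quadratic loss on the active components; all of these choices, as well as the function and variable names, are simplifying assumptions of this example and not part of the formal model.

```python
import numpy as np

def F(t, X, theta, active):
    """Residual-type vector field: only the active components are moved."""
    W, b = theta                                    # illustrative parametrization of theta(t_j)
    V = np.zeros_like(X)
    V[:, active] = np.tanh(X[:, active] @ W.T + b)
    return V

def grad_x_F(t, x, theta, active):
    """Jacobian of F with respect to x for a single particle (d x d matrix)."""
    W, b = theta
    J = np.zeros((x.shape[0], x.shape[0]))
    s = 1.0 - np.tanh(W @ x[active] + b) ** 2       # derivative of tanh at the pre-activation
    J[np.ix_(active, active)] = s[:, None] * W
    return J

def forward_backward(X0, Y, thetas, actives, dt):
    """Explicit-Euler forward sweep and backward adjoint sweep for N particles."""
    N, d = X0.shape
    layers = len(thetas)
    X = [X0]
    for j in range(layers):                         # forward dynamics
        X.append(X[-1] + dt * F(j * dt, X[-1], thetas[j], actives[j]))
    P = np.zeros((N, d))
    out = actives[-1]
    P[:, out] = -(X[-1][:, out] - Y)                # final co-state for ell = 0.5*|x - y|^2
    P_traj = [None] * (layers + 1)
    P_traj[layers] = P.copy()
    for j in reversed(range(layers)):               # backward dynamics: dP/dt = -P * grad_x F
        P = P + dt * np.stack([P[i] @ grad_x_F(j * dt, X[j][i], thetas[j], actives[j])
                               for i in range(N)])
        P_traj[j] = P.copy()
    return X, P_traj
```

The gradient of the discretized functional with respect to θ at layer j can then be assembled from P_traj[j], the derivative of the vector field with respect to θ, and the Tikhonov term 2λθ_j, in analogy with the representation of ∇_θ J used above.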
From the calculations reported above it is evident that the matrices and the vectors involved in our forward and backward dynamics are quite sparse (see (<ref>) and (<ref>)), and that the state and co-state variables contain components that are constant in many sub-intervals (see (<ref>) and (<ref>)). Hence, in the practical implementation, especially when dealing with an Autoencoder, we do not actually need to double the original state variables and to introduce the shadow ones, but we can simply overwrite those values and, in this way, we obtain a more memory-efficient code. A similar argument holds as well for the co-state variables.
Moreover, we expect the control variable θ to have several null components during the evolution.
This relates to Remark <ref> and descends from the fact that, even though in our model θ∈^m for every t ∈ [0,T], in the internal sub-intervals [t_j,t_j+1] only a few of its components influence the dynamics.
Hence, owing to the L^2-squared regularization on θ, it follows that, if in an interval [t_j,t_j+1] a certain component of θ does not affect the velocity, then it is convenient to keep it null.
§ NUMERICAL EXPERIMENTS
In this section, we present a series of numerical examples to illustrate the practical application of our approach. We consider datasets of varying dimensions, ranging from low-dimensional data to a more typical Machine Learning dataset such as MNIST. Additionally, we provide justifications and insights into some of the choices made in our theoretical analysis.
For instance, we examine the process of choosing the components to be deactivated during the modeling phase, and we investigate whether this hand-picked selection
can lead to any issues or incorrect results.
In this regard, in our first experiment concerning a classification task, we demonstrate that this a priori choice does not pose any problem, as the network effectively learns to separate the dataset into two classes before accurately classifying them.
Furthermore, as we already pointed out, we have extended some of the assumptions from <cit.> to accommodate the use of a smooth approximation of the ReLU function. This extension is not merely a theoretical exercise, since in our second numerical example we show how valuable it is to leverage unbounded activation functions.
While both of these examples involve low-dimensional data and may not be representative of typical tasks for an Autoencoder architecture, we address this limitation in our third experiment by performing a reconstruction task on the MNIST dataset. Lastly, we present noteworthy results obtained from analyzing the performance on MNIST, highlighting specific behaviors that warrant further investigation in future research.
The layers of the networks that we employ in all our experiments have the form:
^d∋ X = (X__j, X__j)^⊤↦ϕ_n^W,b(X) = (X__j, X__j)^⊤ + h ( σ( W__j· X__j + b__j), 0 )^⊤,
where _j,_j are, respectively, the sets of active and inactive components at the layer n, b__j are the components of b∈^d belonging to _j, while W__j is the square sub-matrix of W∈^d× d corresponding to the active components. Finally, the activation function σ will be specified case by case.
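In code, a single layer of this form can be written as the following minimal sketch, where the container chosen for the active index set and the way W and b are stored are implementation choices of ours and not prescribed by the model:

```python
import numpy as np

def phi_layer(X, W, b, active, h, sigma=np.tanh):
    """Residual layer phi_n^{W,b}: the active components are updated by
    h * sigma(W @ X_active + b); the inactive components are left unchanged."""
    X_new = X.copy()
    X_new[active] = X[active] + h * sigma(W @ X[active] + b)
    return X_new
```

Stacking such maps while first shrinking and then enlarging the active set reproduces the encoder-bottleneck-decoder structure used in the experiments below.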
§.§ Bidimensional Classification
In our initial experiment, we concentrate on a bidimensional classification task that has been extensively described in <cit.>. Although this task deviates from the typical application of Autoencoders, where the objective is data reconstruction instead of classification, we believe it gives valuable insights on how our model works.
The objective is to classify particles sampled from a standard Gaussian distribution in ^2 based on the sign of their first component. Given an initial data point x_0 ∈^2, denoted by x_0[i] with i=1,2 representing its i-th component, we assign a positive label +1 to it if x_0[1] > 0, and a negative label -1 otherwise.
To incorporate the labels into the Autoencoder framework, we augment the labels to obtain a positive label [1,0] and a negative one [-1,0]. In such a way, we obtain target vectors in ^2, i.e., with the same dimension as the input data-points in the first layer.
The considered architecture is an Autoencoder comprising twenty-one layers, corresponding to T=2 and dt=0.05. The first seven layers maintain a constant active dimension equal to 2, followed by seven layers of active dimension 1. Finally, the last seven layers, representing the prototype of a decoder, have again constant active dimension 2, restoring the initial one.
A sketch of the architecture is presented on the right side of Figure <ref>.
We underline that we make use of the observation presented in Remark <ref> to construct the implemented network, and we report that we employ the hyperbolic tangent as activation function.
The next step is to determine which components to deactivate, i.e., we have to choose the sets _j for j=1, …, 2r: the natural choice is to deactivate the second component, since the information on which the classification is based is contained in the first component (the sign) of the input data-points.
Since we use the memory-saving regime of Remark <ref>, we observe that, in the bottleneck, the particles are “projected" onto the x-axis, as their second component is deactivated and set equal to 0. Then, in the decoding phase, both components have again the possibility of evolving. This particular case is illustrated on the left side of Figure <ref>.
Now, let us consider a scenario where the network architecture remains the same, but instead of deactivating the second component, we turn-off the first component.
This has the effect of “projecting" the particles onto the y-axis in the encoding phase.
The results are presented in Figure <ref>, where an interesting effect emerges.
In the initial phase (left), where the particles can evolve in the whole space ^2, the network is capable of rearranging the particles in order to separate them. More precisely, in this part, the relevant information for the classification (i.e., the sign of the first component), is transferred to the second component, that will not be deactivated.
Therefore, once the data-points are projected onto the y-axis in the bottleneck (middle), two distinct clusters are already formed, corresponding to the two classes of particles.
Finally, when the full dimension is restored, the remaining task consists in moving these clusters towards the respective labels, as demonstrated in the plot on the right of Figure <ref>.
This numerical evidence confirms that our a priori choice (even when it is very unnatural) of the components to be deactivated does not affect the network's ability to learn and classify the data.
Finally, while studying this low-dimensional numerical example, we test one of the assumptions that we made in the theoretical setting.
In particular, we want to check if it is reasonable to assume that the cost landscape is convex around local minima, as assumed in Theorem <ref>.
In Table <ref>, we report the smallest and highest eigenvalues of the Hessian matrix of the loss function recorded during the training process, i.e., starting from a random initial guess, until the convergence to an optimal solution.
§.§ Parabola Reconstruction
In our second numerical experiment, we focus on the task of reconstructing a two-dimensional parabola.
To achieve this, we sample points from the parabolic curve and we use them as the initial data for our network. The network architecture consists of a first block of seven layers with active dimension 2, followed by seven additional layers with active dimension 1.
Together, these two blocks represent the encoding phase, in which the sets of active components are _j = {0} for j=7,…, 14.
Similarly as in the previous example, the points at the 7-th layer are “projected" onto the x-axis, and for the six subsequent layers they are constrained to stay in this subspace.
After the 14-th layer, the original active dimension is restored, and the particles can move in the whole space ^2, aiming at reaching their original positions.
Despite the low dimensionality of this task, it provides an interesting application that allows us to observe the distinct phases of our model, which are presented in Figure <ref>.
Notably, in the initial seven layers, the particles show very small movements (top left of Figure <ref>). This is because the relevant information for reconstructing the position is encoded in the first component, which is kept active in the bottleneck.
On the other hand, if in the encoder we chose to deactivate the first component instead of the second one, we would expect the points to move considerably before the projection takes place, as was the case in the previous classification task.
During the second phase (top right of Figure <ref>), the particles separate along the x-axis, preparing for the final decoding phase, which proves to be the most challenging to learn (depicted in the bottom left of Figure <ref>).
Based on our theoretical knowledge and the results from initial experiments, we attempt to improve the performance of the AutoencODE network by modifying its structure.
One possible approach is to design the network in a way that allows more time for the particles to evolve during the decoding phase, while reducing the time spent in the initial and bottleneck phases. Indeed, we try to use 40 layers instead of 20, and most of the new ones are allocated in the decoding phase.
The result is illustrated in the bottom right of Figure <ref>, where we observe that changing the network's structure has a significant positive impact on the reconstruction quality, leading to better results. This result is inspired by the heuristic observation that the particles “do not need to move” in the first two phases. On this point, a more theoretical analysis of the network's structure will be further discussed in the next paragraph, where we perform a sanity check, and we relate the need for extra layers to the Lipschitz constant of the trained network.
This experiment highlights an important observation regarding the choice of activation functions. Specifically, it becomes evident that certain bounded activation functions, such as the hyperbolic tangent, are inadequate for moving the particles back to their original positions during the decoding phase.
The bounded nature of these activation functions limits their ability to produce displacements over a sufficiently large range of values, which can lead to the points getting stuck at suboptimal positions and failing to reconstruct the parabolic curve accurately.
To overcome this limitation and achieve successful reconstruction, it is necessary to employ unbounded activation functions that allow for a wider range of values, in particular the well-known Leaky ReLU function.
An advantage of our approach is that our theory permits the use of smooth approximations for well-known activation functions, such as the Leaky ReLU (<ref>).
Specifically, we employ the following smooth approximation of the Leaky ReLU function:
σ_smooth(x) = α x + 1/slog(1+e^sx),
where letting s tend to infinity ensures convergence to the original Leaky ReLU function.
While alternative approximations are available, we employed (<ref>) in our study.
This observation emphasizes the importance of considering the characteristics and properties of activation functions when designing and training neural networks, and it motivates our goal in this work to encompass unbounded activation functions in our working assumptions.
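A direct implementation of (<ref>) is sketched below; the default values of α and s are purely illustrative, and np.logaddexp is used only as a numerically stable way of evaluating log(1+e^sx).

```python
import numpy as np

def smooth_leaky_relu(x, alpha=0.1, s=10.0):
    """sigma_smooth(x) = alpha * x + log(1 + exp(s * x)) / s.
    For large s this approaches a Leaky-ReLU-shaped profile
    (slope alpha for x << 0 and slope 1 + alpha for x >> 0)."""
    return alpha * x + np.logaddexp(0.0, s * x) / s
```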
§.§ MNIST Reconstruction
In this experiment, we apply the AutoencODE architecture and our training method to the task of reconstructing images from the MNIST dataset. The MNIST dataset contains 70000 grayscale images of handwritten digits ranging from zero to nine. Each image has a size of 28×28 pixels and has been normalized.
This dataset is commonly used as a benchmark for image classification tasks or for evaluating image recognition and reconstruction algorithms.
However, our objective in this experiment is not to compare our reconstruction error with state-of-the-art results, but rather to demonstrate the applicability of our method to high-dimensional data, and to highlight interesting phenomena that we encounter.
In general, when performing an autoencoder reconstruction task, the goal is to learn a lower-dimensional representation of the data that captures its essential features.
On the other hand, determining the dimension of the lower-dimensional representation, often referred to as the latent dimension, requires setting a hyperparameter, i.e., the width of the bottleneck's layers, which might depend on the specific application.
We now discuss the architecture we employed and the choice we made for the latent dimension.
Our network consists of twenty-three layers, with the first ten layers serving as encoder, where the dimension of the layers is gradually reduced from the initial value d_0=784 to a latent dimension of d_r=32.
Then, this latent dimension is kept in the bottleneck for three layers, while the last ten layers act as decoder and, symmetrically to the encoder, increase the width of the layers from 32 back to d_2r=784.
Finally, for each layer we employ a smooth version of the Leaky ReLU, see (<ref>), as activation function.
The architecture is visualized in Figure <ref>, while the achieved reconstruction results are presented in Figure <ref>.
We observe that, once again, we made use of Remark <ref> for the implementation of the AutoencODE-based model.
Latent dimensionality in the bottleneck:
One of the first findings that we observe in our experiments pertains to the latent dimension of the network and to the intrinsic dimension of the dataset.
The problem of determining the intrinsic dimension has been the object of previous studies such as <cit.>, where it was estimated to be approximately equal to 13 in the case of the MNIST dataset.
On this interesting topic, we also report the paper <cit.>, where a maximum likelihood estimator was proposed and datasets of images were considered, and the recent contribution <cit.>. Finally, the model of the hidden manifold has been formulated and studied in <cit.>.
Notably, our network exhibits an interesting characteristic in which, starting from the initial guess of weights and biases initialized at 0, the training process automatically identifies an intrinsic dimensionality of 13.
Namely, we observe that the latent vectors of dimension 32 corresponding to each image in the dataset are sparse vectors with 13 non-zero components, forming a consistent support across all latent vectors derived from the original images.
To further analyze this phenomenon, we compute the means of all the latent vectors for each digit and we compare them, as depicted in the left and middle of Figure <ref>. These mean vectors always have exactly the same support of dimension 13, and, interestingly, we observe that digits that share similar handwritten shapes, such as the numbers 4 and 9 or digits 3 and 5, actually have latent means that are close to each other.
Additionally, we explore the generative capabilities of our network by allowing the latent means to evolve through the decoding phase, aiming to generate new images consistent with the mean vector.
On the right of Figure <ref>, we present the output of the network when using a latent vector corresponding to the mean of all latent vectors representing digit 3.
This intriguing behavior of our network warrants further investigation into its ability to detect the intrinsic dimension of the input data, and into the exploration of its generative potential.
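A minimal sketch of how the support and per-digit statistics reported here can be extracted is given below; the name encode for the trained encoder restricted to the bottleneck, the latent width of 32 and the tolerance are assumptions of this illustration.

```python
import numpy as np

def latent_statistics(encode, X, labels, tol=1e-6):
    """Common support of the latent codes and per-digit latent means."""
    Z = encode(X)                                   # (N, 32) bottleneck codes
    support = np.abs(Z).max(axis=0) > tol           # components non-zero somewhere on the dataset
    means = {d: Z[labels == d].mean(axis=0) for d in range(10)}
    return int(support.sum()), support, means

def mean_distances(means):
    """Pairwise distances between per-digit latent means, e.g. to check that
    visually similar digits (4 and 9, or 3 and 5) have nearby means."""
    return {(a, b): float(np.linalg.norm(means[a] - means[b]))
            for a in means for b in means if a < b}
```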
Previous studies have demonstrated that the ability of neural networks to converge to simpler solutions is significantly influenced by the initial parameter values (see e.g. <cit.>).
Indeed, in our case we have observed that this phenomenon only occurs when initializing the parameters with zeros.
Moreover, it is worth mentioning that this behavior does not seem to appear in standard Autoencoders without residual connections.
Sanity check of the network's architecture.
An advantage of interpreting neural networks as discrete approximations of dynamical systems is that we can make use of typical results of numerical resolutions of ODEs in order to better analyze our results.
Indeed, we notice that, according to well-known results, in order to solve a generic ODE we need to take as discretization step-size dt a value smaller than the inverse of the Lipschitz constant of the vector field driving the dynamics.
We recall that the quantity dt is related to the number of layers of the network through the relation n_layers= T/dt, where T is the right-extreme of the evolution interval [0,T].
In our case, we choose a priori the amplitude of dt, we train the network and, once we have computed θ^*, we can compare a posteriori the discretization step-size chosen at the beginning with the quantity Δ = 1/Lip((t,·,θ^*(t))), i.e., the inverse of the Lipschitz constant of the trained vector field with respect to the state variable, for each time-node t.
In Figure <ref>, we show the time discretization dt in orange and in blue the quantity Δ, for the case of a wrongly constructed autoencoder (on the left) and the correct one (on the right).
From this plots, we can perform a “sanity check" and we can make sure that the number of layers that we chose is sufficient to solve the task.
Indeed, in the wrong autoencoder on the left, we see that in the last layer the quantity Δ is smaller than dt, and this violates the condition that guarantees the stability of the explicit Euler discretization.
On the other hand, the introduction of two symmetric layers in the network (corresponding to the plot on the right of Figure <ref>) allows it to satisfy the relation Δ > dt everywhere.
Moreover, we also notice that during the encoding phase the inverse of the Lipschitz constant of the vector field is quite large, which means that the vector field does not need to move the points much.
This suggests that we could get rid of some of the layers in the encoder and only keep the necessary ones, i.e., the ones in the decoder where Δ is small and a finer discretization step-size is required.
Finally, we report that this last observation is consistent with the results recently obtained in <cit.>.
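In practice, this check can be carried out by estimating the Lipschitz constant of the trained vector field on the particle cloud at every layer, for instance with the finite-difference estimate over random pairs sketched below; the sampling size and the signature assumed for F are illustrative choices of ours.

```python
import numpy as np

def stability_check(F, thetas, X_traj, dt, n_pairs=512, rng=np.random.default_rng(0)):
    """Return Delta_j = 1 / Lip_x F(t_j, ., theta_j) estimated on the particle cloud;
    the explicit Euler discretization is considered safe where Delta_j > dt."""
    deltas = []
    for j, theta in enumerate(thetas):
        X = X_traj[j]                                    # particles at layer j, shape (N, d)
        idx = rng.integers(0, len(X), size=(n_pairs, 2))
        num = np.linalg.norm(F(j * dt, X[idx[:, 0]], theta)
                             - F(j * dt, X[idx[:, 1]], theta), axis=1)
        den = np.linalg.norm(X[idx[:, 0]] - X[idx[:, 1]], axis=1) + 1e-12
        lip = float((num / den).max())                   # empirical Lipschitz constant
        deltas.append(1.0 / max(lip, 1e-12))
    return np.array(deltas)

# Layers violating the stability condition, where extra layers (a smaller dt) would help:
# bad_layers = np.where(stability_check(F, thetas, X_traj, dt) <= dt)[0]
```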
Entropy across layers.
We present our first experiments on the study of the information propagation within the network, where some intriguing results appear.
This phenomenon is illustrated in Figure <ref>, where we examine the entropy across the layers after the network has been trained.
We introduce two different measures of entropy, depicted in the two graphs of the figure. In the first place, we consider the well-known Shannon entropy, denoted by H(E), which quantifies the information content of a discrete random variable E distributed according to a discrete probability measure p: Ω→ [0,1] with p(e) = ℙ(E=e). The Shannon entropy is computed as follows:
H(E) = 𝔼[-log(p(E))] = -∑_e p(e)log(p(e)).
In our context, the random variable of interest is E = ∑_j=1^N 1_{|X_0^i- X_0^j| ≤ϵ}, where X_0^i represents a generic image from the MNIST dataset.
Additionally, we introduce another measure of entropy, denoted as ℰ, which quantifies the probability that the dataset can be partitioned into ten clusters corresponding to the ten different digits. This quantity has been introduced in <cit.> and it is defined as
ℰ = ℙ(X ∈⋃_i=1^k B_ε(X_0^i) ),
where ε>0 is a small radius, and X_0^1,…,X_0^k are samples from the dataset.
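Under our reading of the two definitions above, both quantities can be estimated on (a subsample of) the activations at each layer as follows; the use of scipy's cdist and the sampling of the centers are implementation choices of this sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist

def shannon_entropy_neighbours(X, eps):
    """Shannon entropy of E = number of eps-neighbours of a sample,
    with p estimated from the empirical distribution of E over the dataset."""
    counts = (cdist(X, X) <= eps).sum(axis=1)        # E evaluated at each sample X_0^i
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return float(-(p * np.log(p)).sum())

def clustering_entropy(X, k, eps, rng=np.random.default_rng(0)):
    """Empirical estimate of P(X in the union of eps-balls around k sampled points)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    return float((cdist(X, centers).min(axis=1) <= eps).mean())
```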
Figure <ref> suggests the existence of a distinct pattern in the variation of information entropy across the layers, which offers a hint for further investigations.
Let us first focus on the Shannon entropy: as the layers' dimensionality decreases in the encoding phase, there is an expected decrease of entropy, reflecting the compression and reduction of information in the lower-dimensional representation.
The bottleneck layer, where the dimension is kept constant, represents a critical point where the entropy reaches a minimum. This indicates that the information content is highly concentrated and compressed in this latent space.
Then, during the decoding phase, the Shannon entropy does not revert to its initial value but instead exhibits a slower increase. This behavior suggests that the network retains some of the learned structure and information from the bottleneck layer.
Something similar happens for the second measure of entropy: at the beginning, the data is unlikely to be highly clustered, since two distinct images of the same digit may be quite distant from each other.
In the inner layers, this probability increases until it reaches its maximum (rather close to 1) in the bottleneck, where the data can then be fully partitioned into clusters of radius ϵ.
As for the Shannon entropy, the information from the bottleneck layer is retained during the decoding phase, which is why the entropy remains constant for a while and then decreases back in a slower manner.
It is worth noticing that in both cases the entropy does not fully return to its initial level.
This might be attributed to the phenomenon of mode collapse, where the network fails to capture the full variability in the input data and instead produces similar outputs for different inputs, hence inducing some sort of implicit bias.
Mode collapse is often considered undesirable in generative models, as it hinders the ability to generate diverse and realistic samples.
However, in the context of understanding data structure and performing clustering, the network's capability to capture the main modes or clusters of the data can be seen as a positive aspect.
The network learns to extract salient features and represent the data in a compact and informative manner, enabling tasks such as clustering and classification.
Further investigation is needed to explore the relationship between the observed entropy patterns, mode collapse, and the overall performance of the network on different tasks.
§.§ Acknowledgments
The authors would like to thank Giuseppe Savaré for the fruitful discussions during his permanence in Munich.
This work has been funded by the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. C.C. and M.F. acknowledge also the partial support of the project “Online Firestorms And Resentment Propagation On Social Media: Dynamics, Predictability and Mitigation” of the TUM Institute for Ethics in Artificial Intelligence and of the DFG Project “Implicit Bias and Low Complexity Networks” within the DFG SPP 2298 “Theoretical Foundations of Deep Learning”. A.S. acknowledges the partial support from INdAM-GNAMPA.
§ APPENDIX
Let us consider the controlled system
ẋ = (t,x,θ), x(0)=x_0,
where :[0,T]×^n×^m→^n satisfies Assumptions <ref>, and θ∈ L^2([0,T], ^m).
Then, for every R>0 and any x_0 ∈ B_R(0), we have that x(t) ∈ B_R̅(0) for every t ∈ [0,T], where R̅ = (R + L_R (1+ θ_L^1))e^L_R(1+θ_L^1).
According to Assumption <ref>-(ii) on , the trajectories can be bounded as follows:
|x(t)|
≤ |x_0| + ∫_0^t |(s,x(s), θ(s))| ds
≤ |x_0| + L_R ∫_0^t(1+ |x(s)|)(1+ |θ(s)|) ds
for every t∈[0,T].
Using Gronwall's lemma, it follows that
|x(t)| ≤( |x_0| + L_R (1+ θ_L^1) ) e^L_R(1+ θ_L^1).
For every t ∈ [0,T], let us consider the flow mapping Φ^θ_(0,t) : ^d →^d defined in (<ref>) and driven by the control θ∈ L^2([0,T],^m).
Let us assume that the controlled dynamics :[0,T]×^d×^m→^d satisfies Assumption <ref>.
Then, for every R>0, and every x_1, x_2 ∈ B_R(0), it follows that
|Φ^θ_(0,t)(x_1) -Φ^θ_(0,t)(x_2)| ≤ e^L_R̅(1+ θ_L^1)|x_1-x_2|,
where R̅ is defined as in Lemma <ref>, and L_R̅ is prescribed by Assumption <ref>-(ii).
Let us denote with t ↦ x_1(t), t ↦ x_2(t) the solutions of (<ref>) driven by θ and starting, respectively, from x_1(0) = x_1, x_2(0) = x_2.
Then, for every t∈ [0,T], we have
|x_1(t)-x_2(t)| ≤|x_1-x_2| + ∫_0^t |(s,x_1(s), θ(s)) - (s,x_2(s), θ(s))| ds
≤ |x_1-x_2| + L_R∫_0^t (1+ |θ(s)|)|x_1(s)-x_2(s))| ds,
by using Assumption <ref>-(ii). As before, the statement follows from Gronwall's Lemma.
Under the same assumptions and notations as in Lemma <ref>, for every R>0, for every x ∈ B_R(0) and for every θ∈ L^2([0,T],^m), we have that
|Φ^θ_(0,t_2)(x) - Φ^θ_(0,t_1)(x)| ≤ L_R̅ (1+R̅)(1 + θ_L^2) |t_2-t_1|^1/2
for every 0 ≤ t_1 < t_2 ≤ T, where R̅ is defined as in Lemma <ref>, and L_R̅ is prescribed by Assumption <ref>-(ii).
Moreover, if θ∈ L^2([0,T], ^m) ∩ L^∞([0,T],^m), then, for every 0 ≤ t_1 < t_2 ≤ T, it holds:
|Φ^θ_(0,t_2)(x) - Φ^θ_(0,t_1)(x)| ≤ L_R̅ (1+R̅)(1 + θ_L^∞) |t_2-t_1|.
If we denote by t ↦ x(t) the solution of (<ref>) driven by the control θ, then
|x(t_2)-x(t_1)| ≤∫_t_1^t_2 |(s,x(s),θ(s))| ds ≤∫_t_1^t_2 L_R̅ (1+R̅) (1 + |θ(s)|) ds.
The thesis follows by using Cauchy-Schwarz for θ∈ L^2, or from basic estimates if θ∈ L^∞.
For every t∈ [0,T], let Φ^θ_1_(0,t), Φ^θ_2_(0,t): ^d →^d be the flows defined in (<ref>) and driven, respectively, by θ_1,θ_2∈ L^2([0,T],^m).
Let us assume that the controlled dynamics :[0,T]×^n×^m→^n satisfies Assumption <ref>.
Then, for every R>0 and for every x ∈ B_R(0), it holds that
|Φ^θ_1_(0,t)(x) - Φ^θ_2_(0,t)(x)| ≤ L_R̅(1+ θ_1_L^2 + θ_2_L^2) e^L_R̅(1+ θ_1_L^1)θ_1-θ_2_L^2,
where R̅ is defined as in Lemma <ref>, and L_R̅ is prescribed by Assumption <ref>-(ii).
By using Assumption <ref>-(ii),(iii) and the triangle inequality, we obtain that
|Φ^θ_1_(0,t)(x) - Φ^θ_2_(0,t)(x)|
≤∫_0^t |(s,x_1(s), θ_1(s))-(s,x_2(s), θ_2(s))| ds
≤∫_0^t |(s,x_1(s), θ_1(s))-(s,x_2(s), θ_1(s))| ds
+ ∫_0^t |(s,x_2(s), θ_1(s))-(s,x_2(s), θ_2(s))| ds
≤ L_∫_0^t (1 + θ_1(s)) |x_1(s)-x_2(s)| ds + L_(1+ θ_1_L^2 + θ_2_L^2) θ_1-θ_2_L^2 .
The statement follows again by applying Gronwall's Lemma.
Let us assume that the controlled dynamics satisfies Assumptions <ref>-<ref>.
Given an admissible control θ∈ L^2([0,T],^m) and a trajectory t↦ x (t) = Φ_(0,t)^θ(x_0) with x_0∈ B_R(0), let ξ:[0,T]→^d be the solution of the linearized problem
ξ̇(t) = ∇_x (t,x(t),θ(t))ξ(t),
ξ() = v,
where ∈ [0,T] is the instant of perturbation and v is the direction of perturbation of the trajectory.
Then, for every t ∈ (,T), it holds
|Φ_(,t)^θ(x()+ϵ v)- Φ_(,t)^θ(x()) - ϵξ(t)| ≤ C |v|^2 ϵ^2
where C is a constant depending on T,R, θ_L^2.
For t≥, let us denote with t↦ y(t) := Φ_(,t)^θ(x()+ϵ v) the solution of the modified problem, obtained by perturbing the original trajectory with ϵ v at instant .
Then, since ξ solves (<ref>), we can write
|y(t)-x(t)-ϵξ(t)| = |Φ_(,t)^θ(x()+ϵ v)- Φ_(,t)^θ(x()) - ϵξ(t)|
≤∫_^t |(s, y(s),θ(s))-(s,x(s), θ(s)) - ϵ∇_x (s,x(s),θ(s)) ξ(s)|ds
≤∫_^t |(s,y(s),θ(s))-(s,x(s),θ(s)) - ∇_x (s,x(s),θ(s))(y(s)-x(s))| ds
+ ∫_^t |∇_x (s,x(s),θ(s))||y(s)-x(s)-ϵξ(s)| ds
≤∫_^t [ ∫_0^1 |∇_x (s, x(s) + τ(y(s)-x(s)), θ(s)) - ∇_x (s,x(s),θ(s))| |y(s)-x(s)| dτ ] ds
+ ∫_^t|∇_x (s,x(s),θ(s))||y(s)-x(s)-ϵξ(s)|ds
for every t≥.
We now address the two integrals separately. Using Assumption <ref>-(iv) and the result of Lemma <ref>, we obtain the following bound
∫_^t [ ∫_0^1 |∇_x (s, x(s) + τ(y(s)-x(s)), θ(s)) - ∇_x (s,x(s),θ(s))| |y(s)-x(s)| dτ ] ds
≤∫_^t L_(1+ |θ(s)|^2)1/2|y(s)-x(s)|^2 ds
≤1/2
L_(1+θ_L^2^2)e^2L_(1+ θ_L^1)|ϵ v|^2
Similarly, for the second integral, owing to Assumption <ref>-(iv), we can compute:
∫_^t|∇_x (s,x(s),θ(s))||y(s)-x(s)-ϵξ(s)|ds ≤∫_^t L_R̅(1+ |θ(s)|^2)(1+R̅)|y(s)-x(s)-ϵξ(s)| ds
Finally, by combining the two results together and using Gronwall's Lemma, we prove the statement.
Consider the solution ξ of the linearized problem
ξ̇(t) = ∇_x (t,x^θ(t),θ(t))ξ(t) + ∇_θ(t,x^θ(t), θ(t)) ν(t)
ξ(0) = 0
where the control θ is perturbed at the initial time with θ + ϵν, when starting with an initial datum x_0 ∈ B_R(0). Then,
|Φ_(0,t)^θ+ ϵν(x_0)- Φ_(0,t)^θ(x_0) - ϵξ(t)| ≤ C ||ν||_L^2^2 ϵ^2
where C is a constant depending on T,, L_, θ_L^1. Moreover, we have that for every t∈[0,T]
ξ(t) = ∫_0^t ℛ^θ_(s,t)(x_0)·∇_θ(s,x^θ(s), θ(s)) ν(s) ds,
where ℛ^θ_(s,t)(x_0) has been defined in (<ref>).
We first observe that the dynamics in (<ref>) is affine in the ξ variable. Moreover, Assumptions <ref>-<ref> guarantee that the coefficients are L^1-regular in time. Hence, from the classical Carathéodory Theorem we deduce the existence and the uniqueness of the solution of (<ref>). Finally, the identity (<ref>) follows as a classical application of the resolvent map (ℛ^θ_(s,t)(x_0))_s,t ∈ [0,T] (see, e.g., <cit.>).
Let us denote with t↦ x(t) and t↦ y(t) the solutions of Cauchy problem (<ref>) corresponding, respectively, to the admissible controls θ and θ+ϵν. We observe that, in virtue of Lemma <ref>, we have that there exists R̅>0 such that x(t),y(t)∈ B_R̅(0) for every t∈ [0,T].
Then, recalling the definition of the flow map provided in (<ref>), we compute
|y(t)-x(t)-ϵξ(t)| = |Φ_(0,t)^θ+ ϵν(x_0)- Φ_(0,t)^θ(x_0) - ϵξ(t)|
≤∫_0^t | (s,y(s),θ(s)+ ϵν(s))- (s,x(s), θ(s)) -ϵξ̇(s)| ds
≤∫_0^t |(s,y(s),θ(s)+ ϵν(s))- (s,x(s), θ(s) + ϵν(s))
-ϵ∇_x (s,x(s),θ(s)+ϵν(s))·(y(s)-x(s))| ds
+ ∫_0^t|(s,x(s),θ(s)+ϵν(s)) - (s,x(s),θ(s))-ϵ∇_θ(s,x(s),θ(s)) ·ν(s)| ds
+ ∫_0^t |∇_x (s,x(s),θ(s)+ ϵν(s))-∇_x (s,x(s),θ(s))||y(s)-x(s)|ds
+ ∫_0^t |∇_x(s,x(s),θ(s))||y(s)-x(s)-ϵξ(s)| ds.
We now handle each term separately:
∫_0^t |(s,y(s),θ(s)+ ϵν(s))- (s,x(s), θ(s) + ϵν(s)) -ϵ∇_x (s,x(s),θ(s)+ϵν(s))(y(s)-x(s))| ds
≤∫_0^t [ ∫_0^1L_(1+ |θ(s)+ϵν(s)|^2) τ |y(s)-x(s)|^2dτ] ds
≤ L_^3 (1+ θ_L^2+ϵν_L^2)^4e^2L_(1+θ_L^1)ν^2_L^2ϵ^2
where we used Assumption <ref>-(iv) and Lemma <ref>. By using Assumption <ref>-(v), we obtain the following bounds for the second integral:
∫_0^t|(s,x(s),θ(s)+ϵν(s)) - (s,x(s),θ(s))-∇_θ(s,x(s),θ(s)) ·ϵν(s)| ds
≤∫_0^t [∫_0^1 L_ |ν(s)|^2ϵ^2τ dτ]ds
= 1/2 L_ν_L^2^2ϵ^2.
Similarly, the third integral can be bounded by using Assumption <ref>-(vi) and Lemma <ref>, and it yields
∫_0^t |∇_x (s,x(s),θ(s)+ ϵν(s))-∇_x (s,x(s),θ(s))||y(s)-x(s)| ds
≤∫_0^t L_(1+ |θ(s)|+ϵ|ν(s)|)ϵ|y(s)-x(s)||ν(s)| ds
≤ L_^2(1+θ_L^2+ϵν_L^2)^2 e^L_(1+θ_L^1)ν_L^2^2ϵ^2.
Finally, the fourth integral can be bounded using Assumption <ref>-(iv) as follows:
∫_0^t |∇_x(s,x(s),θ(s))||y(s)-x(s)-ϵξ(s)| ds ≤∫_0^t L_R̅(1+R̅)(1+|θ(s)|^2)|y(s)-x(s)-ϵξ(s)| ds.
Hence, by combining (<ref>), (<ref>), (<ref>) and (<ref>), the thesis follows from Gronwall Lemma.
Let us assume that the controlled dynamics satisfies Assumptions <ref>-<ref>.
Given an admissible control θ∈ L^2([0,T],^m) and a trajectory t↦ x (t) = Φ_(0,t)^θ(x) with x∈ B_R(0), for every τ∈ [0,T] the resolvent map ℛ^θ_(τ,·)(x):[0,T]→^d× d is the curve
s ↦ℛ^θ_(τ,s)(x) that solves
d/ds ℛ^θ_(τ,s)(x) = ∇_x (s, Φ_(0,s)^θ(x),θ(s)) ·ℛ_(τ,s)^θ(x) for s∈[0,T], with ℛ_(τ,τ)^θ(x) = Id.
Then for every τ,s∈ [0,T], there exists a constant C_1 depending on T,R, θ_L^2 such that
|ℛ^θ_(τ,s)(x)| := sup_v≠ 0|ℛ^θ_(τ,s)(x)· v|/|v|≤ C_1.
Moreover, for every x,y∈ B_R(0), there exists a constant C_2 depending on T,R, θ_L^2 such that
|ℛ^θ_(τ,s)(x) - ℛ^θ_(τ,s)(y)| := sup_v≠ 0|ℛ^θ_(τ,s)(x)· v - ℛ^θ_(τ,s)(y)· v|/|v|≤ C_2|x-y|.
Finally, if θ_1,θ_2 satisfy θ_1 ,θ_2 ≤ρ, then there exists a constant C_3 depending on T,R,ρ such that
|ℛ^θ_1_(τ,s)(x)-ℛ^θ_2_(τ,s)(x)| := sup_v≠ 0|ℛ^θ_1_(τ,s)(x)· v -ℛ^θ_2_(τ,s)(x)· v|/|v|≤ C_3 θ_1-θ_2_L^2.
We first prove the boundedness of the resolvent map.
Let us fix v∈^d with v≠ 0, and let us define ξ(s):=ℛ^θ_(τ,s)(x)· v for every s∈ [0,T].
Then, in virtue of Assumption <ref>-(vi), we have:
|ξ(s)| ≤ |ξ(τ)| + ∫_τ^s |∇_x (σ,Φ_(0,σ)^θ(x), θ(σ))||ξ(σ)| dσ≤ |v|+ L_R̅∫_τ^s (1+ θ(σ)^2)|ξ(σ)| dσ,
and, by Gronwall's Lemma, we deduce (<ref>).
Similarly as before, given x,y∈ B_R(0) and v≠ 0, let us define ξ^x(s):=ℛ^θ_(τ,s)(x)· v and ξ^y(s):=ℛ^θ_(τ,s)(y)· v for every s∈ [0,T].
Then, we have that
|ξ^x(s) - ξ^y(s)| ≤∫_τ^s |∇_x (σ,Φ_(0,σ)^θ(x), θ(σ)) ξ^x(σ) - ∇_x (σ,Φ_(0,σ)^θ(y), θ(σ)) ξ^y(σ)| dσ
≤∫_τ^s |∇_x (σ,Φ_(0,σ)^θ(x), θ(σ)) - ∇_x (σ,Φ_(0,σ)^θ(y), θ(σ))| |ξ^y(σ)| dσ + ∫_τ^s |∇_x (σ,Φ_(0,σ)^θ(x), θ(σ))| |ξ^x(σ)- ξ^y(σ)| dσ
≤ C_1 |v| ∫_τ^s L_R̅(1 + θ(σ)^2 ) |Φ_(0,σ)^θ(x) - Φ_(0,σ)^θ(y)| dσ + ∫_τ^s L_R̅(1 + θ(σ)^2 ) |ξ^x(σ)- ξ^y(σ)| dσ,
where we used (<ref>) and Assumption <ref>-(iv). Hence, combining Lemma <ref> with Gronwall's Lemma, we deduce (<ref>).
Finally, we prove the dependence of the resolvent map on different controls θ_1, θ_2 ∈ L^2([0,T];^m). Given x∈ B_R(0) and v≠ 0, let us define ξ^θ_1(s):=ℛ^θ_1_(τ,s)(x)· v and ξ^θ_2(s):=ℛ^θ_2_(τ,s)(x)· v for every s∈ [0,T]. Then, we compute
|ξ^θ_1(s)-ξ^θ_2(s)| ≤∫_τ^s |∇_x (σ,Φ_(0,σ)^θ_1(x), θ_1(σ)) ξ^θ_1(σ) - ∇_x (σ,Φ_(0,σ)^θ_2(x), θ_2(σ)) ξ^θ_2(σ)| dσ
≤∫_τ^s |∇_x (σ,Φ_(0,σ)^θ_1(x), θ_1(σ)) - ∇_x (σ,Φ_(0,σ)^θ_2(x), θ_2(σ))| |ξ^θ_1(σ)| dσ + ∫_τ^s |∇_x (σ,Φ_(0,σ)^θ_2(x), θ_2(σ))| |ξ^θ_1(σ)- ξ^θ_2(σ)| dσ
≤ C_1 |v| ∫_τ^s L_R̅(1 + θ_1(σ)^2 ) |Φ_(0,σ)^θ_1(x) - Φ_(0,σ)^θ_2(x)| dσ + C_1 |v| ∫_τ^s L_R̅(1+ |θ_1(σ)|+ |θ_2(σ)|) |θ_1(σ)- θ_2(σ)| dσ + ∫_τ^s L_R̅(1 + θ_2(σ)^2 ) |ξ^θ_1(σ)- ξ^θ_2(σ)| dσ,
where we used Assumption <ref>-(iv)-(vi).
|
http://arxiv.org/abs/2307.02748v1 | 20230706031636 | Dynamic Multi-time Scale User Admission and Resource Allocation for Semantic Extraction in MEC Systems | [
"Yuanpeng Zheng",
"Tiankui Zhang",
"Jonathan Loo"
] | cs.NI | [
"cs.NI",
"eess.SP"
] |
Dynamic Multi-time Scale User Admission and Resource Allocation for Semantic Extraction in MEC Systems
Yuanpeng Zheng, Student Member, IEEE, Tiankui Zhang, Senior Member, IEEE, Jonathan Loo
This work was supported by National Natural Science Foundation of China under Grants 61971060.
(Corresponding author: Tiankui Zhang)
Yuanpeng Zheng, Tiankui Zhang are with the
School of Information and Communication Engineering,
Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {zhengyuanpeng, zhangtiankui}@bupt.edu.cn).
Jonathan Loo is with the School of Computing and Engineering, University of West London, London W5 5RF, U.K. (e-mail: [email protected]).
August 1, 2023
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper investigates the semantic extraction task-oriented dynamic multi-time scale user admission and resource allocation in mobile edge computing (MEC) systems. Amid the prevalence of artificial intelligence applications in various industries, the offloading of semantic extraction tasks, which are mainly composed of convolutional neural networks for computer vision, is a great challenge for communication bandwidth and computing capacity allocation in MEC systems. Considering the stochastic nature of the semantic extraction tasks, we formulate a stochastic optimization problem by modeling it as the dynamic arrival of tasks in the temporal domain. We jointly optimize the system revenue and cost, which are represented by user admission in the long term and resource allocation in the short term, respectively. To handle the proposed stochastic optimization problem, we decompose it into short-time-scale subproblems and a long-time-scale subproblem by using the Lyapunov optimization technique. After that, the short-time-scale optimization variables of resource allocation, including user association, bandwidth allocation, and computing capacity allocation, are obtained in closed form. The user admission optimization on the long time scale is solved by a heuristic iteration method. Then, the multi-time scale user admission and resource allocation algorithm is proposed for dynamic semantic extraction task computing in MEC systems. Simulation results demonstrate that, compared with the benchmarks, the proposed algorithm improves the performance of user admission and resource allocation efficiently and achieves a flexible trade-off between system revenue and cost at multiple time scales while considering semantic extraction tasks.
Semantic extraction task, resource allocation, MEC, dynamic optimization.
§ INTRODUCTION
In recent years, mobile edge computing (MEC), which supports not only computing but also communications and storage, has become a key technology to solve many related problems with specific requirements<cit.>. By being closer to the network edge than traditional cloud computing systems, MEC can significantly improve the quality of user experience, including the optimization of delay and energy consumption<cit.>. Devices can significantly reduce their response times and energy consumption by offloading computing tasks to the nearby edge network; hence, the resource capacity and scheduling of MEC systems become a very important issue, especially in the context of the increasing number of intelligent tasks. The rapid development of network edge applications such as the Internet of Things (IoT) indicates changing service requirements and a growing diversity of tasks; nevertheless, few existing works consider the various performance requirements of these dynamic applications<cit.> and the characteristics of computing tasks<cit.>. Obviously, how to efficiently allocate resources to support the dynamic demand of services is still an unaddressed problem.
Hence, in the context of massive IoT device deployment, limited terminal computing and battery capacity, and increasingly complex computing tasks, existing works on resource allocation in MEC systems have become specific and multidimensional<cit.>. S. Zarandi et al.<cit.> investigated a way of combining MEC and network slicing, and proposed the optimization of the weighted sum of the difference between the observed delay and the delay requirement. By considering edge users and large data volumes, a power consumption and delay optimization problem in unmanned aerial vehicle (UAV) assisted MEC systems was addressed by G. Faraci et al.<cit.>. As a key scenario, an efficient method of MEC and network slice integration deployed on the IoT platform was proposed by J. Y. Hwang et al.<cit.> to maximize the effect of delay reduction and traffic prioritization. X. Cao et al.<cit.> introduced a new MEC setup where a UAV was served by cellular ground base stations for computation offloading, aiming to minimize the UAV's task completion time under computing capacity constraints. Considering computing tasks, the MEC technique combined with network slicing and non-orthogonal multiple access was leveraged by M. A. Hossain et al.<cit.> to minimize the total latency of the computing tasks with energy constraints. T. Zhang et al. <cit.> considered a UAV equipped with an MEC server deployed to serve a number of Internet of Things terminal devices in a finite period, aiming to minimize the total energy consumption, including communication-related energy, computation-related energy and the UAV's flight energy, by optimizing the bit allocation. J. Feng et al. <cit.> proposed a heterogeneous computation and resource allocation framework based on a heterogeneous mobile architecture to achieve an effective implementation of federated learning. Obviously, the above works do not consider the influence of the stochastic nature and specific computing capacity consumption of computing tasks on user admission and resource allocation. The research on specific tasks has become important in MEC systems considering communication bandwidth and computing capacity allocation. Ignoring these characteristics will result in errors in real scenarios, and hence the delay requirements cannot be satisfied well.
The rise of semantic communication research in recent years has brought more requirements to the MEC field. Meanwhile, it introduces more scenarios with specific tasks in MEC systems, and a few studies on the quantification of the computational complexity of intelligent tasks have been discussed<cit.>. Some works proposed practical schemes to deploy semantic communication in MEC systems. H. Xie et al.<cit.> proposed a brand new framework of semantic communication where a deep learning based semantic communication system for text transmission, combining deep learning, natural language processing and semantic layer communication, was constructed. After that, H. Xie et al.<cit.> considered a semantic communication system constructed between the edge and IoT devices, where MEC servers trained and updated the deep learning based semantic communication model, and the IoT devices collected and transmitted data based on the trained model. H. Qi et al.<cit.> investigated a model named PALEO which was applied to analyze the performance of deep neural networks (DNNs). Nevertheless, D. Justus et al.<cit.> indicated that the computational complexity predicted by PALEO was not accurate because of many other influencing factors, and proposed an alternative strategy which predicted execution time by training a deep learning network including network features and hardware features. An approximation strategy for the optimization of DNN training was proposed by D. Bienstock et al.<cit.>, which modelled the DNN as a directed graph to control the approximation error of the computational complexity. M. Bianchini et al.<cit.> proposed a new approach to study how the depth of feedforward neural networks impacts their ability to implement high-complexity functions and indicated how the complexity depends on the number of hidden units and the activation function used. It can be seen that the resource allocation problem for semantic communication, including the communication bandwidth and the computing capacity with the dynamic arrival of tasks in MEC systems, has not been fully studied, and it is difficult but important to characterize the complexity of intelligent computing tasks.
There are still some studies considering the dynamics of mobile networks and the stochastic nature of computing tasks<cit.>. F. Guo et al.<cit.> designed a framework where the service requirements of some IoT applications were changing. In <cit.>, Y. Xiao et al. considered the dynamics of fog computing networks to maximize the utilization efficiency of available resources while balancing the workloads among fog nodes. The real-time dynamics of network resource requests were discussed by N. Van Huynh et al. in <cit.>, who obtained the optimal resource allocation policy under dynamically varying request frequencies. J. Feng et al.<cit.> considered the stochastic nature of tasks and proposed an architecture that maximized the revenue of network providers in MEC systems, where a multi-time scale scheme was adopted to increase revenue on the basis of QoS guarantees. However, it is necessary to integrate the stochastic nature of tasks and the quantification of the complexity of computing tasks in MEC systems. Semantic extraction tasks, which are mainly composed of convolutional neural networks (CNNs) for computer vision, are gradually becoming the mainstream workload in dynamic resource allocation. In conclusion, according to the above works, modelling the dynamics and computing capacity requirements of semantic extraction tasks in MEC systems has not been considered yet.
§.§ Motivation and Contribution
As mentioned above, the combination of dynamic multi-time-scale admission and resource allocation in MEC systems with specific computing tasks, i.e., semantic extraction tasks, is still an unaddressed research area, which motivates this contribution. In this paper, we formulate a stochastic optimization problem by modelling it as the dynamic arrival of tasks in the temporal domain, considering the stochastic nature of the semantic extraction tasks. In order to investigate the dynamic arrival and stochastic nature of tasks, we adopt a multi-time-scale framework to represent traffic variations. Based on the dynamic model, we optimize the average utility over time, which consists of the system revenue and cost and depends on user admission in the long term and resource allocation in the short term. We also model the computing characteristics of semantic extraction tasks as a formula based on the structure of the CNN. The primary contributions of this paper are as follows:
* We formulate a stochastic optimization problem for dynamic user admission and resource allocation considering the stochastic nature of semantic extraction tasks in MEC systems. We set up a queue model to represent the dynamics of semantic extraction tasks and define the operator's utility, which consists of the long-time-scale revenue depending on the number of users and the short-time-scale cost depending on power consumption, in order to achieve continuous revenue in the temporal domain with as little cost as possible at each time slot. For this study, we adopt a formula based on the structure of the CNN to quantify the relationship between the input data and the computational complexity of semantic extraction tasks.
* We solve the highly coupled problem without any prior knowledge of traffic distributions or channel information with the assistance of Lyapunov optimization, maximizing the number of admitted users while minimizing power consumption.
We decouple the multiple optimization variables along the time-scale dimension and propose a multi-time-scale user admission and resource allocation algorithm for semantic extraction tasks, where the dynamic user admission subproblem operates in the long term, while the user association, bandwidth allocation, and computing capacity allocation subproblems operate in the short term. The long-term dynamic user admission subproblem is settled by a heuristic iteration method, and the short-term resource allocation subproblems are solved in closed form.
* We present simulation results which verify that our framework is applicable to semantic scenarios in MEC systems and that the proposed algorithm is effective for solving the multi-time-scale problem. It is shown that, compared with the benchmarks, the proposed algorithm efficiently improves the performance of user admission and resource allocation and achieves a flexible trade-off between system revenue and cost across time scales while accounting for semantic extraction tasks.
§.§ Organization
The rest of this paper is organized as follows. In Section II, we introduce system model and problem formulation. In Section III, we decompose the coupling problem into resource allocation in the short-time slot and user admission in the long-time slot. The performance of the proposed algorithm is evaluated by the simulation in Section IV, which is followed by our conclusions in Section V.
§ STOCHASTIC OPTIMIZATION PROBLEM FORMULATION
We consider a fog radio access network (F-RAN) built on MEC systems, in which the communication and computing between terminals and the MEC serve specific semantic extraction tasks, as shown in Fig. 1. MEC servers are deployed on small base stations (SBSs) to form the MEC systems, whose set is denoted by K^S = {1, ..., k, ..., K}. To dynamically allocate resources in the temporal domain and meet the demands of multiple task slices, we design two types of time slots based on a time-slotted system: a long time slot (LTS) and a short time slot (STS). In this paper, the system contains multiple LTSs dedicated to user admission, and the length of an LTS is T. We assume that each LTS contains p STSs dedicated to resource allocation, and the length of an STS is τ, i.e., T = pτ.
At LTS l, we denote the set of users by U^S = {1, ..., u, ..., U}, and the set of specific tasks by M^S = {1, ..., m, ..., M}.
At the beginning of each LTS the network operator decides user admission, and at the beginning of each STS the resource allocation policies are given. Let the admission control variable of user u accessing the MEC systems be y_u(l)∈{0,1}, where y_u(l) = 1 denotes that user u is admitted by the MEC systems and y_u(l) = 0 means the opposite. The multi-time-scale system will be discussed in detail in the following subsections. Let the bandwidth resource of each SBS be W_k and the computing capacity of each MEC server be F_k. The delay limit for semantic extraction task m is set to t̃_m.
§.§ Communication Model
In our system, we adopt a convenient communication model<cit.> that can easily be replaced by other general models without changing our design. At STS t, the indicator variable for user u accessing SBS k is denoted by x_uk(t)∈{0,1}. Assuming that a user can only access one SBS within a short time slot, the uplink transmission rate of user u accessing SBS k is given by
r_uk(t) = w_uk(t)log_2(1+p_ug_uk(t)/I_uk(t)+σ^2 ),
where w_uk(t) is uplink bandwidth resource obtained by user u, g_uk(t) is channel gain between SBS k and user u, p_u is the transmit power from user u to SBS k, and I_uk(t) is the co-channel interference from users connected to other SBS, i.e. I_uk=∑_i∈ K^S,i≠ kg_ui(t)p_u. σ^2 is the noise power. We denote the raw data that user u connected to SBS k collects as a_u(t), such as pictures of the industrial environment that need to be semantically segmented and given instructions. The raw data is transmitted to SBS through the uplink channel. Therefore the transmission delay of the raw data collected by user u in the wireless link is
t^comm_u(t)=a_u(t)/r_u(t),
where r_u(t)= ∑_k=1^Kx_uk(t) r_uk(t).
In our model, we let A_u(t) denote the random arrival process of tasks of user u in each STS t. For processing convenience, we assume that A_u(t) is independently and identically distributed between STSs and 𝔼{A_u(t)}=λ for all STSs. Let _𝐈(t) denote the amount of tasks in the current queue that need to be unloaded. The dynamics of the task offloading queue is given by
_𝐈(t+1)= max{_𝐈(t)-τy(l)·r(t) ,0} + y(l)·A(t),
where the max operator indicates that queue accumulation in _𝐈 occurs only when the queue arrival exceeds the queue departure (otherwise the remaining backlog term is 0), and y(l), r(t) and A(t) are the vector representations of y_u(l), r_u(t) and A_u(t).
At STS t, user u transmits its collected raw data a_u(t), such as captured images, to the SBS, where a specific semantic extraction task m generates the extracted semantic feature data. The MEC then feeds the semantic data back to the user via the downlink channel for further operation. In the image semantic segmentation scenario considered here, the feature data is extremely small compared to the raw data a_u(t), so the downlink transmission delay can be neglected.
§.§ Semantic Extraction Task
The design of MEC systems oriented toward semantic extraction tasks becomes increasingly important as intelligent tasks become mainstream, especially for lightweight semantic communication networks combined with the IoT<cit.>. In this paper, we consider image recognition applications of the industrial Internet in which CNN-based image semantic extraction algorithms are mainly used. The computational complexity of those applications depends not only on the input raw data but also on the CNN itself. To this extent, semantic extraction tasks in MEC systems need to be treated separately from general tasks.
In our system, we design a computing model for semantic extraction tasks that is specific to CNNs. The computing resource required by a CNN is determined by the amount of input data and the model parameters of the convolutional layers, and the network model parameters are task-specific<cit.>. We denote the model parameter of task m as n_m, whose specific value is determined by the number of filters of the CNN. For ease of representation, the basic computing model for semantic extraction tasks at STS t is expressed as
F_um(a_u(t))= n_m a_u(t) + log(a_u(t)/3N) (n_ma_u(t)/N+a_u(t)+n_ma_u(t)/3).
The above equation approximates the mapping from the raw data volume (Byte) to the required computing resource (Gigacycle), where the constant 3 is the number of image channels and N is the number of input feature maps.
Therefore, we set the required computing resource of user u as
F_u(a_u(t)) = ∑_m=1^M z_um(t)F_um(a_u(t)),
which means the computing resource needed to process the task of user u.
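For illustration, the computing model above can be evaluated with a few lines of Python. This is only a sketch: the natural logarithm is assumed for the log term, and the values of n_m, N and the raw data volume below are placeholders rather than parameters taken from our simulation.

```python
import numpy as np

def f_um(a_u, n_m, N=64):
    # Approximate computing demand (Gigacycles) of semantic task m for raw data a_u (Bytes);
    # n_m is the task-specific model parameter (number of filters), N the number of input feature maps.
    return n_m * a_u + np.log(a_u / (3 * N)) * (n_m * a_u / N + a_u + n_m * a_u / 3)

def f_u(a_u, z_um, n):
    # Aggregate demand of user u: z_um is a one-hot task indicator, n collects the n_m values.
    return sum(z * f_um(a_u, n_m) for z, n_m in zip(z_um, n))

# Example: a 2 MB image whose request is task 2 out of three task types.
print(f_u(a_u=2e6, z_um=[0, 1, 0], n=[0.5, 1.0, 1.5]))
```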
§.§ Computing Model
We propose a complete system-level computing model for semantic extraction tasks. Let the indicator variable for task m of user u at STS t be z_um(t)∈{0,1}, where z_um(t) = 1 means the task of user u is of type m and z_um(t)=0 means the opposite. z_um(t) indicates the user's request and satisfies ∑_m=1^M z_um(t)=1. We denote the computing capacity allocated by SBS k to user u as f_uk(t). The computing latency incurred by user u when performing tasks on the SBS is given by
t^comp_u(t)=F_u(a_u(t))/f_u(t),
where f_u(t) = ∑_k=1^Kx_uk(t)f_uk(t) (Gigacycle/s), which represents the computing capacity that MEC systems allocate to user u to process offloading tasks.
At STS t, the processing of computing tasks also requires consideration of the latency generated by the bus transfer of data between the hardware within the system<cit.>, which is denoted as
t^bus_u(t)=a_u(t)/B_bus,
where B_bus represents the bus bandwidth of the hardware devices within the SBS. Therefore, the total delay for user u to access SBS k to complete the task processing is given by
t_u(t)=t^comm_u(t)+t^comp_u(t)+t^bus_u(t).
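The resulting end-to-end delay is a simple sum of the three components; the helper below is a sketch that assumes consistent units for the data volume, the uplink rate, and the bus bandwidth.

```python
def total_delay(a_u, F_u, r_u, f_u, B_bus):
    # t_u = t_comm + t_comp + t_bus.
    # a_u: raw data volume, r_u: uplink rate, B_bus: bus bandwidth (consistent data units per second);
    # F_u: required computing resource (Gigacycles), f_u: allocated capacity (Gigacycle/s).
    return a_u / r_u + F_u / f_u + a_u / B_bus
```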
We consider the computational power consumption of SBS k for handling the task offloaded by user u, which is expressed as
P_uk(t) = 𝒦_escf_uk^3(t),
where 𝒦_esc is the effective switched capacitance of the MEC server. Then the total power consumption of system is
P(t) = <x(t), 𝒫(t)>,
where x(t) and 𝒫(t) are the matrix representations of x_uk(t) and P_uk(t) and <·,·> represents matrix inner product.
In our model, we consider the transmission between multiple hardware components connected by a bus inside the MEC server. This type of transmission can also affect the task queue, so we model its impact as a bus transfer queue. The bus transfer queue follows the task offloading queue and is independent of it. Let _𝐈𝐈(t) denote the amount of tasks currently in bus transfer; the dynamics of the bus transfer queue is expressed as
_𝐈𝐈(t+1)= max{_𝐈𝐈(t)-∑_u=1^U y_u(l)B_busτ ,0} +
min{y(l)·r(t),_𝐈(t)},
where the max operator indicates that queue accumulation in _𝐈𝐈 occurs only when the queue arrival exceeds the queue departure (otherwise the remaining backlog term is 0), the min operator captures the effect of the arrivals and the backlog of _𝐈 on the accumulation of _𝐈𝐈 in the tandem queue, and f(t) is the vector representation of f_u(t).
Thereafter computing tasks are offloaded to the MEC for processing. Assuming that there is sufficient cache in MEC systems to store offloaded but unprocessed tasks, the dynamics of the computational processing queue is given by
Φ(t+1)= max{Φ(t)-y(l)·f(t) ,0}+
min{∑_u F_u(y_u(l)B_bus)(t),∑_uF_u(_𝐈𝐈(t))},
where the max operator indicates that queue accumulation in Φ occurs only when the queue arrival exceeds the queue departure (otherwise the remaining backlog term is 0), and the min operator captures the effect of the arrivals and the backlog of _𝐈𝐈 on the accumulation of Φ in the tandem queue.
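As a sketch, one STS of the three tandem queues can be advanced as follows. The per-user arrival samples and the mapping F_of from data volume to Gigacycles are placeholders, and a single task-type mapping is assumed for brevity.

```python
import numpy as np

def step_queues(Q1, Q2, Phi, y, r, A, f, tau, B_bus, F_of):
    # One-STS update of the tandem queues (aggregated over users).
    # y: admission vector (0/1), r: uplink rates, A: arrivals, f: allocated capacities,
    # F_of: callable mapping a data volume to required Gigacycles.
    dep1 = tau * np.dot(y, r)                           # data leaving the offloading queue
    Q1_next = max(Q1 - dep1, 0.0) + np.dot(y, A)        # offloading queue update
    dep2 = np.sum(y) * B_bus * tau                      # data moved over the internal bus
    Q2_next = max(Q2 - dep2, 0.0) + min(dep1, Q1)       # bus queue fed by the first queue
    dep3 = np.dot(y, f)                                 # Gigacycles processed at the MEC
    arr3 = min(sum(F_of(y_u * B_bus) for y_u in y), F_of(Q2))
    Phi_next = max(Phi - dep3, 0.0) + arr3              # computing queue update
    return Q1_next, Q2_next, Phi_next
```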
§.§ Utility Model and Problem Formulation
In this paper, we consider the trade-off between the revenue and the cost of the system, where the revenue depends on the number of admitted users weighted by an admission control weighting parameter, and the cost depends on the computational energy consumption. The admission control weighting parameter, determined by the importance of each user to the revenue, is denoted by v_u and given by
v_u(l) = ∑_t=pl^p(l+1)-1∑_m=1^M z_um(t) t̃_m/T,
which can be interpreted as an importance distribution over users at LTS l determined by the average task delay limit. The revenue is then expressed as
G_L(l) = y(l) ·v(l),
where v(l) is the vector representation of v_u(l); G_L(l) reflects that the contribution of each admitted user to the revenue is weighted by v_u. The computing energy cost of the system over the long time slot T is
G_S(l)= ∑_t=pl^p(l+1)-1P(t).
In that case the system utility is expressed as
G(l) = G_L(l)- η G_S(l),
Remark 1. From (<ref>) and (<ref>), we notice that η is the parameter that adjusts the relative weights of revenue and cost. Hence, different values of η affect the trade-off between the revenue at the LTS and the cost at the STS and help stabilize the system utility. The comparison algorithms do not have this ability to balance revenue and cost because they treat admission and resource allocation differently. Therefore, our proposed algorithm performs well under different values of η in (<ref>).
Furthermore, we denote the average utility as
G = lim_Z →∞1/Z∑_l=0^Z-1𝔼{G(l)},
where G represents the average utility of the system over all time slots and is used to construct the Lyapunov stochastic optimization problem in the following.
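The utility bookkeeping over one LTS can be summarized in a short routine; the task indicators, delay limits and power trace passed in are illustrative inputs, not values prescribed by the model.

```python
import numpy as np

def lts_utility(y, z, t_lim, P_trace, eta, T):
    # G(l) = G_L(l) - eta * G_S(l) for one LTS of length T containing len(P_trace) STSs.
    # y: admission vector, z: (p, U, M) task indicators, t_lim: per-task delay limits, P_trace: per-STS power.
    v = np.einsum('tum,m->u', z, t_lim) / T   # admission weights v_u(l)
    return np.dot(y, v) - eta * np.sum(P_trace)
```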
According to our model above, we investigate the operator's utility maximization problem in MEC systems by jointly controlling system admission y(l), user association x(t), bandwidth allocation w(t) and computing capacity allocation f(t). In particular, we formulate it as the following stochastic optimization problem.
max_y(l),x(t),w(t),f(t)G
s.t. (C1): y_u(l)∈{0,1},∀ u,l,
(C2): x_uk(t)∈{0,1},∀ u,k,t,
(C3):∑_k=1^K x_uk⩽ 1,∀ u,t,
(C4): y_u(l)t_u(t) ⩽∑_m=1^M z_um(t)t̃_m,∀ u,t,
(C5): ∑_u=1^U x_uk(t)w_uk(t) ⩽ W_k,∀ k,t,
(C6): ∑_u=1^U x_uk(t)f_uk(t) ⩽ F_k,∀ k,t,
(C7): Q_I < ∞, Q_II<∞, Φ<∞,∀ t.
In (<ref>), (C2) and (C3) indicate that a user can only access one SBS. (C4) imposes the delay requirements of the tasks. (C5) and (C6) denote the bandwidth and computing capacity limits of each SBS. (C7) requires that, for all data queues and processing queues, the departure rate is greater than or equal to the arrival rate, i.e., mean rate stability<cit.>.
§ PROBLEM SOLUTION AND ALGORITHM DESIGN
Since our optimization problem (<ref>) is stochastic, complex in the temporal domain, and mixed across multiple time scales, we solve it by decomposing the two-time-scale problem into many single-time-slot subproblems with the help of the Lyapunov framework, as illustrated in Fig. 2. In addition, an algorithm for the user admission problem is designed. We then show that the proposed algorithm is capable of achieving the revenue-cost trade-off in MEC systems.
§.§ The Lyapunov Optimization-Based Algorithm
We propose an algorithm based on Lyapunov optimization and substitute the queue models introduced in Section II into the Lyapunov framework. Let Θ(t) =[_𝐈(t),_𝐈𝐈(t),Φ(t)] be a concatenated vector, and we define the Lyapunov function as
L(Θ(t))=1/2[_𝐈^2(t)+_𝐈𝐈^2(t)+Φ^2(t)].
Then the LTS conditional Lyapunov drift Δ_T(Θ(l)) is given by
Δ_T(Θ(l))=𝔼[L(Θ(l+T)-Θ(l))|Θ(l)],
where Θ(l)={_𝐈(t),_𝐈𝐈(t),Φ(t),t∈ [l,l+T-1]}. Then, the drift-plus-penalty expression of (<ref>) is expressed as
Δ_T(Θ(l))-V𝔼{G(l)|Θ(l)}.
Remark 2. From (<ref>), we notice that the control parameter V>0, which comes from the normalized form of Lyapunov optimization, determines the weight of the penalty in the drift-plus-penalty expression. In our proposed algorithm, a larger V increases the weight of the penalty, i.e., the system utility, and leads to an increase in the final optimization value. Hence, adjusting V balances the importance of queue stability against system utility and allows us to obtain the desired result.
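For concreteness, the quadratic Lyapunov function and the empirical drift-plus-penalty value can be computed as below; the numbers in the example call are arbitrary.

```python
def lyapunov(Q1, Q2, Phi):
    # L(Theta) = 1/2 (Q_I^2 + Q_II^2 + Phi^2)
    return 0.5 * (Q1 ** 2 + Q2 ** 2 + Phi ** 2)

def drift_plus_penalty(theta_now, theta_next, G_l, V):
    # Empirical LTS drift minus the weighted utility, i.e. the quantity whose upper bound is minimized.
    return lyapunov(*theta_next) - lyapunov(*theta_now) - V * G_l

print(drift_plus_penalty((5.0, 2.0, 1.0), (4.0, 2.5, 0.5), G_l=3.2, V=10.0))
```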
We derive the following theorem to provide an upper bound on the above drift-plus-penalty expression.
Theorem 1: Suppose G(t) is i.i.d. over slots. For arbitrary y(l), x(t), w(t), f(t), all parameters V > 0, and all possible values of Θ(l), Δ_T(Θ(l))-V𝔼{G(l)|Θ(l)} is upper bounded by (<ref>), where C meets (<ref>).
Proof: Please refer to Appendix A.
Stochastic optimization theory indicates that a stochastic optimization problem can be solved by minimizing the upper bound of its drift-plus-penalty expression subject to the same constraints except the stability constraint <cit.>. Therefore, we need to minimize the right-hand side of (<ref>) to solve (<ref>) subject to (C1)-(C6), because (C7) is a stability constraint. In this way, the original multi-time-scale optimization problem for the long-term revenue is equivalently transformed into an optimization of the revenue over multiple LTSs. We then obtain the optimization problem expressed by (<ref>).
Δ_T(Θ(l))-V𝔼{G(l)|Θ(l)}⩽ C - ∑_t=pl^p(l+1)-1_𝐈(t) 𝔼{[τy(l)·r(t) - y(l)·A(t)] |Θ(l) } - ∑_t=1^l+T-1_𝐈𝐈(t) 𝔼{[∑_u=1^U y_u(l) B_busτ-
y(l)·r(t)]|Θ(l) } - ∑_t=1^l+T-1Φ(t) 𝔼{[y(l)·f(t)- ∑_uF_u(y_u(l)B_bus)(t)]|Θ(l) } - V𝔼{[G_L(l)-η∑_t=pl^p(l+1)-1P(t)]|Θ(l)}.
C ⩾ 1/2{[ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1r_u(t)τ]^2 + [∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1 A_u(t)]^2 + [ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1B_busτ]^2 +
max_u∈ U^S(l)[ y_u(l) ∑_t=pl^p(l+1)-1r_u(t) ]^2 + [ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1f_u(t) ]^2 + max_u ∈ U^S(l)[ ∑_t=pl^p(l+1)-1F_u (y_u(l)B_bus)(t)]^2 }.
max_y(l),x(t),w(t),f(t) [_𝐈𝐈(t)- _𝐈(t)] y(l)·r(t) -_𝐈𝐈(t) ∑_uy_u(l)B_busτ-Φ(t)y(l)·f(t)+Φ(t) ∑_uF_u(y_u(l)B_bus)(t)+Vη P(t)
s.t. (C1)-(C6).
By the principle of opportunistically minimizing an expectation, minimizing a function of the observed state at each slot also minimizes its conditional expectation 𝔼{·|Θ(t)}. Therefore, we obtain the objective function in (<ref>) by ignoring the constant C, the terms _𝐈( t )∑_u = 1^U y_u( l )A_u( t ) and G_L( l ) in (<ref>), and by removing the conditional expectations in (<ref>). Since user admission operates on the long time scale, we separate the long- and short-time-scale problems into subproblems for ease of processing. The subproblems on the two time scales are later solved by iterative integration. Obviously, user association, bandwidth allocation and computing capacity allocation are highly coupled with each other in (<ref>). We further decompose these optimization variables to develop low-complexity algorithms in the following subsections.
§.§ Solution of Resource Allocation Subproblem in Short Time Scale
We obtain the solution of the coupled optimization problem (<ref>) by integrating the algorithms through iterative optimization. Under given user association x(t) and bandwidth allocation w(t), the computing resource allocation subproblem can be expressed by
max_f(t)Φ(t)y(l)·f(t)-Vη∑_u,kx_uk(t)κ_escf_uk^3(t)
s.t. (C4): f_u(t) ⩾ y_u(l) F_u(a_u(t))/(∑_m=1^Mz_um(t)t̃_m - y_u(l)a_u(t)/B_bus-y_u(l)a_u(t)/r_u(t)),∀ u,t,
(C6):∑_u=1^U x_uk(t)f_uk(t) ⩽ F_k, ∀ k,t.
where the other terms of equation (<ref>) are constants under the above condition. Obviously, the objective function of (<ref>) is concave and its constraints are linear, so it is a convex optimization problem. Therefore we obtain the optimized solution f^*(t) directly through the convex optimization method<cit.> in polynomial time using standard CVX tools<cit.>.
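As an example of how this subproblem could be handed to a generic solver, the sketch below uses CVXPY as a stand-in for the CVX tools mentioned above; the association pattern, the lower bounds implied by (C4), and all constants are illustrative assumptions rather than simulation parameters.

```python
import numpy as np
import cvxpy as cp

U, K = 4, 2
Phi, V, eta, kappa = 3.0, 10.0, 0.5, 1e-2        # backlog, control and power constants (illustrative)
y = np.ones(U)                                    # admitted users
assoc = np.array([0, 0, 1, 1])                    # SBS index each user is associated with
f_min = np.array([0.5, 0.8, 0.6, 0.7])            # per-user lower bounds derived from (C4)
F_k = np.array([5.0, 5.0])                        # per-SBS computing capacity, constraint (C6)

f = cp.Variable(U, nonneg=True)
objective = cp.Maximize(Phi * cp.sum(cp.multiply(y, f)) - V * eta * kappa * cp.sum(cp.power(f, 3)))
constraints = [f >= f_min]
for k in range(K):
    idx = np.where(assoc == k)[0]
    constraints.append(cp.sum(f[idx]) <= F_k[k])
cp.Problem(objective, constraints).solve()
print(f.value)
```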
Under given w(t) and f(t) we obtain the user association problem which is denoted as
min_x(t)[_𝐈𝐈(t)-_𝐈(t)]y(l)·r(t) +Vη∑_u,kx_uk(t)κ_escf_uk^3(t)
s.t. (C2)-(C3),
which is converted to
min_x(t)∑_u,k[(_𝐈𝐈(t)-_𝐈(t))y_u(l)r_uk(t)+Vηκ_escf_uk^3(t)] x_uk(t)
s.t. (C2)-(C3),
whose optimal solution is expressed as
x_uk(t) = 1 if k = k^*, and x_uk(t) = 0 otherwise, where
k^* = argmin_k∈ K^S{ (_𝐈𝐈(t)-_𝐈(t))y_u(l) r_uk(t)+V ηκ_esc f_uk^3(t)}.
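In code, this rule is a per-user argmin over the SBSs; the (U, K) rate and capacity arrays below are assumed inputs.

```python
import numpy as np

def associate(Q1, Q2, y, r, f, V, eta, kappa):
    # k* = argmin_k {(Q_II - Q_I) * y_u * r_uk + V*eta*kappa*f_uk^3}, applied to every user at once.
    score = (Q2 - Q1) * y[:, None] * r + V * eta * kappa * f ** 3
    k_star = np.argmin(score, axis=1)
    x = np.zeros(r.shape)
    x[np.arange(len(k_star)), k_star] = 1.0
    return x
```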
Next, if user association x(t) and computing capacity allocation f(t) are known, then the bandwidth allocation subproblem is given by
min_w(t)∑_u,k [_𝐈𝐈(t)-_𝐈(t)]x_uk(t) log_2(1+g_uk(t)p_u/I_uk(t)+σ^2)w_uk(t)
s.t. (C4)': ∑_k x_uk(t)w_uk(t)log_2(1+g_uk(t)p_u/I_uk(t)+σ^2) ⩾ a_u(t)/(t̃_m -t_u^comp(t)-t_u^bus(t)),∀ u,t,
(C5): ∑_u∈ Ux_uk(t)w_uk(t)⩽ W_k,∀ k,t.
It can be seen that the objective function of the problem is linear and the constraints are linear, so it is a linear programming problem; the solution w(t) can be obtained directly by optimization methods such as the interior-point method<cit.> using standard CVX tools.
We use the idea of the greedy algorithm to iterate the above three solutions for the subproblem and arrive at the suboptimal solution, which is summarized in Algorithm 1.
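A possible skeleton of this alternating loop is sketched below; the three sub-solvers are passed in as callables (for instance, implementations along the lines of the sketches above), and the tracked objective N^q is assumed to be returned by the bandwidth step.

```python
def algorithm1(solve_f, solve_x, solve_w, x0, w0, eps=1e-3, max_iter=50):
    # Greedy alternating optimization of f(t), x(t), w(t) for one STS.
    x, w = x0, w0
    prev_obj = float('inf')
    for _ in range(max_iter):
        f = solve_f(x, w)               # convex computing-capacity subproblem
        x = solve_x(w, f)               # closed-form user association
        w, obj = solve_w(x, f)          # linear bandwidth subproblem, also reports N^q
        if abs(obj - prev_obj) <= eps:  # stop when |N^q - N^{q-1}| <= eps
            break
        prev_obj = obj
    return f, x, w
```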
§.§ Solution of User Admission Subproblem in Long Time Scale
The original problem (<ref>) is decomposed into two subproblems on the time scale, one is (<ref>) in short time scale and the other is denoted as
max_y(l) G_L(l) - η G_S(l)
s.t. (C1):y_u(l)∈{0,1},∀ u,
(C8):feasibility of problem (<ref>).
Obviously, this problem is a nonlinear 0-1 integer programming problem, and constraint (C8) requires determining the feasibility of (<ref>); therefore it cannot be solved by conventional methods. Since the random task arrival process of each user u is independently and identically distributed across STSs, constraint (C8) can be transformed into (C4) and (C7). The Lyapunov framework makes the above solution, i.e., Algorithm 2, satisfy (C7), so we only need to transform (C8) into (C4) here. Then, (<ref>) is expressed as
max_y(l)v(l)·y(l) - η∑_t=pl^p(l+1)-1P(t)
s.t. (C1):y_u(l)∈{0,1},∀ u,
(C4):y_u(l)t_u(t) ⩽∑_m=1^M z_um(t)t̃_m,∀ u,t∈ [l,l+T-1].
The optimization problem (<ref>) is a linear 0-1 integer programming problem and is solved by iterating with (<ref>). The whole procedure is shown in Algorithm 2.
§.§ Dynamic Solution to the Optimization Problem
As mentioned above, we decompose the original complex stochastic optimization problem (<ref>) into two subproblems. As shown in Fig. 2, we propose a short-time-slot resource allocation algorithm and a long-time-slot user admission algorithm to solve it. Due to the specificity of the two-time-scale iterative algorithm, we solve the problem offline and deploy the resulting strategies online. The detailed procedure to solve (<ref>) is summarized in Algorithm 3.
§.§ Analysis of the Proposed Algorithms
In this subsection, we analyze the temporal computational complexity and the convergence of the proposed algorithms.
In Algorithm 1, we alternately iterate over the three subproblems and obtain their solutions via convex optimization. According to greedy algorithm and convex optimization theory<cit.>, iterating over the three convex subproblems ensures that |N^q(t)-N^q-1(t)|⩽ε is reached quickly. Therefore, the convergence of Algorithm 1 is obvious, but only sub-optimality is guaranteed<cit.>. From the perspective of complexity, the complexities of (<ref>), (<ref>) and (<ref>) are 𝒪(U^3K), 𝒪(UK) and 𝒪((UK)^3.5), respectively<cit.>. Assuming the number of iterations is L_1, the complexity of Algorithm 1 is 𝒪((U^3K+UK+(UK)^3.5)L_1). This decoupled algorithm makes good use of convex optimization methods to solve the complex coupled problem (<ref>).
In Algorithm 2, since (<ref>) is a linear 0-1 integer programming problem that is solved by iterating with (<ref>), and assuming the number of iterations is L_2, the complexity is 𝒪((U^2+γ_1)L_2), where γ_1 is the complexity of Algorithm 1. Obviously, the convergence of Algorithm 2 depends on Algorithm 1 since the iteration is performed with it. Therefore, the proposed algorithm reaches convergence after several iterations because Algorithm 1 converges quickly. Algorithm 3 runs the above algorithms over the time slots; therefore, at LTS l the complexity of Algorithm 3 is 𝒪((U^2+γ_1)L_2p). In this way, the complex stochastic optimization problem (<ref>) is decomposed into low-complexity subproblems that are solved iteratively.
§ SIMULATION RESULT
In this section, we first set the simulation parameters and then demonstrate simulation results to evaluate the performance of the proposed algorithms.
§.§ Simulation Parameters
As mentioned above, we perform a system-level simulation of the uplink in a small-cell F-RAN according to the 3GPP normative document on small-cell networks<cit.>. Four SBSs are deployed in a small-cell area with a total coverage of 200m× 200m. The SBSs provide admission and resource allocation for users. We model the path loss of the radio access links of the small-cell network and use a hexagonal cellular deployment model. The distance between a user and an SBS complies with the 3GPP document, and only outdoor access links exist. At STS t, let d_uk(t) be the distance between SBS k and user u; all users move randomly within the area at a speed of 3 km/h. Note that the path loss between SBS k and user u depends on the LoS/NLoS link state. For a LoS link, the path loss is given by
μ^LoS_uk(t) = 22.0log_10(d_uk(t))+28.0+20log_10(F^q),
and when it is a NLoS link, the path loss is given by
μ^NLoS_uk(t) = 36.7log_10(d_uk(t))+22.7+26log_10(F^q),
where F^q indicates the carrier frequency. The LoS probability that determines the LoS/NLoS link state is denoted as
p^LoS_uk(t) = min(18/d_uk(t),1)(1-e^-d_uk(t)/36)+e^-d_uk(t)/36,
therefore the NLoS probability is p^NLoS_uk(t) = 1-p^LoS_uk(t). Then the channel gain is expressed as
g_uk(t) = (p^LoS_uk10^μ^LoS_uk+p^NLoS_uk10^μ^NLoS_uk)^-1.
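For reference, one way to draw the simulated access channel is sketched below; here the LoS/NLoS state is sampled from p^LoS and the dB path loss is converted with the usual 10^(-μ/10) rule, whereas (<ref>) uses the probability-weighted form directly. The carrier frequency and distance are example values.

```python
import numpy as np
rng = np.random.default_rng(0)

def channel_gain(d, Fq=2.0):
    # Sample the LoS/NLoS state for distance d (m) and carrier Fq (GHz) and return a linear gain.
    p_los = min(18.0 / d, 1.0) * (1.0 - np.exp(-d / 36.0)) + np.exp(-d / 36.0)
    mu_los = 22.0 * np.log10(d) + 28.0 + 20.0 * np.log10(Fq)   # LoS path loss (dB)
    mu_nlos = 36.7 * np.log10(d) + 22.7 + 26.0 * np.log10(Fq)  # NLoS path loss (dB)
    mu = mu_los if rng.random() < p_los else mu_nlos
    return 10 ** (-mu / 10.0)

print(channel_gain(80.0))
```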
In our proposed algorithm, the parameter p in T=pτ represents the ratio between the lengths of the LTS and the STS. If p is too small, the multi-time-scale structure and the optimization effect of the algorithm are weakened; if p is too large, the running time of the algorithm increases while the performance gain is not significant. Therefore, we set appropriate values of T and τ. Part of the simulation parameters are summarized in Table II<cit.>.
According to the computing delay requirements of some ultra-reliable low-latency communication services and the considered task scenarios<cit.>, we set the baseline delay requirement to 20 ms. Furthermore, we assume three task types whose delay requirements and model parameters increase progressively with the task type and are equally distributed over the task set. The delay limits and model parameters of the tasks are shown in Table III.
§.§ Convergence of the Proposed Algorithm
For convenience, we combine the values of the STS iterations and the LTS iterations, i.e., the system revenue and cost at each STS, to show the overall convergence. Fig. <ref> shows the convergence of the proposed algorithm under different values of the parameter V. From Fig. <ref>, we can see that the proposed algorithm converges quickly and the trend is essentially fixed after convergence. Since Algorithm 1 is nested inside Algorithm 2, Algorithm 1 stops once the system utility of Algorithm 2 converges, and the trend remains essentially fixed thereafter. Moreover, a larger V corresponds to a larger penalty weight in the Lyapunov drift-plus-penalty, which increases the weight of G_S(l) relative to the overall queue stability and thus increases the impact of power consumption. Consequently, as V becomes larger, the utility value increases and fluctuates more after convergence, which supports Remark 2.
§.§ Performance of the Proposed Algorithm
To verify the performance of the proposed algorithm, we will consider the following schemes:
* Fixed Allocation (FA): The scheme only optimizes user admission.
* Fixed Channel (FC): User association and bandwidth allocation are fixed, while the admission variables and computing resource allocation are optimized.
* Traditional Computing (TC)<cit.>: The scheme allocates computing resources based on input data size according to the traditional computing model.
Fig. <ref> shows the number of admitted users for the three different task types. As we can see, different values of t̃_m affect the number of admitted users. From (<ref>), we notice that t̃_m affects the system revenue and causes our algorithm to prefer more valuable users. This characteristic is also visible in the figure, where more users of task 2 are admitted than of the other tasks. In our system, the integrated user value is affected by the required computing resource F_um in addition to t̃_m, and therefore users of task 2 are the most valuable under our parameters. Nevertheless, compared with the contrast algorithms, our proposed algorithm handles the differences between the three task types well and keeps user admission stable, as shown in Fig. <ref>.
Fig. 5 shows the system utility versus the total number of users under different bandwidth values. The utility increases with the total number of users, and the growth slows once the total number of users exceeds 70. To better indicate the momentum of the system utility, we add a blue dotted line representing the ratio of admitted users of our proposed algorithm on the right vertical axis of Fig. 5. When the total number of users is small, the resources are sufficient and resource allocation can be efficient, so the system utility increases quickly as the total number of users grows. However, when the total number of users is large, the resources are limited and resource allocation is no longer efficient, so the rising tendency of the utility is reduced. The ratio of admitted users first decreases and then becomes stable after the total number of users reaches 70, which shows that the growth of the system utility slows when the total number of users becomes large. The comparison algorithms all share this property, but the trend differs among algorithms. To keep the figure concise, we do not show the ratio of admitted users of the comparison algorithms, which has already been compared in Fig. 4. There is only a slight increase in system utility for higher bandwidth values, so changes in the bandwidth have a small impact on the system.
Fig. 6 compares the system utility versus the computing capacity under different bandwidth values; a maximum is observed at F_k = 190 Gigacycle/s. This is because the system utility depends on both the number of associated users and the power consumption, and our algorithm has to trade these off. When F_k reaches a certain value, our proposed algorithm attains a better trade-off utility. However, when F_k continues to rise, the fairness design of the algorithm admits more users and thus generates more power consumption, which leads to a decrease in the utility. Once F_k grows beyond a certain magnitude, the number of associated users no longer increases due to the limited bandwidth resource, so the change becomes insignificant. This property is also present in the comparison algorithms, mainly because the utility defined in this paper is indirectly influenced by the allocation of multiple resources. The increase in computing resource has little effect on the total utility, so its impact is insignificant and the utility may even decrease as the computing resource increases. The figure again shows that higher bandwidth values do not have a significant impact on the total utility.
We plot the trade-off between system revenue and system cost versus η in Fig. <ref>. The system utility in (<ref>) indicates that our algorithm can balance system revenue and cost, and the figure shows that the proposed algorithm attains a better trade-off than the comparison algorithms as η increases, which verifies Remark 1. As η grows, the system revenue clearly increases and the negative system cost decreases in our proposed algorithm, whereas this trend is far less significant in the comparison schemes; this means that our proposed algorithm attaches importance to user admission and stabilizes the system utility. Therefore, the system utility is increased by balancing the revenue at the LTSs and the cost at the STSs, making the system more stable compared with the other algorithms.
§ CONCLUSION
In this paper, we studied the dynamic resource allocation problem for tasks with specific characteristics in MEC systems. Specifically, the proposed stochastic optimization problem was decomposed into user admission at the LTS and resource allocation at the STS by the Lyapunov optimization technique, and we decoupled the optimization variables for efficient algorithm design and solved each subproblem at low complexity. Simulation results have demonstrated that, compared with the benchmarks, the proposed algorithm efficiently improves the performance of user admission and resource allocation and achieves a flexible trade-off between system revenue and cost across time scales while accounting for semantic extraction tasks.
§ APPENDIX A: PROOF OF THEOREM 1
First of all, we have that {max[ A - B,0] + C}^2≤A^2 + B^2 + C^2 - 2A( B - C) always holds if A≥ 0, B≥ 0 and C ≥ 0. Suppose V>0, then squaring both sides of (<ref>) yields
_𝐈(l+T-1)^2 ⩽_𝐈(t)^2+[∑_t=pl^p(l+1)-1τy(l) ·r(t) ]^2 + [ ∑_u=1^U y_u(l) ∑_t=pl^p(l+1)-1A_u(t)]^2 - 2 ∑_t=pl^p(l+1)-1_𝐈(t)y(l)·[τr(t) - A(t) ],
For (<ref>), similarly, we have
_𝐈𝐈(l+T-1)^2 ⩽ _𝐈𝐈(t)^2+[∑_u,ky_u(l) ∑_t=pl^p(l+1)-1B_busτ]^2 + max_u∈ U^S(l)[y_u(l) ∑_t=pl^p(l+1)-1r_u(t) ]^2 -
2 ∑_t=pl^p(l+1)-1_𝐈𝐈(t) [∑_u,ky_u(l)B_busτ - y(l) ·r(t) ].
For (<ref>), we also have
Φ(l+T-1)^2 ⩽ Φ(t)^2+[∑_t=pl^p(l+1)-1y(l)·f(t) ]^2 + max_u∈ U^S(l)[ ∑_t=pl^p(l+1)-1∑_m=1^M z_um(t)F_um(y_u(l)B_bus) ]^2 -
2 ∑_t=pl^p(l+1)-1Φ(t)[ y(l)·f(t) - ∑_uF_u(y_u(l)B_bus)(t)].
By organizing the above inequalities, we obtain
_𝐈(l+T-1)^2-_𝐈(t)^2/2⩽1/2{[∑_t=pl^p(l+1)-1τy(l) ·r(t) ]^2+ [ ∑_u=1^U y_u(l) ∑_t=pl^p(l+1)-1A_u(t)]^2} - ∑_t=pl^p(l+1)-1_𝐈(t) y(l)·[τr(t) - A(t)] ,
_𝐈𝐈(l+T-1)^2-_𝐈𝐈(t)^2/2⩽ 1/2{[∑_u,ky_u(l) ∑_t=pl^p(l+1)-1B_busτ]^2 + max_u∈ U^S(l)[y_u(l) ∑_t=pl^p(l+1)-1r_u(t) ]^2} -
∑_t=pl^p(l+1)-1_𝐈𝐈(t)[∑_u,k y_u(l)B_busτ - y(l) ·r(t) ].
and
Φ(l+T-1)^2-Φ(t)^2/2⩽ 1/2{[∑_t=pl^p(l+1)-1y(l)·f(t)]^2 + max_u∈ U^S(l)[ ∑_t=pl^p(l+1)-1F_u(y_u(l)B_bus)(t)]^2} -
∑_t=pl^p(l+1)-1Φ(t) [y(l)·f(t)- ∑_uF_u(y_u(l)B_bus)(t)].
Summing the above three equations yields
L(Θ(l+T)-Θ(l)) ⩽ 1/2{[ ∑_t=pl^p(l+1)-1τy(l)·r(t) ]^2 + [∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1 A_u(t)]^2 + [ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1B_busτ]^2 +
max_u∈ U^S(l)[ y_u(l) ∑_t=pl^p(l+1)-1r_u(t) ]^2 + [ ∑_t=pl^p(l+1)-1y(l) ·f(t) ]^2 + max_u ∈ U^S(l)[ ∑_t=pl^p(l+1)-1F_u(y_u(l)B_bus)(t)]^2 } -
∑_t=pl^p(l+1)-1_𝐈(t)[ τy(l)·r(t) - y(l) ·A(t) ] - ∑_t=pl^p(l+1)-1_𝐈𝐈(t)[∑_u,ky_u(l)B_busτ- y(l)·r(t) ] -
∑_t=pl^p(l+1)-1Φ(t) [ y(l)·f(t) - ∑_uF_u(y_u(l)B_bus)(t)].
We take conditional expectation to the above inequality and can obtain
Δ_T(Θ(l))-V𝔼{G(l)|Θ(l)}⩽ C - ∑_t=pl^p(l+1)-1_𝐈(t) 𝔼{[τy(l)·r(t) - y(l)·A(t)] |Θ(l) } - ∑_t=1^l+T-1_𝐈𝐈(t) 𝔼{[∑_u=1^U y_u(l) B_busτ-
y(l)·r(t)]|Θ(l) } - ∑_t=1^l+T-1Φ(t) 𝔼{[y(l)·f(t)- ∑_uF_u(y_u(l)B_bus)(t)]|Θ(l) } - V𝔼{[G_L(l)-η∑_t=pl^p(l+1)-1P(t)]|Θ(l)},
where
C ⩾1/2{[ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1r_u(t)τ]^2 + [∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1 A_u(t)]^2 + [ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1B_busτ]^2 + max_u∈ U^S(l)[ y_u(l) ∑_t=pl^p(l+1)-1
r_u(t) ]^2 + [ ∑_u=1^Uy_u(l) ∑_t=pl^p(l+1)-1f_u(t) ]^2 + max_u ∈ U^S(l)[ ∑_t=pl^p(l+1)-1F_u (y_u(l)B_bus)(t)]^2 }.
Then we complete the proof of Theorem 1.
ref1
Y. Guo, F. R. Yu, J. An, K. Yang, C. Yu and V. C. M. Leung, “Adaptive bitrate streaming in wireless networks with transcoding at network edge using deep reinforcement learning," IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 3879-3892, Apr. 2020.
ref2
S. Jošilo and G. Dán, “Computation Offloading scheduling for periodic tasks in mobile edge computing," IEEE Trans. Netw., vol. 28, no. 2, pp. 667-680, Apr. 2020.
ref3
L. Lei, C. Chen, Q. Pei, S. Maharjan and Y. Zhang, “Vehicular edge computing and networking: A survey." Mobile Netw. Appl., vol.26, no. 3, pp. 1145-1168, Jun. 2021.
ref4
J. Du, L. Zhao, J. Feng, and X. Chu, “Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Trans. Commun., vol. 66, no. 4, pp. 1594-1608, Apr. 2018.
ref5
J. Feng, Q. Pei, F. R. Yu, X. Chu, J. Du, and L. Zhu, “Dynamic network slicing and resource allocation in mobile edge computing systems," IEEE Trans. Veh. Technol., vol. 69, no. 7, pp. 7863-7878, Jul. 2020.
ref6
S. Zarandi and H. Tabassum, “Delay minimization in sliced multi-cell mobile edge computing (MEC) systems," IEEE Commun. Lett., vol. 25, no. 6, pp. 1964-1968, Jun. 2021.
ref7
G. Faraci, C. Grasso, and G. Schembra, “Design of a 5G network slice extension with MEC UAVs managed with reinforcement learning," IEEE J. Sel. Areas Commun., vol. 16, no. 7, pp. 2356-2371, Oct. 2020.
ref8
J. Y. Hwang, L. Nkenyereye and N. M. Sung, “IoT service slicing and task offloading for edge computing," IEEE Internet Things J., vol. 44, no. 4, pp. 1-14, Apr. 2020.
ref9
X. Cao, J. Xu and R. Zhang, “Mobile edge computing for cellular-connected UAV: Computation offloading and trajectory optimization," IEEE 19th International Workshop Signal Process. Adv. Wireless Commun. (SPAWC), pp. 1-5, 2018.
ref10
M. A. Hossain, and N. Ansari, “Energy aware latency minimization for network slicing enabled edge computing," IEEE Trans. Green Commun. Netw., pp. 1-10, May. 2021.
ref30
T. Zhang, Y. Xu, J. Loo, D. Yang and L. Xiao, "Joint computation and communication design for UAV-assisted mobile edge computing in IoT," IEEE Trans. Industrial Informatics, vol. 16, no. 8, pp. 5505-5516, Aug. 2020.
ref31
J. Feng, W. Zhang, Q. Pei, J. Wu and X. Lin, "Heterogeneous computation and resource allocation for wireless powered federated edge learning systems," IEEE Trans. Commun., vol. 70, no. 5, pp. 3220-3233, Mar. 2022.
ref11
H. Xie, Z. Qin, G. Y. Li, and B. H. Juang. “Deep learning enabled semantic communication systems," IEEE Trans. Signal Process., vol. 69, pp. 2663-2675, Apr. 2021.
ref12
H. Xie and Z. Qin, “A lite distributed semantic communication system for internet of things," IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 142-153, Nov. 2020.
ref13
H. Qi, E. R. Sparks and A. Talwalkar, “PALEO: A performance model for deep neural networks," International Conf. Learn. Representations (ICLR), 2016.
ref14
D. Justus, J. Brennan, S. Bonner and A. S. McGough, “Predicting the computational cost of deep learning models," IEEE International Conf. Big Data, pp. 3873-3882, Dec. 2018.
ref15
D. Bienstock, G. Muñoz and S. Pokutta, “Principled deep neural network training through linear programming," arXiv preprint arXiv:1810.03218, Oct. 2020.
ref16
M. Bianchini and F. Scarselli, “On the complexity of neural network classifiers: A comparison between shallow and deep architectures," IEEE Tran. Neural Netw. Learn. Syst., vol. 25, no. 8, pp. 1553-1565, Jan. 2014.
ref17
F. Guo, F. R. Yu, H. Zhang, H. Ji, M. Liu, and V. C. M. Leung, “Adaptive resource allocation in future wireless networks with blockchain and mobile edge computing,” IEEE Trans. Wireless Commun., vol. 19, no. 3, pp. 1689-1703, Mar. 2020.
ref18
Y. Xiao and M. Krunz, “Dynamic network slicing for scalable fog computing systems with energy harvesting,” IEEE J. Sel. Areas Commun., vol. 36, no. 12, pp. 2640-2654, Dec. 2018.
ref19
N. Van Huynh, D. Thai Hoang, D. N. Nguyen, and E. Dutkiewicz, “Optimal and fast real-time resource slicing with deep dueling neural networks,” IEEE J. Sel. Areas Commun., vol. 37, no. 6, pp. 1455-1470, Jun. 2019.
ref20
G. Sun, H. Al-Ward, G. O. Boateng and G. Liu, “Autonomous cache resource slicing and content placement at virtualized mobile edge network," IEEE Access, vol. 7, pp. 84727-84743, Jun. 2019.
ref21
M. J. Neely, “Stochastic network optimization with application to communication and queueing systems," Synth. Lect. Commun.. San Rafael, CA, USA: Morgan & Claypool Publishers, 2010.
ref22
S. Boyd, “Convex optimization problems,” Lecture slides and notes. 2008. [Online]. Available: http://web.stanford.edu/class/ee364a/lectures.
html.
ref23
M. Grant, S. Boyd, and Y. Ye, “CVX: MATLAB software for disciplined convex programming,” 2014. [Online]. Available: http://cvxr.com/cvx/.
ref24
S. Boyd, “Interior-point methods,” Lecture slides and notes. 2008. [Online]. Available: http://web.stanford.edu/class/ee364a/lectures.html.
ref25
S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
ref26
3GPP, “Technical specification group radio access network; Small cell enhancements for E-UTRA and E-UTRAN Physical layer aspects,” TR 36.872, Release 15, pp. 9, 76-77, Dec. 2013.
ref27
3GPP, “Technical specification group radio access network; Evolved universal terrestrial radio access (E-UTRA); Further advancements for E-UTRA physical layer aspects," TR 36.814, Release 9, pp. 94-96, Mar. 2017.
ref28
J. Li, Q. L. Dong and M. Liao, “Study on the scenarios and future development of URLLC,” Mob. Commun., vol. 44, no. 2, pp. 20-24, Dec. 2020.
ref29
S. Zarandi and H. Tabassum, “Delay minimization in sliced multi-cell mobile edge computing (MEC) systems," IEEE Commun. Lett., vol. 25, no. 6, pp. 1964-1968, Jan. 2021.
|
http://arxiv.org/abs/2307.02377v2 | 20230703092746 | Fraunhofer SIT at CheckThat! 2023: Tackling Classification Uncertainty Using Model Souping on the Example of Check-Worthiness Classification | [
"Raphael Frick",
"Inna Vogel",
"Jeong-Eun Choi"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
2023
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CLEF 2023: Conference and Labs of the Evaluation Forum, September 18–21, 2023, Thessaloniki, Greece
Notebook for the CheckThat! Lab at CLEF 2023
Raphael Antonius Frick ([email protected]), Inna Vogel ([email protected]), Jeong-Eun Choi ([email protected])
Fraunhofer Institute for Secure Information Technology SIT | ATHENE - National Research Center for Applied Cybersecurity, Rheinstrasse 75, Darmstadt, 64295, Germany, https://www.sit.fraunhofer.de/
Corresponding author.
This paper describes the second-placed approach developed by the Fraunhofer SIT team in the CLEF-2023 CheckThat! lab Task 1B for English. Given a text snippet from a political debate, the aim of this task is to determine whether it should be assessed for check-worthiness. Detecting check-worthy statements aims to facilitate manual fact-checking efforts by prioritizing the claims that fact-checkers should consider first. It can also be considered as primary step of a fact-checking system. Our best-performing method took advantage of an ensemble classification scheme centered on Model Souping. When applied to the English data set, our submitted model achieved an overall F_1 score of 0.878 and was ranked as the second-best model in the competition.
check-worthiness detection model souping fact-checking BERT NER
|
http://arxiv.org/abs/2307.03305v1 | 20230706213813 | A Vulnerability of Attribution Methods Using Pre-Softmax Scores | [
"Miguel Lerma",
"Mirtha Lucas"
] | cs.LG | [
"cs.LG",
"cs.AI",
"68T07",
"I.2.m"
] |
We discuss a vulnerability involving a category of attribution methods used to
provide explanations for the outputs of convolutional neural networks
working as classifiers.
It is known that this type of network is vulnerable to adversarial attacks,
in which imperceptible perturbations of the input
may alter the outputs of the model <cit.>. In contrast,
here we focus
on the effects that small modifications of the model may have on
the attribution method without altering the model outputs.
§ INTRODUCTION
The black box nature of current artificial intelligence (AI) models
is considered problematic in areas with low tolerance to errors, such as
Computer Aided Diagnosis (CAD) and autonomous vehicles. To palliate the
effect of mistakes and increase confidence in the model, explanation methods
have been developed to justify the model outputs <cit.>.
A class of explanation methods widely used on convolutional neural networks (CNN)
take the form of attribution methods
that determine how much different parts of the input of a model contribute
to produce its final output. In general, the networks on which
these methods are used consist of several
convolutional layers that produce a vector of outputs 𝐳 = (z_1,z_2,…,z_n),
which is then transformed with a softmax function into a vector of probabilities
𝐲 = (y_1,y_2,…,y_n), where n is the number of classes.
(Figure <ref>).
Each post-softmax output can be interpreted as the amount of confidence
about the input sample belonging to each of the several classes 1,2,…,n.
In classification tasks, the output with maximum value corresponds to the
class to which the input sample is considered to belong.
Gradient-based attribution
methods for convolutional networks work by computing the gradient
∇_𝐱S = (∂ S/∂ x_1,…,∂
S/∂ x_N) of an output or “score” S of the network respect
to a set of inputs or unit activations 𝐱 = (x_1,…,x_N),
where N is the number of inputs or internal units, and
S may represent either one of the pre-softmax outputs z_i,
or one of the post-softmax outputs y_i.
The assumption is that each derivative ∂ S/∂ x_i
provides a measure of the impact of x_i on the score S.
A few examples of attribution methods using this approach are Grad-CAM <cit.>,
Integrated Gradients (IG) <cit.>,
and RSI Grad-CAM <cit.>.
In <cit.> there is a detailed analysis of the differences between using
gradients of pre-softmax versus post-softmax outputs. In that paper it is argued
that the post-softmax version of gradient-based methods is more robust
and not affected by a vulnerability suffered by the pre-softmax version.
Here we will provide a brief overview of the main argument leading to that conclusion,
and a way in which the vulnerability could be exploited.
§ A VULNERABILITY OF ATTRIBUTION METHODS USING PRE-SOFTMAX SCORES.
In this section we examine a vulnerability that affects attribution methods for CNNs that
work with pre-softmax scores, with a special emphasis on gradient-based methods, although many of
the considerations can be easily extended to methods that work with finite differences rather
than gradients, such as Layer-wise Relevance Propagation (LRP) <cit.>
and DeepLIFT <cit.>.
§.§ The softmax function
The output of the softmax function applied to a vector 𝐳 = (z_1,z_2,…,z_n)
is the vector 𝐲 = (y_1,y_2,…,y_n) whose components are:
y_c = e^z_c/∑_i=1^n e^z_i .
The outputs of the softmax verify 0< y_c < 1 for all classes c=1,…,n,
and ∑_c=1^n y_c = 1, so the
y_c are usually interpreted as probabilities.
Note that adding an amount t independent of the class i to all the arguments
of the softmax, z'_i = z_i + t, has no effect on its outputs:
y'_c = e^z'_c/∑_i=1^n e^z'_i
= e^z_c+t/∑_i=1^n e^z_i+t
= e^t e^z_c/∑_i=1^n e^t e^z_i
= e^t e^z_c/e^t ∑_i=1^n e^z_i
= e^z_c/∑_i=1^n e^z_i = y_c
.
So, the change z_i ↦ z_i+t for every i does not change the
network post-softmax outputs y_c. Note that t does not need to be a constant, all that is required is that t is independent of i.
Since adding t has no effect in the output of the softmax,
the derivatives of
the outputs of the softmax won't change after adding t to its arguments:
∂ y'_i/∂ x = ∂ y_i/∂ x ,
however the derivatives of the pre-softmax z_i may change:
∂ z'_i/∂ x = ∂ (z'_i + t)/∂ x
= ∂ z_i/∂ x + ∂ t/∂ x ,
so that ∂ z'_i/∂ x≠∂ z_i/∂ x
if ∂ t/∂ x≠ 0.
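The invariance of the post-softmax outputs under such a shift is easy to verify numerically, as in the following small NumPy sketch (the values of z and t are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([3.1, 0.4, -1.2])
t = 7.5                        # any shift that does not depend on the class index i
print(np.allclose(softmax(z), softmax(z + t)))   # True: the post-softmax outputs are unchanged
```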
This theoretical result and its potential impact
in gradient-based attribution methods
are carefully examined in <cit.>.
In the following section we will
provide a proof of concept showing
how this result can be used to radically modify a heatmap
produced by an attribution method such as Grad-CAM.
§.§ A vulnerability of attribution methods using pre-softmax scores.
Equation (<ref>) shows that the softmax function has no unique
inverse because we can add to its arguments z_1,…,z_n any scalar t independent of i
without changing the output of the softmax.
In the example shown here ( <ref>)
the network is a VGG19 pretrained on ImageNet <cit.>.
Then, t is the result of adding
the activations of the units placed in position (0,0) of the final pool layer
(block5_pool) across all its channels multiplied by a constant K.
More specifically, if A_ijk presents the activation of unit in position (i,j) of channel k
of the last pooling layer, then:
t = K ∑_k A_00k ,
where K is a constant—in our experiment we used K=10.
After t is added to the original
z_i pre-softmax scores of the network we get new pre-softmax scores z'_i = z_i+t.
This makes the new pre-softmax scores strongly dependent on the units
in position (0,0) of the final pool layer without altering the post-softmax scores of the network.
Consequently, we expect heatmaps produced by Grad-CAM to strongly highlight the upper left area of
the image regardless of whether that part of the image is related to the network final output.
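The experiment described here was carried out on a Keras VGG19; purely as an illustration, an analogous logit modification can be written in a few lines of PyTorch, using torchvision's VGG19 as a stand-in (the layer names and the weights argument follow current torchvision and are not part of the original setup):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class ShiftedVGG19(nn.Module):
    # Adds t = K * sum_k A_{00k} (taken from the final pooling output) to every logit,
    # which leaves the softmax probabilities untouched.
    def __init__(self, K=10.0):
        super().__init__()
        self.base = vgg19(weights="IMAGENET1K_V1")
        self.K = K

    def forward(self, x):
        feats = self.base.features(x)                              # shape (B, 512, 7, 7)
        t = self.K * feats[:, :, 0, 0].sum(dim=1, keepdim=True)    # channel sum at position (0, 0)
        z = self.base.classifier(torch.flatten(self.base.avgpool(feats), 1))
        return z + t                                               # z'_i = z_i + t for every class i
```

Since t is added to every class score, gradients of the post-softmax outputs are identical to those of the unmodified network, while gradients of the pre-softmax scores pick up the extra dependence on the units at position (0,0).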
Figures <ref>–<ref> show that,
for the altered model,
the heatmaps produced using pre-softmax scores are strongly distorted,
while the heatmaps produced using post-softmax scores remain unchanged.
On the other hand, since the final (post-softmax) output of the network remains unchanged, the loss function used for training
would sit on the same local minimum for both models
(original and modified). Further training of the models won't make
a difference since the added connection
cannot backpropagate error. More specifically,
if E is the loss function used for training, then for the modified model
we have (using multivariate chain rule):
∂ E/∂ t =
∑_i=1^n ∂ E/∂ y'_i∂ y'_i/∂ t = 0
because y'_i=y_i, which does not depend on t, hence
∂ y'_i/∂ t = ∂ y_i/∂ t = 0 for all i.
Consequently,
the trainable parameters of both models would change in the same way,
and if the error function E is at or near a minimum for the original
model, the same would hold for the modified model.
Also, if we trained the modified VGG19 network from scratch
and with the same parameter initialization,
the final trainable parameters would be the same
as those of the original VGG19.
§ DISCUSSION
We note that the main property behind the vulnerability shown here is the
possibility of altering pre-softmax scores of a classifier CNN without
altering its post-softmax scores.
One question could be whether this vulnerability
can be exploited to deploy a malicious attack
intended to undermine confidence in the model.
This kind of attack would be available for anybody having
access to model repositories. Since after modification the
new model would be functionally equivalent to the original one
(its outputs will not change)
it would be hard to notice that it has been modified.
Also, it is conceivable that the problem pointed out may
manifest itself in an unintended way
because, after training, both the original and modified model
may end up at the same local minimum
of the loss function used for training.
The phenomenon discussed may seem to have some
similarities with Clever Hans
effects <cit.>,
which also causes heatmaps to highlight wrong areas
of the input. Clever Hans effects are
due to the ability of a classifier to
exploit spurious or artifactual correlations. For instance, in a dataset in which
images of horses contain a watermark, the model may learn to correctly classify
the image of a horse by paying attention only to the presence of the watermark
rather than the horse.
In that case, an appropriate attribution method would consistently highlight the area of the watermark in the images with horses, which is outside the actual
area of interest. However, that would not happen
because of a problem in the attribution method,
which would be correctly
revealing a problem with the model
(trained with a biased dataset).
On the contrary, the vulnerability discussed here
tells nothing about the ability of the model to extract the right information
from the right parts of its inputs,
it only depends on the fact that the gradients of
the pre-softmax scores may not provide the right information
to determine the impact
of the inputs on the final (post-softmax) outputs.
§ CONCLUSIONS
We have shown that attribution methods using pre-softmax scores are vulnerable
to a class of
adversarial attacks that may modify the heatmaps produced without changing the model outputs.
Post-softmax outputs are not vulnerable to this kind of attack.
We have also noted that the vulnerability discussed
here is not a Clever Hans effect. Future work can be used to determine
in what extent the problem applies to a wider
class of attribution methods.
burkart2021 Burkart N., Huber M.F. (2021).
A Survey on the Explainability of Supervised Machine Learning.
Journal of Artificial Intelligence Research, Volume 70, pp 245–317.
https://doi.org/10.1613/jair.1.12228
goodfellow2015 Goodfellow, I.J., Shlens, J., and Szegedy, C. (2014).
Explaining and Harnessing Adversarial Examples. CoRR, abs/1412.6572.
lapuschkin2016 Lapuschkin S., Binder A., Montavon G., Müller K.R. and Samek W. (2016).
Analyzing classifiers: fisher vectors and deep neural networks.
In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2912–2920 (2016).
lapuschkin2019 Lapuschkin, S., Wäldchen, S., Binder, A. et al. (2019).
Unmasking Clever Hans predictors and assessing what machines really learn. Nat Commun 10, 1096 (2019).
https://doi.org/10.1038/s41467-019-08987-4
lerma2023 Lerma, M., Lucas M. (2023). Pre or Post-Softmax Scores in Gradient-based
Attribution Methods, What is Best? Accepted for presentation in
the IEEE 13th International Conference on Pattern Recognition Systems (ICPRS),
July 4th-7th, 2023, Escuela Superior Politécnica del Litoral (ESPOL), Guayaquil - Ecuador
lucas2022 Lucas, M., Lerma M., Furst, J., Raicu, D. (2022).
RSI-Grad-CAM: Visual Explanations from Deep Networks via
Riemann-Stieltjes Integrated Gradient-Based Localization. In: Bebis,
G. et al (Ed.), Advances in Visual Computing. ISVC 2022. Lecture
Notes in Computer Science, vol 13598. Springer,
Cham. https://doi.org/10.1007/978-3-031-20713-6_20
montavon2019 Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, KR. (2019).
Layer-Wise Relevance Propagation: An Overview. In: Samek, W., Montavon, G., Vedaldi, A., Hansen,
L., Müller, KR. (eds) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning.
Lecture Notes in Computer Science(), vol 11700. Springer, Cham.
https://doi.org/10.1007/978-3-030-28954-6_10
selvaraju2019 Selvaraju, R.R., Cogswell, M., Das, A.,
Vedantam, R., Parikh, D., Batra, D. (2019): Grad-CAM: visual explanations
from deep networks via gradient-based localization. Int.
J. Comput. Vision 128(2), 336–359
(2019). https://doi.org/10.1007/s11263-019-01228-7
shrikumar2017 Shrikumar A., Greenside P., Kundaje A. (2017).
Learning important features through propagating activation differences.
In Doina Precup and Yee Whye Teh (eds.), Proceedings of
the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine
Learning Research, pp. 3145–3153, International Convention Centre, Sydney, Australia, 06–11 Aug
2017. PMLR.
simonyan2015 Simonyan, K., Zisserman, A.: Very deep
convolutional networks for large-scale image recognition
(2015). https://arxiv.org/abs/1409.1556
sundararajan2017 Sundararajan, M., Taly, A., Yan, Q.: Axiomatic
attribution for deep networks. In: Precup, D., Teh, Y.W. (eds.)
Proceedings of the 34th International Conference on Machine
Learning. Proceedings of Machine Learning Research, vol. 70,
pp. 3319–3328. PMLR (2017).
|
http://arxiv.org/abs/2307.01337v1 | 20230703201618 | Uncovering new white dwarf - open cluster associations using Gaia DR3 | [
"M. Prišegen",
"N. Faltová"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Advanced Technologies Research Institute, Faculty of Materials Science and Technology in Trnava, Slovak University of Technology in Bratislava, Bottova 25, 917 24 Trnava, Slovakia
Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlářská 2, 611 37 Brno, Czech Republic
[email protected]
Open clusters (OCs) provide homogeneous samples of white dwarfs (WDs) with known distances, extinctions, and total ages. The unprecedented astrometric precision of Gaia allows us to identify many novel OC–WD pairs. Studying WDs in the context of their parent OCs makes it possible to determine the properties of WD progenitors and study the initial–final mass relation (IFMR).
We seek to find potential new WD members of OCs in the solar vicinity. The analysis of OC members' parallaxes allows us to determine the OC distances to a high precision, which in turn enables us to calculate WD masses and cooling ages and to constrain the IFMR.
We searched for new potential WD members of nearby OCs using the density-based machine learning clustering algorithm . The clustering analysis was applied in five astrometric dimensions – positions in the sky, proper motions and parallaxes, and in three dimensions where the positional information was not considered in the clustering analysis. The identified candidate OC WDs were further filtered using the photometric criteria and properties of their putative host OCs. The masses and cooling ages of the WDs were calculated via a photometric method using all available Gaia, Pan-STARRS, SDSS, and GALEX photometry. The WD progenitor masses were determined using the ages and metallicities of their host OCs.
Altogether, 63 OC WD candidates were recovered, 27 of which are already known in the literature. We provide characterization for 36 novel WDs that have significant OC membership probabilities. Six of them fall into relatively unconstrained sections of the IFMR where the relation seems to exhibit nonlinear behavior. We were not able to identify any WDs originating from massive progenitors that would even remotely approach the widely adopted WD progenitor mass limit of 8 M_⊙; this confirms the paucity of such objects residing in OCs and hints at a presence of velocity kicks for nascent WDs.
Uncovering new white dwarf–open cluster associations using Gaia DR3
M. Prišegen 1,2 N. Faltová 2
Received Month day, 2023; accepted June day, 2023
===================================================================
§ INTRODUCTION
White dwarfs (WDs) are compact objects that represent the ultimate stage of evolution of main sequence (MS) stars with zero-age MS masses less than about eight times that of the Sun <cit.>. This mass bracket encompasses the vast majority of all stars in the Galaxy. Due to their ubiquity, WDs comprise a significant component of Galactic populations. Since they no longer generate energy via central nuclear reactions, they gradually cool from their high initial temperatures in a predictable way. These properties make WDs a suitable tracer for studying Galactic history, the enrichment of the interstellar medium (ISM), the physics of matter under extreme conditions, and the progenitors of Type Ia supernovae <cit.>.
Due to their small sizes, WDs are intrinsically faint objects, with absolute magnitudes ∼10 mag fainter than MS stars of similar effective temperatures. Despite recent significant advances in astrometry, their faintness still poses problems for precise distance determination. Furthermore, due to their evolved nature, many of their properties, such as metallicity and mass, no longer reflect the properties of their progenitors. This is due to significant mass loss (especially in the last phases of stellar evolution) and the change in the observable chemical composition since elements heavier than H or He tend to diffuse downward into the WD interior due to the strong gravitational field <cit.>. In the absence of any external accretion of matter, WDs relatively quickly end up with either a pure H (DA type) or He (DB type) atmosphere. As a consequence of these factors, observations of isolated WDs contain only limited information on the progenitor, and the limited precision of the measured distances also affects the derivation of other WD properties. However, if an association with another object or a group of stars with common properties can be established – that is, when a WD is part of a wide binary system with a stellar companion or a star cluster – a number of these problems can be resolved and new research avenues are opened <cit.>.
The most numerous class of star clusters are open clusters (OCs). They are gravitationally bound stellar aggregates (typically of ∼ 10^2–10^4 stars) formed from a giant molecular cloud in a single star formation event. Open cluster member stars share several common properties, such as overall age, initial chemical composition (metallicity), overall kinematic properties in the context of the Galaxy, distance from the observer, and the amount of intervening ISM between OC stars and the observer (extinction or reddening; this may not hold for very young OCs). A large number of OC members and the use of robust statistics make the derivation of these parameters more accurate than what is otherwise typically achievable for an isolated star located in the Galactic field. The ages of known OCs cover a wide range, from a few million years to almost ten gigayears <cit.>. Metallicities span from subsolar [Fe/H] = -0.56 to super-solar values of about [Fe/H] = 0.55 <cit.>. However, most OCs are relatively young and metal-rich, as most of the older OCs have long since dispersed into the Galactic field.
The fields of OC and WD research have been completely revolutionized with the advent of the Gaia satellite <cit.>, especially its second data release <cit.> (GDR2) and the early installment of its third data release <cit.> (EGDR3). EGDR3 presents accurate positions, proper motions, parallaxes, and broadband photometry in the G, G_BP, and G_RP bands for almost 1.5 billion sources, with notable improvements over GDR2 in both the formal uncertainties and systematic effects <cit.>. Moreover, the number of sources has increased by 7%, and there are proper motions and parallaxes available for 10% more sources as compared to GDR2, which also yields improved completeness in dense areas of the sky <cit.>. The complete third Gaia data release <cit.> (GDR3), published in 2022, also includes source classifications <cit.> and corrected photometry for some sources. The potential of a large-scale, astrometry-focused mission such as Gaia for WD science has been proven, with the known number of sources significantly expanded and a refinement of their physical properties <cit.>, enabling significant progress in this field. Similarly, the Gaia data have been instrumental in the discovery and characterization of many OCs <cit.>.
The fundamental properties of a WD – its mass, cooling age (the time since the WD left the tip of the asymptotic giant branch), and internal and atmospheric composition – can be obtained using spectroscopy, asteroseismology, or photometry. However, the spectroscopic and asteroseismic characterization of a large number of objects is observationally and computationally expensive. On the other hand, in the era of large, deep photometric all-sky surveys and with the availability of Gaia parallaxes and photometry, it is also possible to derive WD parameters using this type of data. However, photometry in the optical band by itself is unable to constrain the WD atmospheric composition, which is pivotal for the accurate determination of other WD physical parameters, as the atmosphere composition influences the way a WD cools. Despite this, it has been shown that the majority of observed WDs have DA-type atmospheres <cit.>. This fraction appears to be even higher in OCs <cit.>. Masses and cooling ages for nearby WDs with negligible extinction can be determined using Gaia photometry and parallaxes to a precision of several percent <cit.>.
Due to their nature, OCs provide homogeneous samples of WDs from coeval progenitors, all located at the same distance from the observer. Uncovering WD populations of OCs is pivotal for addressing numerous open issues in astrophysics. Of particular interest is the initial–final mass relation (IFMR), which links the star's initial mass, M_i, to its final mass, M_WD, at the end of the stellar evolution, when the star has ultimately evolved into a WD. The IFMR constrains the amount of mass locked away in the WD and how much material is returned to the ISM. Also, the high-mass end of the IFMR marks the limit at which stars undergo core-collapse SNe. The accurate determination of the IFMR relies on obtaining a large and clean sample of OC WDs <cit.>.
The IFMR determined in the literature contains extrinsic and intrinsic scatter. The intrinsic IFMR scatter is thought to be produced by the metallicity variation between the OCs that host WDs utilized for the IFMR determination because the stellar evolution, and especially the mass-loss in the last evolutionary stages, is thought to be significantly metallicity-dependent <cit.>. Another source of scatter may stem from the fact that the stellar evolution in the terminal phases might be inherently stochastic to an unknown degree and from the dispersion of the initial stellar rotational velocities. The main extrinsic components are the contamination from the WDs incorrectly assigned to OCs (i.e., physical nonmembers), measurement uncertainties and unaccounted systematic errors, incorrectly determined OC ages, and shortcomings in the stellar evolutionary models <cit.>.
A number of Gaia-based OC catalogs include tables of OC members stars, but the lower quality of astrometry and photometry at fainter magnitudes generally resulted in the exclusion of fainter stars from the analysis of OC parameters and the member tables in these catalogs. Because WDs tend to be located ∼10 mag below the OC MS in the OC color-magnitude diagram (CMD), only a small number of young WDs in the closest OCs are bright enough to be listed as cluster members in the current OC catalogs.
Due to these issues and the availability of improved astrometry and photometry in GDR3, there is an opportunity to update the census of OC WDs. To obtain a larger sample of OC WDs, it is possible to cross-match the known and candidate cataloged WDs with an OC catalog using positional, parallax, and proper motion criteria, and to filter out the spurious WD–OC pairs using photometric constraints that are dependent on the distance, reddening, and age of the matched OC. Alternatively, it is also viable to extend the search of OC members to fainter magnitudes in order to reach the OC WD population.
In this paper we present an all-sky census of WDs in nearby Galactic OCs that aims to increase the number of known bona fide WD–OC pairs using data from the Gaia mission. The search for possible OC WD members was conducted using the unsupervised clustering algorithm HDBSCAN[<https://hdbscan.readthedocs.io>] <cit.>. The membership probabilities of the potential OC WDs are also estimated, allowing us to quantify our confidence in the physical association between the OC and the WD. Photometric data from Gaia were added to remove additional spurious pairings and, together with photometry from additional sources, used to calculate WD masses and cooling ages. These parameters, in conjunction with the properties of the host OC and stellar evolutionary models, were used to calculate the WD progenitor masses and examine the IFMR.
This paper is structured as follows. In Sect. 2 we describe the data used in this study and the workflow used to construct the preliminary list of OCs that host at least one WD candidate. In Sect. 3 we describe how the clustering analysis was conducted, comment on the data quality, re-derive the OC group parallaxes, collate other OC properties, and filter out the probably spurious WD-OC pairs. In Sect. 4 we calculate the WD masses, cooling ages, and masses of their progenitors and examine the IFMR. Finally, we provide a summary and conclusion in Sect. 5.
§ PRELIMINARY SEARCH
Open clusters are observed throughout the Galactic disk, with the current census of well-established and characterized OCs numbering ∼1500 <cit.>, some of the cataloged objects lying at distances in excess of several kiloparsecs. Nevertheless, as OCs are located predominantly near the Galactic plane, where the source density is high and the absorption due to the ISM in the Galactic disk is substantial, the OC census is thought to be incomplete beyond 1 kpc. This is evidenced by the high number of new OCs discovered using GDR2 and (E)GDR3 data <cit.>. The visibility and the quality of astrometry and photometry of intrinsically faint objects within OCs, such as WDs, are especially affected by these factors and rapidly degrade with increasing distance.
The limitations inherent to observing such faint objects are also clearly evident in recent WD catalogs, such as that of <cit.>, who present a collection of almost 1.3 million WD candidates based on EGDR3 and also list the probability of each object being a bona fide WD (P_WD), computed using the spectroscopically confirmed WD sample from the Sloan Digital Sky Survey (SDSS). Using the photogeometric distances, d, collated from <cit.> for the likely WDs (P_WD > 0.5) in <cit.>, it can be noted that 90 % of the probable WDs lie at distances closer than 870 pc, with a median distance of ∼360 pc. However, a tail of WDs with d>1 kpc is present in the sample of likely WDs.
According to WD evolutionary models <cit.>, a 10 Myr old, DA WD with M≈0.6 M_⊙ has intrinsic absolute G_BP≈ 9.7 mag. Assuming no extinction and Gaia G_BP mag limit of ∼20.0 mag, this translates into the distance limit of ∼1.1 kpc where reliable astrometry and photometry of WDs can be obtained considering Gaia capabilities in GDR3. This adopted limit is chosen due to the issues with Gaia G_BP photometry for very faint sources, which starts to be less reliable for sources fainter than G_BP≳ 20 mag. Furthermore, the typical uncertainty of parallaxes and proper motions of WDs at this brightness is about 0.5 mas and 0.5 mas yr^-1, respectively, and rapidly deteriorating for objects fainter than 20 mag <cit.>. Such uncertainties in the astrometry make it difficult to establish credible OC memberships for these objects beyond the distances of ∼1 kpc.
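The quoted limit follows directly from the distance modulus; the short sketch below only restates the numbers given in the text (absolute G_BP of ∼9.7 mag, faint limit of ∼20.0 mag, zero extinction).

```python
# Distance at which a 10 Myr old, ~0.6 Msun DA WD (absolute G_BP ~ 9.7 mag)
# reaches the adopted Gaia faint limit of G_BP ~ 20.0 mag, assuming no extinction.
abs_bp = 9.7        # absolute G_BP magnitude [mag]
bp_limit = 20.0     # adopted apparent G_BP limit [mag]

d_limit_pc = 10 ** ((bp_limit - abs_bp + 5.0) / 5.0)
print(f"{d_limit_pc:.0f} pc")  # ~1150 pc, i.e. ~1.1 kpc
```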
<cit.> suggest circumventing the problems with the G_BP photometry of faint objects by using G - G_RP as a color indicator; however, this approach is ill suited for WDs. The evolutionary tracks for WDs of various masses are more closely packed together in the G - G_RP versus G CMD than in the more standard G_BP-G_RP versus G CMD, which makes the former CMD type less suitable for the WD characterization. Naturally, WDs younger than 10 Myr can be substantially brighter than 9.7 mag, making them detectable in the OCs further away than 1.1 kpc. However, the evolutionary tracks in the CMD are almost degenerate for such young WDs. Therefore, the analysis based on the CMDs constructed from the Gaia photometry is unable to provide accurate parameters for these objects. Therefore, considering the limits of Gaia photometry and astrometry, we adopt a distance limit of 1.1 kpc for OCs to be studied for the presence of WDs.
For the OC census, we considered the recent compilations by <cit.> and <cit.>, containing 2017 and 1743 OCs, respectively. We opted to adopt the richer catalog of <cit.> for the preliminary WD search, discarding the well-known and well-studied OCs Pleiades, Hyades, and Melotte 20. Their proximity and large extent in the sky make them unsuited to be studied with the clustering technique used in this work. Moreover, the census of the single WD population of these OCs is most likely complete <cit.>. Altogether, we retain 419 OCs within the distance limit of 1.1 kpc. These clusters span a large range of ages, from ∼6.5 Myr to ∼4.3 Gyr.
To search for OCs hosting at least one isolated WD, we used the positions, radii, parallaxes, and proper motions reported in <cit.> to query the Gaia archive for objects in the field of each OC as follows. First, a cone with a radius of 5× r_50 around the center of each OC was chosen as the positional criterion. The r_50 parameter is used in <cit.> as a measure of the angular OC radius and is defined as the radius containing half of the OC members.
Second, using the mean values of OC parallax, proper motions, and their dispersions listed in <cit.>, we applied cuts at 5σ around these astrometric quantities to discard the objects from the cone search that are clearly not OC members.
Finally, only the stars with complete astrometric solutions (5p or 6p) were retained. We also applied the recommended basic quality cuts for the Gaia data, removing all sources with a renormalized unit weight error (RUWE) greater than 1.4, a relative parallax error over 1.0, sources with no Gaia color, and very faint sources with G_BP>19.7 mag. The magnitude limit was set to this value in order to accommodate the median OC extinction of A_G∼ 0.4 mag and to circumvent the problems in the BP photometry of faint blue sources as described in <cit.>.
Objects with RUWE>1.4 have a high probability of having ill-behaved astrometric solutions, yielding incorrect astrometric parameters. The inclusion of these objects in the analysis can lead to the detection of spurious OC members. Objects with RUWE>1.4 can occur in fields with high source densities, where close doubles that are not correctly handled in GDR3 can arise. High RUWE values can also indicate unresolved binarity. Therefore, filtering such objects also removes some contamination from unresolved binaries that are unsuitable for the IFMR determination.
While the OC parameters given in <cit.> are based on GDR2, the relative proximity of the studied OCs, together with the fact that the OC parameters are derived through robust statistics based on a large number of OC member stars, means that we do not expect the OC parameters to change appreciably in GDR3 and retaining the stars with the parallaxes and proper motions within 5σ of the cataloged astrometric parameters of the studied OCs is sufficient to recover practically all physical OC members.
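An illustrative reconstruction of one such per-cluster archive query is sketched below; it is not the exact query used in this work. The table and column names follow the public GDR3 schema (astrometric_params_solved values of 31 and 95 correspond to the 5p and 6p solutions), while the cluster center, r_50, and the mean astrometric values and dispersions are placeholders to be taken from the adopted OC catalog.

```python
# Hedged sketch of one per-cluster Gaia archive query (criteria of Sect. 2).
# All numerical values below are placeholders, not those of any real OC.
from astroquery.gaia import Gaia

ra0, dec0, r50 = 130.0, 19.6, 0.3      # OC centre [deg] and r50 [deg]
plx, s_plx = 5.3, 0.1                  # mean parallax and dispersion [mas]
pmra0, s_pmra = -36.0, 0.4             # mean pmra* and dispersion [mas/yr]
pmdec0, s_pmdec = -13.0, 0.4           # mean pmdec and dispersion [mas/yr]

query = f"""
SELECT source_id, ra, dec, parallax, parallax_error, pmra, pmdec,
       phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag, bp_rp, ruwe
FROM gaiadr3.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', {ra0}, {dec0}, {5 * r50})) = 1
  AND parallax BETWEEN {plx - 5 * s_plx} AND {plx + 5 * s_plx}
  AND pmra  BETWEEN {pmra0 - 5 * s_pmra}  AND {pmra0 + 5 * s_pmra}
  AND pmdec BETWEEN {pmdec0 - 5 * s_pmdec} AND {pmdec0 + 5 * s_pmdec}
  AND astrometric_params_solved IN (31, 95)
  AND ruwe < 1.4
  AND parallax_over_error > 1.0
  AND bp_rp IS NOT NULL
  AND phot_bp_mean_mag < 19.7
"""
stars = Gaia.launch_job_async(query).get_results()
```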
For the objects obtained in these preliminary queries within the OC fields, we corrected the reported GDR3 parallaxes using the zero-point correction described in <cit.>. This correction depends on the type of the astrometric solution (5p or 6p solution), magnitude, color, and sky position[The zero-point is calculated using the python script provided in <https://gitlab.com/icc-ub/public/Gaiadr3_zeropoint>.].
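The correction itself can be applied per source with a small helper such as the one below; this assumes the interface of the publicly released gaiadr3_zeropoint package linked in the footnote, and it requires the additional GDR3 columns nu_eff_used_in_astrometry, pseudocolour, ecl_lat, and astrometric_params_solved to be included in the archive query.

```python
# Hedged sketch of the per-source parallax zero-point correction
# (following Lindegren et al.), assuming the gaiadr3_zeropoint package interface.
from zero_point import zpt

def corrected_parallax(parallax, gmag, nu_eff, pseudocolour, ecl_lat, soltype):
    """Return parallaxes [mas] with the estimated zero point subtracted.
    Sources outside the validity ranges of the correction may need special handling."""
    zpt.load_tables()
    zp = zpt.get_zpt(gmag, nu_eff, pseudocolour, ecl_lat, soltype)
    return parallax - zp
```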
For each cataloged OC, <cit.> list the extinction values in A_V. We converted these to Gaia EDR3 passbands using the conversion factors in <cit.>:
A_G = 0.835 A_V,
A_BP = 1.139 A_V,
A_RP = 0.650 A_V.
Using these extinction factors and the cataloged parallaxes of each OC, the absolute de-reddened colors and magnitudes of all objects recovered by the queries have been derived, under the assumption of OC membership.
To finally select all OCs potentially hosting at least one isolated WD, we constructed an absolute (G_BP-G_RP) versus M_G CMD for all stars in the studied OC fields. We then applied a cut in the OC CMD below which all single WDs physically related to the OC are expected to be found:
M_G > 7 + 5.5(G_BP-G_RP).
The reason for the choice of the adopted criterion is illustrated in Fig. <ref>, which shows the CMD of high-confidence WDs (P_WD>0.5) from <cit.>, color-coded according to their mass. WDs span a wide range of masses, from low-mass objects with M_WD≲ 0.5 M_⊙ that have cores made up of He, up to high-mass WDs, which possibly host oxygen/neon cores, extending, in theory, all the way up to the Chandrasekhar limit. However, the lifetimes of the progenitors of He-core WDs are expected to be larger than the Hubble time, if isolated stellar evolution is assumed. Therefore, the majority of the observed population of these WDs is thought to be a consequence of close binary evolution involving mass transfer <cit.>. However, an alternative mechanism forming low-mass WDs from single metal-rich stars through extreme mass loss might be active as well <cit.>. Excluding these objects significantly reduces the CMD parameter space where the single WDs physically associated with OCs can lie, and the cooling track for a 0.5 M_⊙ DA WD approximately delineates the upper boundary of this parameter space. On the other hand, the lower boundary is delineated by the cooling curve of the most massive WDs. In theory, the highest possible mass of a stable WD should be around ∼1.38 M_⊙ <cit.>. However, so far, the most massive WDs found to be residing in OCs are only about 1.0 M_⊙, with a handful of more massive objects (∼1.2 M_⊙) that were identified as having been kicked out of their parent OCs <cit.>. The paucity of high-mass WDs within OCs hints at the presence of a physical mechanism that imparts a velocity kick on the order of one to a few km s^-1 to the nascent high-mass WDs <cit.>. In light of this, the cooling curve for a 1.2 M_⊙ DA WD is plotted in Fig. <ref>. It is reasonable to expect that a vast majority of the WDs physically related to OCs will lie above this curve in the OC CMDs. Of course, the adopted selection criterion for the preliminary search is also sensitive to more massive WDs than this. Lastly, informed by the ages of the OCs listed in <cit.>, we plot a 4.3 Gyr cooling isochrone contour (the age of the oldest OC in the sample) in Fig. <ref>. Any WDs with cooling times higher than the age of the studied OC cannot have their origin within this particular OC.
While the adopted cut is very liberal and allows for significant uncertainties in the OC and candidate WD astrometry and photometry, it cannot be considered inclusive for all double WD systems and other WD binaries where the secondary significantly contributes to the total luminosity and the color of the system. However, since the IFMR only applies to isolated WDs, this is not detrimental to this particular scientific case.
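For reference, the photometric preselection condenses into a single mask; the minimal sketch below assumes the extinction conversions and the CMD cut given above, with a common group parallax and A_V applied to all queried sources.

```python
import numpy as np

def wd_candidate_mask(g, bp, rp, plx_oc_mas, a_v):
    """True for sources falling below the adopted WD cut in the OC CMD,
    assuming OC membership (common distance and extinction)."""
    a_g, a_bp, a_rp = 0.835 * a_v, 1.139 * a_v, 0.650 * a_v
    dist_mod = 5.0 * np.log10(1000.0 / plx_oc_mas) - 5.0
    abs_g = g - dist_mod - a_g
    color0 = (bp - a_bp) - (rp - a_rp)
    return abs_g > 7.0 + 5.5 * color0
```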
After this first selection step, the list consisted of 238 OCs that potentially host at least one WD candidate that satisfies the adopted selection cut.
§ CLUSTERING ANALYSIS AND ESTABLISHING WD PARAMETERS
In many cases, the preliminary selection of the OC stars in the previous section resulted in remarkably clean OC CMD diagrams, attesting to the quality of both the Gaia astrometry and the OC catalogs used in this work. However, for studying objects near the faint end of the brightness distribution, a more refined selection of OC stars is required. This is especially relevant for the fainter objects located below the OC MS, where the field star contamination that cannot be completely eliminated by the preliminary OC stars selection poses an issue.
Since the primary aim of this study is to examine a semi-empirical IFMR that assumes that every analyzed WD is a product of a single-star evolution and is a coeval member of its associated OC, we need to search for WDs physically associated with OCs using the astrometric and photometric criteria. Firstly, in order for a WD to be considered a member of a particular OC, it needs to share a consistent position in the sky, distance, and proper motion with other members of the OC. Secondly, the bona fide isolated OC WDs occupy only the specific region of the OC CMD. This position is prescribed by the WD mass and its cooling age. Additionally, the cooling age of a WD associated with an OC cannot be greater than the age of the OC itself. This further constrains the possible positions that the OC WDs can occupy in the OC CMD.
The first criterion for finding WDs physically related to an OC is their astrometric membership. This implies that the WD and the rest of the OC stars are clustered together in the astrometric phase space, which can be ascertained using clustering analysis. In the following text, the term “cluster” does not refer to a physical OC but rather to a grouping of stars sharing similar properties in the N-dimensional astrometric parameter space.
A number of clustering methods have widely been utilized to search for structures in astronomical data with various degrees of success. However, for the detection of the structures such as the stellar streams, associations, and star clusters (including OCs) in the data set such as the one provided by the Gaia mission, only some of them are viable. Firstly, a suitable algorithm needs to be reasonably fast in order to conduct a clustering analysis on a large number of objects (∼ 10^6) in multiple dimensions within a reasonable time frame. Secondly, it must be able to discern between the clustered data and the data points belonging to the noise, as obviously, not all considered objects are necessarily part of a cluster within the data set. This is also grounded in reality, as only a small part of the stars in the Galaxy currently reside in OCs because OCs as gravitationally bound stellar groupings have relatively limited lifetimes. For a typical observed stellar field containing an OC, the majority of the stars will not belong to the OC but they will instead be part of the background/foreground Galactic disk population. Furthermore, the algorithm needs to be able to detect clusters of various shapes, sizes, and densities, as the properties of the astrometric phase space vary significantly depending on the position in the Galaxy. This also partly connects to the last requirement, where a suitable algorithm needs to ideally be provided as little information as possible prior to the analysis. Most importantly, it is generally not known a priori how many clusters are present in the studied data set.
The most prominent clustering methods used in the field of OC research are the UPMASK and pyUPMASK codes <cit.> and the DBSCAN <cit.> and HDBSCAN <cit.> algorithms. All of them have been widely utilized for OC searches and characterization as they satisfy most or all of the criteria outlined above <cit.>.
In this work we utilize the HDBSCAN algorithm, which is intuitive to use and has proven to be very potent for studying OCs and other structures in the astrometric phase space <cit.>. HDBSCAN is a hierarchical clustering algorithm that builds on DBSCAN. The main advantages of using HDBSCAN over DBSCAN are that it is able to detect clusters of varying densities and that it does not require the non-intuitive epsilon hyperparameter that is required by DBSCAN, therefore providing results that are less biased than the DBSCAN output, which is significantly affected by the somewhat arbitrary choice of this clustering hyperparameter. Moreover, HDBSCAN is less sensitive to the selection of the clustering parameters than DBSCAN while also being slightly faster.
The main required hyperparameter (i.e., a parameter specified by the user) that controls the performance of HDBSCAN is min_cluster_size, which sets the minimum number of data points required to form a cluster. There is a possibility to specify a second hyperparameter, min_samples, that determines how conservative the clustering is, with larger values of this parameter yielding more points classified as noise and clusters confined to progressively denser regions of the phase space. By default, min_samples is set to the same value as min_cluster_size.
HDBSCAN also offers a choice of two possible clustering approaches that dictate how the algorithm selects clusters from the cluster tree hierarchy. The default method is “excess of mass,” which tends to select about one or two populous clusters and a number of smaller clusters. Another option is to use the “leaf” method, which yields a larger number of more homogeneous clusters. <cit.> note that the leaf clustering method generally performs better in the identification of OCs. Therefore, we adopted the leaf approach in this work.
§.§ HDBSCAN clustering
A new search was conducted using the coordinates of the OCs preselected in Sect. 2 based on their astrometric parameters listed in <cit.>. The query criteria are similar to the ones in Sect. 2, albeit with some notable differences. First, we used a cone search radius of 6× r_50 instead of 5× r_50. This was done to increase the number of field stars in the OC surroundings and thus the contrast of the OC in the positional space for the clustering algorithm.
Similarly, the studied proper motion parameter space was also increased to ±10 mas yr^-1 around the central values of the OC proper motion. This is sufficient since the maximum standard deviation of the proper motion for the preselected OCs is only ∼1.3 mas yr^-1. Finally, the parallax was limited to objects with ϖ>0.75 mas in order to reduce the crowding of the parameter space by background objects. The five astrometric parameters used for the clustering analysis (α, δ, μ_α^⋆, μ_δ, ϖ) were rescaled using the robust scaler from <cit.> to have a zero median and a unit interquartile range.
A typical OC comprises a dense compact core and a more extended sparse halo/corona, sometimes with tidal tails. Both these extended structures can span several tens of pc from the OC center <cit.>. Therefore, the positional constraints are not as informative for clustering analysis as the other dimensions, such as proper motions. This could be resolved if it were possible to apply different weights to the dimensions used for clustering, but HDBSCAN does not offer this feature. Because of this, we ran HDBSCAN in two ways – in 5D (α, δ, μ_α^⋆, μ_δ, ϖ) and in 3D (μ_α^⋆, μ_δ, ϖ). This was done because including the positional criteria in the clustering analysis may exclude physical members in the OC outskirts. On the other hand, excluding the positional criteria may introduce more contamination in the recovered clusters. For both the 5D and 3D analysis, we tested the clustering performance with different input hyperparameter combinations on a representative subset of 20 OCs that were picked from the studied OCs. This was necessary because, for some hyperparameter combinations, a significant portion of the studied OCs was not detected by the algorithm in their fields and was instead assigned to the noise. Also, for some combinations, only the dense OC cores were detected. Therefore, most importantly, we aimed to maximize the number of the detected OCs. Of secondary importance was the completeness of the OC member population, which was assessed by the comparison with the OC members listed in <cit.>. Out of the tested combinations, one pair of min_cluster_size and min_samples values was found to offer the best performance for the 5D case and another pair for the 3D case, and these were adopted in the further analysis.
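A minimal sketch of this clustering step is given below; the hyperparameter values shown are placeholders rather than the adopted ones, and the input array is assumed to hold either the 3D or the 5D astrometric quantities.

```python
import hdbscan
from sklearn.preprocessing import RobustScaler

def cluster_field(X, min_cluster_size=20, min_samples=10):
    """X: (n_stars, n_dims) array of (pmra*, pmdec, parallax) for the 3D case
    or (ra, dec, pmra*, pmdec, parallax) for the 5D case."""
    X_scaled = RobustScaler().fit_transform(X)   # zero median, unit IQR
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                min_samples=min_samples,
                                cluster_selection_method="leaf")
    return clusterer.fit_predict(X_scaled)       # label -1 marks noise
```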
Depending on the input parameters, HDBSCAN typically finds several tens of clusters in each of the studied OC fields. To facilitate easier matching of the physical OC of interest to the statistical clusters detected by HDBSCAN, we computed the median proper motion values for each detected cluster and retained only those with the median proper motion within 3σ of the value listed in <cit.> for the targeted OC. Doing so significantly reduced the number of clusters that needed to be inspected. The cluster corresponding to the physical OC was determined by plotting the CMD, parallax histogram, vector-point, and position diagram for all clusters and overlaying the data for the OC members cataloged in <cit.> on top of these diagrams and choosing the cluster that offered the best match.
§.§ Membership probability
Aside from its inability to apply different weights to different clustering dimensions, another downside of HDBSCAN is that it does not take into consideration uncertainties in the input data. To rectify this and to compute membership probabilities of OC WD candidates we used a Monte Carlo approach, taking the uncertainties and correlations in the OC WD candidate astrometry into consideration. We conducted 100 runs, each time drawing a new set of parallax and proper motion values for the WD candidate based on a multivariate normal distribution, which was constructed using the OC WD candidate astrometric quantities, their uncertainties, and the correlation coefficients listed in the GDR3 catalog. Having obtained these new values for the considered object, we reapplied the HDBSCAN clustering to the OC field. The probability of the WD candidate membership in the particular OC is then the fraction of runs in which HDBSCAN assigns the WD candidate to the OC, as opposed to assigning it elsewhere – either to the field population or to some other statistical or physical cluster present in the OC field. For instance, a WD assigned to the OC in 50 out of the 100 runs of the algorithm is assigned a membership probability of 0.5.
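The procedure can be sketched as follows; assign_labels stands for a hypothetical user-supplied wrapper (not part of any package) that reruns the clustering of the OC field and reports both the label given to the WD candidate and the label of the cluster matched to the physical OC.

```python
import numpy as np

def membership_probability(field_X, wd_mean, wd_cov, assign_labels,
                           n_runs=100, seed=0):
    """field_X: astrometry of the field stars without the WD candidate.
    wd_mean, wd_cov: mean vector and covariance matrix of the candidate's
    (pmra*, pmdec, parallax), built from the GDR3 errors and correlations.
    assign_labels(X): hypothetical helper returning (wd_label, oc_label)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_runs):
        wd_draw = rng.multivariate_normal(wd_mean, wd_cov)
        X = np.vstack([field_X, wd_draw])       # candidate appended as last row
        wd_label, oc_label = assign_labels(X)
        hits += int(wd_label == oc_label)
    return hits / n_runs
```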
§.§ Data quality
The faintness of our objects of interest and their position within, or in projection onto, OCs located in the Galactic plane make it important to assess the quality of the Gaia astrometry and photometry of these objects, which forms the basis of the further analysis. Some preliminary filtering using data quality indicators and source characteristics available from the Gaia catalog (cuts based on RUWE, relative parallax error, and brightness) has already been done in the OC field queries in Sect. 2. Such filtering may not be sufficient to remove all unreliable sources. However, other quality indicators can be used to assess the quality and reliability of the Gaia data.
The candidate OC WDs were cross-matched with the catalog of <cit.>, who provide the astrometric fidelity flag. This is a reliability diagnostic based on a neural network classifier trained on the GDR3 astrometric entries of a set of presumably bad and presumably trustworthy GDR3 sources. The value of the flag varies between 0 and 1.0, where 1.0 indicates a perfectly reliable solution, whereas 0.0 indicates a source with untrustworthy astrometry. Objects with spurious astrometric solutions can be filtered out from the analysis by removing the sources with the flag lower than 0.5.
It is also possible to identify the sources with unreliable photometry. This can be done using the corrected value of the flux excess factor <cit.>, C^⋆. For well-behaved point sources, C^⋆ should have a flat distribution when plotted with Gaia color that is centered on zero. Significant deviations from this trend may indicate that the source photometry may be contaminated by flux from objects in its proximity. When culling the sources with potentially affected photometry, we discarded objects with |C^⋆|/σ_C^⋆ (G) > 5, where σ_C^⋆ represents the 1σ scatter expected to be present for well-behaved sources, which is computed as a function of G according to Eq. 18 of <cit.>.
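The photometric filter can be written compactly as below; the corrected excess factor C^⋆ is assumed to have been computed beforehand following the cited work, and the power-law coefficients for σ_C^⋆(G) are quoted from memory of Eq. 18 of that paper and should be verified against it before use.

```python
import numpy as np

def reliable_photometry(cstar, gmag, nsigma=5.0):
    """cstar: corrected BP/RP flux excess factor C* (precomputed).
    The sigma coefficients are as commonly quoted for Riello et al. (2021),
    Eq. 18; verify against the paper."""
    sigma_cstar = 0.0059898 + 8.817481e-12 * gmag**7.618399
    return np.abs(cstar) < nsigma * sigma_cstar
```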
§.§ OC parallaxes
We computed group parallaxes of the recovered OCs using the member stars recovered by HDBSCAN and after applying a 2-σ clipping around the median parallax value of the OC members and following the procedure described in <cit.>. Due to a typically large number of the recovered OC member stars, the statistical uncertainty of the group OC parallax is very small, only up to a few μas. However, the Gaia parallaxes have an angular covariance that places limits on the minimum achievable uncertainty for the OC group parallaxes, and the errors stemming from these covariances dominate the total error budget. We list the derived group parallaxes of the OCs that are potential hosts of the newly characterized WDs in Table <ref>.
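A simplified sketch of the group-parallax computation is shown below; it reproduces only the 2σ clipping and a weighted average, while the systematic error floor arising from the angular covariance of the Gaia parallaxes (which dominates the total error budget, as noted above) has to be added separately following the cited procedure.

```python
import numpy as np
from astropy.stats import sigma_clip

def group_parallax(plx, plx_err):
    """plx, plx_err: zero-point-corrected member parallaxes [mas] and errors."""
    clipped = sigma_clip(plx, sigma=2, cenfunc="median", maxiters=None)
    keep = ~clipped.mask
    w = 1.0 / plx_err[keep] ** 2
    plx_oc = np.sum(w * plx[keep]) / np.sum(w)
    stat_err = np.sqrt(1.0 / np.sum(w))    # statistical component only
    return plx_oc, stat_err
```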
§.§ Supplementing the WD sample with the Hunt & Reffert catalog
Recently[This catalog was released as a preprint during the first round of revisions of this paper.], <cit.> constructed a large catalog of star clusters, also using a methodology based on HDBSCAN. Altogether, they list 7167 clusters, with the majority of them being OCs. For each cluster, they also provide a list of members with membership probabilities. Aside from 739 newly detected clusters, the novelty of this work is the depth of their list of cluster members, where the adoption of the selection criterion of <cit.> allows the inclusion of objects down to G ∼ 20 mag for some OCs, which is notably deeper than the membership lists constructed using a simple magnitude cut <cit.>. This means that their OC member lists are more likely to contain a number of previously unstudied WDs than the previous works in the literature.
In order to leverage this catalog, we considered a sample of astrometrically and photometrically reliable <cit.> OCs with distances below 2 kpc, which yielded 2257 OCs. Taking into consideration the cluster members with membership probabilities higher than 0.5, we calculated their absolute magnitudes and de-reddened colors using the OC parameters listed in <cit.> and applied the criterion from Eq. <ref> to identify the OCs possibly hosting WDs. After excluding the objects with RUWE>1.4 and |C^⋆|/σ_C^⋆ (G) > 5, this first selection identified 67 OCs potentially hosting at least one WD in this catalog. For these OCs, we also calculated the group parallaxes and their errors using the methodology outlined in Sect. 3.4.
§.§ Other OC parameters
We make use of the published OC catalogs to get other OC physical parameters, such as extinction, total age, and metallicity. <cit.> do not list OC metallicities and for OC extinctions and total ages, they do not provide explicit uncertainty values in their data table. However, they report that the OC extinction uncertainties typically span the range 0.1–0.2 mag, and for the total ages (log t_OC) the uncertainty ranges 0.15–0.25 for young OCs and 0.1–0.2 for the older objects. <cit.> provide explicit uncertainties for OC extinctions, total ages, and metallicities. However, they do not list the parameters for some of the “UBC” and “UPK” OCs <cit.> that have been identified as likely WD hosts. Therefore, we adopt the OC parameters from <cit.>, except for the few OCs not included in their catalog. For these objects, we adopt the parameters listed in <cit.>, with the conservative uncertainty estimates of 0.2 mag for the extinction, log t_OC/yr of 0.2 for the total OC age, and we assume the OC metallicities to be solar. If the OC is not present in both of these catalogs, we adopt its age and extinction from <cit.>, where we again assume solar metallicities, as this catalog also does not contain this information. The OC parameters collated from <cit.>, <cit.>, and <cit.> are also tabulated in Table <ref>.
§.§ Position of the WDs in the OC CMDs and preliminary cooling ages
Figure <ref> shows the CMD of the OC WD sample in Gaia filters with the cooling tracks for 0.4 and 1.2 M_⊙ DA WDs from <cit.>. We used this CMD and the 0.4 M_⊙ cooling track to select the WD candidates that are consistent with being isolated stars. No objects needed to be cut based on the excessive cooling age as compared to the maximum total age of the studied OCs.
Additional filtering can be done by comparing the WD cooling age (t_cool) with the overall OC age (t_OC), as WDs that are OC members cannot have t_cool larger than t_OC. The coordinates of WD candidates in the OC CMD constructed from the Gaia photometry can be converted into preliminary t_cool estimates by interpolation between the set of cooling tracks for DA WDs with C/O cores <cit.>.
To account for the uncertainties in WD candidate photometry, OC group parallax, and OC extinction, we performed 10^4 Monte Carlo draws, each time drawing values of these parameters from normal distributions (assumed to be independent of each other) characterized by their mean values and 1σ errors. These were used to calculate the absolute G magnitude and de-reddened color of an OC WD, which were in turn used to interpolate preliminary t_cool from the WD cooling tracks for each simulation run using the Python tool of <cit.>. Here and in the subsequent analysis, the listed values are medians of the values obtained from the simulations and the quoted uncertainties were calculated using the 16^th and 84^th percentiles of the resulting distributions. After this step, we excluded the objects with their median value of t_cool estimate higher than the t_OC of the associated OC, yielding the final sample of 77 possible OC WDs, listed in Table <ref>.
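A sketch of this Monte Carlo propagation is given below; the arrays track_color, track_mg, and track_logtcool stand for a pre-tabulated grid of points along the adopted DA cooling tracks and are placeholders for the interface of the actual interpolation tool.

```python
import numpy as np
from scipy.interpolate import griddata

def preliminary_logtcool(g, bp, rp, sig_g, sig_bp, sig_rp,
                         plx_oc, sig_plx, a_v, sig_av,
                         track_color, track_mg, track_logtcool,
                         n_draws=10_000, seed=1):
    """Median and 16th/84th-percentile spread of log10(t_cool / yr)."""
    rng = np.random.default_rng(seed)
    av = rng.normal(a_v, sig_av, n_draws)
    plx = rng.normal(plx_oc, sig_plx, n_draws)
    g_i, bp_i, rp_i = (rng.normal(m, s, n_draws)
                       for m, s in ((g, sig_g), (bp, sig_bp), (rp, sig_rp)))
    dist_mod = 5.0 * np.log10(1000.0 / plx) - 5.0
    mg = g_i - dist_mod - 0.835 * av
    col = (bp_i - 1.139 * av) - (rp_i - 0.650 * av)
    logt = griddata((track_color, track_mg), track_logtcool, (col, mg),
                    method="linear")
    med = np.nanmedian(logt)
    p16, p84 = np.nanpercentile(logt, [16, 84])
    return med, med - p16, p84 - med
```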
§.§ Issue of contamination and uncertainties in the OC parameters
It needs to be noted that constructing a clean sample of OC WDs and conducting meaningful studies that build on this sample are extremely challenging due to the low brightness of these objects and, despite a lot of recent progress, also due to significant uncertainties in the OC parameters listed in the recent catalogs. The issue is apparent also in the Fig. <ref>, where it can be seen that some objects, despite their high OC astrometric membership probability, are located blueward of the limit where the WDs should lie in the CMD. This can be attributed to various reasons, such as uncorrected issues with the photometry, spurious matching of the WD to the OC, or an overestimated extinction of the OC.
§ WD PROPERTIES AND THE IFMR
There is a large volume of literature dealing with finding OC WDs, deriving their properties, and using them to study the IFMR. The analysis conducted in the previous sections allowed us to recover and independently confirm the membership of several OC WDs previously studied in the literature. We collate them and their derived properties relevant to the IFMR in Table <ref>. Since they were predominantly subjects of dedicated studies, often also employing spectroscopic data, we do not re-derive their parameters again in this work.
However, the majority of the OC WD candidates are either novel detections or were previously the subject of only limited study. While M_WD and t_cool can be calculated by interpolating between cooling tracks in the appropriate CMD, a better estimate of these quantities can be obtained by supplementing the Gaia data with additional high-quality optical and UV photometry. Therefore, we used the Vizier service <cit.> to search for supplementary SDSS <cit.>, Pan-STARRS <cit.>, and Galaxy Evolution Explorer <cit.> photometry. Photometric fitting was then done using the fitting tool of <cit.>, taking into consideration the group parallaxes and extinctions of the host OCs. We used models from <cit.> assuming pure H atmospheres, and the Markov chain Monte Carlo sampling method emcee <cit.> was used to find the WD parameters. This yielded bolometric magnitudes and M_WD (together with their uncertainties). We then derived t_cool values through the mappings between the WD parameters using the tool from <cit.>. We list the calculated M_WD and t_cool and their uncertainties in Table <ref>. The listed uncertainties need to be considered as lower limits only, since the fitting tool does not consider uncertainties in the extinction when fitting the WD photometry.
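Only the MCMC scaffolding of such a photometric fit is sketched below; synth_abs_mag is a hypothetical interpolator over the adopted pure-H atmosphere grid (it is not the interface of the actual fitting tool), and the sampled parameters, priors, and starting points are illustrative.

```python
import numpy as np
import emcee

def fit_wd_photometry(bands, m_obs, m_err, dist_mod, a_band, synth_abs_mag,
                      nwalkers=32, nsteps=5000, seed=3):
    """Sample (Teff, logg) given observed apparent magnitudes m_obs +/- m_err
    in `bands`, a fixed distance modulus and per-band extinctions a_band.
    synth_abs_mag(teff, logg, band) is a hypothetical synthetic-photometry
    interpolator supplied by the user."""
    def log_prob(theta):
        teff, logg = theta
        if not (3000.0 < teff < 150000.0 and 6.5 < logg < 9.5):
            return -np.inf                      # flat priors within the grid
        model = np.array([synth_abs_mag(teff, logg, b) + dist_mod + a_band[b]
                          for b in bands])
        return -0.5 * np.sum(((m_obs - model) / m_err) ** 2)

    rng = np.random.default_rng(seed)
    p0 = np.column_stack([rng.uniform(6000.0, 40000.0, nwalkers),
                          rng.uniform(7.5, 8.5, nwalkers)])
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob)
    sampler.run_mcmc(p0, nsteps, progress=True)
    return sampler.get_chain(discard=nsteps // 2, flat=True)
```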
After that, t_cool can be subtracted from the total OC age t_OC, yielding the lifetime of the progenitor t_i. To convert t_i into the initial progenitor mass M_i we adopt the version 1.2S of the PARSEC tracks <cit.> with COLIBRI TP-AGB tracks <cit.>[Downloaded from <http://stev.oapd.inaf.it/cgi-bin/cmd>]. To quantify the uncertainty in the estimate of M_i, we conduct 100 Monte Carlo simulations, each time drawing a random value from the distributions of t_cool and t_OC. Considering also the OC metallicity, we then obtained mean M_i values and their uncertainties, which are also listed in Table <ref>.
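A sketch of this last step is given below; iso_logtlife and iso_mini are placeholder arrays tabulating the stellar lifetime against the initial mass, extracted from the adopted isochrone grid at the metallicity of the host OC.

```python
import numpy as np

def progenitor_mass(logt_oc, sig_logt_oc, logt_cool, sig_logt_cool,
                    iso_logtlife, iso_mini, n_draws=100, seed=2):
    """Mean initial mass and its scatter from Monte Carlo draws of the OC age
    and the WD cooling age (both given as log10 of years)."""
    rng = np.random.default_rng(seed)
    t_oc = 10.0 ** rng.normal(logt_oc, sig_logt_oc, n_draws)
    t_cool = 10.0 ** rng.normal(logt_cool, sig_logt_cool, n_draws)
    t_i = t_oc - t_cool                        # progenitor lifetime [yr]
    t_i[t_i <= 0.0] = np.nan                   # discard unphysical draws
    order = np.argsort(iso_logtlife)           # np.interp needs ascending x
    m_i = np.interp(np.log10(t_i), iso_logtlife[order], iso_mini[order])
    return np.nanmean(m_i), np.nanstd(m_i)
```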
Using the derived values of M_WD and M_i for the novel OC WDs and also collating the data from the literature (Table <ref>), we construct the IFMR in Fig. <ref>. The IFMR shows several interesting features, which we discuss in this section. However, it also shows a number of probable contaminants well below the IFMR relation; they are probably field WDs that are either spuriously matched to an OC or objects that are the results of atypical or non-isolated evolutionary pathways.
The high-mass end of the IFMR is informative of the maximum mass of an isolated star that leads to the creation of a WD, which is a fundamental astrophysical quantity. This mass delineates the boundary between the stars that undergo a core-collapse SN explosion and those that do not. Therefore, together with the initial mass function (IMF), it controls the SN rate, which then, in turn, controls the formation rate of neutron stars and black holes, chemical enrichment of the ISM and intergalactic medium, dust production, and star formation rate linked to the SN mechanical feedback. The SN rate is quite sensitive to this progenitor mass limit, as the slope of the IMF is relatively steep near its generally adopted value of 8 M_⊙. Therefore, a change of even ∼ 1 M_⊙ can substantially alter the expected SN rate.
Therefore, high-mass WDs that come from isolated stellar evolution in OCs can provide valuable constraints on this progenitor mass limit. However, there is a notable paucity of high-mass WDs in OCs <cit.>. Indeed, the most massive WDs that are OC members only have M ∼ 1 M_⊙, far below the theoretical WD mass limit of ∼1.38 M_⊙ <cit.>. Moreover, their progenitors do not seem to exceed ∼ 6 M_⊙ <cit.>. The paucity of high-mass WDs within OCs is noticeable also in this study.
We detect only six novel WDs with masses M_WD≳ 1.0 M_⊙ and they do not seem to come from particularly massive progenitors. Some of this paucity most likely stems from the selection effects. Firstly, with increasing mass WDs get more compact, giving them significantly smaller sizes and therefore lower luminosities, making them more difficult to detect and characterize. Also, high-mass WDs are more likely to be a part of a binary system with a companion that dominates the overall emission of the system. This is due to a general trend of increasing binary fraction with increasing WD stellar progenitor mass <cit.>. Moreover, high-mass WDs cool relatively rapidly, and the solar neighborhood is deficient in OCs young enough to be currently hosting stars capable of producing massive WDs. Some physical mechanisms can also be at work. As mentioned before, it is possible that WDs are ejected from their parent OCs by velocity kicks imparted during the WD formation, which might be especially relevant for more massive WDs <cit.>. It is sufficient for the kick to be on the order ∼1 km s^-1 for the WD to escape the OC core within a few million years. Especially relevant is the discovery of one such runaway WD originating within the OC Alpha Per. This WD has the progenitor mass of ∼8.5 M_⊙, placing it at the currently accepted mass limit for the WD formation. Yet, the mass of the WD itself is ∼1.2 M_⊙, which is still about 0.2 M_⊙ below the Chandrasekhar limit <cit.>. This might hint at the flattening of the IFMR in the region where M_i>6.0 M_⊙ and moving the new WD formation mass limit by a significant margin to ≳12 M_⊙. However, more WDs in this mass range must be discovered to put more solid constraints on the IFMR in this region. Yet, moving this limit upward like this would reconcile the observed type II SN rate that seems to be too low if the progenitor mass limit for the WD formation is at ∼8 M_⊙ <cit.>.
Another approach is to address this problem from the opposite side – by searching for the WD progenitor mass limit by constraining the progenitor mass required for the star to explode in a SN. This can be done in a direct way, by identifying the SN progenitors in the archival images before the SN event and checking for their presence after the SN has faded sufficiently <cit.>. It is also possible to study the surrounding stellar population in order to get constraints on the SN progenitor mass <cit.>, as it can be expected that due to the limited lifetime of the high-mass stars, they are still largely clustered together with the other stars of the common origin in the same star-formation event <cit.>. This method is viable for both SNe and SN remnants in the Milky Way or other galaxies up to a few megaparsecs where the stellar populations can still be resolved. From these studies, it seems that the minimum progenitor mass required for an SN (and the maximum progenitor mass yielding WDs) is around 7–9 M_⊙.
The paucity of high-mass OC WDs with progenitor masses above 6 M_⊙ together with a possible flattening of the IFMR shape toward higher progenitor masses and rather low progenitor masses of some SNe are not straightforward to reconcile. However, this can be resolved if the effects of the stellar binarity are taken into account. Most of the high-mass stars are in a binary and most of them will also experience some kind of close interaction with their companion at some point in their lives <cit.>. These interaction processes play a critical role in the evolution of massive stars because their magnitude, duration, and their timing in the life of the star have a major impact on the mass and structure of the stellar core as the star reaches the terminal phases of its evolution. This then affects the nature and properties of the nascent stellar remnant. Most notably, <cit.> proposed that the stars with masses ∼8–11 M_⊙ are expected to explode in an SN if they are the primary component in a compact binary. This can happen if they experience a significant mass loss due to a binary interaction before the onset of the second dredge-up, which normally significantly reduces the core mass of the isolated stars. A second dredge-up phase takes place after the star has reached the asymptotic giant branch phase and after the convective envelope starts reaching into the stellar core, dredging up a significant amount of matter. The core can lose so much mass that it is no longer massive enough for an SN. However, if the stellar envelope is lost before this, the second dredge-up phase cannot take place and the core does not lose mass in this way. Therefore, the isolated stars in this mass range tend to end up as massive WDs, while an identical star within a close binary may explode as an SN and produce a neutron star. For some binaries, the lower limit for the SN explosion may be as low as 6 M_⊙ <cit.>. Therefore, since the IFMR is only constructed considering the isolated stars that do not experience any sort of close binary processes, the relatively high WD progenitor mass limit can be simultaneously reconciled with the lower observed value of the minimum SN progenitor mass limit. In some stars, especially the ones with low metallicities, the rotation speed can also have a significant effect on the mass of the stellar core and envelope, the surface chemical composition, and the stellar wind. These effects are relevant for both isolated and binary stars, but in the latter case, the interplay between the rotational and binary effects can yield additional complexity <cit.>. Rapid rotation may cause multiple types of interior mixing processes that do not arise in slowly rotating stars, supplying fresh fuel for nuclear burning into the stellar core. This can prolong the stellar lifetime and also lead to the formation of more massive cores and therefore, more massive WDs <cit.>. Thus, in the presence of dispersion of initial rotational velocities, the rotational effects can introduce some additional scatter in the WD progenitor mass limit and in the IFMR in general, which is also apparent in Fig. <ref>.
In the low-mass end, the IFMR exhibits non-monotonic behavior, approximately in the range 1.6 M_⊙ ≤ M_i ≤ 2.1 M_⊙ with a peak at M_i≈ 1.8 M_⊙. <cit.> attribute this kink in the IFMR to the formation of solar metallicity carbon stars on the asymptotic giant branch. The kink is quite unconstrained in its descending phase and near the point where the IFMR curve starts to rise again (∼2–2.5 M_⊙). <cit.> list only a single WD in this range from NGC 752. <cit.> identified PHR 1315-6555, which is a central star of a planetary nebula within OC AL 1 that also falls into this range. Our search yielded two more WDs that are within this range as well – high-confidence members of Alessi 22 and NGC 752. These WDs provide useful data in this relatively unconstrained mass range of the IFMR. However, their M_WD values are too high and not consistent with the nonlinear IFMR shape in this range as is proposed in <cit.>. Models of <cit.> also predict the existence of a second IFMR kink, located at higher masses in the range of about 4.2 M_⊙ ≤ M_i ≤ 4.8 M_⊙, where the IFMR resumes its monotonic increasing trend again starting from M_i∼ 5 M_⊙. However, the range and shape of the kink are dependent on the exact details of stellar evolution, such as the physics of convection, mixing-length parameter, and mass loss. We find two high-confidence WD members of NGC 2516 and one WD member of NGC 3532 with M_i∼ 4.0 M_⊙ and with abnormally high masses M_WD∼ 1.0 M_⊙, lying significantly above the literature IFMR prescriptions. The presence of these WDs in this mass range may indicate a second departure of the monotonic IFMR trend as proposed in <cit.>. However, this deviation can also be attributed to other sources of the IFMR scatter, such as possible past binary interactions <cit.>.
§ SUMMARY AND CONCLUSIONS
We have studied the WD content of nearby OCs with a primary focus on obtaining tighter constraints on the IFMR, which can be derived semi-empirically by studying the properties of the WDs and their host OCs. Our search for WDs within OCs relied on the astrometric and photometric data provided by the Gaia mission in its third data release. When such WDs are identified, it is possible to obtain a significantly more precise distance estimate for the WD, which is based on robust statistics that are in turn based on a large number of OC member stars rather than a singular noisy parallax measurement of a single object. A more precise distance obtained in this way translates to more precise knowledge of the fundamental WD properties, most importantly its mass and cooling age. This, in combination with the knowledge of the total age of the OC and its metallicity, can be used to constrain the lifetime of the progenitor by subtracting the cooling age from the total OC age, which can be used to infer the initial mass of the WD progenitor.
After determining the starting sample of the surveyed OCs, and informed by the capabilities of Gaia and properties of young WDs, we queried the Gaia archive for the presence of potential WDs associated with these OCs using relatively liberal criteria based on the OC astrometric properties tabulated in <cit.>. For each of the OCs that were identified as potentially hosting WDs based on the previous step, we conducted a more detailed search for possible WD members using the HDBSCAN algorithm employed in both five (α, δ, μ_α^⋆, μ_δ, ϖ) and three (μ_α^⋆, μ_δ, ϖ) astrometric dimensions. For each WD candidate detected in this way, we determined its astrometric membership probability using a Monte Carlo approach by drawing random values of its astrometric parameters based on its cataloged astrometric properties, their errors, and covariances.
For WD candidates with a reasonably high OC astrometric membership probability (P_memb≥ 0.5), we filtered out obvious low-mass or binary outliers unsuitable for the IFMR determination and WD candidates with cooling ages exceeding the total age of their putative parent OCs. We then calculated their masses and cooling ages based on the new estimates of the distances of their putative parent OCs, supplemented with the OC extinction values from the literature. After that, we determined their progenitor lifetimes and progenitor initial masses. Aside from several previously known OC WDs, we characterized 36 WDs with significant OC membership probabilities that had not been characterized in the literature before. These objects would benefit from a spectroscopic follow-up.
The IFMR constructed from the newly characterized and literature OC WDs is consistent with previously published prescriptions, albeit with a large scatter that might be attributed to several extrinsic or intrinsic factors <cit.>. As in the previous studies, there is still a paucity of high-mass WDs whose progenitor masses even remotely approach the widely adopted upper progenitor mass limit for WD formation of 8 M_⊙. This could be caused by the presence of velocity kicks imparted to high-mass WDs upon formation, which often occurs in combination with the presence of a photometrically dominant secondary binary component for many high-mass WDs. This makes searching for these systems difficult.
At the present rapid pace of new OC discoveries, one can expect novel OC-WD pairings to be identified in the near future. This would allow us to put even tighter constraints on the IFMR and the research avenues that are connected to it. Gaia's recent and upcoming data releases will provide us with an expanded and improved census of nearby coeval stellar populations that are suitable for studying WDs across a wide range of masses, ages, and initial progenitor metallicities. An expanded sample of WDs that are identified to be hosted by these populations, in conjunction with spectroscopic follow-up observations targeting them, is certain to refine our knowledge of the IFMR and the terminal phases of stellar evolution.
MP is supported by the European Regional Development Fund, project No. ITMS2014+: 313011W085.
NF acknowledges support from the grant GAČR 23-07605S.
This work has made use of data from the European Space Agency (ESA) mission
Gaia (<https://www.cosmos.esa.int/Gaia>), processed by the
Gaia Data Processing and Analysis Consortium (DPAC,
<https://www.cosmos.esa.int/web/Gaia/dpac/consortium>). Funding for the
DPAC has been provided by national institutions, in particular the
institutions participating in the Gaia Multilateral Agreement.
This research has made use of the VizieR catalogue access tool, CDS,
Strasbourg, France (DOI : 10.26093/cds/vizier). The original description
of the VizieR service was published in 2000, A&AS 143, 23
This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University. We are grateful to the developers and contributors of the many software packages used in this work: Astropy <cit.>, astroquery <cit.>, astroML <cit.>, HDBSCAN <cit.>, Numpy <cit.>, Scipy <cit.>, scikit-learn <cit.>, matplotlib <cit.>, and ezpadova (<https://github.com/mfouesneau/ezpadova>).
§ PROPERTIES OF OCS ASSOCIATED WITH WD CANDIDATES
WDs and WD candidates recovered as OC members in the clustering analysis.
Gaia DR3 associated OC P_3D P_5D P_HR P_WD DB_M
Gaia DR3 2098988107112755712 ASCC 101 0.7 1.0 1.0
Gaia DR3 2879428195013510144 Alessi 22 1.0 1.0 1.0
Gaia DR3 4519349757798439936 Alessi 62 0.8 0.5 0.6 1.0 1.0
Gaia DR3 391939027303287040 Alessi 94 1.0 1.0 1.0
Gaia DR3 386116254240723712 Alessi 94 1.0 1.0 1.0
Gaia DR3 851411295734572416 CWNU 1095 0.7 1.0 1.0
Gaia DR3 5798954758752816512 CWNU 41 1.0 1.0 1.0
Gaia DR3 335525529520188032 HSC 1155 1.0 1.0 1.0
Gaia DR3 3299641271199703680 HSC 1630 1.0 1.0 1.0
Gaia DR3 5257090272981848192 HSC 2304 0.9 1.0 1.0
Gaia DR3 4502736137087916032 HSC 381 0.9 1.0 1.0
Gaia DR3 2060960191793682176 HSC 601 1.0 0.89 1.0
Gaia DR3 4283928577215973120 IC 4756 1.0 0.9 0.7 1.0 1.0
Gaia DR3 6653882460188145152 Mamajek 4 0.5 0.3 0.7 1.0 1.0
Gaia DR3 4008511467191955840 Melotte 111 1.0 1.0 1.0 1.0 1.0 Y
Gaia DR3 4662157454731788672 NGC 1901 0.6 1.0 1.0
Gaia DR3 4659513404187412736 NGC 1901 0.8 0.6 0.6 1.0 1.0
Gaia DR3 2931807898171448320 NGC 2358 0.8 1.0
Gaia DR3 5538113835730494464 NGC 2477 0.5 1.0
Gaia DR3 5290834387897642624 NGC 2516 1.0 0.9 1.0 1.0 Y
Gaia DR3 5289447182180342016 NGC 2516 0.9 0.6 1.0 1.0
Gaia DR3 5290719287073728128 NGC 2516 0.9 0.7 1.0 1.0 1.0 Y
Gaia DR3 5290720695823013376 NGC 2516 0.6 1.0 1.0
Gaia DR3 5294015515555860608 NGC 2516 1.0 0.8 1.0 1.0
Gaia DR3 5290767695648992128 NGC 2516 1.0 0.8 0.5 1.0 1.0
Gaia DR3 664325543977630464 NGC 2632 1.0 0.9 1.0 1.0 Y
Gaia DR3 661841163095377024 NGC 2632 1.0 0.6 1.0 1.0 Y
Gaia DR3 662798086105290112 NGC 2632 1.0 0.5 1.0 1.0 Y
Gaia DR3 662998983199228032 NGC 2632 1.0 1.0 1.0
Gaia DR3 660178942032517760 NGC 2632 1.0 0.5 1.0 1.0 Y
Gaia DR3 661270898815358720 NGC 2632 0.8 1.0 0.8 1.0 1.0 Y
Gaia DR3 661010005319096192 NGC 2632 1.0 0.8 1.0 1.0 Y
Gaia DR3 661297901272035456 NGC 2632 1.0 0.7 1.0 1.0 Y
Gaia DR3 661311267210542080 NGC 2632 0.5 1.0 1.0 1.0 1.0 Y
Gaia DR3 665139697978259200 NGC 2632 1.0 0.6 1.0 1.0 Y
Gaia DR3 661353224747229184 NGC 2632 1.0 0.9 1.0 1.0 Y
Gaia DR3 659494049367276544 NGC 2632 1.0 0.6 1.0 1.0 Y
Gaia DR3 5338675689360848256 NGC 3532 0.4 0.6 1.0
Gaia DR3 5337742307052922752 NGC 3532 0.4 0.8 0.5 1.0 1.0
Gaia DR3 5339898655484772864 NGC 3532 0.8 1.0 1.0
Gaia DR3 5340219811654824448 NGC 3532 0.6 0.4 1.0 1.0
Gaia DR3 5340530320614001792 NGC 3532 0.5 1.0 1.0
Gaia DR3 5340165355769599744 NGC 3532 0.6 0.99 1.0
Gaia DR3 5340220262646771712 NGC 3532 0.5 1.0 0.7 1.0 1.0
Gaia DR3 5887666586717940224 NGC 5822 0.5 1.0 1.0
Gaia DR3 4477168746525464064 NGC 6633 0.7 0.78 1.0 Y
Gaia DR3 4477214475044842368 NGC 6633 0.6 0.4 1.0 1.0 Y
Gaia DR3 2166915179559503232 NGC 6991 0.7 0.4 1.0 1.0
Gaia DR3 2170776080281869056 NGC 7092 0.9 0.8 0.8 1.0 1.0
Gaia DR3 342523646152426368 NGC 752 0.7 0.5 0.7 1.0 1.0
Gaia DR3 2082008971824158720 RSG 5 0.6 0.8 1.0 1.0
Gaia DR3 4087833117945955840 Ruprecht 147 0.7 0.6 1.0 1.0
Gaia DR3 4087806832745520128 Ruprecht 147 0.8 1.0 0.6 1.0 1.0
Gaia DR3 4088108859141437056 Ruprecht 147 0.9 0.8 0.7 1.0 1.0
Gaia DR3 4183928888026931328 Ruprecht 147 0.9 0.8 0.6 1.0 1.0
Gaia DR3 4183937688413579648 Ruprecht 147 0.9 0.9 1.0 1.0 1.0
Gaia DR3 4183847562828165248 Ruprecht 147 0.8 0.8 1.0 1.0 1.0
Gaia DR3 4183978061110910592 Ruprecht 147 0.6 0.6 0.99 1.0
Gaia DR3 4184148073089506304 Ruprecht 147 0.8 0.8 0.6 1.0 1.0
Gaia DR3 4184169822810795648 Ruprecht 147 0.9 0.8 0.9 1.0 1.0
Gaia DR3 4183926006112672768 Ruprecht 147 0.9 1.0 1.0
Gaia DR3 4183919237232621056 Ruprecht 147 0.5 1.0 1.0
Gaia DR3 4184196073644880000 Ruprecht 147 0.8 0.8 0.8 1.0 1.0
Gaia DR3 1992469104239732096 Stock 12 0.7 0.6 0.7 1.0 1.0 Y
Gaia DR3 507105143670906624 Stock 2 0.5 0.9 1.0
Gaia DR3 507555904779576064 Stock 2 0.9 0.7 1.0 1.0
Gaia DR3 506862078583709056 Stock 2 0.7 0.6 1.0 1.0
Gaia DR3 506848643933335296 Stock 2 0.6 1.0 1.0
Gaia DR3 3114831641658036608 Theia 172 0.6 0.92 1.0
Gaia DR3 2174431990805230208 Theia 248 1.0 1.0 1.0
Gaia DR3 2170776080281869056 Theia 517 0.8 1.0 1.0
Gaia DR3 3324040907394753792 Theia 558 1.0 1.0 1.0
Gaia DR3 4529222337115434240 Theia 817 1.0 1.0 1.0
Gaia DR3 4530122390454022272 Theia 817 1.0 1.0 1.0
Gaia DR3 5433483136700686336 Turner 5 0.7 0.99 1.0
Gaia DR3 4098106821451715584 UPK 5 0.8 0.78 1.0
Gaia DR3 5914732847840333440 UPK 624 0.7 0.6 0.9 1.0 1.0
Columns P_5D, P_3D, and P_HR are the membership probabilities derived in the 5D, 3D, and <cit.> membership analyses, respectively. Columns fidelity flag, P_WD, and DB_M denote the astrometric fidelity flag from <cit.>, the probability of the object being a WD from <cit.>, and whether the object is present as a spectroscopically confirmed WD in the Montreal White Dwarf Database <cit.>, respectively.
|
http://arxiv.org/abs/2307.01735v1 | 20230704141117 | Hard X-ray grazing incidence ptychography: Large field-of-view nanostructure imaging with ultra-high surface sensitivity | [
"P. S. Jørgensen",
"L. Besley",
"A. M. Slyamov",
"A. Diaz",
"M. Guizar-Sicairos",
"M. Odstrcil",
"M. Holler",
"C. Silvestre",
"B. Chang",
"C. Detlefs",
"J. W. Andreasen"
] | physics.optics | [
"physics.optics",
"physics.app-ph"
] |
Technical University of Denmark, DTU Energy, 310, Fysikvej, DK-2800 Kgs. Lyngby, Denmark
Xnovo Technology ApS, Galoche Allé 15, 1., Køge 4600, Sjælland, Denmark
Technical University of Denmark, DTU Energy, 310, Fysikvej, DK-2800 Kgs. Lyngby, Denmark
Paul Scherrer Institut, 111, Forschungsstrasse , 5232 Villigen PSI, Switzerland
Paul Scherrer Institut, 111, Forschungsstrasse , 5232 Villigen PSI, Switzerland
École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
Paul Scherrer Institut, 111, Forschungsstrasse , 5232 Villigen PSI, Switzerland
Carl Zeiss SMT, 22, Carl-Zeiss-Straße, 73447, Oberkochen, Germany
Paul Scherrer Institut, 111, Forschungsstrasse , 5232 Villigen PSI, Switzerland
Technical University of Denmark, DTU Nanolab, 347, Oersteds Plads, DK-2800 Kgs. Lyngby, Denmark
European Synchrotron Radiation Facility, 71, avenue des Martyrs, CS 40220,
38043 Grenoble Cedex 9, France
[email protected]
Technical University of Denmark, DTU Energy, 310, Fysikvej, DK-2800 Kgs. Lyngby, Denmark
We demonstrate a technique that allows highly surface sensitive imaging of nanostructures on planar surfaces over large areas, providing a new avenue for research in materials science, especially for in situ applications. The capabilities of hard X-ray grazing incidence ptychography combine aspects from imaging, reflectometry and grazing incidence small angle scattering in providing large field-of-view images with high resolution transverse to the beam, horizontally and along the surface normal. Thus, it yields data with resolutions approaching electron microscopy, in two dimensions, but over much larger areas and with a poorer resolution in the third spatial dimension, along the beam propagation direction. Similar to grazing incidence small angle X-ray scattering, this technique facilitates the characterization of nanostructures across statistically significant surface areas or volumes within potentially feasible time frames for in situ experiments, while also providing spatial information.
Hard X-ray grazing incidence ptychography: Large field-of-view nanostructure imaging with ultra-high surface sensitivity
J. W. Andreasen
August 1, 2023
========================================================================================================================
§ INTRODUCTION
Co-first author with equal contribution
X-ray ptychography is a scanning coherent diffraction imaging technique that retrieves phase and absorption contrast from a series of diffraction patterns collected at various scanning positions with an overlap in sample illumination, and it can offer excellent resolution that is not limited by X-ray optics or lenses <cit.>. Ptychographic data are typically acquired in transmission geometry <cit.>; however, the increasing need to image samples with low-contrast nanoscale features of interest at or near their surface and with extended lateral dimensions makes transmission-based imaging challenging <cit.>. Coherent diffraction imaging of X-rays near grazing incidence has been applied in reflection geometry for the reconstruction of non-periodic surface structures <cit.>; however, the application of ptychography at grazing incidence with hard X-rays would allow non-isolated objects of arbitrary size to be imaged in extended samples. As such, ptychography in a reflection geometry is gaining interest and has been explored in the range of EUV wavelengths <cit.>.
Ptychographic imaging in reflection geometry has been shown to provide quantitative imaging of nanostructures, sensitive to both chemical and structural contrast, however to date, reflection-mode ptychographic imaging with hard X-rays remains unexplored, with the exception of Bragg-condition ptychography <cit.>, and in crystal truncation rod measurements <cit.>. The application of X-rays in grazing incidence in such an imaging technique would prove an invaluable imaging tool for surface features on the tens of nm scale. This is highly relevant for a variety of technologies with nanoscale surface features where imaging over several hundreds of μm is required. Critical angles for total reflection of X-rays from surfaces of common materials are typically somewhat less than θ_c = 1^∘, which results in the beam footprint being elongated by a factor of up to several hundred times the transverse length. The geometry of grazing incidence also presents significant experimental challenges by requiring a highly precise alignment of the sample plane with the scanning plane. Transmission X-ray ptychography requires very precise knowledge of scan positions in order for successful reconstructions, and when grazing incidence geometry is introduced, errors can be amplified by a factor of 1/sin(θ), placing further requirements on sample alignment and motor precision. Given that critical angles in the hard X-ray regime are typically (<1^∘), the geometry of grazing incidence presents a significant challenge even when compared to EUV ptychography where incidence angles can be approximately 1 to 2 orders of magnitude larger <cit.>.
We present an extension of the ptychographic X-ray imaging technique by performing it at grazing incidence angles that are typical of the hard X-ray regime and, for the first time, show a successful reconstruction of phase contrast images. The proposed method combines the high-resolution and robustness of ptychographic imaging with the macroscopic probing and flexibility of the grazing incidence geometry, enabling the multi-scale imaging of the morphology of thin-films and surfaces. In the direction parallel to the X-ray beam (longitudinal), the spatial resolution is sacrificed for an increased area of sample, resulting in an image with highly anisotropic resolution. We show that such experiments can be implemented at beam lines used for standard transmission ptychography with the addition of further surface alignment procedures.
We also demonstrate the simulation of diffraction data from model structures using the multislice technique outlined in <cit.> which has been shown to accurately model complex multiple-scattering phenomena. Such realistically simulated data is used to validate the accuracy of the reconstruction, which does not use a multislice approach.
§ RESULTS
§.§ Experimental
We provide a comparison of reconstructed images of samples with their nominal structural design used in the lithographic process, with heights measured by atomic force microscopy (AFM). Fig. <ref> shows the experimental setup during scanning. Due to the experimental geometry, the reconstructed pixel has a high aspect ratio and the reconstructed images appear squeezed along the grazing-incidence direction. To compensate for this effect and simplify the interpretation of the results, an elongated structure was fabricated for the proof-of-concept experiment. Fig. <ref>a shows the design of the sample in cross-section. The Si wafer is first coated with a thin titanium adhesion layer. On top of this, a 50 nm layer of Au is deposited, followed by the test structure. Further details of sample fabrication are given in the methods section. The thickness of all structures above the substrate is 20 nm. The stretched "Siemens Star" with truncated spokes (Fig. <ref>b and <ref>c) has dimensions of 0.04 mm× 4.5 mm.
Image acquisition is performed at different angles of incidence θ, in particular above and below the critical angle (θ_c) for total external reflection.
Fig. <ref> shows the phase contrast images of the fully reconstructed elongated Siemens star structure taken at θ=0.6^∘ and θ=0.8^∘, below and above the θ_c for Au at the experimental X-ray energy of 6.2 keV. The horizontal stripes visible in phase contrast are consistent between varying incidence angle reconstructions and are interpreted as variations in the real surface height of the sample substrate. The stripes appear only in the vertical direction due to the aspect ratio of the images, resulting in a perceived strong 1-D variation in surface height along the y axis parallel to the beam. In reality the surface height variation is uncorrelated in the x-y plane. Further, a dark stain-like feature in the bottom half of the star is visible in both images, which is also a real feature of the sample. The feature appears larger in the θ=0.6^∘ image and we believe this is due to a deposit of a low atomic number material which has a lower refractive index than the sample, and hence appears more clearly at lower incidence angles. The bottom right corner of Fig. <ref> b) shows a small area of noise, corresponding to the edge of the right side of the structure that extends beyond the field of view.
The relationship between the measured phase shift ϕ and the physical height h of the structure is determined by the geometric relation ϕ = 4π h sin(θ)/λ. Because ϕ is computed as the phase of a complex-valued function, it is subject to phase wrapping and its values are restricted within the range of ±π; if the height causes a phase shift greater than this, the phase value will wrap around. This implies that an unknown offset of 2π n_p may be introduced, so the height is only determined up to integer multiples of λ/(2 sin(θ)). Analogous to the principle of multiple-wavelength interferometry, where several wavelengths can be used to increase the range of non-ambiguity in precise length measurements calculated from phase shifts <cit.>, we can use multiple incidence angles to reduce the ambiguity of height measurements. One can disambiguate the phase wrapping by computing heights for a range of n_p values (in this case 1 < n_p < 10 was used). One can then look for the set of n_p values that result in the smallest height difference between incidence angles.
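As an illustration only (not the published analysis code), the following sketch implements this search, assuming the unwrapped phase maps to height as h = (ϕ + 2π n_p)λ/(4π sin θ); the wavelength and wrapped-phase values are placeholders and the sign convention of the wrapped phase may differ from that of the reconstruction.

```python
import numpy as np

# Sketch of the phase-unwrapping disambiguation over two incidence angles.
wavelength = 0.2e-9                       # m, roughly the 6.2 keV photon wavelength
thetas = np.deg2rad([0.6, 0.8])           # grazing-incidence angles
phi_wrapped = np.array([-0.1, -1.9])      # wrapped phases in rad (placeholder values)

def height(phi, n_p, theta):
    # Assumed mapping: h = (phi + 2*pi*n_p) * lambda / (4*pi*sin(theta))
    return (phi + 2 * np.pi * n_p) * wavelength / (4 * np.pi * np.sin(theta))

best = None
for n1 in range(1, 10):
    for n2 in range(1, 10):
        h1 = height(phi_wrapped[0], n1, thetas[0])
        h2 = height(phi_wrapped[1], n2, thetas[1])
        if best is None or abs(h1 - h2) < best[0]:
            best = (abs(h1 - h2), (n1, n2), 0.5 * (h1 + h2))

print(f"n_p pair = {best[1]}, height ~ {best[2] * 1e9:.1f} nm")
```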
In this case, the solution was found to be n_p=2 for θ=0.6^∘ and n_p=3 for θ=0.8^∘, with (unwrapped) phase shift values of 12.5 rad and 16.9 rad, resulting in values of h=18.9 nm and h=19.3 nm, respectively. The recovered structure height is thus 19.1 nm. The contribution of low-frequency errors in the reconstructions can lead to inaccurate estimations of height. In order to avoid this, the reconstructions are filtered through band-passing of the image in the Fourier domain. Further details of this are given in the methods section. The residual error of h, defined as the difference between the observed phase and the phase shift calculated from ϕ - 2π n_p = 4π h sin(θ)/λ, is found to be 0.4 nm, whereas the angle-dependent uncertainty of height measurements is computed from the standard deviation of the phase measurements of the structure after Fourier-domain band-passing and found to be 0.07 nm for θ = 0.6^∘ and 0.06 nm for θ = 0.8^∘. The calculated RMS surface roughness of the substrate is 0.7 nm for θ=0.6^∘ and 0.5 nm for θ=0.8^∘. Further details are discussed in the methods section. This measured height is in excellent agreement with heights of the sample measured by atomic force microscopy, which were found to be 19 ± 3 nm (3 nm being the mean surface roughness as measured by AFM), showing the excellent sensitivity of phase contrast to nanoscale variations in grazing incidence geometry.
§.§ Simulation
Fig. <ref> shows the ptychographic reconstruction from diffraction patterns simulated via the multislice propagation method <cit.> alongside the reconstruction from experimentally collected data for the same area of the Siemens star. Diffraction patterns were simulated via multislice propagation and reconstructed with the same algorithms and parameters as their experimental data counterparts (i.e., multislice simulation is used as the forward model to generate simulated diffraction patterns, but a conventional ptychographic reconstruction with a simple multiplication between object and probe is used in every case). The multislice approach is used as a forward model as it has been shown to be capable of producing arbitrary complex reflections, including evanescent waves <cit.>. The simulation and the experiment in Fig. <ref> were done at an incidence angle of θ=0.8^∘. It can be seen that the real structure has rounder edges and less spacing between spokes. These discrepancies are due to the sample fabrication process, where limits on edge sharpness are unavoidable. The observed smaller distance between the spokes in the experimental reconstruction is also a real property of the fabricated sample and not due to experimental accuracy. As expected, the substrate shows little to no phase contrast in the simulated data, whereas there is added variation and horizontal banding across the real reconstruction, which reflects real surface roughness. Height estimation on the multislice simulated data yielded n_p=4 for both θ=0.8^∘ and θ=0.9^∘, with (unwrapped) phase shift values of 23.4 rad and 25.8 rad, resulting in values of h=26.5 nm and h=26.1 nm, respectively. The recovered structure height is thus found to be 26.5 nm, an overestimation of approximately 1.5 nm relative to the simulated height of 25 nm. We cannot currently explain the cause of this overestimation. Simulated diffraction patterns created by the multislice forward model are in good agreement with experimental data, which suggests that multislice simulation represents an adequate forward model for describing the wave-sample interaction in grazing incidence, for producing diffraction data for qualitative comparison with experimental data, and for validating the reconstructions obtained with a simpler model. Further details of the multislice simulations are in the discussion section.
§.§ Resolution Estimation
Fourier ring correlation (FRC) <cit.> has become a standard method for providing reliable and quantitative estimates of the image resolution across a large number of imaging techniques. To estimate highly anisotropic resolution of images reconstructed from grazing-incidence X-ray ptychographic data, the calculation of FRC has to be decoupled for transverse and longitudinal directions. As such, the FRC is calculated separately for each dimension in the real space image, and the FRC is computed from two separate 1-Dimensional Fourier transforms of each image.
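A minimal sketch of such a direction-decoupled correlation is given below; it is a generic illustration (not the code used here) and assumes two independently reconstructed images of the same region on a common pixel grid.

```python
import numpy as np

def directional_frc(img1, img2, axis=0):
    """Correlate 1-D FFTs of two independent reconstructions along one axis,
    averaging over the other axis, to estimate resolution in that direction."""
    F1 = np.fft.fft(img1, axis=axis)
    F2 = np.fft.fft(img2, axis=axis)
    other = 1 - axis
    num = np.sum(F1 * np.conj(F2), axis=other)
    den = np.sqrt(np.sum(np.abs(F1)**2, axis=other) * np.sum(np.abs(F2)**2, axis=other))
    frc = np.abs(num) / den
    freqs = np.fft.fftfreq(img1.shape[axis])   # cycles per pixel along `axis`
    half = len(freqs) // 2
    return freqs[:half], frc[:half]
```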
The estimated resolution from FRC is shown in Fig. <ref> as a function of incidence angle. The resolution in the directions parallel and transverse to the beam approaches a minimum near θ_c. As expected, the resolution is significantly poorer in the longitudinal direction, and the resolution in both directions becomes poorer as the incidence angle is shifted further away from θ_c. While the resolution in the transverse and longitudinal directions is worse for θ=0.6^∘ than at other incidence angles, the FRC resolutions at angles θ=0.7^∘ and above are within the standard deviation of FRC measurements calculated from several sub-regions of the full image at each incidence angle.
§ DISCUSSION
Measured height differences in reflection are determined by observed phase shifts given by the geometric relation ϕ = 2kh sin(θ) (where k = 2π/λ). This relationship between ϕ and h is based on two assumptions. Firstly, ϕ is determined only by the topological variation of the sample, as the refractive index n is assumed to be constant throughout the entire sample. While this is not a requirement of the reconstruction method, the quantitative calculation of height in this work is based on this assumption to simplify the interpretation of phase contrast reconstructions. Secondly, ϕ is considered to be caused by a single scattering event upon reflection, without considering any multiple scattering events of higher complexity.
Whereas these assumptions can be made in the case of transmission through samples where the object transmissivity can be represented by a 2-D function and the sample thickness falls within the depth of focus <cit.>, in the case of grazing incidence, scattering events of higher complexity cannot generally be ignored. Given that the test structures are entirely Au and that the Ti and Si layers lie below the 50 nm Au layer, these layers are assumed to contribute negligibly to the measured signal: the attenuation length of Au at the experimental energy ranges from approximately 1 to 10 nm over the range of θ investigated, well below the 50 nm Au layer thickness. Therefore, a constant refractive index n everywhere is a reasonable assumption in this work. However, this would not be the case for samples that have both chemical and topological inhomogeneities, or samples where transmissivity through the structure in grazing incidence cannot be neglected (i.e., samples with significantly lower β than Au).
In particular, for the samples imaged in this work, artefacts appear in phase contrast at the edges of the structure where the phase shift is much larger than λ /2 per pixel. This causes determination of the true structure height to be less straightforward. Nonetheless the reconstructions in this work are based on this thin-object approximation without accounting for changes to the illumination function throughout the sample and height estimation from phase shift measurements are still found to be in good agreement with AFM measurements. This will likely only hold for relatively simple topologies of non-transmissive material as studied here. The contribution of more complex scattering phenomena as described by higher order Distorted-wave Born Approximation (DWBA) <cit.> terms to the measured phase shift needs to be explored further to better quantify topological contrast in more complex samples, especially for partially transmissive materials.
§ CONCLUSION
We have demonstrated a grazing-incidence X-ray scattering ptychography experiment over a large area on the millimetre scale in the sample plane with a relatively small number of scan points. The technique is capable of providing nanometre topological resolution through phase contrast. The experiment can be implemented at existing beamlines with existing phase-retrieval algorithms. This combination of large-area imaging and excellent surface sensitivity shows promising potential for grazing incidence X-ray ptychography as a tool for robust large-scale characterization of surfaces and thin films where nanoscale height precision is required. The discrepancy between height estimation from phase contrast measurements and AFM is on the order of 1 nm, whereas the same method applied to images produced by multislice simulations shows good agreement with both the experimentally obtained data and the nominal height used in the input simulation settings.
Having been in good agreement with experimentally obtained results, the multislice simulations aid in the qualitative interpretation and verification of the experimental data. We have demonstrated that multislice simulations provide a useful forward model for producing diffraction data that can be reconstructed with existing phase-retrieval algorithms. The model of height variation being calculated from phase shift arising from geometric path length difference holds for the structures imaged in this work given they are both chemically homogeneous and a highly absorbing material, however future work is required to more accurately develop a model and reconstruction that takes into account higher-order DWBA terms.
§ METHODS
Experiment
The experiment was performed at the cSAXS beamline of the Swiss Light Source (SLS) at the Paul Scherrer Institute in Villigen, Switzerland, using a photon energy of 6.2 keV. The coherent illumination on the sample was defined by a Fresnel zone plate (FZP) made of Au, fabricated by the X-ray nano-optics group at the Paul Scherrer Institute <cit.>. The FZP had a diameter of 220 μm and 4 nm outer-most zone width, resulting in a focal length of 99 mm and a focal depth of ±80 μm. The reflection geometry of the experiment requires the scanning to be performed in a plane parallel to the sample surface (Fig. <ref>). In this way, the illumination probe profile on the sample and the source-sample distance can be considered constant and ptychographic phase retrieval can be performed. When referring to the probe illumination size, we usually consider its transverse extent in the sample plane. However, it should be noted that for the incident beam probing a surface at an angle θ, the beam footprint is elongated in the longitudinal direction by a factor of 1/sin(θ). To keep the probe overlap consistent in both directions, the scanning step size along the grazing axis has to be scaled accordingly. A compact SmarAct hexapod-like positioning system was mounted on top of the scanning piezo-stage to align the sample surface parallel to the scanning plane during acquisition. This was achieved using an interferometric position measurement of the sample height (z-direction), with the surface of the sample used as the reflective surface. To align this surface to the scanning plane, the sample stage was continuously moved in a sinusoidal pattern in the x-y plane and the measured displacement in the z-direction was minimized by adjusting the sample tilt using the SmarAct positioning system. The height variation is minimized independently for both x and y through fine adjustments of the SmarAct stage. Scans were performed in an elongated Fermat spiral pattern <cit.>, where the step size of each scan was elongated by a factor of 1/sin(θ) in the direction parallel to the beam propagation. Acquisition times were 0.2 s per position. Using a 4 μm beam in the transverse direction, an effective area of 40 × 500 μm^2 was covered per Fermat spiral. A Pilatus 2M detector with a pixel size of 172 μm was used for collection of diffraction data at a sample-to-detector distance of 7.36 m. Due to the limited range of the piezo system, the scan was split into several subscans as shown in Fig. <ref>, where each Fermat spiral scan was performed with the piezo stage after coarser translations with the hexapod. At the X-ray photon energy of 6.2 keV, the critical angle for Au is θ_c ≈ 0.72^∘. Samples were imaged at 0.6, 0.7, 0.8, and 0.9^∘, a range of angles below and above θ_c.
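For illustration, a minimal sketch of such an elongated Fermat-spiral scan grid is shown below; it is not the beamline control code, and the step size and point count are placeholders.

```python
import numpy as np

def elongated_fermat_spiral(n_points, step, theta_deg):
    """Generate scan positions on a Fermat spiral, stretching the coordinate
    along the beam (y) by 1/sin(theta) so that the footprint overlap is
    comparable in both directions at grazing incidence."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    n = np.arange(n_points)
    r = step * np.sqrt(n)
    x = r * np.cos(n * golden_angle)                                   # transverse
    y = r * np.sin(n * golden_angle) / np.sin(np.deg2rad(theta_deg))   # along the beam
    return x, y

# Example: a pattern for 0.8 deg incidence with a 2 um transverse step (placeholders)
x, y = elongated_fermat_spiral(200, 2.0, 0.8)
```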
Sample fabrication
The patterns were created on a ⟨ 001 ⟩ Si wafer. Initially, a 10 nm layer of Ti is evaporated onto the wafer, followed by a 50 nm Au layer, using e-beam evaporation in a Temescal FC-2000 tool with a deposition rate of 2 Å/s. The Ti layer is necessary to ensure good adhesion of Au on the substrate. After the initial Ti/Au bilayer, the wafers were spun with 1.5 μm of the positive photoresist AZ-MIR701, and the structures were patterned using UV lithography and further developed in a TMAH solution. The wafers are then placed in the e-beam evaporator and an additional layer of 20 nm of Au is evaporated to create the final patterns. The thickness of the Au deposition is controlled using a quartz crystal monitoring system. The resist was then lifted off using a solvent solution (Remover 1165) at room temperature in an ultrasonic bath for approximately 5 minutes, leaving behind the patterned structures. The wafer was then rinsed with isopropanol for a further 5 minutes, followed by DI water, and finally air dried.
Numerical simulations
Wave interaction with matter in grazing-incidence geometry cannot be approximated with the first Born approximation due to complex scattering phenomena and the more general DWBA is usually considered <cit.>. Scattering amplitude in DWBA is calculated as a coherent sum of scattering amplitudes contributing from different mixtures of refraction and scattering events. In general, four main scattering events (referred to as channels) with highest contributions are taken into account <cit.>. In such case, relating the scattering amplitude to electron density of the specimen is usually done by fitting a theoretical model to the experimental data taken in reflectivity by considering the contribution of these four scattering events.
Recently, it was shown that for a wave incident on the specimen at grazing angle, the so-called multislice propagation can model a range of complex scattering phenomena, such as standing and evanescent waves in the vicinity of the probed surface <cit.>. In this approximation, a specimen described in terms of complex refractive indices is divided into a set of thin slices. The transmission of the wave through each slice satisfies the projection approximation, and propagation between slices is modeled using the angular-spectrum non-paraxial propagator, ψ_j+1 = ℱ^-1{ℱ{ψ_j} exp[2π i Δz √(1/λ^2 - u_xy^2)]}, where ℱ and ℱ^-1 are the Fourier transform and its inverse, ψ_j is the j-th wave in the simulation, Δz is the thickness of a slice, λ is the wavelength, and u_xy are the spatial frequencies. Wave propagation with the multislice approximation has been widely used as a forward model for simulating diffraction data in various applications <cit.>.
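A generic sketch of this propagator (not the simulation code used in this work) is shown below; the grid spacing and slice thickness are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_step(psi, dz, wavelength, dx):
    """Propagate the wavefield psi over one slice of thickness dz using the
    non-paraxial angular-spectrum propagator. Spatial frequencies beyond
    1/wavelength acquire a complex kz and decay (evanescent components)."""
    ny, nx = psi.shape
    ux = np.fft.fftfreq(nx, d=dx)
    uy = np.fft.fftfreq(ny, d=dx)
    UX, UY = np.meshgrid(ux, uy)
    kz = 2 * np.pi * np.sqrt((1.0 / wavelength**2 - UX**2 - UY**2).astype(complex))
    H = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(psi) * H)
```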
Using the method outlined in <cit.>, the goal of the multislice simulation was to replicate the real experiment faithfully. All layers of the substrate material were simulated using the tabulated complex refractive index of each material. The virtual sample was constructed by importing the same pattern definition file used to create the physical sample, and discretizing it onto the simulation grid. The motor positions from the real experiment were imported and translated into movements of the virtual sample. Finally, the sample phantom was reconstructed from the simulated diffraction patterns using the same pipeline as used for the data from the physical experiment, with the same parameters. Reconstructions were made without adding noise to the diffraction patterns to attempt to simulate the experiment under ideal conditions.
The simulation volume is discretized into a 5 nm voxel grid in the directions transverse to the propagation direction, and 200 nm along the propagation direction. This is chosen because the interaction length in grazing incidence is much longer along the propagation direction, and in real reconstructions, resolution along the propagation direction is decreased. As a result, the resolution may be relaxed in this dimension for the simulation. The input wave chosen for the simulations is a reconstructed probe from experimental data from the cSAXS beamline, which is the wave field at the plane where it interacts with the sample, downstream of the focal point. This allows for ptychographic reconstructions of simulated data to use the reconstructed probe from previous scans as an initial guess, which greatly helps convergence.
The incoming wave is tilted to match grazing incidence geometry before propagating orthogonally through the simulated volume, each slice being in the plane normal to the direction of propagation, and then tilted again after interaction with the sample so the final exit wave is once again orthogonal to the detector. The simulation size is on the order of 1 to 4 ×10^3 slices, with each slice being approximately 1700 × 800 voxels.
Reconstruction
Ptychographic reconstructions were completed using Ptychoshelves <cit.>. A square area of 182 × 182 pixels around the center of the reflected beam from each far-field scattering pattern was used as the input for reconstructions. For reconstructions covering a large field of view, several overlapping Fermat spirals are combined into a single ptychographic reconstruction with a shared object <cit.>.
Two methods are used in succession for the ptychographic reconstruction: first the difference map algorithm <cit.>, followed by the least-squares maximum-likelihood method using compact sets (LSQ-MLc) <cit.>. As a stopping criterion, the number of iterations for each method is fixed at 300. Collected scattering patterns, which in the plane of the detector are tilted in Fourier space, undergo a coordinate transform to obtain uniform spacing in Fourier space prior to solving. This process is more commonly referred to as tilted plane correction <cit.>. After reconstruction, the samples are corrected for a linear phase ramp by fitting a 2-D plane to bare regions of the substrate across the image <cit.>, and 2-D phase unwrapping is performed.
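As an illustration of the phase-ramp correction step only (not the Ptychoshelves implementation), a linear phase ramp can be fitted to bare-substrate pixels and subtracted as sketched below; the mask defining bare regions is assumed to be available.

```python
import numpy as np

def remove_phase_ramp(phase, substrate_mask):
    """Fit a 2-D plane (tilt + offset) to the phase over bare-substrate pixels
    and subtract it from the whole image."""
    ny, nx = phase.shape
    Y, X = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([X[substrate_mask], Y[substrate_mask],
                         np.ones(substrate_mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, phase[substrate_mask], rcond=None)
    return phase - (coeffs[0] * X + coeffs[1] * Y + coeffs[2])
```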
Resolution estimation
1-D decoupled FRC involves performing separate 1-D Fourier transforms along the directions transverse and parallel to the beam propagation. This allows for a separation of spatial frequencies between the lower frequency range of the longitudinal axis and the higher frequency range of the transverse axis. A pair of images of the same area was taken for each incidence angle, and the correlation of their Fourier transforms along a single axis at a given incidence angle is determined. For the FRC calculations, a finer structure on the same wafer, fabricated by the same method as all other structures but with much smaller features than the Siemens star in the x-y plane, was used. All images used for FRC have tilted-plane correction applied to their ptychographic reconstructions prior to the FRC.
Height measurements, which are encoded in phase shift measurements in these images, are dependent on spatial frequency and therefore biased by low-frequency noise in the reconstructions. Band-passing is therefore required to achieve an accurate estimate of height. To achieve this, phase images are band-passed using a top-hat filter in the frequency domain to within a range of one decade of signal, chosen to be between the 1 × FRC resolution and 10 × FRC resolution estimated for each incidence angle. The standard deviation of phase measurements from two independent sets of data within the ROI after band-passing then provides an estimate of the repeatable precision of phase measurements and the height sensitivity for each incidence angle. The standard deviation between these two sets of measurements provides an estimate of the achievable precision of height measurements along the z-axis, whereas FRC estimates provide the error along x and y, i.e., the resolution along the transverse and longitudinal directions in the plane of the sample.
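A schematic version of this filtering step is sketched below as a generic illustration, not the analysis code; the pixel size is a placeholder, and the conversion in the final comment assumes the geometric phase-height relation given earlier.

```python
import numpy as np

def tophat_bandpass(img, pixel_size, res_fine, res_coarse):
    """Keep spatial frequencies between 1/res_coarse and 1/res_fine (a one-decade
    band if res_coarse = 10 * res_fine) and return the filtered real image."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    f = np.hypot(FX, FY)
    mask = (f >= 1.0 / res_coarse) & (f <= 1.0 / res_fine)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# The standard deviation of the band-passed phase over the ROI then gives the
# height precision via sigma_h = sigma_phi * wavelength / (4 * pi * sin(theta)).
```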
This study was partially funded from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765604 (MUMMERING). We also wish to acknowledge support from the Villum Experiment Programme and the Velux Foundations. We gratefully acknowledge the contribution of Professor Ole Hansen from DTU Nanolab to sample design and supervision of manufacturing.
§ COMPETING INTERESTS
The authors declare that they have no conflict of interest
|
http://arxiv.org/abs/2307.02826v1 | 20230706074535 | Realization of the unidirectional amplification in a cavity magnonic system | [
"Zi-Yuan Wang",
"Jie Qian",
"Yi-Pu Wang",
"Jie Li",
"J. Q. You"
] | physics.app-ph | [
"physics.app-ph"
] |
Realization of the unidirectional amplification in a cavity magnonic system
Interdisciplinary Center of Quantum Information, State Key Laboratory of Extreme Photonics and Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device, School of Physics, Zhejiang University, Hangzhou 310027, China
[email protected]
Interdisciplinary Center of Quantum Information, State Key Laboratory of Extreme Photonics and Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device, School of Physics, Zhejiang University, Hangzhou 310027, China
[email protected]
Interdisciplinary Center of Quantum Information, State Key Laboratory of Extreme Photonics and Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device, School of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, State Key Laboratory of Extreme Photonics and Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device, School of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, State Key Laboratory of Extreme Photonics and Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device, School of Physics, Zhejiang University, Hangzhou 310027, China
We experimentally demonstrate the nonreciprocal microwave amplification using a cavity magnonic system, consisting of a passive cavity (i.e., the split-ring resonator), an active feedback circuit integrated with an amplifier, and a ferromagnetic spin ensemble (i.e., a yttrium-iron-garnet sphere). Combining the amplification provided by the active circuit and the nonreciprocity supported by the cavity magnonics, we implement a nonreciprocal amplifier with the functions of both unidirectional amplification and reverse isolation. The microwave signal is amplified by 11.5 dB in the forward propagating direction and attenuated in the reverse direction by -34.7 dB, giving an isolation ratio of 46.2 dB. Such a unidirectional amplifier can be readily employed in quantum technologies, where the device can simultaneously amplify the weak signal output by the quantum system and isolate the sensitive quantum system from the backscattered external noise. Also, it is promising to explore more functions and applications using a cavity magnonic system with real gain.
J. Q. You
August 1, 2023
==================
Nonreciprocal devices, which exhibit the characteristic of unidirectionality, can play a crucial role in the information technology. To realize the nonreciprocity, various methods have been proposed, such as the magneto-optical
Faraday rotation <cit.>, nonlinearity <cit.>, spatial-temporal modulation <cit.>, and reservoir engineering <cit.>. The main function of nonreciprocal devices is to transmit information in the desired direction while isolating the backscattered noise. The typical feature is that the signal is transmitted in one direction with some insertion loss, while propagation in the reverse direction suffers a large attenuation. On the other hand, a propagating signal inevitably attenuates as the propagation distance increases, so gain media and amplification devices <cit.> are also critical components in information technology.
Under normal circumstances, nonreciprocal and gain devices are independent of each other in an information network, each performing its own function separately. With the increasingly high requirements for integrating signal-processing devices, realizing both nonreciprocity and signal amplification in a single device has become an urgent demand. Different systems have been proposed to realize unidirectional amplification, including atomic systems <cit.>, whispering-gallery microcavities <cit.>, Josephson circuits <cit.>, microwave optomechanical devices <cit.>, magnonic systems <cit.>, and others <cit.>.
Recently, cavity magnonics has emerged as a new research frontier <cit.>. Based on this platform, various applications in, e.g., memory <cit.>, cavity optomagnonics for quantum transduction <cit.>, and magnon sensing at the quantum level <cit.> have been reported. In the cavity magnonic system, a magnon mode (i.e., the Kittel mode) in a ferrite, e.g., the yttrium-iron-garnet (YIG) sphere, strongly couples to the microwave photons in the cavity. As a mature commercial material, YIG offers significant advantages such as a high spin density <cit.>, flexible adjustment of the resonant frequency <cit.>, and a low damping rate. Very recently, novel properties such as nonreciprocity and unidirectional invisibility have also been observed in the cavity magnonic system <cit.>, which arise from the interference between the coherent and dissipative magnon-photon interactions. By combining the easy-to-tune ferrite and specially designed cavities, isolators with both nearly infinite isolation ratios and adjustable working frequencies become implementable <cit.>.
In this work, we construct a nonreciprocal amplifier based on the cavity magnonic system, consisting of a passive cavity made by the split-ring resonator (SRR), an active circuit with an embedded amplifier, and a YIG sphere. These components are all connected to a strip-line waveguide that supports the traveling photon mode. The active circuit is employed to compensate the dissipation of the passive cavity until signal amplification occurs. The coupling strength between the magnon and cavity modes can be modified by adjusting the position of the YIG sphere with respect to the passive cavity. Through the comprehensive and synergistic regulation of the active circuit and the magnon-photon coupling strength, we experimentally realize microwave nonreciprocal amplification with a high isolation ratio. Under optimal conditions, the device exhibits a rightward propagating amplification of 11.5 dB and a reverse propagating attenuation of -34.7 dB. Such devices have potential applications in quantum networks and repeaters <cit.>. They may also be promising in protecting the sensitive quantum system from noise associated with the read-out electronics and amplifying the weak signal leaking out of the quantum nodes.
The nonreciprocal amplification device is depicted in Fig. <ref>(a), where the microwave circuit consists of a strip-line waveguide, a passive cavity made by the split-ring resonator (SRR), and an active circuit with an embedded amplifier. For the passive cavity, an SRR with parameters g=18 mm, b=5.2 mm, a=25 mm, and w=1.5 mm is side-coupled to the strip-line waveguide with a distance of d=0.2 mm, and the resonant frequency of the SRR is designed to be ω_c/2π=3.03 GHz to match the optimal working frequency of the active circuit <cit.>. The active circuit is capacitively coupled to the SRR via a gap of t=0.8 mm. By introducing a gain through the amplifier, the active circuit can compensate the loss of the passive cavity. In the experiment, we apply a DC voltage to the amplifier to continuously adjust the gain provided by the active circuit. The strip-line waveguide has a width of v=2.53 mm for 50 Ω impedance matching. Both the cavity and the waveguide are fabricated on an F4B substrate. A 1 mm-diameter YIG sphere is glued at the end of a displacement cantilever, and the relative position between the YIG sphere and the planar microwave circuit can be finely adjusted by a three-dimensional (3D) motor-controlled robot arm. An external magnetic field is applied perpendicularly to the planar device, which is used to both saturate the magnetization of the YIG sphere and tune the frequency of the magnon mode. To characterize this device, a vector network analyzer (VNA) with a signal power of -20 dBm is used to measure the transmission spectrum |S_12(21)|, where the subscript 21 (12) indicates that the microwave field is loaded at port 1 (2) and propagates to port 2 (1) (see Supplementary Material Sec. I).
Our device is schematically shown in Fig. <ref>(b), where the cavity mode and the magnon mode dissipate cooperatively to the traveling wave-type dissipative reservoir (waveguide). Here, α and β are the intrinsic damping rates of the magnon mode and the passive cavity mode, respectively, and β can be reduced by the active circuit, or even turned to be negative. The external damping rates of the magnon mode and the cavity mode, i.e., γ and κ, reflect their interactions with the waveguide traveling photon modes. The cooperative dissipation gives rise to the dissipative coupling between the cavity mode and the magnon mode <cit.>. Meanwhile, the cavity mode and the magnon mode can directly interact with each other via the spatial mode overlapping, which is attributed to the coherent magnon-photon coupling.
When the coupling mechanism of our system is dominated by both coherent and dissipative couplings, the interference between them has a significant modulation on the transmission spectrum of the system. Due to the asymmetric location of the YIG sphere with respect to the central line of the device, the phase difference between the coherent and dissipative couplings will be different when the microwave is loaded to the ports 1 and 2. Under the circumstances, the overall coupling between the cavity mode and the magnon mode can become direction-dependent. In other words, the nonreciprocity stems from the inconsistent interference effect between the coherent and dissipative couplings when the microwave transmission direction is reversed <cit.>.
We define ℂ_R(L)=M+iN to characterize the direction-dependent complex coupling, where M denotes the coherent coupling, iN corresponds to the dissipative coupling, with i indicating the non-Hermiticity of the dissipative coupling, and the subscript R(L) represents the case when the signal is loaded to the port 1 (2). When the traveling wave propagates in the opposite direction, ℂ_R≠ℂ_L, so the nonreciprocity emerges. Under the rotating-wave approximation, the Hamiltonian of the non-Hermitian system can be written as
H/ħ=ω̃_ca^†a+ω̃_mb^†b+ℂ_R(L)(a^†b+b^†a),
where ω̃_c=ω_c-i(β+κ) and ω̃_m=ω_m-i(α+γ) are the complex frequencies of the cavity mode and the magnon mode, respectively, while a^† (a) and b^† (b) are the creation (annihilation) operators of the cavity mode and the magnon mode, respectively.
Below we first characterize the loss compensation of the active circuit to the passive cavity and determine the relationship between the bias voltage of the amplifier and the compensated dissipation of the passive cavity. Figure <ref>(a) shows the response of the cavity mode to the bias voltage applied to the amplifier, where the measured transmission spectra |S_T| at a series of bias voltages are plotted. When the voltage is zero, |S_T| < 0 dB, and the spectrum shows a dip at the resonant frequency of the cavity mode. With the increase of the amplifier voltage, the amplitude of |S_T| decreases until an ultra-sharp dip (∼55 dB) appears. Then, it gradually increases and the spectrum turns into a resonance peak with |S_T| > 0 dB. This process is accompanied by the intrinsic damping rate of the cavity mode being compensated to zero and then becoming negative. The transmission coefficient of the side-coupled cavity mode can be described by the following equation:
S_T = 1 + κ_e/[i(ω-ω_c) - (κ_e+β_e)],
where κ_e=κ·η and β_e=β-g are the effective external and intrinsic damping rates of the cavity mode, respectively. The parameter η indicates that the external damping rate of the cavity mode is affected by the active circuit when the bias voltage is applied. The parameter g is introduced as the gain coefficient, which is determined by the amplifier bias voltage and is equal to the linewidth compensation of the cavity mode.
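For illustration, a minimal fitting sketch based on this expression is given below; it is not the authors' analysis code, and the initial-guess values are placeholders taken from the bare-cavity parameters quoted later in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def S_T_dB(f, f_c, kappa_e, beta_e):
    """|S_T| in dB for the side-coupled cavity; all arguments in the same
    frequency units (e.g. GHz). beta_e < 0 corresponds to net gain."""
    s = 1 + kappa_e / (1j * (f - f_c) - (kappa_e + beta_e))
    return 20 * np.log10(np.abs(s))

# Hypothetical usage: fit the measured spectrum at one bias voltage, where
# freqs_ghz and s_meas_db are placeholder arrays of measured data.
# popt, _ = curve_fit(S_T_dB, freqs_ghz, s_meas_db, p0=[3.03, 0.048, 0.0155])
```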
The bare cavity mode (corresponding to zero bias voltage) has an external damping rate of κ/2π=48 MHz, and an intrinsic damping rate of β/2π=15.5 MHz. By continuously tuning the bias voltage, the fitted β_e and κ_e versus the bias voltage are plotted as the green and blue circles in Fig. <ref>(b) and Fig. <ref>(c), respectively. When the amplifier voltage is small, the intrinsic dissipation of the cavity mode is not fully compensated (g<β, β_e>0). While the amplifier voltage reaches 2.75 V, the intrinsic damping rate of the cavity mode is almost exactly compensated by the active circuit (g=β, β_e=0), resulting in an extremely abrupt dip in |S_T|, as marked by an arrow in Fig. <ref>(a). With the increase of the amplifier bias voltage, the gain coefficient exceeds the intrinsic damping rate, giving rise to an effective negative intrinsic damping rate of the cavity mode (g>β, β_e<0). It is worth noting that the amplifier also affects the external damping rate of the cavity mode, as shown in Fig. <ref>(c). However, its effect on the microwave transmission is not significant, and the transmitted signal appears as either amplification or attenuation, depending primarily on the compensation of the intrinsic dissipation (see Supplementary Material Sec. II for details). By introducing the active circuit, the transmission amplitude becomes amplified around the cavity mode frequency, and the loss-compensated cavity is a crucial element for the subsequent realization of the nonreciprocal amplification.
With the amplification mechanism in hand, the next step is to construct the nonreciprocity of the system. To this end, the YIG sphere is attached to the cavity. The magnon mode sustained by the YIG sphere is coherently and dissipatively coupled to the cavity photon mode. The nonreciprocity originates from the interference between the coherent and dissipative couplings, and the phase of the interference term is related to the direction of the microwave propagation, which leads to a direction-dependent complex coupling ℂ_R(L). The magnon-photon coupling induces hybridized modes with complex frequencies
ω̃_± = (1/2){ω_c + ω_m - i(β_e+α+κ_e+γ) ± √([(ω_c-ω_m) - i(β_e-α+κ_e+γ)]^2 + 4ℂ^2)}.
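As a quick numerical cross-check (a sketch under assumed placeholder parameters, not the authors' code), the hybridized frequencies can also be obtained by diagonalizing the effective 2×2 non-Hermitian matrix built from the mode frequencies, the effective damping rates, and the complex coupling.

```python
import numpy as np

def hybrid_mode_frequencies(w_c, w_m, beta_e, kappa_e, alpha, gamma, C):
    """Complex eigenfrequencies of the effective two-mode non-Hermitian matrix."""
    H = np.array([[w_c - 1j * (beta_e + kappa_e), C],
                  [C, w_m - 1j * (alpha + gamma)]])
    return np.linalg.eigvals(H)

# Placeholder parameters, all in units of frequency/2*pi (MHz), purely for illustration.
print(hybrid_mode_frequencies(3030.0, 3030.0, -5.0, 48.0, 1.0, 1.0, 5.6 - 2.7j))
```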
Using the input-output theory (see Supplementary Material Sec. III for details), we obtain the transmission spectrum of the coupled system as
S_21(12) = 1 + κ_e/[i(ω-ω_c) - (κ_e+β_e) + ℂ^2_R(L)/(i(ω-ω_m) - (α+γ))],
where the complex coupling strength ℂ_R(L)=M+iN between the cavity and magnon modes is direction-dependent as stated above, resulting in nonreciprocal transmission spectra, i.e., S_21≠ S_12. It is necessary to adjust the ratio between the coherent and dissipative coupling strengths for an optimal nonreciprocal transmission. Experimentally, we modify the complex coupling strength ℂ_R(L) by fixing the height of the YIG sphere relative to the planar device at a distance of D=1 mm and then precisely controlling its position in the x-y plane.
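The direction dependence can be illustrated with a short numerical sketch of the transmission model above; this is illustrative only, with placeholder damping rates, while the two complex couplings are the values fitted later in the text. The isolation ratio 20 log10|S_21/S_12| follows directly.

```python
import numpy as np

def S_trans(f, f_c, f_m, kappa_e, beta_e, alpha, gamma, C):
    """Transmission of the coupled cavity-magnon system for a complex coupling C.
    All frequency-like quantities are in the same units (here MHz)."""
    return 1 + kappa_e / (1j * (f - f_c) - (kappa_e + beta_e)
                          + C**2 / (1j * (f - f_m) - (alpha + gamma)))

f = np.linspace(2980.0, 3080.0, 2001)      # probe frequency (MHz)
C_R, C_L = 5.66 - 4.3j, 5.6 - 2.74j        # fitted couplings from the text (MHz)
args = dict(f_c=3030.0, f_m=3030.0, kappa_e=48.0, beta_e=-5.0, alpha=1.0, gamma=1.0)
S21 = S_trans(f, C=C_R, **args)
S12 = S_trans(f, C=C_L, **args)
iso_dB = 20 * np.log10(np.abs(S21 / S12))  # isolation ratio versus frequency
```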
To clearly show the region where nonreciprocal transmission exists and to reveal its origin in our device, here we place the YIG sphere at three different positions relative to the cavity, as shown in Fig. <ref>(a). We depict the corresponding |S_21| and |S_12| mappings versus the field detuning Δ_m=ω-ω_m and the probe frequency detuning Δ_c=ω-ω_c in Figs. <ref>(c)-(e), respectively. When the YIG sphere is at position 1, the two hybridized modes shown in Fig. <ref>(c) exhibit level repulsion, a characteristic of coherent magnon-photon coupling <cit.>. Using Eq. (<ref>) to fit the experimental results, we find that at position 1 the real part of the complex coupling strength, M=|Re(ℂ)|, is significantly larger than the imaginary part, N=|Im(ℂ)| ≈ 0, as depicted by the blue dot in Fig. <ref>(b). Nonreciprocity cannot be observed in this instance because the interference term between the coherent and dissipative couplings is negligibly small. As observed in the left (|S_21|) and right (|S_12|) panels of Fig. <ref>(c), the transmission mappings are nearly identical.
When the YIG sphere is placed at position 2, the mode hybridization as shown in Fig. <ref>(d) is quite distinct from level repulsion but similar to level attraction, indicating the coexistence of dissipative and coherent magnon-photon couplings <cit.>. As indicated by the orange dot in Fig. <ref>(b), when the YIG sphere is placed at position 2, the coherent and dissipative coupling strengths are comparable. The interference between coherent and dissipative couplings then plays a crucial role. Consequently, the nonreciprocity is evident in Fig. <ref>(d), where |S_21|≠|S_12| is observed at various detunings. As an illustration, we plot the nonreciprocal transmission at the arrow-marked field detuning in Fig. <ref>(d). As depicted in the inset of Fig. <ref>(b), red and blue curves correspond to |S_21| and |S_12|, respectively.
The coupling strength ℂ tends to be zero [gray dot in Fig. <ref>(b)] when the YIG sphere is placed at position 3, which is far from the cavity and transmission line. Conceivably, nonreciprocal transmission does not occur in this case either. The measured transmission mappings of |S_21| and |S_12| are shown in Fig. <ref>(e). Through the measurements at the positions above, we can find that the coexistence of coherent and dissipative couplings is very crucial for the emergence of nonreciprocal transmission. In addition, to obtain the maximum nonreciprocal response, the coherent and dissipative coupling strengths should be comparable.
Combining the transmission amplification caused by the dissipation-compensated cavity and the nonreciprocity of the coupled system, we achieve unidirectional amplification and isolation, as shown in Fig. <ref>. In order to quantify the isolation, the difference between the rightward (S_21) and leftward (S_12) transmission amplitudes is extracted and plotted as 20log_10|S_21/S_12|. Here we use its absolute value as the isolation ratio (Iso.) of the system. The experimentally observed isolation ratios at different amplifier bias voltages (V=0, 4.8, 5 V) are plotted versus Δ_m and Δ_c in Figs. <ref>(a)-(c). As the bias voltage of the amplifier rises, the intrinsic damping rate of the hybrid system transitions from positive to negative, which indicates the generation of a net gain in the system (see Supplementary Material Sec. IV). The maximum isolation ratio increases as more signal gain is supplied to the system. While maintaining a high level of isolation, it is crucial to identify the optimal operating point in order to achieve a relatively large unidirectional signal amplification. We choose the field detuning position marked by the gray planes in Figs. <ref>(a)-(c) as examples, and plot the |S_21| and |S_12| spectra in Figs. <ref>(d)-(f), respectively. The circle points are the experimental results, while the solid curves are the theoretical results obtained using Eq. (<ref>), which are in good agreement. The microwave transmission depicted in Fig. <ref>(d) is nonreciprocal without amplification, because the bias voltage of the amplifier is set to zero. In Figs. <ref>(e) and <ref>(f), unidirectional amplification is realized when a bias voltage is applied to the amplifier. At the working frequency of ω/2π=3.003 GHz, the rightward propagating microwave can be amplified by 11.5 dB, whereas the leftward propagating microwave can be attenuated by 34.7 dB, giving an isolation ratio of 46.2 dB. Fitting these spectra with Eq. (<ref>), we obtain the propagation direction-dependent complex coupling strengths of the system, which are ℂ_R=(5.66-4.3i) MHz and ℂ_L=(5.6-2.74i) MHz.
In conclusion, we have designed a novel nonreciprocal device that performs unidirectional amplification and reverse isolation simultaneously. The device consists of an active circuit with an embedded amplifier, a passive cavity made by an SRR, and a YIG sphere. The unidirectional amplification of the microwave signal is accomplished by combining the nonreciprocity of the cavity magnonic system with the conventional active feedback of the gain medium. By adjusting the bias voltage of the amplifier in the active feedback circuit, we can compensate for the loss of the passive cavity mode and convert it to a gain regime. Meanwhile, the coupling strength between the cavity photon mode and the magnon mode is controlled by the relative position between the device and the YIG sphere. By balancing the coherent and dissipative couplings, the interference between them can result in nonreciprocity in our system. By optimizing the amplifier bias voltage and the photon-magnon coupling strength, we are able to achieve microwave transmission amplification of 11.5 dB in one propagation direction and attenuation of 34.7 dB in the opposite propagation direction. Such a unidirectional amplification device may have promising applications in quantum information technologies, as it can both amplify the weak signal leaking from the principal quantum system and isolate the back-scattered noise from the readout electronics. This design, integrating two functions in a single device, will provide advantages for the construction of large-scale information networks.
We thank Zi-Qi Wang for helpful discussion. This work is supported by the National Key
Research and Development Program of China (Grant No. 2022YFA1405200), the National Natural Science Foundation of China (Grants No. 92265202, No. 11934010, and No. 12174329), the Fundamental Research Funds for the Central Universities (Grant No. 2021FZZX001-02).
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
[Wolfe-95] R. Wolfe, W.-K. Wang, D. DiGiovanni, and A. Vengsarkar, All-fiber magneto-optic isolator based on the nonreciprocal phase shift in asymmetric fiber, Optics Letters 20, 1740–1742 (1995).
[Wang-05] Z. Wang and S. Fan, Optical circulators in two-dimensional magneto-optical photonic crystals, Optics Letters 30, 1989–1991 (2005).
[Belotelov-07] V. Belotelov, L. Doskolovich, and A. Zvezdin, Extraordinary magneto-optical effects and transmission through metal-dielectric plasmonic systems, Physical Review Letters 98, 077401 (2007).
[Bi-11] L. Bi, J. Hu, P. Jiang, D. H. Kim, G. F. Dionne, L. C. Kimerling, and C. Ross, On-chip optical isolation in monolithically integrated non-reciprocal optical resonators, Nature Photonics 5, 758–762 (2011).
[Chin-13] J. Y. Chin, T. Steinle, T. Wehlus, D. Dregely, T. Weiss, V. I. Belotelov, B. Stritzker, and H. Giessen, Nonreciprocal plasmonics enables giant enhancement of thin-film Faraday rotation, Nature Communications 4, 1599 (2013).
[Christodoulides-88] D. Christodoulides and R. Joseph, Discrete self-focusing in nonlinear arrays of coupled waveguides, Optics Letters 13, 794–796 (1988).
[Shi-15] Y. Shi, Z. Yu, and S. Fan, Limitations of nonlinear optical isolators due to dynamic reciprocity, Nature Photonics 9, 388–392 (2015).
[Fan-11] L. Fan, J. Wang, L. T. Varghese, H. Shen, B. Niu, Y. Xuan, A. M. Weiner, and M. Qi, An all-silicon passive optical diode, Science 335, 447–450 (2012).
[Wang-13] J. Wang, L. Fan, L. T. Varghese, H. Shen, Y. Xuan, B. Niu, and M. Qi, A theoretical model for an optical diode built with nonlinear silicon microrings, Journal of Lightwave Technology 31, 313–321 (2012).
[Huang-18] R. Huang, A. Miranowicz, J.-Q. Liao, F. Nori, and H. Jing, Nonreciprocal photon blockade, Physical Review Letters 121, 153601 (2018).
[Hwang-97] I. K. Hwang, S. H. Yun, and B. Y. Kim, All-fiber-optic nonreciprocal modulator, Optics Letters 22, 507–509 (1997).
[WangDW-13] D.-W. Wang, H.-T. Zhou, M.-J. Guo, J.-X. Zhang, J. Evers, and S.-Y. Zhu, Optical diode made from a moving photonic crystal, Physical Review Letters 110, 093901 (2013).
[Doerr-11] C. R. Doerr, N. Dupuis, and L. Zhang, Optical isolator using two tandem phase modulators, Optics Letters 36, 4293–4295 (2011).
[Lira-12] H. Lira, Z. Yu, S. Fan, and M. Lipson, Electrically driven nonreciprocity induced by interband photonic transition on a silicon chip, Physical Review Letters 109, 033901 (2012).
[Metelmann-15] A. Metelmann and A. A. Clerk, Nonreciprocal photon transmission and amplification via reservoir engineering, Physical Review X 5, 021025 (2015).
[Fang et al.(2017)Fang,
Luo, Metelmann, Matheny,
Marquardt, Clerk, and Painter]Fang-17
author author K. Fang, author J. Luo, author A. Metelmann, author
M. H. Matheny, author
F. Marquardt, author
A. A. Clerk, and author
O. Painter, title title Generalized non-reciprocity in an optomechanical circuit
via synthetic magnetism and reservoir engineering, @noop
journal journal Nature Physics volume 13, pages 465–471 (year
2017)NoStop
[Xiao et al.(2010)Xiao,
Drachev, Kildishev, Ni,
Chettiar, Yuan, and Shalaev]Xiao-10
author author S. Xiao, author V. P. Drachev,
author A. V. Kildishev, author X. Ni, author
U. K. Chettiar, author
H.-K. Yuan, and author
V. M. Shalaev, title
title Loss-free and active optical negative-index
metamaterials, @noop journal journal
Nature volume 466, pages 735–738
(year 2010)NoStop
[Massel et al.(2011)Massel,
Heikkilä, Pirkkalainen, Cho, Saloniemi, Hakonen, and Sillanpää]Massel-11
author author F. Massel, author T. T. Heikkilä, author J.-M. Pirkkalainen, author S.-U. Cho, author H. Saloniemi,
author P. J. Hakonen, and author M. A. Sillanpää, title title Microwave amplification with
nanomechanical resonators, @noop journal journal Nature volume 480, pages
351–354 (year 2011)NoStop
[Fermann et al.(2000)Fermann, Kruglov, Thomsen, Dudley, and Harvey]Fermann-00
author author M. E. Fermann, author V. Kruglov,
author B. Thomsen, author J. M. Dudley, and author J. D. Harvey, title
title Self-similar propagation and amplification of
parabolic pulses in optical fibers, @noop journal
journal Physical review letters volume
84, pages 6010 (year 2000)NoStop
[De Leon and Berini(2010)]Leon-10
author author I. De Leon and author P. Berini, title title Amplification of
long-range surface plasmons by a dipolar gain medium, @noop
journal journal Nature Photonics volume 4, pages 382–387 (year
2010)NoStop
[Liu et al.(2007)Liu,
Sun, Pan, Wang, Kimerling, Koch, and Michel]Liu-07
author author J. Liu, author X. Sun, author D. Pan, author
X. Wang, author L. C. Kimerling, author T. L. Koch, and author J. Michel, title title
Tensile-strained, n-type ge as a gain medium for monolithic laser
integration on si, @noop journal journal Optics express volume 15, pages 11272–11277 (year 2007)NoStop
[Khurgin and Sun(2012)]Khurgin-12
author author J. B. Khurgin and author G. Sun, title title Practicality of compensating
the loss in the plasmonic waveguides using semiconductor gain medium, @noop journal journal Applied Physics
Letters volume 100, pages 011105
(year 2012)NoStop
[Choksi et al.(2022)Choksi,
Liu, Ghasemi, and Qian]Choksi-22
author author N. Choksi, author Y. Liu,
author R. Ghasemi, and author L. Qian, title title Sub-megahertz spectral dip in a
resonator-free twisted gain medium, @noop journal
journal Nature Photonics volume 16, pages 498–504 (year 2022)NoStop
[Stehlik et al.(2016)Stehlik, Liu, Eichler, Hartke, Mi, Gullans, Taylor, and Petta]Stehlik-16
author author J. Stehlik, author Y.-Y. Liu,
author C. Eichler, author T. Hartke, author
X. Mi, author M. Gullans, author J. Taylor, and author J. R. Petta, title title
Double quantum dot floquet gain medium, @noop journal journal Physical Review X volume 6, pages 041027 (year
2016)NoStop
[Frolov et al.(1999)Frolov,
Vardeny, Yoshino, Zakhidov, and Baughman]Frolov-99
author author S. Frolov, author Z. Vardeny,
author K. Yoshino, author A. Zakhidov, and author R. Baughman, title
title Stimulated emission in high-gain organic
media, @noop journal journal Physical
Review B volume 59, pages R5284
(year 1999)NoStop
[Yao et al.(2017)Yao,
Gui, Rao, Kaur, Chen, Lu, Xiao, Guo,
Marzlin, and Hu]Yao-17
author author B. Yao, author Y. Gui, author J. Rao, author
S. Kaur, author X. Chen, author W. Lu, author Y. Xiao, author H. Guo, author
K.-P. Marzlin, and author
C.-M. Hu, title title Cooperative polariton dynamics in feedback-coupled
cavities, @noop journal journal
Nature communications volume 8, pages
1437 (year 2017)NoStop
[Yao et al.(2023)Yao,
Gui, Rao, Zhang,
Lu, and Hu]Yao-23
author author B. Yao, author Y. Gui, author J. Rao, author
Y. Zhang, author W. Lu, and author C.-M. Hu, title title
Coherent microwave emission of gain-driven polaritons, @noop
journal journal Physical Review Letters volume 130, pages 146702 (year 2023)NoStop
[Lin et al.(2019)Lin,
Zhang, Hu, Niu, Gong, and Gong]Lin-19
author author G. Lin, author S. Zhang, author Y. Hu, author
Y. Niu, author J. Gong, and author S. Gong, title title
Nonreciprocal amplification with four-level hot atoms, @noop
journal journal Physical Review Letters volume 123, pages 033902 (year 2019)NoStop
[Peng et al.(2014)Peng,
Özdemir, Lei, Monifi,
Gianfreda, Long, Fan,
Nori, Bender, and Yang]Peng-14
author author B. Peng, author Ş. K. Özdemir, author F. Lei,
author F. Monifi, author M. Gianfreda, author
G. L. Long, author
S. Fan, author F. Nori, author C. M. Bender, and author L. Yang, title title
Parity–time-symmetric whispering-gallery microcavities, @noop
journal journal Nature Physics volume 10, pages 394–398 (year
2014)NoStop
[Abdo et al.(2013)Abdo,
Sliwa, Frunzio, and Devoret]Abdo-13
author author B. Abdo, author K. Sliwa,
author L. Frunzio, and author M. Devoret, title title Directional amplification with a
josephson circuit, @noop journal journal Physical Review X volume 3, pages 031001 (year 2013)NoStop
[Abdo et al.(2014)Abdo,
Sliwa, Shankar, Hatridge,
Frunzio, Schoelkopf, and Devoret]Abdo-14
author author B. Abdo, author K. Sliwa,
author S. Shankar, author M. Hatridge, author
L. Frunzio, author R. Schoelkopf, and author M. Devoret, title title Josephson directional amplifier for quantum measurement of
superconducting circuits, @noop journal journal Physical review letters volume 112, pages 167701 (year 2014)NoStop
[Malz et al.(2018)Malz,
Tóth, Bernier, Feofanov,
Kippenberg, and Nunnenkamp]Malz-18
author author D. Malz, author L. D. Tóth,
author N. R. Bernier, author A. K. Feofanov, author
T. J. Kippenberg, and author
A. Nunnenkamp, title
title Quantum-limited directional amplifiers with
optomechanics, @noop journal journal
Physical review letters volume 120, pages 023601 (year 2018)NoStop
[de Lépinay et al.(2019)de Lépinay, Damskägg, Ockeloen-Korppi, and Sillanpää]Mika-19
author author L. M. de Lépinay, author E. Damskägg, author C. F. Ockeloen-Korppi, and author M. A. Sillanpää, title title Realization of directional amplification in a microwave
optomechanical device, @noop journal journal Physical Review Applied volume 11, pages 034027 (year 2019)NoStop
[Zhao et al.(2022)Zhao,
Peng, Yang, Chao,
Li, Wang, and Zhou]Zhao-22A
author author C. Zhao, author R. Peng, author Z. Yang, author
S. Chao, author C. Li, author Z. Wang, and author L. Zhou, title title Nonreciprocal amplification
in a cavity magnonics system, @noop journal
journal Physical Review A volume
105, pages 023709 (year 2022)NoStop
[Koutserimpas and Fleury(2018)]Koutserimpas-18
author author T. T. Koutserimpas and author R. Fleury, title title Nonreciprocal
gain in non-hermitian time-floquet systems, @noop journal journal Physical review letters volume 120, pages 087401 (year
2018)NoStop
[Kamal and Metelmann(2017)]Kamal-17
author author A. Kamal and author A. Metelmann, title title Minimal
models for nonreciprocal amplification using biharmonic drives, @noop journal journal Physical Review
Applied volume 7, pages 034031
(year 2017)NoStop
[Jiang et al.(2018)Jiang,
Maayani, Carmon, Nori, and Jing]Jiang-18
author author Y. Jiang, author S. Maayani,
author T. Carmon, author F. Nori, and author
H. Jing, title title Nonreciprocal phonon laser, @noop journal journal Physical Review Applied volume 10, pages 064037 (year
2018)NoStop
[Galiffi, Huidobro, and Pendry(2019)]Galiffi-19
author author E. Galiffi, author P. Huidobro, and author J. B. Pendry, title title Broadband nonreciprocal
amplification in luminal metamaterials, @noop journal journal Physical review letters volume 123, pages 206101 (year
2019)NoStop
[Song et al.(2019)Song,
Shi, Lin, and Fan]Song-19
author author A. Y. Song, author Y. Shi, author Q. Lin, and author
S. Fan, title title Direction-dependent parity-time phase transition and
nonreciprocal amplification with dynamic gain-loss modulation, @noop
journal journal Physical Review A volume 99, pages 013824 (year
2019)NoStop
[Taravati and Eleftheriades(2021)]Taravati-21
author author S. Taravati and author G. V. Eleftheriades, title title
Full-duplex reflective beamsteering metasurface featuring magnetless
nonreciprocal amplification, @noop journal
journal Nature Communications volume
12, pages 4414 (year 2021)NoStop
[Rameshti et al.(2022)Rameshti, Kusminskiy, Haigh, Usami, Lachance-Quirion, Nakamura,
Hu, Tang, Bauer, and Blanter]Rameshti-14
author author B. Z. Rameshti, author S. V. Kusminskiy, author J. A. Haigh, author K. Usami,
author D. Lachance-Quirion,
author Y. Nakamura, author C.-M. Hu, author
H. X. Tang, author
G. E. Bauer, and author
Y. M. Blanter, title
title Cavity magnonics, @noop journal journal Physics Reports volume
979, pages 1–61 (year 2022)NoStop
[Zhang et al.(2015)Zhang,
Zou, Zhu, Marquardt,
Jiang, and Tang]Zhang-15
author author X. Zhang, author C.-L. Zou,
author N. Zhu, author
F. Marquardt, author
L. Jiang, and author
H. X. Tang, title title Magnon dark modes and gradient memory, @noop
journal journal Nature communications volume 6, pages 8914 (year
2015)NoStop
[Shen et al.(2021)Shen,
Wang, Li, Zhu, Agarwal, and You]Shen-21
author author R.-C. Shen, author Y.-P. Wang,
author J. Li, author
S.-Y. Zhu, author G. Agarwal, and author J. You, title title
Long-time memory and ternary logic gate using a multistable cavity magnonic
system, @noop journal journal
Physical Review Letters volume 127, pages 183202 (year 2021)NoStop
[Osada et al.(2016)Osada,
Hisatomi, Noguchi, Tabuchi,
Yamazaki, Usami, Sadgrove,
Yalla, Nomura, and Nakamura]Osada-16
author author A. Osada, author R. Hisatomi,
author A. Noguchi, author Y. Tabuchi, author
R. Yamazaki, author
K. Usami, author M. Sadgrove, author R. Yalla, author M. Nomura, and author Y. Nakamura, title title Cavity optomagnonics with spin-orbit coupled photons, @noop
journal journal Physical review letters volume 116, pages 223601 (year 2016)NoStop
[Tabuchi et al.(2015)Tabuchi, Ishino, Noguchi, Ishikawa, Yamazaki, Usami, and Nakamura]Tabuchi-15
author author Y. Tabuchi, author S. Ishino,
author A. Noguchi, author T. Ishikawa, author
R. Yamazaki, author
K. Usami, and author
Y. Nakamura, title title Coherent coupling between a ferromagnetic magnon and a
superconducting qubit, @noop journal journal Science volume 349, pages
405–408 (year 2015)NoStop
[Hisatomi et al.(2016)Hisatomi, Osada, Tabuchi, Ishikawa, Noguchi, Yamazaki, Usami, and Nakamura]Hisatomi-16
author author R. Hisatomi, author A. Osada,
author Y. Tabuchi, author T. Ishikawa, author
A. Noguchi, author R. Yamazaki, author K. Usami, and author Y. Nakamura, title title Bidirectional conversion between microwave and light via
ferromagnetic magnons, @noop journal journal Physical Review B volume 93, pages 174427 (year 2016)NoStop
[Haigh et al.(2015)Haigh,
Langenfeld, Lambert, Baumberg, Ramsay, Nunnenkamp, and Ferguson]James-15
author author J. Haigh, author S. Langenfeld,
author N. Lambert, author J. Baumberg, author
A. Ramsay, author A. Nunnenkamp, and author A. Ferguson, title title Magneto-optical coupling in whispering-gallery-mode resonators, @noop journal journal Physical Review
A volume 92, pages 063845 (year 2015)NoStop
[Haigh et al.(2016)Haigh,
Nunnenkamp, Ramsay, and Ferguson]James-16
author author J. Haigh, author A. Nunnenkamp,
author A. Ramsay, and author A. Ferguson, title
title Triple-resonant brillouin light scattering in
magneto-optical cavities, @noop journal journal Physical review letters volume 117, pages 133602 (year 2016)NoStop
[Zhang et al.(2016)Zhang,
Zhu, Zou, and Tang]Tang-16
author author X. Zhang, author N. Zhu, author C.-L. Zou, and author H. X. Tang, title
title Optomagnonic whispering gallery
microresonators, @noop journal journal
Physical review letters volume 117, pages 123605 (year 2016)NoStop
[Kusminskiy, Tang, and Marquardt(2016)]Kusminskiy-16
author author S. V. Kusminskiy, author H. X. Tang, and author F. Marquardt, title title Coupled
spin-light dynamics in cavity optomagnonics, @noop journal journal Physical Review A volume 94, pages 033821 (year
2016)NoStop
[Graf et al.(2018)Graf,
Pfeifer, Marquardt, and Kusminskiy]Kusminskiy-18
author author J. Graf, author H. Pfeifer,
author F. Marquardt, and author S. V. Kusminskiy, title title Cavity optomagnonics with
magnetic textures: Coupling a magnetic vortex to light, @noop
journal journal Physical Review B volume 98, pages 241406 (year
2018)NoStop
[Sharma et al.(2019)Sharma,
Rameshti, Blanter, and Bauer]Bauer-19
author author S. Sharma, author B. Z. Rameshti, author Y. M. Blanter, and author G. E. Bauer, title title Optimal mode
matching in cavity optomagnonics, @noop journal
journal Physical Review B volume 99, pages 214423 (year 2019)NoStop
[Wu et al.(2021)Wu,
Wang, Wu, Li, and You]Weijiang-21
author author W.-J. Wu, author Y.-P. Wang,
author J.-Z. Wu, author J. Li, and author
J. You, title title Remote magnon entanglement between two massive
ferrimagnetic spheres via cavity optomagnonics, @noop journal journal Physical Review A volume 104, pages 023711 (year
2021)NoStop
[Lauk et al.(2020)Lauk,
Sinclair, Barzanjeh, Covey,
Saffman, Spiropulu, and Simon]Lauk-20
author author N. Lauk, author N. Sinclair,
author S. Barzanjeh, author J. P. Covey, author
M. Saffman, author M. Spiropulu, and author C. Simon, title title
Perspectives on quantum transduction, @noop journal journal Quantum Science and Technology volume 5, pages 020501 (year
2020)NoStop
[Lachance-Quirion et al.(2020)Lachance-Quirion, Wolski, Tabuchi,
Kono, Usami, and Nakamura]Quirion-20
author author D. Lachance-Quirion, author S. P. Wolski, author Y. Tabuchi,
author S. Kono, author
K. Usami, and author
Y. Nakamura, title title Entanglement-based single-shot detection of a single
magnon with a superconducting qubit, @noop journal
journal Science volume 367, pages 425–428 (year 2020)NoStop
[Wolski et al.(2020)Wolski,
Lachance-Quirion, Tabuchi, Kono, Noguchi, Usami, and Nakamura]Wolski-20
author author S. P. Wolski, author D. Lachance-Quirion, author Y. Tabuchi, author S. Kono,
author A. Noguchi, author K. Usami, and author
Y. Nakamura, title title Dissipation-based quantum sensing of magnons with a
superconducting qubit, @noop journal journal Physical Review Letters volume 125, pages 117701 (year 2020)NoStop
[Xu et al.(2023a)Xu, Gu, Li, Weng,
Wang, Li, Wang, Zhu, and You]Xu-22
author author D. Xu, author X.-K. Gu, author H.-K. Li, author
Y.-C. Weng, author
Y.-P. Wang, author
J. Li, author H. Wang, author S.-Y. Zhu, and author J. You, title title
Quantum control of a single magnon in a macroscopic spin system, @noop journal journal Physical Review
Letters volume 130, pages 193603
(year 2023a)NoStop
[Xu et al.(2023b)Xu, Gu, Weng, Li,
Wang, Zhu, and You]xu2023deterministic
author author D. Xu, author X.-K. Gu, author Y.-C. Weng, author
H.-K. Li, author Y.-P. Wang, author S.-Y. Zhu, and author J. You, title title
Deterministic generation and tomography of a macroscopic bell state between
a millimeter-sized spin system and a superconducting qubit, @noop
journal journal arXiv preprint arXiv:2306.09677 (year 2023b)NoStop
[Huebl et al.(2013)Huebl,
Zollitsch, Lotze, Hocke,
Greifenstein, Marx, Gross, and Goennenwein]Hans-13
author author H. Huebl, author C. W. Zollitsch, author J. Lotze,
author F. Hocke, author M. Greifenstein, author
A. Marx, author R. Gross, and author S. T. Goennenwein, title title
High cooperativity in coupled microwave resonator ferrimagnetic insulator
hybrids, @noop journal journal
Physical Review Letters volume 111, pages 127003 (year 2013)NoStop
[Zhang et al.(2014)Zhang,
Zou, Jiang, and Tang]Zhang-14
author author X. Zhang, author C.-L. Zou,
author L. Jiang, and author H. X. Tang, title
title Strongly coupled magnons and cavity microwave
photons, @noop journal journal
Physical review letters volume 113, pages 156401 (year 2014)NoStop
[Tabuchi et al.(2014)Tabuchi, Ishino, Ishikawa, Yamazaki, Usami, and Nakamura]Tabuchi-14
author author Y. Tabuchi, author S. Ishino,
author T. Ishikawa, author R. Yamazaki, author
K. Usami, and author
Y. Nakamura, title title Hybridizing ferromagnetic magnons and microwave photons in
the quantum limit, @noop journal journal Physical review letters volume 113, pages 083603 (year 2014)NoStop
[Chumak et al.(2015)Chumak,
Vasyuchka, Serga, and Hillebrands]Chumak-15
author author A. V. Chumak, author V. I. Vasyuchka, author A. A. Serga, and author B. Hillebrands, title title Magnon
spintronics, @noop journal journal
Nature physics volume 11, pages
453–461 (year 2015)NoStop
[Wang and Hu(2020)]WangY-20
author author Y.-P. Wang and author C.-M. Hu, title title Dissipative couplings in
cavity magnonics, @noop journal journal Journal of Applied Physics volume 127, pages 130901 (year 2020)NoStop
[Wang et al.(2019)Wang,
Rao, Yang, Xu, Gui, Yao, You, and Hu]WangY-19
author author Y.-P. Wang, author J. Rao, author Y. Yang, author
P.-C. Xu, author Y. Gui, author B. Yao, author J. You, and author C.-M. Hu, title title Nonreciprocity and unidirectional
invisibility in cavity magnonics, @noop journal
journal Physical review letters volume
123, pages 127202 (year 2019)NoStop
[Qian et al.(2020)Qian,
Rao, Gui, Wang, An, and Hu]Qian-20
author author J. Qian, author J. Rao, author Y. Gui, author
Y. Wang, author Z. An, and author C.-M. Hu, title title
Manipulation of the zero-damping conditions and unidirectional invisibility
in cavity magnonics, @noop journal journal Applied Physics Letters volume 116, pages 192401 (year 2020)NoStop
[Zhang et al.(2020)Zhang,
Galda, Han, Jin, and Vinokur]Xufeng-20
author author X. Zhang, author A. Galda,
author X. Han, author
D. Jin, and author
V. Vinokur, title title Broadband nonreciprocity enabled by strong coupling of
magnons and microwave photons, @noop journal
journal Physical Review Applied volume
13, pages 044039 (year 2020)NoStop
[Zhao et al.(2020)Zhao,
Rao, Gui, Wang, and Hu]Zhao-20
author author Y. Zhao, author J. Rao, author Y. Gui, author
Y. Wang, and author
C.-M. Hu, title title Broadband nonreciprocity realized by locally controlling
the magnon’s radiation, @noop journal journal Physical Review Applied volume 14, pages 014035 (year 2020)NoStop
[Zhang et al.(2021)Zhang,
Jia, Shi, Jiang,
Xue, Ong, and Chai]Chai-21
author author C. Zhang, author C. Jia, author Y. Shi, author
C. Jiang, author D. Xue, author C. Ong, and author G. Chai, title title Nonreciprocal multimode and
indirect couplings in cavity magnonics, @noop journal journal Physical Review B volume 103, pages 184427 (year
2021)NoStop
[Kim et al.(2022)Kim,
Rao, Wang, Gui, Bridges, and Hu]Kim-22
author author M. Kim, author J. Rao, author Y. Wang, author
Y. Gui, author G. E. Bridges, and author C.-M. Hu, title title
Prototyping of novel isolator design based on cavity magnonics, @noop journal journal IEEE Transactions
on Microwave Theory and Techniques volume 70, pages 3020–3028 (year 2022)NoStop
[Reiserer and Rempe(2015)]Rempe-15
author author A. Reiserer and author G. Rempe, title title Cavity-based
quantum networks with single atoms and optical photons, @noop
journal journal Reviews of Modern Physics volume 87, pages 1379 (year
2015)NoStop
[Wehner, Elkouss, and Hanson(2018)]Hanson-18
author author S. Wehner, author D. Elkouss, and author R. Hanson, title title Quantum internet: A vision
for the road ahead, @noop journal journal Science volume 362, pages
eaam9288 (year 2018)NoStop
[Zhong et al.(2021)Zhong,
Chang, Bienfait, Dumur,
Chou, Conner, Grebel,
Povey, Yan, Schuster et al.]Zhong-21
author author Y. Zhong, author H.-S. Chang,
author A. Bienfait, author É. Dumur, author
M.-H. Chou, author
C. R. Conner, author
J. Grebel, author R. G. Povey, author H. Yan, author D. I. Schuster, et al., title title Deterministic multi-qubit entanglement in a quantum
network, @noop journal journal
Nature volume 590, pages 571–575
(year 2021)NoStop
[Nguyen et al.(2019)Nguyen,
Sukachev, Bhaskar, Machielse,
Levonian, Knall, Stroganov,
Riedinger, Park, Lončar
et al.]Nguyen-19
author author C. Nguyen, author D. Sukachev,
author M. Bhaskar, author B. Machielse, author
D. Levonian, author
E. Knall, author P. Stroganov, author R. Riedinger, author H. Park, author M. Lončar, et al., title title Quantum network nodes based on diamond qubits with an efficient
nanophotonic interface, @noop journal journal Physical review letters volume 123, pages 183602 (year 2019)NoStop
[Li et al.(2021)Li,
Wang, Wu, Zhu, and You]Li-21
author author J. Li, author Y.-P. Wang,
author W.-J. Wu, author S.-Y. Zhu, and author J. You, title
title Quantum network with magnonic and mechanical
nodes, @noop journal journal PRX
Quantum volume 2, pages 040344
(year 2021)NoStop
[Zarifi, Thundat, and Daneshmand(2015)]Zarifi-15
author author M. H. Zarifi, author T. Thundat, and author M. Daneshmand, title title High resolution microwave
microstrip resonator for sensing applications, @noop journal journal Sensors and Actuators A: Physical volume 233, pages 224–230 (year
2015)NoStop
|
http://arxiv.org/abs/2307.02085v1 | 20230705074804 | Finite period vectors and Gauss sums | [
"Yeongseong Jo"
] | math.NT | [
"math.NT",
"math.RT"
] |
Department of Mathematics Education, Ewha Womans University, Seoul 03760, Republic of Korea
[email protected]
2020 Mathematics Subject Classification. Primary: 11F70; Secondary: 11F66, 20C33, 22E50.
We study four sums including the Jacquet–Piatetski-Shapiro–Shalika, Flicker, Bump–Friedberg, and Jacquet–Shalika sums associated to irreducible cuspidal representations
of general linear groups over finite fields. By computing explicitly, we relate Asai and Bump–Friedberg gamma factors over finite fields to those over nonarchimedean local fields through
level zero supercuspidal representations. Via Deligne–Kazhdan close field theory, we prove that exterior square and Bump–Friedberg gamma factors agree with the corresponding Artin gamma factors of their associated tamely ramified representations through the local Langlands correspondence. We also deduce product formulæ for Asai, Bump–Friedberg, and exterior square gamma factors in terms of Gauss sums. By combining these results, we examine Jacquet–Piatetski-Shapiro–Shalika, Flicker–Rallis, Jacquet–Shalika, and Friedberg–Jacquet periods and vectors and their connections to
Rankin-Selberg, Asai, exterior square, and Bump-Friedberg gamma factors, respectively.
Finite period vectors and Gauss sums
Yeongseong Jo
August 1, 2023
====================================
§ INTRODUCTION
In classical analytic number theory and related branches of mathematics, one of the main themes is to analyze a complex-valued arithmetic function called a Dirichlet character.
One of its prominent properties is the functional equation, which establishes a symmetry across the critical strip. Viewing Euler's Γ-function as the L-factor in the archimedean context, the symmetric version of this global functional equation involves the epsilon function, which can be presented as the product of the classical Gauss sum and the conductor. In the 1960s, the analytic paradigm for understanding Dirichlet characters shifted from real or complex analytic functions to the study of automorphic forms on GL_n and automorphic representations of GL_n.
This naturally leads us to ask ourselves whether Gauss sums that appear in the global epsilon function have a representation theoretic interpretation.
Since representations of nonarchimedean local fields F occur as factors of cuspidal representations, it is not so surprising to notice an imitation of such an interpretation for a pair of supercuspidal
representations ρ_1 and ρ_2 of GL_n(F) and GL_r(F). In particular, when ρ=ρ_1 and ρ_2= 1_F^× is the trivial representation of GL_1(F), the formula given in <cit.> defines
the Godement–Jacquet gamma factor Γ(s,ρ,ψ_F) <cit.> in terms of non-abelian Gauss sums, where ψ_F is a fixed non-trivial additive character of F. The identical Gauss sum emerges in Tate's local gamma factor for n=1, and in the seminal book of Bushnell and Henniart <cit.>, albeit for n=2. In the twisted case, the explicit formula for the Rankin–Selberg gamma factor Γ(s,ρ_1 ×ρ_2,ψ_F) is known only at the level of
the conductor of the local constant <cit.>.
Regarding many questions about representations of nonarchimedean local fields, insights can oftentimes be gained by inspecting the analogous question over a finite field 𝔽_q.
Before the Rankin–Selberg gamma factor Γ(s,ρ_1 ×ρ_2,ψ_F) was established in the pioneering work of Jacquet–Piatetski-Shapiro–Shalika (cf. <cit.>), the parallel gamma factors
γ(π_1 ×π_2,ψ) associated to
a pair of irreducible generic representations π_1 and π_2 of GL_n(𝔽_q) and GL_r(𝔽_q) had already been investigated in Piatetski-Shapiro's unpublished note <cit.>. The finite gamma factor γ(π_1 ×π_2,ψ), where now π_2 is a multiplicative character of 𝔽_q^×, is revisited by Nien <cit.> in the hope of resolving local converse theorems and distinction problems <cit.>, following the lead of the nonarchimedean local field situation. As a by-product, Nien does something more intriguing, namely showing that γ(π_1 ×π_2,ψ) is expressed in terms of abelian Gauss sums <cit.>.
In this article, we put the Asai, Bump–Friedberg, and exterior square settings on an equal footing with the Rankin–Selberg setting by expressing such finite gamma factors in terms of abelian Gauss sums.
We summarize our main results concerning an irreducible cuspidal representation π of GL_n(𝔽_q), or of GL_n(𝔽_q^2) if necessary, as follows:
* In <Ref>, the Asai gamma factor is defined as a proportionality of bilinear forms arising from the Flicker sums given by (<ref>).
We prove the product formula for the Asai gamma factor in terms of abelian Gauss sums (<Ref>).
* The exterior square gamma factor γ(π,∧^2,ψ) is defined as a proportionality of bilinear forms arising from the Jacquet–Shalika sums given in (<ref>) and (<ref>). We prove the product formula for γ(π,∧^2,ψ) in terms of abelian Gauss sums (<Ref>).
* In <Ref>, the Bump–Friedberg gamma factor is defined as a proportionality of bilinear forms arising from the Bump–Friedberg sums given by (<ref>)
and (<ref>). This represents ε_0(φ,ψ_F)ε_0(∧^2 ∘φ,ψ_F), the product of Deligne's arithmetic standard ε_0-factor and Deligne's arithmetic exterior square ε_0-factor (<Ref>). We prove the product formula for the Bump–Friedberg gamma factor in terms of abelian Gauss sums (<Ref>).
Returning to the motivating question over the finite field 𝔽_q, Nien and Zhang <cit.> subsequently propose a conjectural formula in terms of abelian Gauss sums
for the Rankin–Selberg gamma factor γ(π_1 ×π_2,ψ) for a pair of irreducible cuspidal representations π_1 and π_2 of GL_n(𝔽_q) and GL_r(𝔽_q) with different ranks n ≠ r. A slightly modified formula is settled by Yang <cit.> and, independently, by Ye–Zelingher <cit.>. Afterwards, Zelingher generalizes the method to γ(π_1 ×π_2,ψ) with the same rank n=r <cit.>.
A key strategy of establishing the explicit formula boils down to computing gamma factors for level zero supercuspidal representations.
When an irreducible cuspidal representation π (or, on occasion, a pair of irreducible cuspidal representations π_1 and π_2) does not possess suitable period vectors, we can further relate the gamma factors of the associated level zero supercuspidal representations with their counterparts for irreducible cuspidal representations over finite fields. The calculation of Rankin–Selberg gamma factors Γ(s,ρ_1 ×ρ_2,ψ_F)
for a pair of irreducible cuspidal representations ρ_1 and ρ_2 is attributed to Ye <cit.>, and later Ye and Zelingher <cit.> carried out the computation of exterior square gamma factors Γ(s,ρ,∧^2,ψ_F). However, the computation for the Asai gamma factor and the Bump–Friedberg gamma factor is newly explored in this paper.
Let us give a precise statement pertaining to level zero supercuspidal representations ρ of GL_n(F), or GL_n(E) with E an unramified quadratic extension over F if necessary, here.
* The Asai gamma factor satisfies the local functional equation given in (<ref>). We prove in <Ref> that this is equal to
the rational function
q^n(s-1/2)ω^-1_ρ(ϖ) L(n(1-s),ω^-1_ρ_)/L(ns,ω_ρ_)
in ℂ(q^-s) if n=2m+1 and π has a Flicker–Rallis vector, and to a complex number otherwise.
* The Bump–Friedberg gamma factor satisfies the local functional equation given in (<ref>). We prove in <Ref>
that this is equal to the rational function
ε(s,ρ,ψ_F)q^m(2s-1/2)ω^-1_ρ(ϖ) L(m(1-2s),ω^-1_ρ)/L(2ms,ω_ρ)
in ℂ(q^-2s) if n=2m and π has a Friedberg–Jacquet vector, and to a complex number otherwise.
Another main ingredient toward the product formula is to associate analytic gamma factors over finite fields with the corresponding Deligne arithmetic ε_0-factors.
An immediate benefit of such a definition is that arithmetic ε_0-factors inherit a multiplicativity property, which in turn makes it feasible to express them as products of Gauss sums <cit.>.
Part of the reason that Ye and Zelingher <cit.> primarily considered the product formula for the Rankin–Selberg factor γ(π_1 ×π_2,ψ) is
that the matching between the Jacquet–Shalika gamma factor Γ(s,ρ(φ),∧^2,ψ_F) and the Artin exterior square gamma factor Γ(s,∧^2 (φ),ψ_F)
under the local Langlands correspondence was not available at that time. Another aim of this paper is to take up this issue and remove the constant ambiguity “c_f" lingering in <cit.>.
We present a precise statement of the identity as follows:
* Let φ be an n-dimensional tamely ramified representation of W_F corresponding to the level zero supercuspidal representation ρ(φ) of GL_n(F) via the local Langlands correspondence. We prove in <Ref> and <Ref> that
* Γ(s,ρ(φ),∧^2,ψ_F)=Γ(s,∧^2(φ),ψ_F);
* the Bump–Friedberg gamma factor of ρ(φ) equals ε(s+t+1/2,φ,ψ_F)Γ(2s,∧^2(φ),ψ_F).
In the late 1980s, Bump and Friedberg predicted that the Bump–Friedberg gamma factor
is a product of the exterior square γ-factor and the standard γ-factor, based on the pattern in the spherical situation.
We partially confirm their conjecture <cit.> for level zero supercuspidal representations.
Our important tactic here is to utilize a globalization of level zero supercuspidal representations over local function fields equipped with a close field theory.
In a series of versions of globalizing supercuspidal representations over number fields, one typically loses control of the local component of
a cuspidal automorphic representation exactly at one place, an archimedean place. While the Langlands–Shahidi
theory over archimedean local fields has been fairly well navigated since the seminal work of Shahidi <cit.>, there has been little progress on the desired archimedean input for
Bump–Friedberg and Jacquet–Shalika integrals. However the globalization of level zero supercuspidal representations in positive characteristic gives rather good control at all places.
Although one may sacrifice a few places, the necessary equalities of exterior square γ-factors for irreducible constituents of spherical representations at those bad places (Lemmata <ref>, <ref>)
do not appear to be insurmountable.
Having a solid matching of γ-factors for level zero supercuspidal representations over positive characteristic (Theorems <ref>, <ref>) in hand, we then incorporate γ-factors arising from Langlands–Shahidi methods and integral representations with Deligne–Kazhdan theory over close (nonarchimedean local) fields.
Deligne proved that Artin local factors remain the same for parallel representations over close local fields via Deligne isomorphisms <cit.>. An analogous result has been studied by Ganapathy and Lomelí <cit.> for Langlands–Shahidi local factors on the analytic side, though at that time they only considered Kazhdan isomorphisms over sufficiently close fields.
So far, there seems to have been no previous discourse in the literature about the very basic case of “1-close fields". To be precise, we will show that local exterior square γ-factors for level zero supercuspidal representations via Jacquet-Shalika integrals are compatible with
the Kazhdan correspondence over 1-close fields (Propositions <ref>, <ref>).
In doing so, we are allowed to transport the identity of γ-factors over positive characteristic to characteristic zero.
The fact that poles of local L-functions characterize the existence of linear functionals for supercuspidal representations
has been used to great effect in recent years in constructing multiplicative relations of L-factors <cit.>. In particular, for a supercuspidal representation ρ_1 of GL_n(F) and an irreducible admissible generic representation ρ_2 of GL_n(F), the local Rankin–Selberg L-function L(s,ρ_1 ×ρ_2) is an important tool to detect
whether ρ_1 appears in the standard module that defines ρ̌_2. This comes down to the observation that a twist of ρ_1 occurs in the standard module for ρ̌_2
if and only if the local L-function L(s,ρ_1 ×ρ_2) has a pole, whose location determines that unramified twist <cit.>.
Lately, Soudry and Zelingher <cit.> suggest that the absolute value of the normalized finite Rankin–Selberg gamma factor γ^⋆(π_1 ×π_2,ψ) might serve as a substitute for the order of the pole of the local L-function L(s,ρ_1 ×ρ_2). These results parallel analogous results of Cogdell and Piatetski-Shapiro in the nonarchimedean local field setting <cit.>. As long expected, for a pair of level zero supercuspidal representations ρ_1 and ρ_2 of GL_n(F), we are able to show that the existence of poles of L(s,ρ_1 ×ρ_2) indeed forces the absolute value of γ^⋆(π_1 ×π_2,ψ) to be different from one, and vice versa.
More precisely, we list below the periods and vectors of interest provided in <Ref>, which naturally arise from certain local L-functions and particular finite gamma factors.
* The occurrence of Jacquet–Piatetski-Shapiro–Shalika periods and vectors is equivalent to having a pole of the Rankin–Selberg L-factor L(s,ρ_1 ×ρ_2),
or the absolute value of the normalized Rankin–Selberg gamma factor γ^⋆(π_1 ×π_2,ψ) being different from one.
* The occurrence of Flicker–Rallis periods and vectors is equivalent to having a pole of the Asai L-factor L(s,ρ, As),
or the absolute value of the Asai gamma factor being different from one.
* The occurrence of Jacquet–Shalika periods and vectors is equivalent to having a pole of the exterior square L-factor L(s,ρ,∧^2), or the absolute value of
the exterior square gamma factor γ(π,∧^2,ψ) being different from one.
* The occurrence of Friedberg–Jacquet periods and vectors is equivalent to having a pole of the Bump–Friedberg L-factor L(s,ρ, BF), or the absolute value of
the Bump–Friedberg gamma factor being different from one.
Via the theory of newforms for GL_n, the parallel analogous results in the nonarchimedean setting are completed by the author <cit.>, and the method is further generalized to
the archimedean setting in joint work with Humphries <cit.>.
We can further explore the absolute value of gamma factors for irreducible generic representations π of GL_n(𝔽_q), not just irreducible cuspidal ones.
Soudry and Zelingher recently worked this out for the Rankin–Selberg gamma factor γ(π_1 ×π_2,ψ) over finite fields <cit.>.
Along the lines of this philosophy, it is likely that the so-called “multiplicativity" of gamma factors over finite fields needs to be well understood ahead of time.
In the case of p-adic fields, it was the Langlands–Shahidi method that came to fruition first (cf. <cit.>) and enabled Zelingher <cit.> to realize the goal of establishing multiplicativity via Shahidi gamma factors.
It turns out that the computation was investigated long before by Soudry, when he was a graduate student in 1979 <cit.>.
In joint work of Soudry and Zelingher, the finite field analogue of Shahidi gamma factors for pairs (π_1,π_2) has been completed very recently <cit.>.
However it is still required to show that Shahidi gamma factors agree with the Rankin–Selberg gamma factor γ(π_1 ×π_2,ψ) in order to validate this robust argument.
The author has carried out a similar matching question for the Asai gamma factor, which will appear elsewhere.
In conjunction with computing Rankin–Selberg sums for classical groups, the author currently pursues this topic in depth with Zelingher <cit.>. Although all these analyses seem doable, if somewhat involved, we think that our current formulation keeps our exposition at a reasonable length, and we plan to take them up in the near future.
The structure of this article is as follows. <Ref> contains a brief review of the theory of Jacquet–Piatetski-Shapiro–Shalika sums and Rankin–Selberg gamma factors.
We continue by surveying Deligne–Kazhdan close field theory and giving its application to Rankin–Selberg gamma factors.
<Ref> and <Ref> are devoted to presenting an analogous theory for Asai and Bump–Friedberg gamma factors, respectively. We deal with the exterior square gamma factor
in <Ref>, where the results are essentially analogous, although most of the results regarding close field theory are addressed there.
We discuss the relation between period vectors, integrals, absolute values of gamma factors, and poles of L-factors in <Ref>.
§ THE RANKIN-SELBERG GAMMA FACTOR
We now detail the theory of Jacquet–Piatetski-Shapiro–Shalika sums as well as the relation between Jacquet–Piatetski-Shapiro–Shalika vectors and Rankin–Selberg γ-factors.
The results herein are all well known. However, the normalization of the Haar measure <cit.>, the choice of the subspace of Schwartz–Bruhat functions, and the Fourier transform <cit.>
in the current literature are slightly different from those in this paper. We recall them as motivation for Sections <ref> and <ref>,
in which we discuss analogous yet new results for Asai and exterior square γ-factors. Section <ref> serves to
overview the breakdown of our computation that is repeated throughout the paper.
§.§ The Jacquet–Piatetski-Shapiro–Shalika sum
We let N_n be the unipotent radical of the standard Borel subgroup B_n of GL_n and A_n the Levi subgroup of B_n, consisting of diagonal matrices in GL_n.
We denote by P_n the mirabolic subgroup of GL_n, consisting of matrices in GL_n with last row equal to (0,…,0,1).
We write 1_n to denote the n × n identity matrix.
Let 𝔽_q be a finite field of q=p^k elements with characteristic p.
Let 𝔽=𝔽_q. We fix a non-trivial additive character ψ:=ψ_𝔽 of 𝔽, and extend it to a character of N_n(𝔽)
by setting ψ(n)=ψ(n_1,2+…+n_n-1,n) with n ∈ N_n(𝔽). Let
𝒮(𝔽^n)={ϕ | ϕ: 𝔽^n →ℂ}
be the set of complex-valued functions on 𝔽^n.
Let {e_i | 1 ≤ i ≤ n } be the standard row basis of 𝔽^n.
We let ⟨ x,y ⟩ :=x ·^ty be the standard bilinear form on 𝔽^n.
The Fourier transform of ϕ∈𝒮(𝔽^n) with respect to ψ is given by (cf. <cit.>*(2.3) <cit.>*2.1.1)
ℱ_ψ(ϕ)(y)=q^-n/2∑_x ∈𝔽^nϕ(x) ψ(⟨ x,y ⟩).
The Fourier inversion formulas take the form
(ℱ_ψ∘ℱ_ψ)(ϕ)(x)=ϕ(-x) and (ℱ_ψ^-1∘ℱ_ψ)(ϕ)(x)=ϕ(x).
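For instance, when q=p is prime one admissible choice is ψ(x)=e^2π i x/p, in which case ℱ_ψ is the unitarily normalized discrete Fourier transform. In general, the inversion formulas follow from the orthogonality relation ∑_x ∈𝔽^nψ(⟨ x,c ⟩)=q^n if c=0 and 0 otherwise; indeed,
(ℱ_ψ∘ℱ_ψ)(ϕ)(y)=q^-n∑_z ∈𝔽^nϕ(z)∑_x ∈𝔽^nψ(⟨ x,z+y ⟩)=ϕ(-y).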
Given an irreducible cuspidal representation π, we fix a non-trivial GL_n(𝔽)-invariant unitary form (·,·) on V_π× V_π. Then there exists a non-trivial vector v_0 ∈ V_π satisfying
π(n)v_0=ψ(n)v_0 for all n ∈ N_n(𝔽). Such a vector v_0 is called a Whittaker vector.
A Whittaker function of π is a matrix coefficient of the form W(g)=(π(g)v,v_0) for g ∈ GL_n(𝔽) and v ∈ V_π.
Whittaker functions satisfy W(ng)=ψ(n)W(g) for every n ∈ N_n(𝔽).
The subspace generated by all Whittaker functions W(g) is unique <cit.>, and will be denoted by 𝒲(π,ψ), with GL_n(𝔽) acting by right translations.
This space is called the Whittaker model of π.
For an irreducible cuspidal representation π of GL_n(𝔽), its contragredient π̌ is isomorphic to π^ι,
where π^ι is the representation acting on the same underlying space V_π of π by π^ι(g)=π(^tg^-1) for g ∈ GL_n(𝔽).
Under π̌≅π^ι, we obtain an isomorphism of vector spaces 𝒲(π,ψ) →𝒲(π̌,ψ^-1),
given by W_π↦W̌_π, where
W̌_π(g)=W_π(w_n ^tg^-1), g ∈ GL_n(𝔽),
and where
w_n=[ 1; ⋰ ; 1 ]
is the longest Weyl element of GL_n(𝔽).
The Bessel function ℬ_π,ψ of π is the Whittaker function attached to the normalized Whittaker vector, namely
ℬ_π,ψ(g)=(π(g)v_0,v_0)/(v_0,v_0)=W(1_n)^-1W(g), where W(g)=(π(g)v_0,v_0).
Let ω_π denote the central character of π. Some elementary properties of the Bessel function ℬ_π,ψ are (cf. <cit.>):
* ℬ_π,ψ(n_1gn_2)=ψ(n_1)ψ(n_2)ℬ_π,ψ(g) for all n_1,n_2 ∈ N_n(𝔽) and g ∈ GL_n(𝔽).
* ℬ_π,ψ(1_n)=1 and ℬ_π,ψ(a 1_n)=ω_π(a) for all a ∈𝔽^×.
* ℬ_π,ψ(g^-1) equals the complex conjugate of ℬ_π,ψ(g), which is ℬ_π̌,ψ^-1(g), for all g ∈ GL_n(𝔽).
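For instance, when n=1 the group N_1(𝔽) is trivial, every non-zero vector is a Whittaker vector, and ℬ_π,ψ(g)=π(g) is just the character π itself; the three properties above then simply record that π is a unitary character of 𝔽^×, e.g. ℬ_π,ψ(g^-1)=π(g)^-1 is the complex conjugate of ℬ_π,ψ(g).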
Let π_1 and π_2 be irreducible cuspidal representations of GL_n(𝔽).
Let 𝒮_0(𝔽^n) denote the set of ℂ-valued functions on 𝔽^n such that ϕ(0)=0. For every W_π_1∈𝒲(π_1,ψ) and W_π_2∈𝒲(π_2,ψ^-1),
and for any ϕ∈𝒮_0(𝔽^n), there exists a complex number γ^⋆(π_1 ×π_2,ψ) such that
γ^⋆(π_1 ×π_2,ψ) ∑_g ∈ N_n(𝔽) \ GL_n(𝔽) W_π_1(g) W_π_2(g) ϕ(e_ng)=
∑_g ∈ N_n(𝔽) \ GL_n(𝔽) W_π_1(g) W_π_2(g) ℱ_ψ(ϕ)(e_1 ^tg^-1).
As explained in <cit.>, we normalize the Fourier transform and the gamma factors differently from what has been commonly adopted at least since Roditty's master thesis (cf. <cit.>). In order to distinguish the normalized gamma factor from the unnormalized one γ(π_1 ×π_2,ψ) in <cit.>, we add the superscript ⋆ to emphasize the normalization. In doing so, the absolute value of γ^⋆(π_1 ×π_2,ψ) becomes 1, as shown in <Ref>.
For W_π_1∈𝒲(π_1,ψ) and W_π_2∈𝒲(π_2,ψ^-1), and ϕ∈𝒮(𝔽^n), we define the Jacquet–Piatetski-Shapiro–Shalika sum Ψ(W_π_1,W_π_2,ϕ) by
Ψ(W_π_1,W_π_2,ϕ):=∑_g ∈ N_n(𝔽) \ GL_n(𝔽) W_π_1(g) W_π_2(g) ϕ(e_ng).
In a similar manner, the dual Jacquet–Piatetski-Shapiro–Shalika sum Ψ̌(W_π_1,W_π_2,ϕ) is defined by
Ψ̌(W_π_1,W_π_2,ϕ):=∑_g ∈ N_n(𝔽) \ GL_n(𝔽) W̌_π_1(g)W̌_π_2(g) ℱ_ψ(ϕ)(e_ng).
A non-zero vector v_1 ⊗ v_2 ∈ V_π_1⊗ V_π_2 is called a Jacquet–Piatetski-Shapiro–Shalika vector if, for every g ∈ GL_n(𝔽),
we have (π_1 ⊗π_2)(g)(v_1 ⊗ v_2):= π_1(g) v_1 ⊗π_2(g)v_2 =v_1 ⊗ v_2. This condition is equivalent to π_1 ≅π̌_2.
The definition of γ^⋆(π×π,ψ) taken in <cit.> differs slightly from the one herein by the inclusion of all Schwartz–Bruhat functions ϕ∈𝒮(𝔽^n) in the Jacquet–Piatetski-Shapiro–Shalika sum. As stressed in <cit.>, the functional equation in (<ref>) does not hold when π_1 ×π_2 has a Jacquet–Piatetski-Shapiro–Shalika vector. In pioneering work <cit.> (cf. <cit.>), Piatetski-Shapiro had already realized that it is advantageous to restrict the space 𝒮(𝔽^n) to 𝒮_0(𝔽^n). In order to define gamma factors over the finite field uniformly for all irreducible cuspidal representations, Schwartz–Bruhat functions ϕ are only taken over
𝒮_0(𝔽^n) throughout the paper.
We express γ^⋆(π_1 ×π_2,ψ) in terms of the Bessel functions associated with π_1 and π_2.
<cit.>*Equation (16)<cit.>*Corollary 3.4
Let π_1 and π_2 be irreducible cuspidal representations of GL_n(𝔽). Then
γ^⋆(π_1 ×π_2,ψ)= q^-n/2∑_g ∈ N_n(𝔽)\ GL_n(𝔽)ℬ_π_1,ψ(g) ℬ_π_2,ψ^-1(g) ψ(e_1 ^tg^-1 ^te_n).
In particular, we have
γ^⋆(π_1 ×π_2,^-1)=γ^⋆(π_1 ×π_2,).
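To illustrate, let n=1, so that N_1(𝔽) is trivial and Whittaker functions are just the characters themselves. Taking ϕ to be the indicator function of 1 ∈𝔽^× in the functional equation (<ref>) gives
γ^⋆(π_1 ×π_2,ψ)=q^-1/2∑_g ∈𝔽^×(π_1π_2)(g)ψ(g^-1)=-q^-1/2τ(π_1π_2,ψ),
a normalized classical Gauss sum (the Gauss sum τ is recalled in <Ref> below). In particular its absolute value is 1 when π_1π_2 is non-trivial (that is, π_1 ≇π̌_2), while for π_1π_2=1 the sum collapses to -q^-1/2.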
We can precisely evaluate the sum (<ref>) in two different ways when π_1 ×π_2 has a Jacquet–Piatetski-Shapiro–Shalika vector.
The method of Ye <cit.> uses a system of linear equations and the theory of level zero supercuspidal representations, whereas
Soudry and Zelingher <cit.> explicitly compute γ^⋆(π×π̌,ψ)
within the context of the representation theory of groups over finite fields. We transport the former approach to the Bump–Friedberg setting in <Ref>,
while the latter path is adapted to the Asai setting in <Ref> and the exterior square setting in <Ref>.
<cit.>*Theorem A.1<cit.>*Corollary 4.3
Let π be an irreducible cuspidal representation of GL_n(𝔽).
Then we have
γ^⋆(π×π̌,ψ)=-q^-n/2.
We end this section by summarizing functional equations for γ^⋆(π_1 ×π_2,ψ) over finite fields.
The result is a direct consequence of <Ref> and <Ref>.
(cf. <cit.>)
Let π_1 and π_2 be irreducible cuspidal representations of GL_n(𝔽).
* If π_1 ≇π̌_2, we have γ^⋆(π_1 ×π_2,ψ) γ^⋆(π_1 ×π_2,ψ^-1)=1 and |γ^⋆(π_1 ×π_2,ψ)|=1.
* If π_1 ≅π̌_2, we have γ^⋆(π_1 ×π_2,ψ) γ^⋆(π_1 ×π_2,ψ^-1)=q^-n and |γ^⋆(π_1 ×π_2,ψ)|=q^-n/2.
§.§ The Jacquet–Piatetski-Shapiro–Shalika period and level zero supercuspidal representations
Let F be a non-archimedean local field with its residual finite field 𝔬/𝔭≅𝔽_q of order q=q_F.
The base field F is a finite extension of ℚ_p or 𝔽_p((t)), called a p-adic field in characteristic 0,
or a local function field in characteristic p > 0.
We write 𝔬:=𝔬_F and 𝔭:=𝔭_F for the ring of its integers and the maximal ideal, respectively.
We fix a generator ϖ:=ϖ_F of 𝔭 and normalize the absolute value |·| of F so that
|ϖ|=q^-1.
Let ψ_F be a fixed non-trivial additive character of F such that ψ_F is trivial on 𝔭 and nontrivial on 𝔬.
The self-dual Haar measure dx for ψ_F <cit.> then satisfies
∫_𝔬 dx=q^1/2.
For the purpose of calculation, it will be convenient to choose the Haar measure x on such that
∫_ x=1.
We denote by x ↦x̄ the quotient map 𝔬→𝔬/𝔭≅𝔽.
We define ψ(k̄)=ψ_F(k) for k ∈𝔬.
Let 𝒮(F^n) be the space of locally constant and compactly supported functions Φ : F^n →ℂ.
We denote its Fourier transform by
ℱ_ψ_F(Φ)(y)=∫_F^nΦ(x) ψ_F(⟨ x,y ⟩) dx,
for Φ∈𝒮(F^n).
The Fourier inversion formulas are given by
(ℱ_ψ_F∘ℱ_ψ_F)(Φ)(x)=Φ(-x) and (ℱ_ψ_F^-1∘ℱ_ψ_F)(Φ)(x)=Φ(x).
For ϕ∈𝒮(𝔽^n), we define a lift Φ_∘∈𝒮(F^n) of ϕ by
Φ_∘(x)=
ϕ(x̄), if x ∈𝔬^n,
0, otherwise.
Then ℱ_ψ_F(Φ_∘) is a lift of ℱ_ψ(ϕ) <cit.> in the sense that
ℱ_ψ_F(Φ_∘)(x)=
ℱ_ψ(ϕ)(x̄), if x ∈𝔬^n,
0, otherwise.
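As a quick check of this compatibility in the simplest case n=1, with the measure normalization above (so that vol(𝔬)=q^1/2 and vol(𝔭)=q^-1/2) and ψ_F trivial on 𝔭, we have for y ∈𝔬
ℱ_ψ_F(Φ_∘)(y)=∫_𝔬ϕ(x̄)ψ_F(xy) dx=q^-1/2∑_ā∈𝔽ϕ(ā)ψ(āȳ)=ℱ_ψ(ϕ)(ȳ),
while for y ∉𝔬 the integral vanishes, since t ↦ψ_F(ty) is then a non-trivial character of 𝔭 and the contribution of each coset a+𝔭 is ψ_F(ay)∫_𝔭ψ_F(ty) dt=0.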
We let K_n= GL_n(𝔬) be the standard maximal compact subgroup of GL_n(F). A level zero supercuspidal representation of GL_n(F) is given by
ρ≅ c-Ind^ GL_n(F)_F^× GL_n(𝔬)μ,
where the representation μ:=μ_ GL_n(𝔬) is inflated from an irreducible cuspidal representation π of GL_n(𝔽)
and the central character ω_π of π is compatible with μ, in the sense that the restriction of μ to 𝔬^×=F^×∩ GL_n(𝔬) is given by μ(a)=ω_π(ā).
We let 𝒜_0( GL_n(F)) be the set of isomorphism classes of level zero supercuspidal representations of GL_n(F) and let 𝒜_0( GL_n(𝔽))
denote the set of equivalence classes of irreducible cuspidal representations of GL_n(𝔽). Then <cit.> and <cit.> give rise to a bijection
𝒜_0( GL_n(F)) ⟷ℂ^××𝒜_0( GL_n(𝔽))
ρ ⟷ (ω_ρ(ϖ),π).
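For instance, when n=1 we have F^× GL_1(𝔬)=F^×, so the compact induction is superfluous: a level zero supercuspidal representation of GL_1(F) is simply a character χ of F^× that is trivial on 1+𝔭, and the bijection sends χ to the pair (χ(ϖ),χ̄), where χ̄ is the character of 𝔽^×≅𝔬^×/(1+𝔭) induced by the restriction of χ to 𝔬^×.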
The contragredient representation ρ̌ of GL_n(F) is again a level zero supercuspidal representation
constructed from the irreducible cuspidal representation π̌ of GL_n(𝔽) <cit.>.
If W_ρ∈𝒲(ρ,ψ_F), then W̌_ρ(g):=W_ρ(w_n ^tg^-1) ∈𝒲(ρ̌,ψ^-1_F).
We denote by ⟨·, ·⟩ the pairing between a linear functional and a vector given by evaluation, both on V_ρ and on V_π.
Let λ∈ Hom_N_n(𝔽)(π,ψ) be a non-zero Whittaker functional of π.
We define the linear functional λ_∘ : V_ρ→ℂ by
⟨λ_∘, f ⟩:=∫_N_n(F) ∩ K_n \ N_n(F)⟨λ, f(u) ⟩ψ_F^-1(u) du,
where f ∈ V_ρ. We view f as a function f : GL_n(F) → V_π. Then λ_∘∈ Hom_N_n(F)(ρ,ψ_F) is a non-zero Whittaker functional of ρ,
which is a lift of λ <cit.>. Let W_π∈𝒲(π,ψ) and let v_W_π∈ V_π be the unique vector such that W_π(g)=⟨λ, π(g)v_W_π⟩ for every g ∈ GL_n(𝔽). Let 𝒦:=F^× K_n. We define f_W_π to be
f_W_π(g)=
ω_ρ(a) π(k̄)v_W_π, if g=ak ∈𝒦 with a ∈ F^×, k ∈ K_n,
0, otherwise.
We define W^∘_ρ∈𝒲(ρ,ψ_F) by W^∘_ρ(g)=⟨λ_∘, ρ(g) f_W_π⟩ for g ∈ GL_n(F).
Then the support of W^∘_ρ is contained in N_n(F)F^×K_n. The two Whittaker functions W^∘_ρ and W_π are related by
W^∘_ρ(g)=ω_ρ(a) W_π(k̄),
for any g=ak ∈𝒦 with a ∈ F^× and k ∈ K_n <cit.>. In particular, if W_π is the Bessel function ℬ_π,ψ of π over the finite field 𝔽,
the lift W^∘_ρ is nothing but the Paskunas–Stevens partial Bessel function of ρ in <cit.> and <cit.>.
Let G be a group and L a subgroup of G. A representation ρ of G is called (L,ξ)-distinguished if
Hom_L(ρ,ξ) ≠ 0.
If ξ= 1_L is the trivial character of L, we simply say that ρ is L-distinguished. In particular, let G= GL_n(F) × GL_n(F) and L= GL_n(F) embedded in G diagonally. We also say that ρ_1 ×ρ_2 has a Jacquet–Piatetski-Shapiro–Shalika period if the representation ρ_1 ×ρ_2 of G is
( GL_n(F), 1_ GL_n(F))-distinguished, and this is equivalent to the condition that ρ_1 ≅ρ̌_2. Because of this property, these distinguished representations appear naturally in the theory of Rankin–Selberg L-functions that we describe in a moment.
Let ρ_1 and ρ_2 be level zero supercuspidal representations of GL_n(F) constructed from irreducible cuspidal representations π_1 and π_2 of GL_n(𝔽), with attached Whittaker models 𝒲(ρ_1,ψ_F) and 𝒲(ρ_2,ψ^-1_F), respectively. We take each pair of Whittaker functions W_ρ_1∈𝒲(ρ_1,ψ_F), W_ρ_2∈𝒲(ρ_2,ψ^-1_F), and a Schwartz–Bruhat function Φ∈𝒮(F^n),
and form the Jacquet–Piatetski-Shapiro–Shalika integral defined by
Ψ(s,W_ρ_1,W_ρ_2,Φ)=∫_N_n(F) \ GL_n(F) W_ρ_1(g) W_ρ_2(g) Φ(e_ng)| det g|^s dg.
The integral converges absolutely for Re(s) sufficiently large, and extends meromorphically to the entire complex plane. Moreover there exists a rational function Γ(s,ρ_1 ×ρ_2,ψ_F) ∈ℂ(q^-s) satisfying the functional equation <cit.>:
Ψ(1-s,W̌_ρ_1,W̌_ρ_2,ℱ_ψ_F(Φ))= Γ(s,ρ_1 ×ρ_2,ψ_F) Ψ(s,W_ρ_1,W_ρ_2,Φ).
It is worthwhile to emphasize that the gamma factor Γ(s,ρ_1 ×ρ_2,ψ_F) defined above differs by a sign from the traditional one defined by Jacquet–Piatetski-Shapiro–Shalika in <cit.>. The local Rankin–Selberg L-function L(s,ρ_1 ×ρ_2) is the generator of the ℂ[q^± s]-fractional ideal of the Jacquet–Piatetski-Shapiro–Shalika integrals
Ψ(s,W_ρ_1,W_ρ_2,Φ), with W_ρ_1∈𝒲(ρ_1,ψ_F), W_ρ_2∈𝒲(ρ_2,ψ^-1_F), and Φ∈𝒮(F^n),
normalized to be of the form P(q^-s)^-1 for some P(X) ∈ℂ[X] with P(0)=1.
We take pairs of Whittaker functions W_ρ_1=W^∘_ρ_1 and W_ρ_2=W^∘_ρ_2, which are lifts of corresponding pairs of Whittaker functions W_π_1 and W_π_2
over finite fields, and we insert certain test functions Φ_∘, the lift of ϕ, for the Schwartz–Bruhat functions Φ. With the lifting datum (W^∘_ρ_1,W^∘_ρ_2,Φ_∘),
Jacquet–Piatetski-Shapiro–Shalika integrals reduce to Jacquet–Piatetski-Shapiro–Shalika sums, and we obtain the so-called modified functional equation, just as in <cit.>:
Ψ̌(W_π_1,W_π_2,ϕ)+q^-n(1-s)(ω_ρ_1ω_ρ_2)^-1(ϖ)ℱ_ψ(ϕ)(0)L(n(1-s),(ω_ρ_1ω_ρ_2)^-1)Ψ(W_π_1,W_π_2,1_𝔽^n)
= Γ(s,ρ_1 ×ρ_2,ψ_F)(Ψ(W_π_1,W_π_2,ϕ)+q^-nsω_ρ_1ω_ρ_2(ϖ)ϕ(0)L(ns,ω_ρ_1ω_ρ_2)Ψ(W_π_1,W_π_2,1_𝔽^n)).
When π_1 ≇π̌_2, the sums Ψ(W_π_1,W_π_2,1_𝔽^n) vanish and the extra terms disappear. As a result of the modified functional equation, we recover the following main theorem of <cit.>.
Let ρ_1 and ρ_2 be level zero supercuspidal representations of GL_n(F).
* <cit.> If π_1 ≇π̌_2, we have
Γ(s,ρ_1 ×ρ_2,ψ_F)=γ^⋆(π_1 ×π_2,ψ).
* <cit.> If π_1 ≅π̌_2, we have
Γ(s,ρ_1 ×ρ_2,ψ_F)=q^n(s-1/2)(ω_ρ_1ω_ρ_2)^-1(ϖ) L(n(1-s),(ω_ρ_1ω_ρ_2)^-1)/L(ns,ω_ρ_1ω_ρ_2).
§.§ The Rankin-Selberg epsilon factor and the Gauss sum
Let 𝔽̄ be an algebraic closure of 𝔽 and 𝔽^×_q^n the multiplicative group of 𝔽_q^n.
A multiplicative character α of 𝔽^×_q^n is called regular
if {α, α^q, …, α^q^n-1} is of size n. Two characters α and β are called equivalent if α=β^q^d for some integer d. In the next paragraph, we see that
this amounts to saying that α and β are in the same Frobenius orbit.
Let ℛ_n(𝔽_q) denote the set of equivalence classes of regular characters of 𝔽^×_q^n. Green's parameterization <cit.>*3.2<cit.>*3.1
gives a bijection
𝒜_0( GL_n(𝔽)) ⟷ℛ_n(𝔽_q)
π ⟷α.
For each d | n, we have a norm map Nr_n:d: 𝔽^×_q^n→𝔽^×_q^d, which induces a dual (embedding) map on characters
by assigning a character β of 𝔽^×_q^d to β∘ Nr_n:d, a character of 𝔽^×_q^n. In this way, the character groups of 𝔽^×_q^n for n ∈ℕ, together with the embedding maps ( Nr_n:d)_d | n, form a direct system. We denote its direct limit by Ω.
Let W_F be the Weil group of F, I_F the inertia subgroup, and P_F the wild inertia subgroup. Then W_F ≅ I_F ⋊⟨ Fr⟩,
where Fr∈ Gal(𝔽̄ / 𝔽) is the geometric Frobenius automorphism given by Fr(x^q)=x for every x ∈𝔽̄.
The Frobenius map Fr acts on Ω via Fr·β=β^q. We identify the group of characters of 𝔽^×_q^n with the subgroup Ω_n:={β∈Ω | Fr^n ·β=β} of Ω. A Galois orbit is a set of the form 𝒪=𝒪(β):={ Fr^i ·β | i ∈ℤ} for β∈Ω.
Given a Galois orbit 𝒪, we define its degree deg(𝒪) to be the cardinality of 𝒪. Then for β∈𝒪, we have β∈Ω_ deg(𝒪). We denote by Fr\Ω the set of Galois orbits.
Let ψ_n:=ψ∘ Tr_𝔽_q^n/𝔽_q. We define the Gauss sum τ(α,ψ_n) by
τ(α,ψ_n):=-∑_x ∈𝔽_q^nα^-1(x) ψ_n(x).
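Since α is a multiplicative character, the sum effectively runs over 𝔽^×_q^n, and for non-trivial α one has the classical identity |τ(α,ψ_n)|=q^n/2. For example, for q=p prime and n=1, taking α to be the quadratic character, τ(α,ψ) is, up to sign, the classical quadratic Gauss sum of absolute value √p.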
Let φ : W_F → GL(V) be an n-dimensional Frobenius semisimple representation of the Weil group W_F. The representation φ is said to be unramified (resp. tamely ramified)
if the kernel of φ contains I_F (resp. P_F). We let r be an operation on Frobenius semisimple representations of W_F that preserves tame ramification.
In particular, we take r to be the identity operation id, the tensor product ⊗, the twisted tensor product As (known as the Asai representation), or the exterior square ∧^2.
In the spirit of Deligne <cit.>, ε(s,r(φ),ψ_F) and ε_0(r(φ),ψ_F) are related by
ε(s,r(φ),ψ_F)=ε_0(r(φ),ψ_F) det(- Fr,r(V)^I_F)^-1 q^( dim r(V)^I_F)s.
Following the literature in <cit.>, we set 𝒢^t( GL_n(F)) to be the isomorphism classes of tamely ramified representations of W_F
of degree n. Since the local Langlands reciprocity map preserves the conductor and the depth of the representation <cit.>,
the correspondence induces a natural bijective map LLC : 𝒜_0( GL_n(F)) →𝒢^t( GL_n(F)) <cit.> (cf. <cit.>).
Two Weil representations are called I_F-equivalent if their restrictions to I_F are equivalent, and we write 𝒢^t_I( GL_n(F)) for the set of I_F-equivalence classes of
tamely ramified n-dimensional representations of W_F. Long before the local Langlands correspondence was established, Macdonald <cit.> had already obtained
a canonical bijection ℳ from 𝒜_0( GL_n(𝔽)) to 𝒢^t_I( GL_n(F)). Hence we get a diagram
𝒜_0( GL_n(F))  --LLC, ≅-->  𝒢^t( GL_n(F))
      | p_1                        | p_2
      v                            v
𝒜_0( GL_n(𝔽))  --ℳ, ≅-->  𝒢_I^t( GL_n(F))
where p_1 is the projection map induced from (<ref>), and p_2 is the canonical projection map sending a representation to its I_F-equivalence class.
In particular, it is a consequence of <cit.> that the above diagram is commutative. Composing the Macdonald correspondence ℳ with Green's parameterization then yields a bijection between 𝒢_I^t( GL_n(F)) and ℛ_n(𝔽_q), which by abuse of terminology we again refer to as Green's parametrization.
<cit.>
Let φ_1 and φ_2 be n-dimensional tamely ramified representations of W_F. Then
ε_0(φ_1 ⊗φ_2,ψ_F)=(-1)^nq^-n^2/2∏_i=0^n-1τ(αβ^q^i,ψ_n),
where α and β are regular characters of 𝔽^×_q^n corresponding to φ_1 and φ_2, respectively, via Green's parametrization.
Rankin-Selberg γ-factors and tensor product ε_0-factors over finite fields are compatible with the Macdonald correspondence.
<cit.>
Let π_1(φ_1) and π_2(φ_2) be irreducible cuspidal representations of GL_n(𝔽) associated to n-dimensional tamely ramified representations φ_1 and φ_2 of W_F via the Macdonald correspondence. Then we have
γ^⋆(π_1(φ_1) ×π_2(φ_2),ψ)=ω^n-1 _π_2(-1) ε_0(φ_1 ⊗φ_2,ψ_F).
As a corollary of <Ref> and <Ref>, we gain a product formula for γ^⋆(π_1 ×π_2,) with regard to Gauss sums.
Let π_1 and π_2 be irreducible cuspidal representations of _n().
We let α and β be regular characters of ^×_q^n corresponding to π_1 and π_2, respectively, via Green's parametrization.
Then we have
γ^⋆(π_1 ×π_2,)=ω^n-1_π_2(-1) · (-1)^n q^-n^2/2∏_i=0^n-1τ(αβ^q^i,ψ_n).
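For n=1 the corollary becomes a familiar identity: an irreducible cuspidal representation of _1() is just a character of 𝔽_q^×, and the formula reads
γ^⋆(π_1 ×π_2,)=-q^-1/2τ(αβ,ψ_1),
so that the γ-factor is, up to the factor -q^-1/2, the Gauss sum of the product character αβ.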
§.§ Deligne–Kazhdan close field theory
We turn our attention to Deligne-Kazhdan close local field theory. Two non-archimedean local fields F and F' are m-close if 𝔬_F/𝔭_F^m ≅𝔬_F'/𝔭_F'^m. For example, the fields 𝔽_p((t)) and ℚ_p(p^1/m) are m-close. We follow the elaboration about Deligne's theory in <cit.>
and <cit.>.
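For instance, 𝔽_p((t)) and ℚ_p are 1-close, since both have residue ring 𝔬/𝔭 isomorphic to 𝔽_p; more generally, any two non-archimedean local fields with isomorphic residue fields are 1-close.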
If F and F' are 1-close, Deligne (cf. <cit.>) gave a bijection:
{Isomorphism classes of Frobenius semisimple representations φ of W_F trivial on P_F}
Del{Isomorphism classes of Frobenius semisimple representations φ' of W_F' trivial on P_F'}.
Elements φ and φ' are nothing but tamely ramified representations. The triplet (F,φ,ψ_F) is said to be Del-associated to (F',φ',ψ'_F') if
* F and F' are 1-close;
* φ'= Del(φ);
* a character ψ'_F' of F' satisfies cond(ψ'_F')=𝔭_F' and the character induced by ψ'_F' on 𝔬_F' / 𝔭_F'
coincides with that induced by ψ_F on 𝔬_F / 𝔭_F under the isomorphism implicit in <ref>.
The analogous isomorphism of Deligne on the analytic side over close local fields has been studied by Kazhdan <cit.>.
We provide a revamped version of the Kazhdan isomorphism <cit.>, which can be directly verified from <cit.>:
{ Level zero supercuspidal
representations (ρ,V_ρ) of GL_n(F)} Kaz{ Level zero supercuspidal
representations (ρ',V'_ρ') of GL_n(F')},
where ρ≅ c-Ind^ GL_n(F)_F^× GL_n(𝔬_F)μ and ρ' ≅ c-Ind^ GL_n(F')_F'^× GL_n(𝔬_F')μ' under the isomorphism “ Kaz" satisfy
* ω_ρ(ϖ_F)=ω_ρ'(ϖ_F');
* μ:=μ_ GL_n(𝔬_F) and μ':=μ'_ GL_n(𝔬_F')
are inflations of a common irreducible cuspidal representation π via the canonical projections:
4pc
( GL_n(𝔬_F),μ) [r]^ mod 𝔭_F ( GL_n(𝔽_q),π) [l]_ mod 𝔭_F'
( GL_n(𝔬_F'),μ').
We say that the triplet (F,ρ,ψ_F) is Kaz-associated to (F',ρ',ψ'_F') if
* F and F' are 1-close;
* ρ'= Kaz(ρ);
* a character ψ'_F' of F' satisfies cond(ψ'_F')=𝔭_F' and the character induced by ψ'_F' on 𝔬_F' / 𝔭_F'
coincides with that induced by ψ_F on 𝔬_F / 𝔭_F under the isomorphism implicit in <ref>.
Let π_1 be an irreducible cuspidal representation of GL_n(𝔽) and π_2 an irreducible cuspidal representation of GL_r(𝔽)
with n > r. Then there exists a complex number γ(π_1 ×π_2,ψ) ∈ℂ such that
γ(π_1 ×π_2,ψ) ∑_g ∈N_r(𝔽) \ GL_r(𝔽) W_π_1[ g 0; 0 1_n-r ] W_π_2(g)
=∑_g ∈N_r(𝔽) \ GL_r(𝔽) W_π_1[ 0 1_n-r; g 0 ] W_π_2(g),
for all W_π_1∈𝒲(π_1,ψ) and W_π_2∈𝒲(π_2,ψ^-1) <cit.>.
Let ρ_1 be a level zero supercuspidal representation of GL_n(F) associated to π_1 and ρ_2 a level zero supercuspidal representation of GL_r(F)
associated to π_2 , with n > r. Let Γ(s,ρ_1 ×ρ_2,ψ_F) denote the Rankin-Selberg γ-factor defined by Jacquet, Piatetski-Shapiro, and Shalika (cf. <cit.>).
For (F,ρ_i,ψ_F) that is Kaz-associated to (F',ρ'_i,ψ'_F') with i=1,2, we have
Γ(s,ρ_1 ×ρ_2,ψ_F)= Γ(s,ρ'_1 ×ρ'_2,ψ'_F').
With aid of <cit.>, we can relate gamma factors for a pair of level zero supercuspidal representations with
those for the corresponding cuspidal representations over finite fields:
ω^n-1_ρ_2(-1)Γ(s,ρ_1 ×ρ_2,ψ_F)= Vol(𝔭_F)^r(n-r-1)γ(π_1 ×π_2,ψ).
Since we have normalized Haar measures on F so that the volume of 𝔬_F is q^1/2 (and similarly for 𝔬_F'), we have
Vol(𝔭_F)= Vol(𝔭_F') and the result follows.
The assignment “ LLC" is now reconciled with the Deligne-Kazhdan theory (<ref>) and (<ref>).
We assume that non-archimedean local fields F and F' are 1-close. Then the following diagram commutes:
3.5pc2pc 𝒜_0( GL_n(F)) [d]^≅_ Kaz[r]^ LLC_≅ 𝒢^t( GL_n(F)) [d]^ Del_≅
𝒜_0( GL_n(F')) [r]^ LLC_≅ 𝒢^t( GL_n(F'))
We will prove this theorem by induction on n. When n=1, the Deligne-Kazhdan philosophy is compatible with local class field theory <cit.>.
Now we assume that <Ref> holds for 1 ≤ d ≤ n-1. Let ρ_1 ∈𝒜_0( GL_n(F)) and σ∈𝒜_0( GL_d(F)). Let φ_ρ_1 and φ_σ
denote the local Langlands parameter attached to ρ_1 and σ, respectively. We put ρ'_1= Kaz(ρ_1) and σ'= Kaz(σ).
Writing ρ_2= LLC^-1∘ Del^-1(φ_ρ'_1), the corresponding local Langlands parameter φ_ρ_2 is Del-associated to φ_ρ'_1.
In view of <cit.> along with <cit.> again, ρ_1 and ρ_2 share the same central character ω_ρ_1=ω_ρ_2.
By induction hypothesis, we have
σ= LLC^-1∘ Del^-1(φ_σ').
This leads us to a chain of identities:
Γ(s,ρ_1 ×σ,ψ_F)= Γ(s,ρ'_1 ×σ',ψ'_F')=Γ(s,φ_ρ'_1⊗φ_σ',ψ'_F')
=Γ(s,φ_ρ_2⊗ Del^-1(φ_σ'),ψ_F)
=Γ(s,ρ_2 × LLC^-1∘ Del^-1(φ_σ'),ψ_F)
=Γ(s,ρ_2 ×σ,ψ_F)
for all σ∈𝒜_0( GL_d(F)) and 1 ≤ d ≤ n-1. Here, the second and fourth equalities are a part of local Langlands correspondence <cit.>,
the third equality follows from <cit.> due to Deligne, and the first equality is clear from Lemma <ref>.
Then by the local converse theorem for level zero supercuspidal representations <cit.>, we conclude that ρ_1 ≅ρ_2 from which the desired commutative diagram follows.
§ THE ASAI GAMMA FACTOR
§.§ The Flicker sum
Let =_q^2.
We fix a non-trivial additive character of such that
_=1_.
It is worth pointing out that can be constructed starting from a non-trivial additive character (cf.<cit.>*1 <cit.>*2 ).
We define the character of to be (x)=(_(Δ x)), where Δ∈
is of trace zero. Let c: x ↦x be the nontrivial Galois element in ().
Let π be an irreducible cuspidal representation of _n() with its associated Whittaker model (π,). For W_π∈(π,) and ϕ∈(^n), we define the Flicker sum
I(W_π,ϕ):=∑_g ∈_n() \_n() W_π(g) ϕ(e_ng).
Similarly, we define the dual Flicker sum
Ǐ(W_π,ϕ)
:=∑_g ∈_n() \_n() W̌_π(g) (ϕ)(e_ng).
Let π be an irreducible cuspidal representation of _n(). Then we have
Ǐ(W_π,ϕ)=∑_g ∈_n() \_n() W_π(g) (ϕ)(e_1 ^tg^-1).
We insert the definition (<ref>). Performing the change of variables g ↦_n ^tg^-1_n and then g ↦ g _n
yields the result.
We now aim to prove the functional equation γ(π, As,) I(W_π,ϕ)=Ǐ(W_π,ϕ)
satisfied by the Flicker sum I(W_π,ϕ). This allows us to define the Asai gamma factor γ(π, As,)
of an irreducible cuspidal representation π of _n().
Let π be an irreducible cuspidal representation of _n(). For every W_π∈(π,) and for any ϕ∈_0(^n),
there exists a complex number γ(π, As,) satisfying
γ(π, As,) ∑_g ∈_n() \_n() W_π(g) ϕ(e_ng)
=∑_g ∈_n() \_n() W_π(g) (ϕ)(e_1 ^tg^-1).
It can be verified from <Ref> that L_1 : (W_π,ϕ) ↦ I(W_π,ϕ) and L_2 : (W_π,ϕ) ↦I(W_π,ϕ) correspond to
elements of Hom__n()(π⊗_0(^n),__n()).
It is then enough to show that such forms are unique up to scalars, that is to say, Hom__n()(π⊗_0(^n),__n()) ≤ 1.
We identify _n() \_n()
with ^n - { 0 }, and then employ the Frobenius reciprocity law to find isomorphisms
Hom__n()(π⊗_0(^n),__n()) ≅ Hom__n()(π__n()⊗ Ind^_n()__n()(),__n())
≅ Hom__n()(π__n(),__n()).
The proof that the space Hom__n()(π__n(),__n()) is at most one-dimensional
is then parallel to that for nonarchimedean local fields <cit.>, relying on the theory of Bernstein and Zelevinsky's derivatives for finite fields established in <cit.>.
In the course of the proof of the preceding theorem, we obtain the following multiplicity one result as a byproduct, which is used repeatedly in the proof of <Ref> and <Ref>.
Let π be an irreducible cuspidal representation of _n(). Then we have
Hom__n()(π__n(),__n()) ≤ 1.
We express in terms of the Bessel functions associated with π.
Let π be an irreducible cuspidal representation of _n(). Then we have
γ(π, As,)=q^-n/2∑_g ∈_n() \_n() π(g)
(e_1 ^tg^-1 ^te_n).
In particular, we have =.
We take W_π=π and ϕ to be an indicator function δ_e_n on e_n. It can be seen from <cit.>
that I(π,δ_e_n)=1 and (δ_e_n)(y)=q^-n/2(e_n ^ty) from which (<ref>) shall follow. We now take the complex conjugate to reach
=q^-n/2∑_g ∈_n() \_n() π^-1(g) ^-1(e_1 ^tg^-1 ^te_n)
=.
The following general lemma plays a crucial role to evaluate the sums of Bessel functions against additive characters for later purpose.
<cit.> Let G be a finite group and L a subgroup of G. Suppose that L is a semidirect product of the form L=Z ⋊_n(). Let Ξ : L → be a character which is trivial on _n().
Let Π be an irreducible representation of G satisfying
* _L(Π_L,Ξ)=1.
* _Z ⋊_n()(Π_Z ⋊_n(),Ξ)=1.
* There exists a linear functional Λ∈__n()(Π__n(),__n()) and a vector v_0 ∈ V_Π such that
∑_p ∈_n() \_n() ∑_z ∈ ZΛ(Π(zp)v_0) Ξ^-1(z)=1.
Then we have
∑_g ∈_n() \_n()∑_z ∈ ZΛ(Π(zg) v_0) Ξ^-1(z) ^-1(e_n g ^te_1)=-1.
A non-zero vector v ∈ V_π is said to be a Flicker–Rallis vector if π(g)v=v for all g ∈ GL_n().
Using <cit.> in combination with <cit.>, it is noteworthy that π does not have the Flicker–Rallis vector whenever n=2m is even. (Refer to <cit.> for n=2).
Let π be an irreducible cuspidal representation of _n(). Suppose that n=2m+1 and π admits a Flicker–Rallis vector.
Then we have
γ(π, As,) =γ(π, As,^-1)=-q^-n/2.
Thanks to <Ref>, we apply <Ref> to V_Π=(π,), G=_n(), L=_n(), Z={ 1_n}, and Ξ=_L a trivial character. We define a non-trivial linear functional
Λ∈__n()(π__n(),__n())
on (π,) by
Λ(W_π)=W_π(1_n).
By choosing W_π= π, it is clear from <cit.> that
∑_p ∈_n() \_n() Λ(π(p)π)= ∑_p ∈_n() \_n() π(p)=1.
With aid of <Ref> coupled with <Ref> again, and then making the change of variables g ↦ g^-1, we find that
=q^-n/2∑_g ∈_n() \_n() π^-1(g)
^-1(e_1 ^tg^-1 ^te_n)
= q^-n/2∑_g ∈_n() \_n() Λ(π(g)π)^-1(e_n g ^te_1)
=-q^-n/2.
All that remains is to take the complex conjugate to conclude from <Ref> that
==-q^-2m+1/2=-q^-2m+1/2.
We end this section with functional equations for the Asai gamma factor γ(π, As,) of an irreducible cuspidal representation π of _n().
Let π be an irreducible cuspidal representation of _n().
* If π does not admit a Flicker–Rallis vector, then we have
γ(π, As,) γ(π, As,^-1)=1
and |γ(π, As,)|=1.
* If n=2m+1 and π admits a Flicker–Rallis vector, then we have
γ(π, As,) γ(π, As,^-1)=q^-n and |γ(π, As,)|=q^-n/2.
Appealing to <Ref>, when π does not admit a Flicker–Rallis vector, the functional equation is a direct consequence of the double-duality
Ǐ( W̌_π,(ϕ))=I(W_π,ϕ),
just as in <cit.>. The rest of the assertions can be verified from <Ref> along with <Ref>.
§.§ The Flicker–Rallis period and level zero supercuspidal representations
We set out to investigate the existence of Flicker–Rallis vectors, which is characterized by the non-vanishing of the Flicker sum.
Let π be an irreducible cuspidal representation of _n() with n=2m+1 odd. Then π admits a Flicker–Rallis vector if and only if there exists
W_π∈(π,) such that
∑_g ∈_n() \_n() W_π(g) ≠ 0.
We assume that π has a Flicker–Rallis vector. We endow (π,) with
an inner product (·,·) in which π is unitary. We define W_ FR∈(π,) by
W_ FR(g)=1/_n()∑_p ∈_n() π(gp).
for g ∈_n(). Benefiting from the average, we find that
W_ FR(gh)= W_ FR(g) for all h ∈_n(). Using the containment __n()(π__n(),)
⊆__n() (π__n() ,),
we deduce the equality __n()(π__n(),)
=__n() (π__n() ,) by the one-dimensionality of both spaces, <Ref>.
In the same fashion, W_ FR produces an element T_W_ FR∈__n()(π__n(),)
stated by T_W_ FR(W')=(W',W_ FR) for W' ∈(π,), from which it follows that W_ FR is a Flicker–Rallis vector.
Furthermore, the non-vanishing of the given summation can be verified, because <cit.> yields
∑_g ∈_n() \_n() W_ FR(g)=1/_n()∑_p ∈_n() π(p)=1.
Conversely, we assume that there exists W_π∈(π,) such that
∑_g ∈_n() \_n() W_π(g) ≠ 0.
We define W^♯_ FR∈(π,) by
W^♯_ FR(h)=1/_n()∑_g ∈_n() W_π(hg).
for h ∈_n(). Combining
W^♯_ FR(1_n)=∑_g ∈_n() \_n() W_π(g) ≠ 0.
along with the quasi-invariance property that W^♯_ FR(hh')=W^♯_ FR(h) for all h' ∈_n(), W^♯_ FR is indeed a
Flicker–Rallis vector that we seek for.
Let be a quadratic unramified extension of nonarchimedean local fields .
Let be a fixed non-trivial character of E that is trivial on F. Then will be of the form (x)=_(_(δ x)), where δ∈ is an element of trace zero. According to <cit.>, δ is in fact a unit in _ of trace zero. For the purpose of relating to , we take δ to be ^-1(Δ), so that
((k))=(k)
for k ∈_E.
Let ρ be a level zero supercuspidal representation of _n(E) constructed from an irreducible cuspidal representation π of _n() with its attached Whittaker model (ρ,ψ_E F).
We take a Whittaker function W_ρ∈𝒲(ρ,ψ_E F) and a Schwartz–Bruhat function Φ∈(^n),
and form the Flicker integral defined by
I(s,W_ρ,Φ)=∫__n(F) \_n(F) W_ρ(g) Φ(e_ng) | g|^s dg.
The integral converges absolutely for Re(s) sufficiently large, and extends meromorphically to the entire complex plane.
Furthermore, there exists a rational function ∈ℂ(q^-s) satisfying the functional equation <cit.>:
I(1-s,W̌_ρ,_(Φ))= I(s,W_ρ,Φ)
As before, the interested reader may notice that the gamma factor defined above differs by a sign from the conventional one defined by Flicker in <cit.>.
The local Asai L-function L(s,ρ, As) is the generator of the ℂ[q^± s]-fractional ideal of ℂ(q^-s) generated by the family of Flicker integrals I(s,W_ρ,Φ)
with W_ρ∈𝒲(ρ,ψ_E F) and Φ∈(^n), which is normalized to be of the form P(q^-s)^-1 for some P(X) ∈ℂ[X] with P(0)=1.
Let π be an irreducible cuspidal representation of _n(). Then for every W_π∈(π,), ϕ∈(^n), and s ∈,
there exists such that
Ǐ(W_π,ϕ)+q^-n(1-s)ω^-1_ρ(ϖ)(ϕ)(0)L(n(1-s),ω^-1_ρ_)I(W_π,_^n)
=(I(W_π,ϕ)+q^-nsω_ρ(ϖ)ϕ(0)L(ns,ω_ρ_)I(W_π,_^n)).
Since (W^∘_ρ)⊆_n()_n(_)=⨿_l ∈ϖ^l__n()_n(_), for (s) ≫ 0,
our integral can be decomposed as an infinite series
I(s,W^∘_ρ,Φ_∘)
=∑_l ∈ q^-nls∫_ω_ρ(xϖ^l)∫__n()∩ K_n \ K_n W^∘_ρ(k)Φ_∘(e_nkxϖ^l) dk x.
With Φ_∘ being a lift of ϕ, Φ_∘(e_nkxϖ^l)=0 for l < 0, whereas Φ_∘(e_nkxϖ^l)=ϕ(0) for l > 0.
When l=0, we make the change of variables k ↦ kx^-1 to obtain
I(s,W^∘_ρ,Φ_∘)
=∑_l=1^∞ q^-nlsω_ρ(ϖ^l)ϕ(0)∫_ω_ρ (x) x ∫__n()∩ K_n \ K_n W^∘_ρ(k) dk
+ ∫__n()∩ K_n \ K_n W^∘_ρ(k) Φ_∘(e_nk) dk.
Just as in the proof of <cit.>, we express the integrals as the sum
I(s,W^∘_ρ,Φ_∘)
=(_n()(1_n+_n(𝔭)))
×(∑_l=1^∞ q^-nlsω_ρ(ϖ^l)ϕ(0)·∫_ω_ρ (x) x· I(W_π,_^n)+I(W_π,ϕ)).
The first sum becomes q^-nsω_ρ(ϖ)ϕ(0)L(ns,ω_ρ_)I(W_π,_^n) if ω_ρ is unramified. If ω_ρ is ramified, the first sum vanishes, but it is still equal to q^-nsω_ρ(ϖ)ϕ(0)L(ns,ω_ρ_)I(W_π,_^n), since ω_π is non-trivial, so that π does not possess a non-zero Flicker–Rallis vector; this is equivalent to saying that I(W_π,_^n)=0 by virtue of <Ref>. Combining everything together, we find
I(s,W^∘_ρ,Φ_∘)
=(_n()(1_n+_n(𝔭)))
(I(W_π,ϕ)+q^-nsω_ρ(ϖ)ϕ(0)L(ns,ω_ρ_)I(W_π,_^n)).
Regarding the dual side, we analogously follow the proof of <Ref> to write it as
I(1-s,W̌^∘_ρ,_(Φ_∘))=∫__n() \_n() W^∘_ρ(g) _(Φ_∘)(e_1 ^tg^-1) g^s-1 dg.
We iterate the process for I(1-s,W̌^∘_ρ,_(Φ_∘)) in order to produce
I(1-s,W̌^∘_ρ,_(Φ_∘))
=(_m()(1_m+_m(𝔭)))
×(Ǐ(W_π,ϕ)+q^-n(1-s)ω^-1_ρ(ϖ)(ϕ)(0)L(n(1-s),ω^-1_ρ_)I(W_π,_^n)).
It remains to use the functional equation (<ref>) as well as to cross out the common volume term.
Let G= GL_n(E) and L= GL_n(F) in (<ref>). Analogously to the finite field case, we say that a representation ρ of GL_n(E) admits a Flicker–Rallis period if
ρ is GL_n(F)-distinguished. According to <cit.>, which is based on <cit.>, a level zero supercuspidal representation ρ does not have the Flicker–Rallis period as long as n=2m is even. The keen reader may notice that this property is completely analogous to the finite field case; indeed, in view of <Ref> it is not so surprising that the two conditions are equivalent.
Let ρ be a level zero supercuspidal representation of _n().
* If π does not admit a Flicker–Rallis vector, then we have
= .
* If n=2m+1 and π admits a Flicker–Rallis vector, then we have
=q^n(s-1/2)ω^-1_ρ(ϖ) L(n(1-s),ω^-1_ρ_)/L(ns,ω_ρ_).
We begin with the case when π does not admit a Flicker–Rallis vector. We apply <Ref> to see that
I(W_π,_^n)=0 for any W_π∈(π,).
Therefore, <Ref> boils down to the equality Ǐ(W_π,ϕ)= I(W_π,ϕ) for any W_π∈(π,) and ϕ∈_0(^n). This relation tells us that is exactly .
In what follows, we focus on the case when n=2m+1 and π admits a Flicker–Rallis vector. We take ϕ to be _^n an indicator function on _^n.
The relation (_^n)=q^n/2δ_0 implies that Ǐ(W_π,_^n)=0.
In addition, <Ref> allows us to choose W_π∈(π,) such that I(W_π,_^n)=1,
and consequently we obtain from <Ref> that
q^-n(1-s)+n/2ω^-1_ρ(ϖ)L(n(1-s),ω^-1_ρ_)
=(1+q^-nsω_ρ(ϖ)ϕ(0)L(ns,ω_ρ_))
= L(ns,ω_ρ).
We are left with solving it for .
When E=F × F and ρ≅ρ_1 ×ρ_2 is a representation of _n() ×_n(), <Ref> coincides with <Ref>.
§.§ The Asai epsilon factor and the Gauss sum
Let V be a n-dimensional vector space over . We consider the semi-direct product
( GL_n(ℂ) × GL_n(ℂ)) ⋊ Gal( / ),
where the non-trivial Galois element c in () acts on GL_n(ℂ) × GL_n(ℂ) by
(g_1,g_2) ⋊ c:=(g_2,g_1). This is the Langlands dual group of Res_(V / ).
Let s be an element of W_F which generates the quotient group W_F / W_E ≅ Gal(E/F).
Let φ : W_E → GL_n(ℂ) be an n-dimensional representation of the Weil group W_E.
We obtain the Asai representation, “ As",
As (φ) : W_F → ( GL_n(ℂ) × GL_n(ℂ)) ⋊ Gal( / )
by setting
As (φ)(τ)=(φ(τ),φ(sτ s^-1)) ∈ GL_n(ℂ) × GL_n(ℂ)
for τ∈ W_E, and
As (φ)(s)=(1_n,φ(s^2)) ⋊ c ∈ ( GL_n(ℂ) × GL_n(ℂ)) ⋊ Gal( / ) ∖ ( GL_n(ℂ) × GL_n(ℂ)).
For v_1,v_2 ∈ℂ^n, we have As (φ)(τ)(v_1⊗ v_2)=φ(τ)v_1⊗φ(sτ s^-1)v_2 and
As (φ)(s)(v_1⊗ v_2)=φ(s^2)v_2⊗ v_1. For x a real number, let ⌊ x ⌋ be the greatest integer less than or equal to x.
Let φ be an n-dimensional tamely ramified representation of W_E. Let α be a regular character of _q^2n
corresponding to φ via Green's parametrization and m=⌊n-1/2⌋.
Then we have
ε_0( As (φ),_)=(-1)^n q^-n^2/2τ(α^1+q^2m+1,ψ_d) ∏_i=0^m-1τ(α^1+q^2i+1,ψ_2n),
where d=n if n is odd, and d=2n if n is even.
It is worthwhile to point out that the local class field theory gives
I_E / P_E ≅_q^n,
where the transition maps are given by the norm maps ( Nr_n:d)_d | n as seen before. Henceforth we may consider α as a character of I_E / P_E.
With respect to a suitably chosen basis (cf. <cit.>), we have
φ_I_E(i_E)= diag(α(i_E),α^q^2(i_E),…,α^q^2n-2(i_E)) ∈ GL_n(ℂ)
for i_E ∈ I_E, which induces that
As (φ) _I_F(i_E)=( [ α(i_E); α^q^2(i_E) ; ⋱ ; α^q^2n-2(i_E) ],
[ α^q(i_E); α^q^3(i_E) ; ⋱ ; α^q^2n-1(i_E) ]).
The element belongs to GL_n(ℂ) × GL_n(ℂ), because i_E ∈ I_E (so there is no Frobenius). The reason why we have a second matrix
is that Fr_F · x · Fr_F^-1≡ x^qP_F for x ∈ I_F. This means that α( Fr_F · i_E · Fr_F^-1)=α^q(i_E).
We index the standard basis of ℂ^n by e_i, where i=0,1,…,n-1.
Putting all together, we obtain
As (φ) _I_F(i_E)(e_i ⊗ e_j)=α^q^2i+q^2j+1(i_E)(e_i ⊗ e_j).
The matrix representing As ( φ) _I_F is indexed by (i,j), where 0 ≤ i,j ≤ n-1. Furthermore, the eigenvalue α^q^2i+q^2j+1
lies in the Galois orbit of α^1+q^2(j-i)+1 as ( α^1+q^2(j-i)+1)^q^2i=α^q^2i+q^2j+1.
Therefore, the Galois orbits indexed by integers 0 ≤ d ≤⌊n-1/2⌋ are given by
𝒪(α^1+q^2d+1)={ (α^1+q^2d+1)^q^k | 0 ≤ k ≤ 2n-1 },
whereas 𝒪(α^1+q^2m+1) looks a bit different for n=2m+1 odd:
𝒪(α^1+q^2m+1)={ (α^1+q^2m+1)^q^k | 0 ≤ k ≤ n-1 }.
We emphasize that 𝒪(α^1+q^2d+1)'s should be thought of as (multi-)sets possibly with duplicated elements, so they are not exactly Galois orbits.
Each 𝒪(α^1+q^2d+1) consists of multiple copies of the same Galois orbit. Then <cit.>
gives us the desired formula for the ε_0-factor.
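To illustrate the theorem in the smallest even case, take n=2, so that m=0 and d=2n=4. The four characters appearing on the diagonal of As (φ) _I_F are α^1+q, α^1+q^3, α^q+q^2, and α^q^2+q^3, all lying in the Galois orbit of α^1+q, and the formula collapses to
ε_0( As (φ),_)=q^-2τ(α^1+q,ψ_4).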
Let λ_(_) be the Langlands constant
defined in <cit.>. Appealing to <cit.>, the Langlands constant λ_(_) is given by λ_(_)=-1. We apologize for the double usage of “λ", but we hope that the reader can separate the meaning from the context.
Let π(φ) be an irreducible cuspidal representation of _n() associated to an n-dimensional tamely ramified representation φ of W_E via the Macdonald correspondence. Then we have
γ(π(φ), As,)=ω^n-1_π(Δ) λ_(_)^-n(n-1)/2ε_0( As (φ),_).
We divide it into two cases. We assume that π does not admit a Flicker–Rallis vector.
We use <Ref> in conjunction with <cit.> and <cit.> in order to see that
γ(π(φ), As,)=Γ(s,ρ(φ), As,_)
=ω^n-1_ρ(δ) λ_(_)^-n(n-1)/2ε(s, As (φ),_)
=ω^n-1_π(Δ) λ_(_)^-n(n-1)/2ε_0( As (φ),_).
It remains to deal with the case when n=2m+1 and π admits a Flicker–Rallis vector. Since Δ is an element of trace zero, Δ^2=-ΔΔ belongs to
. The central character restricted to , ω_π_=α_, becomes trivial so that α^1+q^2m+1=. Using ω^n-1_π(Δ)=ω_π^m(Δ^2)=1,
this reduces the problem to confirm that
γ(π(φ), As,)=(-1)^m ε_0( As (φ),_).
Now, (α^1+q^2i+1)^1+q^2m+1= and α^1+q^2i+1 is not trivial for 0 ≤ i ≤ m-1. <cit.> gives us that
τ(α^1+q^2i+1,ψ_2(2m+1))=-q^2m+1α(x^1+q^2i+1) for which x ∈. Thus x^1+q^2i+1∈,
so α(x^1+q^2i+1)=1. Collecting all these together, and then using the fact that τ(,ψ_2m+1)=1, <Ref> tells us that
ε_0( As (φ),_)=(-1)^2m+1 q^-(2m+1)^2/2τ(α^1+q^2m+1,ψ_2m+1) ∏_i=0^m-1τ(α^1+q^2i+1,ψ_2(2m+1))
=(-1)^2m+1 q^-(2m+1)^2/2(-q^2m+1)^m τ(,ψ_2m+1)=(-1)^m-1q^-2m+1/2,
which agrees with (-1)^m γ(π(φ), As,) in <Ref>, as required.
We are in a position to state a main product formula for γ(π, As,) with regard to Gauss sums.
Let π be an irreducible cuspidal representation of _n(). We let α∈_q^2n be a regular character
corresponding to π via Green's parametrization
and m=⌊n-1/2⌋. Then we have
γ(π, As,)=ω^n-1_π(Δ) · (-1)^-n(n+1)/2 q^-n^2/2τ(α^1+q^2m+1,ψ_d) ∏_i=0^m-1τ(α^1+q^2i+1,ψ_2n).
§ THE EXTERIOR SQUARE GAMMA FACTOR
§.§ Jacquet–Shalika sums and periods
We let ℳ_n be n × n matrices, 𝒩_n the subspace of upper triangular matrices of ℳ_n.
We let σ_2m be a permutation matrix given by
σ_2m=[ 1 2 … m | m+1 m+2 … 2m; 1 3 … 2m-1 | 2 4 … 2m; ]
and let σ_2m+1 denote
σ_2m+1=[ 1 2 … m | m+1 m+2 … 2m 2m+1; 1 3 … 2m-1 | 2 4 … 2m 2m+1; ].
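For instance, when m=2 the permutation σ_4 sends 1 ↦ 1, 2 ↦ 3, 3 ↦ 2, and 4 ↦ 4; since it is an involution, the associated permutation matrix is independent of the convention chosen and equals
σ_4=[ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1 ],
so that conjugation by σ_4 interleaves the two diagonal blocks of [ g ; g' ] for g,g' ∈_2.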
Let π be an irreducible cuspidal representation of _n() with its associated Whittaker model (π,ψ).
For all W_π∈𝒲(π,ψ) and ϕ∈_0(^m),
there exists a complex number γ(π,∧^2,ψ) ∈ℂ^× such that
γ(π,∧^2,ψ) ∑_g ∈_m() \_m() ∑_X ∈𝒩_m() \ℳ_m() W_π( σ_2m[ 1_m X; 1_m ][ g ; g ]) ψ^-1( X) ϕ(e_mg)
=∑_g ∈_m() \_m() ∑_X ∈𝒩_m() \ℳ_m() W_π( σ_2m[ 1_m X; 1_m ][ g ; g ]) ψ^-1( X) (ϕ)(e_1tg^-1)
in the even case n=2m
γ(π,∧^2,ψ)
∑_g ∑_X ∑_Z
W_π( σ_2m+1[ 1_m X ; 1_m ; 1 ][ g ; g ; 1 ][ 1_m ; 1_m ; Z 1 ]) ψ^-1( X) ϕ(Z)
=∑_g∑_X ∑_Z
W_π( [ 1; 1_2m ]σ_2m+1[ 1_m X ; 1_m ; 1 ][ g ; g ; 1 ][ 1_m -^tZ; 1_m ; 1 ])
ψ^-1( X) (ϕ)(Z)
in the odd case n=2m+1, where the summation domains of g, X, and Z are taken over _m() \_m(),
𝒩_m() \ℳ_m(), and ^m, respectively. We express γ(π,∧^2,) in terms of the Bessel functions associated with π.
<cit.>
Let π be an irreducible cuspidal representation of _2m(). Then we have
γ(π,∧^2,)=
q^-m/2∑_g ∈_m() \_m()∑_X ∈_m() \_m()π( σ_2m[ 1_m X; 1_m ][ g ; g ]σ^-1_2m)
×^-1( X) (e_1 ^tg^-1 ^te_m).
In particular, we have γ(π,∧^2,^-1)=γ(π,∧^2,).
We define a Shalika subgroup S_2m of GL_2m by
S_2m={[ 1_m X; 1_m ][ g ; g ] | X ∈ℳ_m, g ∈ GL_m }.
Let Θ be a Shalika character of S_2m given by
Θ( [ 1_m X; 1_m ][ g ; g ])=ψ( X).
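In the smallest case m=1, an element of S_2 has the form
[ 1 x; 1 ][ g ; g ]=[ g gx; g ]
with g ∈_1 and x a scalar, on which Θ takes the value ψ(x); thus Θ restricts to the standard character on the upper triangular unipotent subgroup and is trivial on the centre of GL_2.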
A non-zero vector v ∈ V_π is called a Jacquet–Shalika vector if π(s)v=Θ(s)v for every s ∈ S_2m().
Over a nonarchimedean local field F, we let G= GL_2m(F) and H=S_2m(F) in (<ref>). We say that a representation ρ of G admits a Jacquet–Shalika period
if ρ is (S_2m(F),Θ)-distinguished.
Let π be an irreducible cuspidal representation of _n(). Suppose that n=2m and π admits a Jacquet–Shalika vector.
Then we have
γ(π,∧^2,) =γ(π,∧^2,^-1)=-q^-m/2.
We embed the Shalika subgroup S_2m() into G=_2m() via conjugation by σ_2m.
We define a linear functional Λ∈__m()(π__m(),__m()) on
(π,) by
Λ(W_π)=1/_m()W_π(1_2m).
We choose W_π to be π. Upon using <Ref> with G=_2m(), L=S_2m() a Shalika subgroup, and Ξ=Θ a Shalika character, we find from <cit.> that
∑_p ∈_m() \_m() ∑_X ∈_m()Λ(π(σ_2m[ p ; p ][ 1_m X; 1_m ]σ^-1_2m)π)^-1( X)
=∑_p ∈_m() \_m() ∑_X ∈_m() \_m()π(σ_2m[ p ; p ][ 1_m X; 1_m ]σ^-1_2m)^-1( X) =1,
which gives rise to
∑_g ∈_m() \_m() ∑_X ∈_m()Λ(π(σ_2m[ g ; g ][ 1_m X; 1_m ]σ^-1_2m)π) ^-1( X)^-1(e_m g ^te_1)
=-1.
After multiplying both sides by q^-m/2, and then making the change of variables g ↦ g^-1 and X ↦ -X, we arrive at the identity
γ(π,∧^2,^-1)
=q^-m/2∑_g ∈_m() \_m() ∑_X ∈_m() \_m()π^-1(σ_2m[ g ; g ][ 1_m X; 1_m ]σ^-1_2m)
( X) ^-1(e_1 ^tg^-1 ^te_m)
=-q^-m/2.
All that remains is to take the complex conjugate. In this way, we conclude from <Ref> that γ(π,∧^2,) =γ(π,∧^2,^-1)=-q^-m/2, as desired.
We now spell out functional equations for γ(π,∧^2,) of an irreducible cuspidal representation π of _n().
Let π be an irreducible cuspidal representation of _n().
*
If π does not admit a Jacquet–Shalika vector, then we have
γ(π,∧^2,) γ(π,∧^2,^-1) =1
and |γ(π,∧^2,)|=1.
* If n=2m and π admits a Jacquet–Shalika vector, then we have
γ(π,∧^2,) γ(π,∧^2,^-1) =q^-m and |γ(π,∧^2,)|=q^-m/2.
Part <ref> has been done in <cit.>, and Part <ref> is straightforward from <Ref>.
§.§ Jacquet–Shalika integrals and close field theory
Let ρ be level zero supercuspidal representations of _n(F) with its attached Whittaker models (ρ,ψ_F).
Let _^♭ be a non-trivial additive character of F of level zero, that is trivial on 𝔬 but not on 𝔭^-1.
For each W_ρ∈𝒲(ρ,_) and Φ∈𝒮(F^m), we define Jacquet–Shalika integrals J(s,W_ρ,Φ) by
∫_N_m(F) \ GL_m(F)∫_𝒩_m(F) \ℳ_m(F)∫_F^m W_ρ( σ_2m+1[ 1_m X ; 1_m ; 1 ][ g ; g ; 1 ][ 1_m ; 1_m ; z 1 ])
ψ_^-1( X) Φ(z) | g|^s-1 dz dX dg
in the odd case n=2m+1 and
∫_N_m(F) \ GL_m(F)∫_𝒩_m(F) \ℳ_m(F) W_ρ( σ_2m[ 1_m X; 1_m ][ g ; g ]) ψ_^-1( X) Φ(e_mg) | g|^s dX dg
in the even case n=2m. These integrals converge absolutely for Re(s) sufficiently large, and they define rational functions in ℂ(q^-s).
The exterior square gamma factor Γ(s,ρ,∧^2,_) is defined as a proportionality factor: it is a rational function in ℂ(q^-s)
satisfying
J(1-s,ρ̌(τ_n)W̌_ρ,_(Φ))= Γ(s,ρ,∧^2,_)J(s,W_ρ,Φ),
where τ_n is the matrix [ 1_m; 1_m ] if n=2m, and the matrix [ 1_m ; 1_m ; 1 ]
if n=2m+1. The local exterior square L-function L(s,ρ,∧^2) is the generator of the ℂ[q^± s]-fractional ideal of Jacquet–Shalika integrals J(s,W_ρ,Φ)
with W_ρ∈𝒲(ρ,_) and Φ∈𝒮(F^m) normalized to be of the form P(q^-s)^-1 for some P(X) ∈ℂ[X] satisfying P(0)=1.
A principal series representation of the form Σ= Ind_B_n(F)^ GL_n(F)(μ_1 ⊗…⊗μ_n ) is said to be spherical if it has a K_n-fixed vector.
It is worthwhile to mention that Σ is a full induced representation from the Borel subgroup B_n(F) of unramified characters μ_i of F^×.
Here unramified means that each μ_i is invariant under the maximal compact subgroup 𝔬^× of F^×.
Such a spherical representation must have a one-dimensional space of Whittaker functionals Λ∈__n()(Σ__n(),_^♭). The map v ↦Λ(Σ(·)· v) a priori need not be injective, so that the Whittaker model 𝒲(Σ,_^♭)
consisting of Whittaker functions on GL_n(F) of the form W_Σ(g):=Λ(Σ(g)· v) may
only be a model of a quotient of Σ. However, if ρ is an irreducible generic representation of GL_n(F), then ρ is isomorphic to its unique Whittaker model 𝒲(ρ,ψ_),
which is the image of V_ρ under the map v ↦Λ(ρ(·)· v). According to <cit.>, the local functional equation (<ref>) can be extended to
ρ and Σ, which is sufficient for applications therein.
Let F be a local function field. Let ρ be an irreducible generic subquotient of a spherical representation Ind_B_n(F)^ GL_n(F)(μ_1 ⊗…⊗μ_n ). Then we have
Γ(s,ρ,∧^2,^♭_)=∏_1≤ j < k ≤ nΓ(s,μ_j ×μ_k,^♭_).
Let us set Σ= Ind_B_n(F)^ GL_n(F)(μ_1 ⊗…⊗μ_n ).
Let V_ρ and V_Σ denote their underlying space of ρ and Σ, respectively.
By the uniqueness of Whittaker functionals,
a non-zero Whittaker functional on V_ρ induces a non-zero Whittaker functional on V_Σ. As this representation has a unique Whittaker functional,
this must be it and we conclude that Γ(s,ρ,∧^2,^♭_)= Γ(s,Σ,∧^2,^♭_).
For such a spherical representation Σ, the subspace of spherical vectors must be one-dimensional, and we normalize the spherical Whittaker function W_Σ^♭ in the Whittaker model 𝒲(Σ,^♭_)
so that W_Σ^♭(1_n)=1. Upon taking W_Σ^♭∈𝒲(Σ,^♭_) and Φ^♭∈𝒮(F^m) of a characteristic function on 𝔬^m and using <cit.>, we have the identity
∏_i=1^n L(1-s,μ^-1_i,∧^2)∏_1≤ j < k ≤ n L(1-s,μ^-1_j ×μ^-1_k)
=J(1-s,Σ̌(τ_n)W̌^♭_Σ,^♭_(Φ^♭))
=Γ(s,Σ,∧^2,^♭_)J(s,W_Σ^♭,Φ^♭)=Γ(s,Σ,∧^2,^♭_) ∏_i=1^n L(s,μ_i,∧^2)∏_1≤ j < k ≤ n L(s,μ_j ×μ_k)
from which the result we seek follows.
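For example, for n=2 the only pair is (j,k)=(1,2), and the lemma asserts that
Γ(s,ρ,∧^2,^♭_)=Γ(s,μ_1 ×μ_2,^♭_)=Γ(s,μ_1μ_2,^♭_),
in agreement with the fact that the exterior square of a two-dimensional representation is its determinant, here realized by the central character μ_1μ_2.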
We let k denote a global function field with field of constants 𝔽_q and ring of adèles 𝔸_k.
One of the most powerful tools for proving statements about local factors is the standard globalization due to Lomelí <cit.> (cf. <cit.>).
Let ρ be a level zero unitary supercuspidal representation of GL_n(F) over a local function field F.
There is a global field k with a set of three places S={ v_0,v_1,v_∞} such that k_v_0≅ F.
There exists an irreducible cuspidal automorphic representation Π=⊗'_v Π_v of GL_n(𝔸_k) satisfying
the following properties:
* Π_v_0≅ρ;
* Π_v is an irreducible unramified principal series representation at every v ∉ S;
* Π_v_1 and Π_v_∞ are irreducible quotients of unramified principal series representations.
* If ρ is generic, then Π is globally generic.
The aforementioned globalization is required to prove a purely local statement, namely, that
the local exterior square γ-factors Γ(s,ρ,∧^2,_) defined via the Rankin–Selberg method of Jacquet and Shalika <cit.>
agree with the factors Γ_ LS(s,ρ,∧^2,_) defined via the Langlands–Shahidi method <cit.> in positive characteristic.
We eventually generalize the equality to all characteristic, notably, zero in <Ref>.
Let ρ be a level zero supercuspidal representation of GL_n(F) over a local function field F. Then we have
Γ(s,ρ,∧^2,_)= Γ_ LS(s,ρ,∧^2,_).
Twisting by an unramified character does not affect the conclusion, so there is no harm in assuming that ρ is unitary (cf. <cit.>).
Applying Theorem <ref> to the level zero supercuspidal representation, there are a global field k with three places v_0, v_1, and v_∞
such that k_v_0≅ F, and an irreducible unitary cuspidal automorphic representation Π of GL_n(𝔸_k) with the required properties in <Ref>.
We take a non-trivial additive character Ψ of 𝔸_k k, and assume, as we may, that Ψ_v_0=ψ_F.
The global functional equation via the Langlands-Shahidi method can be read from <cit.> as
L^S(s,Π,∧^2)=Γ_ LS(s,Π_v_0,∧^2,Ψ_v_0) ∏_v ∈ S-{ v_0}Γ_ LS(s,Π_v,∧^2,Ψ_v) L^S(1-s,Π̌,∧^2).
Since for v ∉ S we know that Π_v and Ψ_v are unramified so that ε(s,Π_v,∧^2,Ψ_v) ≡ 1,
<cit.> is rephrased as
L^S(s,Π,∧^2)=Γ(s,Π_v_0,∧^2,Ψ_v_0) ∏_v ∈ S-{ v_0}Γ(s,Π_v,∧^2,Ψ_v) L^S(1-s,Π̌,∧^2).
Applying Lemma <ref> gives us Γ_ LS(s,Π_v,∧^2,Ψ_v)=Γ(s,Π_v,∧^2,Ψ_v) for v ∈ S-{ v_0}.
With this in hand, the result that we seek is immediate once we divide (<ref>) by (<ref>).
Let Γ(s,∧^2 ( φ),_) denote the Artin exterior square γ-factor and Γ(s, φ,_) the Artin standard γ-factor. We then have a following result.
<cit.>
For (F,φ,_) that is Del-associated to (F',φ','_'), we have
Γ(s,∧^2 ( φ),ψ_F)=Γ(s,∧^2 ( φ'),_').
Let ρ(φ) be a level zero supercuspidal representation of GL_n(F) obtained from a tamely ramified Weil representation φ of W_F of degree n
via the local Langlands correspondence (LLC).
The identity
Γ_ LS(s,ρ(φ),∧^2,_)=Γ(s,∧^2( φ),_)
relating analytic γ-factors Γ_ LS(s,ρ,∧,_) with corresponding Artin factors Γ(s,∧^2 (φ),_)
has been established for non-archimedean local fields F of characteristic 0 in <cit.>
and positive characteristic in <cit.>.
For (F,ρ,_) that is Kaz-associated to (F',ρ',_'), we have
Γ(s,ρ,∧^2,_)=Γ(s,ρ',∧^2,_') and Γ_ LS(s,ρ,∧^2,_)=Γ_ LS(s,ρ',∧^2,_').
We consider the first equality. Proposition 3.23 in <cit.> yields the equivalent condition that
Hom_S_2m(F)(ρ⊗|(·)|^s/2,Θ) ≠ 0 for some s ∈ℂ if and only if Hom_S_2m(F')(ρ' ⊗|(·)|^s'/2,Θ) ≠ 0 for some s' ∈ℂ.
If this is the case, we use <cit.> in conjunction with the fact that ω_ρ(ϖ_F)=ω_ρ'(ϖ_F') to prove
Γ(s,ρ,∧^2,_)=q^m(s-1/2)ω_ρ(ϖ_F)·L(m(1-s),ω^-1_ρ)L(ms,ω_ρ)
=q^m(s-1/2)ω_ρ'(ϖ_F')·L(m(1-s),ω^-1_ρ')L(ms,ω_ρ')
=Γ(s,ρ',∧^2,_').
Otherwise, owing to <cit.>, we are guided to
Γ(s,ρ,∧^2,_)= γ(π,∧^2,ψ)=Γ(s,ρ',∧^2,_').
Next, we deal with the second equality. For (F,φ,_) that is Del-associated to (F',φ',_'), a similar notation ρ'(φ') applies to φ'.
We know from Proposition <ref> that ρ(φ) is Kaz-associated to ρ'(φ'), at which point Proposition <ref> together with (<ref>) completes the proof.
It is time to bring Deligne-Kazhdan close field theory and Theorem <ref> back together for good use.
Let φ be an n-dimensional tamely ramified Weil representation of W_ corresponding to the level zero supercuspidal representation ρ(φ) of _n() via the Macdonald correspondence.
Then we have
Γ(s,ρ(φ),∧^2,_)=Γ(s,∧^2( φ),_).
Given a local field F' of characteristic p and an integer m ≥ 1, there exists a local field F of characteristic 0 such that F' is m-close to F <cit.>. The converse also holds for m=1. Specifically, for a field F of characteristic 0, its residue field 𝔬_F/𝔭_F is isomorphic to 𝔽_q with q=p^k for some prime p and integer k ≥ 1. Then we take F' to be 𝔽_q((t)) of characteristic p. Now for a p-adic field F, <Ref> and <Ref> allow us to deduce Γ(s,ρ,∧^2,_)= Γ_ LS(s,ρ,∧^2,_). The desired equality then simply follows from (<ref>) and <Ref>.
§ THE BUMP–FRIEDBERG GAMMA FACTOR
§.§ The Bump–Friedberg sum
We define the embedding J: _m ×_m →_n by
J(g,g^')_k,l=
g_i,j if k=2i-1, l=2j-1,
g^'_i,j if k=2i, l=2j,
0 otherwise,
for n=2m even
and J: _m+1×_m →_n by
J(g,g^')_k,l=
g_i,j if k=2i-1, l=2j-1,
g^'_i,j if k=2i, l=2j,
0 otherwise,
for n=2m+1 odd. We denote by M_m,m the standard Levi subgroup of _2m associated with the partition (m,m) of 2m.
Let w_m,m=σ_2m and then we set H_2m=w_m,mM_m,mw^-1_m,m. Let w_m+1,m=w_m+1,m+1|_ GL_2m+1 so that
w_m+1,m=[ 1 2 … m+1 | m+2 m+3 … 2m 2m+1; 1 3 … 2m+1 | 2 4 … 2m-2 2m; ].
In the odd case, w_m+1,m≠σ_2m+1. We let M_m+1,m denote the standard Levi subgroup of GL_2m+1 associated with the partition (m+1,m) of 2m+1. We set H_2m+1=w_m+1,mM_m+1,mw^-1_m+1,m. The reason for introducing auxiliary elements w_m,m and w_m+1,m is that J(g,g')=w_m,m diag(g,g')w^-1_m,m for diag(g,g') ∈ M_m,m, and
J(g,g')=w_m+1,m diag(g,g')w^-1_m+1,m for diag(g,g') ∈ M_m+1,m.
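To make the interleaving explicit in low rank: for n=2 (so m=1) one simply has J(g,g')=[ g ; g' ], while for n=3 (still m=1), writing g=(g_i,j) for the 2 × 2 block and g' for the scalar,
J(g,g')=[ g_1,1 0 g_1,2; 0 g' 0; g_2,1 0 g_2,2 ],
which is precisely w_2,1 diag(g,g')w^-1_2,1, the permutation w_2,1 being the transposition exchanging 2 and 3.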
We emphasize that H_n is compatible with the intersection in a manner that H_n ∩ GL_n-1=H_n-1. Let π be an irreducible cuspidal representation of _n(). For W_π∈(π,) and ϕ∈_0(^⌊(n+1)/2 ⌋), we define the Bump-Friedberg sum as
Z(W_π,ϕ)
:=∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(J(g,g')) ϕ(e_mg'),
in the even case n=2m and
Z(W_π,ϕ)
:=∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m() W_π(J(g,g')) ϕ(e_m+1g),
in the odd case n=2m+1. Similarly, we define the dual Bump-Friedberg sum as
Ž(W_π,ϕ)
:=∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W̌_π(J(g,g')) (ϕ)(e_mg'),
in the even case n=2m
Ž(W_π,ϕ)
:=∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m() W̌_π(J(g,g')) (ϕ)(e_m+1g),
in the odd case n=2m+1.
Let π be an irreducible cuspidal representation of _2m(). Then we have
Ž(W_π,ϕ)=∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π( σ_2m[ g'; g ]σ^-1_2m) (ϕ)(e_1 ^tg'^-1)
for n=2m even and
Ž(W_π,ϕ)=∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m()
W_π (J(g,g')) (ϕ)(e_1 ^tg^-1)
for n=2m+1 odd.
We begin with inserting the identity (<ref>) for W̌_π. Subsequently, we make the change of variables g ↦_n ^tg^-1_n, g' ↦_n ^tg'^-1_n, and then g ↦ g_n, g' ↦ g'_n to arrive at the result.
We aim to prove functional equation Z(W_π,ϕ)= Ž(W_π,ϕ)
satisfied by the Bump–Friedberg sum Z(W_π,ϕ). This allows us to define the Bump–Friedberg gamma factor
of an irreducible cuspidal representation π of _n().
Let π be an irreducible cuspidal representation of _n(). For every W_π∈(π,) and for any ϕ∈_0(^⌊(n+1)/2 ⌋),
there exists a complex number such that
∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(σ_2m[ g ; g' ]σ^-1_2m) ϕ(e_mg')
=∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π( σ_2m[ g'; g ]σ^-1_2m) (ϕ)(e_1 ^tg'^-1),
in the even case n=2m and
∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m() W_π (J(g,g')) ϕ(e_m+1g)
=∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m() W_π (J(g,g')) (ϕ)(e_1 ^tg^-1),
in the odd case n=2m+1.
It can be checked from <Ref> that bilinear forms B_1 : (W_π,ϕ) ↦ Z(W_π,ϕ) and B_2 : (W_π,ϕ) ↦Ž(W_π,ϕ) belong to
the space _ H_n() (π_H_n() ⊗_0(^⌊(n+1)/2 ⌋) ,_H_n()).
Hence it suffices to show that such bilinear forms B_1 and B_2 differ only by a scalar, that is to say,
_ H_n() (π_H_n() ⊗_0(^⌊(n+1)/2 ⌋) ,_H_n()) ≤ 1.
We identify _n() ∩ H_n() \ H_n() with ^⌊(n+1)/2 ⌋ - { 0 }, and then use the
Frobenius reciprocity to deduce that
_ H_n() (π_H_n() ⊗_0(^⌊(n+1)/2 ⌋) ,_H_n())
≅_ H_n() (π_H_n() ⊗ Ind^H_n()__n() ∩ H_n()() ,_H_n())
≅__n() ∩ H_n() (π__n() ∩ H_n() ,__n() ∩ H_n()).
After suitably conjugating _n() ∩ H_n() by Weyl elements, the dimension of the last space is at most one by <cit.> for the even case n=2m and <cit.> for the odd case n=2m+1.
In the course of the proof of the preceding theorem, we obtain the following multiplicity one result as a byproduct, which is used in the proof of <Ref>.
Let π be an irreducible cuspidal representation of _n(). Then we have
__n() ∩ H_n() (π__n() ∩ H_n() ,__n() ∩ H_n()) ≤ 1.
§.§ The Friedberg–Jacquet period
Let ρ be an irreducible supercuspidal representation of _2m(). As in (<ref>), the representation ρ is called H_2m(F)-distinguished or distinguished with respect to H_2m(F) if
_H_2m()(ρ_H_2m(),_H_2m()) ≠ 0.
We also say that ρ admits a Friedberg–Jacquet period if ρ is H_2m(F)-distinguished. Let ℓ and ℓ' be the linear forms on (ρ,_)
defined by
ℓ : W_ρ↦ Z_(0)(1/2,W_ρ):=∫__m() \_m()∫__m() \_m() W_ρ(J(g,p')) dgdp'
and
ℓ' : W_ρ↦ Z_(0)(1/2,W̌_ρ):=∫__m() \_m()∫__m() \_m()W̌_ρ(J(g,p')) dgdp'.
Let ρ be an irreducible supercuspidal representation of _2m() which is distinguished with respect to H_2m(). Then there exists
a non-zero constant c(ρ) ∈^×, which is independent of _, such that ℓ'=c(ρ)ℓ.
We know from <cit.> that L(s,ρ, BF) is holomorphic at s=1/2 since ρ is assumed to be distinguished
with respect to H_2m(). As a consequence, all the integrals Z_(0)(s,W_ρ) are holomorphic at s=1/2 from which it follows that
the linear forms ℓ and ℓ' are well-defined. Since ρ is H_2m()-distinguished, ρ is also H_2m()-distinguished.
Taking
_H_2m()(ρ_H_2m(),_H_2m())= __2m() ∩ H_2m()(ρ__2m() ∩ H_2m(),__2m() ∩ H_2m())
into account, ℓ is a H_2m()-invariant functional on (ρ,_) and the integral in the linear form of ℓ'
is a H_2m()-invariant functional on (ρ,_^-1). As ρ≅ρ^ι where ρ^ι(g)=ρ(^tg^-1),
ℓ' gives a H_2m()-invariant functional on ρ as well.
Using the multiplicity one result of H_2m()-invariant linear functionals accompanied by <cit.>, this yields that
two linear forms ℓ and ℓ' differ by a non-zero scalar c(ρ) which depends only on the representation ρ.
Let ε(s,ρ,_) be the standard ε-factor defined by Godement and Jacquet <cit.>.
The local constant takes the form ε(s,ρ,_)=ε(0,ρ,_)q^-f(ρ,ψ_F)s,
where f(ρ,ψ_F)=-n+f(ρ), for a non-negative integer f(ρ) regardless of the choice of ψ_F <cit.>. We shall primarily be interested in the special value of ε(s,ρ,_) at s=1/2. Although we narrow it down to {± 1 } for distinguished representations ρ,
it is challenging to determine the sign of the root number ε(1/2,ρ,_). This is what is known as distinction problems <cit.>. It is our belief that ε(1/2,ρ,_)=1 for distinguished representations ρ, but we leave these out as it will be a digression from the main theorem of this paper.
Let ρ be an irreducible supercuspidal representation of _2m() which is distinguished with respect to H_2m().
Then we have ε(1/2,ρ,^♭_)=ε(1/2,ρ,_) ∈{± 1 }. In particular, if ρ is a level zero supercuspidal representation of _2m(), then we get
ε(1/2,ρ,^♭_)=ε(1/2,ρ,_)=ε(s,ρ,_) ∈{± 1 }.
Appealing to <cit.> with an observation that the central character ω_ρ of the distinguished representation ρ is trivial, the central value of epsilon factors ε(1/2,ρ,_) does not depend on the choice of _.
With the choice of level one additive character _, we recall from <cit.> that the level zero supercuspidal representations are of
conductor zero and then their corresponding gamma factors ε(s,ρ,_) are complex numbers instead of rational functions in q^s.
In this way, we see that ε(1/2,ρ,_)=ε(s,ρ,_).
Let us make a straightforward observation.
The ε-factor satisfies the identity
ε(s,ρ,_) ε(1-s,ρ̌,^-1_)=1.
Since ρ is self-contragredient, that is to say ρ≅ρ̌ <cit.>, it is clear from the fact ε(s,ρ̌,^-1_)=ε(s,ρ,_) that
ε(1/2,ρ,_ )^2=1.
Thereupon we conclude that ε(1/2,ρ,_ ) ∈{± 1}, as claimed.
Let W^ ess_ρ be the essential Whittaker function defined by Jacquet–Piatetski-Shapiro–Shalika and ρ_ ur a certain
unramified standard module attached to ρ.
We refer the reader to <cit.> for precise definitions of these objects. By evaluating the essential Whittaker function W^ ess_ρ in (ρ,^♭_), we specify the constant c(ρ).
Let ρ be an irreducible supercuspidal representation of _2m() which is distinguished with respect to H_2m().
Then we have ℓ'=ε(s,ρ,_)ℓ. In particular, if ρ is a level zero supercuspidal representation of _2m(), then we get
Z(W̌_π,_^m)=ε(s,ρ,_)Z(W_π,_^m).
Since c(ρ) does not depend on the choice of _, we can take _ to ^♭_.
Upon using <cit.> in conjunction with <Ref>, and then making the change of variables g ↦ g [ ϖ^-f(π)1_n-1 ; 1 ], we are led to
ℓ'(W^ ess_ρ)=ℓ(W̌^ ess_ρ)=ε(1/2,ρ,^♭_)^2m-1ℓ(ρ[ ϖ^f(π)1_n-1 ; 1 ] W^ ess_ρ)
=ε(s,ρ,_) ℓ(W^ ess_ρ ).
With the help of <cit.>, we deduce from the self-contragredient of ρ that
ℓ(W^ ess_ρ )=L(1/2,ρ_ ur)L(1,ρ_ ur,∧^2)=L(1/2,ρ_ ur)L(1,ρ_ ur,∧^2)=ℓ(W^ ess_ρ).
To sum up, we obtain ℓ'(W^ ess_ρ) =ε(s,ρ,_)ℓ(W^ ess_ρ), from which we conclude that c(ρ)=ε(s,ρ,_).
We assume that ρ is a level zero supercuspidal representation constructed from π and choose W_ρ to be W^∘_ρ. Since the support of W^∘_ρ
is contained in _2m()K_2m, we arrive at (cf. <cit.>)
ℓ(W^∘_ρ)= ∫__m()∩ K_m \ K_m∫__m()∩ K_m \ K_m W^∘_ρ(J(k,k')) dkdk'
=(_m()(1_m+_m(𝔭)))^2 ∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(J(g,g'))
=(_m()(1_m+_m(𝔭)))^2 Z(W_π,_^m),
and similarly we have ℓ'(W^∘_ρ)=(_m()(1_m+_m(𝔭)))^2 Z_(W_π,_^m).
The common volume term is cancelled out, and we are left with Z(W_π,_^m)=ε(s,ρ,_)Z(W_π,_^m), as required.
A non-zero vector v ∈ V_π is called a Friedberg–Jacquet vector if π(h)v=v for every h ∈ H_2m().
We now characterize the existence of Friedberg–Jacquet vectors in terms of the non-vanishing sum.
Let π be an irreducible cuspidal representation of _n() with n=2m even. Then π admits a Friedberg–Jacquet vector if and only if there exists W_π∈(π,) such that
∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(J(g,g')) ≠ 0.
We assume that π has a non-zero Friedberg–Jacquet vector. We equip (π,) with
an inner product (·,·) in which π is unitary. We define W_ FJ∈(π,) by
W_ FJ(g)=1/H_n()∑_p ∈_n() ∩ H_n() π(gp)
for g ∈ H_n(). Taking advantage of the average, we see that W_ FJ(gh)=W_ FJ(g)
for all h ∈_n() ∩ H_n().
Using inclusion _H_n()(π_H_n(),)
⊆__n() ∩ H_n() (π__n() ∩ H_n() ,),
we deduce the equality _H_n()(π_H_n(),)
=__n() ∩ H_n() (π__n() ∩ H_n() ,) from the one-dimensionality of both spaces, <Ref>. In this way, W_ FJ produces an element T_W_ FJ∈_H_n()(π_H_n(),)
stated by T_W_ FJ(W')=(W',W_ FJ) for W' ∈(π,), from which it follows that W_ FJ is a Friedberg–Jacquet vector.
Furthermore, the given summation is non-trivial, because <cit.> yields
∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_ FJ(J(g,g'))
=1/_n()∩ H_n()∑_p ∈_n() ∩ H_n() π(p)=1.
Conversely, we assume that there exists W_π∈(π,) such that
∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(J(g,g')) ≠ 0.
We define W^♯_ FJ∈(π,) by
W^♯_ FJ(h)=1/_n()∩ H_n()∑_g ∈ H_n() W_π(hg).
Combining
W^♯_ FJ(1_n)=∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() W_π(J(g,g')) ≠ 0
along with the quasi-invariance property that W^♯_ FJ(hh')=W^♯_ FJ(h) for all h' ∈ H_n(), W^♯_ FJ is a non-zero
Friedberg–Jacquet vector that we seek for.
§.§ Bump–Freidberg integrals and close field theory
Let ρ be a level zero supercuspidal representation of _n(F)
constructed from an irreducible cuspidal representation π of _n()
with its attached Whittaker model (ρ,ψ_F).
For W_ρ∈(ρ,_) and Φ∈(^⌊(n+1)/2 ⌋), we define the Bump-Friedberg integral Z(s_1,s_2,W_ρ,Φ) by
∫__m() \_m()∫__m() \_m() W_ρ(J(g,g')) Φ(e_mg') g^s_1-1/2 g'^1/2+s_2-s_1 dg' dg
for n=2m even and
∫__m+1() \_m+1()∫__m() \_m() W_ρ(J(g,g')) Φ(e_m+1g) g ^s_1 g'^s_2-s_1 dg' dg
for n=2m+1 odd. For the sake of coherence with <cit.>, we introduce further notation. For a complex number t, we denote by δ_t the character defined by
δ_t : J(g,g') ↦ g/ g'^t.
We denote by χ_n characters of H_n():
χ_n : J(g,g') ↦_H_n() for n=2m;
g g' for n=2m+1.
In particular, we are interested in the case s_1=s+t+1/2 and s_2=2s. With regard to δ_t, χ_n, s, and t,
these Bump–Friedberg integrals, which depend on the parity of n, can be put into a single integral as
Z(s,t,W_ρ,Φ)=∫_ (_n() ∩ H_n()) \ H_n() W_ρ(h) Φ(e_n h) χ^1/2_n(h) δ_t(h) h^s dh.
The integral converges absolutely for Re(s) and Re(t) sufficiently large <cit.>, and it enjoys a meromorphic continuation to ×
as an element of (q^-s,q^-t). There exists a rational function in (q^-s,q^-t) such that for every W_ρ in (ρ,_) and Φ in (^⌊(n+1)/2 ⌋), we have the following functional equation <cit.>:
Z(1/2-s,-1/2-t,W_ρ,_(Φ))= Z(s,t,W_ρ,Φ).
For our purpose, it will often be convenient to write Z(s,W_ρ,Φ) and Γ(s,ρ, BF,_) in place of Z(s,0,W_ρ,Φ) and Γ(s,0,ρ, BF,_), respectively. The local Bump–Friedberg L-function L(s,ρ, BF) is the generator of the ℂ[q^± s]-fractional ideal of Bump–Friedberg integrals Z(s,W_ρ,Φ) with W_ρ∈𝒲(ρ,ψ_F) and Φ∈𝒮(F^⌊ (n+1)/2 ⌋) normalized to be of the form P(q^-s)^-1 for some P(X) ∈ℂ[X] satisfying P(0)=1.
Let π be an irreducible cuspidal representation of _2m(). Then for every W_π∈(π,), ϕ∈(^m), and s ∈,
there exists such that
Z(W_π,ϕ)+q^-m(1-2s)ω^-1_ρ(ϖ)(ϕ)(0)L(m(1-2s),ω^-1_ρ)ε(s,ρ,_)Z(W_π,_^m)
= (Z(W_π,ϕ)+q^-2msω_ρ(ϖ)ϕ(0)L(2ms,ω_ρ)Z(W_π,_^m)).
The computations for the two sides are quite similar. For this reason, we give a detailed proof only for the dual integral Z(1/2-s,-1/2-t,W^∘_ρ,_(Φ_∘)).
The rationale for our choice is that the dual side of the modified functional equation is less often written down in the literature (cf. <cit.>, <Ref>)
and certain additional difficulties arise. Since the support of W^∘_ρ lies in _n()K_n=⨿_l ∈ϖ^l_n()K_n,
for (s) ≪ 0, Z(1/2-s,W^∘_ρ,_(Φ)) can be decomposed as
Z(1/2-s,-1/2-t,W^∘_ρ,_(Φ_∘))
=∑_l ∈ q^-ml(1-2s)∫_ω^-1_ρ(xϖ^l)
×∫__m()∩ K_m \ K_m∫__m()∩ K_m \ K_mW^∘_ρ(J(k,k'))_(Φ_∘)(e_mk'xϖ^l) dkdk' x.
The Fourier transform _(Φ_∘) is a lift of (ϕ), from which we deduce that _(Φ_∘)(e_mk'xϖ^l)=0 for l < 0, while _(Φ_∘)(e_mk'xϖ^l)=ϕ(0) for l > 0. Upon making the change of variables k' ↦ k'x^-1 for l=0, the infinite sum can be reduced to
Z(1/2-s,-1/2-t,W^∘_ρ,_(Φ_∘))
=
∫__m()∩ K_m \ K_m∫__m()∩ K_m \ K_mW^∘_ρ(J(k,k')) _(Φ_∘)(e_mk') dk dk'
+ ( ∑_l=1^∞ q^-ml(1-2s)ω^-1_ρ(ϖ^l) .
·(ϕ)(0)
. ∫_ω^-1_ρ (x) x
∫__m()∩ K_m \ K_m∫__m()∩ K_m \ K_mW^∘_ρ(J(k,k')) dk dk' ).
We rewrite the integration as sums akin to the proof of <cit.> by
Z(1/2-s,-1/2-t,W^∘_ρ,_(Φ_∘))
=(_m()(1_m+_m(𝔭)))^2
×(Z(W_π,ϕ)+ ∑_l=1^∞ q^-ml(1-2s)ω^-1_ρ(ϖ^l)
(ϕ)(0) ∫_ω^-1_ρ (x) x· Z(W_π,_^m)
).
To deal with the second term, we assume that ω_ρ is unramified. Combining <cit.> with <cit.>, ρν^s_0 is S_2m()-distinguished for some s_0 ∈ℂ, which amounts to saying that it is H_2m()-distinguished.
It follows from <Ref> that the second term is equal to
q^-m(1-2s)ω^-1_ρ(ϖ)(ϕ)(0)L(m(1-2s),ω^-1_ρ) Z(W_π,_^m)
=q^-m(1-2s)ω^-1_ρ(ϖ)(ϕ)(0)L(m(1-2s),ω^-1_ρ)ε(s,ρ,_)Z(W_π,_^m).
On the other hand, if ω_ρ is ramified, π does not admit a non-zero Friedberg–Jacquet vector in that ω_π is non-trivial.
This in turn implies that the second term vanishes, as
∫_ω^-1_ρ (x) x=0=Z(W_π,_^m),
thanks to <Ref>. The analogous argument for Z(s,t,W^∘_ρ,Φ_∘) goes through, and it guides us to
Z(s,t,W^∘_ρ,Φ_∘)
=(_m()(1_m+_m(𝔭)))^2
(Z(W_π,ϕ)+q^-2msω_ρ(ϖ)ϕ(0)L(2ms,ω_ρ)Z(W_π,_^m)).
All that remains is to apply the functional equation (<ref>) and then to cancel out the common volume term.
In contrast to the exterior square local factor Γ(s,ρ,∧^2,_), the Bump–Friedberg local factor possesses two parameters s and t. For this reason, Γ(s,t,ρ, BF,_) is not really defined as a proportionality factor; rather, the functional equation for the ε-factor, ε(s,t,ρ, BF,_), needs to be established beforehand.
Thankfully, the following theorem, <Ref>, shows that Γ(s,t,ρ, BF,_) is independent of t in the class of level zero supercuspidal representations ρ.
Therefore there is no harm in assigning t=0 to define Z(s,W_ρ,Φ) and Γ(s,ρ, BF,_).
Let ρ be a level zero supercuspidal representation of _n().
* If π does not admit a Friedberg–Jacquet vector, then we have
= .
* If n=2m and π admits a Friedberg–Jacquet vector, then we have
=ε(s,ρ,_)q^m(2s-1/2)ω^-1_ρ(ϖ) L(m(1-2s),ω^-1_ρ)/L(2ms,ω_ρ).
We first look into the odd case n=2m+1. Just as in (<ref>), we decompose the domain of integration _n()K_n into shells ϖ^l_n()K_n to see that
Z(s,t,W^∘_ρ,Φ_∘)=(_m()(1_m+_m(𝔭)))(_m+1()(1_m+1+_m+1(𝔭)))
×(Z(W_π,ϕ)+ ∑_l=1^∞ q^-l((2m+1)s+t+1/2)ω_ρ(ϖ^l)
ϕ(0) ∫_ω_ρ (x) x· Z(W_π,_^m)
),
and
Z(1/2-s,-1/2-t,W^∘_ρ,_(Φ_∘))
=(_m()(1_m+_m(𝔭)))(_m+1()(1_m+1+_m+1(𝔭))) (Z(W_π,ϕ) .
.+ ∑_l=1^∞ q^-l((2m+1)(1/2-s)-t)ω^-1_ρ(ϖ^l)
(ϕ)(0) ∫_ω^-1_ρ (x) x· Z(W_π,_^m)
).
We take W_π=π and ϕ=δ_e_m+1. In light of (<ref>) along with <Ref>, we are left with
Z(s,t,π_,Φ_∘)=(_m()(1_m+_m(𝔭)))(_m+1()(1_m+1+_m+1(𝔭))) Z(π,δ_e_m+1)
=(_m()(1_m+_m(𝔭)))(_m+1()(1_m+1+_m+1(𝔭))),
which is a non-zero constant. Appealing to <cit.> along with <cit.> and <cit.>, the local factor ∈[q^± s,q^± t] is a unit in q^-s and q^-t, that is, a monomial of the form =α q^-β s q^-η t, with α∈ and β, η∈ℤ. This forces that all but one of summands in (<ref>) must vanish. Among them, the only term which survives is
Z(π,δ_e_m+1)= Z(π,δ_e_m+1)=.
Combining all these calculations, we find that =, as needed.
We turn our attention to the case when n=2m and π does not have a Friedberg–Jacquet vector.
By taking advantage of <Ref>, Z(W_π,_^m)=0 for all W_π∈(π,). As before, <Ref> simply turns into
Z(W_π,ϕ)=Z(W_π,ϕ)= Z(W_π,ϕ).
All that remains is to choose W_π=π and ϕ=δ_e_m. In doing so, <Ref> guarantees that Z(π,δ_e_m) is precisely 1, from which the equality = follows.
Suppose that n=2m and π admits a Friedberg–Jacquet vector.
Upon choosing ϕ=_^m, the relation (_^m)=q^m/2δ_0 implies that Ž(W_π,_^m)=0.
With aid of <Ref>, we take W_π∈(π,_) satisfying Z(W_π,_^m)=1. In this way, we reduce <Ref> to
q^-m(1-2s)+m/2ω^-1_ρ(ϖ)L(m(1-2s),ω^-1_ρ)ε(s,ρ,_)
=(1+q^-2msω_ρ(ϖ)L(2ms,ω_ρ))= L(2ms,ω_ρ)
from which the required equality holds.
We accomplish the following nice expression of Bump–Friedberg gamma factors in terms of their Bessel functions.
Let π be an irreducible cuspidal representation of _n(). Then we have
γ(π, BF,ψ)=q^-m/2∑_g ∈_m() \_m() ∑_g' ∈_m() \_m() π( σ_2m[ g'; g ]σ^-1_2m) (e_1 ^tg'^-1 ^te_m)
in the even case n=2m
γ(π, BF,ψ)=q^-m+1/2∑_g ∈_m+1() \_m+1() ∑_g' ∈_m() \_m() π (J(g,g')) (e_1 ^tg^-1 ^te_m+1)
in the odd case n=2m+1. In particular, we have =.
Just as in the proof of <Ref>, we take W_π=π and ϕ to be an indicator function δ_e_m in the even case n=2m
and δ_e_m+1 in the odd case n=2m+1, at which point <cit.> ensures that Z(π,δ_e_m)=Z(π,δ_e_m+1)=1. It remains to note that (δ_e_m)(y)=q^-m/2(e_m ^ty) and
(δ_e_m+1)(y)=q^-m+1/2(e_m+1^ty).
We precisely evaluate the sum (<ref>), when π has the Friedberg–Jacquet vector.
Let π be an irreducible cuspidal representation of _n(). Suppose that n=2m and π admits a Friedberg–Jacquet vector.
Then we have
γ(π, BF,ψ) =γ(π, BF,ψ^-1)=-ε(s,ρ,_)q^-m/2.
We insert the identity (<ref>) for . Next, we select W_π=π and ϕ=δ_e_m an indicator function on e_m, so that its Fourier transform
is given by (δ_e_m)(y)=q^-m/2(e_m ^ty). In this way, <Ref> becomes
+q^-m(1-2s)-m/2ω^-1_ρ(ϖ)L(m(1-2s),ω^-1_ρ)ε(s,ρ,_)Z(π,_^m)
=ε(s,ρ,_)q^m(2s-1/2)ω^-1_ρ(ϖ) L(m(1-2s),ω^-1_ρ)/L(2ms,ω_ρ).
We clear the denominator to express it as
-ω^-1_ρ(ϖ)q^-m(1-2s)
+q^-m(1-2s)-m/2ω^-1_ρ(ϖ)ε(s,ρ,_)Z(π,_^m)
=ε(s,ρ,_)q^m(2s-1/2)ω^-1_ρ(ϖ)-ε(s,ρ,_)q^-m/2.
We compare the coefficients of constant terms and q^2ms terms individually. In doing so, we arrive at a system of linear equations
=-ε(s,ρ,_)q^-m/2
-ω^-1_ρ(ϖ)q^-m+q^-m-m/2ω^-1_ρ(ϖ)ε(s,ρ,_)Z(π,_^m)
=ε(s,ρ,_)q^-m/2ω^-1_ρ(ϖ),
treating and Z(π,_^m) as unknown variables, from which the equality =-ε(s,ρ,_)q^-m/2 shall follow. Having <Ref> in mind, all we have to do is to take the complex conjugate. We end up with
==-ε(s,ρ,_)q^-m/2.
We will shift our focus to the coincidence of arithmetic and analytic local factors, but beforehand we state functional equations for over finite fields.
Let π be an irreducible cuspidal representation of _n().
*
If π does not admit a Friedberg–Jacquet vector, then we have
γ(π, BF,ψ) γ(π, BF,ψ^-1)= 1
and |γ(π, BF,ψ)|=1.
* If n=2m and π admits a Friedberg–Jacquet vector, then we have
γ(π, BF,ψ) γ(π, BF,ψ^-1)=q^-m and |γ(π, BF,ψ)|=q^-m/2.
Upon invoking <Ref>, the functional equation in <ref> can be seen from
the double-duality
Ž( W̌_π,(ϕ))=Z(W_π,ϕ),
just as in <cit.>. In view of <Ref>, we combine <Ref> and <Ref> to see the rest of the results, and this ends the proof.
In practice, <cit.> allows us to generalize the local function equation (<ref>) to
an irreducible generic representation ρ and a spherical representation Ind_B_n(F)^ GL_n(F)(μ_1 ⊗…⊗μ_n ) at least for the Bump–Friedberg γ-factor with one variable s (t=0). We do not strive for maximal generality, so this hypothesis might be redundant, but which holds in all our applications.
Let F be a local function field. Let ρ be an irreducible subquotient of a spherical representation Ind_B_n(F)^ GL_n(F)(μ_1 ⊗…⊗μ_n ). Then we have
Γ(s,ρ, BF,^♭_)=Γ(s+1/2,ρ,^♭_)Γ(2s,ρ,∧^2,^♭_)
=∏_1 ≤ i ≤ nΓ(s+1/2,μ_i,^♭_)∏_1≤ j < k ≤ nΓ(2s,μ_j ×μ_k,^♭_).
The proof of Lemma <ref> goes through word for word, except that we use the unramified computation of
Bump and Friedberg <cit.> in place of <cit.> (the reader may consult <cit.> for an alternative proof of the unramified computation).
As an intermediate step, we establish that Bump–Friedberg γ-factors agree with the counterpart Langlands–Shahidi gamma factors in positive characteristics.
We extend the coincidence of two local factors to all characteristic, notably, zero in <Ref>.
Let ρ be a level zero supercuspidal representation of GL_n(F) over a local function field F. Then we have
Γ(s,t,ρ, BF,_)=Γ(s+t+1/2,ρ,_)Γ_ LS(2s,ρ,∧^2,_)=Γ(s+t+1/2,ρ,_)Γ(2s,ρ,∧^2,_).
Putting together <Ref> and <cit.>, Γ(s,t,ρ, BF,_) and Γ(s+t+1/2,ρ,_) are independent of t,
so that we take t to be 0. With the help of <cit.>, twists by unramified characters do not affect the first equality. For this reason,
we may assume that ρ is unitary without loss of generality.
Applying Theorem <ref> to the level zero supercuspidal representation, there are a global field k with three places v_0, v_1, and v_∞
such that k_v_0≅ F, and an irreducible unitary cuspidal automorphic representation Π of GL_n(𝔸_k) with the required properties in <Ref>.
We choose a non-trivial additive character Ψ of 𝔸_k k, and assume, as we may, that Ψ_v_0=_.
The global functional equation for exterior square L-functions via the Langlands-Shahidi method can be read from <cit.> as
(<ref>), while that for standard L-functions due to Godement and Jacquet is extracted from <cit.> as
L^S(s,Π)=Γ(s,Π_v_0,Ψ_v_0) ∏_v ∈ S-{ v_0}Γ(s,Π_v,Ψ_v) L^S(1-s,Π̌).
In the meantime, taking into account local functional equations (<ref>),
the global functional equation for Bump-Friedberg L-functions in <cit.> (cf. <cit.>) takes
the following explicit form:
L^S(s+1/2,Π,ψ)L^S(2s,Π,∧^2)
=Γ(s,Π_v_0, BF,Ψ_v_0) ∏_v ∈ S-{ v_0}Γ(s,Π_v, BF,Ψ_v) L^S(1/2-s,Π̌) L^S(1-2s,Π̌,∧^2).
In accordance with Lemma <ref>, each places v in S-{ v_0} can be controlled in such a way that
Γ(s,Π_v, BF,Ψ_v)=Γ(s+1/2,Π_v,Ψ_v)Γ_ LS(2s,Π_v,∧^2,Ψ_v).
After substituting s+1/2 for s in (<ref>), the case of positive characteristic is then settled by dividing (<ref>) by the product of (<ref>) and (<ref>).
The Bump–Friedberg gamma factor Γ(s,t,ρ, BF,_) is comparable with Kazhdan close field theory.
The proof is nearly identical to that of <Ref>, and so we shall be brief.
For (F,ρ,ψ) that is Kaz-associated to (F',ρ',ψ'), we have
Γ(s,t,ρ, BF,_)=Γ(s,t,ρ', BF,_').
As a consequence of <Ref>, we know that ε(s,ρ,_)=ε(s,ρ',_'). With Vol(𝔭_F)= Vol(𝔭_F')=q^-1/2 and ω_ρ(ϖ_F)=ω_ρ'(ϖ_F') in hand, <Ref> readily implies our assertion.
We are now in a position to formulate the main factorization formula conjectured by Bump and Friedberg <cit.>.
Let φ be an n-dimensional tamely ramified representation of W_ corresponding to the level zero supercuspidal representation ρ(φ) of _n() via the Macdonald correspondence. Then we have
Γ(s,t,ρ(φ), BF,_) =ε(s+t+1/2,φ,_)Γ(2s,∧^2( φ),_).
Let F be a non-archimedean local field of characteristic 0 with its residue field 𝔬_F / 𝔭_F isomorphic to _q. Let F'=_q((t)) so that F and F' are 1-close.
At this point, we have given the detailed argument before in the proof of <Ref>. One may simply mimic the argument there, resting on <Ref>, <Ref>, <Ref>, and a part of local Langlands correspondence <cit.>.
§.§ The Bump–Friedberg epsilon factor and the Gauss sum
The following elementary lemma illustrates how the standard ε-factors and ε_0-factors are related, but it does not seem to be recorded elsewhere.
We take the occasion to provide a proof for completeness.
Let φ be an n-dimensional tamely ramified representation of W_. Then we have
ε(s,φ,_)=ε_0(φ,_).
As mentioned in the proof of <Ref>, <cit.> asserts that ε(s,φ,_)=ε(s,ρ(φ),_)
is a complex number. The identity (<ref>) forces V^I_F={ 0}. The result then follows from <cit.>.
Our next task is to deduce the decomposition of Bump–Friedberg γ-factors, which we may think of as being the finite field analogue of <Ref>.
Let π(φ) be an irreducible cuspidal representation of _n() associated to a tamely ramified representation φ of W_F of degree n via the Macdonald correspondence. Then we have
γ(π(φ), BF,ψ)=ε_0(φ,_)γ(π(φ),∧^2,).
We break it down into two cases.
It is worth noting the equivalent statement that π admits a Jacquet-Shalika vector if and only if
it admits a Friedberg–Jacquet vector, which we defer to the next section (see <Ref>).
Suppose that π does not admit a Friedberg–Jacquet vector. Owing to <Ref>, <Ref>, <Ref> together with <cit.>,
we are led to
γ(π(φ), BF,ψ)=Γ(s,t,ρ(φ), BF,_)
=ε(s+t+1/2,φ,_)Γ(2s,∧^2( φ),_)
=ε_0(φ,_)γ(π(φ),∧^2,).
It remains to deal with the case when n=2m is even and π admits a Friedberg–Jacquet vector.
This case only requires a purely local approach avoiding globalization, as
the result is immediate from combining <Ref> and <Ref> with <Ref>.
The following expression for ε_0(φ,_)ε_0(∧^2 ( φ),_) in terms of Gauss sums is thought of as the Bump–Friedberg analogue of <Ref> and <Ref>.
Let π be an irreducible cuspidal representation of _n(). We let α∈_q^n be a regular character
corresponding to π via Green's parametrization and m=⌊n/2⌋. Then we have
ε_0(φ,_)ε_0(∧^2 ( φ),_)
=(-1)^n+n 2q^-1/2( n+n 2)τ(α,ψ_n)τ(α^1+q^m,ψ_d)
∏_i=1^m-1τ(α^1+q^i,ψ_n),
where d=n if n is odd, and d=m if n is even.
When the second representation is the trivial one _ of _1(), the Rankin–Selberg γ-factor γ(s,π(φ) ×_,_) degenerates into the Godement–Jacquet ε-factor ε(s,π(φ), _).
Now, <cit.> along with <cit.> and <Ref> ensure that
ε_0(φ⊗_W_F,_)=γ(s,π(φ) ×_,_)=ε(s,φ, _)=ε_0(φ,_).
Our claim is a direct consequence of <cit.> and <cit.>.
The proof of <Ref> should be compared with that of <Ref> below. Unlike the equality
Γ(s,ρ(φ), As,_)
=ω^n-1_ρ(δ) λ_(_)^-n(n-1)/2ε(s, As (φ),_),
which is independently settled in <cit.> and <cit.> for irreducible supercuspidal representations ρ, the corresponding equality for exterior square γ-factors
has been less developed. We use Deligne–Kazhdan close field theory to deduce the required identity for level zero supercuspidal representations ρ, which is enough for applications.
Let π(φ) be an irreducible cuspidal representation of _n() associated to a tamely ramified representation φ of W_F of degree n via Macdonald correspondence. Then we have
γ(π(φ),∧^2,)=ε_0(∧^2 (φ),_) and γ(π(φ), BF,ψ)=ε_0(φ,_)ε_0(∧^2 ( φ),_).
We separate it into two cases. Suppose that π does not admit a Jacquet–Shalika vector.
We use <Ref> in conjunction with <cit.> and <cit.> in order to see that
γ(π(φ),∧^2,)=Γ(s,ρ(φ),∧^2,_)=ε(s,∧^2( φ),_)=
ε_0(∧^2 (φ),_).
We now handle the remaining case when n=2m is even and π admits a Jacquet-Shalika vector. The central character ω_π=α_
becomes trivial so that α^1+q^m=. Since (α^1+q^i)^1+q^m= and α^1+q^i is not trivial for 0 ≤ i ≤ m-1, we invoke <cit.> to get τ(α^1+q^i,ψ_n)=-q^m. In light of <cit.>, we conclude that
ε_0(∧^2 ( φ),_)=(-1)^2m 2q^-1/22m 2τ(α^1+q^m,ψ_m)
∏_i=1^m-1τ(α^1+q^i,ψ_2m)=-q^-1/22m 2· q^m(m-1)τ(,ψ_m),
which agrees with γ(π(φ),∧^2,)=-q^-m/2 in <Ref>, having used the fact that τ(,ψ_m)=1.
Then the second equality can be justified from <Ref>.
We are now prepared to obtain a product formula for γ(π,∧^2,) and γ(π, BF,ψ) in terms of Gauss sums.
Let π be an irreducible cuspidal representation of _n(). We let α∈_q^n be a regular character
corresponding to π via Green's parametrization and m=⌊n/2⌋. Then we have
γ(π,∧^2,)=(-1)^n 2q^-1/2n 2τ(α,ψ_n)τ(α^1+q^m,ψ_d)
∏_i=1^m-1τ(α^1+q^i,ψ_n)
and
γ(π, BF,ψ)=(-1)^n+n 2q^-1/2( n+n 2)τ(α,ψ_n)τ(α^1+q^m,ψ_d)
∏_i=1^m-1τ(α^1+q^i,ψ_n),
where d=n if n is odd, and d=m if n is even.
§ PERIOD VECTORS AND DISTINCTIONS
In this section, we study the period vectors and integrals for four pairs of groups (G,L): Jacquet–Piatetski-Shapiro–Shalika period, Flicker–Rallis period, Friedberg–Jacquet period, and
Jacquet–Shalika period. Let σ be a level zero supercuspidal representation of G(E) coming from an irreducible cuspidal representation Π of G().
The group G and its closed subgroups L and N, as well as its representations Π and σ, are given by the following table:
Period Vectors | G() | G(E) | L | U | Π | σ | r
Jacquet–Piatetski-Shapiro–Shalika | GL_n() × GL_n() | GL_n(F) × GL_n(F) | GL_n | _n | π_1 ×π_2 | ρ_1 ×ρ_2 | -
Flicker–Rallis | GL_2m+1() | GL_2m+1(E) | GL_2m+1 | N_2m+1 | π | ρ | As
Friedberg–Jacquet | GL_2m() | GL_2m(F) | H_2m | _m ×_m | π | ρ | BF
Jacquet–Shalika | GL_2m() | GL_2m(F) | S_2m | _m ×_m | π | ρ | ∧^2
Given a pair of representations σ and Π, their corresponding central characters ω_Π and ω_σ, and their associated Whittaker models (Π,) and (σ,), in addition to the character Ξ of L, are highlighted in the following table:
Period Vectors | ω_Π | ω_σ | Ξ | ν^α s_0 | (Π,) | (σ,) | q^-β
Jacquet–Piatetski-Shapiro–Shalika | ω_π_1ω_π_2 | ω_ρ_1ω_ρ_2 | _L | ν^s_0 | (π_1,^-1) ⊗(π_2,) | (ρ_1,_^-1) ⊗(ρ_2,_) | q^-n/2
Flicker–Rallis | ω_π | ω_ρ | _L | ν^s_0 | (π,) | (ρ,) | q^-n/2
Friedberg–Jacquet | ω_π | ω_ρ | _L | ν^s_0 | (π,) | (ρ,_) | q^-m/2
Jacquet–Shalika | ω_π | ω_ρ | Θ | ν^s_0/2 | (π,) | (ρ,_) | q^-m/2
For each of the four period vectors, we prove a relation between the period integrals and sums, and the L-factors L(s,σ,r) and γ-factors γ(Π,r,).
The local factors that show up in this section include the Rankin–Selberg factors L(s,ρ_1 ×ρ_2) and γ^⋆(π_1×π_2,), the Asai factors
L(s,ρ, As) and , the Bump–Friedberg factors L(s,ρ, BF) and , and the exterior square factors L(s,ρ,∧^2) and γ(π,∧^2,).
When Π=π_1 ×π_2, we let γ(Π,r,) denote γ^⋆(π_1×π_2,). With the above data, the following statements are equivalent:
* Π admits a non-zero period vector, i.e., there exists a non-zero vector v ∈ V_Π
such that Π(g)v=v for all g ∈ L.
* ω_Π_= _.
* There exists W_Π∈(Π,) such that
∑_g ∈ U() \ L() W_Π(g) ≠ 0.
* γ(Π,r,)=q^-β.
* σ admits a nontrivial twisted period, i.e., _L()(σν^α s_0_L(), Ξ) ≠ 0 for some s_0 ∈ℂ.
* ω_σ_=ν^-α s_0_.
* There exists W_σ∈(σ,) such that the integral
∫_U() \ L() W_σ(g) ν^α s_0(g) dg
is well-defined and non-vanishing for some s_0 ∈.
* L(s,σ,r) has a pole at s=s_0.
In order to keep our exposition of the proof neat and concise, we separate it into two parts.
We deal with the equivalent assertions for finite field cases.
The equivalence of <ref> & <ref> has been explained in <Ref>, <Ref>, <cit.>, and <cit.>.
The equivalence of <ref> & <ref> is a direct consequence of functional equations: <Ref>, <Ref>, <Ref>, and <Ref>.
The equivalence of <ref> & <ref> is just a summary of <Ref>, <Ref>, <Ref>, and <cit.>.
We treat the parallel statements for non-archimedean local field cases.
The equivalent statement about central characters <ref> & <ref> is clear from the fact that ω_Π_=
ω_σ∘_. The equivalence of <ref> & <ref> is simply a restatement of <cit.>, <cit.>, and <cit.>.
Since L(s,σ,r) and L(1-s,σ,r) do not share any common poles,
the equivalence of <ref> & <ref> can be seen from <Ref>, <Ref>, <Ref>, and <cit.>.
The proof of the equivalent statements <ref> & <ref> needs to be managed more carefully. The absolute convergence of the integral (<ref>) can be justified from <cit.>. We omit the details, since the proof is standard <cit.>. Taking care of the characterization of poles in terms of residual integrals (<ref>), the proof of <cit.>, which originates from <cit.>, carries over verbatim to the setting of the Flicker–Rallis and Friedberg–Jacquet periods.
The main point is to think of (<ref>) as a constant multiple of the leading coefficient in the Laurent expansion of Jacquet–Piatetski-Shapiro–Shalika integral Ψ(s,W_ρ_1,W_ρ_2,Φ), Flicker integral I(s,W_ρ,Φ),
Friedberg–Jacquet integral Z(s,W_ρ,Φ), and Jacquet–Shalika integrals J(s,W_ρ,Φ) at s=s_0, respectively.
When Π=π_1 ×π_2, our result can be regarded as a special case of <cit.> for d_π(σ)=1.
Let ρ be a level zero supercuspidal representation of _2m(F)
constructed from an irreducible cuspidal representation π of _2m().
The following statements are equivalent:
* π admits a Friedberg–Jacquet vector.
* π admits a Jacquet–Shalika vector.
* ρν^s_0 is H_2m()-distinguished for some s_0 ∈ℂ.
* ρν^s_0 is (S_2m(F),Θ)-distinguished for some s_0 ∈ℂ.
The following equivalent statements can be read from the proof of <cit.>, which has its origin in the work of Matringe <cit.>:
* L(2s,ρ,∧^2) has a pole at s=s_0;
* L(s,ρ, BF) has a pole at s=s_0;
* ρν^s_0 is (S_2m(F),Θ)-distinguished;
* ρν^s_0 is H_2m(F)-distinguished.
All we need to do at this point is to look back at <Ref> for the Friedberg–Jacquet and Jacquet–Shalika periods.
The author is deeply indebted to Elad Zelingher for sending a proof of <Ref> to us and kindly allowing us to reproduce it here.
We would like to thank Rongqing Ye for drawing attention to the equality of exterior square gamma factors in the author's thesis, and Andrew Knightly and Gilbert Moss for many fruitful discussions.
Thanks are also owed to David Schwein for elaborating on the close field theory in the Harish-Chandra learning seminar, where the author first learned the topic.
This work was supported by the National Research Foundation of Korea (NRF) grant
funded by the Korea government (No. RS-2023-00209992).
This manuscript has no associated data.
The author states that there is no conflict of interest.
|
http://arxiv.org/abs/2307.00366v1 | 20230701153130 | Enhancing the EEG Speech Match Mismatch Tasks With Word Boundaries | [
"Akshara Soman",
"Vidhi Sinha",
"Sriram Ganapathy"
] | eess.AS | [
"eess.AS"
] |
Recent studies have shown that the underlying neural mechanisms of human speech comprehension can be analyzed using a match-mismatch classification of the speech stimulus and the neural response. However, such studies have been conducted for fixed-duration segments without accounting for the discrete processing of speech in the brain. In this work, we establish that word boundary information plays a significant role in sentence processing by relating EEG to its speech input. We process the speech and the EEG signals using a network of convolution layers. Then, a word boundary-based average pooling is performed on the representations, and the inter-word context is incorporated using a recurrent layer. The experiments show that the modelling accuracy can be significantly improved (match-mismatch classification accuracy) to 93% on a publicly available speech-EEG data set, while previous efforts achieved an accuracy of 65-75% for this task.
Index Terms: Speech-EEG match mis-match task, auditory neuroscience, word segmentation, speech comprehension.
§ INTRODUCTION
Humans have the unique ability to communicate through speech. While speech comprehension is mastered from a young age, many neural processes enabling this seamless activity are unknown. One of the simplest ways of furthering the understanding of speech comprehension is through the recording of neural responses using electroencephalography (EEG).
The EEG is a non-invasive neural imaging technique that measures electrical activity in the brain by placing electrodes on the scalp <cit.>. It has been demonstrated that the EEG signal recorded during a speech listening task contains information about the stimulus <cit.>. One can investigate how the brain comprehends continuous speech by developing models that relate the speech with the EEG signal using machine learning techniques <cit.>.
The early attempts explored linear models for relating continuous natural speech to EEG responses <cit.>. They can be categorized into three different types - forward models, backward models, or hybrid models. The forward models predict EEG from speech stimuli, while the backward models reconstruct speech from EEG responses. In many studies, the correlation between the predicted and ground truth signal is considered as a measure of neural tracking <cit.>. However, linear models may be ill-equipped to capture the non-linear nature of the auditory system. Recently, deep neural networks have been employed to compare and analyze speech stimuli and EEG responses. Several studies have shown promising results with deep learning models for EEG-speech decoding <cit.>.
These advancements in speech decoding from the brain will also be beneficial for the development of brain-computer interfaces (BCIs).
In many of the computational approaches, the speech envelope has been the most commonly used feature <cit.>. Other features such as spectrograms <cit.>, phonemes <cit.>, linguistic features <cit.>, and phono-tactics <cit.> have also been explored with linear forward/backward models. Lesenfants et al. <cit.> demonstrated that combining phonetic and spectrogram features improves the EEG-based speech reception threshold (SRT) prediction.
While forward/backward models and correlation tasks were previously explored, the match mismatch tasks have been recently investigated as an alternative task <cit.>. Here, the task is to identify whether a portion of the brain response (EEG) is related to the speech stimulus that evoked it. In the previous studies using the match mismatch task, the auditory stimulus and speech of a fixed duration (5s) are processed through a series of convolutional and recurrent layers <cit.>.
In this work, we argue that the prior works on speech-EEG match mismatch tasks are incomplete without considering the fragmented nature of speech comprehension. While speech and EEG signals are continuous, the neural tracking of speech signals is impacted by the linguistic markers of speech <cit.>. The most striking of this evidence comes from models of word surprisal <cit.> with N400 response evoked for unpredictable words <cit.>. In the simplest form, we hypothesize that the task of relating continuous speech with EEG must also include word-level segmentation information.
We propose a deep learning model to perform match mismatch classification tasks on variable length inputs using word boundary information.
The model consists of convolutive feature encoders of both the speech and EEG inputs. Further, the word segmentation information, obtained by force-aligning the speech with the text data using a speech recognition system, is incorporated in the feature outputs through a word-level pooling operation. The pooled representations are further modelled with recurrent long short-term memory (LSTM) layers to model the inter-word context. The final output from the LSTM network for the speech and EEG streams is used in the match mismatch classification task.
The major contributions of this paper are:
* Proposing a match mismatch classification model that can incorporate word boundary information.
* Proposing a loss function based on Manhattan distance for the match mismatch task.
* Experimental illustration of the effectiveness of the model, where the classification performance is significantly improved over the prior works.
* A detailed set of ablation experiments to elicit the impact of word boundary information in speech EEG matching task.
§ METHODS
§.§ Dataset
We experiment with a publicly available speech-EEG data set[https://doi.org/10.5061/dryad.070jc] released by Broderick et al. <cit.>. It contains electroencephalographic (EEG) data recorded from 19 subjects as they listened to the narrative speech. The subjects listened to a professional audio-book narration of a well-known work of fiction read by a single male speaker. The data consists of 20 trials of roughly the same length, with each trial containing 180s of audio. The trials preserved the chronology of the storyline without repetitions or breaks. The sentence start and end time, and the word-level segmentation of the speech recordings are provided. The word segmentation is obtained using a speech recognition-based aligner <cit.>. The EEG data were acquired using the 128-channel BioSemi system at a sampling rate of 512Hz, while the audio data was played at 16kHz. Overall, the speech-EEG data amounted to a duration of 19 hours.
§.§ EEG Preprocessing
The CNSP Workshop 2021 guidelines[https://cnspworkshop.net/resources.html] served as the basis for the EEG pre-processing pipeline. It is implemented using the EEGLAB software <cit.>. The EEG signal is band-pass filtered between 0.5-32 Hz. Then it is down-sampled to 64 Hz. After removing noisy channels (determined using the channel level statistics), the EEG channels are re-referenced to the mastoids. The data from each channel is also normalized by computing the z-score. The EEG pre-processing code and the code used for further analysis discussed in this paper are publicly available[https://github.com/iiscleap/EEGspeech-MatchMismatch].
§.§ Acoustic Feature - Mel Spectrogram
The mel spectrogram of the speech signal is used as the stimulus feature. The mel spectrogram is computed for each sentence. A mel filter bank with 28 filters distributed on the mel scale over the 0-8 kHz frequency range is used. The input audio is pre-emphasized with a factor of 0.97 before windowing. In order to obtain speech features at a sampling frequency of 64 Hz, the spectrogram computation uses a Hamming window of width 31.25 ms with half overlap.
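For illustration, a minimal Python sketch of this front-end is given below; it is not the authors' implementation, and the exact STFT settings (e.g. the FFT size) are assumptions consistent with the stated 16 kHz audio, 31.25 ms Hamming window, half overlap and 28 mel filters.

# Illustrative sketch of the mel spectrogram front-end (not the authors' code).
import librosa
import numpy as np

def sentence_mel_spectrogram(wav, sr=16000):
    # Pre-emphasis with a factor of 0.97 before windowing.
    wav = librosa.effects.preemphasis(wav, coef=0.97)
    # 31.25 ms Hamming window (500 samples) with 50% overlap (hop of 250 samples)
    # gives features at 64 Hz; 28 mel filters span 0-8 kHz.
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=512, win_length=500, hop_length=250,
        window="hamming", n_mels=28, fmin=0.0, fmax=8000.0)
    return mel                      # shape (28, T), matching the 28 x T stimulus feature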
§.§ Match-mismatch classification task
The accuracy of a match-mismatch classification task is employed in this study as a measure of the neural tracking of speech. fig:mmtask illustrates this paradigm in detail. The classification model is trained to relate the speech segment to its corresponding EEG response. In this study, the segment is chosen to be a sentence. We also compare with prior works <cit.>, which perform this task at the sentence level. The time-synchronized stimulus of the EEG response segment is the matched speech. Another sentence from the same trial of data collection is chosen as the mismatched speech. Selecting mismatched samples from the same trial makes the classification task challenging enough to encourage the model to learn the stimulus-response relationships. This sampling approach also avoids the chances of memorizing the speech features along with its label. The matched EEG response for these speech sentences is also included in the mini-batch training to ensure that memorisation is disallowed.
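A small illustrative sketch of this pairing strategy is given below; the helper is hypothetical (not taken from the paper) and assumes each trial provides at least two sentences.

# Build (EEG, matched speech, mismatched speech) triplets within one trial.
import random

def build_pairs(trial_sentences):
    """trial_sentences: list of (eeg_feats, speech_feats) tuples for one trial."""
    pairs = []
    for idx, (eeg, speech_match) in enumerate(trial_sentences):
        # Mismatched speech is another sentence from the same trial.
        other = random.choice([j for j in range(len(trial_sentences)) if j != idx])
        speech_mismatch = trial_sentences[other][1]
        pairs.append((eeg, speech_match, speech_mismatch))
    return pairs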
§.§ Model architecture
We employed different modelling paradigms to analyze the encoding of acoustic and semantic features in EEG signals.
§.§.§ Baseline Model
Recently, Monesi et al. <cit.> showed that convolutional neural network (CNN) and long short-term memory (LSTM) based architectures outperform linear models for modelling the relationship between EEG and speech. This work employed a match mismatch classification task on fixed duration windows of speech and their corresponding EEG data. The work also demonstrated that mel spectrogram features of the speech stimulus provide the best neural tracking performance compared to other representations like speech envelope, word embedding, voice activity and phoneme identity <cit.>. They have performed the match mismatch task of 5s duration segments with 90% overlap between successive frames. The prior works <cit.> use an angular distance between EEG and speech representations, average pooling over time, and a sigmoid operation. The model is trained with binary cross entropy loss <cit.>.
We use this approach as the baseline setup for the proposed framework.
§.§.§ Proposed match mismatch Model
The speech signal representation 𝐒 is the mel-spectrogram of dimension 28 × T, where T denotes the duration of a speech sentence at 64Hz. Similarly, the EEG data for the same sentence is denoted as 𝐄, and it is of dimension 128 × T.
Both the speech and the EEG features are processed through a parallel neural pipeline, as depicted in fig:speech_nw, without any weight sharing. This sub-network consists of a series of convolutional layers and LSTM layers.
The convolutional layers implement 1-D and 2-D convolutions with 1 × 8 and 16 × 9 kernel sizes, respectively. The 1-D and 2-D layers have 8 and 16 kernels, respectively. Further, the 2-D CNN layers also introduce a stride of (1,3) to further down-sample the feature maps.
The word boundary information available in the dataset is converted to the equivalent sampling rate (both EEG and audio representations at 64/3 Hz). The audio and EEG feature maps are average pooled at the word level using the word boundary information. As a result, for a given sentence, the EEG and speech branches generate vector representations sampled at the word level. An LSTM layer models the inter-word context from these representations. This layer is included in both the stimulus (speech) and response (EEG) pathways.
The last hidden state of the LSTM layer, of dimension 32, is used as the embedding for the stimulus/response, denoted as R_s/R_e respectively.
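The word-level pooling and inter-word context stages can be sketched as follows in PyTorch; this is only an illustration: the convolutional front-end is omitted and the feature dimensions are assumptions.

# Hedged sketch of word-boundary average pooling followed by an LSTM context layer.
import torch
import torch.nn as nn

def word_pool(feats, boundaries):
    """feats: (T', D) frame-level features; boundaries: list of (start, end) frame
    indices per word at the down-sampled rate. Returns (num_words, D)."""
    return torch.stack([feats[s:e].mean(dim=0) for s, e in boundaries])

class WordContext(nn.Module):
    def __init__(self, in_dim=16, hidden=32):       # sizes are assumptions
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)

    def forward(self, word_feats):                    # (num_words, D)
        _, (h_n, _) = self.lstm(word_feats.unsqueeze(0))
        return h_n[-1].squeeze(0)                     # last hidden state: 32-dim embedding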
We propose the Manhattan distance between the stimulus and response embeddings. The similarity score is computed as,
d(𝐄, 𝐒) = exp (- || R_e - R_s ||_1)
The similarity score for the matched pair (𝐄, 𝐒^+) and mismatched pair (𝐄, 𝐒^-) are computed. The model, with a dropout factor of 0.2, is trained using a binary cross-entropy loss, with [d(𝐄, 𝐒^+), d(𝐄, 𝐒^-)] mapped to [1, 0] targets.
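A minimal PyTorch sketch of this scoring and loss (not the authors' code) is:

import torch
import torch.nn.functional as F

def manhattan_similarity(r_a, r_b):
    # d = exp(-||R_a - R_b||_1), so the score lies in (0, 1].
    return torch.exp(-torch.sum(torch.abs(r_a - r_b), dim=-1))

def match_mismatch_loss(r_e, r_s_pos, r_s_neg):
    d_pos = manhattan_similarity(r_e, r_s_pos)        # target 1
    d_neg = manhattan_similarity(r_e, r_s_neg)        # target 0
    scores = torch.stack([d_pos, d_neg], dim=-1)
    targets = torch.stack([torch.ones_like(d_pos), torch.zeros_like(d_neg)], dim=-1)
    return F.binary_cross_entropy(scores, targets)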
§.§.§ Training and Evaluation Setup
The dataset contained recordings from 19 subjects. All the experiments reported in this work perform subject-independent evaluation (the subjects used in training are not part of the evaluation). Further, we report the average results of 3-fold validation, with classification accuracy as the metric.
The experiments are run with a batch size of 32. The models are trained using Adam optimizer with a learning rate of 0.001 and weight decay parameter of 0.0001. The models are learned with a binary cross-entropy loss.
§ RESULTS AND DISCUSSION
§.§ Baseline model on fixed duration segments.
The baseline implementation for comparison is the work reported in Monesi et al. <cit.>. This architecture is an LSTM model that operates on fixed-duration audio EEG data. All experiments are run for 20 epochs of training. The result of the model with fixed duration frames is given in tab:fixed. In order to increase the amount of training data, we also use 90% overlap between segments.
§.§ Baseline model at sentence level
The baseline model architecture is implemented for fixed-duration segments in training and testing. In order to operate at the sentence level, we have modified the dot product operation as element-wise multiplication followed by an average pooling. This score is passed through the sigmoid function, and the model is learned on sentence-level audio-EEG pairs. For the mismatch condition, a random speech spectrogram is paired with the EEG to generate the score. These results are reported in Table <ref>.
§.§ Proposed model with sentence level processing
The results with the proposed model are also reported in Table <ref>. We compare three different similarity scoring approaches, i) Angular (Cosine) similarity, ii) Negative L2 distance (Euclidean) and iii) proposed Manhattan similarity (Eq. <ref>). As seen in the results, the Euclidean and Manhattan similarity improves over the cosine similarity. The proposed EEG-speech match-mismatch classifier model reports an average accuracy of 93.97%, which is statistically significantly higher than the baseline model's sentence-level performance (Wilcoxon signed-rank test, p<1e-4). The epoch-wise accuracy for test fold-1 is also illustrated in fig:result_main.
§.§ Mismatch sample selection for sentence processing
Previous match-mismatch EEG-speech studies <cit.> dealt with fixed-duration speech and EEG segments. Cheveigne et al. <cit.> used an unrelated random segment as a mismatched sample, while studies like <cit.> employ a neighbouring segment as the mismatched sample.
The sampling of the mismatched segments from the same trial ensures that the distribution of the matched and mismatched segments is similar. We explore a similar strategy for sentence-level analysis by selecting the neighbouring sentence in the same trial as the mismatched sample.
Table <ref> shows how the mismatch selection strategy affects the classification accuracy. The average accuracy has a slight degradation when the next sentence is used as the mismatch sample.
§.§ Importance of accurate word boundaries
We conducted several ablation tests to understand the impact of the word boundary information. The model is fed with random word boundaries in the first set of experiments. Each sentence is assumed to contain a fixed number of words and their boundaries are chosen at random. The results are reported in Figure <ref>.
The accuracy improves gradually when the number of word boundaries is increased, even though they are random. The accuracy of the experiment using 8 words in a sentence is 64%, which is significantly lower than the model's performance with accurate boundary information (Wilcoxon signed-rank test, p<0.0001).
The final experiment shown in fig:result_randomW assumes a random number of words in each sentence with random boundaries, and it provided an accuracy of 60%.
In the second set of experiments, we provide accurate word boundary information but skip the word boundary information at every n-th word. These results are reported in Table <ref>. For example, Skip-3 in this table corresponds to removing the word boundary inputs at every 3-rd entry. The pooling is done with the rest of the available word boundaries for these experiments. As seen in Table <ref>, the results with a higher value of n (of skip-n experiments), approach the setting without any removal (accuracy of 93.97%). It is also noteworthy that, even with the Skip-2 setting (word boundary information available for every alternate word), the performance is 82.3%, significantly better than the baseline model.
This study also demonstrates that accurate word boundary information significantly impacts the match mismatch classification, which further illustrates that the EEG signal encodes the word level tracking of speech.
§ CONCLUSIONS
In this paper, we have attempted to validate the hypothesis that speech comprehension in the brain is segmented at the word-level in the EEG responses to continuous speech.
For this task, we developed a deep neural network model consisting of convolutional encoders, word-level aggregators and recurrent layers. A novel loss function for this task based on Manhattan similarity is also proposed.
The proposed model validated the hypothesis by improving the accuracy of match-mismatch classification of speech and EEG responses at the sentence level. The incorporation of word boundary information yields statistically significant improvements compared to the baseline model, demonstrating the importance of this information in the neural tracking of speech. Moreover, the proposed model handles variable length inputs. Overall, this model can have potential applications in various domains, including speech recognition, brain-computer interfaces, and cognitive neuroscience. Future research could explore this model's extension to incorporate multi-modal inputs in the form of textual data in addition to the speech spectrogram.
|
http://arxiv.org/abs/2307.03088v1 | 20230706160110 | Label-Synchronous Neural Transducer for End-to-End ASR | [
"Keqi Deng",
"Philip C. Woodland"
] | eess.AS | [
"eess.AS"
] |
Neural transducers provide a natural approach to streaming ASR. However, they augment output sequences with blank tokens which leads to challenges for domain adaptation using text data. This paper proposes a label-synchronous neural transducer (LS-Transducer), which extracts a label-level encoder representation before combining it with the prediction network output. Hence blank tokens are no longer needed and the prediction network can be
easily adapted using text data. An Auto-regressive Integrate-and-Fire (AIF) mechanism is proposed to generate the label-level encoder representation while retaining the streaming property. In addition, a streaming joint decoding method is designed to improve ASR accuracy. Experiments show that compared to standard neural transducers, the proposed LS-Transducer gave a 10% relative WER reduction (WERR) for intra-domain Librispeech-100h data, as well as 17% and 19% relative WERRs on cross-domain TED-LIUM 2 and AESRC2020 data with an adapted prediction network.
E2E ASR, neural transducer, domain adaptation
§ INTRODUCTION
End-to-end trainable (E2E) automatic speech recognition (ASR)
simplifies traditional hidden Markov model (HMM)-based methods and directly transcribes speech into text <cit.>. The neural transducer (NT) is a widely used E2E ASR structure with good streaming properties <cit.>
compared to the attention-based encoder-decoder (AED) approach. While the AED can also be applied to streaming ASR <cit.>, it requires learning accurate monotonic alignments and always incurs significant latency <cit.>.
When using a large amount of labelled training data, the neural transducer model has been reported
to outperform HMM-based methods on some public data <cit.>. However, it still suffers from domain shift <cit.>, and target-domain labelled data can not always be collected in quantity <cit.>.
Therefore, it is more efficient to adapt transducer models to unseen domains
using text-only
data, which is often easier to obtain
<cit.>.
Domain adaptation is more challenging for E2E ASR than for the HMM-based approach <cit.>, which uses a separate language model (LM) that can easily employ text-only data.
Although the prediction network of NT models is analogous to the LM in terms of structure <cit.>, it doesn't perform solely as an LM <cit.> as it needs to coordinate with the acoustic encoder to generate both blank and non-blank tokens <cit.>.
In fact,
the blank token plays a key role in the standard NT to augment output sequences since it allows the
frame-level encoder output to be combined with the label-level prediction network output <cit.>.
The motivation of this paper is to modify the NT model so that the blank token isn't required and hence make the NT model more
adaptable with text-only data while retaining low-latency streaming properties. This paper proposes a label-synchronous neural transducer (LS-Transducer), which extracts a label-level representation from the acoustic encoder output before combing it with the prediction network output, thus avoiding the need for the blank token to align them.
To generate this label-level encoder representation,
an Auto-regressive Integrate-and-Fire (AIF) mechanism is proposed,
which extends the Continuous Integrate-and-Fire (CIF) <cit.> approach but
is more efficient due to its parallel structure and increased robustness to inaccurate unit boundaries.
In addition, a streaming joint decoding method is designed to achieve better accuracy.
ASR experiments with models trained on the LibriSpeech-100h data set <cit.> show that
the proposed LS-Transducer gives significantly reduced WER over standard NT models for
both intra-domain and cross-domain scenarios.
The rest of this paper is organised as follows:
Section 2 introduces general related work and Section 3 reviews CIF, on which the AIF technique is based.
Section 4 describes the AIF and LS-Transducer methods proposed in this paper.
Section 5 details the experiments and Section 6 draws conclusions.
§ RELATED WORK
Several studies have explored the use of text-only data for E2E ASR domain adaptation.
One solution
is LM fusion that incorporates an external LM into E2E ASR <cit.>, often using shallow fusion <cit.>. However, the E2E ASR model implicitly learns an internal LM characterising the source domain training data <cit.>. To solve this issue, the internal LM of the E2E ASR can be estimated <cit.>.
For example, HAT <cit.> was proposed as an efficient way to estimate the internal LM by removing the effect of the encoder from the transducer network.
However, the internal LM estimation complicates the decoding process and accurate internal LM estimation is not always feasible due to domain mismatch <cit.>.
Recently the
factorised neural transducer <cit.> investigated fine-tuning the internal LM on target-domain text but can give rise to intra-domain performance degradation <cit.>. The use of Kullback-Leibler divergence regularisation can avoid this issue but limits the internal LM learning the target domain <cit.>. Another approach is to use Text-to-Speech (TTS) to synthesise speech from target-domain text which is then
used to fine-tune the transducer models <cit.>, but this method is computationally expensive and not flexible for fast adaptation <cit.>.
§ CONTINUOUS INTEGRATE-AND-FIRE (CIF)
Before introducing the LS-Transducer, we review CIF <cit.> as background knowledge, since the essential AIF mechanism in the LS-Transducer is an extension of CIF.
The aim of the CIF technique is to
estimate a monotonic alignment for streaming ASR. As shown in Fig. <ref>,
CIF first learns a weight α_t for each frame of encoder output E_t.
This weight α_t can be obtained by a Sigmoid function, after mapping the encoder output E_t to a one-dimensional scalar using convolutional or fully-connected layers <cit.> or even directly using a particular element of E_t <cit.>.
The weights are then accumulated
across time and used to integrate the current label acoustic representation via a weighted sum. This continues until the accumulated weight exceeds a threshold of 1.0,
at which point the current weight α_t is split into two parts: one part makes the accumulated weight for the current label exactly 1.0, and the remainder is used for the integration of the next label.
The CIF process then “fires" the integrated acoustic representation c_j that corresponds to the label y_j and resets the accumulation.
The CIF process is shown in Fig. <ref>, where the predicted weights (α_1, ⋯, α_T) could be, e.g., (0.2, 0.9, 0.2, 0.3, 0.6, 0.1 ⋯). Then, α_2=0.9 is split into 0.8 and 0.1, so that the representation c_1=0.2E_1+0.8E_2 can be emitted.
A similar situation arises for α_5=0.6, which is split into 0.4 and 0.2, so that c_2=0.1E_2+0.2E_3+0.3E_4+0.4E_5.
Subsequent calculations of c_3, c_4, etc. proceed similarly until the end of the encoder output is reached.
During training, to force the representations 𝐂=(c_1, ⋯, c_L) to have the same length L as the target sequence, a scaling strategy is employed: α̂_t=α_t · (L/∑_i=1^Tα_i)
where α̂_t is used instead of α_t to extract 𝐂. In addition, a quantity loss ℒ_ qua is used to supervise CIF to extract a number of integrated representations close to the target length L: ℒ_ qua=|∑_i=1^Tα_i - L|, since the number of label representations generated is found by the accumulation ∑_i=1^Tα_i during decoding.
Note that CIF doesn't always locate the real acoustic boundaries and accurately predict the text sequence length <cit.>, especially when using units like BPE in English E2E ASR tasks.
Since the scaling strategy is used during training, a mismatch exists between training and decoding. Furthermore, CIF is a serial method <cit.>, which can reduce training efficiency.
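For concreteness, a NumPy sketch of the CIF firing rule described above is given below; it is illustrative only, assumes each per-frame weight is at most 1.0, and omits the scaling strategy and quantity loss used during training.

import numpy as np

def cif(encoder_out, alphas, threshold=1.0):
    """encoder_out: (T, D); alphas: (T,) per-frame weights. Returns (L, D) label reps."""
    fired, acc = [], 0.0
    frame = np.zeros(encoder_out.shape[1])
    for e_t, a_t in zip(encoder_out, alphas):
        if acc + a_t < threshold:
            acc += a_t
            frame = frame + a_t * e_t
        else:
            part = threshold - acc              # portion that closes the current label
            fired.append(frame + part * e_t)    # e.g. c_1 = 0.2*E_1 + 0.8*E_2
            acc = a_t - part                    # remainder starts the next label
            frame = acc * e_t
    return np.stack(fired) if fired else np.zeros((0, encoder_out.shape[1]))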
§ LABEL-SYNCHRONOUS NEURAL TRANSDUCER
This paper proposes a label-synchronous neural transducer (LS-Transducer), which is illustrated in Fig. <ref>. The LS-Transducer uses the proposed AIF mechanism to generate a label-level encoder representation before combining it with the prediction network output.
To facilitate the adaptation of the prediction network with text-only data, the LS-Transducer combines the logits computed from the prediction network and label-level encoder representations, rather than frame-level hidden features, in an additive manner.
During training, the LS-Transducer uses the cross-entropy (CE) loss ℒ_ ce between the target text and logits output by the joint network, as shown in Fig. <ref>. In addition, the CTC <cit.> loss ℒ_ ctc is also used by the encoder to help the model converge.
§.§ Auto-regressive Integrate-and-Fire (AIF)
This paper proposes the AIF mechanism to generate label-level representations 𝐂=(c_1, ⋯, c_L) from the acoustic encoder output 𝐄=(E_1, ⋯, E_T) as in Fig. <ref>. AIF extends CIF and
also uses accumulated weights α_t to locate boundaries and thus decide when to fire a label-level representation c_j.
The difference is that when extracting the c_j, AIF uses dot-product attention instead of the weights α_t and takes the prediction network intermediate output as the query.
AIF generates the c_j in an auto-regressive fashion, which has many advantages over conventional CIF.
First, AIF no longer needs to split the weight α_t at the boundary as mentioned in Sec. <ref>, so it can be implemented in parallel by masking certain attention weights. Second, AIF does not need to employ the scaling strategy to enforce the extracted 𝐂 to have the same length as the target, as the length of 𝐂 is decided by the number of queries,
so there is no mismatch
between training and decoding. Third, although the boundaries found using the accumulated weights α_t are not always accurate,
as shown in the dashed box of Fig. <ref>,
AIF addresses this problem by taking the first frame as the left boundary when extracting the c_j.
To be more specific, inspired by <cit.>, this paper employs a simple method to obtain
the weight α_t by applying the Sigmoid function to
the last element of each encoder output frame E_t [AIF is not limited to this simple method of generating α_t, other methods including convolutional or fully-connected layers could be used.]. Other elements of the encoder output are used to extract the label-level representation 𝐂.
The first step decides when to fire the label-level representation c_j, where j∈ (1, L). AIF achieves this by accumulating the weights α_t from left to right until the accumulated weight exceeds j; this time step is recorded as T_j+1.
If j is not reached even after all T frames have been read, then T_j=T.
Second, E_1:T_j,1:d-1 is used as the keys and values, and
the c_j can be extracted via a dot-product attention operation as follows:
c_j = Softmax(d_j^ inter· FC(E_1:T_j,1:d-1)^⊤)· FC(E_1:T_j,1:d-1)
where query d_j^ inter is the prediction network intermediate output,
d is the encoder output dimension, and FC denotes
a fully connected layer to map the E_t,1:d-1 to the same dimension as d_j^ inter. This process is carried out incrementally until the last c_L is generated.
In the example given in Fig. <ref>, the accumulated weight α_t exceeds 1.0 at the 5-th time step (i.e. ∑_i=1^5α_i>1 and ∑_i=1^4α_i≤1), so the 𝐄_1:4,1:d-1 are used as the keys and values to extract c_1 with d_1^ inter as the query; similarly, accumulated weight α_t exceeds 2 at the 11-th time step, so the 𝐄_1:10,1:d-1 are the keys and values for c_2 with query d_2^ inter.
Subsequent extraction for c_3, c_4, etc. are similar.
AIF also uses the quantity loss ℒ_ qua=|∑_i=1^Tα_i - L| to encourage the model to locate accurate boundaries. Hence, the overall training objective ℒ_ all of the LS-Transducer is:
ℒ_ all=γℒ_ ctc+(1-γ) ℒ_ ce+μℒ_ qua· L
where L is target length, and γ and μ are hyper-parameters.
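A hedged PyTorch sketch of the AIF extraction step is given below; tensor shapes and the projection layer are assumptions based on the description above, and the attention follows the equation with the prediction-network intermediate output as the query.

import torch

def aif_extract(enc_out, queries, fc):
    """enc_out: (T, d) encoder output whose last element per frame yields the weight;
    queries: (L, d_q) prediction-network intermediate outputs; fc: e.g. nn.Linear(d-1, d_q)."""
    alphas = torch.sigmoid(enc_out[:, -1])          # per-frame weights
    cum = torch.cumsum(alphas, dim=0)
    keys = fc(enc_out[:, :-1])                      # (T, d_q), used as keys and values
    reps = []
    for j, q in enumerate(queries, start=1):
        over = torch.nonzero(cum > j).flatten()
        t_j = int(over[0]) if len(over) > 0 else len(cum)   # frames 1..T_j (left boundary is frame 1)
        attn = torch.softmax(q @ keys[:t_j].T, dim=-1)
        reps.append(attn @ keys[:t_j])
    return torch.stack(reps)                        # (L, d_q) label-level representations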
§.§ Streaming joint decoding
Since the LS-Transducer uses the CTC branch to help model convergence,
the CTC score is also used to refine search space and eliminate irrelevant alignments without increasing latency. Hence this paper proposes a method to extend the CTC/attention joint decoding <cit.> method to the streaming decoding scenario for the LS-Transducer.
Suppose g is a partial hypothesis, q is a token appended to g, and h=g· q is a new hypothesis. In standard CTC/attention
joint decoding <cit.>,
the CTC prefix scores S_ ctc are computed as:
p_ ctc(h,⋯|E)=∑_ν∈( 𝒰∪[ eos])p_ ctc(h·ν|E)
S_ ctc(h, E)= log(p_ ctc(h,⋯|E))
where ν denotes all possible non-empty tokens (U denotes normal tokens) and p_ ctc is the CTC sequence probability <cit.>. However, if q is end-of-sentence ([ eos]), the CTC score is computed as:
S_ ctc(h, E)= log(γ_T^(n)(g)+γ_T^(b)(g))
where γ_T^(n)(g) and γ_T^(b)(g) are the forward probabilities <cit.> of the g over T frames, with CTC paths ending with
a non-blank or blank label, respectively.
These processes depend on whole encoder output E with T frames, hampering streaming decoding.
To achieve streaming joint decoding, inspired by <cit.>,
this paper uses
S_ ctc(h, E_1:T_q) to approximate S_ ctc(h, E), where T_q is the maximum number of encoder output frames can be accessed when predicting token q, which is
decided by the accumulated weights α_t of the
proposed AIF. However, when the corresponding CTC spike of token q does not appear during
E_1:T_q, preliminary experiments showed this could greatly degrade the performance because the CTC score S_ ctc(h, E_1:T_q) would be very likely to predict [ eos].
Previous work alleviated this problem by waiting until the corresponding CTC spike appeared before starting decoding <cit.> or switching to decoding the next block of speech when predicting the [ eos] label <cit.>. However, these methods are not feasible for the proposed LS-Transducer.
To address this problem, a streaming joint decoding method is proposed that modifies the computation of the CTC prefix scores for [ eos], which is shown as follows where h=g· [ eos]:
S_ ctc(h, E_1:T_q)=
log(p_ ctc(h,⋯|E_1:T_q)), T_q < T
log(γ_T_q^(n)(g)+γ_T_q^(b)(g)), T_q = T
This means that if the speech has not been fully read (i.e. T_q < T), h won't be considered as complete and
the score for [ eos] will be extremely small because CTC never sees the [ eos] label during training.
This makes sense because the CTC prefix score should only consider ending prediction after loading the whole speech.
The process is shown in Algorithm 1 which modifies the condition in line 2 compared with the standard CTC prefix score calculation <cit.>. z_t and p(z_t=q|E_1:T_q) are the frame-level label and probability.
[ sos] denotes start-of-sentence.
Other details of the CTC prefix score follow <cit.>.
During streaming joint decoding, the score S_ lst assigned by the LS-Transducer is computed based on the predicted probability p_ lst and follows the chain rule, where p_ lst is obtained by applying a Softmax to the
final logits output by the joint network as shown in Fig. <ref>.
S_ lst(h, E_1:T_q)=∑_i=1^n log(p_ lst(h_i|h_1, ⋯, h_i-1, E_1:T_i))
where n is the length of hypothesis h=g· q and T_i is the corresponding right boundary of the i-th label as determined by the proposed AIF.
The overall score S is computed as:
S(h, E_1:T_q) = β S_ ctc(h, E_1:T_q)+(1-β)S_ lst(h, E_1:T_q)
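An illustrative sketch of the modified [ eos] scoring and the overall score combination described above is shown below; the CTC prefix-score recursion itself follows the cited algorithm and is omitted, and the helper names are hypothetical.

import math

def ctc_eos_score(prefix_prob_eos, gamma_n, gamma_b, t_q, t_total):
    if t_q < t_total:
        # Speech not fully read: the hypothesis is not treated as complete, and the
        # [eos] prefix probability is essentially zero since CTC never emits [eos].
        return math.log(max(prefix_prob_eos, 1e-300))
    return math.log(gamma_n + gamma_b)              # whole utterance read: standard ending score

def joint_score(s_ctc, s_lst, beta=0.3):
    return beta * s_ctc + (1.0 - beta) * s_lst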
§ EXPERIMENTS
§.§ Corpus
ASR transducer models were trained on the “train-clean-100” subset of Librispeech <cit.>, a read audiobook corpus, and its dev/test sets (i.e. “test/dev-clean/other”) were used for intra-domain evaluation.
The training set transcripts and Librispeech LM training text were used as source-domain text data.
In order to show the effectiveness of the LS-Transducer on domain adaptation, two out-of-domain corpora were employed.
The first was the TED-LIUM2 <cit.> dev/test sets, which is spontaneous lecture-style data.
The training set transcripts and TED-LIUM2 LM training text were used as the
target-domain adaptation text data.
The second was AESRC2020 <cit.> dev/test sets, which include human-computer interaction speech commands. The target-domain text data was its training set transcriptions.
§.§ Model descriptions
All models implemented are based on ESPnet <cit.> toolkit. Experiments used the raw speech data as input and 1000 modelling units as text output, including 997 BPE units and 3 non-verbal symbols.
Three standard Transformer transducer (T-T) <cit.> models were built with streaming Wav2vec2.0 encoders and different prediction networks
and compared to the proposed LS-Transducer.
The T-T with an
embedding layer as the prediction network is denoted as a Stateless T-T (319M parameters); the T-T with a 6-layer 1024-dimensional LSTM prediction network is denoted as LSTM T-T (370M parameters); and the T-T with a 6-layer unidirectional Transformer prediction network (1024 attention dimension, 2048 feed-forward dimension, and 8 heads) is denoted as Transformer T-T (371M parameters). All three T-T baseline models used the
Wav2vec2.0 encoder <cit.> (i.e. "w2v_large_lv_fsh_swbd_cv").
A chunk-based mask <cit.> was implemented to achieve a streaming Wav2vec2.0 encoder during training, with a 320 ms average latency.
The proposed LS-Transducer (373M parameters) had the same encoder as the three standard T-T baseline models and had a unidirectional Transformer prediction network
that was the same as the Transformer T-T. The 3-rd layer output of the prediction network was used as the intermediate output for AIF.
The FCs in Fig. <ref> mapped dimensions from 1024 to 1000.
In the Librispeech-100h data, the average number of frames corresponding to each unit is approximately 11, or 220 ms, which is less than 320 ms, so theoretically the AIF in the LS-Transducer did not introduce any additional latency.
In Eq. <ref>, γ and μ were set to 0.5 and 0.05, respectively. The three standard T-T models also used the CTC branch with 0.3 weight to help training.
In Eq. <ref>, β was set to 0.3 except for TED-LIUM2 which was set to 0.4. A Transformer-based AED offline model (394M parameters) was also built as a topline model, which uses the same streaming Wav2vec2.0 encoder but was trained in an offline manner and decoded via offline CTC/attention joint decoding.
A source-domain 6-layer Transformer LM was trained on the source-domain text data
for 25 epochs and fine-tuned on the target-domain text for an extra 15 epochs to obtain the target-domain LM.
The source-domain LM was used to initialise the prediction network of the LS-Transducer but not for the three standard T-T models as this didn't improve performance <cit.>. ASR models were trained for 40 epochs.
When adapting the LS-Transducer prediction network, the first 3 layers were fixed, and the rest were fine-tuned on the adaptation text data with 50 and 20 epochs for AESRC2020 and TED-LIUM2 data.
Shallow fusion <cit.> was implemented with a 0.2 weight if using the target-domain LM for domain adaptation. The beam size was 10 during decoding.
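A minimal sketch of this adaptation recipe is shown below; it assumes the prediction network exposes its Transformer blocks as a module list named layers, which is a naming assumption.

def freeze_lower_layers(prediction_network, num_frozen=3):
    # Freeze the first num_frozen layers; only the remaining layers are fine-tuned
    # on the target-domain text.
    for idx, layer in enumerate(prediction_network.layers):
        requires_grad = idx >= num_frozen
        for p in layer.parameters():
            p.requires_grad = requires_grad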
§.§ Experimental results
Experiments compared the proposed LS-Transducer with the standard T-T models for both intra-domain and cross-domain scenarios. Ablation studies were conducted in order to verify the effectiveness of the AIF and prediction network initialisation.
Some related methods were also implemented and experimentally compared to the LS-Transducer.
§.§.§ Intra-domain ASR
Table <ref> lists intra-domain ASR results, in which
our models achieved promising results on the Librispeech-100h benchmark and
the Transformer T-T achieved the best results among the three standard T-T models, indicating that the prediction network with a strong Transformer structure was still effective in further improving ASR performance.
In addition, the proposed LS-Transducer still clearly outperformed the strong standard Transformer T-T model with 10.2% relative WER reduction (WERR). Furthermore, the LS-Transducer even performed virtually as well as the offline AED topline model,
which demonstrates the advantages of the LS-Transducer, including that the prediction network can be flexibly initialised with the LM and that the AIF is robust to inaccurate boundaries.
§.§.§ Cross-domain ASR
Experiments were conducted to compare cross-domain ASR performance on the TED-LIUM 2 and AESRC2020 corpora. As shown in Table <ref>, the proposed LS-Transducer gave the best performance on both cross-domain corpora. After adapting the prediction network on the target-domain text data, further improvements could be obtained which surpassed the best result achieved by the three standard T-T models, with 17.0% and 19.0% relative WERR on TED-LIUM 2 and AESRC2020, respectively.
Even when the standard T-T models used external target-domain LM to improve cross-domain performance through shallow fusion <cit.>, there was still a performance gap of around
10% relative WERR compared to the proposed LS-Transducer with the prediction network adapted. In addition, the LS-Transducer could also use the external target-domain LM via shallow fusion to further improve the cross-domain performance.
Therefore, it can be concluded that the proposed LS-Transducer surpassed the standard T-T models in the source domain and is also very effective and flexible for domain adaptation. Since the LS-Transducer no longer predicts the blank label, the prediction network performs like a standard LM.
§.§.§ Ablation studies
Ablation studies were conducted
to evaluate the effectiveness of the proposed AIF mechanism. As shown in Table <ref>, the proposed AIF greatly outperformed CIF <cit.> and played an essential role in allowing the LS-Transducer to outperform the strong Transformer T-T model.
This is because the proposed AIF mechanism has several advantages over CIF that improve the WER.
These include the absence of a mismatch between training and decoding and greater robustness to inaccurate acoustic boundaries.
In addition, considering the prediction network of the LS-Transducer was initialised by a source-domain LM, further ablation studies were conducted to evaluate the effect of initialising the prediction network of the standard Transformer T-T model. As shown in Table <ref>, pre-training the prediction network of Transformer T-T did not improve performance but led to degradation, which is consistent with the conclusion in <cit.>. Therefore, the proposed LS-Transducer provides a natural way to use pre-trained LM in E2E ASR, which has been actively studied <cit.>.
§.§.§ Comparison with related work
As a further point of comparison, we implemented factorised T-T <cit.> with a stateless blank predictor and HAT <cit.>
based on the Transformer T-T baseline to compare with our LS-Transducer. Table <ref> shows that HAT (371M parameters) and factorised T-T (372M parameters) slightly
degraded intra-domain performance compared to the standard T-T (371M parameters) but improved cross-domain performance. However,
the proposed LS-Transducer (373M parameters) still clearly outperformed HAT and factorised T-T in both intra and cross-domain scenarios with WERRs between 8.1% and 15.4%.
The WER improvement brought by the proposed LS-Transducer
over HAT and factorised T-T is statistically significant at the 0.001 level according to the matched-pair sentence-segment word error statistical test <cit.>.
§ CONCLUSIONS
This paper proposes a label-synchronous neural transducer (LS-Transducer), which does not require the prediction of blank tokens
and is thus easy to adapt the prediction network on text data. An Auto-regressive Integrate-and-Fire (AIF) mechanism was designed that generates a label-level encoder representation which was then combined with prediction network outputs while still allowing streaming. In addition, a streaming joint decoding method was proposed to refine the search space during beam search. Experiments show that the proposed LS-Transducer is very effective and flexible in terms of domain adaptation, and clearly outperformed the standard Transformer-Transducer (T-T) models in both intra-domain and cross-domain scenarios with up to 19.0% relative WER reduction. Furthermore, the LS-Transducer
has a relative WER decrease between 8.1% and 15.4% compared with factorised T-T and HAT.
|
http://arxiv.org/abs/2307.01815v2 | 20230704164112 | On perfect powers that are sums of cubes of a nine term arithmetic progression | [
"Nirvana Coppola",
"Mar Curcó-Iranzo",
"Maleeha Khawaja",
"Vandita Patel",
"Özge Ülkem"
] | math.NT | [
"math.NT",
"Primary 11D61, Secondary 11D41, 11D59, 11J86, 14H52"
] |
On perfect powers that are sums of cubes of a nine term arithmetic progression
Vrije Universiteit Amsterdam, de Boelelaan 1111, Room 9A94, 1081 HV, Amsterdam, The Netherlands
[email protected]
Hans-Freudental Gebouw, Utrecht University, Budapestlaan 6, Room 5.03, 3584 CD Utrecht, The Netherlands
[email protected]
School of Mathematics and Statistics, University of Sheffield, Hounsfield Road, Sheffield S3 7RH, United Kingdom
[email protected]
School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, United Kingdom
[email protected]
Galatasaray University, Çırağan Cd. No:36, Istanbul, Turkey
[email protected]
2010 Mathematics Subject Classification: Primary 11D61; Secondary 11D41, 11D59, 11J86.
We prove that the only integral solutions to the equation
(x-4r)^3 + (x-3r)^3 + (x-2r)^3+(x-r)^3 + x^3 + (x+r)^3+(x+2r)^3 + (x+3r)^3 + (x+4r)^3 = y^p
satisfy the condition xy=0 if p≥ 5 is a prime. We also show that there are infinitely many solutions for p=2 and p=3. This is a natural continuation of previous work carried out by A. Argáez-García and the fourth author. We use an amalgamation of existing methods to overcome the increased computational challenge. Most notable is a significant computational efficiency obtained through appealing to Bilu, Hanrot and Voutier's Primitive Divisor Theorem and the method of Chabauty, as well as employing a Thue equation solver earlier on.
§ INTRODUCTION
Solving Diophantine equations has always been the order of the day amongst number theorists.
In this paper, we explore Diophantine equations that arise from arithmetic progressions. These types of equations have been previously studied by many;
see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>,
<cit.>, <cit.>,
<cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. We refer the reader to the state-of-the-art survey <cit.> for a comprehensive overview of these works.
Specifically, we determine perfect powers that can be written as the sum of cubes of nine consecutive terms in a bounded arithmetic progression.
This is a natural continuation of previous work carried out by A. Argáez-García and the fourth author (see <cit.>, <cit.> and <cit.>).
In particular, we prove the following theorem:
Let p be a prime. The equation
(x-4r)^3 + (x-3r)^3 + (x-2r)^3+(x-r)^3 + x^3 + (x+r)^3+(x+2r)^3 + (x+3r)^3 + (x+4r)^3 = y^p
with x, r, y, p ∈, (x, r) = 1 and 0 < r ≤ 10^6 only has integer solutions which satisfy xy = 0, if p ≥ 5, and it has infinitely many non-trivial solutions if p ∈{2,3}.
Our proof is inspired by <cit.>, <cit.> and <cit.>. The proof of Theorem <ref>
uses a battery of different techniques from the realm of Diophantine equations. Our careful fusion of various techniques allows us to overcome the increased computational challenge incurred when allowing for more terms in the arithmetic progression, thus enabling the proof of
Theorem <ref>.
Indeed, the novelties of this paper include a significant computational efficiency obtained through appealing to
Bilu, Hanrot and Voutier’s Primitive Divisor Theorem and the method of Chabauty, as well as
employing a Thue equation solver earlier on. Section <ref> reports the significant computational savings obtained.
Before we dive into the details, we give an overview of the structure of the proof. In Section <ref>, we consider the small prime exponents 2 and 3, and prove the infinitude of solutions.
Subsequently, we may assume that the exponent is greater than or equal to 5. In Section <ref>, we apply a descent argument to equation (<ref>), resulting in 12 distinct ternary equations that require resolving.
Thus, in order to prove Theorem (<ref>), we need to show that any solution arising from any descent case corresponds to a trivial solution to (<ref>) (i.e. xy=0).
We then apply a theorem of Mignotte <cit.>, <cit.> which is based on the theory of linear forms in logarithms to bound the exponent p.
This is an integral step since it reduces the proof of Theorem <ref> to the resolution of a finite number of equations in one less variable.
However, we end up obtaining roughly twenty billion equations to solve, in the unknown variables x and y, which makes the implementation non-effective.
In Section <ref>, we significantly reduce the number of equations that need to be resolved using the aforementioned work of the fourth author <cit.> (which builds upon the Primitive Divisor Theorem <cit.>, see Table <ref>).
To ease the computational burden, in Section <ref> we
treat the prime exponents 5 and 7 separately where possible. We employ the method of Chabauty (see <cit.>, <cit.> and <cit.>), in combination with the computationally efficient test presented in <cit.>, as well as making use of <cit.> inbuilt Thue solver, which is based on an algorithm of Bilu and Hanrot <cit.>, and Tzanakis and de Weger <cit.>.
To further our quest for elimination, we then apply the “empty set" criterion, which is based on work of Sophie Germain
(see <cit.> or <cit.> for the original statement).
We try to eliminate the remaining equations by performing a further descent over number fields and applying local solubility tests, all of which are detailed in Section <ref>.
The implementation of these tests in <cit.> points us towards
descent equations, with specified prime exponent p,
that potentially admit non-trivial solutions. We solve the last remaining equations with the inbuilt Thue solver mentioned above.
We refer the keen and interested reader to <cit.> for a comprehensive overview of existing results in this area as well as an overview of contemporary and classical techniques used in the resolution of Diophantine equations.
§ ACKNOWLEDGEMENTS
This project stemmed from the Women in Numbers Europe 4 workshop, which took place in August 2022 at Utrecht University. The authors are immensely appreciative towards the organisers: Ramla Abdellatif,
Valentijn Karemaker,
Ariane Mézard and Nirvana Coppola for hosting such an inspiring and productive workshop, and for all of their time committed towards such a noble
endeavour.
N. Coppola is supported by the NWO Vidi grant No. 639.032.613, New Diophantine Directions.
M. Khawaja is supported by an EPSRC studentship from the University of Sheffield (EPSRC grant no. EP/T517835/1).
Ö. Ülkem is supported by TÜBITAK project no. 119F405.
§ SMALL PRIME EXPONENTS
In this section, we treat equation (<ref>) for p ∈{2,3}, thus proving the following result.
Let p ∈{2,3}.
The equation
(x-4r)^3 + (x-3r)^3 + (x-2r)^3+(x-r)^3 + x^3 + (x+r)^3+(x+2r)^3 + (x+3r)^3 + (x+4r)^3 = y^p
with x, r, y ∈, (x, r) = 1 and r >0 has infinitely many integer solutions.
§.§ The exponent 2.
If p=2 in equation (<ref>), we obtain an elliptic curve, namely 9x(x^2 + 20r^2) = y^2. The change of variables y = 3Y and x=X gives rise to the following integral Weierstrass model:
E_r : Y^2 = X^3 + 20 r^2 X.
The curious reader can find a detailed overview of the elliptic curves theory used in this section in <cit.>.
Integral points on E_r correspond to integral solutions to equation (<ref>) when p=2.
For a fixed value of r, Siegel's Theorem <cit.> tells us that E_r has finitely many integer points and with the help of computer software, we may be able to determine all integral points on E_r for specified positive integer r. As we vary r, we obtain infinitely many integral points, hence infinitely many integral solutions to equation (<ref>).
We demonstrate this explicitly by constructing parametric families of solutions to equation (<ref>) when p=2.
We first ascertain that integral solutions to
equation (<ref>) do not arise from torsion points on E_r.
Let r be a non-zero integer.
The curve E_r: Y^2 = X^3 + 20 r^2 X
has torsion subgroup E_r()_tor = {(0,0), ∞}.
The polynomial X^3 + 20 r^2 X has two irreducible factors, namely X and X^2+20r^2. Since X^2+20r^2 has no rational roots for any positive integer r,
ℤ/2ℤ×ℤ/2ℤ⊈E()_tor but ℤ/2ℤ⊆ E()_tor. Mazur's classification theorem <cit.> gives the following possible options for E()_tor:
ℤ/2ℤ, ℤ/4ℤ, ℤ/6ℤ, ℤ/8ℤ, ℤ/10ℤ and ℤ/12ℤ.
To narrow down the possibilities further, we first investigate the feasibility of 3-torsion. The 3-torsion points of E_r are given by solutions of the 3-division polynomial,
ψ_3(X) = 3X^4+6· 20 r^2X^2-20^2r^4;
hence, the x-coordinates of the 3-torsion points satisfy
x^2 = -6· 20 r^2± 4 · 20 r^2√(3)/6.
Therefore, x cannot be rational and so we have no 3-torsion.
Our list has now reduced to
ℤ/2ℤ, ℤ/4ℤ, ℤ/8ℤ and ℤ/10ℤ.
To refine the torsion group further, we must analyse
the 4-th and 5-th division polynomials. Using <cit.>, and arguments similar to above, we find that there is no rational 4 or 5-torsion.
This leaves
E()_tor = ℤ/2ℤ
as our only possibility. In particular, E()_tor = {(0,0), ∞}.
In particular, as a Corollary to Theorem <ref>, we note that the torsion points found on E_r do not give rise to a non-trivial integral solution to (<ref>) when p=2. However, we can still give an explicit construction of an infinite family of integral solutions to equation (<ref>) when p=2 using the curves E_r.
Let A and B be non-zero integers
and take r=2AB(A^2+5B^2).
Then, the elliptic curve E_r as defined in <ref> has positive rank.
Let A, B
∈∖{0}.
Define
X=(A^2-5B^2)^2,
Y= (A^2-5B^2) ( (A^2+5B^2)^2 + 20A^2B^2),
which gives an integral point (X,Y) on E_r.
Thus, given non-zero integers A and B, and setting r=2AB(A^2+5B^2)∈∖{0},
we have explicitly
constructed
an integer point (X,Y) different from (0,0) on E_r.
Given Theorem <ref>, this must be a point of infinite order on E_r, and thus E_r has strictly positive rank.
The following
corollary addresses our question of integral solutions to equation (<ref>) for
p=2.
Let A, B be non-zero positive integers.
Take r=2AB(A^2+5B^2).
Then equation (<ref>) has a non-trivial integral solution (x,y) when p=2.
Proposition <ref> gives an explicit (non-torsion hence non-trivial) integer point on the curve E_r.
If we take
x=(A^2-5B^2)^2,
y= 3(A^2-5B^2) ( (A^2+5B^2)^2 + 20A^2B^2),
we obtain a non-trivial integral solution to equation (<ref>).
For completeness, we
note that we can pull up an infinite parametric family of solutions to equation (<ref>) when p=2 via a more elementary approach. For M and N positive integers, we let
x = 20(MN)^2, r = M^4-5N^4, y = 60MN(M^4+5N^4),
to obtain non-trivial integral solutions to
equation (<ref>) for p=2.
Moreover, note that this parametric family of solutions then gives rise to integer points on E_r for r = M^4 - 5N^4 where M and N are non-zero integers. Again, via Theorem <ref>, we may deduce that any member of the family of elliptic curves E_r for r=M^4 - 5N^4 have positive rank.
Let M and N be strictly positive integers. Let r= M^4 - 5N^4. Take
X = 20(MN)^2, Y = 20MN(M^4+5N^4).
Then (X,Y) is an integral point on E_r.
We immediately notice that the two parametric families of solutions constructed have zero intersection. Suppose P=(x_P, y_P)∈ E_r(). For the family of points constructed in Corollary <ref>, x_P is a square in . Conversely, x_P is never a square in for the family of points constructed in Corollary <ref>.
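Both parametric families are easy to sanity-check numerically; the following short Python sketch (an illustration only, not part of the computations carried out in this paper) verifies that each family satisfies 9x(x^2+20r^2)=y^2:

def nine_cubes(x, r):
    # left-hand side of the original equation, which simplifies to 9x(x^2 + 20r^2)
    return sum((x + k * r) ** 3 for k in range(-4, 5))

# Family from the corollary above: r = 2AB(A^2+5B^2), x = (A^2-5B^2)^2,
# y = 3(A^2-5B^2)((A^2+5B^2)^2 + 20A^2B^2)
for A in range(1, 25):
    for B in range(1, 25):
        r = 2 * A * B * (A * A + 5 * B * B)
        x = (A * A - 5 * B * B) ** 2
        y = 3 * (A * A - 5 * B * B) * ((A * A + 5 * B * B) ** 2 + 20 * A * A * B * B)
        assert nine_cubes(x, r) == y * y

# Elementary family: x = 20(MN)^2, r = M^4 - 5N^4, y = 60MN(M^4 + 5N^4)
for M in range(1, 25):
    for N in range(1, 25):
        x, r, y = 20 * (M * N) ** 2, M ** 4 - 5 * N ** 4, 60 * M * N * (M ** 4 + 5 * N ** 4)
        assert nine_cubes(x, r) == y * y

print("both parametric families verified")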
§.§ The exponent 3.
We now turn to the exponent p=3 case, which also yields a genus 1 curve, namely,
C: 9x(x^2 + 20r^2)=y^3.
As this is a smooth plane cubic with a rational point, it is an elliptic curve. More precisely, let
P=(x,y,r)=(0,0,1) ∈ C(ℚ). Then, using <cit.>, we find that C is isomorphic to an elliptic curve E via the isomorphism
ϕ : C → E,
ϕ (x,y,r)=(5y/x,-150r/x),
where E is the elliptic curve with Weierstrass equation
E: Y^2 = X^3 - 1125.
Observe that the curve E is independent of r, and E() ≅ ℤ, with Mordell–Weil basis given by (45, 300), which corresponds to (x,y,r)=(1,9,-2). We deduce that C has infinitely many integer points (x,y,r).
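As a quick numerical illustration (not part of the proof), the stated generator and the correspondence between C and E can be checked directly:

# (x, y, r) = (1, 9, -2) lies on C : 9x(x^2 + 20r^2) = y^3, and its image
# (5y/x, -150r/x) = (45, 300) lies on E : Y^2 = X^3 - 1125.
x, y, r = 1, 9, -2
assert 9 * x * (x ** 2 + 20 * r ** 2) == y ** 3
X, Y = 5 * y // x, -150 * r // x
assert (X, Y) == (45, 300) and Y ** 2 == X ** 3 - 1125
print("generator check passed")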
§ THE FIRST DESCENT AND INITIAL EXPONENT BOUND
Since we have considered prime exponents 2 and 3 in Section <ref>, we now proceed under the assumption that p≥ 5.
We rewrite equation (<ref>) as 9x(x^2+20r^2)= y^p. Since 3 | y, we make the substitution y = 3w to obtain
x(x^2+20r^2) = 3^p-2w^p.
We note that (x, x^2 + 20r^2) ∈{1,2,4,5, 10, 20} depending on whether 2,4,5,10 or 20 divides x or not.
Therefore, we consider twelve cases and apply a simple descent argument in each case.
For eight of the cases, we bound the exponent p by applying the following theorem of Mignotte <cit.>,<cit.> (based on the method of linear forms in logarithms).
The bounds obtained are recorded in Table <ref>, along with the descent information.
Assume that the exponential Diophantine inequality
|ax^n - by^n | ≤ c, with a,b,c ∈_≥ 0 and a≠ b,
has a solution in strictly positive integers x and y with max{x,y} > 1. Let A = max{a,b,3}. Then
n ≤max{ 3 log(1.5| c/b| ), 7400log A/log(1+log A/|log a/b|)}.
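The bound is elementary to evaluate; for concreteness, a small Python helper (an illustration, with a, b and c to be taken from the relevant descent case) reads:

import math

def mignotte_bound(a, b, c):
    # Upper bound on n for |a x^n - b y^n| <= c, assuming a != b and a
    # solution with max(x, y) > 1 exists, exactly as in the theorem above.
    A = max(a, b, 3)
    t1 = 3 * math.log(1.5 * abs(c / b))
    t2 = 7400 * math.log(A) / math.log(1 + math.log(A) / abs(math.log(a / b)))
    return max(t1, t2)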
Finally, we address descent cases 9–12 in Table <ref>.
For example, let us consider descent Case 9.
If p≥ 3, then 4 divides x.
This contradicts our assumption that 4 does not divide x.
Thus p ≤ 2.
Similar arguments show that p is bounded by 2 for descent cases 10–12.
§ AN INCREDIBLY EFFICIENT SIEVE
For descent cases 1–4 in Table <ref>, we make drastic computational improvements by very quickly discarding infeasible values of the prime exponent p
that cannot produce solutions to equation <ref>. We primarily achieve this via an application of prior work of the fourth author
<cit.>, which is based on the Primitive Divisor Theorem due to Bilu, Hanrot and Voutier <cit.>.
Let C_1≥ 1 be a squarefree integer and C_2 a positive integer. Assume that C_1C_2 ≢ 7 (mod 8). Let p be a prime for which
C_1x^2+C_2=y^p, (C_1x^2, C_2, y^p)=1
has a solution (x, y) ∈_>0. Write C_1C_2 = cd^2, where c is squarefree.
Then one of the following holds:
* p ≤ 5;
* p=7 and y=3, 5 or 9;
* p divides the class number of (√(-c));
* p divides (q-(-c/q)), where q is a prime q | d and q ∤ 2c.
Application of Theorem <ref> dramatically reduces the number of equations that need resolving in cases 1–4. This is evident from column 4 of Table <ref>. Unfortunately, we are unable to apply Theorem <ref> to cases 5–8 to obtain similar significant computational savings. We highlight that this is incredibly helpful in particular for case 4, where the bound obtained through appealing to Theorem <ref> is particularly large (p ≤ 142,861) and would not be tractable under computations performed in previous works <cit.>.
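To illustrate how quickly the last two conditions can be tested, the following Python sketch (an illustration only: the class number h((√(-c))) must be supplied by a computer algebra system, and the exponents p = 5, 7 are treated separately below) lists the primes that survive this part of the sieve:

from math import isqrt

def is_prime(n):
    return n > 1 and all(n % m for m in range(2, isqrt(n) + 1))

def legendre(a, q):
    # Legendre symbol (a/q) for an odd prime q
    a %= q
    if a == 0:
        return 0
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

def surviving_primes(c, d, class_number, p_max):
    # C_1*C_2 = c*d^2 with c squarefree; keep primes 11 <= p <= p_max for
    # which condition (3) or (4) of the theorem holds.
    qs = [q for q in range(3, d + 1) if d % q == 0 and is_prime(q) and (2 * c) % q != 0]
    survivors = []
    for p in range(11, p_max + 1):
        if not is_prime(p):
            continue
        if class_number % p == 0 or any((q - legendre(-c, q)) % p == 0 for q in qs):
            survivors.append(p)
    return survivors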
§ SMALL PRIME EXPONENTS: REVISITED
In this section, we deal with the prime exponents 5 and 7.
This enables us to further reduce the computational effort needed in resolving certain descent equations.
We begin this section by giving a summary of additional methods required.
§.§ The method of Chabauty
Let X/ be a curve with genus g. Suppose J_X has rank r, where J_X is the Jacobian of X. Let p≥ 3 be a prime of good reduction for X.
If the Chabauty condition r < g is satisfied then the method of Chabauty guarantees the existence of a non-empty set of so-called annihilating differentials.
By studying the zeros of these annihilating differentials, one can find a set of p-adic points on X that contain X().
We refer the reader to <cit.> for a brief overview, or to <cit.> and <cit.> for a comprehensive overview of the method of Chabauty. We note if X is a genus 2 hyperelliptic curve then there is a readily available <cit.> implementation that computes X() using Chabauty, provided that the Chabauty condition is satisfied.
§.§ Thue equations
For a fixed prime p, each descent case yields ternary equations of the form
aw_2^p-bw_1^2p=cr^2
In these cases, we let τ = w_2 and σ=w_1^2.
This transforms (<ref>) to
aτ^p-bσ^p=cr^2.
For a fixed value of r, equation (<ref>) is a Thue equation of degree p. For small enough values of p,
we can solve (<ref>) using the Thue solver in <cit.> which is based on an algorithm of Bilu and Hanrot <cit.>, and Tzanakis and de Weger <cit.>.
§.§ The non-existence of solutions for exponent 5
Setting p=5 and making appropriate substitutions
in (<ref>) yields a genus 2 hyperelliptic curve C.
By the celebrated theorem of Faltings <cit.>, C has finitely many rational points, and hence finitely many integral points.
With the exception of one descent case, we are able to determine C() using the method of Chabauty.
Thus we are able to find a resolution to equation <ref> with exponent p=5. Key data is recorded in
Table <ref>.
For the convenience of the reader, we expand upon the details in descent cases 1 and 7. Firstly, descent case 1 gives rise to
the genus 2 hyperelliptic curve C as stated in Table <ref>.
Using the Chabauty implementation in <cit.>, we determine all rational points on C; these have been recorded in Table <ref>.
The point at infinity
∞∈ C()
corresponds
to x=0, whilst the points (9,± 540)∈ C() imply that 3| r, contradicting the coprimality of x and r.
We now consider descent case 7.
We deduce that
the genus 2 hyperelliptic curve obtained, C (stated in Table <ref>),
has Jacobian with a rank bounded above by 1.
With the help of <cit.>, we find a point D∈ J() of infinite order, given in Mumford representation by
D=(x^2 - x + 2/3, 15x -10);
thus the rank of J() is 1.
Hence we are able to apply the method of Chabauty, and using the <cit.> implementation, we find that C()={∞}.
Unfortunately, we find that
descent case 3 is intractable under the method of Chabauty
as we are unable to ascertain the rank of the Jacobian of the genus 2 hyperelliptic curve C (stated in Table <ref>),
hence
we cannot verify that the Chabauty condition is satisfied.
Instead,
we let σ=w_2 and τ=w_1^2 to yield the Thue equation
σ^5-2^4· 3^6·τ^5=5r^2
for a fixed value of r. We use 's <cit.> Thue solver, and find that the only solutions to (<ref>), for 1≤ r≤ 10^6, are
(σ, τ, r) ∈{(5, 0, 5^2), (5^3, 0, 5^7), (5· 7^2, 0, 5^2· 7^5)}.
Our computations with prime exponent 5 thus prove the following proposition.
The equation
(x-4r)^3 + (x-3r)^3 + (x-2r)^3+(x-r)^3 + x^3 + (x+r)^3+(x+2r)^3 + (x+3r)^3 + (x+4r)^3 = y^5
with x, r, y ∈, (x, r) = 1 and 0< r <10^6 has no solutions with xy ≠ 0.
§.§ The non-existence of solutions for exponent 7
We consider descent cases 1–4.
An appropriate substitution (see Table <ref>), transforms the descent equation to an equation of the form
C_1X^2+C_2=w_2^p,
where C_1, C_2∈_>0, C_1 is squarefree and C_1C_2≢7 (mod 8).
Suppose p=7.
By Theorem <ref>,
any solution to (<ref>) requires w_2∈{3, 5, 9}.
One can easily verify that for any such w_2, the corresponding solution (x, y, r) to (<ref>) would satisfy (x, r)>1.
This contradicts our hypotheses.
Thus, we have proven the following.
The equation
(x-4r)^3 + (x-3r)^3 + (x-2r)^3+(x-r)^3 + x^3 + (x+r)^3+(x+2r)^3 + (x+3r)^3 + (x+4r)^3 = y^7
with x, r, y ∈, (x, r) = 1, r >0
and with
3 | x has no solutions with xy ≠ 0.
§ EQUATION ELIMINATION: 6.7 BILLION TO SOLVE
To solve all (approximately 6.7× 10^9) of
the remaining equations in variables w_1 and w_2,
we employ a combination of different criteria to finally resolve each one of the remaining eight descent cases. The implementation of these tests in <cit.> eliminates equations with no solutions.
§.§ Sophie Germain's empty set criteria
First, we apply the “empty set” criterion given in the following lemma. This result is based on work of Sophie Germain and gives a criterion for the nonexistence of solutions to (<ref>). For each prime p, the criterion constructs an auxiliary prime q and a set 𝒮(p,q). Since the elements of 𝒮(p,q) lie in the finite field _q, the criterion is computationally efficient. Upon comparing Table <ref> with Tables <ref> and <ref> in Section <ref>, we see that this criterion is indeed powerful, as only a small proportion of equations survive its application.
<cit.>
Let p ≥ 3 be a prime. Let a, b and c be positive coprime integers.
Let q=2kp+1 be a prime such that q∤ a.
Define
𝒮^'(p,q)={η^2p : η∈_q }
={0}∪{ζ∈_q^* : ζ^k=1}
and
𝒮(p,q)={ζ∈𝒮^'(p,q) : ((b ζ+c)/a)^2k∈{0,1}} .
If 𝒮(p,q)=∅,
then equation (<ref>)
does not have integral solutions.
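Since the elements involved lie in the finite field _q, the test amounts to a few lines of modular arithmetic; an illustrative Python sketch (not the implementation used for the reported computations) is:

from math import isqrt

def germain_witness(a, b, c, p, k_max=50):
    # Search for an auxiliary prime q = 2kp + 1 with q not dividing a and
    # S(p, q) empty, as in the lemma; return such a q, or None if none is
    # found with k <= k_max.
    def is_prime(n):
        return n > 1 and all(n % m for m in range(2, isqrt(n) + 1))
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        if not is_prime(q) or a % q == 0:
            continue
        S_prime = {pow(eta, 2 * p, q) for eta in range(q)}
        a_inv = pow(a, -1, q)
        S = {z for z in S_prime if pow((b * z + c) * a_inv % q, 2 * k, q) in (0, 1)}
        if not S:
            return q  # this q certifies that there are no integral solutions
    return None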
§.§ Local solubility
To the equations that survive Lemma <ref>, we apply a classical local solubility test to eliminate further equations. We outline the procedure below.
For a fixed value of r, the ternary equations from the descent are of the form
aw_2^p-bw_1^2p=c
and satisfy (a, b, c) = 1. If g = Rad((a, c)) > 1, then coprimality forces g | w_1, and we can write w_1=gw_1', a=gd and c=gf. Thus from
equation (<ref>) we obtain
dw_2'^p-ew_1'^2p=f,
where w_2'=w_2 and e=bg^2p-1.
The repetition of similar arguments yields
Dλ^p-Eμ^2p=Fν,
where D, E and F are now pairwise coprime. Now we can study the local behaviour of (<ref>) at different primes.
Let q be a prime.
* Suppose q| D.
From reducing (<ref>) modulo q, it follows that -EF is a quadratic residue modulo q.
* Suppose q ≡ 1 (mod p). Then we can view (<ref>) as an equation in _q. If we assume that q| D, we see that (<ref>) can only have (non-trivial) solutions if -F/E is a 2p-power in _q. Similarly, if q| E (resp. q| F), then (<ref>) can only have solutions if F/D (resp. E/D) is a p-power in _q.
* Suppose q∈{2, 3, 5, 7, p} or q| DEF. Then we can use the local solubility implementation in <cit.> to test if the rational projective curve given by (<ref>) has _q–rational points.
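The first two of these criteria amount to a handful of power-residue computations; a sketch of such a check (an illustration only, not the implementation referenced above) is:

def is_power(x, m, q):
    # x is a nonzero m-th power in F_q, assuming q is prime and m divides q - 1
    x %= q
    return x != 0 and pow(x, (q - 1) // m, q) == 1

def locally_obstructed(D, E, F, p, q):
    # True if the odd prime q rules out solutions of D*l^p - E*m^(2p) = F*nu
    # via the two elementary criteria above (D, E, F pairwise coprime).
    if D % q == 0 and pow((-E * F) % q, (q - 1) // 2, q) not in (0, 1):
        return True   # -EF is not a quadratic residue modulo q
    if q % p == 1:
        if D % q == 0 and not is_power(-F * pow(E, -1, q), 2 * p, q):
            return True
        if E % q == 0 and not is_power(F * pow(D, -1, q), p, q):
            return True
        if F % q == 0 and not is_power(E * pow(D, -1, q), p, q):
            return True
    return False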
§.§ Descent over number fields
To the equations that survive Sections <ref> and <ref>, we apply a further descent over number fields.
With D, E, F as in (<ref>), we write
E' = ∏_ord_q(E) odd q, the product of the primes q that divide E to an odd power.
Hence, EE'=s^2 for some s∈.
Write DE'=r and FE'=n^2m with m squarefree. Then we can rewrite (<ref>) as
rρ^p = (sκ^p+n√(-m))(sκ^p-n√(-m)),
where ρ=λ and κ=μ. Let K= (√(-m)) and 𝒪 be its ring of integers.
Let
𝔊={𝒫⊂𝒪 : 𝒫 is a prime ideal and 𝒫| r or 𝒫| 2n√(-m)}.
Let τ = (sκ^p+n√(-m)).
If 𝒫∉𝔊, then _𝒫(ττ̄)=p_𝒫(ρ). By assumption, _𝒫(τ-τ̄)=_𝒫(2n√(-m))=0. Hence, the equivalence class of τ in K^∗/(K^∗)^p is an element of the “p-Selmer group”
K(𝔊, p) = {ε∈ K^∗/(K^∗)^p: _𝒫(ε) ≡ 0 p for all 𝒫∉𝔊}.
This is an _p vector space that can be computed in <cit.> using an inbuilt command.
Then,
(sκ^p+n√(-m))=εη^p,
for some η∈ K^∗, and ε∈ℰ := {ε∈ K(𝔊, p) : norm(ε)/r∈ (^∗)^p}.
This yields the following two criteria analogous to Lemma <ref>, and local solubility techniques over .
<cit.>
Let K=(√(-m)), and let 𝔮 be a prime ideal of K. Suppose one of the following holds:
(i) _𝔮(s), _𝔮(n√(-m)), _𝔮(ε) are pairwise distinct modulo p;
(ii) _𝔮(2s), _𝔮(ε), _𝔮(ε) are pairwise distinct modulo p;
(iii) _𝔮(2n√(-m)), ord_𝔮(ε), _𝔮(ε) are pairwise distinct modulo p.
Then there is no κ∈ and η∈ K satisfying (<ref>).
<cit.>
Let q = 2kp + 1 be a prime. Suppose q𝒪 = _1 _2 where _1, _2 are distinct,
and such that __j (ε) = 0 for j = 1, 2. Let
χ^'(p, q) = {η^p : η∈_q}.
Let
χ(p, q) = {ζ∈χ^'(p, q) : ((sζ + n√(-m))/ε)^2k≡ 0 or 1 mod _j for j = 1, 2}.
Suppose χ(p, q) = ∅. Then there is no κ∈ and η∈ K satisfying (<ref>).
§ THE FINAL RESOLUTION!
In this section, we apply the techniques described in Sections <ref> and <ref> in order to resolve all the remaining equations.
To this end, we run a <cit.> script
which implements the mathematical bounds and tests outlined in Sections <ref> and <ref>.
This completes the proof of Theorem <ref>.
§.§ Full resolution of Cases 1 to 4
We first record computational data for Cases 1–4. These cases
are amenable to Theorem <ref>, which provides an extremely fast test to eliminate exponents. Further, we have already dealt with small prime exponents 5 and 7 in Section <ref>.
Recall our exponent is bounded above via Mignotte's Theorem (see Table <ref>) and thus we have a finite number of equations to resolve in two unknown variables. We take all remaining equations through the tests outlined in Section <ref>, and record the outcome in Table <ref>.
We remark once more that application of Theorem <ref> and Chabauty techniques significantly reduced the number of equations that needed to be resolved, thereby significantly reducing the computation time (see Table <ref>).
Moreover, Theorem <ref> allowed us to quickly bypass certain exponents that would otherwise pass the insolubility tests of Section <ref> and give rise to intractable Thue equations.
One such example arises when considering descent case 3. We choose the pair (r,p)=(390625, 17).
The ternary descent equation arising from this case is given by
w_2^17-2^28· 3^30· w_1^34=5· 390625^2.
We found that the techniques outlined in Section <ref> fail to resolve (<ref>).
This is merely one example where the strategies outlined in previous work <cit.>,
are alone insufficient to prove Theorem <ref>. Another example of an intractable Thue equation bypassing all tests of Section <ref>, arising from descent case 2 with exponent 19 and r=262144 can be found in <cit.>.
§.§ Full resolution of Cases 5 to 8
We now turn to Cases 5–8. Unfortunately,
we are unable to apply Theorem <ref> to quickly discard exponents, or deal with the exponent 7 in an efficient manner. Computationally, a full resolution is still achieved, albeit, much less efficiently. See Table <ref> for a comparison of the computational times.
We recall that we have dealt with the exponent p=5 in Section <ref> using Chabauty.
In contrast with cases 1–4
we now find equations that survive Sophie Germain's criteria, local solubility tests and the second descent (over a number field). These remaining equations all have exponent 7 and occur at the following values of r:
r∈{2401,277360, 352832, 389176, 729296, 809336, 826864, 903464, 979616}.
These 9 equations are resolved using the inbuilt Thue solver, which is based on an algorithm of Bilu and Hanrot <cit.>, and Tzanakis and de Weger <cit.>. No solutions are found.
§.§ Computational Data
We list the approximate computational times for each descent case here. The computations, split over 8 processors, took roughly 19 days to complete.
|
http://arxiv.org/abs/2307.03231v1 | 20230706180004 | Superconformal indices for non-Lagrangian theories in five dimensions | [
"Hee-Cheol Kim",
"Minsung Kim",
"Sung-Soo Kim",
"Gabi Zafrir"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.02685v1 | 20230705230336 | The Kibble-Zurek Scenario and Coarsening Across Nonequilibrium Phase Transitions in Driven Vortices and Skyrmions | [
"C. Reichhardt",
"C. J. O. Reichhardt"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.stat-mech",
"cond-mat.supr-con"
] |
Theoretical Division and Center for Nonlinear Studies,
Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
We investigate the topological defect populations for
superconducting vortices
and magnetic skyrmions on random pinning
substrates under driving
amplitudes that are swept at different rates or suddenly quenched.
When the substrate pinning is sufficiently strong, the system
exhibits a nonequilibrium phase transition
at a critical drive into a more topologically ordered state.
We examine the number of topological defects that remain as we cross the
ordering transition at different rates. In the vortex case,
the system dynamically orders into a moving smectic, and
the Kibble-Zurek scaling hypothesis gives exponents consistent
with directed percolation.
Due to their strong Magnus force,
the skyrmions dynamically order into an isotropic
crystal, producing different Kibble-Zurek
scaling exponents that are more consistent with coarsening.
We argue that in the skyrmion crystal,
the topological defects can both climb and glide,
facilitating coarsening, whereas in the vortex smectic state,
the defects cannot climb and coarsening is suppressed.
We also examine pulsed driving across the ordering transition
and find that the defect population
on the ordered side of the transition decreases with time as a power law,
indicating that coarsening can occur across nonequilibrium phase transitions.
Our results should be general to a wide class of nonequilibrium systems driven over random disorder where there are well-defined topological defects.
The Kibble-Zurek Scenario and Coarsening Across
Nonequilibrium Phase Transitions in Driven Vortices and Skyrmions
C. Reichhardt and C. J. O. Reichhardt
August 1, 2023
===================================================================================================================
§ INTRODUCTION
Phase transitions,
such as the transition
from solid to fluid or the change from paramagnetic to ferromagnetic,
are well studied in equilibrium systems, and may have discontinuous
first-order character or continuous second-order character
<cit.>.
Typically these
transitions are identified via an order parameter,
symmetry breaking, or the formation of topological defects.
There has been growing interest in understanding
whether nonequilibrium
systems can also exhibit phase transition behavior, and if
so, how this behavior can be characterized <cit.>.
There are now several systems that have
shown strong evidence for
nonequilibrium phase transitions,
such as transitions among different turbulent states
<cit.>,
reorganization of periodically sheared
colloidal systems <cit.>,
and emergent behaviors in systems with non-reciprocal
interactions <cit.>.
Another phenomenon
exhibiting behavior
consistent with a nonequilibrium phase transition is
the depinning of particles coupled
to random or disordered substrates <cit.>.
For example, the depinning of elastic objects
such as charge density waves
from a random substrate shows scaling near the depinning
threshold <cit.>. Other models
with strong plasticity,
such as the depinning of
colloidal particles or
vortices in type-II superconductors, also exhibit
scaling of the velocity-force curves near depinning with different
exponents than
those found for elastic depinning <cit.>.
A wide variety of continuous and first-order behavior can occur
during depinning from ordered substrates
due to the formation of kinks or solitons
that can produce hysteresis across the transition,
indicative of the type of metastability associated with
a first-order transition <cit.>.
One of the most studied depinning systems
is vortices in type-II superconductors,
which in the absence of quenched disorder form a
triangular lattice <cit.>.
When the underlying disordered substrate
is strong enough, the vortices form a topologically disordered state that
can undergo plastic depinning, and at higher drives
there is a dynamical ordering transition
into a moving smectic or anisotropic crystal
<cit.>.
Above the ordering transition,
a large fraction of the vortices have six neighbors as in a perfect lattice,
but a moving isotropic crystal does not form
due to the anisotropic fluctuations produced by the pinning on the moving
vortex structure.
In two-dimensional (2D) systems, the strongly
driven vortices organize into
a smectic state consisting
of a series of chains of vortices that slide past each other.
In this case, there can still be several topological defects present
in the form of
dislocations composed of
pairs of fivefold and sevenfold disclinations (5-7 pairs)
that slide in the direction of drive,
so that in the dynamically reordered state,
the Burger's vectors of all of the dislocations are oriented along
the same direction.
The dynamical ordering of vortices at high
drives has been studied with neutron
scattering and direct imaging <cit.>,
but it can also be
deduced from features in the velocity-force
curves
and peaks in the differential conductivity
<cit.>.
When thermal fluctuations are important,
the vortices can still dynamically order at higher drives
but the driving force needed to order the system
diverges as
the temperature T approaches the pin-free melting temperature
<cit.>.
The dynamical ordering can also produce signatures
in the conduction noise.
Near depinning, the noise has a strong
1/f^α signature and there is a large amount of low frequency
noise power,
while above the
dynamical reordering transition, the
noise has narrow band characteristics
and the noise power is low <cit.>.
Similar dynamical ordering of particle-like systems has also been studied
for colloids <cit.>,
Wigner crystals <cit.>,
pattern forming systems <cit.>,
frictional systems <cit.>, active matter <cit.>, and
magnetic skyrmions <cit.>.
Recently it was shown in simulations and experiments that
dynamical ordering transitions in driven vortices and colloids
can also be examined within the framework of the Kibble-Zurek (KZ)
scenario <cit.>.
Under equilibrium conditions,
when a phase transition occurs from a
disordered to an ordered phase as a function of some control parameter,
there can be well-defined topological defects
such as domain walls, dislocations, or, in the case of superfluid
transitions, vortices.
If
the control parameter
is changed slowly so that the system remains
in the adiabatic limit,
topological defects will be absent on the ordered side of the transition.
According to the KZ scenario, however,
if the control parameter is swept across the transition sufficiently
rapidly, topological defects persist on the ordered side of the
transition,
and the defect density P_d
scales as a power law P_d∝τ_q^-β,
where τ_q is the time duration of the quench of the
control parameter across the
transition <cit.>.
The exponent β is related to the universality class and scaling of the
underlying second order phase transition
according to β = (D-d)ν/(1 + zν), where
D is the dimension of the system,
d is the dimension of the defects, and z and
ν are the critical exponents that
relate to the specific universality class of the
transition.
The KZ scenario has been studied in a variety of equilibrium
systems such as liquid crystals
<cit.>,
superfluid vortices <cit.>,
ion crystals <cit.>,
2D colloidal systems <cit.>
and cold atoms <cit.>.
In principle, the KZ scenario can be
applied to nonequilibrium phase transitions when
well-defined topological defects can
be identified.
There have been some applications of the KZ scenario
to nonequilibrium systems for which the underlying phase transition is
in an equilibrium universality class <cit.>; however,
there are
other examples of nonequilibrium phase transitions
that have no equilibrium counterpart, such as
directed percolation <cit.>.
Recently Reichhardt and Reichhardt studied the defect
populations across
the dynamical ordering transition of 2D driven
superconducting vortices for increasing drive sweep rates 1/τ_q,
and found power law scaling consistent
with the KZ scenario <cit.>.
Interestingly, the exponents in the vortex system
were consistent with
1+1-dimensional directed percolation <cit.>
rather than with the 2D Ising model. Directed
percolation is a universality
class that is associated with many of
the previously observed nonequilibrium
phase transitions <cit.>.
In the case of driven superconducting vortices,
the ordered state consists of one-dimensional (1D) chains
forming a moving smectic configuration,
so it is natural for the system to behave more like a 1D
than a 2D system.
In Ref. <cit.> it was also shown
that colloidal particles driven over quenched disorder
form a moving smectic as well, producing the same KZ scaling.
Maegochi et al. observed similar exponents in an experimental
realization of the superconducting
vortex system <cit.>.
Some of the next questions
to address for driven systems are
whether the KZ scenario can also be applied in cases where moving ordered
crystals form instead of moving smectics, so that the ordered dynamics
are fully 2D.
In this case, it would be interesting to determine whether
the system would fall into the class of 2D directed percolation, or
into some different universality class.
Another general question is the possible role of coarsening in these systems.
In the case of an instantaneous quench across the ordering transition,
there will be a specific defect population,
but it is not known if these defects coarsen on the ordered
side of the transition even in systems with no thermal fluctuations.
In equilibrium systems, when an instantaneous quench is performed from the
disordered to the ordered phase,
the defect population
can exhibit coarsening with different types of
power law behaviors that depend on the nature of the defects
<cit.>.
Numerical studies of quenches in spin ices also showed that coarsening
dominates over the KZ scaling if the topological
defects interact sufficiently strongly with each other
on the ordered side of the transition <cit.>.
In this work, we consider both continuous driving and
instantaneous quenching across dynamical ordering
transitions for superconducting vortices and magnetic skyrmions in
two-dimensional systems with quenched disorder.
The skyrmions are magnetic particle-like textures
<cit.>
that have many similarities to
vortices in type-II superconductors
in that they form a triangular lattice <cit.>,
can interact with
pinning <cit.>, and can be set into motion with an
applied drive
<cit.>. There have been several numerical
and experimental studies that have
demonstrated
the dynamical ordering of skyrmions into a crystal under an applied drive
<cit.>. One of the key differences between skyrmions and
superconducting vortices is that skyrmions have a strong
Magnus force
that creates velocities that are perpendicular to the forces experienced
by the skyrmions. As a result,
under a drive skyrmions
move at an angle with respect to the driving direction,
called the skyrmion Hall angle
<cit.>.
Additionally, the Magnus
force affects the fluctuations
skyrmions experience from moving over the pinning landscape
<cit.>.
In the case of
superconducting vortices where the
overdamped dynamics cause the velocities
to be aligned with the direction of drive,
the fluctuations produced by pinning are strongest in the direction of motion,
causing the vortices to adopt a smectic configuration;
however, for skyrmions, the Magnus forces mix the fluctuations
so that they are both parallel
and perpendicular to the direction of motion, permitting the
skyrmions to form an
isotropic lattice <cit.>.
As a result, the moving ordered
state is significantly different in the skyrmion and superconducting
vortex systems,
so an open question is whether the KZ scenario still
applies to driven skyrmions,
and if so, whether it would fall in a different universality class
from that of the vortices.
In principle, one would not expect the skyrmions to
be in the 1D directed percolation
universality class since the ordering
of the skyrmion lattice is strongly two-dimensional in character.
Here we show that the skyrmions form a lattice or polycrystalline
crystal rather than a moving smectic
and can reach a higher level of dynamical ordering than
the superconducting vortices.
As a function of the quench time τ_q,
the skyrmion defect populations obey
a power law scaling with P_d ∝τ_q^-β,
where the observed value of β = 0.5
is different from the values
β = 0.401
expected for 1D directed percolation and
β = 0.64
expected for 2D directed percolation.
We argue that the exponents are consistent with a coarsening
process that, in 2D, is expected to give
β = 1/2, indicating that the behavior
is more like that of systems with strongly interacting
defects subjected to a quench <cit.>.
Once the superconducting vortices
form moving 1D channels, the defects
are locked into the channels and are unable to climb, so
the defect density remains static in the smectic phase.
Skyrmions form an isotropic 2D lattice in which the topological defects
can both climb and glide. This permits defect annihilation to
occur and causes coarsening dynamics to dominate the slow quenches.
We cannot rule out the possibility that the
skyrmions simply fall into a different universality class of phase
transitions than the superconducting vortices;
however, we can directly observe coarsening dynamics
on the ordered side of the
transition by considering instantaneous quenches
of the skyrmion system.
These quenches reveal that
the defect annihilation has a
power law dependence on time that is consistent with coarsening
to a more ordered state.
The instantaneously quenched skyrmion system forms a polycrystalline
arrangement rather than the smectic structure observed for superconducting
vortices.
The coarsening of the skyrmion lattice
is most prominent just above the drive where the
ordering transition occurs,
while at drives much higher than the ordering transition,
it occurs in two stages.
The first stage consists of the annihilation
of individual defects,
while the second stage
involves the coarsening of the grain boundaries.
In the instantaneous quenches,
the system can better order closer to the critical points
since the effective shaking temperature
produced from collisions of the particles with the pinning sites is
largest close to the transition, causing the defects to be
more mobile.
§ SIMULATION
We consider
particle-based models of both superconducting vortices and magnetic skyrmions
driven over a random substrate in a two-dimensional system
of size L × L with L=36
and periodic boundary conditions.
In both cases, the particles
have repulsive interactions modeled as a Bessel function
<cit.>.
Throughout this work the sample contains
N_v=1296 particles.
The skyrmion motion is obtained with
a modified Thiele equation
that has been used extensively to study collective skyrmion transport
effects
<cit.>.
The dynamics of a single skyrmion
or vortex are given by the following equation of motion:
α_d v_i + α_m ẑ× v_i =
F^ss_i + F^sp + F^D + F^T_i .
The particle velocity is
v_i = d r_i/dt and dissipation
arises from the damping term α_d
that aligns the
velocity in the direction of the net applied force.
The second term on the left is a Magnus force
of magnitude α_m that
creates a velocity component perpendicular to the net applied
forces.
One way to characterize the relative importance of the Magnus and
damping terms is with
the
intrinsic skyrmion Hall angle,
θ_sk^ int = arctan(α_m/α_d).
Skyrmion Hall angles ranging from
θ_sk=5^∘ to 50^∘ have been measured;
however, it is likely that larger
skyrmion Hall angles are possible
in samples containing smaller skyrmions
where direct imaging of
the skyrmion dynamics is difficult.
We fix α_m^2 + α_d^2=1, and for the
vortices, α_m=0 and α_d=1.
The skyrmions and vortices have repulsive interactions
described by
F_i^ss = ∑^N_j=1A_sK_1(r_ij)r̂_ij,
where
r_ij = | r_i - r_j|
is the distance between particles i and j,
r̂_ij=( r_i- r_j)/r_ij,
and K_1(r) is
the first order Bessel function,
which decays exponentially at long range.
Experimental evidence exists for repulsive
skyrmion interactions that decay exponentially at longer range <cit.>.
The
particles
also interact with random disorder from the substrate modeled as
N_p non-overlapping pinning sites in the form of finite range
attractive parabolic wells, with a maximum strength of F_p
and a range of R_p=0.35.
Here, F_i^sp= -∑_k=1^N_p(F_p r_ik^(p)/R_p)Θ(R_p - r_ik^(p))r̂_ik^(p),
where r_ik^(p)= | r_i- r_k| is the distance between particle i
and pinning site k,
r̂_ik^(p)=( r_i- r_k)/r_ik^(p),
and Θ is the Heaviside step function.
This model was shown in previous work
to capture a variety of vortex and skyrmion behaviors
observed in experiment,
including dynamic ordering and the
velocity dependence of the skyrmion Hall angle.
We fix N_v/N_p=2.
The initial particle positions are obtained using simulated annealing
with a nonzero temperature represented by Langevin kicks F^T_i,
where ⟨ F^T_i⟩=0 and
⟨ F^T_i(t) F^T_j(t^')⟩=2α_d k_B T δ_ijδ(t-t^').
When the pinning is sufficiently strong,
the system forms a topologically disordered state
even at zero drive.
After initialization,
we set F^T to zero and
apply a uniform driving force F^D=F_Dx̂
on all the particles
in the x-direction.
To study the rate dependence,
we increase the drive in increments of δ F_D=0.002 and
wait for τ_q simulation time steps between increments.
We stop the sweep at a particular maximum value of F_D,
and we take
τ_q=5 to 10000.
For most of this work, we set
F_p= 1.0 so that
dynamical ordering near a drive of F_D=1.4, and we
study the defect densities near F_D = 1.8, above
the dynamic reordering transition.
For slow sweep rates or large values of τ_q,
the system exhibits pinned, plastic, and dynamically ordered phases,
with a critical depinning force F_c marking the transition from
pinned to plastic flow, while the
dynamical ordering force F_cr
is defined as the drive
at which the system dynamically orders into a moving smectic
or moving crystal.
We pass across
F_cr at different drive sweep rates and count
the number of topological defects for a
fixed
value of F_D on the ordered side of F_cr.
For small τ_q, more defects are present,
and the KZ scenario predicts that the fraction
of topological defects will scale as a power law with the quench rate.
The vortices obey the same equation of motion as the skyrmions
but have α_m = 0, giving θ^ int_sk = 0^∘.
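A single update step of the dynamics described above can be sketched in a few lines of Python (an illustration with assumed array shapes and default parameters, not the optimized code used for the production runs; the thermal force is omitted since F^T = 0 after initialization):

import numpy as np
from scipy.special import k1

def step(pos, pins, F_D, alpha_d, alpha_m, F_p=1.0, R_p=0.35, L=36.0, dt=0.01):
    # One Euler step for N particles in a periodic L x L box.
    # pos: (N, 2) particle positions; pins: (N_p, 2) pinning site positions.
    N = len(pos)
    F = np.zeros_like(pos)
    F[:, 0] += F_D                                   # uniform drive along x
    for i in range(N):
        mask = np.ones(N, dtype=bool)
        mask[i] = False
        d = pos[i] - pos[mask]
        d -= L * np.rint(d / L)                      # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        F[i] += np.sum(k1(r)[:, None] * d / r[:, None], axis=0)   # Bessel repulsion (strength set to unity)
        dp = pos[i] - pins
        dp -= L * np.rint(dp / L)
        rp = np.hypot(dp[:, 0], dp[:, 1])
        inside = rp < R_p
        F[i] -= (F_p / R_p) * np.sum(dp[inside], axis=0)          # attractive parabolic pins
    # invert  alpha_d v + alpha_m (z x v) = F,  with alpha_d^2 + alpha_m^2 = 1
    norm = alpha_d ** 2 + alpha_m ** 2
    v = np.stack([alpha_d * F[:, 0] + alpha_m * F[:, 1],
                  -alpha_m * F[:, 0] + alpha_d * F[:, 1]], axis=1) / norm
    return (pos + dt * v) % L, v

The drive sweep protocol of the main text then simply increases F_D by δ F_D = 0.002 after every τ_q such updates.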
§ RESULTS
In Fig. <ref>(a), we show a Voronoi plot
of the vortex positions in a system with
F_p = 1.0
at a drive of F_D = 0.4.
In this case, the system is strictly overdamped with
θ_sk^ int=0^∘,
and the drive is increased from F_D = 0 to F_D=3.5 in increments
of δ F = 0.002
with a waiting time of τ_q=1000
simulation time steps at each increment for a total time
of τ = 1.75× 10^6 simulation time steps.
At this value of F_p, the
system forms a disordered state when F_D = 0.
At F_D = 0.4, the system has depinned and the particles are undergoing
plastic flow in a fluid-like state.
Figure <ref>(b) shows that the corresponding structure factor
S( k)
has a ring signature indicative of disorder.
At higher drives, the system dynamically orders into a moving smectic,
as illustrated in Fig. <ref>(c) at F_D = 1.8,
where most of the particles form 1D chains and the topological defects
are aligned in the direction of the drive.
The corresponding S( k) in
Fig. <ref>(d)
has the two pronounced peaks expected for a smectic structure.
For slower quench rates, the system becomes
more strongly ordered.
Figure <ref>(e,f) shows the Voronoi and S( k) plots
for the same drive of F_D = 1.8 in a system with a finite
Magnus force appropriate for skyrmions,
with α_m = 0.8, α_d = 0.6, and
θ_sk^ int = 53.1^∘.
In this case, near depinning the system is still disordered and has
the same features shown in Fig. <ref>(a,b),
but at high drives,
the system becomes more strongly ordered
and develops six peaks in S( k),
as shown Fig. <ref>(f), indicative of a moving crystal.
This demonstrates that the nature of the driven ordered state in
skyrmions is different from that of the vortices.
In Fig. <ref> we plot
the fraction P_6 of particles with six neighbors
for the vortices and skyrmions from Fig. <ref>. Here
P_6=N_v^-1∑_i^N_vδ(z_i-6) where z_i is the coordination
number of particle i obtained from the Voronoi construction.
There is a critical drive F_cr at which
the system shows an increase in order,
indicative of the dynamic ordering transition.
For the vortices, the increase in P_6
corresponding to F_cr falls at a lower drive value
compared to the skyrmions, and the saturation value of P_6 is
also lower for the vortices than for the skyrmions.
The ordered state for the vortices is
the moving smectic illustrated in Fig. <ref>(c,d),
where the dislocations are locked in
1D channels and cannot climb. In contrast, for the
skyrmion case, the system forms a moving crystal
and the defects are able to climb, leading to
the emergence of a more ordered state.
This further underscores the fact that the
dynamically reordered states for the skyrmions and
the vortices are different.
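P_6 itself follows directly from the Voronoi construction; a minimal sketch (an illustration, with the periodic box handled by tiling the particle positions) is:

import numpy as np
from scipy.spatial import Voronoi

def p6_fraction(pos, L=36.0):
    # Fraction of particles with exactly six Voronoi neighbors in a periodic
    # L x L box; pos is an (N, 2) array of positions.
    N = len(pos)
    shifts = [(i * L, j * L) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    tiled = np.concatenate([pos + np.array(s) for s in shifts])
    start = shifts.index((0.0, 0.0)) * N             # central (untranslated) copy
    z = np.zeros(len(tiled), dtype=int)
    for a, b in Voronoi(tiled).ridge_points:
        z[a] += 1
        z[b] += 1
    return np.mean(z[start:start + N] == 6)

The defect fraction P_d = 1 - P_6 used later is then obtained immediately.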
In Fig. <ref>(a), we plot
⟨ V_x⟩ and ⟨ V_y⟩,
the velocities parallel and perpendicular to the driving direction,
respectively, versus F_D
for the same skyrmion system from Fig. <ref>(e,f) but for a quench
rate that is ten times lower,
τ_q=10000.
Here there is a nonlinear regime near
depinning, and at high drives, the velocity curves become linear.
Figure <ref>(b) shows the absolute value of
the measured skyrmion
Hall angle,
θ_sk = |arctan(⟨ V_y⟩/⟨ V_x⟩)|,
which starts off at zero
in the pinned phase and increases linearly with increasing F_D before
saturating at high drives to a value close to
the intrinsic skyrmion Hall angle θ_sk^ int.
The velocity dependence of the skyrmion Hall
angle has been studied previously in simulations
<cit.> and
observed in experiments
<cit.>.
The plot of P_6 versus F_D in
Fig. <ref>(d)
shows that P_6 is low in the plastic flow regime
where θ_sk is increasing,
but that a dynamical ordering transition occurs
for F_D > 1.325 and the system orders into
a mostly crystalline state
with P_6≈ 0.98.
The skyrmion Hall angle is close to
its intrinsic value when the system is on
the ordered side of the transition.
Now that we have established the
range of drives for which the system is ordered, we can
sweep through the ordering transition at different rates and
count the defects.
Figure <ref> shows P_6 versus
F_D over the range F_D = 0 to F_D=3.5 for
quench times of τ_q = 10, 20, 40, 70, 100, 1000, and 4000,
where the case of τ_q=10000 was already shown in
Fig. <ref>. The quench times
correspond to total simulation times of 10^3τ_q.
When F_D > 1.3, the system
becomes more ordered as the value of τ_q increases.
In Fig. <ref>
we plot the Voronoi constructions at F_D = 1.8 for
τ_q = 10, 40, 100, and 4000, showing that
for a given drive, fewer defects
become trapped at lower quench rates.
In Fig. <ref> we plot the fraction of defects,
P_d = 1 -P_6, versus τ_q for both vortices and skyrmions
from the system in Fig. <ref>
at a drive of F_D = 1.8.
The lines are fits to P_d ∝τ_q^-β
with β = 0.5 for the skyrmions and β=0.36 for the vortices.
The steeper slope for the skyrmion case is a reflection of
the fact that the skyrmions can order more effectively than the vortices.
Previous
simulations <cit.> of the KZ scenario for vortices gave
a value of β≈ 0.385,
and it was argued that
this was close to the
value β = 0.401 expected for 1+ 1 directed percolation
since the vortices form 1D chains in the moving smectic state.
The KZ scenario predicts that across a second-order phase transition,
β = (D -d)ν/(1 + zν),
which gives β=2/3
for the 2D Ising model and β=0.6 for 2D directed
percolation <cit.>,
both of which are higher values than what we observe for the vortices
and the skyrmions.
Additionally, for very fast quenches in the skyrmion case,
the fits give even lower values of β, which argues
against the system being in the 2D Ising universality class.
This suggests that
although the behavior of the skyrmions is more 2D in character
than that of the vortices,
it is neither 2D directed percolation nor Ising-like.
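The quoted exponents are obtained from straight-line fits on a log-log scale; an equivalent minimal sketch (using synthetic placeholder data purely for illustration) is:

import numpy as np

tau_q = np.array([10., 20., 40., 70., 100., 1000., 4000., 10000.])
P_d = 0.5 * tau_q ** -0.5             # placeholder data generated with beta = 0.5

slope, _ = np.polyfit(np.log(tau_q), np.log(P_d), 1)
print(f"fitted beta = {-slope:.2f}")  # recovers 0.50 for this synthetic data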
An exponent of β = 1/2 was obtained
from quenches of a 2D
artificial spin ice system <cit.>, and it was argued
that in that system, the dynamics is dominated by coarsening
of the defects through the quench,
leading to the formation of domain walls surrounding regions of
size R.
As a function of time, R increases <cit.>
according to R(t) ∝ t^1/2, and therefore
the number of defects decreases with time
as 1/R(t).
In the skyrmion system,
we find that some of the topological defects form domain walls,
as illustrated in Fig. <ref>(b).
In our simulations, once the skyrmion system is on the ordered side
of the transition with F_D > F_cr, the topological defects
interact strongly with each other
and can annihilate through a coarsening process.
As a result, different sweep rates τ_q
give access to different portions of the coarsening
process and produce exponents associated with coarsening.
For the vortex system where the particles form 1D chains,
the defects remain trapped in the chains and cannot climb,
reducing the amount of coarsening that occurs and allowing a
greater number of topological defects to survive
on the ordered side of the
transition, as shown in Fig. <ref>(b).
We cannot rule out the possibility that the skyrmion
system could fall in some other universality class or
that the coarsening might compete with the critical dynamics.
In Fig. <ref>, we plot P_d versus τ_q
for the skyrmion system with θ_sk^ int=53.1^∘
from Fig. <ref> at varied pinning strengths of
F_p = 1.4, 1.0, 0.7, and 0.4. The solid line is
a power law fit with exponent
β = 0.5.
In this case, we examine the defect densities
at F_D=1.2F_cr since the value of the critical reordering force
F_cr varies as a function of
F_p and the ratio α_m/α_d.
In Fig. <ref>(a) we plot
P_d versus τ_q for the same system at F_p = 1.0 but
for varied Magnus force contributions giving
θ_sk^ int = 84.26^∘, 53.1^∘, 37.95^∘, and
23.58^∘.
The solid line is a power law fit with β = 0.5.
When θ = 84.26^∘,
we observe significant deviations from
the power law; however, in this case, α_m is ten times
larger than α_d, so the dynamics are heavily dominated by
gyrotropic motion.
For the smaller skyrmion
Hall angles, the exponents become more robust,
and these smaller values of θ_sk^ int
are well within the range of
what has been observed experimentally.
Figure <ref>(b) shows the same variation of P_d versus τ_q
with skyrmion Hall angle in a system with weaker
pinning of F_p = 0.4.
In general, we find that for
skyrmion Hall angles greater than 10^∘,
the system dynamically orders into
an isotropic crystal and
exhibits a scaling exponent close to β = 0.5,
while for smaller skyrmion Hall angles (not shown),
the system forms a moving smectic and β decreases toward
the value obtained for vortices with θ_sk^ int=0^∘.
§ INSTANTANEOUS QUENCHES
Another method for examining the behavior of the defects
on the ordered side of the
transition is to perform instantaneous quenches starting from a drive
well below the critical ordering transition drive F_cr, where
the system is topologically disordered.
We instantaneously
increase the drive to a value above
F_cr and measure the time-dependent decay
of the defect population.
We specifically consider the system from
Fig. <ref> with F_p = 1.0, where the vortices
with θ_sk^ int=0^∘
form a dynamically ordered smectic state but the
skyrmions with θ_sk^ int = 53.1^∘ form
a dynamically ordered crystal, and
we instantly change the driving from F_D = 0.5 to F_D = 1.7.
The ordering transition
for the skyrmions occurs near F_cr= 1.325.
The plot of P_d versus simulation time in
Figure <ref> for both vortices and skyrmions shows that
there is an extended regime in which the population of defects
continues to decrease for the skyrmion system, indicative of coarsening,
while in the vortex system the defect population rapidly saturates.
The solid line is
a power law fit to P_d ∝ t^-α
with α = 0.57. This
exponent is close to the
value β=0.5 obtained in Fig. <ref>
as a function of τ_q for finite rate
quenches in the skyrmion system,
suggesting
that coarsening on the ordered side of the transition
is occurring more strongly for the skyrmions than
for the vortices.
This could be due to the
fact that the skyrmions form a more isotropic structure
that allows both climb and glide of the defects,
while the vortices form a smectic structure
containing trapped defects that cannot annihilate. Generally,
for any value of F_D in instantaneous quenches above the ordering drive,
the skyrmions show an extended regime of coarsening compared to the
vortices and reach a lower saturated value of P_d.
In Fig. <ref> we plot P_d versus time for the skyrmion
system from Fig. <ref> for
quenches from F_D=0.5 to different final values of F_D.
For final values of F_D = 1.0 and 1.2, which are below the
critical ordering drive F_cr, the defect populations
show little change since the system remains in the disordered phase.
For a final value of F_D = 1.325, which is
just above the ordering transition, coarsening extends out to
long times and can be described by a power law
P_d ∝ t^-α with α=0.5, as indicated by the dashed line.
For a final value of F_D = 1.7,
there is a similar extended range of coarsening, as also shown
in Fig. <ref>.
At a final value of F_D = 2.0, we start to see some deviations
and there is a sharp jump down in the
defect density at later times.
In an isotropic driven system with quenched disorder, the particles can be
regarded as experiencing
an effective shaking temperature <cit.> T_ eff
produced by the pinning, where T_ eff∝ 1/F_D.
As the
final drive
value increases, this effective temperature decreases and
the amount of activated defect hopping is reduced,
leading to a reduction in the amount of coarsening that occurs.
§ DISCUSSION AND SUMMARY
We have examined the topological defect populations
upon passing through a nonequilibrium phase transition
from a disordered plastic flow state
to a two dimensional ordered or partially ordered
moving state for vortices and skyrmions as a function of
quench rate through the transition.
In the overdamped vortex system,
which as shown in previous work forms a moving smectic
on the ordered side of the transition,
the defect density varies as P_d ∝τ_q^-β with
β≈ 0.36.
It has been argued that this is a result of the fact that the
reordering transition is an absorbing phase transition in
the 1 + 1 directed percolation universality class since
the moving state forms
1D chains.
Similar exponents were obtained in
both simulations <cit.> and experiments
<cit.>.
For the case of skyrmions where there is
a nondissipative Magnus term, the ordered system forms a more isotropic
moving crystal rather than a smectic state.
In general, we find that the skyrmions can reach a much more
ordered state than the
vortices
and that for the skyrmions, β≈ 0.5.
This suggests that the dynamical ordering transition for the skyrmions
falls into a different universality class
than that of the vortices.
We also argue that coarsening may be occurring for the skyrmions
on the ordered side of the transition and that
the difference in the skyrmion and vortex exponents could
be the result of coarsening dynamics.
To test this, we performed
instantaneous quenches across the transition and found a similar
decay in the defect populations for vortices and skyrmions at
short times; however, at longer times, the defect population saturates
much sooner and at a higher level for the vortices as the defects become
trapped in the smectic structure, while for skyrmions the system continues
to coarsen for a much longer time.
For the skyrmions, the defect population after an instantaneous
quench decays as a power law with an exponent
in the range of α = 0.5 to 0.57.
Our results suggest that the Kibble-Zurek scenario
can be applied to nonequilibrium phase transitions
in driven systems with quenched disorder, where
depending on the nature of the ordered state,
different scaling behavior can appear. For skyrmions,
the dynamics may reflect coarsening
rather than a critical scaling due to the ability of defects to annihilate
even during the quench.
It would be interesting
to apply the Kibble-Zurek scenario
to other driven systems with quenched disorder,
such as those with periodic substrates, to three dimensional or
layered systems, and also to explore different types of driving protocols.
This work was supported by the US Department of Energy through
the Los Alamos National Laboratory. Los Alamos National Laboratory is
operated by Triad National Security, LLC, for the National Nuclear Security
Administration of the U. S. Department of Energy (Contract No. 892333218NCA000001).
|
http://arxiv.org/abs/2307.02457v1 | 20230705173144 | DeSRA: Detect and Delete the Artifacts of GAN-based Real-World Super-Resolution Models | [
"Liangbin Xie",
"Xintao Wang",
"Xiangyu Chen",
"Gen Li",
"Ying Shan",
"Jiantao Zhou",
"Chao Dong"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.MM"
] |
DeSRA: Detect and Delete the Artifacts of GAN-based Real-World Super-Resolution Models

Liangbin Xie, Xintao Wang, Xiangyu Chen, Gen Li, Ying Shan, Jiantao Zhou, Chao Dong
(Liangbin Xie, Xintao Wang and Xiangyu Chen contributed equally.)

Liangbin Xie, Xiangyu Chen and Jiantao Zhou are with the State Key Laboratory of Internet of Things for Smart City, University of Macau. Liangbin Xie, Xiangyu Chen and Chao Dong are with the Shenzhen Key Lab of Computer Vision and Pattern Recognition, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences. Liangbin Xie, Xintao Wang and Ying Shan are with ARC Lab, Tencent PCG. Gen Li is with Platform Technologies, Tencent Online Video. Xiangyu Chen and Chao Dong are also with the Shanghai Artificial Intelligence Laboratory. Correspondence to: Chao Dong <[email protected]>.
Image super-resolution (SR) with generative adversarial networks (GAN) has achieved great success in restoring realistic details.
However, it is notorious that GAN-based SR models will inevitably produce unpleasant and undesirable artifacts, especially in practical scenarios.
Previous works typically suppress artifacts with an extra loss penalty in the training phase.
They only work for in-distribution artifact types generated during training.
When applied in real-world scenarios, we observe that those improved methods still generate obviously annoying artifacts during inference.
In this paper, we analyze the cause and characteristics of the GAN artifacts produced in unseen test data without ground-truths.
We then develop a novel method, namely, DeSRA, to Detect and then “Delete” those SR Artifacts in practice.
Specifically, we propose to measure a relative local variance distance from MSE-SR results and GAN-SR results, and locate the problematic areas based on the above distance and semantic-aware thresholds.
After detecting the artifact regions, we develop a finetune procedure to improve GAN-based SR models with a few samples, so that they can deal with similar types of artifacts in more unseen real data.
Equipped with our DeSRA, we can successfully eliminate artifacts from inference and improve the ability of SR models to be applied in real-world scenarios. The code will be available at <https://github.com/TencentARC/DeSRA>.
§ INTRODUCTION
Single image super-resolution (SISR) aims to reconstruct high-resolution (HR) images from their low-resolution(LR) observations.
Since the pioneering work of SRCNN <cit.>, numerous approaches <cit.> have been developed and made great strides in this field.
Among them, GAN-based methods <cit.> have achieved great success in generating realistic SR results with detailed textures.
Recently, BSRGAN <cit.> and Real-ESRGAN <cit.> extend GAN-based models to real-world applications and obtain promising results, demonstrating their immense potential to restore textures for real-world images.
However, it is notorious that GAN-SR methods often generate perceptually unpleasant artifacts, which would seriously affect the user experience.
This problem is exacerbated in real-world scenarios, due to the unknown and complex degradation of LR images.
Several works <cit.> have been proposed to deal with the artifacts generated by GAN-SR models.
Typically, LDL <cit.> proposes to construct a pixel-wise map indicating the probability of each pixel being an artifact by analyzing the type of texture, and then penalizes the artifacts by adding loss during training.
Although it indeed improves GAN-SR results, we can still observe obvious visual artifacts when running inference on real-world testing data, as shown in Fig. <ref>.
It is hard to solve these artifacts only by improving the training on existing data pairs, since such artifacts probably do not appear during the training of GAN-SR models.
To better illustrate this problem, we attempt to classify the GAN-SR artifacts according to the different stages they appear.
1) GAN-training artifacts usually arise in the training phase, mainly due to the unstable optimization <cit.> and the ill-posed property of SR in the in-distribution data. With the presence of ground-truth images, those artifacts could be monitored during training and thus can be mitigated by improving the training, as LDL <cit.> does.
2) There is another kind of artifacts that often appears in the real-world unseen data during inference, which we term as GAN-inference artifacts.
Those artifacts are typically out of training distribution and do not appear in the training phase.
Thus, those methods that focus on synthetic images and improve the training procedure (, LDL) cannot solve those artifacts.
Dealing with GAN-inference artifacts is a new and challenging task.
There is no ground-truth for real-world testing data with GAN-inference artifacts.
Besides, it is hard to simulate these artifacts since they may seldom or even never appear in the training set. In other words, these artifacts are unseen and out of distribution to the models.
However, solving this problem is the key to applying GAN-SR models for real-world scenarios, which has great practical value.
There are two steps to resolve the artifacts. The first step is to detect the artifact regions. In actual training of a GAN-SR model, we usually finetune it from the MSE-SR model with the GAN training strategy, aiming to add fine details. Since there is no ground-truth for the inference results, we adopt the MSE-based results as the reference, which are easily accessible even for real-world data. We then design a quantitative indicator that calculates the local variance to measure the texture difference between results generated by MSE-based and GAN-based models.
After obtaining a pixel-wise distance map, we further introduce a semantic-aware adjustment to enlarge the difference in perceptually artifact-sensitive regions (e.g., building, sea) while suppressing the difference in textured regions (e.g., foliage, animal fur).
We then filter out detection noises and perform morphological manipulations to generate the final artifact mask.
Based on the detected artifact regions, the second step is to construct the pseudo GT and finetune the GAN-SR model. Firstly, we collect a small number of GAN-based results with artifacts and replace the artifact regions with the MSE-based results according to the binarized detection masks. Then we use the combined results as the pseudo GT to construct training pairs and finetune the model for a very short period of iterations. Experimental results show that our fine-tuning strategy can significantly alleviate GAN-inference artifacts and restore visually-pleasant results on other unseen real-world data.
To summarize, 1) We make the first attempt to analyze GAN-inference artifacts that usually appear on unseen test data without ground-truth during inference.
2) Based on our analysis, we design a method to effectively detect regions with GAN-inference artifacts. 3) We further propose a fine-tuning strategy that only requires a small number of artifact images to eliminate the same kinds of artifacts, which bridges the gap of applying SR algorithms to practical scenarios.
4) Compared to previous work, our method is able to detect unseen artifacts more accurately and alleviate the artifacts produced by the GAN-SR model in real-world test data more effectively.
§ RELATED WORK
MSE-based Super-Resolution.
SR methods in this category aim to restore high-fidelity results by minimizing the pixel-wise distance between SR outputs and HR ground-truth like l1 and l2 distance.
Since SRCNN <cit.> successfully applies deep convolution neural networks (CNNs) to the image SR task, numerous deep networks <cit.> have been proposed to further improve the reconstruction quality.
For instance, many methods apply more elaborate convolution module designs, such as residual block <cit.> and dense block <cit.>.
At the same time, many works have been proposed for the Blind SR task <cit.> and video SR task <cit.>.
Recently, several Transformer-based networks <cit.> are proposed and refresh the state-of-the-art performance.
However, due to the ill-posedness of the SR problem, optimizing the pixel-wise distance unavoidably results in smooth reconstructions that lack fine details.
GAN-based Super-Resolution.
To improve the perceptual quality of SR results, GAN-based methods are proposed to introduce generative adversarial learning for SR task <cit.>.
SRGAN uses SRResNet generator and perceptual loss <cit.> to train the network.
ESRGAN further improves the visual quality by adopting Residual-in-Residual Dense Block as the backbone to enhance the generator.
To extend the GAN-SR model to real-world applications, BSRGAN <cit.> and Real-ESRGAN <cit.> design practical degradation models.
For real-world video scenarios, RealBasicVSR <cit.> and FastRealVSR <cit.> also incorporate practical degradation models.
Despite the success, GAN-SR models often suffer from severe perceptually-unpleasant artifacts.
SPSR <cit.> proposes to alleviate the structural distortion by introducing a gradient guidance branch.
LDL <cit.> constructs a pixel-wise map that represents the probability of each pixel being artifact and penalizes the artifacts by introducing extra loss during training.
Nonetheless, these methods would still result in artifacts in actual inference.
§ METHODOLOGY
Preliminary: GAN-SR models aim to learn a generative network G parameterized by θ_GAN that estimates a high-resolution image ŷ for a given low-resolution x image as:
ŷ = G(x;θ_GAN).
To optimize the network parameters, a weighted combination of three sorts of losses is adopted in most GAN-SR methods <cit.> as the loss function:
ℒ_GAN = λ_1ℒ_recons + λ_2ℒ_percep + λ_3ℒ_adv,
where ℒ_recons represents the pixel-wise reconstruction loss such as l_1 or l_2 distance, ℒ_percep
is the perceptual loss <cit.> calculating the feature distance and ℒ_adv denotes the adversarial loss <cit.>.
Due to the instability of GAN training, a MSE-SR model is generally trained first using only ℒ_recons to obtain θ_MSE, and the GAN-SR model is then finetuned from the pretrained θ_MSE using ℒ_GAN to obtain the final θ_GAN.
§.§ Analyze GAN Artifacts Introduced in Inference
Unlike MSE-based optimizations that naturally tend to produce over-smooth reconstruction results, GAN-based models can generate fine details benefiting from adversarial training.
However, GAN-SR models often introduce severe perceptually-unpleasant artifacts that seriously affect the visual quality of restored images, especially in real-world scenarios.
In some cases, the GAN-SR artifacts would make the results even worse than those generated by the MSE-based model, as shown in Fig. <ref>.
Besides, these artifacts are complicated, with many types and characteristics, and are diverse for different image content.
Essentially, methods for dealing with GAN-SR artifacts are all aimed at improving the results obtained in the inference stage.
Nevertheless, the types of artifacts that can be addressed are limited for existing methods, since they deal with the artifacts only by improving the training process.
For instance, LDL <cit.> processes the GAN-SR artifacts by adding penalty loss to problematic regions and improving the learning strategy.
It works for artifact types generated during the training phase, which exist in the in-distribution data of the training set.
We name those artifacts as GAN-training artifacts.
However, some cases of artifacts generated during the inference phase are out-of-distribution, namely, GAN-inference artifacts.
They usually appear in unseen data without reference.
Dealing with GAN-training artifacts would lead to better recovery of training data, but the capability of the model to process out-of-distribution data can only rely on its limited generalization ability.
For real-world applications, how to solve more general GAN-inference artifacts is much more important.
These artifacts are hard to synthesize during training, and thus can not be resolved by only improving the training.
In this work, we focus on processing the GAN-inference artifacts, as those artifacts have a largely negative impact on real-world applications, and solving them has great practical value.
Due to the complexity and diversity of these kinds of artifacts, it is challenging to address all of them at once.
We, therefore, deal with GAN-inference artifacts with the following two characteristics.
1) The artifacts do not appear in the pretrained MSE-SR model (i.e., the generator G with parameters θ_MSE).
2) The artifacts are obvious and cover a large area, so that they can be noticed at first glance.
Some practical examples containing such artifacts are shown in Fig. <ref>.
For the former characteristic, we want to ensure that the artifacts are caused by GANs while the corresponding MSE-SR results are good references for test data to distinguish the artifacts.
For the latter feature, we want to address those artifacts that have a great impact on visual quality.
Before introducing the methods for addressing the artifacts, we first give a glimpse of the causes of GAN-inference artifacts.
We found that manipulations that would slightly change the degradations, such as adding imperceptible Gaussian noise or rescaling the image, could eliminate the artifacts.
As shown in Fig. <ref>, by modulating the adding noise from σ=0 to σ=12/255, the artifacts are alleviated gradually. A similar phenomenon appears when we rescale the input by setting the upscaling factor from ×0.9 to ×1.2.
These operations essentially make the degradation of the real image close to the simulated degradations.
This interesting observation illustrates that those GAN-inference artifacts are partly due to the out-of-distribution degradation of the input image.
Besides, models of different training iterations also result in artifacts with different severity, as shown in Fig. <ref>.
It reflects that the unstable training of GAN is also the cause of these artifacts.
§.§ Automatically Detect GAN-inference Artifacts
At first, we want to automatically detect the regions with obvious artifacts according to some quantitative values in the inference phase before processing these artifacts.
Due to the lack of ground-truth images, we choose the MSE-SR results as the reference to evaluate the artifacts generated by the GAN-SR model.
The rationale is that GAN artifacts are usually caused by too many unwanted high-frequency `details'.
In other words, we introduce GAN training to generate fine details, but we do not want the generated content by GAN to deviate too much from MSE-SR results.
Note that MSE-SR results are easy to access even for unseen test data, as we usually finetune the MSE-SR models to obtain GAN-SR models.
Relative difference of local variance between MSE-SR and GAN-SR patches.
Based on the above analysis, we propose to design a quantitative indicator to measure the difference between patches from MSE-SR and GAN-SR results as a basis for judging the artifacts.
We adopt the standard deviation of pixel intensities within a local region P to indicate the complexity of local texture as:
σ(i,j)=sd(P(i-(n-1)/2:i+(n-1)/2, j-(n-1)/2:j+(n-1)/2)),
where σ(i,j) indicates the local standard deviation at (i,j), sd(·) represents the standard deviation operator, and n denotes the local window size and is set to 11.
Then we calculate the difference between standard deviations of two patches to measure the texture difference d as:
d(x,y)=(σ_x-σ_y)^2.
In our case, x refers to GAN-SR patches, while y denotes MSE-SR patches.
As shown in Fig. <ref> (a), for patches with similar semantics, too large texture difference d from MSE-SR results usually indicates GAN artifacts.
However, d measures the absolute difference between patches, which is also related to the texture complexity itself.
As depicted in Fig. <ref> (b), patches with similar d have different visual quality due to their different underlying semantics. Tree regions do not have artifacts while building regions have.
Thus, we want the texture difference indicator to be a relative value independent of their original texture variation (, the scale of σ), so we further divide d by the product of σ_x and σ_y as:
d^'(x,y)=(σ_x-σ_y)^2/(2σ_xσ_y).
To facilitate subsequent operations for the distance map, we hope to normalize the distance d^' in the range of [0,1].
Inspired by SSIM <cit.>, we adopt a similar transformation:
d^''(x,y)=1/(1+(σ_x-σ_y)^2/(2σ_xσ_y))=2σ_xσ_y/(σ_x^2+σ_y^2).
A constant C is introduced to stabilize the division with a weak denominator.
The final quantitative indicator can be written as:
D=2σ_xσ_y/(σ_x^2+σ_y^2+C).
We derive this formula step by step according to our actual needs in artifact detection, and each step has its practical meaning in our GAN-inference artifact detection.
As shown in the 3^rd and 4^th columns of Fig. <ref>, the map generated from d covers most of the regions with high-frequency differences between MSE-SR and GAN-SR results but cannot distinguish the artifacts, whereas the relative, normalized texture difference D successfully produces the artifact map.
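A minimal sketch of how the relative local-variance indicator D could be computed for a pair of grayscale MSE-SR/GAN-SR results is given below; the window size n=11 follows the paper, while the exact value of the stabilizing constant C is an assumption here.

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, n=11):
    """Local standard deviation within an n x n window (the sigma(i,j) above)."""
    mean = uniform_filter(img, size=n)
    mean_sq = uniform_filter(img * img, size=n)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))

def relative_variance_distance(gan_sr, mse_sr, n=11, C=1e-6):
    """Pixel-wise D = 2*sx*sy / (sx^2 + sy^2 + C); values close to 1 indicate similar local texture."""
    sx = local_std(gan_sr, n)  # GAN-SR local std
    sy = local_std(mse_sr, n)  # MSE-SR local std
    return (2.0 * sx * sy) / (sx * sx + sy * sy + C)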
Semantic-aware adjustment.
After obtaining the distance map, we can exploit it to determine the regions that need to be addressed.
However, it is not enough to only use the difference in texture complexity as a basis for judgment, because the perceptual tolerance of different semantic regions differs.
For example, fine details in areas with complicated textures, such as foliage and hair, are difficult to perceive as artifacts, while large pixel-wise differences in areas with smooth or regular textures, such as sea, sky, and buildings, are easily perceived as artifacts, as shown in the 1^st and 2^nd columns of Fig. <ref>.
Hence, it is required to adjust the artifact map D based on the underlying semantics.
We choose the SegFormer <cit.> as the segmentation model to distinguish different regions. Specifically, the SegFormer is trained on ADE20K, which covers most semantic concepts of the world.
To determine the reasonable adjustment weight for each class, we calculate pixel-wise D values in each class of all images in the training set.
For each class, we sort all the D values in descending order and set the D value in the 85% percentile as the adjustment weight:
A_k=P_85(D_k), k∈{1,2,...,K},
where A_k is the adjustment weight for the k^th class, D_k is the D value of all pixels identified as the k^th class, and P_85 is the 85^th percentile operation. The value of K is 150.
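The per-class adjustment weights could be collected with a sketch like the following, assuming the pixel-wise D values and segmentation labels of all training images have already been flattened into two arrays; the 85th percentile and K=150 follow the paper.

import numpy as np

def compute_adjustment_weights(D_all, seg_all, num_classes=150, pct=85):
    """A_k = 85th percentile of D over all training pixels labeled as class k."""
    weights = np.ones(num_classes)
    for k in range(num_classes):
        vals = D_all[seg_all == k]
        if vals.size > 0:
            weights[k] = np.percentile(vals, pct)
    return weights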
For each image, the refined detected map based on segmentation M is computed as:
M(i,j)=
0, D(i,j)/A_k ≥ threshold;
1, D(i,j)/A_k < threshold.
where D(i,j) is the D value of pixel (i,j) and threshold is a hyper-parameter that controls whether the current pixel is an artifact or not. We empirically set the threshold to 0.7.
We additionally perform morphological manipulations to obtain the final detected map, as shown in the 6^th column of Fig. <ref>.
Concretely, we first perform erosion using a 5×5 all-ones matrix. Then we implement dilation using the matrix to join disparate regions. Next, we fill the hole in the map by using a 3×3 all-ones matrix. Finally, we filter out discrete small regions as the detection noise.
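A possible OpenCV implementation of this mask refinement step is sketched below; the kernel sizes follow the paper, while the minimum connected-component area used to filter detection noise is an assumption.

import cv2
import numpy as np

def refine_artifact_mask(D, seg_labels, class_weights, threshold=0.7, min_area=100):
    """D: pixel-wise indicator; seg_labels: per-pixel class ids; class_weights: A_k per class."""
    A = class_weights[seg_labels]                       # semantic-aware adjustment per pixel
    mask = (D / A < threshold).astype(np.uint8)         # 1 = candidate artifact pixel

    k5 = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, k5)                          # remove thin spurious responses
    mask = cv2.dilate(mask, k5)                         # re-join nearby regions
    k3 = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k3)  # fill small holes

    # filter out small discrete regions as detection noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            mask[labels == i] = 0
    return mask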
§.§ Improve GAN-SR Models with Fine-tuning
The detection of GAN-inference artifacts itself is of great practical value.
We hope to further improve the GAN-SR model based on the detection results.
Note that we aim to solve the GAN-inference artifacts for unseen real data, so there is no ground-truth for the inference results with artifacts.
In practice, “weak restoration without artifacts is even better than strong restoration with artifacts”.
Thus, we exploit the MSE-SR results as the restoration reference.
As illustrated in Fig. <ref>, we use MSE-SR results to replace the regions where artifacts were detected in GAN-SR results. The merged images serve as the pseudo GT. This process is formulated as:
y= M· y_MSE + (1-M) · y_GAN,
where y indicates the generated pseudo GT, y_MSE and y_GAN are MSE-SR and GAN-SR results, (·) represents the element-wise product, and M is the detected artifact map.
We then use a small amount of data to generate the data pairs (x,y) from real data to finetune the model, where x represents the LR data.
We only need to finetune the model for a few iterations (about 1K iterations are enough in our experiments) and the updated model would produce perceptually-pleasant results without obvious artifacts. Moreover, it does not influence other fine details in regions without artifacts.
It can effectively suppress similar kinds of artifacts in more real testing data.
The working mechanism behind this approach is that the finetuning process narrows the gap between the distribution of synthetic data and real data to alleviate the GAN-inference artifacts.
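The pseudo-GT composition itself reduces to a masked blend of the two outputs; a minimal sketch (HxWxC images, binary artifact mask M) is given below.

import numpy as np

def make_pseudo_gt(y_mse, y_gan, mask):
    """y = M * y_MSE + (1 - M) * y_GAN, with M broadcast over the channel dimension."""
    M = mask[..., None].astype(y_gan.dtype)
    return M * y_mse + (1.0 - M) * y_gan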
§ EXPERIMENTS
§.§ Experiment Setup
We exploit two state-of-the-art GAN-SR models, Real-ESRGAN <cit.> and LDL <cit.>, to validate the effectiveness of our method.
We use the officially released model for each method to detect the GAN-inference artifacts.
For finetuning, the training HR patch size is set to 256. The models are trained with 4 NVIDIA A100 GPUs with a total batch size of 48. We finetune the model only for 1000 iterations and the learning rate is 1e-4.
Dataset. Although several real-world super-resolution datasets <cit.> have been proposed, they assume camera-specific degradations and are far from practical scenarios. Therefore, we construct a GAN-SR artifacts dataset.
Considering the diversity of both image content and degradations, we use the validation set of ImageNet-1K <cit.> as the real-world LR data.
Then we choose 200 representative images with GAN-inference artifacts for each method to construct this GAN-SR artifact dataset.
Since there is no ground-truth map for artifact regions to evaluate the algorithm, we manually label the artifact area using labelme <cit.>.
This is the first dataset constructed for GAN-inference artifact detection.
For the finetuning process, we further divide the dataset by using 50 pairs for training and 150 pairs for validation.
Evaluation.
Due to the lack of ground-truth for real-world LR data, classic metrics such as PSNR and SSIM cannot be adopted. We also test NIQE <cit.> and MANIQA <cit.>, and observe that these two metrics do not always match
perceptual visual quality <cit.> (see Section <ref>). Thus, we consider three metrics to evaluate the detection results, including 1) Intersection over Union (IoU) of the detected artifact area and the ground-truth artifact area, 2) Precision of the detection results and 3) Recall of the detection results.
When using A and B to represent the detected artifact area and the ground-truth artifact area for a specific region z, IoU is given by:
IoU=(A∩ B)/(A∪ B).
We can calculate IoU for each image, and we use the average IoU on the validation set to evaluate the detection algorithm. A higher IoU means better detection accuracy.
We then define the set of regions with detected artifacts as S and the set of correct samples T is defined as:
T={z∈ S | (A∩ B)/A > p}.
The metric Precision=N_T/N_S indicates the number of correctly detected regions (N_T) out of the total number of detected regions (N_S).
We define the set of the ground-truth regions as G, and the set of detected GT artifact regions R is computed by:
R={z∈ G | (A∩ B)/B > p}.
The metric Recall=N_R/N_G represents the number of detected GT artifact regions (N_R) out of the total number of GT artifact regions (N_G). p is a threshold and we empirically set it as 0.5.
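A minimal sketch of these region-level metrics, assuming each region z is represented by a pair of boolean masks (A, B) for its detected and ground-truth artifact areas:

import numpy as np

def iou(A, B):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(A, B).sum()
    return np.logical_and(A, B).sum() / union if union > 0 else 0.0

def precision_recall(detected, gt, p=0.5):
    """detected / gt: lists of (A, B) mask pairs for detected and GT regions."""
    n_T = sum(np.logical_and(A, B).sum() / max(A.sum(), 1) > p for A, B in detected)
    n_R = sum(np.logical_and(A, B).sum() / max(B.sum(), 1) > p for A, B in gt)
    return n_T / max(len(detected), 1), n_R / max(len(gt), 1)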
§.§ Artifact Detection Results
We conduct experiments based on Real-ESRGAN <cit.> and LDL <cit.> to validate GAN-inference artifact detection results.
We compare our DeSRA-det described in Sec. <ref> with detection based on NIQE <cit.>, PAL4Inpainting <cit.>, and the modified detection protocol in LDL <cit.>.
Since there is no reference image for the unseen data in the inference phase, we choose the non-reference index NIQE <cit.> to detect the artifacts for comparing the detection scheme without using MSE-SR results.
A similar sliding window mechanism is adopted to compute the pixel-wise map for measuring the local texture and we select the best-performing threshold for filtering the noise to obtain the final detected map.
PAL4Inpainting <cit.> is a newly proposed perceptual artifacts localization method originally for inpainting. We also include it for completeness.
As the artifact detection scheme in LDL <cit.> is designed for GAN-training artifacts with ground-truth images on synthetic data, it cannot be directly applied to solve GAN-inference artifacts without GT.
Thus, we use MSE-SR results to replace GT and set a group of threshold {0.001,0.005,0.01} for the LDL scheme.
Tab. <ref> shows the artifact detection results based on Real-ESRGAN. Our method obtains the best IoU and Precision that far outperform other schemes.
Note that LDL with threshold=0.001 obtains the highest Recall. It is because this scheme treats most areas as artifacts, and thus such detection results are almost meaningless.
Similar conclusions can be drawn from Tab. <ref> for artifact detection results based on LDL.
The visual comparison is presented in Fig. <ref>.
The detection results obtained by our approach have significantly higher accuracy than other schemes.
§.§ Improved GAN-SR Results
We finetune the model to alleviate the GAN-inference artifacts based on the detected artifacts map, as described in Sec. <ref>.
Note that this process has a very small training cost (i.e., 50 training pairs with 1000 iterations).
We compare the artifact detection results before and after using our DeSRA finetuning strategy to verify the effectiveness of improving the GAN-SR model to alleviate GAN-inference artifacts.
The condition for judging the removal of artifacts is A∩ B=0, and the condition for judging the introduction of new artifacts is A∪ B>B.
As depicted in Tab. <ref>, after the application of our DeSRA, IoU decreases from 51.1 to 12.9 on Real-ESRGAN and from 44.5 to 13.9 on LDL, illustrating that the detected area of artifacts is greatly reduced.
The removal rate is 75.43% and 74.97%, showing that three-quarters of the artifacts on unseen test data can be completely removed after finetuning.
Besides, our method does not introduce new additional artifacts, as the addition rate is 0.
We provide the visual comparison between results with and without using our method to improve GAN-SR models, as shown in Fig. <ref>.
Results generated by the improved GAN-SR models have greatly better visual quality without obvious GAN-SR artifacts compared to the original inference results.
All these experimental results demonstrate the effectiveness of our method for alleviating the artifacts and improving the GAN-SR model.
§.§ User Study
To further verify the effectiveness of our DeSRA finetuning strategy, we perform two user studies. The first is the comparison of the results generated by the original GAN-SR models and the finetuned GAN-SR models. For this experiment, the focus of comparison is on whether there are obvious artifacts. We produce a total of 20 sets of images, each containing the output results of the GAN-SR model and finetuned GAN-SR model. These images are randomly shuffled. A total of 15 people participate in the user study and select the image they think has fewer artifacts for each set. The final statistical results are shown in Fig. <ref>. 82.23% of participants think that the results generated by fine-tuned GAN-SR models have fewer artifacts. It can be seen that our method largely removes the artifacts generated by the original model.
The second is the comparison of the results of the finetuned GAN-SR models and the original MSE-SR models. This experiment is conducted to compare whether the results generated by the model have more details. We produce a total of 20 sets of images, each containing the output results of the MSE-SR model and finetuned GAN-SR model. These images are randomly shuffled. A total of 15 people participate in the user study and select the image they think has more details for each set. The final statistical results are shown in Fig. <ref>. 93% of participants think that the results generated by fine-tuned GAN-SR models have more details. It can be seen that the finetuned GAN-SR model generates more detailed results than the MSE-SR model.
§.§ Ablation Study
We first conduct the ablation study on three key designs of our artifact detection method, including relative difference (RD) (i.e., from d to d^'), normalization (i.e., from d^' to D) and semantic-aware threshold.
As shown in Tab. <ref>, the variant without the relative difference suffers from the lowest Precision while achieving full Recall.
It is because the detection based on absolute difference would treat most areas as artifacts.
The detection scheme without normalization also results in low IoU, Precision, and Recall, since the thresholds for each sample probably have a different scale.
Using the semantic-aware threshold can improve the artifact detection results, because the sensitivity of human perception to different semantics is different.
All these results demonstrate the necessity of the three designs in our artifact detection method.
We also conduct an ablation study for the threshold to explore its impact on artifact detection results.
The threshold is used to control whether the pixel is the artifact or not for generating the detected map, as described in Equ. <ref>.
Usually, a precision-recall curve shows the trade-off between precision and recall for different thresholds, and a high area under the curve represents both high recall and high precision.
For simplicity, we directly use “Precision×Recall” to measure the performance of detection results to select the best threshold.
As depicted in Tab. <ref>, the highest Precision×Recall is obtained when the threshold is set to 0.7.
Thus, we select 0.7 as the default setting in our methods.
§ CONCLUSION
In this work, we analyze GAN artifacts introduced in the inference phase and propose a systematic approach to detect and delete these artifacts. We first measure the relative local variance distance from MSE-based and GAN-based results, and then locate the problematic areas based on the distance map and semantic regions. After detecting the regions with artifacts, we use the MSE-based results as the pseudo ground-truth to finetune the model. By using only a small amount of data, the finetuned model can successfully eliminate artifacts from the inference. Experimental results show the superiority of our approach for detecting and deleting the artifacts and we significantly improve the ability of the GAN-SR model in real-world applications.
§ ACKNOWLEDGEMENTS
This work was supported in part by Macau Science and Technology Development Fund under SKLIOTSC-2021-2023, 0072/2020/AMJ, and 0022/2022/A1; in part by Natural Science Foundation of China under 61971476, and 62276251; the Joint Lab of CAS -HK; in part by the Youth Innovation Promotion Association of Chinese Academy of Sciences (No. 2020356).
icml2023
§ APPENDIX
In this appendix, we provide the following materials:
* More discussions about our work. Refer to Section <ref> in the appendix.
* More details of GAN-inference artifacts detection pipeline (referring to Section 3.2 in the main paper). Refer to Section <ref> in the appendix.
* More visual results of GAN-SR artifacts (referring to Section 3.1 in the main paper). Refer to Section <ref> in the appendix.
* Visual results of GT detection mask labeled by labelme (referring to Section 4.1 in the main paper). Refer to Section <ref> in the appendix.
* More visual comparisons of different methods on artifact detection results (referring to Section 4.2 in the main paper). Refer to Section <ref> in this supplementary material.
* Artifact detection results based on SwinIR (referring to Section 4.2 and Section 4.3 in the main paper). Refer to Section <ref> in the appendix.
* More visual comparisons of results generated from original GAN-SR models and the improved GAN-SR models by using our DeSRA (referring to Section 4.2 in the main paper). Refer to Section <ref> in the appendix.
* The unreliability of NIQE <cit.> and MANIQA <cit.> metrics in evaluating the performance of artifact removal (referring to Tab. 3 in the main paper). Refer to Section <ref> in the appendix.
§.§ More Discussions about Our Work
Discussion 1: why do we introduce the concept of GAN-inference artifacts?
Compared with previous work, the focus of this work is different and orthogonal. Previous works focus on improving the realness of SR results or mitigating the artifacts generated in the training phase. In real-world scenarios without ground-truth, if an algorithm can restore sharp or realistic textures but may also generate obvious artifacts, it is still of limited practical use, since such artifacts greatly affect the user experience. For practical applications, obviously annoying artifacts are intolerable, and weak restoration results without artifacts are more acceptable to users than strong restoration results with artifacts. Therefore, dealing with the artifacts that are generated during the inference phase, called GAN-inference artifacts, is of great value for real-world applications. Besides, some cases
of artifacts generated during the inference phase are out-of-distribution, so how to alleviate the GAN-inference artifacts is challenging and needs more attention.
Discussion 2: why do we use MSE-SR results as the reference?
We admit that adopting the MSE-SR results as the reference is not optimal for distinguishing the GAN-inference artifacts. However, 1) For real-world testing data, there is no ground-truth. 2) Detecting GAN-inference artifacts perfectly is a challenging task. From our experiments, it can be observed that when we adopt the MSE-SR results as the reference to detect the artifacts, there are many overlapping areas between our detected artifact map and the GT artifact map. The quantitative and qualitative results illustrate that choosing MSE-SR results as the reference is effective for detecting the GAN-inference artifacts. Deleting the GAN-inference artifacts is a challenging task and this work is the first attempt. We believe there exist other better choices and more elegant algorithms to distinguish the GAN-inference artifacts, which need further exploration.
Discussion 3: why do not we adopt PSNR, SSIM, NIQE … metrics?
1) The GAN-inference artifacts appear on unseen real test data, and in this circumstance the corresponding ground-truth images are absent. Therefore, PSNR and SSIM metrics cannot be adopted to evaluate the performance. 2) We test some no-reference metrics (e.g., NIQE and MANIQA), and observe that these no-reference IQA metrics do not always match perceptual visual quality <cit.> (see Section <ref>). 3) The focus of this work is on detecting and alleviating GAN-inference artifacts. Motivated by the binary classification task, we adopt three metrics (i.e., IoU, precision, and recall) to evaluate the performance.
Discussion 4: why do we assume that GAN-Artifact is usually a large area?
The GAN-inference artifacts are complicated and diverse, and they appear in both large and small areas. Previous works focus on dealing with GAN-training artifacts and ignore the GAN-inference artifacts. When applied in real-world scenarios, those methods still generate obviously annoying artifacts during inference. Dealing with GAN-inference artifacts is a challenging task, and several steps are needed to resolve this problem. This work is the first attempt, and we only consider the artifacts that are obvious and cover a large area, since this kind of artifact has a great impact on human perception. We hope that more researchers will pay attention to solving the GAN-inference artifacts, and that follow-up works can deal with the GAN-inference artifacts that cover small areas.
Discussion 5: semantic segmentation.
We admit that the detection results based on semantic segmentation are not entirely accurate. However, they are roughly accurate enough to help distinguish artifacts and guide further processing, and the lost precision has a limited impact on practical applications.
Discussion 6: online continual learning.
Our method can provide a new paradigm combined with continual learning <cit.> to address the artifacts that appear in the inference stage online.
For example, for an online SR system that processes real-world data, we can use our detection pipeline to detect whether the results have GAN-inference artifacts. We can then use the images with detected artifacts to quickly finetune the SR model so that it can deal with similar kinds of artifacts, until the system encounters a new kind of GAN-inference artifacts.
Continual learning is widely studied on high-level vision tasks, but has not been applied to SR.
Our approach and application scenes naturally introduce continual learning to SR.
We hope to investigate this problem in the future, since it can greatly advance the application of GAN-SR methods in practical scenarios.
§.§ Details of GAN-inference artifacts detection pipeline
In this section, we first describe the details of GAN-inference artifacts detection pipeline. Then, we provide more details about calculating the adjustment weights.
Overall pipeline of detecting GAN-inference artifacts. The pipeline of detecting GAN-inference artifacts is shown in Fig. <ref> (a). For a GAN-SR and MSE-SR result, we first calculate the indicator D according to equation 7 in the main paper. Then we generate the segmentation map of MSE-SR result by adopting SegFormer. The segmentation map will be converted into semantic-aware adjustment weight A according to the calculated adjustment weights of each semantic class (Fig. <ref> (b)). By combining A, D and setting threshold, we can obtain the refined detected map M:
M(i,j)=
0, D(i,j)/A_k ≥ threshold;
1, D(i,j)/A_k < threshold,
where D(i,j) is the D value of pixel (i,j) and threshold is empirically set to 0.7. At last, we perform morphological manipulations to obtain the final detected map. Concretely, we first perform erosion using a 5×5 all-ones matrix. Then we implement dilation using the matrix to join disparate regions. Next, we fill the hole in the map by using a 3×3 all-ones matrix. Finally, we filter out discrete small regions as the detection noise.
Note that the visualization results of indicator D and D_refine in Fig. <ref> (a) are different from Fig. 5 in the main paper. Here we show their original values. In the main paper, for better understanding, we show their corresponding binary maps by comparing their original values with the threshold 0.7. Values that are smaller than the threshold 0.7 are set to 1.
Details of calculating adjustment weights. The calculation of adjustment weights for each semantic class is illustrated in Fig. <ref> (b). We first generate the corresponding low-resolution version by adopting the degradation model used in the training phase on the DIV2K training dataset.
Then, we generate the MSE-SR and GAN-SR results for each distorted image. After that, we calculate pixel-wise indicator D between the MSE-SR and GAN-SR results. To distinguish D of each semantic class, we choose SegFormer <cit.> as the segmentation model, and obtain the segmentation map of MSE-SR results.
By incorporating the segmentation map and indicator D, we get pixel-wise D values in each class of DIV2K. For each class, we sort all the D values in descending order and set the D value in the 85% percentile as the adjustment weight:
A_k=P_85(D_k), k∈{1,2,...,K},
where A_k is the adjustment weight for the k^th class, D_k denotes the D value of all pixels identified as the k^th class, and P_85 is the 85^th percentile operation. For example, the values of A_sky, A_tree and A_building are 1, 0.75 and 0.80, respectively.
§.§ More Visual Results of GAN-SR Artifacts
In real-world scenarios, GAN-SR models often introduce severe perceptually-unpleasant artifacts that seriously affect the visual quality of restored images. As depicted in Fig. <ref>, in some cases, the GAN-SR artifacts would make the results even worse than those generated by the MSE-based model.
§.§ Visual Results of GT Detection Mask
For Real-ESRGAN <cit.>, LDL <cit.> and SwinIR <cit.>, we construct their independent GAN-SR artifacts datasets. Each dataset contains 200 representative images with GAN-inference artifacts. Since there is no ground-truth map for artifact regions to evaluate the algorithm, we manually label the artifact areas using labelme <cit.> and generate a binary map to indicate the artifact region, as shown in Fig. <ref>.
§.§ More Visual Comparisons of Different Methods on Artifact Detection Results
For the GAN-inference artifacts generated by Real-ESRGAN <cit.>, LDL <cit.> and SwinIR <cit.>, we compare different methods on artifact detection results. The visual comparison is presented in Fig. <ref>.
The detection results obtained by our approach have significantly higher accuracy than other schemes.
§.§ Artifact Detection Results based on SwinIR
To validate the effectiveness of our proposed GAN-inference artifact detection algorithm and fine-tuning strategy, we further conduct experiments based on SwinIR. Due to the lack of an officially released pretrained weight for the discriminator, we retrain SwinIR using the officially released code[https://github.com/JingyunLiang/SwinIR] in the real-world setting and obtain the corresponding MSE-SR and GAN-SR models. For the GAN-inference artifacts generated by SwinIR, the artifact detection results are shown in Tab. <ref>. We can observe that our method obtains the best IoU and Precision, far outperforming other schemes.
After obtaining the detected artifacts map, we finetune SwinIR with 1000 iterations to alleviate the GAN-inference artifacts. As depicted in Tab. <ref>, after the application of our DeSRA, IoU decreases from 57.9 to 21.8, illustrating that the detected area of artifacts is greatly reduced. The removal rate is 61.35%, showing that three-fifths of the artifacts on unseen test data can be completely removed after fine-tuning. Besides, our method does not introduce new additional artifacts, as the addition rate is 0.
§.§ More Visual Comparisons between the Original GAN-SR Models and the Improved GAN-SR Models with DeSRA
We provide the visual comparison between results with and without using our method to improve GAN-SR models, as shown in Fig. <ref>, Fig. <ref> and Fig. <ref>. We can observe that results generated by the improved GAN-SR models have greatly better visual quality
without obvious GAN-SR artifacts compared to the original inference results. All these experimental results demonstrate the effectiveness of our method for alleviating the artifacts and improving the GAN-SR model (Real-ESRGAN, LDL, and SwinIR).
§.§ Unreliability of NIQE and MANIQA Metrics
In Tab. 3 of the main paper and Tab. <ref> in this supplementary material, we adopt IoU, Removal rate, and Addition rate metrics to evaluate the performance of improved GAN-SR models with DeSRA. Although NIQE <cit.> is the commonly-used metric in GAN-SR works, we observe that this metric does not reflect well the performance of the improved GAN-SR models. As illustrated in Fig. <ref>, it can be clearly observed that the images in the second column have better visual results with fewer artifacts than the images in the first column. However, the values of NIQE (lower is better) and MANIQA <cit.> (higher is better) show the opposite results. MANIQA is the champion of the NTIRE 2022 Perceptual Image Quality Assessment Challenge. Therefore, we do not adopt these two metrics to evaluate the performance.
|
http://arxiv.org/abs/2307.00920v1 | 20230703104407 | Node-weighted Graph Convolutional Network for Depression Detection in Transcribed Clinical Interviews | [
"Sergio Burdisso",
"Esaú Villatoro-Tello",
"Srikanth Madikeri",
"Petr Motlicek"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Node-weighted Graph Convolutional Network for Depression Detection in Transcribed Clinical Interviews

Sergio Burdisso, Esaú Villatoro-Tello, Srikanth Madikeri, Petr Motlicek

August 1, 2023
=========================================================================================================
We propose a simple approach for weighting self-connecting edges in a Graph Convolutional Network (GCN) and show its impact on depression detection from transcribed clinical interviews. To this end, we use a GCN for modeling non-consecutive and long-distance semantics to classify the transcriptions into depressed or control subjects. The proposed method aims to mitigate the limiting assumptions of locality and the equal importance of self-connections vs. edges to neighboring nodes in GCNs, while preserving attractive features such as low computational cost, data agnosticism, and interpretability capabilities. We perform an exhaustive evaluation on two benchmark datasets. Results show that our approach consistently outperforms the vanilla GCN model as well as previously reported results, achieving an F1=0.84 on both datasets. Finally, a qualitative analysis illustrates the interpretability capabilities of the proposed approach and its alignment with previous findings in psychology.
Index Terms: depression detection, graph neural networks, node weighted graphs, limited training data, interpretability.
§ INTRODUCTION
According to the World Health Organization (WHO), an estimated 970 million people in the world are living with a mental disorder, with depressive and anxiety disorders being the most prevalent <cit.>. Traditionally, the diagnosis and assessment of depression are done using semi-structured interviews and a Patient Health Questionnaire (PHQ) <cit.> as the main tools, and they are generally based on the judgment of general practitioners.
However, practitioners may fail to recognize as many as half of all patients with depression <cit.>. Therefore, there is an acknowledged necessity for digital solutions for (i) assisting practitioners in reducing misdiagnosis, and (ii) addressing the burden of mental illness diagnosis and treatment <cit.>.
Previous research has shown that language is a powerful indicator of our personality, social, or emotional status, and mental health <cit.>. As a result, many work exists at the intersection of artificial intelligence (AI), speech and natural language processing, psycholinguistics, and clinical psychology, showing that screening interviews, projective techniques, and essays writing provide valuable insights into the cognitive and behavioral functioning of subjects <cit.>.
Existing work on depression detection, via the use of textual transcriptions from psychotherapy sessions, varies from sentiment-based approaches <cit.>, going through methods designed to identify relevant vocabulary <cit.>, to various neural network architectures to best model the interviews, including bidirectional LSTM <cit.>, hierarchical attention-based networks <cit.>, and deep neural graph structures <cit.>. Other studies have experimented with multi-target hierarchical regression models to predict individual depression symptoms, aiming to improve performance by simultaneously predicting both binary diagnostic and depression severity regression scores <cit.>.
Finally, some works have explored the utility of enriching the models with additional (domain-specific) data <cit.>, e.g., incorporating external linguistic knowledge to enforce higher values for attention weights corresponding to salient affective words.
Contrary to previous work, our proposed approach has the following salient features: does not require any external resource (data agnostic), does not depend on large pre-trained language models to learn embeddings (low computational cost), and has interpretability capabilities by design, a must in AI-supported diagnosis.
In particular, we propose to use a Graph Convolutional Network (GCN) to classify the transcribed sessions between a therapist and a subject seeking medical attention.
Overall, the main contributions of this paper are: (1) a novel weighting approach for self-connection nodes to address the limiting assumptions of locality and the equal importance of self-connections vs. edges to neighboring nodes in GCNs; (2) to the best of our knowledge, we evaluate for the first time an inductive implementation of GCNs in the task of depression detection from transcribed interviews, outperforming previously published results on two benchmark datasets; and (3) we demonstrate the interpretability potential of the proposed model, a key characteristic in AI-supported diagnosis, showing that what the model learned aligns with findings in psychology research.[Our code is available at <https://github.com/idiap/Node_weighted_GCN_for_depression_detection>]
§ GRAPH NEURAL NETWORK ARCHITECTURE
A Graph Convolutional Network (GCN) is a multilayer neural network that operates directly on a graph and induces embedding vectors of nodes based on the properties of their neighbors <cit.> (Figure <ref>). Formally, considering a graph G=(V, E, A), where V (|V|=n) represents the set of nodes, E is the set of edges, and A∈ℛ^n× n an adjacency matrix representing the edge values between nodes. The propagation rule for learning the new k-dimensional node feature matrix H^(l)∈ℛ^n× k is computed as:
H^(l+1) = f(H^(l),A) = σ(ÃH^(l)W^(l))
where Ã=D^-1/2AD^-1/2 represents the normalized symmetric adjacency matrix, D_ii = ∑_jA_ij is the degree matrix of adjacency matrix A, W^(l) depicts the weight to be learned in the l_th layer, and σ is an activation function, e.g., ReLU: σ(x)=max(0,x).
In order to use GCNs for text classification <cit.>, we generate a large and heterogeneous text graph that contains word nodes (V_words) and training document nodes (V_tr_docs) so that global word co-occurrences can be explicitly modeled.
Accordingly, the entire set of nodes is composed as V={V_tr_docs, V_words}, i.e. the number of training documents (corpus size) plus the number of unique words (vocabulary size) of the corpus. Particularly, in this work, we use a two-layer GCN defined as:
H^(1)= σ(ÃH^(0)W^(0))
Z = softmax(ÃH^(1)W^(1))
where W^(0) is the learned word embeddings lookup table, and W^(1) represents the learned weight matrix in the second layer. Loss is computed by means of the cross-entropy function between Z_i and Y_i, ∀ i ∈ V_tr_docs. Intuitively the first layer learns the intermediate representation of the nodes (words and documents) while the second one learns the output representation, as illustrated in Figure <ref>.
Note that in the output representation, label information from the documents has been propagated to the word nodes as output probabilities, allowing the model to learn the relation between words and output labels (e.g. depression or control labels), a key aspect favoring the interpretability of the model (see Section <ref>).
In order to make a fair comparison of the GCN's performance against other classification approaches,
in this work we use the inductive version of GCNs as described in <cit.> instead of the original transductive one <cit.>.
Thus, the initial node feature matrix H^(0) is generated such that word node vectors are represented as one-hot vectors, i.e., H^(0)_i = {0,1}^m, ∀ i ∈ V_words, where m is the vocabulary size of the training documents. And, for the representation of document node vectors H^(0)_i,∀ i ∈ V_tr_docs the term-frequency-inverse document frequency (TF-IDF) values of the corresponding word in that specific document is used, i.e., H^(0)_ij= TF-IDF(i,j), ∀ i,j where i and j are a document and a word, respectively.
For the definition of the edge types in A, we consider (i) word-to-word, (ii) word-to-document, similar to <cit.>. Our key contribution here is the addition of a new edge type for (iii) self-connections, acting as a trade-off parameter in the definition of Ã.
Formally, this is expressed as follows:
A_ij =
PMI(i,j) if i,j are words & PMI(i,j)>0
PR(i,j) if i,j are words & i=j
TF-IDF_i,j if i is a document & j is a word
0 otherwise
where PMI is the Point-wise Mutual Information and PR stands for the PageRank algorithm <cit.>, which, given a graph, computes the importance of each node in relation to the role it plays in the overall structure of the graph.
Intuitively, high PMI values will strongly link word nodes with high semantic correlation, high TF-IDF values will strongly link word nodes to specific document nodes, and high PageRank values will strongly link a node to itself proportionally to its global structural relevance;
this last modification aims to mitigate the assumption of locality and equal importance of self-loops, a known limitation in the vanilla GCN <cit.>. We will refer to this modification as ω-GCN.
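A sketch of how the heterogeneous adjacency matrix and its symmetric normalization could be assembled is shown below, assuming the PMI, PageRank, and TF-IDF values are precomputed; document self-loops are left at zero, following the equation above.

import numpy as np

def build_normalized_adjacency(pmi, pagerank, tfidf):
    """pmi: (m, m) word-word PMI; pagerank: (m,) word scores; tfidf: (n_docs, m) doc-word weights."""
    n, m = tfidf.shape
    A = np.zeros((n + m, n + m))
    A[n:, n:] = np.maximum(pmi, 0.0)       # word-word edges (positive PMI only)
    np.fill_diagonal(A[n:, n:], pagerank)  # PageRank-weighted self-connections (omega-GCN)
    A[:n, n:] = tfidf                      # document-word edges
    A[n:, :n] = tfidf.T
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}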
Finally, it is worth mentioning that GCNs allow one to easily optimize model efficiency by applying simple feature selection techniques to reduce the vocabulary size (i.e. number of word nodes) prior to the graph construction, which has a direct impact on both the number of trainable parameters and the model's interpretability (see section <ref> and <ref>).
§ EXPERIMENTAL SETUP
§.§ Datasets
For the experiments, we use the Distress Analysis Interview Corpus - wizard of Oz (DAIC-WOZ) dataset <cit.> and the Extended Distress Analysis Interview Corpus (E-DAIC) <cit.>. Both datasets contain semi-structured clinical interviews in North American English, performed by an animated virtual interviewer,[For DAIC-WOZ the virtual interviewer is human-controlled, while for the E-DAIC the virtual interviewer is fully automatic. A portion of the DAIC-WOZ transcriptions were generated using the ELAN tool from the Max Planck Institute for Psycholinguistics <cit.>, while the E-DAIC transcripts were obtained using Google Cloud's ASR service.] designed to support the diagnosis of different psychological distress conditions. Datasets are multimodal corpora, composed by audio and video recordings, transcribed text from the interviews, and the Patient Health Questionnaire (PHQ-8 <cit.>) scores. During our experiments, we only used the speech transcripts from the subjects's responses.
Table <ref> shows the composition of the datasets. Observe that the vocabulary size of DAIC-WOZ is smaller than that of E-DAIC, suggesting less variation in the terminology of the provided answers, which is also reflected in its lower lexical richness (LR); the higher LR of E-DAIC is an indicator of its complexity.
§.§ Implementation details
As baseline models, we used different BERT-based models as well as simple models. More precisely, we used six pre-trained transformer-based models (bert-base-cased, bert-base-uncased, bert-large-cased, bert-large-uncased, roberta-base, roberta-large) to which a final linear layer was added to classify the input using, as usual, the [CLS] classification special token.
In addition, to make the baselines as standard and simple as possible we made use of the Transformers Python package <cit.> AutoModelForSequenceClassification class so that the size and number of linear layers are automatically selected according to each model. For each model, we also evaluated two versions, one enabling fine tuning of the base model and another not fine tuning the base model as part of the training process.
Regarding simple and classic models, we used a Support Vector Machine (SVM) with linear kernel and Logistic Regression (LR) model, both using TF-IDF-weighted words as features.
For GCN models, the size of nodes' intermediate representation was set to 64, i.e. we set k=64 for the k-dimensional feature matrix H^(1)∈ℛ^n× k.
We performed a preliminary evaluation varying k ∈{32, 64, 128, 256, 300} from which 64 showed to consistently be the best performing one.
In addition, since GCN models allow us to control the vocabulary size (i.e. number of word nodes), we trained different GCNs using different vocabulary sizes, as with SVM and LR models.
Namely, we applied the following feature selection techniques to build the vocabulary: (a) automatic selection based on term weights learned using LR; (b) top-k best selection based on ANOVA F-value between words and labels with k ∈{100, 250, 500, 1000, 1500}; and (c) full vocabulary.
Trying different sizes allowed us to control the complexity of the final model; GCNs with smaller vocabularies have smaller graphs, making them simpler and easier to interpret.
Finally, all neural-based models were implemented using PyTorch while non-neural ones using Scikit-learn. Additionally, for a fair comparison, all the models were optimized on each dataset using Optuna <cit.> with 100 trials for hyperparameter search maximizing the macro averaged F1 score. For all neural-based models AdamW <cit.> optimizer (β_1=0.9, β_2=0.999, ϵ=1e-8) was used with learning rate and number of epochs n searched in γ∈ [1e-7, 1e-3] and n ∈ [1, 10], respectively.
On the other hand, for non-neural baselines, search was performed varying the regularization parameter C ∈ [1e-3, 10], the class weight (balanced, none) and the penalty norm (L2, L1, L2 + L1, or none).
As a result, a total of 40 optimized models were obtained.[14 simple baselines (SVM and LR with 7 vocabulary sizes), 12 BERT-based baselines (6 models with/without fine tuning), and 14 GCN models (vanilla GCN and ω-GCN with 7 vocabulary sizes).]
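The hyperparameter search described above could look roughly like the following Optuna loop; train_and_eval is a placeholder for training a model with the sampled hyperparameters and returning its dev macro-F1.

import optuna

def train_and_eval(lr, epochs):
    """Placeholder: train the model with these hyperparameters and return the dev macro-F1."""
    raise NotImplementedError

def objective(trial):
    lr = trial.suggest_float("lr", 1e-7, 1e-3, log=True)
    epochs = trial.suggest_int("epochs", 1, 10)
    return train_and_eval(lr=lr, epochs=epochs)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)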
§ RESULTS
Table <ref> summarizes our results for the experiments on the dev partition DAIC-WOZ and on the dev and test partitions of E-DAIC.[DAIC-WOZ test partition is not publicly available.] For each partition, we divide the table into non-GCN models (i.e., classic and BERT-based baselines and previous research) and GCN models (vanilla GCN and our proposed ω-GCN). In addition to the results, we also report the total number of trainable parameters (`#Params') and the vocabulary size (`Vocab size'). Dashes indicate the corresponding metric is not reported in the original paper, while results marked with * are not directly comparable as the model uses external domain-specific resources. Finally, for each dataset, we only report the best-performing models among all 40 optimized models (see Section <ref>).
Overall, we see that the ω-GCN approach consistently outperforms its vanilla version.
In addition, the model can outperform baselines and previously reported works when the correct number of features is selected.
For instance, on DAIC-WOZ, ω-GCN obtains a macro F1=0.84 with only top-250 words.
On the E-DAIC dataset, the ω-GCN obtains the best performance among the considered methods, with a macro-F1 of 0.80 and 0.84 for the dev and test partitions respectively.
However, unlike the DAIC-WOZ dev results, reducing the vocabulary size leads to unstable performance between the dev and test sets, suggesting the models are sensitive to the (reduced) vocabulary discrepancy between the training and evaluation sets, a phenomenon similar to the one reported in <cit.>, where the authors argue it is due to the complexity of the dataset. We leave exploring methods to mitigate this phenomenon as future work, for instance by moving from a purely word-based vocabulary to an embedding-powered or sub-word one (e.g. as BERT with WordPiece).
Finally, GCNs have order-of-magnitude fewer parameters than BERT models and are not constrained to a maximum sequence length (e.g. 512 tokens for BERT-based models).
§.§ Exploring the model's interpretability
One of the main advantages of the proposed GCN-based approach is that it does not sacrifice performance for the sake of transparency.
Figure <ref> shows the UMAP <cit.> 2-dimensional projection of the 64-dimensional word and document embeddings learned by the best performing ω-GCN model on DAIC-WOZ.
More precisely, these embeddings correspond to the intermediate representation H^(1), with the 250 word nodes painted with the learned class in the output representation Z.
The figure illustrates how the model can make use of the graph structure to learn, in the same latent space, document and word embeddings whose distance is influenced by their mutual relation and the output values.
These embeddings make it possible to identify clusters of strongly related words with high co-occurrence that are linked to similar documents in the dataset, i.e., dataset-specific “topics” that experts could potentially use for qualitative analysis.
For instance, in DAIC-WOZ, interviews were conducted with war veterans and Figure <ref> depicts a few examples of these word clusters —e.g. (1) about “veterans” and words like “worst”, “disturbing”, “avoiding” and “hurting”; (2) about “police”, “strike”, “drama”, “moods”, “trigger”, “affects”; (3) about “attacks”, “injustices”, “solution”; and (4) about “unemployed”, “suffering”, “awful”, “afraid”.
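A projection of this kind can be reproduced with a few lines; the sketch below uses random stand-ins for the intermediate representation H^(1) and the classes read off Z, since the trained model itself is not reproduced here.

import numpy as np
import umap
import matplotlib.pyplot as plt

# Stand-ins: in the paper, H1 would be the 64-dimensional intermediate embeddings of the
# 250 word nodes plus the document nodes, and labels the class read off the output Z.
rng = np.random.default_rng(0)
H1 = rng.normal(size=(400, 64))
labels = rng.integers(0, 2, size=400)

proj = umap.UMAP(n_components=2, random_state=0).fit_transform(H1)
plt.scatter(proj[:, 0], proj[:, 1], c=labels, cmap="coolwarm", s=8)
plt.title("UMAP projection of word/document embeddings")
plt.show()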
Finally, we performed an analysis of how much of the acquired knowledge by the model fulfills known classical psychological theories/properties.
For this, we used the Linguistic Inquiry and Word Count (LIWC) <cit.> lexical resource, composed of more than 4000 words categorized into 64 psychological dimensions. Figure <ref> shows the result of this analysis. The x-axis depicts the psychological dimensions of the words learned by the model, while the y-axis represents the normalized frequency of the respective dimensions. As shown, the model learned that depressed subjects more frequently employ dimensions related to affective or emotional processes (affect), cognitive processes (cogmech), relativity (relativ), and negative emotions (negemo). On the contrary, control subjects more frequently use the social processes (social), biological processes (bio), positive emotions (posemo), family and body dimensions. Overall, these findings are aligned with previously reported psychological work <cit.>.
§ CONCLUSIONS
This paper proposes the use of Graph Convolutional Networks to detect depression from transcribed clinical interviews. The proposed approach has some attractive features, including a simple yet novel weighting approach for self-connection edges, a significantly low computational cost in terms of trainable parameters, and interpretability capabilities that help to understand the model's rationale.
Evaluation results on two depression-related datasets indicate that the proposed approach is able to consistently outperform its vanilla version. Our best configurations require orders of magnitude fewer trainable parameters than transformer-based models and yet, with the right vocabulary size, are able to obtain better F1 scores than baselines and previously reported results.
Finally, an exploration of the interpretability capabilities of the model showed that what it learned from raw data was, in fact, aligned with previously reported work from the psychological theory.
As future work, we plan to use different nodes, from simple sub-word nodes to node hierarchies with different types. For instance, the addition of acoustic nodes, as a third type of node, would allow information transfer among acoustic, words and document embeddings.
IEEEtran
|
http://arxiv.org/abs/2307.01062v1 | 20230703144102 | A Data-Driven Approach to Geometric Modeling of Systems with Low-Bandwidth Actuator Dynamics | [
"Siming Deng",
"Junning Liu",
"Bibekananda Datta",
"Aishwarya Pantula",
"David H. Gracias",
"Thao D. Nguyen",
"Brian A. Bittner",
"Noah J. Cowan"
] | cs.RO | [
"cs.RO",
"cs.SY",
"eess.SY"
] |
A Data-Driven Approach to Geometric Modeling of Systems
with Low-Bandwidth Actuator Dynamics
Siming Deng, Junning Liu, Bibekananda Datta, Aishwarya Pantula, David H. Gracias, Thao D. Nguyen, Brian A. Bittner, Noah J. Cowan
August 1, 2023
================================================================
It is challenging to perform identification on soft robots due to
their underactuated, high dimensional dynamics. In this work, we
present a data-driven modeling framework, based on geometric
mechanics (also known as gauge theory), that can be applied to
systems with low-bandwidth actuation of the shape space. By
exploiting temporal asymmetries in actuator dynamics, our approach
enables the design of robots that can be driven by a single control
input. We present a method for constructing a series connected
model comprising actuator and locomotor dynamics based on data
points from stochastically perturbed, repeated behaviors around the
observed limit cycle. We demonstrate our methods on a real world
example of a soft crawler made by stimuli-responsive hydrogels that
locomotes on merely one cycling control signal by utilizing its
geometric and temporal asymmetry. For systems with first-order,
low-pass actuator dynamics, such as swelling-driven actuators used
in hydrogel crawlers, we show that first order Taylor approximations
can well capture the dynamics of the system shape as well as its
movements. Finally, we propose an approach of numerically optimizing
control signals by iteratively refining models and optimizing the
input waveform.
§ INTRODUCTION
Many traditional robots rely predominantly on rigid, fully actuated
mechanisms. While these robots maintain superior force and precision
compare to natural organisms, these rigid machines usually struggle in
tasks that involve safe interactions with humans, handling deformable
objects, and operations in unstructured environments
<cit.>. Designs from nature have inspired the
development of compliant mechanisms in robotics, enabling new
capabilities <cit.>.
The emergence of such soft
components in modern robotic platforms has provided new avenues to
improved adaptability, safety, cost, and energy efficiency. On the
other hand, the compliant nature of these soft components greatly
increases the internal degrees of freedom as well as the degree of
underactuation. Specifically, soft actuators such as pressure-powered
<cit.> or stimuli-driven
devices <cit.>
usually exhibit nonlinear or coupled (i.e. multiple body segments
reacting to the same excitation signal in different ways) dynamics,
such as change in shape or frictional behavior in response to applied
temperature, light, and electromagnetic field, in their actuation. It
is often considered a luxury to obtain precise control of the system
shape of a soft actuator. One possibility is to leverage
low-bandwidth, passive responses to stimuli as a means of locomotion,
taking advantage of differences in temporal dynamics between subsystems
or components. In this work, we investigate such passive responses
within a systematic framework for control of soft systems, focusing on
systems with low-bandwidth shape changes in response to a single
actuator input. Modeling actuator dynamics and its effects on the
system can streamline engineering efforts to design and control soft
robots, maximizing their capabilities with less exploratory or
exhaustive experimentation.
For dissipative systems with high bandwidth control in shape, previous
work has been done creating a mathematical framework to model, plan,
and optimize robot behaviors <cit.>,
and the same framework has been instrumental in understanding cyclic
animal locomotion <cit.>. The core premise underlying this
work is that complex locomotor mechanics can be rewritten in a
kinematic form, owing to the assumption that Rayleigh dissipation
dominates <cit.>. Here, the body velocity of the system is a
shape defined linear mapping of shape velocity. The challenges of
precise fabrication and sensing make it infeasible to build the model
from first principles for any practical systems. Bittner et al.<cit.> presented a data-driven approach to this problem:
instead of a global model, their method constructs a local model in
the neighborhood of the observed limit cycle, using data points from
stochastically perturbed, repeated behaviors. More recent work
<cit.> extended this data-driven approach to
shape-underactuated systems, which only have high bandwidth control
available to a subset of the shape space. The ability to build local
models provides the opportunity to sample candidate gaits offline for
sample efficient hardware-in-the-loop optimization.
Here, we extend data-driven geometric mechanics modeling methods to
platforms with low-bandwidth control distributed across the
shape space, motivated by our prior work with a realized soft robot
made of hydrogel <cit.>, see Fig. <ref>. In this prior work, we
conceptualized and built a thermo-responsive hydrogel
crawler. Although stimuli-responsive shape changes for hydrogels are
ubiquitous in literature
<cit.>, the design of our
robot exploited the swelling- and shrinking-induced bending mechanism, morphological asymmetry, and asymmetry in friction force in response to changes in the surrounding temperature to achieve locomotion.
In this crawler, there are three distinct segments: a suspended linker
segment connects two end bilayer segments comprising active
poly(N-isopropylacrylamide) (pNIPAM) and passive
polyacrylamide (pAAM) layers with different morphologies. Asymmetry in
friction forces between the two bilayer segments at low and high temperature, caused by morphological asymmetry, allowed the robot to change its anchor during a temperature cycle and move unidirectionally. We further hypothesize that the relative swelling speeds of these bilayers create asymmetric
ground interactions (primarily friction forces) that can be exploited
for locomotion. Utilizing the asymmetric response time among segments,
this robot is able to locomote with a single cyclic input—temperature
cycles. Alongside the fabrication of this physical crawler, we also developed a Finite Element Analysis (FEA) model in <cit.> to simulate the response and investigate the deformation mechanisms. In this paper, the hydrogel crawler data are generated from this FEA model.
Our core contribution, presented in
Section <ref>, is an extension of the current data-driven
modeling techniques of drag dominated systems to a more challenging
class of underactuated systems, where the entire shape is subject to a
low-bandwidth control input (where prior work <cit.>
required at least one element to be accessed by high bandwidth
control). In <ref>, we demonstrate our methods on a well
known, analytically tractable system, that has been modified so to
include low-bandwidth actuation of its shape parameters. Finally, in
Section <ref> we test our methods on a
high-dimensional, finite element model of our previously published
hydrogel robot <cit.>. In both examples, we show how the actuator dynamics can be simultaneously modeled with the body movements, enabling a data-driven modeling architecture for a broader class of soft or
underactuated systems. Further, we use these examples to numerically
optimize a parameterized input signal for certain objectives using an
iterative parameter optimization and model refinement approach.
§ BACKGROUND
§.§ Geometric locomotion model
Geometric mechanics <cit.>
provides a framework for locomotion based on exploiting symmetry. A
core result from this field is the reduced Lagrangian or
reconstruction equation, which makes the distinction between the
internal configuration (shape) of a locomotion system and its position
and orientation (group) in a spatially fixed reference frame. Central
to this framework is the idea of group invariance of the dynamics
<cit.>: a shape change that moves the system in a
certain way—in the system body frame—will do the same at any
position and orientation in the environment, invariant to the absolute
position and orientation.
Here, we consider a subclass of such group-invariant systems that are
dominated by Rayleigh dissipation as caused by many types of isotropic
friction <cit.>; in such dissipation-dominated systems,
the equations of motion can be kinematically reduced such that the
body velocity is expressed as a shape-defined linear mapping of shape
velocity. In this case, the kinematic equation can be written as
(g^-1ġ)^∨ = ξ = -A(r)ṙ,
where ξ is the group velocity expressed in the body frame[1], r
denotes the system shape, and A(r) is called the local
connection. Here the matrix A(r) is a function of the shape r
and acts analogously to a Jacobian in that it relates the system's
shape velocities to body velocities. A spatial trajectory of the
system body frame can be calculated by integrating
(<ref>) with respect to a fixed reference frame.
[1]Here (·)^∨ is an isomorphism that maps
velocities from the Lie algebra form to a vector form, and its
inverse is denoted as (·)^∧. In the SE(2) case,
(·)^∨: se(2) →ℝ^3, and
(·)^∧: ℝ^3 → se(2).
For systems within the scope of (<ref>), the local
connection can be analytically derived from a set of Pfaffian
constraints on the system's shape and body velocities. A global model
can be empirically estimated by exhaustively sampling the system shape
space and its tangent bundle (the collection of shape velocities
available at each point in the shape space)
<cit.>. However, such global models are often
difficult to obtain for animals or underactuated systems because of
the challenges in sampling this space with sufficient density.
§.§ Data-driven modeling
Bittner et
al. <cit.> developed a data-driven approach to geometric
modeling and optimization, an approach that was later extended
<cit.> to be applied to systems with high bandwidth
control in only a subset of the shape variables. This approach allows
a local estimation of a connection in the neighborhood of a limit
cycle with far fewer samples than required to train a global model,
making it practical for in-situ system identification,
especially for systems with high dimensional shape spaces.
In this approach, data (including the system shape and position in the
form of a regularly sampled time series) are fit to an oscillator such
that each data point is assigned a phase value
<cit.>. A zero-phase-lag Butterworth smoothing
filter is applied before finite differencing to obtain time
derivatives of both shape and position. Then, a local Taylor
approximation of the connection can be computed via linear regression
across data points within phase windows. A Fourier series is then fit
to these local regression coefficients to build a model that is
supported at any queried phase.
We detail the process by which we estimate a linearized model within
each phase window. Data-driven floquet analysis techniques extract
information from the observed oscillator data and assign each sample
point an estimated phase <cit.>. The observed
shape samples are then phase-averaged and fitted to a Fourier series
to obtain a limit cycle, denoted as θ_r(·). The perturbed
trajectory, relative to the limit cycle, is denoted as
δ_r := r - θ_r. The first order Taylor approximation of
the local connection in each local phase window can be constructed as
A_k(r) ≈A_k(θ_r) + δ_r^T∂A_k/∂ r,
where A_k(r) is the k^th row of the local connection, which is a vector of the same dimension as shape perturbation δ_r.
All samples are grouped into neighborhoods by their estimated phase values, and a local model is fitted in each phase window. In the m^th phase window, the averaged shape is assumed to be a constant θ_r^m. The first order Taylor approximation of the local connection matrix A(r) in this phase window can be fitted by solving the following Generalized Linear Model (GLM):
ξ^(n)_k ∼C_k + B_kδ^(n)_r + A_k(θ_r)δ̇^(n)_r + ∂A_k/∂ rδ_r^(n)δ̇^(n)_r.
Here, ξ^(n)_k corresponds to the k^th coordinate of the n^th sampled body velocity ξ^(n), and δ^(n)_r:=r^(n)-θ_r^m, δ̇^(n)_r:=ṙ^(n)-θ̇_r^m are the shape and shape velocity perturbation samples defined in the local region indexed by m. Regressor C_k := A_k(θ_r)θ̇_r describes the average behavior in the neighborhood of θ_r^m. B_k := θ̇_r^T∂A_k/∂ r and A_k are the terms that respectively relate the effects of shape and shape velocity offsets from the limit cycle. ∂A_k/∂ r is the cross term that incorporates the interaction between δ_r and δ̇_r. Note that this is a local estimate in the m^th phase window. This local approximation is repeated for all separate groups of data points, after which a Fourier series model is used to guarantee a smooth transition among the fitted matrices.
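In practice, each per-window fit is an ordinary least-squares problem over the regressors [1, δ_r, δ̇_r, δ_r⊗δ̇_r]. The following sketch (variable names are illustrative, not taken from any released code) shows one such fit for a single row of the local connection:

import numpy as np

def fit_local_model(xi_k, dr, drdot):
    """Least-squares fit of one row of the local connection in a single phase window.

    xi_k  : (N,)  samples of the k-th body-velocity coordinate
    dr    : (N,M) shape perturbations delta_r
    drdot : (N,M) shape-velocity perturbations delta_r_dot
    Returns the regression coefficients (C_k, B_k, A_k, dA_k/dr).
    """
    N, M = dr.shape
    cross = np.einsum("ni,nj->nij", dr, drdot).reshape(N, M * M)   # interaction regressors
    X = np.hstack([np.ones((N, 1)), dr, drdot, cross])
    coeffs, *_ = np.linalg.lstsq(X, xi_k, rcond=None)
    C_k = coeffs[0]
    B_k = coeffs[1:1 + M]
    A_k = coeffs[1 + M:1 + 2 * M]
    dA_k = coeffs[1 + 2 * M:].reshape(M, M)
    return C_k, B_k, A_k, dA_k

Stacking the returned coefficients over the body-velocity coordinates and fitting a Fourier series across the phase windows then gives the smooth, phase-supported model described above.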
§ METHODS
§.§ Low bandwidth shape control
In this paper, we consider systems whose locomotion can be
characterized by (<ref>) while only having access to
low bandwidth control over r. In particular, we assumed the dynamics
on r to take the general form of
ṙ = f(r,u),
where the system shape velocity ṙ is a function of its shape r and an input u.
First, we extracted a phase-averaged gait cycle (θ_r,θ_u) from the general input <cit.>. Denoting the perturbation from phase-averaged shape and control as δ_r := r - θ_r, δ_u:= u- θ_u, the local first order Taylor approximation of the actuation dynamics can be written in the following form:
f(r,u) ≈ f(θ_r,θ_u)+∂ f/∂ r(θ_r,θ_u)δ_r+∂ f/∂ u(θ_r,θ_u)δ_u
We then fit the data to the above first order approximation by solving the following Generalized Linear Model,
δ̇^(n)_r ∼D + E_r δ^(n)_r + E_u δ^(n)_u,
where D is the average shape velocity of the observed data in the local phase window, and (E_r, E_u) are the terms that describe how shape and input offsets respectively modify the average behavior. δ^(n)_r := r^(n) - θ^m_r, δ^(n)_u:= u^(n)- θ^m_u are the shape and input perturbations defined in the m^th local phase window, where (θ^m_r,θ^m_u) are the mean values of shape and input.
The estimation of the local connection can be done separately from the actuator dynamics, hence this part remains identical to that in <ref>. We repeat the same procedure for all discrete phase windows and use a Fourier series to smoothly connect all local models.
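The per-window actuator fit has the same least-squares structure, with regressors [1, δ_r, δ_u]; a minimal sketch, again with illustrative names:

import numpy as np

def fit_actuator_model(drdot, dr, du):
    """Least-squares fit of the local actuator model in one phase window.

    drdot : (N,M) shape-velocity perturbation samples
    dr    : (N,M) shape perturbations
    du    : (N,P) input perturbations
    Returns D (M,), E_r (M,M), E_u (M,P).
    """
    N, M = dr.shape
    X = np.hstack([np.ones((N, 1)), dr, du])             # regressors [1, delta_r, delta_u]
    coeffs, *_ = np.linalg.lstsq(X, drdot, rcond=None)   # solves all M outputs at once
    D = coeffs[0]
    E_r = coeffs[1:1 + M].T
    E_u = coeffs[1 + M:].T
    return D, E_r, E_u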
The fitted models from (<ref>) and
(<ref>) can be used in series to make predictions of the system
shape and position trajectories given the input signal. First, the
input signal u(t) is transformed into phase coordinate using the
fitted phase map. The initial shape is assumed to be on the limit
cycle (δ_r=0) at the same phase value as the initial input
u(t_0). At each discrete time t_i, the shape velocity perturbation
δ̇_r(t_i) is predicted using the actuator model
(<ref>) given the current shape
perturbation δ_r(t_i) and the input perturbation
δ_u(t_i). δ_r(t_i) is then integrated by Euler method to
obtain the predicted shape at the next time step
δ_r(t_i+1). The predicted shape δ_r(t_i) is then
used to predict the body velocity ξ(t_i) using the body velocity
model (<ref>). The predicted body frame position at the
next time step g(t_i+1) is then integrated using ξ(t_i).
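The prediction procedure above can be summarised by the following sketch, in which the fitted phase map, limit cycle, and per-phase Fourier-series models are assumed to be available as callables (their construction is not shown):

import numpy as np

def predict_trajectory(u, dt, phase_of, limit_cycle_at, actuator_at, connection_at):
    """Roll the fitted actuator and body-velocity models forward in series.

    u             : (T,P) array of input samples
    phase_of      : callable, input sample -> phase (the fitted phase map)
    limit_cycle_at: callable, phase -> (theta_r, theta_u)
    actuator_at   : callable, phase -> (D, E_r, E_u), the fitted actuator coefficients
    connection_at : callable, phase -> (C, B, A, dA), the fitted body-velocity coefficients
                    stacked over the body-velocity coordinates
    Returns the pose history g(t) = (x, y, heading) in a fixed world frame.
    """
    theta_r0, _ = limit_cycle_at(phase_of(u[0]))
    delta_r = np.zeros_like(theta_r0)                # start on the limit cycle
    g = np.zeros((len(u), 3))
    for i in range(len(u) - 1):
        phi = phase_of(u[i])
        theta_r, theta_u = limit_cycle_at(phi)
        D, E_r, E_u = actuator_at(phi)
        delta_rdot = D + E_r @ delta_r + E_u @ (u[i] - theta_u)     # actuator model
        C, B, A, dA = connection_at(phi)
        xi = (C + B @ delta_r + A @ delta_rdot
              + np.einsum("kij,i,j->k", dA, delta_r, delta_rdot))   # body-velocity model
        c, s = np.cos(g[i, 2]), np.sin(g[i, 2])
        g[i + 1] = g[i] + dt * np.array([c * xi[0] - s * xi[1],
                                         s * xi[0] + c * xi[1],
                                         xi[2]])                    # Euler step on SE(2)
        delta_r = delta_r + dt * delta_rdot                         # Euler step on the shape
    return g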
When building an actuator model, the system shape trajectory is the integrated estimation of the shape velocity predictions. Simultaneously, it also appears
as the input to the local connection in the locomotion model. As in
prior work, the shape and body motion models are predicted in separate
stages. Note that in the process of simulating a system spatial
trajectory from a general input signal, the two integration steps of
each model evolve in series. We start with knowledge of the initial
system shape r(t=0) and the control input u(t). Then we can
numerically solve (<ref>) and (<ref>)
together using the fitted regression models,
(<ref>) and (<ref>).
We apply the model improvement metric described in
<cit.>, comparing our first order regression model
predictions to the phase-averaged baseline model predictions,
Γ_χ = 1-∑^𝒩_n=1‖χ_D^(n)-χ^(n)‖/∑^𝒩_n=1‖χ_T^(n)-χ^(n)‖.
This improvement metric is defined by one minus the relative error of
the data-driven prediction χ_D with respect to the baseline
prediction χ_T over 𝒩 samples of body velocity and
shape velocity χ = {ξ,ṙ}. Γ_χ≤ 0 means
the data-driven prediction is no better than the
phase-averaged prediction, and 0 < Γ_χ≤ 1 means that
our model can make better predictions than the baseline model, up to
perfect reconstruction of the ground truth at Γ_χ = 1.
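Computed directly from the sampled predictions, the metric is only a few lines; the sketch below assumes the samples are stacked as (N, K) arrays:

import numpy as np

def improvement_metric(chi_true, chi_dd, chi_avg):
    """Improvement metric: 1 - (data-driven error) / (phase-averaged error).

    chi_true, chi_dd, chi_avg : (N,K) arrays of ground-truth, data-driven and
    phase-averaged predictions of body or shape velocity.
    """
    err_dd = np.linalg.norm(chi_dd - chi_true, axis=1).sum()
    err_avg = np.linalg.norm(chi_avg - chi_true, axis=1).sum()
    return 1.0 - err_dd / err_avg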
§.§ Optimizing behaviors and iterative model refinement
Once an initial model is obtained, we can make predictions on the system position trajectories g(t) by a general control input u(t). Using finite difference, we can estimate the gradient and the Hessian of displacement per cycle with respect to the control parameters around the observed data. We can then utilize the estimated gradient and Hessian to numerically optimize the control parameters for certain behaviors of the system (e.g. maximizing the displacement per cycle).
The expense of data collection[1] incentivizes our focus
on sample-efficient optimization schemes. Given an input parameterization, we sparsely sample
data in the full range of the input space, and build a rough
model. According to this rough model, we numerically optimized the input
parameters for certain behavioral objectives. Then we zoom in to the
region around the optimized parameter and re-sample points in this
local area. The model built with sample points in a smaller region
will be more refined and predictions made by the refined model more
accurate. We iterate between these two processes—optimization and
model refinement—so that in the end it converges to a local optimum
in the control space. A global optimum is not guaranteed. In Sections
<ref> and <ref>, we demonstrate the
methods using a sample objective where we maximize the displacement
per cycle and penalize the cycle time.
[1]In practice, a typical temperature cycle for our hydrogel crawler takes approximately 6 hours because of the slow actuation kinetics of the material. The FEA simulation is computationally expensive as well; running a 10-cycle simulation on a well-equipped desktop computer takes about 2 days. Both facts make data collection for such systems expensive, thus data efficiency is crucial.
§ ILLUSTRATIVE EXAMPLE: PURCELL SWIMMER WITH LOW BANDWIDTH
ACTUATION
Before demonstrating the method on
data, it is helpful to make a proof of concept on a simple analytical
model, a three-link Purcell swimmer. We modify the model to include
low-bandwidth actuation, inspired by the low-bandwith actuation of our
hydrogel robot:
ṙ_i = c_i(r_i^s(T)-r_i), c_i>0, i=1,2,
where r_i is the i^th shape variable (joint positions),
r_i^s (T) denotes each joint's steady state equilibrium given
temperature, and c_i is the converging rate of each joint towards
its steady state equilibrium. Specifically, the steady state
equilibrium r_i^s (T) is assigned to be linear function of a
one-dimensional input signal, temperature T. We have a bound on
temperature that puts limits on the swimmer's joint angles. The
resulting dynamics can be seen in Fig. <ref>.
Assuming different constants c_i on the two joints, the shape
variables will exhibit a gait where both joints are not synchronized
under a repeating temperature cycle, see
Fig. <ref>. Although both joints are controlled by the same
temperature input, the phase lag between the two joints breaks the
symmetry of joint synchronization, making the gait enclose a nonzero
area in the shape space, which, by the scallop theorem <cit.>, is
critical for locomotion in viscous swimming domains.
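The phase lag is easy to reproduce numerically. The sketch below integrates the joint dynamics above for two joints with different rates c_i under a shared temperature cycle; all constants are illustrative rather than values used in the swimmer model:

import numpy as np

# Illustrative constants: different convergence rates give the two joints a phase lag.
c = np.array([1.0, 0.3])          # per-joint rates c_i (1/s)
k, b = 0.05, 0.0                  # shared linear map r_i^s(T) = k*T + b (a simplification)
dt = 0.01
t = np.arange(0.0, 200.0, dt)
T_cycle = 25 + 10 * np.sign(np.sin(2 * np.pi * t / 40))   # square-ish temperature cycle

r = np.zeros((len(t), 2))
for i in range(len(t) - 1):
    r_eq = k * T_cycle[i] + b                 # steady-state joint angle at this temperature
    r[i + 1] = r[i] + dt * c * (r_eq - r[i])  # Euler step of r_i' = c_i (r_i^s(T) - r_i)

# Plotting r[:, 0] against r[:, 1] shows the nonzero enclosed area in the shape space.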
§.§ Input generation
Our parameterization
on the control signal is concise while maintaining the ability to
alter important features of the temperature profile. Here, we used 4
parameters to describe the temperature cycle: a low-point temperature
T_low, a high-point temperature T_high, time
span per cycle t_cycle, and the portion of the half period
to ramp between high and low temperatures
η_ramp = 2t_ramp / t_cycle, where
t_ramp is the time to ramp between high and low
temperatures, see Fig. <ref>.
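For reference, one possible realisation of this four-parameter cycle is sketched below; the choice to start each cycle on the up-ramp and to split the hold times symmetrically is an assumption made purely for illustration:

import numpy as np

def temperature_cycle(T_low, T_high, t_cycle, eta_ramp, dt=1.0):
    """One trapezoidal temperature cycle from the four parameters in the text.

    eta_ramp = 2*t_ramp/t_cycle sets the fraction of each half period spent ramping.
    """
    t_half = t_cycle / 2.0
    t_ramp = eta_ramp * t_half
    t = np.arange(0.0, t_cycle, dt)
    T = np.empty_like(t)
    for i, ti in enumerate(t):
        tau = ti % t_cycle
        if tau < t_ramp:                                   # ramp up
            T[i] = T_low + (T_high - T_low) * tau / t_ramp
        elif tau < t_half:                                 # hold high
            T[i] = T_high
        elif tau < t_half + t_ramp:                        # ramp down
            T[i] = T_high - (T_high - T_low) * (tau - t_half) / t_ramp
        else:                                              # hold low
            T[i] = T_low
    return t, T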
Performing multiple cycles of these parameterized temperature cycles,
the shape trajectory forms a stable orbit under periodic forcing. We
then perturbed the forcing parameters across cycles, which resulted in
what can be seen as a "tube" around the orbit as shown in
Fig. <ref>.
§ MAIN APPLICATION: HYDROGEL CRAWLER
§.§ Actuator dynamics
Bilayers and other multi-material structures are useful in creating interesting modes of shape changes like bending. Typical swelling-driven bilayer bending dynamics are similar to the form of an exponential low-pass filter as shown in Fig. <ref>. Specifically, the geometry of the bilayer (e.g., layer thickness ratio and material properties) can affect the steady state equilibrium of the shape variables as well as the rate of reaching equilibrium.
§.§ Hydrogel crawler
Thermo-responsive hydrogel crawlers in <cit.>, capable of swelling and shrinking, utilize geometric asymmetry, leading to asymmetry in friction forces, to generate net motion under temperature cycles. We utilized the same 2D finite element model in Abaqus Unified FEA <cit.> to produce time-dependent x-y coordinates along the contour of the robot, from which we calculate the area (2-D volume) of the locomoting segments. The data are then parameterized into the shape variable r.
Here we assume the actuation dynamics in a general (nonlinear) form
(<ref>), without any specific structure on it, namely
ṙ = f(r,T),
where the input is assumed to be the temperature T.
§.§ Finite element model
Briefly, our finite element model, based on chemo-mechanics described in <cit.>, solves coupled diffusion-deformation equations for hydrogel undergoing temperature-driven swelling and shrinking. We used Neo-Hookean and Flory-Huggins potentials to describe the entropic elastic behavior of the polymer network and the mixing of polymer-solvent, respectively. The swelling of pNIPAM caused by the lower critical solution temperature transition (LCST) was modeled by assuming a sigmoidal function for the temperature dependence of the Flory-Huggins interaction parameter. We also assumed that the diffusivity of water through pNIPAM increased sigmoidally with temperature across the LCST, which caused the characteristic time of deswelling to be significantly faster than the characteristic time of swelling. We also considered a combined effect of gravity and buoyancy by prescribing a net body force on the hydrogel structure. Our material model included a total of 10 parameters which were either directly determined from experiments or calibrated against experiments using finite element analysis. The parameters and their values are listed in Table <ref>. In addition, we assumed a rigid frictional surface with a friction coefficient, μ_k = 0.1, underneath the robot to facilitate friction-driven locomotion induced by geometric asymmetry. Further details of the finite element simulation can be found in <cit.>.
§.§ Input generation
Here we used a similar parameterization for the input temperature
signal as that in <ref>. However, for thermo-responsive
hydrogels, the kinetics of the swelling and shrinking vary
distinctively. We therefore separated the input cycle into two
independent parts, cooling and heating. Thus, the dimension of the
input parameter space increases to six, low temperature
T_low, ramped cooling time span t_cool, cooling
ramp time ratio η_cool, high temperature
T_high, ramped heating time span t_heat, and
heating ramp time ratio η_heat. The ramp time ratios are
defined as the ratio of the corresponding ramp time to the cooling or
heating time span, i.e.,
η_cool = t_{ramp,cool}/t_cool. The
allowable range of each parameter is determined by material properties
and the characteristic diffusion time and was validated by a parametric study using FEA. Specifically, the ramped cooling and heating time spans
are determined by scaling swelling and shrinking characteristic time
of the hydrogel, low and high temperature ranges are specified by
4% equilibrium strain span of the material. The ramp ratios are
ideally in the range of [0,1], but small ramp ratios mean very
large rates of temperature change, which is impractical and often
causes numerical stability and convergence issues in FEA simulation
because of the excessive deformation of the finite element mesh in a
short period of time. Thus in the implementations we raised the lower
bounds to 1/32. The calculated full input parameter ranges
are shown in Table <ref>. The noisy input signal is
generated by sampling from a uniform distribution in the parameter
space. The input parameters are then used to generate noisy
temperature cycles for FEA. To avoid numerical issues, instead of running hundreds of thermal cycles at a time, each of our FEA simulations comprised 10 thermal cycles. At each iteration, we ran 10 simulations, resulting in 100 cycles of input data for our data-driven model.
§.§ Shape parameterization
The soft nature of the devices and external forces makes the shape of
the device high dimensional. However, fitting models to a high
dimensional shape space will likely cause overfitting. We thus seek a
reduced-order representation of the shape of the system. Here,
principal components analysis (PCA) is a simple candidate reduction
method that could serve this purpose. We tried PCA on the streamline
along the crawler body, from which we calculated the internal angles
between each of the segments. Fitting a PCA model on the internal
angles, we found that the first two principal components (modes)
reconstruct over 90% of the data. While this is a straightforward
way of reducing the effective degrees of freedom, the complex
coupling between segments led to principal components that lacked a
clear physical interpretation. As an alternative, we considered the volume
of each active section to be a more physical, descriptive candidate for
representing the system shape variables. To do this, we estimated the
volume at each time point based on contour points from the FEA
simulation, and used this to calculate the enclosed area (2D volume
for our planar FEA). This parameterization provided a clear, intuitive
relationship between the two segments, and exhibited the phase lag
between the smaller and larger bilayer segments that we expected from
the design.
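The enclosed area itself can be computed from the ordered contour points with the shoelace formula; whether the FEA post-processing used exactly this form is an assumption, but it is the standard approach for a closed planar contour:

import numpy as np

def enclosed_area(x, y):
    """Shoelace formula for the 2-D 'volume' of a segment from its contour points.

    x, y : arrays of contour coordinates ordered around the closed boundary.
    """
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))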
§.§ Input optimization
As a demonstration, we optimized the input parameters to maximize the
displacement per cycle. The objective function is defined as
F(u) = Δ g_x - λ t_cycle,
where Δ g_x is the displacement in the x direction per cycle,
and t_cycle is the cycle time. λ is a penalty
factor that controls the trade-off between the above two terms. The
objective function primarily maximizes the net displacement per
cycle. During the optimization process, we noticed that, with no
penalty on time, the optimizer tended to find cycles with the longest
possible cycle time (optimizing cycle-to-cycle distance), pushing
the results toward the boundary of the parameter space. To address this, we added a
regularizing term to penalize the cycle time.
We started by sampling 100 points (resulting 100 cycles of system
motion) in the full input parameter space as shown in Table
<ref>. We performed ten-fold cross validation to avoid
overfitting. A rough model of the system was built using the sampled
data points, and then the model was used to optimize for an input
parameter that maximizes the objective function above. The
optimization was performed using the Sequential Least Squares
Programming algorithm where the local gradient and
Hessian were estimated using finite differencing. After getting the
numerical optimum, we shrank each of the input parameter ranges by
35%, centering at the optimum, and repeated the optimization
process. We repeated this process three times, and the model improvement
metrics were calculated for each iteration as shown in
Fig. <ref>.
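Schematically, the iteration between optimization and range shrinking looks as follows; the objective callable stands in for an evaluation of the objective above on the current data-driven model, and the resampling and refitting step is only indicated by a comment:

import numpy as np
from scipy.optimize import minimize

def optimize_input(objective, bounds, n_iter=3, shrink=0.35):
    """Iterate between numerical optimization and shrinking of the sampled region.

    objective : callable mapping an input-parameter vector to -F(u) (to be minimized).
    bounds    : list of (low, high) tuples for the input parameters.
    """
    x0 = np.array([(lo + hi) / 2.0 for lo, hi in bounds])
    for _ in range(n_iter):
        res = minimize(objective, x0, method="SLSQP", bounds=bounds)
        x0 = res.x
        # Shrink each parameter range by `shrink`, centred on the current optimum.
        new_bounds = []
        for (lo, hi), xc in zip(bounds, x0):
            half = 0.5 * (1.0 - shrink) * (hi - lo)
            new_bounds.append((max(lo, xc - half), min(hi, xc + half)))
        bounds = new_bounds
        # ...here new FEA cycles would be sampled inside `bounds` and the model refit.
    return x0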
§ DISCUSSION AND CONCLUSION
In this work, we designed and implemented a data-driven modeling framework for dissipative systems with low bandwidth actuator dynamics.
We showed the success of this method in predicting behaviors on a classical toy example, the Purcell swimmer, with a modified class of passive shape dynamics.
Built on prior work, where at least one shape element is assumed to be accessed through high bandwidth control,
this method enables modeling of novel mechanisms like the hydrogel crawler, whose internal degrees of freedom all exhibit a passive response to controllable stimuli.
We showed not only that we could model the crawler with accuracy beyond the phase averaged gait, but that the system was capable of using this model in a gradient-based optimization scheme to rapidly identify a viable crawling maneuver.
The broader implications of this result are that we now have a justifiable framework to pursue data-driven modeling and optimization of a much larger class of underactuated systems.
For applications in biology, where continuous, soft interfaces facilitate safe interaction with the body, this method provides the potential to model new mechanisms pre-deployment in the body and even in situ, since variation amongst morphology and environment across patients can be significant.
Key additional efforts for such endeavors require power, actuation, and sensing at the scales desired for the locomotion application.
Implementing our motivated prediction architecture on the Purcell swimmer provided insight into what prediction quality was achievable on a toy system.
The models have significant prediction improvement with respect to the phase-averaged model.
This provides a level of accuracy in predicting the neighborhood of behaviors about that gait, suggesting that a gradient-based optimization may work reliably.
On the hydrogel crawler, we observe improved prediction quality with respect to the phase model.
While this prediction quality is lower than what was observed in the toy example, it is enough insight to successfully inform a gradient-based optimization scheme.
While it is not possible for us to assert global optimality, we have shown that the model was iteratively improved and appeared to settle at a performant lunging gait.
Sampling from a variety of initial conditions could excite a variety of achieved locally optimal gaits, from which a more globally optimal gait could be selected.
If the hydrogel's objective were simply to obtain a functional navigation policy, we have provided a framework through which it and similar robots can rapidly obtain functional motion primitives.
The model improvement saturated and decayed as the sampling region was reduced, likely because there were few variations in the sampled data.
It is well known that the convergence properties proved in adaptive control rely on sufficient excitation of the dynamics, and likewise, we don't expect to learn informative improvement without cycles that excite significant dynamic variation.
We have learned to be careful in reducing the sampling region as it may cause the data-driven model to degrade.
The viable gaits achieved through this sample efficient optimization could practically extend to the real world.
While many samples might be available in simulated environments, there are many platforms that must be system-identified in the field.
In-situ system identification (such as the type we implement here) paired with a gradient-based optimizer provides a tractable, systems oriented way to pursue optimization of robot behaviors in hard-to-model environments.
Sample efficiency was especially valuable for this project, where a single cycle takes several hours to simulate.
|
http://arxiv.org/abs/2307.01263v1 | 20230703180004 | Yang-Baxter Deformed Wedge Holography | [
"Gopal Yadav",
"Hemant Rathi"
] | hep-th | [
"hep-th",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.01721v1 | 20230704134654 | Nonlinearities in Long-Range Compact Michelson Interferometers | [
"Jiri Smetana",
"Chiara Di Fronzo",
"Anthony Amorosi",
"Denis Martynov"
] | physics.ins-det | [
"physics.ins-det",
"astro-ph.IM",
"physics.optics"
] |
§ INTRODUCTION
Interferometry sits at the forefront of high-precision displacement measurement. This technique sees use in a wide range of contexts concerned with the detection of weak signals, which originate from the minute scale of quantum mechanics through to the grand scale of astrophysical phenomena. Amongst the most notable interferometric devices are the contemporary gravitational-wave (GW) detectors, Advanced LIGO <cit.> and Advanced Virgo <cit.>, which are capable of 2e-20 m/√Hz precision in the peak sensitivity band around 100 Hz <cit.>.
Although specialised detector facilities, such as GW detectors, can span kilometre scales, compact interferometric devices on the centimetre-scale can be traced back to 1972 <cit.>. Since then, the sensitivity of such devices has continued to improve, with the LISA Pathfinder <cit.> mission showing the versatility of interferometry in space-based applications. Advances in miniaturisation of optical and laser components have led to a range of compact devices that offer excellent sensitivity, as we outline below. The key to the interferometer's impressive sensitivity is in its extremely sharp response to small displacements, with the sensor's full range swept on the scale of the wavelength of light used. For example, the well-studied, sinusoidal response of a Michelson interferometer (e.g. <cit.>) operated with a typical 1064 nm laser covers the full signal range over a narrow span of just 255 nm.
A thorough overview of notable interferometric sensors can be found in Ref. <cit.>. In this paper we will focus principally on the performance of our custom-designed compact Michelson-type sensor built by the company SmarAct and analysed previously in Ref. <cit.>. The nominal sub-picometre sensitivity that was achieved is useful across a range of applications (see Ref. <cit.> and references therein), but we particularly focus on its utility in the sensing of quiet suspension systems, namely the quadruple suspension <cit.> of the Advanced LIGO detectors. The current suspensions are sensed with BOSEM shadow sensors <cit.>, which utilise an optical sensing scheme, albeit not an interferometric one. As argued in detail in Ref. <cit.>, the low-frequency (5-30 Hz) band is limited by the injection of noise from angular control loops (also shown in Ref. <cit.>), which can ultimately be traced back to the limiting sensitivity of the existing shadow sensors. Sensitivity improvements in this detection band are essential for enhancing early-warning systems <cit.> and expanding the range of detected GW sources towards intermediate-mass black holes <cit.>.
Existing GW detectors use a range of inertial and displacement sensors to improve the stability, provide active isolation of and provide readout for control of their suspension systems. However, the detectors stand to benefit from further improved inertial sensors. A notable way to improve these devices is with better sensors, with current devices being broadly limited at low frequencies by their readout noise. Interferometric sensors are well poised to address this problem, with one example, the HoQI <cit.>, already demonstrating an improvement in the low-frequency sensitivity of a commercial geophone <cit.>. Recent high-precision devices, such as the BRS <cit.> rotation sensor and 6D <cit.> six-degree-of-freedom inertial sensor use interferometric sensing to achieve their advanced sensitivity. We expect that our sensor will be used in the future testing of the Compact-6D inertial sensor <cit.>—an evolution of the previous 6D design.
The path towards improved sensitivity in the key frequency band lies in the development of better sensors and the SmarAct interferometric sensor presents a good candidate for achieving this. However, to satisfy the requirements laid out in Ref. <cit.> it will be important to reach the full sensitivity level demonstrated in Ref. <cit.>. This is only possible if the performance of the sensor is not degraded once placed into the real environment rather than the typical `null measurement' setup used in its noise characterisation. Of particular interest to us is the impact of the sensor's linearity on the realistic sensitivity limit.
In this paper, we investigate the impact of nonlinear noise in Michelson-type interferometric displacement sensors when placed in high-RMS-displacement applications. The range of a simple Michelson interferometer is already narrow and its sinusoidal response means that the usable linear region yields an even smaller operating range. A number of different techniques exist in the literature <cit.> for extending this range through the use of multiple phase-offset readout channels, which allow for a linearised estimate of the displacement. Our particular readout scheme, based around the principle of deep frequency modulation (DFM) <cit.>, is discussed in more detail in Sect. <ref>. This readout scheme theoretically fully linearises the displacement and, with the use of a phase-unwrapping algorithm (known in this context as fringe counting) <cit.>, can extend the range of the interferometer over many multiples of the free spectral range (FSR). However, the linearisation algorithm can suffer from limitations that in practice lead to imperfect linearisation of the signal and the injection of periodic nonlinear error into the readout. The mechanisms of nonlinear coupling are analysed in Sec. <ref>–<ref>, with a theoretical framework laid out for modelling these nonlinearities in a real-world situation. The theoretical model is compared to measured data from an experimental scheme described in Sec. <ref>. This theoretical model is finally applied to a simulation of the nonlinear noise performance in suspension sensing within the LIGO vacuum chambers in Sec. <ref>.
§ MODELLING NONLINEARITIES
The sensor scheme is based around a custom-designed opto-mechanical assembly derived from the SmarAct C01 PICOSCALE sensing head. The sensor, shown in Fig. <ref>, consists rather simply of a Michelson interferometer with an open port along one of the typical interferometer arms and a high-reflectivity coating applied to one face of the central beam splitter cube to act as the reference arm. This minimalist design leads to a highly compact sensor and mitigates losses from additional optical components. The simplicity of the design, a key aspect of the robust and sensitive scheme we investigated in Ref. <cit.>, results in a comparatively greater level of complexity in the algorithm employed in the phase extraction scheme.
We make use of the DFM technique, described in depth in Refs. <cit.>, that sees growing applications in interferometric displacement measurement <cit.>. The scheme begins with a modulation of the laser frequency by any number of standard techniques, in our case control of the laser cavity piezoelectric transducer. Solving for the signal measured by a photodiode at the output of an unbalanced Michelson interferometer, we obtain the rather simple relation
P(t) = A[1 + C cos(ϕ(t) + m cos(ω_m t))],
where A is the signal scale factor (akin to an amplitude), C is the fringe contrast, a value in the range 0–1 corresponding to the level of mode matching of the interfering beams, ϕ is the additional microscopic arm phase accumulated within the measurement arm of the interferometer, m is the modulation index, and ω_m is the modulation angular frequency. The arm phase can be straightforwardly related to the mirror displacement via x = ϕλ / (4 π). The modulation index can be intuitively written as m = 4 π A_m Δ L / c, where A_m is the modulation depth in frequency, such that the time-dependent laser frequency can be written as f(t) = f_0 + A_m cos(ω_m t), and Δ L is the length difference between the reference arm and measurement arm of the interferometer. This result is commonly processed further in the limit of m ≪ 1 where the small-angle approximations are appropriate and lead to a host of widespread uses, such as in cavity locking schemes (e.g. Pound-Drever-Hall locking <cit.>). It is beneficial to consider the signal obtained in Eq. <ref> as a Fourier series decomposition in terms of the harmonics of ω_m, which is given by
P(t) = P_0 + ∑_n=1^∞ 2 C A J_n(m) cos(ϕ + n π / 2) cos(n ω_m t),
where J_n is the n^th-order Bessel function of the first kind and P_0 = A(1 + C J_0(m) cos(ϕ)). Whilst we may recover the typical small-angle approximation by considering only the n=1 term, we operate the system beyond the small-m limit, which necessarily extends the scheme to include the higher order terms. This is important for us, as it gives us access to multiple signals with a cyclical sinusoidal dependence on the arm phase, ϕ. We can thus take multiple signals to construct a linear estimator of the phase.
Our particular scheme relies on a technique of demodulation at multiple harmonics of ω_m. If we multiply the raw signal by cos(k ω_m t) for integer values of k, and filter out the beat signals above DC with an appropriate lowpass filter, we can write the k^th demodulated harmonic as
S_k = C A J_k(m) cos(ϕ + k π / 2).
Assuming we have a stable scheme such that A, C and m remain constant, the simplest way to proceed is to take a pair of S_k signals of different parity, for example from the k=1 and k=2 demodulation, and construct an elliptical Lissajous figure where the angular coordinate of a point along the Lissajous curve at any given time corresponds to the arm phase, ϕ. This treatment simply lays out the key set of equations that are essential for understanding the origins of the nonlinear couplings that we discuss in following sections. Further details of the detection scheme, particularly the optical layout schematic and data acquisition process are laid out in our original work in Ref. <cit.>.
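As an illustration of the demodulation chain, the sketch below recovers the arm phase from a sampled photodiode signal using the first two harmonics. It normalises by the ideal Bessel factors rather than a fitted ellipse, and the filter order and cutoff are arbitrary choices, so it should be read as a schematic of the algorithm rather than the PICOSCALE implementation:

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.special import jv

def extract_phase(P, t, w_m, m, fs):
    """Recover the arm phase from the raw photodiode signal by harmonic demodulation.

    P, t : raw interferometer output and its time vector, sampled at fs
    w_m  : modulation angular frequency; m : modulation index.
    Demodulates at the first two harmonics, normalises by the ideal Bessel factors,
    and applies the four-quadrant arctangent plus phase unwrapping (fringe counting).
    """
    b, a = butter(4, 0.1 * w_m / (2 * np.pi), fs=fs)           # lowpass well below w_m
    S1 = filtfilt(b, a, P * np.cos(1 * w_m * t)) / jv(1, m)    # proportional to -sin(phi)
    S2 = filtfilt(b, a, P * np.cos(2 * w_m * t)) / jv(2, m)    # proportional to -cos(phi)
    return np.unwrap(np.arctan2(-S1, -S2))    # arm phase; displacement = phase*lambda/(4*pi)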
§.§ Nonlinear Effects on Sensitivity
Nonlinear effects in optical interferometry have been analysed in the past and are known to lead to a periodic error on the order of a few nanometres <cit.>. These effects commonly arise from cross-talk between the two nominally orthogonal signals. For example, due to imperfections in the polarisation optics in interferometers that utilise linearly polarised states of light such as the HoQI <cit.>.
In our case, the signal channels are well isolated from each other as their orthogonality is ensured by a rigid mathematical relation and is not dependent on the quality/alignment of optical components. However, other types of common nonlinearities can still occur, for example the coupling of ghost beams into the readout port <cit.>. These nonlinear sources are important to consider but in our case are not the dominant effect due to the design of the sensor. Ghost beam effects are mitigated by the relatively few optical surfaces involved in the interferometric path. Furthermore, most ghost beams are generated through interactions between surfaces of the central beam cube, where drifts in the ghost beam phase arise due to thermal expansion of the beam cube. These effects are naturally mitigated by the cube's small volume (side length of 2 mm) and the good thermal properties of glass. Nonetheless, ghost beams can be problematic for high precision applications and their effects are being investigated. We can further improve the sensor's resilience to ghost beams by angling the input beam to the beam cube so as to avoid having any reflections off optical surfaces at normal incidence.
We find that the dominant source of nonlinearity arise from the particulars of the DFM scheme and can be traced back to the fidelity of the modulation-demodulation procedure, the accurate fitting of the parameters in Eq. <ref>, and the limited bandwidth of the sensor. Following common wisdom, we should expect that nonlinearities increase disproportionately larger with growing signal RMS. The question is, however, whether these nonlinear effects can reduce the SNR below unity for a realistic RMS displacement or before reaching other natural constraints, such as the velocity limit imposed by the fringe-counting algorithm.
We consider a situation where high but realistic RMS displacement is reached due to a high-amplitude region of signal within a limited frequency band, with signal outside of this band settling at a much lower level by several orders of magnitude. Due to the nonlinear processes, it becomes possible for the high-amplitude signal to spread to other frequencies and thus swamp the true weak signals in the quiet regions of the spectrum. We investigate such a scenario, where the additional `nonlinear noise' generates a new and degraded noise performance, which prevents our device from reaching its nominal sub-picometre sensitivity.
This scenario is not entirely academic as it shares a practical similarity with the signals handled in inertial sensing devices. The response of a conventional mass-on-a-spring inertial sensor begins to substantially decrease towards DC, below the mass-spring resonant frequency. This feature can be found in commercial seismometers such as the Trillium T240, even more so in the velocity readout[Velocity readout multiplies the response to displacement in the frequency domain by an additional factor of ω, thus further reducing the response towards DC.] of geophones such as the Sercel L-4C, but also in custom, precise angular sensors, such as the BRS and multi-DoF sensors such as 6D. In these applications, we are searching for very weak readout signals at low frequencies, whilst the high response at and above the resonance leads to a potentially large signal RMS.
§.§ Nonlinearities from Ellipticity
Our phase extraction algorithm takes multiple nonlinear functions of ϕ (specifically sinusoidal functions) and combines them to form a linearised readout. This algorithm relies on the correct knowledge or fitting of the ellipse parameters, which are used to circularise the ellipse for use with the four-quadrant arctangent function. Therefore, if there is a mismatch between the ellipse parameters, there will remain some residual ellipticity to the Lissajous figure, which will translate into a systematic nonlinear error.
We define a quantity, elliptical error, δ, given by the fractional difference between the semi-major and semi-minor axes, such that the semi-major axis is given by a = (1 + δ) b for a semi-minor axis of size b. Through a Taylor expansion in δ (we can assume that in all reasonable scenarios δ≪ 1), we find that the phase estimator, ϕ̂, is related to the true arm phase approximately through
ϕ̂≈ϕ + δ/2sin(2 ϕ),
where the second term represents the first-order contribution to the periodic error that is generated as a result of the ellipticity.
We cannot proceed further in deriving a general spectral density equation for the nonlinear noise. However, we may use this as the basis of the time-domain simulations of the nonlinear effects that allow us to model and predict the nonlinear impact in particular applications. Additionally, due to the limited range of the sine function, we can make an estimate of the maximum noise floor that can be generated by this nonlinearity.
In the limit of broadband, high-RMS displacement, where the fluctuation in ϕ exceeds unity, we consider sin(2 ϕ) to behave as a generator of random values within the interval [-1, 1]. Therefore, the variance of this term will be within a factor of a few below unity; we adopt a representative value of 1/3, corresponding to a uniform distribution. The power spectral density (PSD) that corresponds to this variance is broadened out as the original displacement spectrum saturates the sine function and spreads to other frequencies, leading to the approximate PSD of the sine term, S_sin≈ 1 / (3γ_eff), where γ_eff is the effective bandwidth of the frequency-broadened signal. This bandwidth is highly variable based on the exact shape and RMS of the signal. However, as the noise is only ever broadened, we may state that the largest noise level that can be achieved is when γ_eff = ω_hi, the upper edge of the frequency band of the original displacement signal. Thus our order-of-magnitude estimate for the maximum nonlinear noise PSD is given by
S_x^max = λ^2 δ^2/192 π^2 ω_hi.
This equation only provides the maximal noise floor in the case of sufficiently broadband signal and is valid in the frequency band below γ_eff. As such, this noise level can be exceeded where nonlinear up-conversion occurs, particularly in the case of tightly localised resonances in the power spectrum that lead to the presence of prominent peaks at the higher harmonic frequencies.
This nonlinearity can arise due to a poor estimate of the ellipse parameters. To prevent this issue, it is possible to perform a slow sweep over the laser frequency, provided the laser frequency actuator has the range to sweep over at least one full FSR. We can fit to the resulting ellipse assuming the ellipse parameters do not change over time. In reality a number of effects can generate a drift in the ellipse parameters, such as nonlinearity in the modulation drive, timing jitter causing drifts in the demodulation phases, and residual amplitude modulation. We find that maintaining elliptical error at or below δ = 0.01 over the long term is feasible.
§.§ Nonlinearities from Nonellipticity
The extension of the above treatment of ellipticity leads us to consider nonlinearities that occur due to a departure from an elliptical Lissajous shape altogether. From our long-term observations, the sensor produces a high-fidelity elliptical Lissajous and, in most situations, the nonlinearities tend to be dominated by poorly fitted ellipse parameters. However, even with proper ellipse fitting in post processing, we find residual periodic error in the phase readout that suggests that the Lissajous figure in not entirely elliptical.
This departure of the Lissajous from a simple elliptical shape was already observed in the previous work in Ref. <cit.>, where we showed the nonlinear error in the displacement readout, using a long-range (∼100 FSR) scan over the laser frequency using the temperature setpoint. We revisit this result here, with a closer look at the amplitude and period of the error.
We sweep over the effective displacement by slowly actuating on the laser wavelength. A benefit of this method is that the modulation index does not change, which it would do if the sweep were performed over true displacement, thus introducing another source of nonlinearity. This is achieved by setting the laser wavelength from one extreme to another (sweeping over a total wavelength span of 0.9) and allowing the built-in temperature servo to shift the wavelength to the new setpoint. We isolate a region containing approximately 10 fringes in the middle of the sweep where the wavelength was swept through approximately linearly in time. We linearise the readout using an elliptical fit in post-processing and then remove the underlying, slowly varying, nonlinearity of the sweep using a ninth-order polynomial fit. As the period of the sensor nonlinearities is much shorter than any trend in the sweep, we can remove these trends without also fitting to the periodic error. The result in Fig. <ref> shows the deviation of the measured displacement signal from the fitted `nominal' displacement over the swept region.
We note that the nonlinear deviation cannot be described by a simple function. However, it does show a clear periodicity that is approximately the same as that of the elliptical nonlinearity discussed in Sec. <ref>. We can, therefore, assign an effective elliptical error, δ_eff, which should provide an estimate of the nonlinear impact to within a factor of a few. Taking the amplitude of the nonelliptical deviation to be 1.2, this yields a δ_eff of around 2%. As this nonlinearity arises through (as yet unknown) processes that distort the Lissajous figure away from an ideal ellipse, it is not clear how to suppress this nonlinearity further. Therefore, for now, this imposes a hard limit on the linearity of the sensor.
§.§ Nonlinearities from Demodulation
The final nonlinearity we consider comes from the up-conversion of signal frequencies through the sine function interacting with the finite bandwidth of the sensor. Our algorithm relies on the low-passing of the demodulated signals. If we consider a purely sinusoidal displacement at some arbitrary frequency Ω, leading to an arm-phase fluctuation given by ϕ = A sin(Ω t), our demodulated signals are proportional to sin(A sin(Ω t)) (replacing the outer sine with cosine for the even harmonics).
This is a familiar equation within our setup and is of the type already encountered in Eq. <ref>. Thus we can state that the demodulated signal can be written as an infinite series with terms proportional to J_n(A) sin(n Ω t) for integer n. We can still reconstruct the signal to high enough accuracy only considering terms up to a particular order, n. To determine the maximum order that must be included, we make use of the fact that the J_n(x) are diminishingly small for x ≪ x_m, the location of their first maximum. We use the approximation that, for a particular order, the location of the first maximum is well approximated by the value of the order itself <cit.>. From this we conclude that the critical order is given by n ≈ A and we can neglect all orders where n ≫ A.
In a practical sense, this means that the maximum frequency that our original signal is significantly up-converted to is given by approximately AΩ. This is consistent with the intuitive argument that the frequency of the outer sinusoid is given by the phase velocity, ϕ̇ and A Ω = ϕ̇_max. Taking this intuitive argument further, we propose that for any arbitrary displacement, the corresponding ϕ̇_max must be significantly below the cutoff frequency of the lowpass filter. In a sense the finite bandwidth of the sensor, γ_s, imposes a velocity limit on the sensor application, given by
v_max≪γ_s λ/4 π.
The first constraint on γ_s is the lowpass filter cutoff frequency. However, this frequency cannot be increased arbitrarily: when ϕ̇_max exceeds ω_m / 2, the signal sidebands around the adjacent harmonics will leak into the measurement bands of their neighbouring harmonics and corrupt the signals there. Therefore, the sensor bandwidth is ultimately limited to γ_s ≤ω_m / 2. Taking this limit for our setup, we obtain an absolute velocity limit of around 50.
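To make the bound in Eq. (<ref>) concrete, a short numerical sketch follows (an illustration only; the bandwidth and wavelength used are placeholder values rather than the parameters of our setup):

import numpy as np

def velocity_limit(gamma_s, wavelength):
    # Right-hand side of the bound v_max << gamma_s * lambda / (4 * pi),
    # with gamma_s the sensor bandwidth (as an angular frequency) and
    # wavelength the laser wavelength in metres.
    return gamma_s * wavelength / (4 * np.pi)

# Placeholder numbers only: gamma_s = 3e4 rad/s, wavelength = 1.55 um.
print(velocity_limit(gamma_s=3e4, wavelength=1.55e-6))  # about 3.7e-3 m/s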
§ SENSITIVITY DEGRADATION IN A HIGH-RMS APPLICATION
We experimentally demonstrate the nonlinear degradation of sensitivity by driving the measurement mirror with a known, high-RMS signal and observing the sensor's resulting spectrum. We make use of a moving magnet actuator, specifically a BOSEM shadow sensor <cit.> with the sensing components and circuitry removed. The coil resistance is 41.4 with an inductance of 17.8. The layout can be found in Fig. <ref>. The sensing scheme and physical layout is identical to the scheme in Ref. <cit.> and shown in Fig. 1 therein, except for the additions specified below.
We must isolate the sensor from environmental disturbances, such as seismic and acoustic couplings, as these generate uncontrolled and dominant sources of noise. To achieve this, we set up the mirror-sensor system in an acoustically isolated box on a common base plate, which is placed on foam blocks for further vibration isolation. As the sensor only measures the relative displacement between itself and the mirror, the vibrational coupling to the readout can be suppressed by many orders of magnitude. In a departure from the setup in Ref. <cit.>, we isolate the measurement mirror from the common base plate with a rubber pad (stiffer than the foam blocks) to allow some compliance to differential motion. We actuate with the coil on a stack of three RS Pro neodymium magnets (stock number 219-2231) attached to the mirror base, which allows for some residual differential drive, although most of the driven displacement remains common.
We use a ThorLabs LDC 205 C laser diode driver to provide the coil drive current. We monitor this drive current using the built-in control port of the current driver. This is essential as we find that the current driver cannot naturally drive inductance linearly, which we verified by comparing the linearity of the coil drive against the linearity of driving an equivalent resistor. This actuator nonlinearity dominates over the nonlinearity of the sensor and hence must be suppressed. We notice an even stronger nonlinearity when using a mirror-mounted PZT, which leads us to discount this otherwise much simpler actuation scheme. To improve the actuator linearity, we design and implement a feedback system where we subtract the measured drive current from the desired setpoint drive to form an error signal that controls the current driver in-loop. This scheme suppresses the sub-kilohertz nonlinear noise by around two orders of magnitude compared to the free-running performance.
We drive the current with uniform white noise, which is then band-limited with an eighth-order elliptical bandpass filter to produce a large flat signal spectrum confined to a sharply cut off frequency window. With this scheme, we can achieve a displacement RMS of around 0.2 and three orders of magnitude higher spectral density in the signal window than the residual noise at frequencies below the signal band. The RMS is sufficiently low that the displacement does not significantly change the modulation index (around 1 ppm) and so does not introduce a further source of nonlinearity.
We drive the signal injection digitally and shape this drive with digital filters implemented in the same CDS architecture that is used to read out the sensor. Therefore, we can freely shift the amplitude and frequency band of the signal to investigate the different levels of nonlinear noise that appear at low frequencies. We also simulate the sensor response in the time domain to compare the measured noise with the nonlinear noise models derived in Sec. <ref>.
Figure <ref> shows the result of deliberately inducing nonlinearities through varying ellipticity. The nonlinear noise is matched well by our time-domain simulation, particularly for large values of the elliptical error, δ. The difference between the measured and simulated noise for the smallest δ can be explained through the presence of other nonlinearities, which begin to dominate at small values of δ. Whilst some of this discrepancy can be attributed to the residual nonlinearity of the drive, it can be entirely explained by the limiting nonelliptical nonlinearity discussed in Sec. <ref>, which we show to have a similar effective contribution as a δ of 2%. In this scheme we are operating sufficiently below the velocity limit imposed in Sec. <ref>, which means this source of nonlinearity should not limit the readout.
§ SENSITIVITY PROJECTIONS IN THE LIGO VACUUM CHAMBERS
We have demonstrated the loss of sensitivity due to nonlinearities in a carefully contrived scenario. However, any scenario which requires access to the full proposed dynamic range of the sensor may quite possibly encounter problems with the sensor linearity. We mentioned the application in inertial sensors, which is highly relevant to the field of GW detection. An even closer application is in the sensing of the multi-stage suspension systems that are found in all contemporary GW detectors. In this section we take the example of the Advanced LIGO quadruple pendulum suspensions <cit.>.
The suspensions in the Advanced LIGO chambers are already placed within a seismically isolated environment on top of the so-called ISI. The suspension stages provide progressively greater levels of vibration filtering, which means their motion is, over many frequencies, even smaller than the residual motion injected by the ISI. Therefore, this application is intuitively less susceptible to nonlinearities due to its low RMS displacement.
We consider, specifically, the measurement of longitudinal displacement sensing of the top mass of the quadruple chain. This is the location of the sensing and control of most of the 24 suspension degrees of freedom and the location of the majority of the existing displacement sensors. We propose our device as a candidate for replacing precisely these sensors in order to achieve the required factor of 100 improvement in the suspension sensing noise, as laid out in Ref. <cit.>.
We generate an ISI noise model based on the measured ISI displacement spectral density. We further introduce a fit to the ISI-horizontal-to-top-mass-horizontal transfer function in order to estimate the representative relative displacement spectrum between the top mass and ISI. We pass this displacement spectrum through our model of the sensor nonlinearity, assuming a nominal elliptical error of 2%. As found in our investigations above, it is possible to reach the hard limit of the nonelliptical nonlinearities at this level. The simulated displacement spectrum with the corresponding nonlinear noise level is shown in Fig. <ref>. As shown, the nonlinear noise can be significantly higher than the quoted sensitivity during a null measurement (Fig. 2b in Ref. <cit.>). However, the spectral density of the nonlinearity clearly shadows the spectral density of the signal at a level that is around a factor of 10 lower across the whole frequency band of interest.
§ CONCLUSION
Interferometric displacement sensors are well poised to replace many existing electromagnetic and optical sensing schemes within applications requiring sub-picometre levels of sensitivity. In this paper we follow up on the investigation of our custom sensor manufactured by SmarAct from Ref. <cit.>, with a focus on the sensor's linearity. We embed this investigation within the specific context of inertial sensing and gravitational wave detection. The former use case is motivated by the sensor's future implementation on the compact six-degree-of-freedom inertial sensor prototyped in Ref. <cit.>. The latter is in recognition of the sensor's suitability as a future candidate sensor on the upgraded Advanced LIGO quadruple suspensions <cit.>.
We briefly lay out the key equations that describe the deep frequency modulation technique that we employ within our readout scheme. From this starting point, we show the possible origins of the nonlinear couplings to the sensor readout and analyse their impact on the displacement sensitivity. We find that an imprecise fit (or drift) of the ellipticity of the Lissajous figure constructed from the two orthogonal readout signals is often the dominant source of nonlinear noise for elliptical error in excess of 2%. We subsequently revisit our measurement of the current hard limit to the nonlinearity, which is generated by periodic error due to distortion of the Lissajous figure, which cannot be corrected in real-time or in post-processing. This leads to an effective elliptical error of 2%.
We construct a band-limited, high-RMS scenario in which the nonlinear error can significantly exceed both the linear noise level and the true displacement spectrum at low frequencies. We conduct an experimental demonstration of this nonlinear noise and compare the results to a time-domain simulation, which show good agreement with each other. We also estimate an order-of-magnitude figure for the maximum nonlinear noise level in the presence of a broadband signal and find that it is consistent with measurement.
Overall, the linearity of the sensor is something that should be carefully considered based on the precise parameters of a given application, particularly the RMS displacement and the required dynamic range. We find that the nonlinear noise may limit the sensitivity of inertial sensors if not managed well. However, we find that on relatively quiet platforms, such as the Advanced LIGO ISI, the linearity of the sensor is sufficient to ensure an SNR above unity within the detection band, and thus no significant improvements to the sensor performance are necessary.
Despite the generally sufficient linearity of the sensor for the applications at the heart of our investigation, there is certainly scope for further improvement of the sensor's linearity. Future experimental work on the sensor we presented herein will focus on implementing more advanced algorithms that seek to improve long-term stability and, potentially, real-time correction and linearisation of the system.
Conceptualization, D.M.; methodology, J.S. and D.M.; investigation, J.S., C.D.F., A.A. and D.M.; resources, D.M.; data curation, J.S. and D.M.; writing—original draft preparation, J.S.; writing—review and editing, C.D.F., A.A. and D.M.; visualization, J.S.; supervision, D.M.; project administration, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.
This research was funded by STFC grant numbers ST/T006609/1 and ST/W006375/1 and EPSRC grant numbers EP/V008617/1.
We thank members of the LIGO Suspension Working Group for useful discussions. J.S. and D. M. acknowledge the support of the Institute for Gravitational Wave Astronomy at the University of Birmingham, STFC Quantum Technology for Fundamental Physics schemes (Grant No. ST/T006609/1 and ST/W006375/1), and EPSRC New Investigator Award (Grant No. EP/V008617/1). D.M. is supported by the 2021 Philip Leverhulme Prize.
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
BOSEM Birmingham Optical Sensor and Electromagnetic Motor
BRS Beam Rotation Sensor
DFM Deep frequency modulation
DoF Degree of freedom
FSR Free spectral range
GW Gravitational wave
HoQI Homodyne Quadrature Interferometer
ISI Internal Seismic Isolation
LIGO Laser Interferometer Gravitational-Wave Observatory
LISA Laser Interferometer Space Antenna
PSD Power spectral density
PZT Piezoelectric transducer
RMS Root mean square
SNR Signal-to-noise ratio
References
|
http://arxiv.org/abs/2307.03317v2 | 20230706221942 | Fitted value shrinkage | [
"Daeyoung Ham",
"Adam J. Rothman"
] | stat.ME | [
"stat.ME"
] |
We propose a penalized least-squares method to fit the linear regression model with fitted values that are invariant to invertible linear transformations of the design matrix. This invariance is important, for example, when practitioners have categorical predictors and interactions. Our method has the same computational cost as ridge-penalized least squares, which lacks this invariance. We
derive the expected squared distance between
the vector of population fitted values and its shrinkage estimator as well as the tuning parameter value that minimizes this expectation. In addition to using cross validation, we construct two estimators of this optimal tuning parameter value and study their asymptotic properties. Our numerical experiments and data examples show that our method performs similarly to ridge-penalized least-squares.
Keywords: invariance, penalized least squares, high-dimensional data
§ INTRODUCTION
We will introduce a new shrinkage strategy for fitting linear regression models, which assume that the measured response
for n subjects is a realization of the random vector
Y = X β + ε,
where X∈ℝ^n× p is the nonrandom known design matrix with ones in
its first column and with values of the predictors in its remaining columns; β∈ℝ^p is an unknown vector of regression coefficients;
and ε has iid entries with mean zero
and unknown variance σ^2∈(0,∞).
We will consider fitting (<ref>)
in both low and high-dimensional settings, where the second
scenario typically has rank(X) < p.
If rank(X) < p, then it is well known that β is not identifiable
in (<ref>), i.e. there exists a
β̃≠β such that X β=Xβ̃. Similarly, if rank(X)< p, then
there are infinitely many solutions to the least-squares problem:
min_ b∈ℝ^pY-X b^2.
Given this issue (which is unavoidable in high dimensions),
our inferential target is X β,
which is the expected value of the response for the
n subjects.
To describe least squares estimators whether
rank(X)< p or rank(X) = p, we will use the reduced
singular value decomposition of X. Let q= rank(X).
Then X = UDV', where U∈ℝ^n× q with U'U=I_q;
V∈ℝ^p× q with V'V=I_q; and
D∈ℝ^q× q is diagonal with positive diagonal entries.
The Moore–Penrose generalized inverse of X is X^- = V D^-1 U'
and a least-squares estimator of β is β̂=X^-Y.
The vector of fitted values is Xβ̂ = XX^-Y = P_X Y, where
P_X = XX^-=UU'. If rank(X)=p, then P_X = X( X' X)^-1 X'.
A nice property of this least-squares method is that its fitted
values are invariant
to invertible linear transformations of the design matrix.
Suppose that we replace X by X_∙ = XT, where T∈ℝ^p× p
is invertible. Then X = X_∙ T^-1. So (<ref>) is
Y = X β + ε
= X_∙ T^-1β + ε = X_∙β_∙ + ε,
where β_∙ = T^-1β.
We estimate X β = X_∙β_∙
with P_X Y = P_X_∙ Y, so the fitted values did not change
by changing X to X_∙.
Fitting (<ref>) by penalized least-squares has been studied by many scholars. Well-studied penalties include the ridge penalty <cit.>,
the bridge/lasso penalty <cit.>, the adaptive lasso penalty <cit.>, the SCAD penalty <cit.>,
and the MCP penalty <cit.>. Unfortunately, these methods'
fitted values are not invariant to invertible linear
transformations of X. This is particularly problematic when categorical
variables (with three or more categories) and their interactions
are encoded in X because a change in the coding can change the fit.
This lack of invariance is also present in
principal components regression <cit.>,
and partial least squares <cit.>.
§ A NEW SHRINKAGE METHOD FOR LINEAR REGRESSION WITH
INVARIANCE
§.§ Method description
To preserve the invariance to invertible linear transformations of the design matrix
discussed in the previous section,
we will use penalties that can be expressed as a function of the n-dimensional vector
Xb, where b is the optimization variable
corresponding to β. We propose the
penalized-least-squares estimator of β defined by
argmin_ b ∈ℝ^p{ Y - Xb^2 + λXb - Y̅ 1_n^2 },
where Y̅ = 1_n'Y/n, 1_n'=(1,…,1)∈ℝ^n; λ∈ [0,∞) is a tuning parameter.
As λ increases, the fitted values are shrunk towards
the intercept-only model's fitted values Y̅ 1_n.
Let γ = 1/(1+λ). We can express this optimization
that defines our estimator as
β̂^(γ)∈ argmin_ b∈ℝ^p{γ Y - Xb^2 + (1-γ) Xb - Y̅ 1_n^2 },
where γ∈ [0,1]. The ∈ is used because
there are infinitely many global minimizers for the optimization in (<ref>)
when rank(X) < p. Conveniently, a global solution to (<ref>) is
available in closed form:
β̂^(γ) = X^-{γ Y + (1-γ) Y̅ 1_n}.
We derived this using first-order optimality.
Let P_1 = 1_n ( 1_n' 1_n)^-1 1_n'.
The estimator of Xβ is
Xβ̂^(γ) = γ P_X Y + (1-γ) P_1 Y,
which is simply a convex combination of the least-squares fitted values P_X Y
and the intercept-only model's fitted values P_1 Y = Y̅ 1_n.
Since P_X= P_X_∙, where X_∙ = XT with T∈ℝ^p× p invertible, Xβ̂^(γ) is invariant to invertible linear transformations
of X.
Due to its computational simplicity, β̂^(γ) is
a natural competitor to ridge-penalized least squares, which lacks this
invariance property. Both methods can be computed efficiently
when p is much larger than n by using the reduced singular value
decomposition of X <cit.>. Specifically,
they both cost O(n rank^2(X))
floating-point operations.
If γ=0 for our method
and λ→∞
for ridge-penalized least squares (without intercept
penalization), then both procedures fit the intercept-only model.
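A minimal numerical sketch of the closed-form fit described above is given below (an illustration only, not part of the method's definition; the Python function and variable names are our own choices):

import numpy as np

def fitted_value_shrinkage(X, y, gamma):
    # Fitted values gamma * P_X y + (1 - gamma) * ybar * 1_n, computed from the
    # reduced singular value decomposition of X.
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    keep = d > d.max() * max(X.shape) * np.finfo(float).eps   # numerical rank
    U = U[:, keep]
    proj_X_y = U @ (U.T @ y)                   # least-squares fitted values P_X y
    proj_1_y = np.full(y.shape, y.mean())      # intercept-only fitted values P_1 y
    return gamma * proj_X_y + (1.0 - gamma) * proj_1_y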
We will derive
an optimal value of γ that minimizes
𝔼Xβ̂^(γ)-X β^2
and propose two estimators of it:
one for low dimensions
and one for high dimensions. We also explore
using cross validation to select γ when the response and predictor measurement pairs are drawn from a joint
distribution. Conveniently, our results
generalize to shrinkage towards a submodel's
fitted values P_X_0Y, where
X_0 is a matrix with a proper subset of the
columns of X.
§.§ Related work
<cit.> proposed to predict a future value of the response
for the ith subject with a convex combination of its fitted value (from ordinary least squares) and Y̅. Although our methods are related,
<cit.> used a future-response-value prediction paradigm and
did not establish a theoretical analysis of his approach.
§ THEORETICAL PROPERTIES OF THE METHOD
Given the lack of identifiability of β in high dimensions, we investigate the estimation of the n-dimensional vector X β with Xβ̂^(γ). This
is an example of same-X prediction <cit.>.
It is related to predicting near X when p>n
<cit.>.
Suppose that the linear regression model specified in (<ref>) is
true (this model did not specify an error distribution, just that they
are iid mean 0 and variance σ^2∈(0,∞)).
Then we have the following result:
For all (n, p) ∈{1,2,…}×{1,2,…},
𝔼Xβ̂^(γ)-Xβ^2=σ^2(γ^2 r+1-γ^2)+(1-γ)^2μ-P_1 μ^2.
The proof of Proposition <ref> is in Section <ref>. When γ=1, which is least squares,
𝔼Xβ̂^(1)- X β^2 = σ^2 rank(X).
The right side of the equality in Proposition <ref> is minimized when γ = γ_ opt,
where
γ_ opt=μ - P_1 μ^2/σ^2 ( rank(X) -1) + μ - P_1 μ^2,
and μ=X β.
So the best our procedure could do is when μ - P_1 μ^2=0 (that is, the intercept-only model is correct), in which case
γ_ opt=0 and 𝔼Xβ̂^(0)- X β^2 = σ^2.
§ SELECTION OF Γ
§.§ Low-dimensional case
Let σ̂^2 = Y - P_X Y^2/(n- rank(X)),
which is an unbiased estimator of σ^2.
To construct an estimator of γ_ opt, we
use the ratio of P_X Y - P_1 Y^2 -σ̂^2 ( rank(X)-1), which is an unbiased estimator of γ_ opt's numerator, to P_X Y - P_1 Y^2, which is an unbiased estimator of γ_ opt's denominator.
This ratio estimator can be expressed as
P_XY - P_1Y^2 -σ̂^2 ( rank(X)-1)/P_XY - P_1Y^2 = 1-σ̂^2( rank(X)-1)/P_XY - P_1Y^2
= 1-1/F,
where F is the F statistic that compares the intercept-only model to the full model:
F = (Y - P_1Y^2 - Y - P_XY^2)/( rank(X)-1)/Y - P_XY^2/(n- rank(X))
= P_XY - P_1Y^2/σ̂^2( rank(X)-1).
Since F can be realized as less than one (which corresponds to a failure to reject the intercept-only model), we define our estimator of γ_ opt to be
γ̂= (1-1/F) · 1(F > 1)
If the regression errors in (<ref>) are Normal,
n > rank(X), and rank(X) > 1,
then F has a non-central F-distribution with
degrees of freedom parameters rank(X)-1 and n- rank(X); and noncentrality parameter μ - P_1 μ^2/σ^2.
Larger realizations of F correspond to worse intercept-only model fits compared to the
full model, which makes γ̂ closer to 1.
We also explore two additional estimators of γ_ opt:
γ̂_90=(1-1/F) · 1(F≥ f_0.9),
γ̂_95=(1-1/F) · 1(F≥ f_0.95),
where f_0.9 and f_0.95 are the 0.9 and 0.95 quantiles of the central F-distribution with degrees of freedom rank(X)-1 and n- rank(X).
These estimators may perform better when γ_ opt is near zero because they have a greater probability of estimating γ_ opt as zero than γ̂ has.
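For illustration, the sketch below computes F, γ̂, and the quantile-thresholded variants from a design matrix and response vector (our own illustrative code, with our choice of names):

import numpy as np
from scipy import stats

def gamma_hat(X, y, threshold=None):
    # F statistic comparing the intercept-only model to the full model, and the
    # resulting estimate of gamma_opt; threshold=None gives gamma-hat, while
    # threshold=0.90 or 0.95 gives the quantile-thresholded variants.
    n = len(y)
    r = np.linalg.matrix_rank(X)
    fitted = X @ np.linalg.lstsq(X, y, rcond=None)[0]      # P_X y
    rss_full = np.sum((y - fitted) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    F = ((rss_null - rss_full) / (r - 1)) / (rss_full / (n - r))
    cutoff = 1.0 if threshold is None else stats.f.ppf(threshold, r - 1, n - r)
    return 1.0 - 1.0 / F if F >= cutoff else 0.0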
Interestingly, <cit.> proposed to predict a
future response value for the ith subject
with (1-ρ)β̂'x_i + ρY̅, where ρ∈[0,1] is estimated and β̂ is the ordinary least-squares estimator.
They derived 1/F as an estimator of ρ from the normal equations for the regression of
Y_ new, i on β̂'x_i, (i=1,…, n),
where Y_ new, i is an independent copy of Y_i.
They also discussed using truncation to ensure their estimator
of ρ is in [0,1].
§.§ Consistency and the
convergence rate of γ̂
We analyze the asymptotic performance
of γ̂ when the
data are generated from
(<ref>) and n and p grow together.
Define r = rank(X) and δ^2=μ-P_1 μ^2. The optimal tuning parameter
value is a function of r and δ^2,
so its value in the limit
will depend on these sequences.
Assume that the data-generating
model in (<ref>) is correct, that the errors
have a finite fourth moment, and that r ≥ 2.
If p/n→τ∈ [0,1) and
either r→∞
or δ^2→∞,
then γ̂- γ_ opt→_P 0 as n→∞.
The proof of Proposition <ref> is in Section <ref>.
We see that consistency is possible whether or not the design matrix rank r grows. If r is
bounded, then consistency requires
δ^2=μ-P_1 μ^2→∞,
which is reasonable even when the intercept-only model is a good approximation because n is growing.
One can also show consistency of γ̂_90 and γ̂_95 with δ^2=o(r) added to the assumptions for Proposition <ref>.
Next, we establish a bound on the rate of convergence of γ̂ with further assumptions on the design matrix X and the error ε.
Suppose that the assumptions of Proposition <ref> hold, that the errors
in (<ref>) are Gaussian,
and that r ≥ 6 is nondecreasing as n→∞. Then
γ̂-γ_ opt =
O_P(r^-1/2) if δ^2=O(r)
O_P((δ^2/r)^-3/4)+O_P(n^-1/2(δ^2/r)^-1/2) if r →∞ and r=o(δ^2)
O_P((δ^2)^-3/4)+O_P(n^-1/2(δ^2)^-1/2) if r=O(1).
The proof of Proposition <ref> is in Section <ref>. From the definition of γ_ opt, we know that γ_ opt→ 0 when δ^2 =o(r), in
which case γ̂-γ_ opt=O_P(n^-1/2)
provided that r≍ n. On the other hand,
γ_ opt→ 1 when r = o(δ^2). For example,
γ̂-γ_ opt=O_P(n^-1/2)
provided that δ^2 ≍ n^2/3
and r is bounded.
When δ^2=o(r),
γ̂_90,γ̂_95
and γ̂ all have the same
convergence rate bound.
§.§ Tuning parameter selection in high dimensions
Estimating the unknown parameters in
γ_ opt is challenging when p > n
and rank(X) = n. For example, it is impossible to estimate the regression's error variance σ^2 without assuming something extra about μ=X β. This is because the
data-generating model in (<ref>) reduces to
Y = μ + ε,
where μ has n unknown free parameters and ε has iid entries
with mean zero and variance σ^2. So we have a sample size
of 1 to estimate each μ_i, which is not enough if we also want
to estimate σ^2.
We explore using cross-validation to choose a
value of γ that minimizes the total
validation squared error in our numerical experiments. This cross-validation procedure implicitly assumes that
the response and predictor measurement pairs for
each subject are drawn from a joint distribution.
As an alternative, we derive a high-dimensional estimator of γ_ opt that estimates σ^2 with an assumption about μ. The following paragraphs introduce this estimator, which is not invariant to invertible
linear transformations of X.
Recall that γ_ opt=δ^2/(σ^2( rank(X)-1)+δ^2), where δ^2=μ-P_1 μ^2.
Since P_X is an identity operator when rank(X)=n,
𝔼Y-P_1Y^2=𝔼P_XY-P_1Y^2=σ^2( rank(X)-1)+δ^2.
So given an estimator σ̌^2 of σ^2,
we study the following plug-in estimator of γ_ opt:
γ(σ̌^2)= max(0, (Y-P_1Y^2-σ̌^2( rank(X)-1))/Y-P_1Y^2),
where the truncation at 0 is necessary to ensure that γ∈[0,1]. We continue by describing the
estimator of σ^2 that we will use in (<ref>).
If one ignores invariance and assumes Gaussian errors, then one could simultaneously estimate β and σ^2
by penalized likelihood with the same penalty used in (<ref>). However, this joint optimization is not convex. We avoid this nonconvexity by modifying a reparametrized penalized Gaussian likelihood optimization problem proposed by <cit.>.
Let η=σ^-1 and β^*= βη.
We estimate these parameters with
(β̂^*,η̂)= argmin_( β^*, η) ∈ℝ^p × (0, ∞){ (2n)^-1Yη-X β^*^2- log(η)+αβ^*_-1^2 },
where β^*=(β^*_1,…, β^*_p)=(β^*_1, β^*_-1); and
α=n(2Y-P_1Y^2)^-1.
This choice of α was motivated by <cit.>, who verified that ridge regression (with tuning parameter α) can
be used to consistently estimate σ^2
provided that αβ^2=o(1). However,
<cit.> use
a different estimator of σ^2 than
the transformed solution to (<ref>).
We also examine other choices for α in the simulations (see section <ref>).
The reparametrized optimization problem in (<ref>) is strongly convex with the following global minimizer:
η̂ =(n^-1Y'(I-X(X'X+2nα M)^-1X')Y)^-1/2,
β̂^* =η̂(X'X+2nα M)^-1X'Y,
where M= diag(0,1,1,…,1)∈ℝ^p× p.
Since η=σ^-1, which is estimated using (<ref>), the corresponding estimator of σ^2 is
σ̌^2=n^-1Y'(I-K)Y,
where K=X(X'X+2nα M)^-1X'.
<cit.> proposed a bias corrected version of σ̌^2, since the uncorrected
estimator does not converge to σ^2.
Their corrected estimator is σ̌^2_c=C^-1σ̌^2, where C=1- tr(K)/ rank(X). Using σ̌^2_c in (<ref>), we define our
high-dimensional estimator of γ_ opt
by
γ_c = max(0, Y'(I-P_1- (( rank(X)-1)/( rank(X)- tr(K))) (I-K))Y / Y'(I-P_1)Y),
We have the following
consistency result for γ_c.
Assume that the data-generating
model in (<ref>) is correct, that the errors
follow a distribution with finite fourth moment, and that δ^2=o(n) and d_2 (nα)^-1=o_P(1), where d_2 is
the second-largest eigenvalue of X'X.
Then γ_c - γ_ opt→_P 0 as n→∞.
The proof is in Appendix <ref>.
We see that γ_c converges to γ_ opt
when γ_ opt→ 0.
The assumption that d_2 (nα)^-1=o_P(1) is met
when α=n (2Y-P_1Y^2)^-1, δ^2=o(n),
and d_2=O(1) because
d_2 (nα)^-1 =2n^-2d_2Y-P_1Y^2
=2n^-2d_2(δ^2+2( μ-P_1 μ)' ε+ ε'(I-P_1) ε)→_P 0,
since ε^2=O_P(n).
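For illustration, γ_c can be computed directly from the formulas above; the sketch below does so with α=n(2Y-P_1Y^2)^-1 (an illustration only, with our choice of variable names):

import numpy as np

def gamma_c(X, y):
    # Plug-in estimate of gamma_opt for high-dimensional designs, using the
    # bias-corrected variance estimator derived from the reparametrized ridge fit.
    n, p = X.shape
    r = np.linalg.matrix_rank(X)
    tss = np.sum((y - y.mean()) ** 2)                  # Y'(I - P_1)Y
    alpha = n / (2.0 * tss)
    M = np.eye(p)
    M[0, 0] = 0.0                                      # the intercept is not penalized
    K = X @ np.linalg.solve(X.T @ X + 2 * n * alpha * M, X.T)
    sigma2 = y @ ((np.eye(n) - K) @ y) / n             # uncorrected variance estimator
    sigma2_c = sigma2 / (1.0 - np.trace(K) / r)        # bias-corrected estimator
    return max(0.0, (tss - sigma2_c * (r - 1)) / tss)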
§ SHRINKING TOWARD THE FITTED VALUES OF A SUBMODEL
The previous sections developed our new shrinkage method
for linear regression with fitted values that are
invariant to invertible linear transformations of
the design matrix X. Its shrinkage target was
the fitted values of the intercept-only model P_1Y.
We can generalize this so that the shrinkage target
is P_X_0 Y, where X_0 is the design matrix for a submodel formed from a proper subset of the columns of X,
e.g. X_0 = 1_n (the first column of X) as it was previously.
The generalized shrinkage
estimator of β in (<ref>) is
β̃^(γ)∈_ b ∈ℝ^p{γY - Xb^2 + (1-γ) Xb - P_X_0 Y^2 },
where γ∈[0,1].
This generalization could
be useful when μ - P_1 μ^2
is large but μ - P_X_0μ^2 is small, where μ=X β. All of the results obtained for the special
case that X_0=1_n also hold in this general case with
P_1Y replaced by P_X_0 Y; and with rank(X)-1
replaced by rank(X) - rank(X_0). The proofs follow by making these replacements in the proofs from
the special case that X_0=1_n.
We continue by
stating these generalized results.
The generalized fitted-value shrinkage method's fitted values are
Xβ̃^(γ) = γ P_XY + (1-γ) P_X_0 Y.
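A small computational sketch of these generalized fitted values is given below (an illustration only; X_0 is assumed to hold a subset of the columns of X, such as the terms of a no-interaction submodel):

import numpy as np

def submodel_shrinkage_fit(X, X0, y, gamma):
    # Generalized fitted values gamma * P_X y + (1 - gamma) * P_{X_0} y.
    def project(A, v):
        U, d, _ = np.linalg.svd(A, full_matrices=False)
        U = U[:, d > d.max() * max(A.shape) * np.finfo(float).eps]
        return U @ (U.T @ v)
    return gamma * project(X, y) + (1.0 - gamma) * project(X0, y)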
For all (n, p) ∈{1,2,…}×{1,2,…},
𝔼Xβ̃^(γ)- X β^2 = σ^2 {γ^2 rank(X) + (1-γ^2) rank(X_0) } + (γ-1)^2 μ-P_X_0μ^2.
The right side of (<ref>) is minimized when γ=γ̃_ opt,
where
γ̃_ opt=μ - P_X_0μ^2/σ^2 ( rank(X) - rank(X_0)) + μ - P_X_0μ^2.
We can estimate γ̃_ opt in low dimensions with
γ̃= (1-1/F̃) · 1(F̃ > 1),
where F̃ is the F-statistic for comparing the submodel X_0
to the full design matrix model X:
F̃ = (Y - P_X_0Y^2 - Y - P_XY^2)/( rank(X)- rank(X_0))/Y - P_XY^2/(n-p)
= P_XY - P_X_0Y^2/σ̂^2( rank(X)- rank(X_0)).
Recall that r= rank(X) and let r_0 = rank(X_0).
Assume that the data-generating
model in (<ref>) is correct, that the errors
follow a distribution with finite fourth moment, and that
r-r_0 ≥ 1.
If p/n→τ∈ [0,1) as n→∞ and
either (r-r_0)→∞
or δ^2→∞,
then γ̃- γ̃_ opt→_P 0.
Suppose that the assumptions of Proposition <ref> hold, that the errors
in (<ref>) are Gaussian,
and that r-r_0 ≥ 5 is nondecreasing as n→∞. Then,
γ̃-γ̃_ opt =
O_P((r-r_0)^-1/2) if δ^2=O(r-r_0)
O_P((δ^2/(r-r_0))^-3/4)+O_P(n^-1/2(δ^2/(r-r_0))^-1/2) if (r-r_0) →∞ and r-r_0=o(δ^2)
O_P((δ^2)^-3/4)+O_P(n^-1/2(δ^2)^-1/2) if r-r_0=O(1).
§ SIMULATION STUDIES
§.§ Low-dimensional experiments
We conducted a lower-dimensional simulation study in which
the data were generated from the linear regression model (<ref>) with n=300 subjects and p∈{75, 150}.
Also, ϵ_1,…, ϵ_n are iid N(0,1).
The design matrix X has ones in its first column and independent
draws from N_p-1(0, Σ) in the remaining entries on each row,
where Σ_jk=0.5^|j-k|.
We randomly generated the regression coefficient vector with the following equation:
β= X^-(1_p+τ Z),
where τ∈{0,10^-6,10^-4,10^-2,1,10^0.5,10^1,10^1.5};
and Z is N_p(0, I). Then
μ-P_1 μ=(P_X-P_1)(1_p+τ Z)=τ (P_X-P_1) Z.
So τ controls the size of δ^2=μ-P_1 μ^2.
We used 50 independent replications in each setting.
In each replication, we measured the performance of each estimator β̂_ est
using the same-X loss:
n^-1X β-Xβ̂_ est^2.
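One replication of this design can be sketched as follows (an illustration only; we read the vector 1+τ Z in (<ref>) as n-dimensional so that the product with X^- is defined, and we score the least-squares fit as an example estimator):

import numpy as np

rng = np.random.default_rng(0)
n, p, tau, sigma = 300, 75, 1.0, 1.0
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p - 1), np.arange(p - 1)))
X = np.column_stack([np.ones(n),
                     rng.multivariate_normal(np.zeros(p - 1), Sigma, size=n)])
beta = np.linalg.pinv(X) @ (np.ones(n) + tau * rng.standard_normal(n))
y = X @ beta + sigma * rng.standard_normal(n)
beta_hat = np.linalg.pinv(X) @ y                         # least-squares estimator, as an example
same_X_loss = np.sum((X @ beta - X @ beta_hat) ** 2) / n # n^{-1} * squared same-X error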
The candidate estimators β̂_ est that we considered
were the following:
* 2n-G: L_2-squared penalty with 10-fold cross validation for γ (<ref>).
* 2n-Or: L_2-squared penalty using the oracle γ_ opt in (<ref>).
* 2n-Es: L_2-squared penalty using γ̂ in (<ref>).
* 2n-Es90: L_2-squared penalty using γ̂_90 in (<ref>).
* 2n-Es95: L_2-squared penalty using γ̂_95 in (<ref>).
* 2n-Rep: L_2-squared penalty using σ̌^2_c in (<ref>), α=n(2Y-P_1Y^2)^-1 in (<ref>), and the corresponding γ_ c,low= max(0,1-1/F_ rep), where F_ rep=P_XY-P_1Y^2/(σ̌^2_c( rank(X)-1)).
* O: The ordinary least-squares (OLS) estimator given by
β̂^(1)=X^-Y.
* R: Ridge-penalized least squares <cit.>
β̂_ Ridge= argmin_b ∈ℝ^pY-Xb^2+λb_-1^2,
where b=(b_1,b_-1) with b_-1∈ℝ^p-1; 10-fold cross validation is used for the selection of λ.
* L: Lasso-penalized least squares <cit.>
β̂_ LASSO= argmin_b ∈ℝ^pY-Xb^2+λb_-1_1,
where 10-fold cross validation is used for the selection of λ.
For the methods that require cross validation,
λ and γ were
selected from {10^-7+0.25j:j=0,1,⋯,44} and {k/99:k=0,1,⋯,99}, respectively. To facilitate the fairest
comparison between our invariant methods and the ridge/lasso methods,
we used the following standardization process, which is the default
process used by the R package glmnet:
the ridge/lasso shrunken coefficient estimates are computed using
the standardized design matrix X_∙ defined by
X_∙ =XT, where T∈ℝ^p× p is the invertible matrix with T_11=1, first-row entries T_1j=-X̄_j S_j^-1 and diagonal entries T_jj=S_j^-1 for j ∈{2,…,p}, and all other entries equal to zero, so that each non-intercept column of X_∙ is centered and scaled to have unit sample standard deviation. Here X̄_j = n^-1∑_i=1^n X_ij and S_j^2 = (n-1)^-1∑_i=1^n (X_ij-X̄_j)^2 for j ∈{2,…,p}.
Let β̂_∙ be the shrinkage estimator of the standardized
coefficients. Since all the S_j's will be positive, we invert T to estimate the original β with T^-1β̂_∙.
Our proposed fitted-value shrinkage procedures are invariant to this standardizing transformation of X.
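A quick numerical check of this invariance is sketched below (an illustration only; it reuses the fitted_value_shrinkage sketch given earlier):

import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 6
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = rng.standard_normal(n)

# Build the standardizing transformation T described above.
T = np.zeros((p, p))
T[0, 0] = 1.0
means = X[:, 1:].mean(axis=0)
sds = X[:, 1:].std(axis=0, ddof=1)
T[0, 1:] = -means / sds
T[np.arange(1, p), np.arange(1, p)] = 1.0 / sds

fit_raw = fitted_value_shrinkage(X, y, gamma=0.5)        # sketch defined earlier
fit_std = fitted_value_shrinkage(X @ T, y, gamma=0.5)
print(np.allclose(fit_raw, fit_std))                     # True: the fitted values coincide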
We display side-by-side boxplots of the same-X losses from the 50 replications when p=75 in Figure <ref>. Additional boxplots are in Figure <ref> in Appendix <ref>. Without surprise, our fitted-value shrinkage with oracle tuning 2n-Or performed the best among these candidates. Our proposed estimator 2n-Es and its two variants 2n-Es95 and 2n-Es90 followed and generally outperformed OLS, Ridge, and Lasso. Of the fitted-value shrinkage estimators, 2n-Es outperformed 2n-G and 2n-Rep. For smaller τ values (that correspond to smaller δ^2 values), the modified thresholds 2n-Es90 and 2n-Es95 performed better than 2n-Es. Ridge and Lasso performed similarly to 2n-Es when
δ^2 was small, but performed worse when δ^2 was larger. We also graphed the average same-X loss values over the 50 replications as a function of λ in order to compare the fitted-value shrinkage (<ref>) to Ridge regression (<ref>) in Figure <ref> and Figure <ref>. The minimum average same-X loss for fitted value shrinkage (<ref>) was either lower than or nearly equal to that of Ridge (<ref>). However, the range of values of λ that corresponded to average losses near the minima was much narrower for fitted-value shrinkage than it was for Ridge when medium
to large values of τ were used (Figure <ref>, <ref>).
We also display side-by-side boxplots of the observed same-X losses from the 50 replications as well as the average loss values over the 50 replications as a function of λ when p=150
in Figure <ref>. Additional boxplots are in Figure <ref> in Appendix <ref>. These results with p=150 are similar to results when p=75:
the oracle method 2n-Or was the best and
our proposed estimators 2n-Es90, 2n-Es95, 2n-Es were the most competitive. However, the performance gap between
our procedure with oracle tuning 2n-G and our procedure using
non-oracle tuning has increased (Figure <ref>, <ref>).
We expect this is related to the narrower valley observed in the
graph of the average same-X loss as a function of the tuning parameter.
§.§ High-dimensional experiments
We used the same data generating model as in Section <ref>
except that n=200, p=300, and σ∈{2,3}.
Since 2n-Es is not applicable in high dimensions, we tested variants of 2n-G and 2n-Rep
that used 5, 10, and n-fold cross validation. They are labeled 2n-5fold, 2n-10fold, 2n-Loocv, respectively. In addition, we tried different α values that control the matrix K in (<ref>):
* 2n-Rep1: Same as original 2n-Rep with α=n(2Y-P_1Y^2)^-1.
* 2n-Rep2: Same as original 2n-Rep with α=n(2β̂^(1)^2)^-1, where the OLS estimator β̂^(1) is defined in (<ref>).
* 2n-Rep3: Same as original 2n-Rep with α=n(2β̂^(1)^3)^-1.
* 2n-Rep4: Same as original 2n-Rep with α=n(2β̂^(1)^4)^-1.
In Figure <ref>,
we display side-by-side boxplots of the observed same-X losses from the 50 replications when n=200 and p=300. There are additional boxplots displayed in Figure <ref> in Appendix <ref>.
In general, when δ^2=μ-P_1μ^2 is large relative to nσ^2, our proposed high-dimensional estimator and its variants: 2n-Rep1 to 2n-Rep4 perform better than Ridge, Lasso, and 2n-Gs (Figure <ref>, <ref>). Furthermore, larger δ^2 led to improved tuning-parameter selection (Figure <ref>, <ref>, <ref>, <ref>, <ref>). In these situations, Ridge, Lasso, and 2n-Gs peformed poorly. However, 2n-Gs, Ridge, and Lasso performed better when δ^2 is
small relative to nσ^2. In this setting, 2n-Rep1 and 2n-Rep4 struggled (see Figure <ref>, <ref>).
Except for a few cases, the number of folds used for tuning parameter
selection for 2n-G did not have a
significant impact on the prediction accuracy.
In Figure <ref>, we display average loss values over the 50 replications as a function of λ when n=200 and p=300.
These results look similar to lower-dimensional results displayed in Figure <ref>, except the curve valleys are narrower for fitted-value shrinkage.
§ DATA EXAMPLES
§.§ Low dimensional data experiments
We compared our proposed fitted-value shrinkage procedures to competitors
on three data sets. We used the same
non-oracle estimators as the previous section except we excluded 2n-Es90 because it performed similarly to 2n-Es95.
Each data example was analyzed using the following procedure:
For 50 independent replications, we randomly selected 70% of the subjects for the training set and used the remaining subjects as the test set. Tuning parameter selection was done using the training set and prediction performance was measured using squared error loss on the test set. The following is the short description of the three low-dimensional data set we examined.
(FF): The Forest Fire <ref> data are from <cit.> and are stored at the UCI Machine learning repository via <https://archive.ics.uci.edu/dataset/162/forest+fires>. There are 517 observations corresponding to forest fires in Portugal from 2000 to 2003. The response is the total burned area (in ha) from the fire, which was transformed with x↦ ln(x+1), as suggested by <cit.>. There were originally 13 attributes. However, in the pre-processing step, since we are not focusing on spatio-temporal methods, we excluded time, date and location coordinates. After this processing, the full-data design matrix had (n,p)=(517,9) with 8 numerical-variable columns and one intercept column.
(GDP): The GDP data <ref> are from <cit.>. These data consist of 161 observations of GDP growth rates for the two periods 1965-1975 and 1975-1985. The data are also in the R package quantreg <cit.>. The response is Annual change per capita GDP. There are 13 numerical predictors, e.g. Initial per capita GDP, Life expectancy.
We also added a quadratic term for the predictor Black Market Premium. After processing, the full-data design matrix has (n,p)=(161,15).
(FC): The Forecast data set <ref> is from <cit.> for the purpose of bias correction for the Local Data Assimilation and Prediction System (LDAPS), which is a numerical weather report model used by Korea Administration (KMA), Seoul, South Korea. It has a public access through <https://archive.ics.uci.edu/dataset/514/bias+correction+of+numerical+prediction+model+temperature+forecast>. The data are regional observations from 2013 to 2017, from which we randomly selected 500. We used the true maximal temperature of the next day as the response and removed date, station ID, and true minimal temperature of the next day.
The full-data design matrix had (n,p)=(500,20).
In Table <ref>, we display mean squared prediction errors averaged over 50 training/test set splits for
the three data examples <ref>, <ref>, and <ref>. Our fitted-value shrinkage estimators performed similarly to Ridge and Lasso, which both lack invariance to invertible linear transformations of the design matrix.
§.§ Low dimensional data analyses with categorical variables and their interactions
We analyzed two data sets from existing R packages to illustrate the performance of our estimators when categorical variables with interactions are present in the model. The competitors and setup for the data experiments are nearly identical to the previous Section <ref>,
except we added four new fitted-value shrinkage estimators that shrink
toward the submodel without interactions instead of the intercept-only model. These new submodel shrinkage methods are labeled
2n-Repsb, 2n-Gsb, 2n-Essb, 2n-Es95sb,
and they respectively correspond to 2n-Rep, 2n-G, 2n-Es, and 2n-Es95. The following is a description of
the examples:
(Dia-1): The Diamonds data set is from Diamonds data frame in the R package Stat2Data <cit.>, and it was obtained from <https://awesomegems.com/>. The are n=351 subjects and the response is the price of the diamond (in dollars). There are 3 numerical predictors: size, depth, and price per carat. We divided price per carat and the total price by 1000. There are 2 categorical predictors: color (with levels D to J) and clarity (with levels IF, VVS1, VVS2, VS1, VS2, SI1, SI2, and SI3). We divided color into 5 levels (D, E, F, G, and (H,I,J)), and categorized the clarity into 3 levels ((IF, VVS1, VVS2), (VS1, VS2), (SI1, SI2, SI3)). We used reference-level coding in the design matrix, where (H,I,J) was the reference level for color; and (SI1, SI2, SI3) was the reference level for clarity. Interactions between color and price per carat as well as clarity and price per carat were added. The full design matrix has (n,p)=(351,16) and the submodel with linear terms only has p=10.
(Dia-2): The setting is identical to that of <ref>, except we used the category (VS1, VS2) as the reference level for coding the categorical predictor clarity.
(NG-1): The NaturalGas data is from <cit.>, and is in the R package AER <cit.>.
There are 138 observations on 10 variables. We removed state name and year
and added an interaction between state code and heating degree days. The
reference level for state code, which is the only categorical predictor,
was set to 45 (UT). The full-data design matrix had
(n,p)=(138,17) and the submodel without interactions had p=12.
(NG-2): This is the same as <ref>, except the reference level for state code was set to 5 (CA).
We display mean squared prediction errors averaged over 50 training/test set splits for these data examples in Table <ref>. Our proposed estimators performed similarly or better than Ridge and Lasso. We also notice that changing the way that categorical predictors were encoded in the design matrix changes the performance of Ridge and Lasso, which lack invariance.
§.§ High dimensional data experiments
For high-dimensional data examples, we randomly selected subjects
from existing data sets so that there were fewer subjects than predictors.
We used the same splitting and evaluation procedure that we used in Sections <ref> and <ref>. The competitors are same as those considered in Section <ref>. The following is a description of the examples:
(mtp): The data set mtp comes from <cit.> and is available at the OpenML repository via <https://www.openml.org/search?type=data status=active id=405>. There are 4450 subjects with 203 numerical measurements. The response is oz203. We randomly selected 120 subjects and removed the 23 predictors that had fewer than 30 distinct values, which ensured that there were no constant columns in the 120-row design matrix other than the intercept column. The full-data design matrix had (n,p)=(120,180).
(topo): The topo.2.1 data set is from <cit.> and is available through the OpenML repository via <https://www.openml.org/search?type=data sort=runs id=422 status=active>. There are 8885 subjects with 267 numerical measurements. The response is oz267. We randomly selected 180 subjects and removed the 22 predictors that had fewer than 30 distinct values. After this, there were 34 constant columns (other than the intercept) that were also removed. The R code for this processing is in Appendix <ref>. The full-data design matrix has (n,p)=(180,214).
(pah): The pah data set comes from <cit.>, and is also available from OpenML repository via <https://www.openml.org/search?type=data status=active id=424>. There are 80 subjects with 113 numerical measurements. The response is oz113. The full-data design matrix had (n,p)=(80,113).
In Table <ref>, we report the
mean squared prediction errors averaged over 50 training/test set splits for these data examples. We see that 2n-Rep2, 2n-Rep3, 2n-Rep4 had inferior prediction performance compared to 2n-Rep1. In contrast to its same-X loss performance in simulations, the cross validation version of our method 2n-G gave reasonable out-of-sample prediction performance. Lasso followed by Ridge were the best, but 2n-Rep1 was competitive and had lower variance.
§ APPENDIX
§.§ Proof of Proposition <ref>
We start with the following standard decomposition:
𝔼Xβ̂^(γ)-Xβ^2= tr( var(Xβ̂^(γ)-Xβ))+
𝔼(Xβ̂^(γ)-Xβ)^2.
From (<ref>),
tr( var(Xβ̂^(γ)-Xβ)) = tr(σ^2 (γ^2 P_X +(1-γ^2) P_1) ),
=σ^2(γ^2 r+1-γ^2)
𝔼(Xβ̂^(γ)-Xβ) =(1-γ)(μ-P_1μ),
where μ = Xβ. Combining above two equations, we conclude that the statement holds.
§.§ Proof of Proposition <ref>
Let V=(V_1,…,V_n) be an n-variate random variable with i.i.d. elements V_i that have mean μ, variance σ^2, and finite fourth moment 𝔼(V_i^4)≤ M for all i=1,…,n, and let a_ij denote the (i,j)-th element of A. Then, for a symmetric non-negative definite A∈ℝ^n× n,
var(V'AV)≤ 2(M-3σ^4)∑_i=1^n a_ii^2+4σ^4 tr(A^2)+8σ^2μ'A^2μ
holds.
For the simplest case, we first consider when μ=0. Then, by the i.i.d property of {V_i} and the moment conditions,
𝔼[(V'AV)^2] ≤ M∑_i=1^n a_ii^2+σ^4∑∑_1≤ i≠ j≤ n a_ii a_jj + 2σ^4∑∑_1≤ i≠ j≤ n a_ij a_ji
= M∑_i=1^n a_ii^2 + σ^4 (( tr(A))^2-∑_i=1^n a_ii^2) +2 σ^4 ( tr(A^2)-∑_i=1^n a_ii^2)
=(M-3σ^4)∑_i=1^n a_ii^2+σ^4(( tr(A))^2+2 tr(A^2))
Together with 𝔼(V'AV)=σ^2 tr(A), we know that
var(V'AV)≤(M-3σ^4)∑_i=1^n a_ii^2+2σ^4 tr(A^2)
For general V with μ≠0, we denote V=V_0+μ, then,
var(V'AV) = var(V_0'AV_0+2μ'AV_0)
≤ 2 var(V_0'AV_0)+8 var(μ'AV_0).
This completes the proof.
Now we turn to the proof of Proposition <ref>.
Let δ^2=μ - P_1 μ^2 and
define A_n=P_XY-P_1Y^2/(r-1), B_n=Y-P_XY^2/(n-r) and their expected values a_n=σ^2+δ^2/(r-1),b_n=σ^2, respectively. Then we can express γ_ opt=1-1/f_n with its sample counterpart 1-1/F_n where F_n=A_n/B_n,f_n=a_n/b_n. At first glance, considering that
f_n = a_n/b_n=1+δ^2/σ^2( rank(X)-1),
it is obvious that f_n≥ 1. By assumption, the error distribution satisfies 𝔼(e_i^4)≤ M for some M>0 and all i=1,…,n.
For the first step, knowing that 𝔼(B_n)=b_n,
𝔼(B_n-b_n) =0
var(B_n-b_n) = var(B_n)
= 1/(n-r)^2 var(I-P_X)Y^2
= 1/(n-r)^2 var{Y'(I - P_X)Y}
≤2(M-σ^4)/n-r,
which converges to 0 because n→∞ and r/n→τ∈[0,1). The last inequality is due to Lemma <ref>, with the fact that I-P_X is an idempotent matrix with rank n-r.
So B_n-b_n→_q.m0, which implies that B_n/b_n→_q.m. 1 because b_n=σ^2 is constant.
Next, knowing that 𝔼(A_n)=a_n, similarly,
𝔼(A_n/a_n-1) =0
var(A_n/a_n-1) =1/a_n^2 var(A_n)
=1/a_n^2 (r-1)^2 var{(P_X - P_1)Y^2}
=1/a_n^2 (r-1)^2 var{Y'(P_X - P_1)Y}
≤2(M-σ^4)(r-1)+8δ^2σ^2/(σ^2(r-1)+δ^2)^2
= 2(M-σ^4)(r-1)+8δ^2σ^2/σ^4(r-1)^2+δ^4+2σ^2δ^2(r-1),
where we used Lemma <ref> for (<ref>) with the fact that P_X-P_1 is an idempotent matrix with rank r-1. If r →∞ holds, since (<ref>) is bounded above by 4(r-1)^-1+2(M-σ^4)(σ^4(r-1))^-1, it converges to 0.
On the other hand, under the second condition, we have the same convergence result. This is because (<ref>) is bounded above by (M-σ^4)(2σ^2δ^2)^-1+8σ^2(δ^2)^-1, which converges to 0 as δ^2→∞.
Combining the two convergence results, B_n/b_n→_q.m1 and A_n/a_n→_q.m1, implies B_n/b_n→_p1 and A_n/a_n→_p1, which further yields F_n/f_n=(A_n/a_n)/(B_n/b_n)→_p 1 by Slutsky's theorem.
Additionally, let us denote that F^0_n= max(1,F_n), then we have γ̂=1-1/F^0_n. Finally, we get
ℙ(|γ̂-γ_ opt|≥ϵ) =
ℙ(|1-1/F^0_n-1+1/f_n|≥ϵ)
=ℙ(|1/F^0_n-1/f_n|≥ϵ)
=ℙ(|F^0_n-f_n|/F^0_n f_n≥ϵ)
≤ℙ(|F_n-f_n|/F_nf_n≥ϵ,F_n≥ 1)+ℙ(|1-f_n|≥ϵ,F_n< 1)
≤ℙ(|F_n-f_n|/f_n≥ϵ,F_n≥ 1)+ℙ(f_n≥ 1+ϵ,F_n< 1)
≤ℙ(|F_n/f_n-1|≥ϵ)+ℙ(|F_n/f_n-1|≥ϵ/1+ϵ) → 0.
§.§ Proof of Proposition <ref>
If a random variable X follows a chi-squared distribution with a degree of freedom K≥ 5, and a non-central parameter λ>0 which is denoted as χ^2(K;λ), the followings holds.
𝔼(X^-1)≤ min{1/K-2,1/2√(λ(K-2))}
𝔼(X^-2)≤ min{1/2(K-4),(K-4+λ)/(2√(λ(K-4)))-1/2λ}
<cit.> verified the following three results for K≥ 5.
𝔼^1_K=∫_0^1 s^K-3e^λ(s^2-1)/2ds,
𝔼^1_K=1/λ-((K-4)/λ)𝔼^1_K-2,
𝔼^n_K-𝔼^n_K+2=2n 𝔼^n+1_K+2,
where 𝔼^n_K=𝔼(1/(χ^2(K;λ))^n). Hence
𝔼(X^-2)=𝔼^2_K =𝔼^1_K-2-𝔼^1_K/2
=(K-4+λ)𝔼^1_K-2-1/2λ
holds. Furthermore,
𝔼(X^-1)=𝔼^1_K = ∫_0^1 s^K-3e^λ(s^2-1)/2ds
≤∫_0^1 s^K-3 ds=1/K-2.
On the other hand, from the same equation,
∫_0^1 s^K-3e^λ(s^2-1)/2ds
≤ e^-λ/2(∫_0^1 s^2K-5ds)^1/2(∫_0^1 s e^λ s^2ds)^1/2
=e^-λ/21/√(2K-4)√(e^λ-1)/√(2λ)
≤1/2√(λ(K-2)).
This completes the proof.
With Lemma <ref>, we prove the statement of Proposition <ref>.
From the definition of γ̂ (<ref>), we obtain
|γ̂-γ_ opt| =|I(F_n ≥ 1)(1-1/F_n)-(1-1/f_n)|
≤|1/f_n-1/F_n|,
where γ_ opt=1-1/f_n, f_n= σ^2(r-1)+δ^2/σ^2(r-1)=1+δ^2/σ^2(r-1)∈ [1,∞).
Let us follow the same notations in Proposition <ref> as F_n=A_n/B_n,f_n=a_n/b_n, where A_n=P_X Y - P_1 Y ^2/(r-1),B_n=Y - P_X Y ^2/(n-r),a_n=σ^2+δ^2/(r-1),b_n=σ^2. Since we have normal errors, A_n,B_n are independent. In addition, (r-1)A_n/σ^2∼χ^2(r-1;δ^2/ σ^2), (n-r)B_n/σ^2∼χ^2(n-r), and we denote A'_n:=A_n/σ^2,B'_n:=B_n/σ^2. First, we consider the first case where δ^2=O(r). Since F_n follows non-central F distribution,
𝔼(F_n) =(n-r)(r-1+δ^2/σ^2)/(r-1)(n-r-2)
=(2/n-r-2+1)f_n,
when n-r>2. And, with n-r>4,
var(F_n)=2(n-r-2)(n-r)^2/(n-r-4)(n-r-2)^2[(r-1+δ^2/σ^2)^2/(n-r-2)(r-1)^2 + r-1+2δ^2/σ^2/(r-1)^2].
Then,
𝔼(r(F_n-f_n)^2)=2r(n-r-2)(n-r)^2/(n-r-4)(n-r-2)^2[(r-1+δ^2/σ^2)^2/(n-r-2)(r-1)^2 + r-1+2δ^2/σ^2/(r-1)^2]+4r/(n-r-2)^2f_n^2,
which is asymptotically bounded under the assumptions. Furthermore, since we know that for arbitrarily small ϵ > 0,
γ̂-γ_ opt=-γ_ optI(F_n≤ 1-ϵ)+I(F_n≥ 1-ϵ)(γ̂-γ_ opt)
holds, and the second term is bounded by (<ref>). Indeed, on the event F_n≥ 1-ϵ, by Cauchy-Schwarz inequality and the fact that f_n→ f_∞∈[1,∞),
(𝔼|1/f_n-1/F_n|)^2≤1/f_n^2𝔼|F_n-f_n|^2𝔼(1/F_n^2) ≤1/(1-ϵ)^2f_n^2𝔼|F_n-f_n|^2𝔼(1/F_n^2)
≍𝔼|F_n-f_n|^2.
On the other hand, we show that the first term in (<ref>) is negligible. Indeed, P(F_n≤ 1-ϵ) is at its largest when f_n→ 1, since F_n/f_n→ 1. Hence, it suffices to show that the tail bound of P(F_n≤ 1-ϵ) in the case of f_n→ 1 is negligible. Indeed,
P(F_n≤ 1-ϵ) =P(A'_n/B'_n≤ 1-ϵ,A'_n>1-ϵ/2)+P(A'_n/B'_n≤ 1-ϵ,A'_n≤ 1-ϵ/2)
=P(B'_n≥1-ϵ/2/1-ϵ)+P(A'_n≤ 1-ϵ/2).
Lemma 1 from <cit.> with x=nt/10 yields P(Z≥ 2nt)≤ exp(-nt/10) for all t≥ 1, where a random variable Z follows χ^2(n). This further implies that the first term in (<ref>) is controlled by exp(-(n-r)(1-ϵ/2)/(20(1-ϵ))) which has exponential decay in n. Furthermore, Theorem 7 of <cit.> yields that the second term in (<ref>) is bounded above by exp(-ϵ^2(r-1)^2/(16(r-1+δ^2/σ^2))).
Based on the case that δ^2=o(r), the upper bound has exponential decay in r.
Consequently, 𝔼|1/f_n-1/F_n|=O(r^-1/2) for δ^2=O(r).
However, when r=o(δ^2), f_n→∞, hence the same reasoning cannot be applied. We use Lemma <ref> instead. Indeed,
𝔼|1/f_n-1/F_n| =𝔼|b_n A_n-B_n a_n/A_n a_n|
≤𝔼b_n/A_n|A_n/a_n-1|+𝔼1/A_n|B_n-b_n|
≤σ^2[𝔼(A_n/a_n-1)^2𝔼(1/A_n)^2]^1/2+[𝔼(B_n-b_n)^2]^1/2𝔼(1/A_n),
by Cauchy-Schwarz inequality and independence of A_n,B_n. Since
𝔼(1/A_n)=(r-1)/σ^2𝔼(1/χ^2(r-1;δ^2))≤(r-1)/σ^2 min{1/r-3,1/2√(δ^2(r-3))}:=c_1(r,δ^2)/σ^2,
and,
𝔼(1/A_n)^2=(r-1)^2/σ^4𝔼(1/χ^2(r-1;δ^2))^2 ≤(r-1)^2/σ^4 min{1/2(r-5)(r-5+δ^2)/(2√(δ^2(r-5)))-1/2δ^2}
:=c_2(r,δ^2)/σ^4
from Lemma <ref>, and combining results from the Proposition <ref>,
𝔼|1/f_n-1/F_n|
≤√(4σ^4(r-1)+8δ^2σ^2)/σ^2(r-1)+δ^2√(c_2(r,δ^2))+√(2)/√(n-r)c_1(r,δ^2)
≤2√(2)σ√(c_2(r,δ^2))/√(σ^2(r-1)+δ^2)+√(2)/√(n-r)c_1(r,δ^2)
If δ^2/r=o(1) holds with growing r, the first term in (<ref>) is of order O_P((δ^2/r)^-3/4), and the second term is of O_P(n^-1/2(δ^2/r)^-1/2). Similarly we can have analogous rates for the case where r is not divergent, and this completes the proof.
§.§ Proof of Proposition <ref>
We denote C_n=Y-P_1Y^2, c_n=δ^2+ σ^2(n-1). Referring to Lemma <ref> with the rank-(n-1) idempotent matrix A=I-P_1, we have
varC_n/c_n^2≤2(M-σ^4)(n-1)+8σ^2δ^2/(δ^2+σ^2(n-1))^2→ 0.
Thus, with 𝔼C_n=c_n, we obtain C_n/c_n→_P 1. Now,
Y'(I-P_1-n-1/nC(I-K))Y/Y'(I-P_1)Y-δ^2/δ^2+σ^2(n-1) =C_n-(n-1)σ̌^2_c/C_n-c_n-σ^2(n-1)/c_n
=(n-1)C_nσ^2-(n-1)c_nσ̌^2_c/C_nc_n
=(n-1)σ^2(C_n-c_n)-(n-1)c_n(σ^2-σ̌^2_c)/C_nc_n.
Since C_n/c_n→_P 1, (n-1)σ^2/c_n≤ 1, the first term in the (<ref>) converges to 0 in probability. Next, we prove the consistency of σ̌^2_c. Indeed, since K and P_1 are simultaneously diagonalizable and KP_1=P_1K=P_1,
𝔼1/nCY'(I-K)Y =1/nc(σ^2 tr(I-K)+(μ-P_1μ)'(I-K)(μ-P_1μ))
=σ^2+1/nC(μ-P_1μ)'(I-K)(μ-P_1μ).
The eigenvalues of I-K are 2nα/d_j+2nα, for j=2,…,n, and thus, nC= tr(I-K)=∑_j=2^n2nα/d_j+2nα. Hence,
1/nC(μ-P_1μ)'(I-K)(μ-P_1μ) ≤1/d_n+2nα/∑_j=2^n 1/d_j+2nαδ^2
≤1/d_n+2nα/n-1/d_2+2nαδ^2=d_2+2nα/d_n+2nαδ^2/n-1→ 0,
since the assumption on eigenvalues forces lim_n→∞d_2+2nα/d_n+2nα= 1. This yields 𝔼σ̌^2_c-σ^2→ 0, and δ^2=o(n). In addition, with the repeated identity, letting ϕ_j=1/d_j+2nα,
var(1/nCY'(I-K)Y) = 2σ^4/n^2C^2 tr(I-K)^2+4σ^2/n^2C^2(μ-P_1μ)'(I-K)^2(μ-P_1μ)
≤ 2σ^4ϕ_2^2+ …+ϕ_n^2/(ϕ_2+…+ϕ_n)^2+ 4σ^2δ^2ϕ_n^2/(ϕ_2+…+ϕ_n)^2
≤ 2σ^4ϕ_n^2/nϕ_2^2+ 4σ^2δ^2ϕ_n^2/n^2ϕ_2^2→ 0.
Consequently, σ̌^2_c-σ^2→_P 0, and this further yields that the second term in (<ref>) converges to 0 in probability. Finally, due to γ_ opt≥ 0, |Y'(I-P_1-n-1/nC(I-K))Y/Y'(I-P_1)Y-γ_ opt|≥ |γ_c-γ_ opt| holds. Thus, this concludes that γ_c-γ_ opt→_P 0.
§.§ Additional plots for simulations
Additional supplementary plots are shown in Figures <ref> and <ref>.
§.§ Codes
§.§.§ Codes for mtp data cleaning
|
http://arxiv.org/abs/2307.01563v1 | 20230704083401 | Approximate information for efficient exploration-exploitation strategies | [
"Alex Barbier-Chebbah",
"Christian L. Vestergaard",
"Jean-Baptiste Masson"
] | stat.ML | [
"stat.ML",
"cs.LG",
"q-bio.QM"
] |
[email protected]
This paper addresses the exploration-exploitation dilemma inherent in decision-making, focusing on multi-armed bandit problems. The problems involve an agent deciding whether to exploit current knowledge for immediate gains or explore new avenues for potential long-term rewards.
We here introduce a novel algorithm, approximate information maximization (AIM), which employs an analytical approximation of the entropy gradient to choose which arm to pull at each point in time.
AIM matches the performance of Infomax and Thompson sampling while also offering enhanced computational speed, determinism, and tractability.
Empirical evaluation of AIM indicates its compliance with the Lai & Robbins asymptotic bound and demonstrates its robustness for a range of priors.
Its expression is tunable, which allows for specific optimization in various settings.
Approximate information for efficient exploration-exploitation strategies
Jean-Baptiste Masson
August 1, 2023
=========================================================================
*Introduction.
The exploration-exploitation dilemma is a fundamental challenge in decision-making. It arises when an agent must choose between exploiting its current knowledge to maximize immediate rewards or acquiring new information that may lead to greater long-term gains. This dilemma is ubiquitous in various fields, from anomaly detection <cit.> to the modelling of biological search strategies <cit.> and human decision-making <cit.>.
The multi-armed bandit problem is a paradigmatic example of an explore-exploit problem and has been extensively studied and applied in a range of fields, including applied mathematics <cit.> to animal behavior <cit.>, neuroscience <cit.>, clinical trials <cit.>, finance <cit.>, epidemic control <cit.>, and reinforcement-learning <cit.>, among others.
In the multi-armed bandit problem, an agent is presented with a set of possible actions, or "arms", each associated with a probabilistic reward (akin to a multi-armed slot machines game).
The agent must choose which arm to pull at each time step to maximize its cumulative reward over a fixed or infinite time horizon.
Hence, at each time step, the agent can either play the arm with the better rewards to improve the knowledge on that arm or explore new arms to test if they would not lead to increased rewards.
In the following we begin with a brief introduction to the bandit problem, followed by a presentation of our novel approximate information procedure, completed with its corresponding analytical expression. We then provide empirical evidence of the procedure's efficacy before delving into a discussion of its various properties and implications.
We consider the classic multi-armed bandit setting <cit.>.
At each point in time, t, an agent chooses an arm, i_t, among K different arms, {1, 2, …, K}. The chosen arm returns a stochastic reward, r_t, drawn from a distribution whose mean, θ_i_t, is unknown to the agent [Fig. <ref>(a)].
The agent's goal is to maximize the cumulative reward (equivalently, minimize the cumulative regret) with no time horizon. Formally, we aim to minimize the expected regret <cit.>, 𝔼[R(t)], with
R(t) = θ^* t - ∑_τ=1^t r_τ, where θ^* = max_i θ_i is the largest mean reward.
The regret, R(t), measures the cumulative difference between the rewards obtained by the algorithm and the expected reward that it would have obtained by choosing the best action.
Optimal strategies, regardless of their details, are characterized by the following asymptotic bound (the Lai and Robbins bound) <cit.>:
⟨ R(t) ⟩_t→∞≥β log(t),
where β is a constant factor that depends on the reward distributions.
Multiple strategies attain the Lai and Robbins bound [Eq. (<ref>)].
Notably, the ϵ_n-greedy strategy <cit.>, which plays the best current arm with probability 1-ϵ_n and randomly samples other arms with probability ϵ_n, with a time-varying ϵ_n;
the Upper Confidence Bound-2 (UCB-2) algorithm <cit.>, which relies on a tuned confidence index associated to each arm to decide which arm to play;
and Thompson sampling (proportional betting), which selects each action with the posterior probability that it maximizes the expected reward.
Importantly, methods such as the ϵ_n-greedy and UCB-based algorithms require parameter tuning to reach the Lai and Robbins bound, making them sensitive to uncertainties and variations of the prior information used for tuning.
*Approximate information maximization for bandit problems.
We aim here to develop a tractable, functional-based algorithm for the multi-armed bandit problem.
Inspired by the Infomax principle <cit.>, we rely on the entropy as the functional to optimise in order to decide which arm to play.
Contrary to classical bandit algorithms, the entropy encompasses the information carried by all arms in a single functional, thus characterising the global state of the game. More precisely, we aim to optimise H, the entropy of the posterior distribution p_max of the value of the maximal reward,
H = -∫_Θ p_max(θ) ln p_max(θ) dθ,
where Θ = [θ_inf, θ_sup] is the support of p_max (which depends on the nature of the game), and
p_max(θ) = ∑_{i=1}^{K} p_i(θ_i = θ) ∏_{j≠ i} P_j(θ_j ≤ θ).
The entropy summarizes the information about the state of the game, and we require our algorithm to greedily optimise its gradient, i.e., to select the next arm according to:
argmin_{i=1..K} ⟨ H(t+1) - H(t) | i_{t+1} = i ⟩ .
By doing so, the algorithm seeks to maximize the expected decrease in entropy, conditioned on the current knowledge of the game. This strategy has shown to be competitive with state-of-the-art algorithms and attain the Lai and Robbins bound <cit.>.
However, while Eq. (<ref>) can be numerically evaluated, it cannot be computed in closed form for most bandit problems. To obtain an algorithm that is both tractable and computationally efficient, a second functional approximating the entropy has to be derived.
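As a point of reference, the exact quantities above can be evaluated numerically. The sketch below (our own illustration, assuming independent Beta posteriors for Bernoulli arms) computes p_max(θ) on a grid and its entropy by quadrature; AIM is designed precisely to avoid this brute-force integration.

```python
import numpy as np
from scipy.stats import beta

def entropy_of_max(wins, pulls, grid=2000):
    """Entropy of p_max(theta) for Bernoulli arms with Beta(w+1, n-w+1) posteriors."""
    thetas = np.linspace(1e-6, 1 - 1e-6, grid)
    pdfs = [beta.pdf(thetas, w + 1, n - w + 1) for w, n in zip(wins, pulls)]
    cdfs = [beta.cdf(thetas, w + 1, n - w + 1) for w, n in zip(wins, pulls)]
    p_max = np.zeros_like(thetas)
    for i in range(len(pdfs)):
        prod = np.ones_like(thetas)
        for j in range(len(cdfs)):
            if j != i:
                prod *= cdfs[j]
        p_max += pdfs[i] * prod            # p_i(theta) * prod_{j != i} P_j(theta_j <= theta)
    p_max = np.clip(p_max, 1e-300, None)   # guard the logarithm
    return -np.trapz(p_max * np.log(p_max), thetas)
```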
Hence, we devise a set of approximations of both H and p_max to get a tractable algorithm. We develop our approach on the 2-armed bandit. We denote the arms according to their current empirical mean rewards, the maximum one by i_max (with empirical mean μ_max and N_max draws) and the minimum one by i_min (with μ_min and N_min).
Note that the true expected reward of i_max may be smaller than that of i_min due to the stochasticity of the game.
Our approximate form of the entropy reads:
H̃ = (1 - ε) H_mode + H_tail - (1-ε) ln(1-ε).
It decomposes the entropy into three tractable terms corresponding to approximations made on p_max.
The first term, H_mode, approximates the entropy of the mode of p_max. The second, H_tail, captures the entropy of the tail (on the high reward side, see Fig. <ref>(b,c)) of p_max. These approximate entropies are weighted by factors depending on ε, a corrective term that compensates for an extension of the integral boundaries in order to make the entropy evaluations analytically tractable (see Supplemental Material <ref> for details).
More precisely, the tail term reads:
H_tail = - ∫_{θ_eq}^{θ_sup} p_min(θ) ln p_min(θ) dθ,
where θ_eq is an approximation of the value of θ at which the probability of being the maximum is identical for both arms (see red and orange curves on Fig. <ref>(c)), and p_min(θ) = p(θ_min = θ) is the posterior probability of the current suboptimal arm having expected reward θ.
The approximate entropy of the main mode is split into two terms:
= - ∫_() ln() dθ
- ∫_() dθ ,
where () is the posterior probability at of the current optimal arm,
ij() = (i = , i≥j ) is the posterior probability for the expected reward of arm i to be larger than j,
and = is a predetermined constant [see Eq. (<ref>) in Supplemental Material <ref>].
The first term in Eq. (<ref>) is the leading-order term of the mode of , dominated by the current optimal arm, whereas the second term handles the corrections induced by the suboptimal arm in the vicinity of (see Supplemental Material <ref> for details).
Finally, the third, corrective term in Eq. (<ref>) is ε = ∫_{θ_eq}^{θ_sup} p_min(θ) dθ.
We propose approximate information maximization (AIM), an algorithm that consists in evaluating Eq. (<ref>) for each arm at each time step t and choosing the one that minimizes the expected value of H̃(t+1) according to Eq. (<ref>). Depending on the reward distributions, and their associated θ_eq, the log dependencies inside H_mode and H_tail can be integrated analytically or approximated by their long-time asymptotes (see Supplemental Material <ref> for a detailed derivation of all terms).
Then, AIM provides a direct implementation following an analytically tractable expression.
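A schematic sketch of the resulting decision rule is given below (our own illustration); `approx_entropy` is a placeholder for the closed-form H̃ of Eq. (<ref>) (or, for the exact Infomax variant, for the numerical `entropy_of_max` above), and the success probability uses the Beta posterior predictive. The exact bookkeeping used by AIM (sorting of the arms, gradient evaluation) is detailed in Supplemental Material <ref>.

```python
import numpy as np

def aim_step_bernoulli(wins, pulls, approx_entropy):
    """Pick the arm whose draw is expected to decrease the (approximate) entropy the most."""
    K = len(wins)
    expected_H = np.zeros(K)
    for i in range(K):
        p_succ = (wins[i] + 1) / (pulls[i] + 2)   # posterior predictive success probability
        w, n = wins.copy(), pulls.copy()
        n[i] += 1
        w[i] += 1
        h_succ = approx_entropy(w, n)             # entropy if the next reward is 1
        w[i] -= 1
        h_fail = approx_entropy(w, n)             # entropy if the next reward is 0
        expected_H[i] = p_succ * h_succ + (1 - p_succ) * h_fail
    return int(np.argmin(expected_H))
```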
*Results.
We demonstrate the performance of AIM on the paradigmatic Bernoulli bandits <cit.> and on Gaussian bandits <cit.> with unknown means in [0,1] and unit variance. Supplementary Table <ref> lists analytic expressions for the terms of the approximate entropy [Eq. (<ref>)] for each problem.
Figure <ref> compares the performance of the AIM algorithm with other state-of-the-art algorithms on numerically generated data (see Supplemental Material <ref> & <ref> for implementation of and other classic bandit strategies).
For both Bernoulli and Gaussian bandits, AIM empirically follows the Lai & Robbins bound, with a regret scaling as log(t). Its long-time performance matches that of Thompson sampling while relying on a simple analytical formula. Additionally, AIM outperforms Thompson methods at intermediate times for challenging parameter configurations [Fig. <ref>(b)].
The following heuristic argument qualifies the optimal asymptotic scaling of AIM.
Assume t ≫ 1 and N_max ≫ N_min ≫ 1, i.e., that the best arm has been predominantly pulled.
Then, the variation of the approximate entropy along N_min, with N_max = t - N_min, reads:
∂H̃/∂N_min = (1-ε) ∂H_mode/∂N_min + ∂H_tail/∂N_min + (-H_mode + ln(1-ε) + 1) ∂ε/∂N_min.
To leading order, the minimum of Eq. (<ref>) is found at N_min ∼ ln(t)/d(μ_min, μ_max), where, for Bernoulli bandits, d(μ_min, μ_max) is the Kullback-Leibler divergence between the reward distributions of the two arms, thus recovering the Lai and Robbins bound (see derivation in Supplemental Material <ref>).
Note that this derivation is not entirely rigorous as it assumes that, after a certain time, we can be sure that the optimal arm has been predominantly pulled. We checked this assumption by investigating the asymptotic behaviour of high cumulative regret events (Fig. <ref>), for which the sub-dominant arm has been drawn a non-negligible fraction of time. These events are exponentially rare and happen only for small gaps μ_max - μ_min, which require exponentially long times to be distinguished (a behaviour that is shared by Thompson sampling).
*Conclusion.
In this study, we present a new approach, AIM, designed to effectively balance exploration and exploitation in multi-armed bandit problems.
AIM employs an analytic approximation of the entropy gradient to select the optimal arm. This novel approach mirrors the performance of Infomax (see Supplemental Material <ref> and Fig. <ref>), from which it is derived, while offering improved computational speed. It also parallels Thompson sampling in functionality, yet outperforms it in terms of being deterministic and more easily tractable.
Empirical testing demonstrated that AIM complies with the Lai and Robbins bound and exhibits robustness to a broad spectrum of priors. Furthermore, since it relies on an analytic expression, AIM can easily be fine-tuned to optimise performance in various scenarios, while still satisfying the Lai and Robbins bound.
Specifically, tuned AIM is highly efficient for K-armed bandits with K>2 (see Supplemental Material <ref> and Fig. <ref> for derivation and examples).
Due to its reliance on a single, analytically tractable functional expression, AIM proves adaptable to different bandit problems, particularly where other approaches may face efficiency constraints.
Interesting future research directions include devising a rigorous proof of optimality, applying and optimising AIM for multi-armed problems with finite horizons, with insufficient time to sample all bandits, and its extension to Monte-Carlo path-planning schemes.
*Acknowledgments.
We thank Etienne Boursier for helpful discussions on the optimality of AIM.
[1] K. Ding, J. Li, and H. Liu, in Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19 (Association for Computing Machinery, New York, NY, USA, 2019), pp. 357–365.
[2] M. Vergassola, E. Villermaux, and B. I. Shraiman, Nature 445, 406 (2007).
[3] D. Martinez, L. Arhidi, E. Demondion, J.-B. Masson, and P. Lucas, Journal of Visualized Experiments: JoVE, p. 51704 (2014).
[4] R. T. Cardé, Annual Review of Entomology 66, 317 (2021).
[5] J. D. Cohen, S. M. McClure, and A. J. Yu, Philosophical Transactions of the Royal Society B: Biological Sciences 362, 933 (2007).
[6] T. T. Hills, P. M. Todd, D. Lazer, A. D. Redish, and I. D. Couzin, Trends in Cognitive Sciences 19, 46 (2015).
[7] K. Mehlhorn, B. R. Newell, P. M. Todd, M. D. Lee, K. Morgan, V. A. Braithwaite, D. Hausmann, K. Fiedler, and C. Gonzalez, Decision 2, 191 (2015).
[8] K. Doya, Bayesian Brain: Probabilistic Approaches to Neural Coding (MIT Press, 2007).
[9] M. Jepma and S. Nieuwenhuis, Journal of Cognitive Neuroscience 23, 1587 (2011).
[10] A. Slivkins, Foundations and Trends in Machine Learning 12, 1 (2019).
[11] J. C. Gittins, Journal of the Royal Statistical Society, Series B (Methodological) 41, 148 (1979).
[12] L. Zhou, A Survey on Contextual Multi-armed Bandits (2016), arXiv:1508.03326 [cs].
[13] S. Bubeck, R. Munos, and G. Stoltz, Theoretical Computer Science 412, 1832 (2011).
[14] M. Bayati, N. Hamidi, R. Johari, and K. Khosravi, in Advances in Neural Information Processing Systems, Vol. 33 (Curran Associates, Inc., 2020), pp. 1713–1723.
[15] D. Bouneffouf, I. Rish, and C. Aggarwal, in 2020 IEEE Congress on Evolutionary Computation (CEC) (2020), pp. 1–8.
[16] P. Auer, N. Cesa-Bianchi, and P. Fischer, Machine Learning 47, 235 (2002).
[17] J. Morimoto, Journal of Theoretical Biology 467, 48 (2019).
[18] R. C. Wilson, E. Bonawitz, V. D. Costa, and R. B. Ebitz, Current Opinion in Behavioral Sciences 38, 49 (2021).
[19] D. G. R. Tervo, M. Proskurin, M. Manakov, M. Kabra, A. Vollmer, K. Branson, and A. Y. Karpova, Cell 159, 21 (2014).
[20] D. Bouneffouf, I. Rish, and G. A. Cecchi, in Artificial General Intelligence, Lecture Notes in Computer Science (Springer International Publishing, Cham, 2017), pp. 237–248.
[21] D. Marković, H. Stojić, S. Schwöbel, and S. J. Kiebel, Neural Networks 144, 229 (2021).
[22] A. Durand, C. Achilleos, D. Iacovides, K. Strati, G. D. Mitsis, and J. Pineau, in Proceedings of the 3rd Machine Learning for Healthcare Conference (PMLR, 2018), pp. 67–82.
[23] S. S. Villar, Probability in the Engineering and Informational Sciences 32, 229 (2018).
[24] S. S. Villar, J. Bowden, and J. Wason, Statistical Science 30, 199 (2015).
[25] W. Shen, J. Wang, Y.-G. Jiang, and H. Zha, in Twenty-Fourth International Joint Conference on Artificial Intelligence (2015).
[26] B. Lin and D. Bouneffouf, in 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2022), pp. 1–8.
[27] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Nature 529, 484 (2016).
[28] I. O. Ryzhov, W. B. Powell, and P. I. Frazier, Operations Research 60, 180 (2012).
[29] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, Adaptive Computation and Machine Learning series (A Bradford Book, Cambridge, MA, USA, 1998).
[30] T. L. Lai and H. Robbins, Advances in Applied Mathematics 6, 4 (1985).
[31] G. Reddy, A. Celani, and M. Vergassola, Journal of Statistical Physics 163, 1454 (2016).
[32] S. Pilarski, S. Pilarski, and D. Varró, IEEE Transactions on Artificial Intelligence 2, 2 (2021).
[33] W. R. Thompson, Biometrika 25, 285 (1933).
[34] J. Honda and A. Takemura, in COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010 (Omnipress, 2010), pp. 67–79.
[35] E. W. Ng and M. Geller, Journal of Research of the National Bureau of Standards, Section B: Mathematical Sciences 73B, 1 (1969).
[36] A. Garivier and O. Cappé, Computing Research Repository - CoRR (2011).
[37] E. Kaufmann, N. Korda, and R. Munos, in Algorithmic Learning Theory, Lecture Notes in Computer Science (Springer, Berlin, Heidelberg, 2012), pp. 199–213.
Supplemental material
§ ENTROPY APPROXIMATION
Here, we derive the approximations and constituting Eq. (<ref>) in the main text.
Equation (<ref>) relies on the observation that the functional form of the posterior can naturally be split into two distinct parts, above and below the point θ ≈ θ_eq,
= + ,
with
= -∫_^() ln() d , = -∫_^() ln() d .
The individual contribution, and are easier to approximate using standard techniques than the full expression, .
We detail these approximations below.
§.§ Approximation of the main mode's contribution
The approximation leading to H_mode derives from decomposing it as:
= -∫_^ln() dθ -∫_^ln( 1 +/) dθ.
To be able to perform the integration analytically, we extend the upper bound of the integrals from to .
This requires neglecting the contribution from in the first term, resulting in the weight normalisation factor appearing in Eq. (<ref>).
We furthermore approximate ln( ) by ln() in the first term.
Next, we approximate the second term of Eq. (<ref>) by
ln( 1 +/) ≈/ + ,
which is a variation of an approximation deduced from the Taylor series of the inverse hyperbolic tangent for 0 < / < 1.
First, note that this term should contribute significantly only when the ratio is smaller than 1, since the entropy has already been partitioned. Thus, the choice made in Eq. (<ref>) is justified because it stays bounded even when the ratio becomes large, which occurs since the integral bounds have been pushed above θ_eq. Finally, the constant is obtained as the solution to:
∫_0^1 ln(1 +x ) = ∫_0^1 x/x+1 ,
leading to =.
Taken altogether, this leads to Eq. (<ref>).
§.§ Approximation of the tail contribution
The approximate expression for the tail contribution to the entropy, H_tail [Eq. (<ref>)], is obtained from [Eq. (<ref>)] by neglecting the contribution from the current best arm, i.e., approximating p_max by p_min.
This approximation requires our body-tail separator θ_eq to precisely determine when the best arm contribution becomes sub-dominant.
Rephrased differently, θ_eq approximates the transition value above which the arm with the lower empirical mean becomes more likely to be the maximum than the arm with the higher empirical mean. At long times, since i_max must be selected much more often than the suboptimal one, its posterior is highly contracted compared to that of i_min. This results in a tail that is mostly dominated by the suboptimal arm, justifying our previous assumption.
§ ANALYTICAL DERIVATION OF
Here, we summarize all the steps leading to the analytic expressions used in for 2-arms study case and exhibited in Supplementary Table <ref>.
§.§ Gaussian approximation of the Beta posterior distribution
For the Bernoulli bandits, we approximate the Beta distributions by Gaussian distributions.
To do so, we define i, i such that
𝔼[ii - i] = i+1/i+2 = i,
and
Var[ii - i] = i+1/i+2(1 - i - i+1/i+2) 1/i+3
= i (1-i)/i.
Thus, i and i are respectively the mean and the number of draws that lead to a Gaussian approximation with the same two first moments as the true Beta distribution.
(Note that for a Gaussian reward distribution, we have directly i= i/i and i=i.)
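A sketch of this moment matching (with variable names of our own choosing) reads:

```python
def gaussian_surrogate_from_beta(successes, draws):
    """Moment-matched Gaussian surrogate of the Beta(successes+1, draws-successes+1) posterior."""
    mu = (successes + 1) / (draws + 2)   # matched mean
    n_eff = draws + 3                    # effective number of draws
    var = mu * (1 - mu) / n_eff          # matched variance
    return mu, n_eff, var
```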
§.§ The partitioning approximation
In this section we derive an approximation of the intersection point (defined above as ) where the distributions and intersect at their highest value (if more than one solution exists).
We start with the case of Bernoulli bandits. The exact equation verified by the intersection point is
e^- (,)∫_0^ e^- (,θ') dθ' = e^- (,)∫_0^ e^- (,θ') dθ'.
Taking the logarithm of Eq. (<ref>) and normalizing the last term leads to
(,) - (,) + 1/2ln/ + ln∫_0^√() e^- (,θ') dθ')/∫_0^√() e^- (,θ') dθ' = 0.
The distributions are uni-modal, and assuming that (,) > (,) and recalling that is the highest intersection solution, we approximate by neglecting the last term,
(,) - (,) + 1/2ln/≈ 0.
In the long time limit ≫ and will be in the vicinity of when the Gaussian expansion of the divergence is relevant [in particular for ∼ O(ln)].
Thus, we approximate (,) by (,) and expand (,) to lowest order in , which leads to the expression given in Table <ref>,
= +√( 2 [ (,) + 1/2 ln/]),
where = (1- )/N_, which verifies -∼ o(^-1/3), consistent with the Gaussian expansion of the distance around .
We apply the same reasoning for Gaussian rewards, which leads to:
(-)^2/2 σ^2 - (-)^2/2 σ^2 + 1/2ln/≈ 0.
Solving for leads to:
= ( -)/- + 2/|-|×√( ( -)^2 + σ^2 (- )ln( /) ) .
Note that the expressions, Eq. (<ref>) and Eq. (<ref>), rely on the assumption that (,) > (,).
For ≤, the contributions from and do not intersect and = (i.e., =1 and =+∞ for Bernoulli and Gaussian rewards, respectively), which means that the contribution from the tail is zero.
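For the Bernoulli case, Eq. (<ref>) translates into the following sketch (our own code; the guard for non-intersecting contributions implements the remark above, and the clipping of θ_eq to [0,1] is our assumption):

```python
import numpy as np

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def theta_eq_bernoulli(mu_max, n_max, mu_min, n_min):
    """Approximate body/tail separator theta_eq of Eq. (<ref>) for Bernoulli rewards."""
    v_max = mu_max * (1 - mu_max) / n_max
    bracket = kl_bernoulli(mu_min, mu_max) + 0.5 * np.log(n_max / n_min)
    if bracket <= 0:
        return 1.0                     # contributions do not intersect: no tail term
    return min(1.0, mu_max + np.sqrt(2 * v_max * bracket))
```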
§.§ Closed-form expressions for the main mode's contribution
§.§.§ Gaussian posterior distributions
Here, we derive the term given in Table <ref> for Gaussian posterior distributions.
Inserting the Gaussian form of the posterior into Eq. (<ref>) gives:
= -∫_-∞^+∞e^- ^2/√(2π)1/2[ 1 + ( ) ] (-1/2ln( 2 π) - ^2 )
-∫_-∞^+∞e^- ^2/√(2π)1/2[ 1 + ( ) ] dθ,
where V_i is the distribution's variance, = ( - )/√(2 ), and = ( - )/√(2 ).
We integrate the constant part of the first term by use of the following identity <cit.>:
∫_-∞^∞1/2 [1+(θ-θ_1/√(2V_1)) ] e^-(θ-θ_2)^2/2 V_2/√(2 π V_2) = 1/2[1 + (θ_2-θ_1/√(2)√(V_2 + V_1)) ],
which leads to
∫_-∞^∞1/2e^-^2/√(2π)[1 + () ] 1/2ln(2π) dθ = 1/4ln(2π) [ 1 + (-/√(2( + ))) ].
Next, we integrate by parts the second part of the first term to obtain:
∫_-∞^∞^2 1/2[1 + ()] e^-^2/√(2π) = ∫_-∞^∞1/4e^-^2/√(2π)[1 + ()]
+ ∫_-∞^∞ (-) 1/2e^-^2/√(2π) e^-^2/√(2π)
= 1/4[1 + ( - /√(2( + ))) ]
+( -) /2 √(2 π)( +)^3/2 e^-( - )^2/2( + ),
where we also employed the identity of Eq. (<ref>).
Finally, the last term is also integrated using Eq. (<ref>), giving:
-∫_-∞^∞1/2[1+ () ] e^-^2/√(2 π) = -/2[1 + (-/√(2( +))) ].
Combining Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) leads to the expression given in Table <ref>.
§.§.§ Bernoulli posterior distributions
Recalling <ref>, the analytic derivation of H_mode made for Gaussian rewards [Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>)] can be extended to Bernoulli rewards with the moment-matched means and effective numbers of draws obtained above.
§.§ Closed-form expressions for the tail contribution
We conclude our approach by considering the tail contribution to the approximate entropy.
§.§.§ Gaussian posterior distribution
We first consider the Gaussian reward case for which the contribution from the tail can be derived exactly,
= ∫_θ_eq^∞e^-^2/√(2π)[ 1/2ln(2π) + ^2 ]
= 1/4ln(2π e) ( -/√(2)) + -/2√(2π) e^ -(-)^2/2 .
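In code, this closed form reads (a direct transcription of the expression above, with our own variable names):

```python
import numpy as np
from scipy.special import erfc

def tail_entropy_gaussian(theta_eq, mu_min, v_min):
    """Tail contribution H_tail for a Gaussian posterior N(mu_min, v_min) cut at theta_eq."""
    z = (theta_eq - mu_min) / np.sqrt(2.0 * v_min)
    return (0.25 * np.log(2.0 * np.pi * np.e * v_min) * erfc(z)
            + (theta_eq - mu_min) / (2.0 * np.sqrt(2.0 * np.pi * v_min)) * np.exp(-z * z))
```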
§.§.§ Bernoulli posterior distribution
We now focus on the Bernoulli case, for which the tail term requires a second approximation in order to avoid a numerical integration. We have:
≈ -ln(p_())
≈(1- , m) [ (, ) + 1/2ln(2 π ) ],
where , m is the normalized incomplete beta function evaluated at with parameters + 1 and - + 1. Of note, we have bounded ln(p_(θ)) by its value at and included only the leading order at large times.
We remark that Eq. (<ref>) is one possible solution, but others would work as well as long as their leading order is given by .
Applying Eq. (<ref>) to the obtained analytic expression Eq. (<ref>), leads to the algorithm.
§ AIM ALGORITHMS
Here, we summarize the algorithm procedures introduced in the main text.
§.§ Two-armed Gaussian bandit
* Draw each arm once, update i, i and their associated i, i according to Table <ref>.
* For t > 2, sort the arms according to i(t) to get ((t), (t), , ) and ((t), (t), , ) couples.
* If (t) = (t) then choose the arm which has been currently drawn the least. If (t) = (t) choose randomly.
* Otherwise, if (t) ≥(t) then draw .
* Otherwise, evaluate according to Table <ref>. Then, evaluate the absolute value of the gradient of ( i, i, j, j, ) along each arm according to:
Δ_i = | 1/2( i + i(t) + ασ, i + 1, ..) + 1/2(i + i(t) - ασ, i + 1, ..) - (i, i,..) |,
where the dots refer to constant variables (j, j, ), α=1, and given by Eq. (<ref>) with Table <ref>. Next, draw the arm with the highest gradient.
* Update i(t+1), i(t+1) according to the reward returned by the chosen arm.
Let us draw some additional observations. First, we stress that if N_max(t) ≤ N_min(t), the current best arm, i_max, is automatically drawn. This is because the current best arm should always be played in this case (since the information about it is less than that about the current worst arm, i_min).
Next, evaluating the approximate entropy also requires sorting its input values. Thus, the means and numbers of draws used in each evaluation may differ from the ones used in the sorting step 2. However, θ_eq is shared among all evaluations. This avoids adding perturbations induced by the cutoff of the tail. Finally, one should note that Eq. (<ref>) is a particular way to evaluate the gradient; other approaches are possible (e.g., by modifying α or computing higher derivative orders).
§.§ Two-armed Bernoulli bandit
For the Bernoulli bandit, most of the procedure is identical to the Gaussian case described above. One simply has to replace the expressions for the case of Gaussian rewards by those corresponding to Bernoulli rewards given in Table <ref>.
The only difference in the procedure regards the gradient evaluation [Eq. (<ref>), which is replaced by:
Δ_i = |i/i( i +1, i + 1,..) + i -i/i(i, i + 1,..) - ( i, i,..) |,
where, as above, the two dots refer to constant variables j, j and .
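As an illustration, the Bernoulli gradient above can be coded as follows (our sketch); `approx_entropy(s, n)` is a hypothetical placeholder for the Table <ref> expression evaluated on the per-arm success and draw counts, and the success weight follows Eq. (<ref>) as we read it.

```python
def bernoulli_gradient(i, s, n, approx_entropy):
    """|Delta_i| of Eq. (<ref>): expected variation of the approximate entropy when pulling arm i."""
    p_succ = s[i] / n[i]                    # success weight used in Eq. (<ref>)
    h_now = approx_entropy(s, n)
    s_up, n_up = s.copy(), n.copy()
    n_up[i] += 1
    s_up[i] += 1
    h_succ = approx_entropy(s_up, n_up)     # next draw of arm i is a success
    s_up[i] -= 1
    h_fail = approx_entropy(s_up, n_up)     # next draw of arm i is a failure
    return abs(p_succ * h_succ + (1 - p_succ) * h_fail - h_now)
```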
§.§ K>2 armed Gaussian bandit
To further assess the efficiency of the algorithm, we address the multi-armed case (with a number of arms K>2).
We notice that Eq. (<ref>) is asymmetric in (max,i), which suggests that all but max could be decoupled in the general entropy expression by neglecting the correlations between subdominant arms.
In practice, we thus propose to evaluate the K-1 gradients between each subdominant arm and max by the use of Eq. (<ref>) with each subdominant arm in place of _. The dominant arm max is pulled if all the gradient evaluations favor max, and the subdominant arm with the highest absolute gradient is chosen if at least one gradient favors a subdominant arm. Thus, the implementation for K>2 reads:
* Draw each arm once, and update i, i and their associated i, i according to Table <ref>.
* At each step t > K, determine the arm with the best empirical mean reward such that:
* > i ∀ i ≠,
* or i_ = { i : i = }argmax( i ) if the dominant arm is not unique. If the maximal i is not unique, i_ is drawn randomly among {i : i = i = }.
* Compare each arm i to by computing the two following gradients:
Δ_i = | 1/2 ( i + i(t) + ασ, i + 1, ..) + 1/2( i + i(t) - ασ, i + 1, ..) - (i,i,..) |,
where the two dots refer to constant variables (, , ), and
Δ_i, = | 1/2( + (t) + ασ, + 1,.. ) + 1/2( + (t) - ασ, + 1,.. ) - ( , ,..) |,
where the two dots refer to constant variables i, i and , and is computed following Table <ref> with i, i, ,.
* Then, select the arm such that:
* = if Δ_i < Δ_i, , ∀ i ≠,
* or = Δ_i - Δ_i, ≥ 0{argmax(Δ_i - Δ_i, ) } elsewise. If there is more than one solution is drawn randomly among them.
* Update i(t+1), i(t+1) according to the reward returned by the chosen arm.
§.§ K>2 armed Bernoulli bandit
For the multi-armed Bernoulli bandit, most of the procedure is identical to the multi-armed Gaussian algorithm described above.
As for the two-armed case, the procedure is implemented by replacing the expressions by those for Bernoulli rewards given in Table <ref>.
The expressions for the gradients are replaced by:
Δ_i = | i/i ( i + 1, i + 1, ..) + i -i/i( i, i + 1, ..) - (i,i,..) |,
where the two dots refer to constant variables (, , ), and
Δ_i, = |/( +1, + 1,..) + -/( , + 1,..) - ( , ,..) |,
where the two dots refer to constant variables i, i and .
§ OTHER STATE-OF-THE-ART BANDIT ALGORITHMS
Here, we briefly review several baseline algorithms which provide a benchmark of our gradient method.
§.§ Epsilon-n-Greedy
This method is a variation of the ε-greedy strategy, and is one of the most widely used bandit algorithms due to its undeniable simplicity <cit.>.
The ε-greedy strategy selects a random arm with probability ϵ and the current dominant arm otherwise. The ε_n-greedy strategy is a generalized form of this approach where the parameter ϵ is a time-dependent function ϵ(t) = min{1, c(θ_1, θ_2) K/(d² t)}. The constant c is a hyperparameter of the method, which needs to be tuned for optimal performance. Here, we used c=10 tuned for Bernoulli uniform priors and c=30 for Gaussian uniform priors. Let us stress that ε_n-greedy relies on a priori knowledge of the distribution of {θ_1, θ_2} in order to be effective.
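A sketch of the rule (in Python, with the gap-dependent constant d treated as a tuning parameter known a priori, which is our assumption here) is:

```python
import numpy as np

def epsilon_n_greedy(pulls, wins, t, c=10.0, d=0.1, rng=np.random.default_rng()):
    """epsilon_n-greedy: random arm with probability eps(t)=min(1, c*K/(d^2*t)), best arm otherwise."""
    K = len(pulls)
    eps = min(1.0, c * K / (d * d * max(t, 1)))
    if rng.random() < eps:
        return int(rng.integers(K))
    means = (wins + 1) / (pulls + 2)       # posterior mean of each arm
    return int(np.argmax(means))
```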
§.§ UCB-Tuned
This method belongs to the class of upper confidence bound (UCB) algorithms, which pull the arm maximising a proxy function generally defined as F_i = μ_i + R_i, where R_i bounds the regret to logarithmic growth.
For UCB-tuned, R_i is given by:
R_i = c(1,2) √(ln(t)/imin( 1/4, s_i(t) ) ), s_i(t) = σ̂_̂î^2 + √(2ln(t)/i),
where σ̂_̂î^2 is the reward variance and c a hyperparameter.
Here, we rely on the optimised version of F_i proposed in <cit.> for Bernoulli reward:
F_i = i + 1/i + 2 + c(1,2) √(ln(t+2K)/i+2min( d, s_i(t) ) ), s_i(t) = (i + 1)(i - i + 1)/(i+2)^2 (i+
3) + √(2ln(t+2K)/i+2),
with c=0.73 and d=0.19. For Gaussian rewards we adapt (<ref>) with c=2.1 and σ̂_̂î^2 = σ^2/i, although this is not necessarily optimal.
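A sketch of the Bernoulli index, as we read Eq. (<ref>), is given below (our transcription; c and d take the tuned values quoted above, and the function returns the arm to pull):

```python
import numpy as np

def ucb_tuned_bernoulli(pulls, wins, t, c=0.73, d=0.19, K=2):
    """Optimised UCB-tuned index of Eq. (<ref>) for Bernoulli rewards."""
    n = pulls + 2
    s = wins + 1
    s_i = s * (n - s) / (n**2 * (n + 1)) + np.sqrt(2.0 * np.log(t + 2 * K) / n)
    index = s / n + c * np.sqrt(np.log(t + 2 * K) / n * np.minimum(d, s_i))
    return int(np.argmax(index))
```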
§.§ KL-UCB
This method is another upper confidence bound (UCB) variant, especially designed for bounded rewards, and in particular for Bernoulli-distributed rewards, where it reaches the Lai and Robbins bound <cit.>. For KL-UCB, F_i reads:
F_i = max{ θ∈Θ: i( i/i, θ) ≤ln(t) + c(1,2)ln(ln( t)) },
where Θ denotes the definition interval of the posterior distribution. By testing various c values, we ended up with c(θ_1, θ_2) = 10^{-5}.
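In code, the index can be obtained by bisection; the sketch below (ours) reuses the `kl_bernoulli` helper from the Supplemental Material above, and the arm played is the one with the largest index.

```python
import numpy as np

def kl_ucb_index(mu, n, t, c=1e-5, tol=1e-6):
    """Largest theta in [mu, 1] with n * d(mu, theta) <= log(t) + c * log(log(t))."""
    budget = np.log(max(t, 2)) + c * np.log(np.log(max(t, 3)))
    lo, hi = mu, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n * kl_bernoulli(mu, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```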
§.§ Thompson sampling
At each step, Thompson sampling <cit.> stochastically selects an arm based on the posterior probability that it maximizes the expected reward.
In practice, after drawing K random values according to each arms' posterior distribution, it picks the arm with the largest value:
= i=1..Kargmax( ii - i).
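For Bernoulli rewards with Beta posteriors, this amounts to the following short sketch (ours):

```python
import numpy as np

def thompson_bernoulli(pulls, wins, rng=np.random.default_rng()):
    """Thompson sampling: sample each Beta(w+1, n-w+1) posterior, play the largest sample."""
    samples = rng.beta(wins + 1, pulls - wins + 1)
    return int(np.argmax(samples))
```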
§.§ Infomax
At each step, Infomax relies on a greedy entropy minimization to decide the arm to be played. Here we adapt the Infomax <cit.> algorithm by replicating the steps detailed in Supplementary Section <ref>, but replacing the approximate entropy by a numerical integration of the exact entropy [Eq. (<ref>)].
The regret performance of the Infomax algorithm thus obtained is compared with AIM and Thompson sampling in Fig. <ref>.
§ ASYMPTOTIC OPTIMALITY OF APPROXIMATE INFORMATION MAXIMISATION
Here, we provide details on the asymptotic behaviour of the entropy at fixed , , and t≫1. Recall that the derivative of Eq. (<ref>), from which we seek to derive the asymptotic behavior of by minimizing , is given by:
∂/∂ = (1-) ∂/∂ + ∂/∂ + (- + ln(1-) + 1) ∂/∂ .
We will focus on Bernoulli reward distributions, and thus the terms of given by Table <ref>.
Since the different terms share common derivatives, we first provide the derivative. To leading order it is equal to:
∂/∂ = (1-)/-t(2 (, ) -1) + ln(t/ -1) /2(t- )^2
= O ( ln(t)/√(t ), √(ln(t)/√(t))).
Next, we focus on the norm and main mode terms, and , respectively.
Since , ≪, all the terms
that depend exponentially on or are negligible to leading order.
We thus obtain
∂/∂∼1/2(t- ) + O(exp(-C_0 t)),
with C_0 >0.
Next, we consider the tail terms of which we propose to rewrite the regularized incomplete beta distribution as
= 1- ,m
= C(,)√()∫_^1 e^- (,θ) dθ
= C(,)√() e^- (,)∫_^1 e^-[ln( /θ) + (1-) ln( 1-/1-θ) ] dθ,
where = /, = and C(,) ∼ [2 π (1-)]^-1/2 + O(^-1/2) by convergence of the Beta distribution to its Gaussian counterpart. Next, we partition the integral, which we denote by I, at a cutoff μ_c = (c - ), leading to:
I = ∫_c^1 e^-[ ln( /θ) + (1-) ln( 1-/1-θ) ] dθ + ∫_0^μ_ce^[ ln(1 + μ/) + (1-) ln( 1 -μ/(1-)) ] /dμ.
Taking μ_c ∼ A √(), we obtain a well-defined expansion of the second integral, which leads to:
I = C e^[ ln( 1 + A/√()) + (1-) ln( 1- A/(1-) √()) ] + ∫_0^ A √()e^μ (/ - 1-/1-) /[ 1 + A_μ^2/ + O(μ^3/^2) ] dμ
= (1-)/ (-) + O(^-2).
where the difference between and is of the order O(^-1) and are then included in the second term.
By the use of Eq. (<ref>), we find
∼ C_2 (1-)/√() (-) e^- ( ,)[ 1 + O(^-1) ],
to leading order.
Using this gives us the dominant order of the variation of :
∂/∂ = -C_2/2 ^3/2 (1-)/(-) e^- ( ,)
- C_2∂/∂[1 + (1-)/( - )^2] e^- ( ,)/√()
-C_2 (1-)/√() (-) e^- ( ,)[ (,) + ∂/∂( 1-/1- - /) ]
+∂ C_2/∂ (1-)/√()(-) e^- ( ,) + O ( e^- ( ,)^-2),
where C_2 = C e^ (,) e^- (,) accounts for the change of variable between (,) and (,). The derivative of C_2 is to leading order:
∂ C_2/∂∼ C_2 O (1/^3/2).
Combining Eq. (<ref>) with Eq. (<ref>) yields the leading order of the derivative of :
∂/∂ ∼ - C_2 (,) (1-)/√() (-) e^- ( ,)(1+ O (1/)).
We finally expand to leading order, which yields:
∂/∂ = ∂/∂ [ (, ) + 1/2ln(2 π )] + [ ∂/∂( 1-/1- - /) + (, ) - 1/2 ]
= - C_2 (,)^2 √() (1-)/(-) e^- ( ,)(1+ O(1/)).
Inserting the leading order terms of Eqs. (<ref>), (<ref>), and (<ref>) into Eq. (<ref>) leads to:
∂/∂ = 1/2(t- ) - C_2 √() (,)^2 (1-)/ (-) e^- ( ,)(1+ O (1/))
- 1/2ln(t) C_2 (,) (1-)/√() (-) e^- ( ,)(1+ O (1/)).
Finally, setting ∂/∂ =0 leads to:
1/t∼ 2 C_2 √() (,) (1-)/ (-) e^- ( ,)( 1 + ln(t)/2 (,)),
which, by taking the logarithm and noting that →, leads to:
ln(t) ∼ ( ,) - ln[ 2 C_2 √() (,) (1-)/ (-)( 1 + ln(t)/2 (,)) ].
Hence, we obtain the Lai and Robbins relation for the algorithm:
∼ln(t)/ ( ,) + o(ln(t)).
§ TUNED APPROXIMATE INFORMATION MAXIMIZATION
Here, we detail how the algorithm can be tuned for specific multi-armed bandit problems, showing its capacity to outperform Thompson sampling (see Fig. <ref>).
We propose some empirically optimised variations to the functional form of the approximate entropy [Eq. (<ref>) and Table <ref>] derived in the main text.
These lead to a simplified version of the approximate entropy, which is adjusted to provide a better expected regret in case of expected rewards drawn according to a uniform prior while still respecting the Lai and Robbins bound.
§.§ Bernoulli rewards with K > 2
To obtain a tuned version of for multi-armed bandits, we propose to simplify the main mode term by keeping the dominant term (when Δ/√(2V_t)→∞) which we multiplied by two:
= - ln( 2 π (1-)/).
We also simplified the expression by neglecting the contribution from (i.e., letting →0).
This leads to the tuned expression:
≈ + .
Thus, the tuned version of for multi-armed Bernoulli bandits consists in replacing functional in Eq. (<ref>) and Eq. (<ref>) by Eq. (<ref>).
§.§ Gaussian rewards with K > 2
For the multi-armed Gaussian bandits, we propose the following form:
(,,,) = 1/8[1+ ( -/√(2 σ^2 ^-1)) ] ln( 2π e^1 -2 σ^2/) + 2ln( 2π eσ^2/) (-/√(2σ^2 ^-1)).
Since, Eq. (<ref>) exhibits a simple closed-form expression, it is possible to derive an exact and explicit expression of its expected gradient for continuous Gaussian reward distributions,
= ∫_-∞^∞e^-μ^2/2 σ^2/√(2πσ^2)[ | ( + μ/ + 1, + 1,..) - (..)| - | (.., + μ/ + 1, +1) - (..)| ]dμ
= ∫_-∞^∞e^-μ^2/2 σ^2/√(2πσ^2)[ |Δ_| - |Δ_| ],
where the two dots refer to constant variables. Noticing that the first term (variation along ) is independent of the integration variable, we obtain:
Δ_ = -1/8[1+ ( -)/√(2σ^2 ^-1)) ] ln( 1 + 1/).
By use of the identity Eq. (<ref>) this leads to:
Δ_ = 1/8ln( 2π e^1 -2 σ^2/) [ ( (-)( + 1)/√(2σ^2)√( + 2 )) - ( - /√(2 σ^2 ^-1)) ]
+ 2ln( 2πσ^2 e^1/+1) ( (-)( + 1)/√(2σ^2)√( + 2 )) - 2ln(2πσ^2 e^1/) ( - /√(2 σ^2 ^-1)).
Combining <ref> and <ref> leads to the complete expression of the gradient:
Δ_max, min = 1/8[1+ ( -/√(2σ^2 ^-1)) ] ln( 1 + 1/)
- 1/8ln( /2π e^1 -2 σ^2) [ ( (-)( + 1)/√(2σ^2)√( + 2 )) - ( - /√(2 σ^2 ^-1)) ]
-2 ln( +1/2πσ^2 e^1) ( (-)( + 1)/√(2σ^2)√( + 2 )) + 2ln(/2πσ^2 e^1) ( - /√(2 σ^2 ^-1)).
Thus, the tuned and continuous version of for Gaussian rewards consists in replacing gradient evaluation in Eq. (<ref>) by Eq. (<ref>)
|
http://arxiv.org/abs/2307.00889v2 | 20230703093930 | Minimality of a toric embedded resolution of singularities after Bouvier-Gonzalez-Sprinberg | [
"Büşra Karadeniz Şen",
"Camille Plénat",
"Meral Tosun"
] | math.AG | [
"math.AG",
"14B05, 14M25, 32S45"
] |
[2020]14B05, 14M25, 32S45
This work is partly supported by the projects TUBITAK no.118F320 and PHC Bosphore no.42613UE
This paper is devoted to the construction of a minimal toric embedded resolution of a rational singularity via jet schemes. The minimality is reached by extending the concept of the profile of a simplicial cone given in <cit.>.
Minimality of a toric embedded resolution of singularities
after Bouvier-Gonzalez-Sprinberg
B. Karadenİz Şen, C. Plénat and M. Tosun
August 1, 2023
==============================================================================================
§ INTRODUCTION
Let X be a variety with the singular locus Sing(X). By <cit.>, it is known that (X, Sing(X)) admits a resolution, meaning that there exists a smooth variety X̃ and a proper birational map X̃→ X which is an isomorphism over X∖ Sing(X). Later, in <cit.>, Nash introduced the arc space X_∞={γ: Spec ℂ[[t]]→ X} associated with X, which provides additional information about a resolution; he also conjectured that the number of irreducible components of X_∞^Sing(X) (the arcs passing through Sing(X)) is at most the number of essential irreducible components of the exceptional locus of a resolution. J. Fernández de Bobadilla and M. Pe Pereira proved in <cit.> that the equality is true for surfaces (see also <cit.>), but there are counterexamples in higher dimensions, see for example <cit.>.
Therefore it makes sense to ask whether one can build a resolution of X by means of its arc space. One way to deal with this is to use the link between the arc and jet spaces of X, as the space of arcs X_∞ may be viewed as the limit of the jet schemes X_m={γ_m: Spec ℂ[t]/(t^{m+1})→ X} <cit.>. We get to the relationship between some irreducible components of jet schemes and divisorial valuations via the correspondence between some irreducible families of arcs (known as cylinders) passing through a subvariety Y and divisorial valuations over Y <cit.>. This raises the following problem:
Can one construct an embedded resolution of singularities of X⊂ℂ^n from the irreducible components of the space X_m^Sing(X) of jets centered at Sing(X)?
In light of this, the authors in <cit.>, (generalizing the dimension 1 case in <cit.>), construct a toric embedded resolution from the jet schemes for some surface singularities which are Newton non-degenerate in the sense of Kouchnirenko <cit.> and get the following diagram:
π^-1(X)∩S̃_Σ=X̃[d]^π[r]^ S̃_Σ[d]^π_Σ
X [r]^f ℂ^n
where S̃_Σ represents the smooth toric variety obtained by a regular refinement Σ of the dual Newton polyhedron DNP(f) of X:{f=0} using the valuations associated to the irreducible components of some m-jets schemes.
With preceding notation, the strict transform of {f=0} by π_Σ is the Zariski closure of (π_Σ)^-1 (ℂ^3 ∩{f=0}).
Moreover, the following result indicates that X̃=π^-1(X)∩S̃_̃Σ̃ is smooth.
<cit.>
Let X⊂ℂ^3 where X:{f=0} is Newton non-degenerate in the sense of Kouchnirenko. Then the following properties are equivalent:
1) A refinement Σ of DNP(f) is regular.
2) The proper birational morphism μ_Σ:Z_Σ⟶ℂ^3 is an embedded toric resolution of singularities of X where Z_Σ is the toric variety associated with Σ.
The goal of this article, following the spirit in <cit.>, is to show that there is a minimal toric embedded resolution when X is a surface with rational singularities of multiplicity 3 (RTP-singularities for short) and to provide an algorithm to build it. The complete list of the minimal abstract resolution graphs of RTP-singularities is presented in <cit.>, where the author gives a characterization of rational singularities via their minimal abstract resolution graphs and proves that the embedding dimension for a rational singularity equals "multiplicity+1". The explicit equations defining RTP-singularities in ℂ^4 are due to G. N. Tyurina <cit.>. Using some suitable projections of these equations, the authors in <cit.> obtained the hypersurfaces X'⊂ℂ^3 with dim(Sing(X'))=1 whose normalizations are the surfaces given in <cit.>, and they showed that X' is Newton non-degenerate in the sense of Kouchnirenko. These nonisolated forms of RTP-singularities are used in <cit.> to construct a toric embedded resolution via the jet schemes X_m of RTP-singularities. But the question of minimality remained open because the abstract resolution obtained in <cit.> was not itself minimal. Here we define the minimality of the resolution as below:
Let Σ be a regular refinement of the DNP(f) with vectors in some subset G_Σ⊂ℝ^3. A minimal toric embedded resolution is a smooth toric variety obtained by Σ if the abstract resolution has no -1 curve and
G_Σ= ∪ G_σ where σ's are full dimensional cones in Σ with
G_σ={x∈σ∩ℤ^n\{0} | ∀ n_1,n_2 ∈σ∩ℤ^n, x=n_1+n_2 ⇒ n_1=0 or n_2=0 }.
Using the equations obtained in <cit.>, we show the following:
There exists an equation giving the nonisolated form of an RTP-singularity such that
i) its abstract resolution graph is minimal,
ii) the chosen irreducible components of the m-jets schemes are associated with vectors which provide an embedded toric resolution,
iii) those vectors are in G_Σ.
By ii) and the fact that the vectors in G_Σ belong to any resolution, this implies that G_Σ is exactly composed of these chosen vectors. We also show that:
The Hilbert basis of the DNP(f) of an RTP-singularity gives a minimal toric embedded resolution.
Sketch of the proof:
i) Using the equations given in <cit.>, we obtain the minimal abstract graph via Oka's algorithm.
ii) Let 𝒞_m be an irreducible component of X_m^Sing(X). Then ψ_m^a^-1(𝒞_m) is an irreducible cylinder in ℂ^3_∞ (where ψ_m^a: ℂ^3_∞⟶ℂ^3_m is the truncation morphism associated with the ambient space ℂ^3). Let η be the generic point of ψ_m^a^-1(𝒞_m). By Corollary 2.6 in <cit.>, the map ν_𝒞_m:ℂ[x,y,z]⟶ℕ defined by
ν_𝒞_m(h)= ord_t (h∘η)
is a divisorial valuation on ℂ^3. We can associate a vector with 𝒞_m, called the weight vector, in the following way:
v(𝒞_m):=(ν_𝒞_m(x),ν_𝒞_m(y),ν_𝒞_m(z)) ∈ℕ^3.
We define the "good" irreducible components of jets schemes giving a resolution after computing the graph of the jet schemes (see <cit.> for definition and detailed computations) and call the corresponding vectors as ”essential valuations”.
iii) Finally, to show that the essential valuations are in G_Σ, we introduce, following <cit.>, the profile for a cone generated by at least 3 vectors. Then we show that the essential valuations are inside the profile; more precisely, we find a convex set inside the profile such that the vectors reach the hypersurfaces delimiting sub-cones, the so-called sub-profiles. The convexity implies that the essential valuations are free over ℤ, i.e. in G_σ. Thus, as they give a non-singular refinement of DNP(f), the essential valuations and the elements of G_σ (for each σ) coincide.
Our remarks and questions:
1) Question 1: It is known that the vectors obtained via tropical valuations of X give the minimal abstract resolution of X (see <cit.>). We observe the intersection of the set of vectors in the Groebner fan of X with the set of vectors obtained from jet schemes of X is exactly the Hilbert basis for rational double point singularities (RDP-singularities). Is this true for all Newton non-degenerate singularities?
2) Question 2: For RDP-singularities and RTP-singularities all the vectors in the Hilbert basis lie inside the profiles. Is the fact that the vectors in the Hilbert basis lie inside the profile a characterization of rational singularities? For example, the surfaces defined by f=y^3+xz^2-x^4=0 and f=z^2+y^3+x^21=0 have elliptic singularities and they are Newton non-degenerate. Their Hilbert bases give a resolution of singularities; but in both cases, the profile does not contain all the vectors in the Hilbert basis.
3) Question 3: Does Hilbert basis give an embedded resolution for any Newton non-degenerate singularity?
This article is structured as follows: We start by recalling the definition of Hilbert basis of a cone. We generalize the notion of a profile given in <cit.>. Then, using the new equations of RTP-singularities (comparing with <cit.>) we develop the proof of the theorem for B-types which was a special case in <cit.> as the authors did not obtain a toric embedded resolution. We end up with some remarks on the preceding questions. One can find in the Appendix the computations for the RTP-singularities.
§ HILBERT BASIS OF POLYHEDRAL CONES
Let n,r∈ℕ^*. Let v_1,…,v_r be some vectors in ℤ^n. A rational polyhedral cone in ℝ^n generated by the vectors {v_1,…,v_r} is the set
σ:=<v_1,…,v_r>={v∈ℝ^n | v=∑_i=1^rλ_i v_i, λ_i∈ℝ_≥ 0}.
When σ doesn't contain any linear subspace of ℝ^n we call it strongly convex. In the sequel, a cone will mean a strongly convex rational polyhedral cone. The dimension of σ is the dimension of the subspace span{v_1,…,v_r} in ℝ^n. Two cones σ and σ' in ℝ^n are said to be equivalent if dim(σ )=dim(σ') and there exists a matrix A∈ GL_n(ℤ) with M(σ)=A· M(σ') where M(σ) denotes the matrix [v_1 … v_r]. When dim(σ )=n=r we say that σ is a simplicial cone.
A vector v ∈ℤ^n is called primitive if all its coordinates are relatively prime. A cone σ=<v_1,…,v_r>⊂ℝ^n is called regular if the generating vectors are primitive and M(σ) is unimodular.
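In practice, regularity of a simplicial cone can be checked by a single determinant; a minimal sketch (in Python, assuming the generating vectors are already primitive) is:

```python
import numpy as np

def is_regular(cone):
    """A simplicial cone <v1,...,vn> is regular iff the matrix of its primitive generators is unimodular."""
    M = np.array(cone, dtype=float).T     # columns are the generating vectors
    return abs(round(np.linalg.det(M))) == 1

# e.g. is_regular([(0, 0, 1), (1, 0, 0), (1, 1, 1)]) -> True
```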
It is well known that the notion of regular cones is important in toric geometry, and in singularity theory a regular cone leads to a smooth toric variety. A regular cone can be constructed from a non-regular cone. Such a process is called regular refinement; it consists of a refinement of a cone into the subcones by some n-1 dimensional subspaces such that every subcone in the subdivision is regular. Let's recall a few concepts to provide a better definition of getting a regular refinement of a cone. Consider the set S_σ:=σ∩ℤ^n which is a finitely generated semigroup with respect to the addition. For special σ's there are several methods to find the set of generators of S_σ. One method comes from integer programming <cit.>.
A subset H_σ⊂ S_σ is called the Hilbert basis of σ if any element u∈ S_σ can be written as a non-negative integer combination of the elements in H_σ and it is the smallest set of generators with respect to inclusion.
<cit.>
Every cone admits a finite Hilbert basis.
The Hilbert basis H_σ is contained in the parallelepiped
P_σ:={u∈ℤ^n | u=∑_i=1^rλ_i v_r , 0≤λ_i ≤ 1}.
This follows from the fact that any vector u=∑_{i=1}^r λ_i v_i ∈σ with λ_i≥ 0 can be written as u=∑_{i=1}^r(⌊λ_i⌋ + λ'_i)v_i, where ⌊λ_i⌋ is the integer part of λ_i and ∑_{i=1}^r λ'_i v_i ∈ P_σ.
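The remark above yields a brute-force computation of the Hilbert basis for a simplicial cone in ℤ^3: enumerate the integer points of the parallelepiped P_σ and discard those that split as a sum of two nonzero points of the semigroup. A sketch (ours, not optimized) is:

```python
import numpy as np
from itertools import product

def hilbert_basis_simplicial(v1, v2, v3):
    """Hilbert basis of the simplicial cone <v1, v2, v3> in Z^3 via the parallelepiped P_sigma."""
    V = np.array([v1, v2, v3], dtype=float)             # rows are the generators
    Vinv = np.linalg.inv(V.T)                            # maps a point to its coordinates lambda_i
    corners = np.array([np.array(a, dtype=float) @ V for a in product((0, 1), repeat=3)])
    lo, hi = corners.min(axis=0).astype(int), corners.max(axis=0).astype(int)
    pts = []
    for u in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
        lam = Vinv @ np.array(u, dtype=float)
        if np.all(lam >= -1e-9) and np.all(lam <= 1 + 1e-9) and any(u):
            pts.append(u)                                # nonzero integer point of P_sigma
    pts_set = set(pts)
    basis = []
    for u in pts:
        reducible = any(tuple(int(a - b) for a, b in zip(u, w)) in pts_set
                        for w in pts if w != u)
        if not reducible:
            basis.append(u)
    return basis

# e.g. hilbert_basis_simplicial((1, 0, 0), (0, 1, 0), (1, 1, 2))
# returns (1, 1, 1) in addition to the three generators.
```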
The first primitive vector lying on a 1-dimensional subcone of σ is called an extremal vector of σ.
Let σ⊂ℝ^3 be a cone. If an element u∈ S_σ is in H_σ then it is an extremal vector in any regular refinement of σ.
Let Σ be a regular refinement of σ. Denote by τ_1,τ_2,…, τ_k the maximal dimensional regular subcones in Σ. Let u∈ H_σ. Then u belongs to at least one of the τ_i's, so u=α_1v_1^(i)+α_2v_2^(i)+α_3v_3^(i), where v_1^(i), v_2^(i), v_3^(i) are the extremal vectors of τ_i, which form a basis of ℤ^3, and the α_j are non-negative integers. Since u belongs to H_σ, this decomposition cannot be nontrivial, so u=v_j^(i) for some j=1,2,3, which means that u itself is an extremal vector for τ_i.
Let σ=<v_1,v_2,…,v_n>⊂ℝ^n be a simplicial cone. Consider the map
l_σ: ℝ^n →ℚ
v ↦ l_σ(v)
such that ł_σ(v_i)=1 with each extremal vector v_i for σ.
<cit.> The subset
p_σ:=σ∩ l_σ^-1([0;1])
is called the profile of σ.
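For a simplicial cone, l_σ is the unique linear form taking the value 1 on every extremal vector, so both l_σ and membership in the profile reduce to one linear solve; a sketch (ours) is:

```python
import numpy as np

def profile_height(generators, v):
    """l_sigma(v) for the simplicial cone spanned by `generators` (rows are the extremal vectors)."""
    G = np.array(generators, dtype=float)
    coeffs = np.linalg.solve(G, np.ones(len(generators)))   # l_sigma(x) = coeffs . x
    return float(coeffs @ np.array(v, dtype=float))

# v lies in the profile p_sigma iff v is in the cone and 0 <= profile_height(generators, v) <= 1.
```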
In the case σ⊂ℝ^n is non-simplicial (which will be often the case for RTP-singularities below), we extend the definition as below.
The profile of a cone σ=<v_1,v_2,…,v_r>⊂ℝ^n is the smallest convex hull such that its extremal vectors are exactly v_1,v_2,…,v_r.
It may happen that all extremal vectors are on a unique hyperplane even though σ=<v_1,v_2,…,v_r>⊂ℝ^n is non-simplicial. In this case, p_σ is defined as in the case of a simplicial cone.
Moreover, p_σ can be identified with its boundaries composed by the union of at most (r-2) hyperplanes in ℝ^n.
Let σ=<v_1,v_2,…,v_r>⊂ℝ^n. There is no other integer point in p_σ than the elements of H_σ.
Assume that r=n and σ is simplicial. We have v=∑_i=1^nα_iv_i ∈σ with α_i∈ℝ_≥ 0. Let v∈ p_σ. Then l_σ(v)∈ [0,1] which means
0≤ l_σ(α_1v_1+α_2v_2+…+α_nv_n)=α_1l_σ(v_1)+α_2l_σ(v_2)+…+α_nl_σ(v_n)≤ 1
Since l_σ(v_i)=1 for all i, we have 0≤α_1+α_2+…+α_n≤1. If there exists some i_0∈{1,…,n} such that α_i_0=1, we get v=v_i_0, which is an extremal vector of σ and hence belongs to H_σ. If not, we have α_i=a_i/b_i with a_i<b_i, b_i≠ 0 for all i. As v cannot be written as the sum of two nonzero integer vectors of σ, we have v∈ H_σ.
When σ is a non-simplicial cone, we get the affirmation by applying the discussion above to the each simplicial subcone lying in a suitable regular refinement of σ into simplicial cones.
§ THE NEW EQUATIONS FOR RTP-SINGULARITIES
Let X be defined by a complex analytic function
f(z_1,z_2,… z_n)=∑_(a_1,a_2,… a_n)∈ℤ^n_≥ 0 c_(a_1,a_2,… a_n)z_1^a_1z_2^a_2…z_n^a_n
The closure in ℝ^n of the convex hull of the set
S(f):={(a_1,a_2,… a_n)∈ℤ^n_≥ 0| c_(a_1,a_2,… a_n)≠0}
is called the Newton polyhedron of f, denoted as NP(f). Let Σ(f) be a regular refinement of the dual Newton polyhedron DNP(f). Then X_Σ(f) is smooth and a toric map X_Σ(f)→ℂ^n obtained between the corresponding toric varieties is a toric embedded resolution of X (see <cit.>). When the coefficients c_(a_1,a_2,… a_n)∈ℂ are generic and the NP(f) is nearly convenient we say that f is non-degenerate with respect to NP(f). In this case, the regular refinement Σ_2(f) of all 2-dimensional cones in DNP(f) gives an abstract resolution of X and it induces a toric embedded resolution of X by getting a regular refinement Σ_3(f) of all 3-dimensional cones in ℝ^3 <cit.>.
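For instance, the support S(f) underlying NP(f) can be read off directly from the exponents of f; a small sketch (using sympy, with the B_{2r-1,n} equation for r = 2, n = 2 taken as an illustrative example) is:

```python
from sympy import symbols, Poly

x, y, z = symbols("x y z")

def newton_support(f):
    """Exponent set S(f), whose convex hull gives the Newton polyhedron NP(f)."""
    return sorted(Poly(f, x, y, z).monoms())

# B_{2r-1,n} with r = 2, n = 2: f = x^7 z - x^2 y^2 - y^2 z
print(newton_support(x**7 * z - x**2 * y**2 - y**2 * z))   # [(0, 2, 1), (2, 2, 0), (7, 0, 1)]
```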
Such an embedded resolution is said to be minimal if the vectors appearing in the regular refinement are all irreducible and if the abstract resolution does not present -1 curves.
We present below an algorithm to find a minimal toric embedded resolution of RTP-singularities which are treated in <cit.>. Here we use the equations obtained in <cit.> to present the non-isolated form of RTP-singularities different than those in <cit.>.
The regular refinement Σ_2(f) of all 2-dimensional cones in DNP(f) where f is one of the following equations gives the minimal abstract resolution of the corresponding RTP-singularity.
i) A_k,l,m:
∙ For k=l=m>1
y^3m+3+xy^m+1z-xz^2-z^3=0
∙ For k=l<m and k,l,m≥ 1
y^k+l+m+3+y^2k+2z+y^k+1z^2+xy^k+1z+xz^2-z^3=0
∙ For l<m<k and k,l,m≥ 1
l+k>2m and l+k≤ 2m, l+k is even
y^3k+y^2k+m+l-2-2y^l+kz-xy^kz+y^mz^2+xz^2-z^3=0
∙ For l<m<k and k,l,m≥ 1
l+k≤ 2m, l+k is odd
y^2k+m+y^k+mz+y^l+kz+xy^kz-y^kz^2+y^lz^2+xz^2-z^3=0
ii) B_k,n: For r≥ 1, n≥ 2
∙ For k=2r-1
x^2n+3z-x^ry^2-y^2z=0
∙ For k=2r
x^n+r+2y-x^2n+3z+y^2z=0
iii) C_n,m: For n≥ 3, m≥ 2
x^n-1y^2m+2+y^2m+4-xz^2=0
iv) D_n: For n≥ 1
x^2n+2y^2-x^n+3z+yz^2=0
v) E_60:
z^3+y^3z+x^2y^2=0
vi) E_07:
z^3+y^5+x^2y^2=0
vii) E_70:
z^3+x^2yz+y^4=0
viii) F_k-1: For k≥ 2
y^2k+3+x^2y^2k-xz^2=0
ix) H_n: For n≥ 1
∙ For n=3k-1
z^3+x^2y(x+y^k-1)=0
∙ For n=3k
z^3+xy^kz+x^3y=0
∙ For n=3k+1
z^3+xy^k+1z+x^3y^2=0
Recall that, when X⊂ℂ^n is a surface with a rational singularity, the minimality of an abstract resolution is characterized by the fact that there is no -1 curve in the resolution. These new equations are Newton non-degenerate in the sense of Kouchnirenko, so one can show by Oka's algorithm <cit.> that the abstract resolution in each case is minimal (see the tables in the Appendix). Note that the equations given in <cit.> for the types E's and H_n are the same as the ones given above and lead to the minimal abstract resolution, which is not the case for the other types with the equations presented in <cit.>.
§ MINIMAL TORIC EMBEDDED RESOLUTIONS: THE B_K,N-SINGULARITIES
§.§ Jet schemes and embedded valuations
Let us recall few facts about the jet schemes and define the set EV(X) of the embedded valuations, that will provide us the regular refinement of a given DNP(f). Let X∈ℂ^3 be an hypersurface defined by one of the equations above. Let m∈ℕ. Consider the morphism
φ:ℂ[x,y,z]/<f>→ℂ[t]/<t^m+1>
where x(t)=x_0+x_1t+x_2t^2+…+x_mt^m (mod t^m+1)
y(t)=y_0+y_1t+y_2t^2+…+y_mt^m (mod t^m+1)
z(t)=z_0+z_1t+z_2t^2+…+z_mt^m (mod t^m+1)
such that f(x(t), y(t), z(t))=F_0+tF_1+…+t^mF_m (mod t^m+1). The m-th jets scheme of X is defined by
X_m=Spec(ℂ[x_i, y_i, z_i; i=1,…,m]/<F_0, F_1,… ,F_m>)
It is a finite dimensional scheme. For n∈ℕ with m>n we have a canonical projection π_m,n: X_m → X_n. These affine morphisms verify π_m,p∘π_q,m=π_q,p
for p<m<q and they define a projective system whose limit is a scheme that we denote X_∞, which is called the arcs space of X. Note that X_0=X. The canonical projection π_m,0:X_m⟶ X_0 will be denoted by π_m. Denote also X_m^Y:=π_m^-1(Y) for Y⊂ X. Consider the canonical morphism Ψ_m : X_∞⟶ X_m and the truncation map ψ_m^a: ℂ^3_∞⟶ℂ^3_m associated with the ambient space ℂ^3, here the exponent "a" stands for ambient map . The morphism ψ_m^a is a trivial fibration, hence
ψ_m^a^-1(𝒞_m) is an irreducible cylinder in ℂ^3_∞. Let η be the generic point of ψ_m^a^-1(𝒞_m). By Corollary 2.6 in <cit.>, the map ν_𝒞_m:ℂ[x,y,z]⟶ℕ defined by
ν_𝒞_m(h)= ord_t (h∘η)
is a divisorial valuation on ℂ^3. To each irreducible component 𝒞_m of X_m^Y, let us associate a vector, called the weight vector, in the following way:
v(𝒞_m):=(ν_𝒞_m(x),ν_𝒞_m(y),ν_𝒞_m(z)) ∈ℕ^3.
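For readers who want to reproduce such computations, the equations F_0,…,F_m of the jet schemes can be generated mechanically by substituting the truncated arcs and collecting the coefficients of the powers of t. The following Python/sympy sketch (purely illustrative, using the E_60 equation and m=2) is one way to do this:

import sympy as sp

m = 2
t = sp.symbols('t')
xs = sp.symbols(f'x0:{m+1}')   # x_0, ..., x_m
ys = sp.symbols(f'y0:{m+1}')
zs = sp.symbols(f'z0:{m+1}')

# truncated arcs x(t), y(t), z(t)
x = sum(xs[i]*t**i for i in range(m+1))
y = sum(ys[i]*t**i for i in range(m+1))
z = sum(zs[i]*t**i for i in range(m+1))

# the E_60 equation f = z^3 + y^3 z + x^2 y^2
f = z**3 + y**3*z + x**2*y**2

# F_i is the coefficient of t^i in f(x(t), y(t), z(t))
F = [sp.expand(f).coeff(t, i) for i in range(m+1)]
for i, Fi in enumerate(F):
    print(f'F_{i} =', sp.expand(Fi))

In particular F_0 is the equation of X itself in the variables x_0, y_0, z_0.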
Now, we want to characterize the irreducible components of X_m^Y that will allow us to construct an embedded resolution of X.
For p∈ℕ, we consider the following cylinder in the arcs space:
Cont^p(f)={γ∈ℂ^3_∞ : ord_t (f∘γ)=p}.
Let X:{f=0}⊂ℂ^3 be a surface. Let Y be a subvariety of X.
(i) The elements of the set:
EC(X):={Irreducible components 𝒞_m of X_m^Y such that ψ_m^a^-1(𝒞_m)∩ Cont^m+1f≠∅
and v(𝒞_m)≠v(𝒞_m-1) for any component 𝒞_m-1 verifying
π_m,m-1(𝒞_m)⊂𝒞_m-1, m⩾ 1 }
are called the essential components for X.
(ii) The elements of the set of associated valuations
EV(X):={ν_𝒞_m, 𝒞_m∈ EC(X) }
are called embedded valuations for X.
In <cit.> the authors explicitly construct the jet graphs and embedded resolutions for all cases of RTP-singularities; but the abstract resolutions of the singularities of types A, B, C, D and F contained at least one curve with self-intersection -1, which is not the case for the new equations. Moreover, the equation of B-type singularities given in <cit.> is very particular since its jet graph provides a resolution which is not a refinement of the DNP(f). In this article, we find a toric embedded resolution with the help of the jet graph of the new equation for B-type singularities. We also show that the vectors obtained from the jets are irreducible by showing that they lie inside the profile; more precisely, they reach hypersurfaces that form a new convex subcone inside the profiles, which we call subprofiles. For geometrical reasons, the vectors will be in G_σ for each cone σ.
In the sequel we present the entire computations for B-type singularities; the results for the other cases are collected in a table (see Appendix).
§.§ B_k,n-singularities
Consider the hypersurface X⊂ℂ^3 having a B_k,n-singularity, meaning that its defining equation is f=x^{2n+3}z-x^ry^2-y^2z=0 for k=2r-1, or f=x^{n+r+2}y-x^{2n+3}z+y^2z=0 for k=2r (given in the list above).
Comparing with <cit.>, we see that we only have two cases to treat instead of five cases. Moreover the computation process is simpler since, in both cases the NP(f) admits a unique compact face.
The DNP(f) for k=2r-1 and k=2r are as follows:
For B_2r-1,n-singularities, the embedded valuations of X are
∙ (1,0,1), (1,0,2), …, (1,0,r)
∙ (2,2n+3,0),(2,2n+3,1),…,(2,2n+3,2r)
∙ (0,1,1),(0,1,2), (1,n+2,r+1)
∙ (1,s,0),(1,s,1),…,(1,s,r) with 1≤ s ≤ n+2
and, for B_2r,n-singularities, the embedded valuations of X are
∙ (1,0,1), (1,0,2), …, (1,0,n+r+2)
∙ (2,2n+3,0),(2,2n+3,1),…,(2,2n+3,2r+1)
∙ (0,1,1), (1,n+2,r+1)
∙ (1,1,0), (1,1,1),…,(1,1,n+r+1)
∙ (1,2,0), (1,2,1),…,(1,2,n+r)
⋮
∙ (1,n+2,0),(1,n+2,1),…,(1,n+2,r).
In both cases the embedded valuations give a toric embedded resolution of X, and the vectors on the skeleton give the minimal abstract resolution graph of the singularity.
In order to give the elements of EV(B_k,n), we compute the jet graph of the singularity as in <cit.>. The jet graph of B_2r-1,n-singularities is
and the jet graph of B_2r,n-singularities is
The vectors in the set EV(B_k,n) give a regular refinement of the DNP(f). They are the vectors written in blue in the jet graphs. A (simplicial) regular refinement of each subcone in the DNP(f) for B_2r-1,n-singularities with these elements is illustrated in the following figure:
The refinement of σ_1 in DNP(f) is regular since we have
|[ 0 1 1; 0 s s+1; 1 r r; ]|=1 for 0≤ s≤ n and also |[ 0 0 1; 0 1 n+2; 1 2 r+1; ]|=|[ 0 2 1; 0 2n+3 n+2; 1 2r r+1; ]|=|[ 0 2 1; 0 2n+3 n+1; 1 2r r; ]|=1.
For the regularity of σ_2 in DNP(f), we look at two subcones:
For <(1,n+1,0),(1,n+1,r),(2,2n+3,2r),(2,2n+3,0)>,
|[ 2 2 1; 2n+3 n+1 n+1; 2s+1 s+1 s; ]|=1, |[ 2 2 1; 2n+3 2n+3 n+1; 2s 2s+1 s; ]|=1, |[ 2 2 1; 2n+3 2n+3 n+1; 2s 2s-1 s; ]|=1 for 0≤ s≤ r-1.
And, for the subcone <(1,n+1,0),(1,n+1,r),(1,0,r),(1,0,0)> we have
|[ 1 1 1; k k k+1; l l+1 r; ]|=1 for 0≤ l≤ r, |[ 1 1 1; k k k-1; l l+1 r; ]|=1 for 0≤ l≤ r,
|[ 1 1 1; k k k+1; l l+1 0; ]|=1 for 0≤ l≤ r, |[ 1 1 1; k k k-1; l l+1 0; ]|=1 for 0≤ l≤ r.
Finally for the regularity of σ_3 in DNP(f), we look at the subcone <(1,n+2,0),(1,n+2,r+1),(2,2n+3,2r),(2,2n+3,0)> for which we have, for all 0≤ s≤ r-1
|[ 2 2 1; 2n+3 2n+3 n+2; 2s 2s+1 s; ]|=1, |[ 2 2 1; 2n+3 2n+3 n+2; 2s 2s-1 s; ]|=1 and |[ 2 2 1; 2n+3 n+2 n+2; 2s+1 s+1 s; ]|=1.
and the subcone <(1,n+2,0),(1,n+2,r+1),(0,1,2),(0,1,0)> has, for 0≤ l≤ r
|[ 1 1 0; n+2 n+2 1; l l+1 1; ]|=1.
Hence DNP(f)=σ_1∪σ_2∪σ_3 is regular. A similar computation gives a regular refinement for the B_2r,n-singularities. Using Oka's algorithm, we can compute self-intersections and genus of the corresponding curves, and show that we get the minimal abstract resolution.
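The unimodularity conditions used above are easy to verify mechanically once numerical values of the parameters are fixed; the following Python sketch (with the illustrative choice n=2, r=3, an assumption made only for this check and not part of the proof) confirms the determinants listed for σ_1:

import numpy as np

n, r = 2, 3   # sample values, for illustration only

def unimodular(v1, v2, v3):
    # |det| of the matrix whose columns are the three vectors
    d = round(np.linalg.det(np.array([v1, v2, v3], dtype=float).T))
    return abs(d) == 1

# triples (0,0,1), (1,s,r), (1,s+1,r) for 0 <= s <= n
for s in range(n + 1):
    assert unimodular((0, 0, 1), (1, s, r), (1, s + 1, r))

# the remaining triples listed for sigma_1
assert unimodular((0, 0, 1), (0, 1, 2), (1, n + 2, r + 1))
assert unimodular((0, 0, 1), (2, 2*n + 3, 2*r), (1, n + 2, r + 1))
assert unimodular((0, 0, 1), (2, 2*n + 3, 2*r), (1, n + 1, r))
print('all listed triples for sigma_1 are unimodular')

The analogous checks for σ_2 and σ_3 follow the same pattern.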
The vectors in EV(B_k,n) live inside the profiles of the B_k,n-singularities. More precisely, for each subcone in DNP(f) there exist hypersurfaces inside the profile which are reached by the vectors in EV(B_k,n). Moreover, the vectors in each subcone are free over ℤ.
For B_2r-1,n-singularities, let us look at the 3-dimensional subcones in DNP(f):
For σ_1=<(0,0,1),(1,0,r),(0,1,2),(2,2n+3,2r)>, the profile p_σ_1 is bounded by two hyperplanes which are
H_1:(2n-2nr+3-3r)x-y+(2n+3)z-(2n+3)=0 and H_2:(n-r+2)x-y+z-1=0
Let p_σ_1^1 and p_σ_1^2 denote two cones bounded respectively by the hyperplanes H_1^(1):(r-1)x-z+1=0 and H_2^(1):(n-r+2)x-y+z-1=0. They form a convex hull inside the profile p_σ_1; we call them (and, by abuse of language, the hypersurfaces too) subprofiles. The coordinates of each vector in the set {(1,n+2,r+1), (1,1,n+r+1), (1,2,n+r), (1,3,n+r-1),…,(1,n,r+2),(1,n+1,r+1)} satisfy at least one of the equations defining H_1^(1) and H_2^(1). Moreover p_σ_1^1∪ p_σ_1^2 is convex. This implies that all the elements in the previous set are in H_σ_1.
For σ_2=<(1,0,0),(1,0,r),(2,2n+3,0),(2,2n+3,2r)>, the profile p_σ_2 is bounded by a unique hyperplane which is H:(2n+3)x-y-(2n+3)=0; it contains the vectors (2,2n+3,1),(2,2n+3,2),…,(2,2n+3,2r),(1,0,1),(1,0,2),…,(1,0,n+r+1),(1,1,0),(1,1,1),(1,1,2),(1,1,3),…,(1,1,n+r+1),(1,2,0),(1,2,1),…,(1,2,n+r),(1,3,0),(1,3,1),…,(1,3,n+r-1),…,(1,n+1,0),(1,n+1,1),…,(1,n+1,r+1). All these vectors including the generators are in the subprofile defined by two hyperplanes H_1^(2): x=1 and H_2^(2): (n+2)x-y-1=0.
For σ_3=<(0,1,0),(0,1,2),(2,2n+3,0),(2,2n+3,2r)>, the profile p_σ_3 is bounded by a unique hyperplane H: (n+1)x-y+1=0; it contains the vectors (2,2n+3,1),(2,2n+3,2),… , (2,2n+3,2r),(1,n+2,0),(1,n+2,1),…,(1,n+2,r+1). All these vectors including the generators belong to the subprofile defined by the hyperplane H:(n+1)x-y+1=0 (here profile and subprofile are the same).
For B_2r,n-singularities, the DNP(f) and the 3-dimensional subcones in it behave as in the following:
For σ_1=<(0,0,1),(1,0,n+r+2),(0,1,1),(2,2n+3,2r+1)>, the profile p_σ_1 is bounded by two hyperplanes H_1: (2n^2+2nr+5n+3r+3)x-(2n+2)y-(2n+3)z+(2n+3)=0 and H_2: rx-z+1=0 (see figure below). It contains the vectors (1,n+2,r+1), (1,1,n+r+2),(1,2,n+r),(1,3,n+r-1),…,(1,n,r+2),(1,n+1,r+1). All these vectors including the generators are in the subprofile defined by the hyperplanes H_1^(1): (n^2+nr+2n+r+1)x-ny-(n+1)z+(n+1)=0 and H_2^(1): rx-z+1=0.
For σ_2=<(1,0,0),(1,0,n+r+2),(2,2n+3,0),(2,2n+3,2r+1)>, the profile p_σ_2 is bounded by the unique hyperplane H: (2n+3)x-y-(2n+3)=0. It contains the vectors (2,2n+3,1), (2,2n+3,2),… ,(2,2n+3,2r),(1,0,1), (1,0,2),… ,(1,0,n+r+1), (1,1,0), (1,1,1), (1,1,2),(1,1,3),… (1,1,n+r+1), (1,2,0),(1,2,1), … , (1,2,n+r), (1,3,0), (1,3,1),…,(1,3,n+r-1),… , (1,n+1,0), (1,n+1,1),… , (1,n+1,r+1). All these vectors including the generators are in the subprofile defined by the two hyperplanes H_1^(2): x=1 and H_2^(2): (n+2)x-y-1=0.
For σ_3=<(0,1,0),(0,1,1),(2,2n+3,0),(2,2n+3,2r+1)>, the profile p_σ_3 is bounded by the unique hyperplane H: (n+1)x-y+1=0. It contains the vectors (2,2n+3,1),(2,2n+3,2),…,(2,2n+3,2r),(1,n+2,0),(1,n+2,1),…,(1,n+2,r+1). All these vectors including the generators are in the subprofile defined by the hyperplane H: (n+1)x-y+1=0.
Let H_DNP(f)=H_σ_1∪ H_σ_2∪ H_σ_3 be the Hilbert basis of DNP(f). The elements of EV(B_k,n) are in H_DNP(f) and give a minimal toric embedded resolution of the singularity.
In fact, by <ref> they are irreducible and, by <ref>, the elements give a resolution and form exactly the Hilbert basis of DNP(f). In other words:
For a B_k,n-singularity with its new equation, the union of the Hilbert bases of the full-dimensional subcones in DNP(f) gives the minimal toric embedded resolution of the singularity.
For all other RTP-singularities, we present the results in a table format (equations, subprofiles, vectors)
in Appendix.
For RDP-singularities, the profiles and subprofiles coincide (see <cit.>).
§ REMARKS ON HYPERSURFACES WITH ELLIPTIC SINGULARITIES
Three natural questions arise from our algorithm applied in the previous sections:
1) Does Hilbert basis give a toric embedded resolution for any Newton non-degenerate singularity?
2) Let σ be a 3-dimensional cone in DNP(f).
(a) Is it true for all rational singularities that each element in H_σ lies inside p_σ?
(b) Are there any singularities for which some element in H_σ lies outside p_σ?
For the first two questions we do not have an answer yet, but the answer to 2(b) is positive, as the following example shows: Let X be the hypersurface defined by f(x,y,z)=y^3+xz^2-x^4.
The dual Newton polyhedron DNP(f) consists of three 3-dimensional cones; these cones and their Hilbert bases are:
σ_1 =<e_1,e_3, u_1, u_2> H_σ_1 ={e_1,e_3,u_1,u_2,(1,1,1),(3,4,5)}
σ_2 =<e_2,u_1,u_2> H_σ_2 ={e_2,u_1,u_2,(1,1,0),(2,1,0),(1,1,1),(2,3,3)}
σ_3 =<e_2,e_3,u_2> H_σ_3 ={e_2,e_3,u_2,(1,2,2),(2,3,3),(3,4,5)}
where u_1=(3,1,0), u_2=(6,8,9). The profile p_σ_3 of σ_3 is defined by the hyperplane H: 8x-3y-3z+3=0. But the following figure shows that the element (1,2,2) from H_σ_3 is outside of p_σ_3. The set H_DNP(f) still gives a minimal toric embedded resolution of the singularity.
Note that the hypersurface in this example has elliptic singularities and it is Newton non-degenerate. It is then natural to ask if it is a characterization of rational singularities, or just a question of choice of coordinates.
§ REMARKS ON THE GRÖBNER FAN OF X
Let X be defined by f(x,y,z)=0 and let S(f)⊂ℕ^3 denote the support of f, i.e. the set of exponents of the monomials appearing in f. Let w=(w_1,w_2,w_3)∈ℝ^3_>0. The number
o_w(f):=min {w_1a_1+w_2a_2+w_3a_3 | (a_1,a_2,a_3)∈ S(f)}
is called the w-order of f. The polynomial
In_w(f):=∑_{(a_1,a_2,a_3)∈ S(f)| w_1a_1+w_2a_2+w_3a_3=o_w(f)} c_(a_1,a_2,a_3)x^a_1y^a_2z^a_3
is called the w-initial form of f. We say that u is equivalent to w if In_u(f)=In_w(f). The closure of the set
C_w(f):={u∈ℝ^3| In_w(f)=In_u(f)}
is a cone, called the Gröbner cone of f. The union of the Gröbner cones of f forms a fan, called the Gröbner fan of X and denoted by 𝒢(X) (see <cit.> for more details), which was introduced by T. Mora and L. Robbiano in <cit.>. The full-dimensional cones in 𝒢(X) are in correspondence with the distinct monomials in f <cit.>. The set
𝒯(f):={u∈ℝ^3 | In_u (f) is not a monomial}
is called the tropical variety of f.
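As a small computational illustration (with sample parameter values that are assumptions made only for this example), the w-order and the w-initial form can be read off directly from the support S(f); here for the B_2r-1,n equation with n=2, r=3:

n, r = 2, 3   # illustrative values
# support of f = x^{2n+3}z - x^r y^2 - y^2 z as {exponent vector: coefficient}
support = {
    (2*n + 3, 0, 1):  1,
    (r, 2, 0):       -1,
    (0, 2, 1):       -1,
}

def w_order(w):
    return min(sum(wi*ai for wi, ai in zip(w, a)) for a in support)

def initial_form(w):
    o = w_order(w)
    return {a: c for a, c in support.items()
            if sum(wi*ai for wi, ai in zip(w, a)) == o}

# w on the ray (2, 2n+3, 2r): every monomial attains the minimum, so In_w(f) = f
print(initial_form((2, 2*n + 3, 2*r)))
# w in the interior of <(0,1,2),(2,2n+3,2r)>: only x^{2n+3}z and -x^r y^2 survive
print(initial_form((1, 4, 4)))

These two evaluations match the cones C̅_w_1 and C̅_w_2 listed below.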
The tropical variety of an RTP-singularity is exactly the minimal abstract resolution of the singularity.
As before we provide the details for B_k,n-singularities:
For B_2r-1,n-singularities, we look for all the vectors w_i∈ℕ^3, 1≤ i ≤ 4 for which In_w_1(f)=f, In_w_2(f)=x^{2n+3}z-x^ry^2, In_w_3(f)=x^{2n+3}z-y^2z and In_w_4(f)=-x^ry^2-y^2z. This gives the following Gröbner cones in 𝒢(X):
C̅_w_1=<(2,2n+3,2r)>,
C̅_w_2=<(0,1,2),(2,2n+3,2r)>,
C̅_w_3=<(2,2n+3,0),(2,2n+3,2r)>,
C̅_w_4=<(1,0,r),(2,2n+3,2r)>.
For B_2r,n-singularities, we look for all the vectors w_i∈ℕ^3, 1≤ i ≤ 4 for which In_w_1(f)=f, In_w_2(f)=x^{n+r+2}y-x^{2n+3}z, In_w_3(f)=-x^{2n+3}z+y^2z and In_w_4(f)=x^{n+r+2}y+y^2z. This gives the following Gröbner cones of 𝒢(X):
C̅_w_1=<(2,2n+3,2r+1)>,
C̅_w_2=<(0,1,1),(2,2n+3,2r+1)>,
C̅_w_3=<(2,2n+3,0), (2,2n+3,2r+1)>,
C̅_w_4=<(1,0,n+r+2), (2,2n+3,2r+1)>.
In both cases, comparing with Figure 1 above, the union C̅_w_1∪C̅_w_2∪C̅_w_3∪C̅_w_4 gives the abstract resolution of the B_k,n-singularities.
Let f define an RDP-singularity. Let 𝒥(f) be the set of vectors appearing in the jet graph of f. The intersection 𝒢(X)∩𝒥(f) is exactly the Hilbert basis of DNP(f), and so gives the minimal toric embedded resolution of the singularity. This is not always true for RTP-singularities. For example, in the case of the E_60-singularity, the vector w=(2,3,3) for which In_w(f)=z^3 is in the intersection but it is not in the Hilbert basis of DNP(f). It is important to notice that this vector does not appear in building the toric embedded resolution of the singularity. Hence 𝒢(X)∩𝒥(f) also gives a toric embedded resolution of an RTP-singularity, which may not be minimal.
mag A. Altintaş Sharland, G. Çevik and M. Tosun, Nonisolated forms of rational triple singularities, Rocky Mountain J. Math., 46, No.2, (2016), 357-388.
ACMZ A. Altintaş Sharland, C. O. Oğuz, M. Tosun and Z. Zafeirakopoulos, Algorithm providing explicit equations of rational singularities, In preparation.
AGS F. Aroca, M. Gomez-Morales and K. Shabbir, Torical modification of Newton non-degenerate ideals, Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas, 107-1, (2013), 221-239.
ar-hu F. Aroca, M. Gomez-Morales and H. Mourtada, Grobner fan and embedded resolutions of ideals on toric varieties, Beiträge zur Algebra und Geometrie/Contributions to Algebra and Geometry, (2023), 1-12.
Artin M. Artin, On isolated rational singularities of surfaces, Amer. J. Math. 88, (1966), 129-136.
cg C. Bouvier and G. Gonzalez-Sprinberg, Systéme générateur minimal, diviseurs essentiels et G-désingularisations de variétés toriques, Tôhoku Math. J., 47, (1995), 125-149.
Fernex T. de Fernex, The space of arcs of an algebraic variety, Algebraic Geometry: Salt Lake City 2015, Proc. Symp. Pure Math., Vol. 97-1, 2018.
Fernex2 T. de Fernex, Three-dimensional counter-examples to the Nash problem, Compos. Math. Vol. 149, (2013), 1519-1534.
dedo T. de Fernex and R. Docampo, Terminal valuations and the Nash problem, Invent. Math. Vol. 203 (2016), 303-331.
ELM L. Ein, R. Lazarsfeld and M. Mustata, Contact loci in arc spaces, Compos. Math. Vol. 140, (2004), 1229-1244.
bobadilla-pe J. Fernandez de Bobadilla and M. Pe Pereira, The Nash problem for surfaces, Annals of Math., 176, No.3, (2012), 2003-2029.
Fukuda K. Fukuda, A. N. Jensen and R. R. Thomas, Computing Gröbner fans, Math. of Computation, 76, No.260, (2007), 2189-2212.
giles F. R. Giles and W. R. Pulleyblank, Total dual integrality and integer polyhedra, Linear Algebra and Its Applications, 25, (1979), 191-196.
hironaka H. Hironaka, Resolution of singularities of an algebraic variety over a field of characteristic zero I, II, Annals of Math., 79, No.1, (1964), 109-203.
IK S. Ishii and J. Kollár, The Nash problem on arc families of singularities, Duke Math. J. 120-3, (2003), 601-620.
Jensen A. N. Jensen, Computing Grobner fans and tropical varieties in Gfan, The IMA Volumes in Math. and its appl., 148, (2008), 33-46.
JK J. M. Johnson and J. Kollár, Arc spaces of cA-type singularities, J. of Singularities, 7, (2013), 238-252.
bhcm B. Karadeniz, H. Mourtada, C. Plénat and M. Tosun, The embedded Nash problem of birational models of rational triple singularities, J. of Singularities, 22, (2020), 337-372.
khovanskii A. G. Khovanskii, Newton polyhedra (resolution of singularities), J. of Soviet Mathematics, 27, (1984), 2811-2830.
kouch A. G. Kouchnirenko, Polyédres de Newton et nombres de Milnor, Invent Math., 32, No.1, (1976), 1-31.
LMR M. Lejeune-Jalabert, H. Mourtada and A. Reguera, Jet schemes and minimal embedded desingularization of plane branches, Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas, 107-1, (2013), 145-157.
Mora T. Mora and L. Robbiano, The Gröbner fan of an ideal, J. Symb. Comput., 6 (2/3), (1988), 183-203.
Mo H. Mourtada, Jet schemes of rational double point surface singularities, Valuation Theory in Interaction, EMS Ser. Congr. Rep., Eur. Math. Soc., (2014), 373-388.
hc H. Mourtada and C. Plénat, Jet schemes and minimal toric embedded resolutions of rational double point singularities, Comm. in Algebra, 46-3, (2018), 1314-1332.
Nash J. F. Nash, Arc structure of singularities, Duke Math. J., 81-1, (1995), 31-38.
oka M. Oka, Non-degenerate complete intersection singularity, Act. Math. Hermann, Paris, 1997.
O1 M. Oka, On the resolution of the hypersurface singularities, Adv. Stud. Pure Math., 8, (1987), 405-436.
rosales J. C. Rosales and P. A. Garcia-Sanchez, Finitely generated commutative monoids, Nova Science Publishers, Inc., New York, 1999.
Tyurina G. N. Tyurina, Absolute isolatedness of rational singularities and rational triple points, Fonc. Anal. Appl. 2-4, (1968), 324-332.
Varc A. N. Varchenko, Zeta-function of monodromy and Newtons diagram, Invent. Math., 37, No.3, (1976), 253-262.
B. Karadeniz Şen
Gebze Technical University
Department of Mathematics
41400, Kocaeli, Turkey
E-mail: [email protected]
C. Plénat
Aix Marseille University, I2M, CMI
Technopôle Château-Gombert
39, rue F. Joliot Curie, 13453 Marseille Cedex 13
E-mail: [email protected]
M. Tosun
Galatasaray University
Department of Mathematics
Ortaköy 34357, Istanbul, Turkey
E-mail: [email protected]
|
http://arxiv.org/abs/2307.01916v1 | 20230704210030 | Maximizing Seaweed Growth on Autonomous Farms: A Dynamic Programming Approach for Underactuated Systems Navigating on Uncertain Ocean Currents | [
"Matthias Killer",
"Marius Wiggert",
"Hanna Krasowski",
"Manan Doshi",
"Pierre F. J. Lermusiaux",
"Claire J. Tomlin"
] | eess.SY | [
"eess.SY",
"cs.AI",
"cs.RO",
"cs.SY"
] |
List of acronyms:
HJ: Hamilton-Jacobi
HJI: Hamilton-Jacobi-Isaacs
ODE: Ordinary Differential Equation
MPC: Model Predictive Control
MDP: Markov Decision Processes
RMSE: Root Mean Squared Error
MSE: Mean Squared Error
RL: Reinforcement Learning
PDE: Partial Differential Equation
ASV: Autonomous Surface Vehicle
NGR: Net Growth Rate
DP: Dynamic Programming
Maximizing Seaweed Growth on Autonomous Farms:
A Dynamic Programming Approach for Underactuated Systems
Navigating on Uncertain Ocean Currents
Matthias Killer^1,2,*, Marius Wiggert^1,*, Hanna Krasowski^2, Manan Doshi^3,
Pierre F.J. Lermusiaux^3 and Claire J. Tomlin^1
^* M.K. and M.W. have contributed equally to this work.
^1 M.K., M.W., and C.J.T. are with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA. For inquiries contact: [email protected]
^2 M.K. and H.K. are with the School of Computation, Information and Technology of the Technical University of Munich, Germany
^3 M.D. and P.F.J.L. are with the Department of Mechanical Engineering at the Massachusetts Institute of Technology, USA.
The authors gratefully acknowledge the support of the C3.ai Digital Transformation Institute and
the IFI program of the German Academic Exchange Service (DAAD) funded by the Federal Ministry of Education and Research (BMBF).
August 1, 2023
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Seaweed biomass offers significant potential for climate mitigation, but large-scale, autonomous open-ocean farms are required to fully exploit it. Such farms typically have low propulsion and are heavily influenced by ocean currents. We want to design a controller that maximizes seaweed growth over months by taking advantage of the non-linear time-varying ocean currents for reaching high-growth regions. The complex dynamics and underactuation make this challenging even when the currents are known. This is even harder when only short-term imperfect forecasts with increasing uncertainty are available.
We propose a dynamic programming-based method to efficiently solve for the optimal growth value function when true currents are known.
We additionally present three extensions when as in reality only forecasts are known:
(1) our method's resulting value function can be used as
feedback policy to obtain the growth-optimal control for
all states and times, allowing closed-loop control equivalent to re-planning at every time step hence mitigating forecast errors, (2) a feedback policy for long-term optimal growth beyond forecast horizons using seasonal average current data as terminal reward, and (3) a discounted finite-time DP formulation to account for increasing ocean current estimate uncertainty.
We evaluate our approach through 30-day simulations of floating seaweed farms in realistic Pacific Ocean current scenarios. Our method demonstrates an achievement of 95.8% of the best possible growth using only 5-day forecasts. This confirms the feasibility of using low-power propulsion and optimal control for enhanced seaweed growth on floating farms under real-world conditions.
§ INTRODUCTION
In recent years, research has shown promising applications of seaweed biomass for climate mitigation. It can be used as human food, as cattle fodder that reduces methane emissions from burping <cit.>, for biofuel and plastic to replace oil <cit.>, and for carbon capture: when the biomass is sunk to the ocean floor, it removes carbon dioxide from the atmosphere <cit.>. To deliver on this promise, production needs to scale by extending seaweed farming from near-shore, labor-intensive practices to more automated solutions utilizing the vast expanse of the open oceans <cit.>.
One solution could be floating, autonomous seaweed farms that roam the oceans while growing seaweed <cit.>. The controller of such floating farms needs to be able to control their position to prevent stranding, colliding with ships, or drifting into nutrient-depleted waters where the seaweed dies. While these farms could be steered with powerful ship engines, the drag force of such farms is enormous, which renders this approach prohibitively expensive. However, in our recent work, we demonstrated that an ASV can navigate reliably by going with the flow, using its minimal propulsion (0.1 m/s) strategically to nudge itself into ocean currents (0-2 m/s) that drift towards its destination <cit.>.
This work has been extended to reduce the risk of stranding <cit.> and to fleets of vessels that navigate while staying connected in a local communication network <cit.>.
In this paper, we investigate the implementation of a low-power steering paradigm in seaweed farming. Rather than focusing on reaching a target area, our objective is to maximize seaweed growth along the trajectory of the farms over extended periods, building upon prior research on optimal deterministic sea farming <cit.>.
From the control perspective, there are four key challenges that we need to tackle. First, the currents that we want to leverage are non-linear and time-varying.
Second, in realistic settings only coarse uncertain forecasts are available to plan on <cit.>. Third, the farm itself is underactuated by which we mean that its propulsion is smaller than the velocity of the surrounding currents, so it cannot easily compensate for forecast errors. Lastly, we want to maximize seaweed growth over multiple weeks/months but forecasts from the leading providers are only 5-10 days long <cit.> and similar to weather forecasts the uncertainty for long-time predictions is very high due to the chaotic system <cit.>. Use the headfigure to explain why long-term reasoning vs greedy is important (i.e. a close medium growth field and a further away high growth field).. In control terms, we are tackling long-term horizon optimization of a state-dependent running cost with an underactuated agent in non-linear time-varying dynamics under uncertainty that increases over time. The long-term dependency of seaweed growth means the objective cannot easily be decomposed into multiple short-term objectives.
§.§ Related Work
Various approaches for time- and energy-optimal planning exist for non-linear, time-varying dynamics like ocean currents <cit.>.
In the context of planning within known currents or flows, researchers have derived HJ reachability equations for exact solutions <cit.>, non-linear programming <cit.>, evolutionary algorithms <cit.>, and graph-based search methods <cit.>. However, non-linear programming, evolutionary algorithms, and graph-based search techniques are prone to discretization errors, and the non-convex nature of the non-linear programming problem can lead to infeasibility or solvers getting stuck in local minima.
In contrast, DP based on the HJ equations can solve the exact continuous-time control problem. Recent research has extended the HJ framework <cit.> to multi-time optimal navigation in underactuated current settings <cit.>.
For managing uncertainty in underactuated systems, exact equations optimize the expectation or a risk function over a stochastic solution of probabilistic ocean flows <cit.>. However, this demands a principled uncertainty distribution for the flows, which is non-Gaussian for ocean currents.
Since we require forecast data and most operational forecast systems produce solely deterministic forecasts, this principled approach is not yet suitable for our operational setting. At the same time, robust control techniques, which aim to maximize the objective even in the face of worst-case bounded disturbances, are not suitable when considering realistic error bounds, as the forecast error often equals or exceeds our propulsion capabilities.
Thus, to accommodate deviations from the optimal trajectory resulting from forecast inaccuracies, frequent replanning while considering predictions in a MPC fashion has been proposed <cit.>.
This can be implemented using non-linear programming or by employing the value function of an HJ closed-loop control scheme, which offers the benefits of being both fast and guaranteed optimal.
An emergent approach <cit.> is to use RL to learn how to best operate despite the uncertainty. The agents operate in similar environments (underactuated dynamics and unknown environmental flows) with the short-term objective of station keeping <cit.> or trajectory tracking <cit.>. These objectives differ significantly from our objective of optimizing a long-term objective such as seaweed growth.
There is only little research in the control field that focuses on maximizing seaweed growth. For example, Bhabra et al. <cit.> maximize seaweed harvesting using autonomous vessels in varied settings. A key distinction to our approach is that the seaweed is grown not on the agent itself, but in discrete seaweed farms, with the objective being to optimize the trajectory and sequence of harvesting rather than continuously maximizing growth. Bhabra et al. <cit.> use a 3D HJ reachability framework in which the harvesting state is augmented into the third dimension. To find the path with maximum growth, they run forward reachability in the state space for seaweed in 3D. This formulation needs to be adapted for closed-loop control and can be computationally demanding due to the added dimension in the HJ calculation.
In order to address the increasing complexity associated with long-time horizons, problems are frequently divided into multiple subproblems using graph-based methods or hierarchical RL <cit.>. These approaches are more appropriate for combinatorial optimization problems, where dividing and conquering in subtasks is effective. However, this is not suitable for our problem involving continuous space and long-time dependencies.
A potential solution to handle growing uncertainty and distribution shifts over time, as well as to balance short-term and future events, is the use of a discount factor. This technique is commonly applied in discrete RL settings <cit.> and continuous-time systems <cit.>.
§.§ Overview of method & contributions
In this paper, we make four main contributions towards controllers that can maximize the growth of floating seaweed farms over long periods.
First, we formulate maximizing seaweed growth as a running cost problem that can be solved with DP in the 2D spatial state of the system (Sec. <ref>). Compared to previous work using HJ reachability in 3D <cit.>, this significantly reduces computational complexity. Additionally, the resulting value function can be used as feedback policy to obtain the growth-optimal control for all states and times. Hence, it can provide control inputs to multiple farms and it can be used to mitigate forecast errors by using it in closed-loop <cit.> which is equivalent to re-planning at every time step.
Second, we propose a method to get a feedback policy for long-term optimal growth beyond the 5-day forecast horizon over which ocean currents are available. For that we estimate the expected growth using historical average currents over a coarse grid and then initialize the DP over the forecast horizon with these values (Fig. <ref>, (Sec. <ref>)).
Third, to account for the growing uncertainty of the ocean current estimates, we introduce a finite-time discounting into the DP PDE
(Sec. <ref>).
Lastly, we are the first to run extensive empirical simulations of floating seaweed farms in realistic current settings in the Pacific Ocean over 30 days. We first investigate how different propulsion of the farms would affect the best achievable seaweed growth based on optimal control with known currents. We then evaluate how close different variations of our feedback policy can get to the best possible growth when daily, 5-day forecasts are available (Sec. <ref>).
In Sec. <ref> we define the problem. Sec. <ref> details the four components of our method. Sec. <ref> contains the closed-loop performance evaluation of our methods and baselines and we conclude with Sec. <ref> and outline future work.
§ PROBLEM STATEMENT
§.§ System Dynamics
We consider a floating seaweed farm with the spatial state x∈ℝ^n, where n=2 for a surface vessel on the ocean. Let the control input of the vessel be denoted by u, taken from a bounded set 𝕌⊂ℝ^n_u where n_u is the dimensionality of the control. Then, the spatial dynamics of the system at time t are governed by an ODE of the following form:
ẋ = f(x,u,t) = v(x,t) + g(x,u,t), t ∈ [0, T]
where the movement of the vessel depends on the drift due to the time-varying, non-linear flow field v(x,t)∈ℝ^n and its control g(x,u,t).
This makes the common assumption that the drift of the agent directly affects its state and neglects any inertial effects. While our method is generally applicable, we focus on settings where the vessel is underactuated, i.e. max_u ‖g(x,u,t)‖_2 ≪ ‖v(x,t)‖_2 most of the time.
We denote the spatial trajectory induced by this ODE by x(·). For a vessel starting at the initial state x_0 at time t_0 with control sequence u(·), we denote the state at time t by x(t)∈ℝ^n. The system dynamics (<ref>) are assumed to be continuous, bounded, and Lipschitz continuous in x, u <cit.>.
Additionally, the farm has a seaweed mass M(t) which evolves according to an exponential growth ODE:
Ṁ = M · NGR(x,t), t ∈ [0, T]
where NGR(x,t) is the growth factor per time unit, e.g. 20%/day, which depends on nutrients, incoming solar radiation, and water temperature at the spatial state x and time t.
§.§ Problem Setting
The objective of the seaweed farm starting from x_0 at t_0 with seaweed mass M(t_0) is to maximize the seaweed mass at the final time T. This implies optimizing the growth over its spatial trajectory x(·):
max_u(·) M(T) = M(t_0) + max_u(·) ∫_t_0^T M(t) · NGR(x(t), t) dt
If the true currents are known, our method (Sec. <ref>) is guaranteed to find the optimal control signal and trajectory.
However, in realistic scenarios only inaccurate, short-term forecasts are available at regular intervals (typically daily). These differ from the true flow by the stochastic forecast error δ(x,t;ω), where ω is a random variable.
Our goal is then to determine a feedback policy π(x,t) that results in a high expected seaweed mass 𝔼[M(T)]. Hence, in our experiments (Sec. <ref>) we also evaluate our method empirically over a set of missions (x_0, t_0) ∼ 𝕄 and a realistic distribution 𝕍 of real and forecasted ocean currents.
§ METHOD
Our method consists of a core that optimizes seaweed growth when the currents are known and three extensions to get a feedback policy π that performs well over long-time horizons when only forecasts are available.
In our core method, we maximize the final seaweed mass when the ocean currents are known by formulating a running cost objective that can efficiently be solved with DP. To tackle situations where only forecasts are available and neither probabilistic nor robust control methods are applicable (see Sec. <ref>) we introduce three extensions. First, we show how the value function we obtain with our core method can be used as feedback control policy π. Applying it closed-loop is equivalent to re-planning at every time step which leads to reliable performance even if the value function was computed with inaccurate forecasts. Second, we show how we can use monthly averages to reason about the expected long-term growth beyond the 5-day forecast horizon. Third, we incorporate the increasing uncertainty in the ocean currents by introducing a finite-time discount factor in our core method.
§.§ Maximizing Seaweed Mass With Known Dynamics
We use continuous-time optimal control where the value function of a trajectory is based on a state- and time-dependent running reward r(x,t) and a terminal reward h(x,T):
J(x,t; u(·)) = ∫_t^T r(x(s),s) ds + h(x(T), T).
Let J^*(x,t) = max_u(·) J(x,t; u(·)) be the optimal value function. Using DP we can derive the corresponding HJ PDE <cit.>:
-∂J^*/∂t = max_u∈𝕌 [ ∇_x J^* · f(x,u,t) + r(x,t) ]
J^*(x,T) = h(x,T).
We can then numerically compute J^* on a spatial mesh by integrating the PDE backwards in time <cit.>.
Next, we need to define the reward and terminal reward to maximize M(T). One approach is to model the seaweed farm with an augmented state x_aug = (x, M)^⊤ ∈ ℝ^3. If we set h=0 and define the reward as r = M · NGR(x,t), the value function is our objective (Eq. <ref>).
However, the computational complexity of solving for J^* scales exponentially with the state dimension. Hence, we want a reward that does not depend on M as an augmented state.
For that, we introduce the variable η = ln(M) with the new dynamics η̇ = Ṁ/M = NGR(x,t). As η(T) is strictly increasing in M(T), the control that maximizes η(T) also maximizes M(T).
We can then reformulate Eq. <ref> in terms of η(T):
max_u(·) η(T) = η(t) + max_u(·) ∫_t^T NGR(x(s),s) ds.
By setting the reward to r(x,t) = NGR(x,t), the optimal value function captures this optimization without requiring M:
J^*(x,t) = max_u(·) ∫_t^T NGR(x(s),s) ds.
We can then solve the HJ PDE for J^* in the 2D spatial state and obtain the optimal control and trajectory that maximize M(T) at lower computational cost. This formulation can be applied more generally to optimize the state of exponential growth or decay ODEs. Note that we can convert the value of J^*(x,t) to the final seaweed mass of the optimal trajectory starting at x, t with mass M(t):
M(T) = M(t) · e^{∫_t^T NGR(x(s),s) ds} = M(t) · e^{J^*(x,t)}.
The value function from our core method above, allows us to compute the optimal control ^*(,) for all , and hence a feedback policy π(,) for the vessel or multiple vessels in the same region <cit.>. This policy is the optimizer of the Hamiltonian (right side Eq. <ref>):
π(,) = _∈𝕌 f(, , ) ·∇_x
= _∈𝕌 g(, , ) ·∇_x
While π is optimal if was computed based on the true currents , it can also be applied for closed-loop control when imperfect forecasts were used to compute the value function .
In that case, an agent at state executing π_(,) will likely find itself at a different state than anticipated as differs from . But the control that would be growth optimal under can again directly be computed with π_(( + Δ), + Δ). Applying π_ closed-loop is hence equivalent to full-time horizon re-planning with at each time step <cit.>. This notion of re-planning at every time step at a low computational cost ensures good performance despite forecast errors. Additionally, can be updated in periodic intervals. In our experiments, we compute once per day upon receiving new forecasts.
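A minimal sketch of extracting this feedback control from a value-function grid is given below; it assumes the holonomic actuation case g(x,u,t)=u with ‖u‖_2 ≤ u_max, for which the Hamiltonian maximizer is simply u_max times the normalized spatial gradient of J^*:

import numpy as np

def optimal_control(J, xs, ys, px, py, u_max):
    """Growth-optimal control at position (px, py) from the value-function grid J."""
    dJdx, dJdy = np.gradient(J, xs, ys)          # central differences on the grid
    ix = int(np.clip(round((px - xs[0]) / (xs[1] - xs[0])), 0, len(xs) - 1))
    iy = int(np.clip(round((py - ys[0]) / (ys[1] - ys[0])), 0, len(ys) - 1))
    gx, gy = dJdx[ix, iy], dJdy[ix, iy]
    norm = np.hypot(gx, gy)
    if norm < 1e-12:
        return 0.0, 0.0                          # flat value function: just drift
    return u_max * gx / norm, u_max * gy / norm

Re-evaluating this at the farm's actual position at every control step yields the closed-loop behavior described above.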
§.§ Reasoning Beyond the Forecast Horizon
As the growth cycles of seaweed typically span weeks to months, our aim is to maximize the seaweed mass at an extended future time T beyond the final time T_FC of the available 5-day forecast. A principled way to reason beyond the planning horizon is to estimate the expected growth our seaweed farm will experience from the state x(T_FC) onward and add this as terminal reward to Eq. <ref>:
J^*_ext(x,t) = J^*_FC(x,t) + 𝔼[ J^*_avg(x(T_FC), T_FC) ]
J^*_FC(x,t) = max_u(·) ∫_t^T_FC NGR(x(s),s) ds
where J^*_FC(x,t) is the growth a vessel starting from x at t will achieve by T_FC and 𝔼[J^*_avg(x(T_FC), T_FC)] estimates the additional growth from T_FC to T. The expectation is taken over the uncertain future ocean currents.
We propose to estimate 𝔼[J^*_avg] by computing a new value function J^*_avg based on monthly average currents for the region. To compute J^*_ext we again solve Eq. <ref> backwards, with terminal condition h(x, T_FC) = J^*_avg(x, T_FC).
§.§ Finite-time Discounting to Mitigate Uncertainty
As the oceans are a chaotic system, the uncertainty of the forecasted ocean currents increases over time. We can incorporate this increasing uncertainty in the value function by using the finite-time discounted optimal control formulation:
J(x,t; u(·)) = ∫_t^T e^{-(s-t)/τ} r(x(s),s) ds + h(x(T), T),
where τ is the discount factor. The smaller τ, the more future rewards are discounted. We derive the corresponding HJ PDE by following the steps in <cit.> and instead of Eq. <ref> we obtain:
∂J^*/∂t = - max_u∈𝕌 [ ∇_x J^* · f(x,u,t) + r(x,t) ] + J^*/τ
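In a discretized backward computation such as the grid-based sketch given earlier, this discounting amounts to a one-line change: the propagated future value is attenuated by e^(-Δt/τ) before the running reward is added. The value of τ below is an illustrative assumption:

tau = 2.0   # illustrative discount time constant, in the same time unit as dt
# inside the backward loop of the earlier value-function sketch, replace
#     J = best + dt * ngr(t)
# by the discounted update
#     J = np.exp(-dt / tau) * best + dt * ngr(t)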
§.§ Summary Control Algorithm Variations
All variations of our method are closed-loop control policies π derived from a value function (Sec. <ref>). The four variations differ only in how the value function is computed. When the true currents are known, we compute J^* (Eq. <ref>) for optimal control. When only forecasts are available, we can calculate J^*_FC for planning horizons up to the end of the forecasts and update it as new forecasts become available (Sec. <ref>). Thirdly, to optimize for growth until a final time T > T_FC we can calculate an extended value function J^*_ext (Sec. <ref>) using the average currents. Lastly, we can discount future rewards with τ (Sec. <ref>) in any of the above value functions. In Algorithm <ref> we detail the discounted, long-term version as it contains all components.
§ EXPERIMENTS
In this section, we empirically evaluate and compare various settings of our control scheme for maneuvering the two-dimensional ASV with holonomic actuation in realistic ocean currents and seaweed growth scenarios.
§.§ Experimental Set-Up
Seaweed Growth Model
Macroalgae growth depends on the algae species, the water temperature, solar irradiance, and dissolved nutrient concentrations, specifically nitrate (NO_3) and phosphate (PO_4) <cit.>. We use the model of the NGR of Wu et al. <cit.> as it models the key growth dependencies without maintaining additional state variables to model the plant-internal nutrient state <cit.>.
We use the parameters of a temperate species from the work of Martins and Marques <cit.> and Zhang et al. <cit.>. In this model, the NGR is determined by the gross growth rate r_growth and the respiration rate r_resp caused by metabolism, resulting in the change of biomass:
Ṁ(t) = M(t) · NGR = M(t) · (r_growth - r_resp).
Fig. <ref> shows the NGR for our region in January 2022.
We assume the seaweed growth model is known by the planner.
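For a concrete sense of how biomass evolves under this model, the exponential ODE can be integrated exactly per step when NGR is treated as piecewise constant; the rates below are purely illustrative and not the parameter values of <cit.>:

import numpy as np

dt = 1.0 / 24.0                                   # one-hour step, in days
# toy day/night cycle of net growth rates (per day): growth by day, respiration at night
ngr_per_hour = np.array([0.05] * 12 + [-0.01] * 12)
mass = 100.0                                      # starting seaweed mass in kg
for ngr in ngr_per_hour:
    mass *= np.exp(ngr * dt)                      # exact update for constant NGR over dt
print(f'mass after one simulated day: {mass:.2f} kg')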
Realistic Ocean Forecast Simulation
We simulate ocean currents and forecasts based on the oceanographic systems of HYCOM <cit.> and Copernicus <cit.>, similar to prior research<cit.>. Each system offers 1) a 5-10 day forecast model with daily updates, and 2) a so-called hindcast model published a few days later with improved accuracy based on additional data assimilation. To simulate realistic operations the forecasted currents received by the planner need to differ from the true currents used for simulation by a forecast error δ that is comparable to the empirical forecast error of the oceanographic systems.
For our experiments, we use Copernicus hindcasts as the true currents and mimic daily 5-day forecasts by giving the planner access to a 5-day sliding time window of HYCOM hindcasts. With this setting, the forecast error δ in our simulation is comparable to the true forecast error, as shown in Fig. <ref>. For further notation, we will denote the HYCOM hindcast data as our forecast source and the Copernicus hindcast data as our hindcast source, i.e. our simulated true currents. Both are available with 1/12th resolution.
To estimate the expected future growth beyond the 5-day forecast horizon (Sec. <ref>) we use coarse seasonal averages of the ocean currents in our region. In particular, we use the monthly average currents of 2021 from Copernicus with 1/6th resolution.
Large Scale Mission Generation
We conduct our experiments in the southeast Pacific due to high nutrient concentrations.
For a large representative set of missions, we initially generated |𝕄| = 1325 starting tuples (x_0, t_0), uniformly distributed in time between January and October 2022 and across the specified region of longitude range -130W to -70W and latitude range -40S to 0S. This allows for varying current distributions. The samples were generated maintaining a distance of 0.5 degrees from land to avoid any instant collisions. It is worth noting that issues such as stranding were not considered in this work since an appropriate safety control scheme is proposed in the parallel work <cit.>. We took the intersection of admissible missions between all controllers to evaluate the results, resulting in 1035 missions for analysis. Admissible missions are defined as those that do not strand and remain within the predefined area, as data is unavailable beyond these bounds.
Each mission starts with a seaweed mass of 100kg. A normal seaweed growth cycle is about 60-90 days but we limit our large-scale experiments to a time period of 30 days per mission to keep the required compute tractable at scale.
Evaluated Controllers
We evaluate a variety of controllers under different configurations, which can be classified according to the following criteria: 1) the data utilized as input for the controller, such as the true currents for planning or forecast data and average data , and 2) the controller's planning horizon, which may span the entire 30-day period or a shorter greedy 5-day interval with periodic re-planning. Additionally, we examine controllers employing a discounted value function, as outlined in Sec. <ref>. Moreover, we compare those controllers against the scenario where seaweed farms float freely in the water without any actuation. A comprehensive overview of the configurations for each evaluated controller is provided in Tab. <ref>.
For all long-term (30-day horizon) controllers, we compute the terminal reward term of over the complete predefined area (60x40) on a coarse grid (1/6 resolution). The running cost term of is only computed on a smaller grid around the current farm's position (10 window around) but with with a higher resolution (1/12).
Evaluation Metrics
Our primary objective is to maximize seaweed mass, so we evaluate the results based on the absolute seaweed mass at the end of each mission and the relative improvement in accumulated seaweed mass across different controllers. For relative improvement, we normalize the values within each mission, allowing us to gauge the extent to which a specific controller surpasses a baseline for that mission. This is especially important as the starting position of a mission is a major indicator of achievable growth as illustrated in Fig. <ref>.
Finally, we present the average relative improvements across all missions. We consider either the floating system without actuation or our control method planning on the true currents .
§.§ Experimental Results
We investigate our controller's performance under various maximum thrust limitations (u_max) in two scenarios: 1) the controller receives the true currents as planning input, representing the best-case performance given the selected maximum propulsion; 2) the controller receives daily 5-day forecasts and the seasonal averages, approximating real-world conditions. Moreover, we assess the performance for a floating farm where u_max=0 m/s.
Fig. <ref> and Tab. <ref> compare seaweed mass distributions at the end of each mission under different settings. The average seaweed growth scales linearly with the farm's maximum thrust. Controllers planning on forecasts exhibit slightly inferior performance due to prediction errors compared to those using true currents. The performance shift in the upper quartile is more pronounced for lower u_max values, possibly because higher u_max values better compensate for forecast errors. Nonetheless, with minimum propulsion of u_max=0.1m/s, our controller performs 9.6% better on average than a freely floating farm.
The starting location of a mission significantly influences 30-day growth, as shown in Fig. <ref>. High-growth missions are situated in the east and south of our region, aligning with nutrient-rich areas also visible in Fig. <ref>.
Higher propulsion in real-world applications may be economically infeasible due to the cubic increase in energy consumption with u_max. Consequently, we set u_max to 0.1m/s for subsequent investigations. We evaluate the performance of two greedy controllers under this setting and various configurations of our long-term controllers.
We aim to increase the performance of the long-term controller operating on forecasts and to match the best growth achievable with true currents . To this end, we employ the discount formulation proposed in Sec. <ref> for two settings (<ref>) to account for increasing uncertainty over time.
As illustrated in Tab. <ref>, both the greedy and long-term controllers outperform the floating scenario. The performance of our greedy controller, planning over a 5-day horizon, closely matches that of the long-term controllers. Using the discounted control scheme slightly improves the long-term controller, yielding the best overall performance.
Fig. <ref> evaluates the floating system, the 5-day greedy controller, and the long-term controller without discount (two settings: planning on the forecast alone and on the forecast with the average-current terminal reward) in a 60-day scenario. The greedy controller aims for the nearest growth region, while the long-term controller balances short-term losses against long-term gains, as demonstrated in the sub-figure of Fig. <ref> depicting seaweed mass over time. The zig-zag shape of the lines is due to nocturnal respiration. The greedy controller is driven out of the region, leading to mission termination and an earlier trajectory end.
§ DISCUSSION
We conducted experiments over a 30-day time horizon to facilitate large-scale testing. In this context, all our controllers substantially outperform the non-actuated floating system. We observed that the performance of our short-term optimizing controller is nearly on par with our long-term controllers. We attribute this to several factors: 1) the nutrient map, and consequently, the growth map, exhibits a smooth gradient that simplifies convergence toward a global optimum, 2) a 30-day time horizon may not adequately capture the short-term and long-term trade-offs, and 3) for higher maximum thrust we would expect to see a higher performance divergence since more distance can be captured but we only evaluated greedy controllers with u_max=0.1.
Since growth cycles typically span 60-90 days, long-term planning is crucial. In such scenarios, the myopic behavior of greedy policies not only leads them to navigate toward low-growth regions in the vicinity but also fails to account for being pushed out of optimal growth regions, as demonstrated in Example <ref> over 60 days. For our large-scale evaluation, we only considered missions that remained within the predefined region. As shown in the 60-day Example <ref>, this often occurs with greedy controllers or the floating case. Consequently, we would likely observe a more favorable relative performance of long-term controllers if we accounted for the filtered missions. We noticed a high variance in seaweed mass across all controllers, which can be attributed to our vessels' inability to reach optimal growth regions within 30 days for many missions. We anticipate that for longer time horizons and increasing maximum thrust, the long-term controller would converge toward the global maximum in the southeast of our region, leading to reduced variance in seaweed mass. Hence, in future work, we want to investigate performance for longer time horizons.
§ CONCLUSION AND FUTURE WORK
In this work, we addressed the challenge of maximizing seaweed growth in underactuated autonomous ocean farms by proposing a growth-maximizing DP approach. This method solves a running cost problem in the 2D spatial state of the system and generates a value function, which serves as a feedback policy for growth-optimal control across all states and times. This policy can control multiple farms and mitigate forecast errors through closed-loop implementation, equivalent to re-planning at every time step. We extended our method for long-term optimal growth beyond the 5-day forecast horizon by estimating expected growth using seasonal average currents and initializing the DP with these values. To account for increasing uncertainty in ocean currents, we introduced finite-time discounting into the DP PDE.
Our extensive empirical evaluation, based on realistic Pacific Ocean current scenarios over 30 days, demonstrated that our approach using only 5-day forecasts and limited propulsion (u_max=0.1) achieved 95.8% of the best possible growth and 9.6% more growth than freely floating. This confirms the feasibility of low-power propulsion and optimal control for enhancing seaweed growth on floating farms under real-world conditions. We further demonstrate that long-term planning gets even more important for time horizons over 30 days.
Future work offers multiple avenues for improvement. One possibility is to learn the terminal reward by employing approximate value iteration <cit.> or a value network, as proposed by Silver et al. <cit.>. This approach could implicitly learn the distribution shift between and ; however, it may require intensive computation for training due to the necessity of i.i.d. samples, which could limit the number of samples taken per mission to just one <cit.>. Another direction is to make the discount factor state-dependent based on the uncertainty of current predictions, which could be estimated historically or using ensemble models <cit.>. Lastly, we plan to conduct field tests with multiple autonomous surface vehicles to further validate our method's practicality in real-world ocean environments.
IEEEtran
|
http://arxiv.org/abs/2307.02708v1 | 20230706011037 | Low-Dose TOF-PET Based on Surface Electron Production in Dielectric Laminar MCPs | [
"Kepler Domurat-Sousa",
"Cameron Poe",
"Henry J. Frisch",
"Bernhard W. Adams",
"Camden Ertley",
"Neal Sullivan"
] | physics.med-ph | [
"physics.med-ph",
"hep-ex"
] |
Version v6b (arXiv pre-print)
August 1, 2023
Low-Dose TOF-PET Based on Surface Electron Production in Dielectric Laminar MCPs
Kepler Domurat-Sousa, Cameron Poe, Henry J. Frisch
Enrico Fermi Institute, University of Chicago
Bernhard W. Adams
Quantum Optics Applied Research
Camden Ertley
SouthWest Research Institute
Neal Sullivan
Angstrom Research, Inc
To be submitted to Nuclear Instruments and Methods
We present simulations of whole-body low-dose TOF-PET based on the direct surface production by 511 keV gamma rays of energetic electrons via the Photo-electric and Compton Effects, eliminating the scintillator
and photodetector sub-systems in PET scanners. In a companion paper <cit.> we have described
MCPs constructed from thin dielectric laminae containing heavy nuclei such as lead or tungsten (LMCP).
The laminae surfaces are micro-patterned to form channels, which are then functionalized to support
secondary electron emission in the manner of conventional MCPs.
We have simulated direct conversion using modifications to the TOPAS Geant4-based tool kit.
A 20 × 20 × 2.54 cm^3 LMCP, composed of 150-micron thick lead-glass laminae, is predicted to have a ≥ 30% conversion efficiency to a primary electron that penetrates an interior wall of a microchannel. The subsequent secondary electron shower is largely confined to one channel and can provide high space and time resolutions.
In whole-body PET scanners the technique eliminates the scintillator and photodetector
subsystems. The consequent absence of a photocathode allows assembly of large arrays at
atmospheric pressure and less stringent vacuum requirements, including
use of pumped and cycled systems.
TOPAS simulations of the Derenzo and XCAT-brain phantoms are presented
with dose reductions of factors of 100 and 1000 from a literature
benchmark. New applications of PET at a significantly lower radiation
dose include routine screening for early detection of pathologies, the
use in diagnostics in previously unserved patient populations such as
children, and a larger installed facility base in rural and
under-served populations, where simpler gamma detectors and lower
radiation doses may enable small low-cost portable PET scanners.
§ INTRODUCTION
Positron-Emission Tomography (PET) uses radioactive positron-emitting
tracers to locate areas of high biological activity such as tumors and
hair-line fractures of bones. It complements other modalities that
identify morphologies, and is often used in conjunction with CT or MRI.
In addition PET is used on small animals for development of
pharmaceuticals and treatments.
In the last decade detectors and techniques for Time-of-Flight
Positron-Emission Tomography (TOF-PET) have substantially grown in
sophistication <cit.>.
Among other innovations, high-precision whole-body scanners have been
built and
characterized <cit.>;
TOF-PET with sub-nanosecond coincidence has
recently been developed <cit.>; an international competition to develop sub-10
ps TOF resolution <cit.> is
now in place; and timing with resolutions of 10 ps or below using Cherenkov light in
pre-radiators <cit.> is being
developed by Cherry et al. for higher spatial resolutions and lower
doses <cit.>.
Recently, alternative methods have been
proposed <cit.>
with the goal of achieving resolutions set by the underlying physics
processes rather than by the detector
segmentation <cit.>. The technique, which
like the conventional technique uses the conversion of the gamma rays
in a scintillator followed by photo-detection, is to exploit Compton
Scattering of the gamma rays in low atomic number scintillating media.
Successive Compton scatters are constrained by the two-body Compton
kinematics, allowing precisely locating the first scatter in a large
fraction of events <cit.>.
With similar motivation, here we have adapted the TOPAS Geant4-based
framework <cit.> to study
direct surface conversion of gamma rays to electrons via the Compton
and Photoelectric effects in MCPs constructed from thin micro-patterned
laminae containing heavy nuclei such as lead or
tungsten <cit.>. The direct conversion technique
eliminates the scintillator and photodetector subsystems in TOF-PET
scanners, converting the gamma ray to an electron shower inside a
MCP-based planar vacuum tube, the High-resolution Gamma Multiplier Tube
(HGMT). In addition to the savings in cost, complexity, and bulk from
not using heavy crystals and photodetectors, the absence of a
photocathode allows assembly of large arrays at atmospheric pressure
and much relaxed vacuum requirements, including use of pumped and
cycled systems.
The organization of the paper is as follows. Section <ref>
introduces the HGMT and the specific implementation of a
Laminar MCP (LMCP) on which it is based.
For TOF-PET the LMCP is configured to provide both the appropriate
substrate containing heavy nuclei for surface direct conversion of the
gamma ray to an electron and a configuration of functionalized
micro-patterned channels to supply the multiplication of the resulting
primary electron in an LMCP channel. Section <ref>
presents gamma ray conversion efficiencies and resolutions from TOPAS
simulations of the LMCP.
Section <ref> presents images
from simulations of the Derenzo <cit.> and
XCAT-brain <cit.> phantoms in a whole-body HGMT TOF-PET
detector at reduced dose. Section <ref>
summarizes the results of this first software study and recommends
starting to build and test LMCP/HGMT prototypes. Appendix A discusses
future studies of time resolution that are beyond the current scope.
§ THE HIGH-RESOLUTION GAMMA MULTIPLIER TUBE (HGMT)
The HGMT is a large-area (≥ 100 cm^2), high-gain (10^6-5× 10^7), low-noise MCP-based electron multiplier vacuum
tube, designed to provide correlated high-resolution space/time
measurements of gamma rays via the technique of surface direct
conversion <cit.>.
Figure <ref> shows a sketch of a
detector assembly of two LMCPs formed with laminae that form structured
electron-multiplier channels <cit.>. Gamma rays are
incident from above, and may interact in either LMCP. The open-area
ratio (OAR) is by design small to maximize the area presented to gamma
rays for direct conversion to electrons. The path length of the gamma
rays in the substrate material depends on the OAR, the thickness and
structure of the laminae, and the incident angles of the gamma from the
normal. The laminae are shown with tabs to space the MCPs and to
provide mechanical support between the top and bottom of the hermetic
package.
The LMCP assembly is followed by an application-specific high-bandwidth
anode with sub-mm resolution for
readout <cit.>. Multiple
LMCP/anode assemblies can be stacked in a common vacuum vessel.
§.§ The Laminar Micro-Channel Plate (LMCP)
Figure <ref> shows an example of an MCP body
(`slab') intended for gamma ray detection made from a heavy-metal
dielectric such as lead-glass <cit.>. The bulk laminae,
which represent the largest fraction of the area of the slab, serve to
convert the incoming gamma ray to an electron.
§.§ Anode Configurations
The anode records the position and time of the arrival of the electron
shower after amplification. There are a number of options for the anode
depending on the application. For low-rate environments such as
low-dose TOF-PET[We note that pile-up of gamma rays from
multiple e^+e^- annihilations has a quadratic dependence on rate; a
dose reduction of a factor of 10^3 reduces pile-up by a factor of
10^6.] there is extensive experience with anodes with 50-ohm
striplines that give sub-mm resolution in both transverse
directions <cit.>. For
high-rate environments, such as at a hadron collider, arrays of 2
dimensional pads patterned to enhance charge sharing provide sub-mm
resolution for 2.54 cm-square pads <cit.>. For
decoupling the anode design from the tube design, as would be
economical for mass production of single HGMT modules for different
uses, an option is capacitive-coupling through an adequately resistive
bottom plate for both stripline and pad anodes <cit.>.
Additionally, anodes can be made from solid-state devices.
§.§ Vacuum and Hermetic Packaging
For gamma rays the tube body can be metal as well as the conventional
glass or ceramic. The package may be non-rectangular or non-planar to
fit non-standard shaped LMCPs <cit.>. Appropriately
spaced tabs on the perimeter of the laminae can provide support against
atmospheric pressure from top to bottom of the tube. As the HGMT has no
photocathode, vacuum sealing can be done at atmospheric pressure, with
a less demanding (higher) target operating pressure. Sealing with O-rings, active pumping, and cycling to
atmospheric pressure for maintenance or transport become options.
Large systems of HGMTs may be installed in a single vessel such as a
cylindrical vessel with an open bore for a PET subsystem, or a large
planar vacuum vessel for a photon/electron pre-sampler in a particle
physics experiment. For some applications, such as in a large particle
collider experiment or inside the magnet bore in a multi-modality PET
detector, the HGMT thin aspect ratio saves expensive real-estate over
crystal-based gamma ray detection systems.
Figure <ref> shows plan and elevations views of a 5-by-5
array of HGMTs sharing a common planar vacuum package, as would be
appropriate for a shower-max detector in a kaon experiment or at high rapidity in
a collider experiment. Common packaging will enable a higher
packing fraction and economies of shared subsystems. Large systems can
be continuously pumped rather than sealed, and can be brought up to
atmospheric pressure for maintenance or modification.
§.§ Electronics and Readout Systems
The HGMT can share front-end multi-GHz electronics such as the PSEC4 system <cit.>.
In TOF-PET, with the identification of the two gamma rays, full event
reconstruction can be done in real time. Multiple buffering, local
intelligence, and reduced dose will facilitate deadtimeless operation
for faster data acquisition.
§ SIMULATION RESULTS: CONVERSION EFFICIENCIES AND SPACE AND TIME RESOLUTIONS
§.§ Gamma Conversion Efficiency
Figure <ref> displays the efficiency for MCP thicknesses of 1 cm and 2.54 cm (1 inch)
found in the TOPAS simulation for direct conversion of a 511 keV gamma ray versus incident angle from the normal to the lamina. The efficiency includes the creation of a primary electron that enters a channel by crossing a functionalized channel-defining wall.
§.§ Spatial Resolution
Figure <ref> shows the
simulated spatial distributions in the `short' and `long' dimensions
of charge generated by a single secondary electron created on the
`short'-dimension wall, 7.5 mm from the channel exit. The channel has a
uniform profile with transverse dimensions of 50 μm by 2.5
mm.[We have chosen an atypically large value for one dimension
to explore the effects. The size of channels in an HGMT will most
likely be in the tens of microns or less in both dimensions.] The TOPAS
simulation of secondary emission and multiplication is initiated at the
point of a primary electron on the wall of the channel (left-hand
surface in the Figure). The hatched regions represent the spacers
between neighboring channels.
§.§ Time Resolution
The simulation of the time
resolution of an HGMT depends on the choice of many parameters of the
HGMT construction, including details of the materials and shapes of the
channel-forming surfaces, and consequently is beyond the scope of our
simulations. Data from a physical LMCP are essential in narrowing the
options towards high-resolution and robustness. In consequence, in the
simulations of imaging presented in
Section <ref> we have used a
parametric approach, setting the time resolution in the simulation to
100 ps independent of the position of the gamma ray conversion in the
channel. Appendix A presents images simulated at different resolutions, including
one using no TOF information, and discusses possible future strategies for lowering
the spread in times due to the variation in conversion
point to below 100 ps.
§ SIMULATION OF A WHOLE-BODY HGMT TOF-PET SCANNER
§.§ Whole-Body Scanner Configuration
The left-hand panel of Figure <ref> shows a
representative whole-body TOF-PET scanner made with curved <cit.> HGMT modules. The
scanner benefits from the absence of a layer of scintillator to convert
the gamma rays to optical photons, and the absence of the corresponding
photodetector system with photocathodes to convert the optical photons
to electrons. The right-hand panel shows the XCAT graphics rendition of
the whole-body detector and XCAT phantom used in the TOF-PET
simulation. The scanner is 200 cm long and has a bore radius of 45 cm.
As the required radial distance for detection is less, the HGMT
facilitates integration into multi-modality systems such as PET/MRI and
PET/CT. The absence of the scintillator and photodetector systems also
substantially reduces complexity.
§.§ Simulation of the Derenzo Phantom at Reduced Doses
Figure <ref> shows reconstructed images of the
Derenzo phantom <cit.> at a dose of 150 Bq/mL for the
rods and 50 Bq/mL for the background, a factor of 100 lower than a
benchmark dose at an estimated scan time of 10 minutes <cit.>. The thickness of
the converter LMCP is 1 cm in the left-hand image, and 2.54 cm (1 inch)
in the right-hand image. The timing resolution was taken as 100 ps
(FWHM); the spatial resolution in the plane of the LMCP at the channel
exit was conservatively set to 1 mm in both the `long' and `short'
dimensions.
Figure <ref> shows similar reconstructed
images of the Derenzo phantom at a dose of 15 Bq/mL for the rods
and 5 Bq/mL for the background, a factor of 1000 lower than the
benchmark dose. The thickness of the converter LMCP is 1 cm in the
left-hand image, and 2.54 cm (1 inch) in the right-hand image.
§.§ Simulation of the XCAT Brain Phantom and 2 cm-Diameter Lesion at Reduced Doses
The XCAT brain phantom <cit.> was also simulated at
reduced doses. We take as a benchmark a dose of 8.25 kBq/mL for white
matter, 33 kBq/mL for gray matter, and 99 kBq/mL for the spherical
lesion <cit.> for an estimated 10-minute scan.
Figure <ref> shows reconstructed images of
the XCAT brain phantom with a 2 cm lesion at a dose reduced by a factor
of 100 from the benchmark: 82.5 Bq/mL for white matter, 330 Bq/mL for
gray matter, and 990 Bq/mL for the lesion. The left-hand panel images
are reconstructed from direct conversion in a lead-glass laminated LMCP
of thickness 1 cm. The right-hand panel is for a thickness of the
lead-glass laminated LMCP of 2.54 cm (1 inch).
Figure <ref> shows reconstructed images of the
XCAT brain phantom with a 2 cm lesion at a dose reduced by a factor of
1000 from the benchmark: 8.25 Bq/mL for white matter, 33 Bq/mL for gray
matter, and 99 Bq/mL for the lesion. The left-hand panel images are
reconstructed from direct conversion in a lead-glass laminated LMCP of
thickness 1 cm. The right-hand panel is for a thickness of the lead-glass
laminated LMCP of 2.54 cm (1 inch).
Figure <ref> shows reconstructed images of
the XCAT brain phantom with a 2 cm-diameter lesion at a dose reduced by a factor
of 10,000 from the benchmark: 0.8 Bq/mL for white matter, 3.3 Bq/mL for
gray matter, and 9.9 Bq/mL for the lesion. The left-hand panel images
are reconstructed from direct conversion in a lead-glass laminated LMCP
of thickness 1 cm. The right-hand panel is for a thickness of the
lead-glass laminated LMCP of 2.54 cm (1 inch). While the image is not good
enough for detailed diagnosis, it may be enough to suggest a
follow-up scan at a higher dose, and may inform strategies for regular
screening of appropriate populations, such as selective annual exams
for breast cancer.
§ PORTABLE AND ANIMAL TOF-PET SCANNERS
A significantly lower dose may allow the use of portable TOF-PET scanners for applications and
facilities for which PET is not currently possible or economical. Examples are hair-line fractures, for which the current standard of X-rays has a significant rate of non-detection <cit.>, but for which PET has high sensitivity. Figure <ref> shows a sketch of a simple two-module portable scanner on a portable cart that can be adjusted in height and aperture to accommodate legs and arms, for example. Because the HGMT does not have a photocathode, it does not need ultra-high vacuum (UHV), and can be pumped with a small vacuum pump located on the cart. The system can be valved off, transported, and restarted at a new location.
Figure <ref> shows a representative PET scanner for
small animals. The large area of the HGMT may allow coverage of much of
the solid angle with only two HGMTs. An array of multiple HGMTs covering four or six sides can provide coverage for larger animals.
§ SUMMARY AND CONCLUSIONS
We have adapted the TOPAS Geant4-based tool kit to simulate surface direct conversion in a
Laminated Micro-Channel Plate (LMCP) constructed from thin lead-glass laminae 150 micron-thick <cit.>. An LMCP 2.54 cm-deep is predicted to have a ≥ 30% conversion efficiency to a primary electron that penetrates an interior wall of a microchannel. We present space and time resolutions from the subsequent secondary electron shower.
Images from initial simulations of whole-body HGMT TOF-PET scanners at doses reduced from literature benchmarks by factors of 100 and 1000 are presented.
In whole-body PET scanners the technique eliminates the scintillator and photodetector
subsystems. In addition, the absence of a photocathode eliminates many aspects of UHV construction, as it
allows assembly of large arrays at atmospheric pressure with less stringent vacuum requirements, including
use of pumped and cycled systems.
TOPAS simulations of the Derenzo and XCAT-brain phantoms are presented with dose reductions of factors of
100 and 1000 from literature benchmarks. Benefits of such reduction would include
routine screening for early tumor detection, use of PET for pediatric diagnostics, and a larger
installed facility base in rural and under-served populations.
Application-specific implementations of the surface direct production
technique employed in the HGMT are also candidates for large-area
arrays for use in detectors across a wide range of fields in physics.
In conclusion, initial TOPAS Geant4-based simulation studies of
whole-body TOF-PET using direct conversion of the gamma rays to
electrons via the Photoelectric and Compton Effects indicate the
possibility of useful imaging at substantially lower radiation doses.
The LMCP technique of laminated construction of micro-channel plates,
in this case with at least part of each lamina consisting of a material
with heavy nuclei such as those of lead or tungsten, would allow access
to many operational parameters for detector optimization. We hope
others interested in making the unique capabilities of
Positron-Emission Tomography widely and routinely available will join
us in applying resources to building and testing hardware.
§ APPENDIX A: TIME UNCERTAINTY FROM THE VARYING CONVERSION POINT
There is substantial experience with measuring time resolutions
of traditional MCP-PMTs <cit.>,
and there is a wealth of measurements since Ohshima measured a 5 ps
(sigma) resolution for charged particles in
2006 <cit.>. The HGMT is unique in that it has a further
contribution to the time resolution from the variable conversion point in the LMCP.
The left hand panel in Figure <ref>
shows a simulated pulse shape
from a single secondary electron 7.5 mm from the exit end of the channel
and centered on one of the short sides. The right-hand panel
illustrates the first-one-in method by presenting simulations of showers
started from a different multiplicity of secondary electrons.
Poisson statistics predicts that a higher multiplicity of initial
secondary electrons improves the time resolution as the probability of
seeing no electrons in the initial time interval falls exponentially with
the length of the interval. In the simulation of Figure <ref>,
electrons are started one-at-a-time 7.5 mm from the exit of the
channel, centered in the short direction between the two walls. Any
electron exiting the plane enters the amplifying LMCP(s) below,
followed by an anode.
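The scaling behind the first-one-in argument can be illustrated with a toy Monte Carlo that is independent of the TOPAS shower simulation; the exponential single-electron arrival-time model and the 50 ps scale below are assumptions made purely for illustration, not HGMT parameters.

```python
# Toy Monte Carlo for the first-one-in argument: with N initial secondary
# electrons, the spread of the earliest arrival time shrinks roughly as 1/N
# when the single-electron arrival time has an exponential tail.  The 50 ps
# single-electron scale is an illustrative assumption only.
import numpy as np

rng = np.random.default_rng(0)
tau_single_ps = 50.0                      # assumed single-electron spread (ps)
n_trials = 100_000

for n_secondaries in (1, 2, 4, 8, 16):
    arrivals = rng.exponential(tau_single_ps, size=(n_trials, n_secondaries))
    first = arrivals.min(axis=1)          # time of the first electron to arrive
    print(f"N = {n_secondaries:2d}: sigma(first arrival) = {first.std():5.1f} ps")
```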
The contribution to the time resolution from the secondary shower in
the limit of large first-strike secondary emission, shown in
Figure <ref>, is well below 50 ps. The
resolution will consequently be dominated by the varying distance of
the start of the shower to the channel exit.
The dependence on the time-of-flight resolution is illustrated in
Figure <ref>, which shows
the simulated image of the XCAT brain at a dose reduced by a factor of
1000 for four different time resolutions (FWHM): 50, 100, 200 ps, and 10 ns, i.e., a
resolution so much coarser than the transit time across the scanner that it corresponds to using no TOF information.
One straightforward strategy to bring the time resolution below 100 ps is to stack multiple HGMT internal
modules, each consisting of a converter LMCP made with lead-glass
followed by one or more amplification sections made from B33 glass or
equivalent and an anode. A stack of these
HGMT sub-modules with a total conversion path length of several cm
will be less expensive and less bulky than conventional crystal
systems.[Strip-line anodes are constructed from inexpensive two-layer printed circuit boards,
and 150 electronics channels can cover a square meter. The sub-modules share a
common hermetic package and High Voltage distribution.]
Figure <ref> shows the time of
first arrival versus distance from the end of the LMCP channel from the
simulation. Each of the multiple LMCP/anode assemblies would provide a
conversion path length appropriate for the desired time resolution.
Two more-involved strategies are examples that can be explored at the appropriate
time in a program of measurements of actual devices. The first is using
the correlation of pulse height with conversion point in the LMCP. A
channel design with discrete strike points spaced at small intervals,
for example 1 mm, with unobstructed drift paths between them, will
produce some measure of discrete gain versus height from the channel
exit. A second path to explore that illustrates the flexibility of the
LMCP construction technique is to incorporate inductive and/or
capacitive pickups at intervals along the channels before
assembly. For a more speculative example, a two-loop antenna on the
surface of a lamina outside of the channel, with the two loops configured to give the same
sign contribution from the pulse, can be constructed with successive
metal, resistive, and non-conducting layers. The pattern of signals
from loops along the channel would locate the start of the secondary
shower.
§ ACKNOWLEDGMENTS
We thank Joseph Perl and Paul Segars for the exemplary development of TOPAS and XCAT and for their
remarkable user support. We are indebted to Mary Heintz for essential
computational system development and advice, and to Justin Gurvitch and Richmond Yeung for
crucial graphics contributions.
MGM_NIM_paper K. Domurat-Sousa, C. Poe, Henry J. Frisch, B. W.
Adams, C. Ertley, Neal Sullivan;
Surface Direct Conversion of 511 keV Gamma Rays in Large-Area
Laminated Multichannel-Plate Electron Multipliers; submitted to
Nuclear Instruments and Methods;
arXiv/hep-ex; 2306.11701
Vandenberghe_Moskal_Karp_review_2020
S. Vandenberghe, P. Moskal, J. S. Karp;
State of the art in total body PET
EJNMMI Phys. 2020 May 25;7(1):35.
doi: 10.1186/s40658-020-00290-2.
Vaquero_Kinehan_review_2015 J. J. Vaquero and P. Kinahan; Positron
Emission Tomography: Current Challenges and Opportunities for
Technological Advances in Clinical and Preclinical Imaging Systems
Annual Review of Biomedical Engineering Volume 17, 385; (2015)
Phelps_Cherry_Dahlbom_book_2006 M. E. Phelps, S. R. Cherry, and M.
Dahlbom; PET: Physics, instrumentation, and scanners;
Springer New York (2006); doi.org/10.1007/0-387-34946-4
Vandenberghe_Moskal_Karp_2020_Whole_Body_PET_2020
S. Vandenberghe, P. Moskal, J.S. Karp; State of the art in total
body PET;
EJNMMI Phys. 2020 May 25;7(1):35.
Cherry_Explorer_scattering_2019
R. D. Badawi, H. Shi, and S. R. Cherry et al.
First Human Imaging Studies with the EXPLORER Total-Body PET Scanner;
J Nucl Med. 2019 Mar; 60(3): 299-303. doi: 10.2967/jnumed.119.226498
Lee_Levin_100ps_2021 M.S. Lee, J. Cates, A. Gonzalez-Montoro, and C. Levin;
High-resolution time-of-flight PET detector with 100 ps coincidence time resolution
using a side-coupled phoswich configuration;
Phys. Med. Biol. in press: https://doi.org/10.1088/1361-6560/ac01b5 (2021)
LeCoq_2019_case P. Lecoq, C. Morel and J. Prior;
Case for setting up a 10ps challenge: A step toward reconstruction-less TOF-PET;
Nuovo Cim. C 43 (2020) no.1, 2 doi:10.1393/ncc/i2020-20002-y
LeCoq_2020_10ps_challenge P. LeCoq et al.; Roadmap toward the
10 ps time-of-flight PET challenge; Physics in Medicine and Biology, Vol.
65, Number 21, Oct. 2020
Credo T. Credo, H. Frisch, H.
Sanders, R. Schroll, and F. Tang;
Picosecond Time-of-Flight Measurement for Colliders Using
Cherenkov Light
Proceedings of the IEEE, Rome, Italy, Oct. 2004; Nuclear Science
Symposium Conference Record, 2004 IEEE, Vol. 1.
Ohshima K. Inami, N. Kishimoto, Y. Enari, M. Nagamine, and T.
Ohshima; A 5-ps Tof-counter
with an MCP-PMT; Nucl. Instr. Meth. A560, p.303, 2006
Anatoly_TestBeam_2010
A. Ronzhin et al.; Development of a 10 ps level time of flight
system in the Fermilab Test beam facility; Nucl. Instr. Meth.
A623,931(2010).
Cherry_Hamamatsu_2021 R. Ota, S. I. Kwon, E. Berg, F. Hashimoto, K. Nakajima, I.
Ogawa, Y. Tamagawa, T. Omura, T. Hasegawa, S. R. Cherry;
Direct positron emission imaging: ultra-fast timing enables reconstruction-free imaging
https://arxiv.org/ftp/arxiv/papers/2105/2105.05805.pdf
Eric_CPAD_talk E. Spieglan;
Using Switchable Fluorescent Molecules to Image Tracks and Measure Energy in Large Liquid Double Beta Decay Detectors; CPAD 2019;
https://agenda.hep.wisc.edu/event/1391/timetable/#20191209.detailed
PET_2021_NIM_paper J.F. Shida, E. Spieglan, B.W. Adams, E. Angelico,
K. Domurat-Sousa, A. Elagin, H. J. Frisch, P. La Rivière, A. H.
Squires; Ionization-activated Multi-State Low-Z Detector Media
Nucl. Inst. and Meth. A; Vol. 1017; Nov. 2021;
PET_2023_TMI_paper K. Domurat-Sousa, C. M. Poe, M. S. McDaniel, E. Spieglan, J. F. Shida,
E. Angelico, B. W. Adams, P. J. L. Riviere, H. J. Frisch, A. H.
Squires;
Simulation of a low-Z-medium detector for low-dose high-resolution TOF-PET
Submitted to IEEE Transactions on Medical Imaging (TMI), May 2023;
arXiv preprint (2023) https://arxiv.org/abs/2305.07173
Allison_JLAB_talk A. H. Squires, Detecting Compton Scatters in Liquid Media for
Low-Dose High-Resolution TOF-PET; DOE-NIH Workshop Advancing Medical Care through Discovery in the Physical Sciences: Radiation Detection March 16, 2023; Jefferson National Accelerator Facility
Moses_fundamental_limits W. W. Moses;
Fundamental Limits of Spatial Resolution in PET;
Nucl Instrum Methods Phys Res A. 2011 Aug
21;648 Supplement 1:S236-S240. doi: 10.1016/j.nima.2010.11.092.
TOPAS_Methods_paper K. Domurat-Sousa, C. Poe;
Methods for Simulating TOF-PET in TOPAS Using a Low-Z Medium;
Submitted to Nuclear Instruments and Methods, June 2023;
arXiv: https://arxiv.org/abs/2306.10192
TOPAS
B. Faddegon, J. Ramos-Mendez, J. Schuemann, J. Shin, J. Perl, H.
Paganetti
The TOPAS tool for particle simulation, a Monte Carlo simulation
tool for physics, biology and clinical research
European Journal of Medical Physics; Volume 72, P114-121, April
(2020); DOI:https://doi.org/10.1016/j.ejmp.2020.03.019
TOPAS_user_support The TOPAS home page is https://sites.google.com/a/topasmc.org/home/home.
Complete user documentation can be found at:
https://topas.readthedocs.io/en/latest/getting-started/intro.html
The TOPAS user forum is also available to TOPAS license holders.
The low-energy packages Penelope and Option 4 are further described in
https://geant4.web.cern.ch/node/1731
Derenzo_phantom S. E. Derenzo; Monte Carlo simulations of
time-of-flight PET with double-ended readout: calibration,
coincidence resolving times and statistical lower bounds.
Phys Med Biol. 2017 May 7;62(9):3828-3858.
XCAT2010paper W. P. Segars, G. Sturgeon, S. Mendonca, J. Grimes, B. M. Tsui;
4D XCAT phantom for multimodality imaging research
Med Phys. 2010 Sep;37(9):4902; doi: 10.1118/1.3480985.
Tang_Naxos F. Tang, C. Ertley, J.-F. Genat, J. Anderson,
K. Byrum, G. Drake, E. May, and G. Sellberg Transmission-Line
Readout with Good Time and Space Resolutions for Planacon
MCP-PMTs, in Topical Workshop on Electronics for Particle
Physics, CERN, pp. 579-583, 2008
anode_paper H. Grabas, R. Obaid, E. Oberla, H.
Frisch J.-F. Genat, R. Northrop, F. Tang, D. McGinnis, B. Adams,
and M. Wetstein RF Strip-line Anodes for Psec Large-area
MCP-based
Photodetectors, Nucl. Instr. Meth. A71, pp124-131, May 2013
patterned_anode_paper J. Park, F. Wu, E. Angelico,
H. J. Frisch, and E. Spieglan;
Patterned anodes with sub-millimeter spatial resolution for large-area MCP-based
photodetector systems;
Nuclear Inst. and Methods in Physics Research, A 985 (2021) 164702; 22
Sept, 2020
MGM_patent K. Domurat-Sousa, C. Ertley, H. J. Frisch, C. Poe,
and N. Sullivan;
A Method of Construction of a Laminated
Multichannel Plate Multiplier; US Provisional Patent Application
Pending, USPTO
history_paper B. Adams et al.;
A Brief Technical History of the Large-Area Picosecond
Photodetector (LAPPD) Collaboration; arXiv:1603.01843 Also see
lappddocs.uchicago.edu.
Oberla_thesis E. Oberla, Charged Particle Tracking in a
Water Cherenkov Optical Time Projection Chamber, Ph.D Dissertation,
University of Chicago, Aug. 2015 The University of Chicago
ProQuest Dissertations Publishing, 2015. 3725533.
Evan_thesis E. Angelico;
Development of Large-Area Mcp-Pmt Photo-Detectors for a Precision
Time-Of-Flight System at the Fermilab Test Beam Facility; Ph.D thesis,
The University of Chicago. ProQuest Dissertations Publishing, 2020.
28023552.
InsideOut_paper E. Angelico, T. Seiss,
B.W. Adams, A. Elagin, H. Frisch, E. Oberla, E. Spieglan; Capacitively coupled Pulse Readout in a 20cm×20cm MCP-based
photodetector Nucl. Instr. Meth. A, 2016
timing_paper
B.W. Adams, A. Elagin, H. Frisch, R. Obaid, E. Oberla, A. Vostrikov, R.
Wagner, J. Wang, M. Wetstein; Timing Characteristics of Large
Area Picosecond Photodetectors; Nucl. Inst. Meth. Phys. Res. A. , Vol.
795, 1 (Sept. 2015).
JF_NIM
J.-F. Genat,G. Varner, F. Tang, H. Frisch;
Signal Processing for Pico-second Resolution Timing
Measurements; Nucl.Instrum.Meth.A607:387-393,Oct. (2009).
e-Print:arXiv:0810.5590
Oberla_Clermont_2014 E. Oberla;
PSEC4 waveform sampler and Large-Area Picosecond Photo-Detectors
readout electronics: Procedings of the Workshop on Picosecond Photon
Sensors, Clermont-Ferrand, 2014. Available at
http://lappddocs.uchicago.edu/documents/243
PSEC4_paper
E. Oberla, J.-F. Genat, H. Grabas, H. Frisch, K. Nishimura, and G
Varner A 15 GSa/s, 1.5 GHz Bandwidth Waveform Digitizing
ASIC,
Nucl. Instr. Meth. A735, 21 Jan. (2014), 452;
zero_for_four One of the authors (HJF) observed non-detection by X-ray for four 90-year-old women who each spent a week in the hospital before a PET scan produced an unmistakable diagnosis. You can calculate the efficiency from Poisson statistics yourself.
1ps_JTFI_proposal H. Frisch, M. Heintz, J. Park, Eric Oberla, F. Tang, Y. Wah;
(University of Chicago)
T. England, F. Fadim, S. Ganguly, N. J. Pastika, P. Rubinov, K. Yonehara
(Fermilab)
A High-Performance Multi-Channel Low-Power ASIC with
One Pico-second Resolution for Emerging Detector Technologies in Positron-Emission Tomography, Particle Physics, and Astrophysics
Proposal to the Univ. of Chicago Joint Task Force Initiative; June, 2023
timing_workshops For a discussion of the factors that
determine time and space resolution in MCP-based detectors, see the
contributions to: The Factors that Limit Time Resolution in
Photodetectors; Workshop, Univ. of Chicago, Chicago, IL; 28-29 April
2011. See http://psec.uchicago.edu/workshops/timing workshops for a
list of other timing workshops in the Chicago-France series
(P.LeDu/France).
OTPC_paper E. Oberla and H.J. Frisch; Charged particle
tracking in a water Cherenkov optical time-projection chamber;
Nucl. Inst. Meth. Phys. Res. A814, 19 (April 2016);
ISSN 0168-9002; arXiv:1510.00947
Philadelphia_talk
H. J. Frisch; Drifting Photons on Optical Paths, Mirrors, Sub-mm
Resolution in Four Dimensions, and Transverse/Longitudinal Phase Space:
Exploiting Psec Time Resolution. Proceedings of the
5th International Conference on Micro-Pattern Gas Detectors
(MPGD2017); 22-26 May, 2017, Philadelphia, USA; Proceedings in
Science, 2018
ritt_workshop_talk See S. Ritt in Session 5 of the Workshop
The Factors that Limit Time Resolution in Photodetectors,
Univ. of Chicago; April 2011;
https://psec.uchicago.edu/workshops/fast_timing_conf_2011/
Slade_SEY_NIM Z. Insepov, V. Ivanov, S.
J. Jokela, I. V. Veryovkin and A. V. Zinovev;
Comparison of secondary electron emission simulation to
experiment; Nucl. Instr. Meth A639, 155 (2011) This work was
supported by the LAPPD Collaboration.
XCAT_benchmark_dose
E.E. Verwer et al., Harmonisation of PET/CT contrast recovery
performance for brain studies; European Journal of Nucl. Medicine and
Mol. Imaging 48, 8;2856-2870; 2021
|
http://arxiv.org/abs/2307.02102v1 | 20230705082229 | Femtoscopy of $D$ mesons and light mesons upon unitarized effective field theories | [
"Juan M. Torres-Rincon",
"Àngels Ramos",
"Laura Tolos"
] | hep-ph | [
"hep-ph",
"nucl-th"
] | |
http://arxiv.org/abs/2307.00678v2 | 20230702224036 | Langevin dynamics for the probability of Markov jumping processes | [
"Wuchen Li"
] | math.PR | [
"math.PR",
"math.OC"
] |
Langevin dynamics for the probability of Markov jumping processes (running head: Finite state Wasserstein common noises)
Wuchen Li
[email protected]
Department of Mathematics, University of South Carolina, 29208.
W. Li's work is supported by AFOSR MURI FP 9550-18-1-502, AFOSR YIP award No. FA9550-23-1-0087, NSF DMS-2245097, and NSF RTG: 2038080.
We study gradient drift-diffusion processes on a probability simplex set with finite state Wasserstein metrics, namely the Wasserstein common noises. It is known that the Kolmogorov transition equation of finite reversible Markov jump processes forms the gradient flow of entropy in finite state Wasserstein space. This paper proposes to perturb finite state Markov jump processes with Wasserstein common noises and formulates stochastic reversible Markov jumping processes. We also define a Wasserstein Q-matrix for this stochastic Markov jumping process. We then derive the functional Fokker-Planck equation in the probability simplex, whose stationary distribution is a Gibbs distribution of the entropy functional in a simplex set. Finally, we present several examples of Wasserstein drift-diffusion processes on a two-point state space.
August 1, 2023
§ INTRODUCTION
Drift diffusions in probability density spaces play essential roles in macroscopic fluctuation theory, non-equilibrium statistical physics (e.g., glass dynamics), and stochastic evolutionary games <cit.>. They describe stochastic behaviors of particles/agents perturbed by Brownian motions (common noises) on population states. A famous example is the Dean–Kawasaki equation (super Brownian motion) <cit.>. Nowadays, the Dean–Kawasaki equation has been shown to be a gradient drift-diffusion in Wasserstein-2 space <cit.>. In the literature, gradient flows in Wasserstein-2 space form a class of density evolutionary equations <cit.>.
Typical examples are heat equations, which are Wasserstein gradient flows of negative Boltzmann-Shannon entropy. While the Dean–Kawasaki equation adds “Wasserstein common noises” into these density evolutionary dynamics. They introduce a class of stochastic heat equations.
Classical studies of Wasserstein drift diffusion processes are defined on a continuous domain, e.g., a d-dimensional torus. Much less has been studied on finite state spaces, such as finite weighted graphs or, equivalently, reversible Markov chains. It has been shown that the gradient operator in finite state Wasserstein-2 spaces <cit.> forms the generator of the reversible Markov jumping process <cit.>. The Wasserstein gradient flow belongs to the general Onsager principle <cit.>. Many physical, chemical <cit.>, and social models, including stochastic evolutionary game theory <cit.>, are often studied on a finite state space. Natural questions arise:
What are drift diffusion processes in finite state Wasserstein spaces? In particular, what are canonical Wasserstein common noises perturbed reversible Markov jump processes?
This note presents Wasserstein type drift diffusion processes in a finite state simplex set. Following <cit.>, we study the canonical diffusion process in finite state Wasserstein space. We then formulate an over-damped Langevin dynamics in finite state Wasserstein spaces. We also present an example of the gradient drift-diffusion process. When the potential function is the ϕ-divergence, and the activation function is the ϕ-divergence induced mean function, the proposed SDE adds geometric diffusions in the transition equations of finite reversible Markov jumping processes. In particular, we derive a Wasserstein Q-matrix function for modeling common and individual noises towards finite reversible Markov jumping processes. Finally, numerical examples of a two-point space are introduced to illustrate the proposed Langevin dynamics in the probability simplex.
In the literature, gradient drift-diffusion processes in Wasserstein-2 space on a continuous domain have been studied in <cit.>. In particular, a general Wasserstein gradient drift-diffusion process has been studied in <cit.>, which forms the Dean–Kawasaki equation <cit.>. In fact, the Wasserstein common noise differs from the Lasry–Lions common noise <cit.>, while the latter is widely used in mean-field control and mean-field games <cit.>. Meanwhile, <cit.> demonstrates that the generator of the Lasry–Lions common noise is only a partial Wasserstein Laplacian operator. In contrast to their works, we formulate Wasserstein common noises on finite states, constructed from the Laplace–Beltrami operator on finite state Wasserstein-2 space. We remark that the modeling and computation of Wasserstein common noises are essential research directions in transport information geometry <cit.>. Moreover, Wasserstein common noises have vast applications in modeling dynamics from chemical reaction diffusion under the Onsager principle <cit.>, finite state evolutionary games <cit.>, mean field games <cit.>, and sampling problems in data science <cit.>. Mathematically, Wasserstein common noises on finite states also bring a class of challenging degenerate stochastic processes whenever the process stays on the boundary of the probability simplex set. We leave theoretical studies and numerical simulations of Wasserstein drift diffusions on discrete states to future work.
This paper is organized as follows. In section <ref>, we briefly review the finite state Wasserstein-2 metric with gradient, divergence, and Laplacian operators. We next write the gradient-drift diffusion process on a probability simplex set. We also formulate the Fokker-Planck equation in finite state Wasserstein space.
In section <ref>, we present the modeling motivation of this paper. First, we review that the generator (Q-matrix) of the reversible Markov jumping process is the gradient descent of divergence functions. We then add a stochastic perturbation into the finite reversible Markov jumping process and develop a Wasserstein Q-matrix for reversible Markov jump processes. Finally, several examples and numerical simulations of Wasserstein drift diffusions on a two-point space are presented in section <ref>.
§ WASSERSTEIN COMMON NOISES IN PROBABILITY SIMPLEX
In this section, we formulate the canonical diffusion process in a discrete probability simplex set embedded with Wasserstein-2 metrics. We then formulate the gradient drift diffusion in probability simplex, which is a over-damped Wasserstein Langevin dynamics.
§.§ Finite state Wasserstein-2 space
We review the Wasserstein-2 type metric on finite state sample space <cit.>; see also geometric computations in <cit.>.
Consider a weighted undirected finite graph G=(I, E, ω), which contains the vertex set I={1,⋯, n}, the edge set E, and the weights set ω. Here ω=(ω_ij)_i,j∈ I∈ℝ^n× n is a symmetric matrix, such that
ω_ij=ω_ji>0
if (i,j)∈ E;
0 otherwise.
The set of neighbors or adjacent vertices of i is denoted by N(i)={j∈ I (i,j)∈ E}. Define the volume vector on weighted graph as π=(π_i)_i=1^n, such that
π_i:=∑_j∈ N(i)ω_ij/∑_(i,j)∈ Eω_ij.
We review gradient, divergence, and Laplacian operators on graphs. Given a function Φ: I →ℝ, denote Φ=(Φ_i)_i=1^n∈ℝ^n. Define a weighted gradient as a function ∇_ωΦ: E →ℝ,
(i,j) ↦ (∇_ωΦ)_i,j :=√(ω_ij) (Φ_j-Φ_i).
We call it a potential vector field on E. A general vector field is a function on E such that
v=( v_ij)_(i,j)∈ E, which is anti-symmetric:
v_ij=-v_ji, (i,j) ∈ E.
The divergence of a vector field v is defined as a function div_ω(v): E →ℝ,
i ↦ div_ω(v)_i := ∑_j∈ N(i)√(ω_ij) v_ij.
For a function Φ on I, the weighted graph Laplacian Δ_ωΦ: I →ℝ satisfies
Δ_ωΦ:=div_ω∇_ωΦ, i.e., i ↦ Δ_ωΦ_i
= ∑_j∈ N(i)ω_ij (Φ_j-Φ_i).
We use the convention that Δ_ω∈ℝ^n× n denotes a negative semi-definite matrix.
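For illustration, the following short Python sketch (ours; the weight matrix is an arbitrary three-node example) evaluates the weighted gradient, divergence, and graph Laplacian defined above.

```python
# Sketch of the weighted gradient, divergence and graph Laplacian defined above,
# for a three-node weighted graph; the weights are illustrative.
import numpy as np

omega = np.array([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 2.0],
                  [0.5, 2.0, 0.0]])                 # symmetric weights omega_ij

def grad_omega(phi):
    # (nabla_omega Phi)_{ij} = sqrt(omega_ij) (Phi_j - Phi_i)
    return np.sqrt(omega) * (phi[None, :] - phi[:, None])

def div_omega(v):
    # div_omega(v)_i = sum_j sqrt(omega_ij) v_ij for an antisymmetric field v
    return (np.sqrt(omega) * v).sum(axis=1)

def laplacian_omega(phi):
    # Delta_omega Phi = div_omega(grad_omega Phi), a negative semi-definite operator
    return div_omega(grad_omega(phi))

phi = np.array([1.0, 0.0, -1.0])
print(laplacian_omega(phi))   # same as (omega - np.diag(omega.sum(axis=1))) @ phi
```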
We next introduce the Wasserstein-2 type metric on a finite state. Denote the simplex set as
𝒫(I) = {p=(p_i)_i=1^n∈ℝ^n ∑_i∈ I p_i=1, p_i≥ 0},
where p is a probability vector and p_i represents the discrete probability function on a node i∈ I. For simplicity of illustration, we only consider the interior of probability simplex set.
Denote the tangent space of p∈𝒫(I) as
T_p𝒫(I) = {(σ_i)_i=1^n∈ℝ^n∑_i∈ Iσ_i=0 }.
Define an activation function θ: ℝ^+×ℝ^+ →ℝ^+, such that
(i)
θ(x, y)=θ(y, x);
(ii)
θ(x, y)> 0, xy≠ 0;
(iii)
θ(x, y)∈ C^2;
(iv)
θ(x, y) = 0, xy=0.
There are many choices of activation functions; see <cit.>.
[Geometric mean]
θ(x,y)=√(xy).
[Harmonic mean]
θ(x,y)=1/1/x+1/y.
[Logarithm mean]
θ(x,y)=x-y/log x-log y.
[ϕ' mean]
θ(x,y)=x-y/ϕ'(x)-ϕ'(y),
where ϕ∈ C^1(ℝ; ℝ) is a convex function with ϕ(1)=0. If ϕ(x)=xlog x-x, then ϕ'(x)=log x and the ϕ' mean recovers the logarithm mean.
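For concreteness, the activation functions of the examples above can be transcribed directly into code; the sketch below is ours and only the logarithmic mean needs a continuous extension on the diagonal.

```python
# Activation functions of the Examples above, for scalar arguments x, y > 0.
import numpy as np

def geometric_mean(x, y):
    return np.sqrt(x * y)

def harmonic_mean(x, y):
    # as defined above: 1 / (1/x + 1/y)
    return 1.0 / (1.0 / x + 1.0 / y)

def logarithmic_mean(x, y):
    # (x - y) / (log x - log y), extended by continuity to x = y;
    # this is the phi'-mean for phi(x) = x log x - x
    return x if np.isclose(x, y) else (x - y) / (np.log(x) - np.log(y))
```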
Denote L(p)=(L(p)_ij)_1≤ i,j≤ N, such that
L(p)_ij:=
-ω_ijθ_ij(p) if j≠ i;
∑_k∈ N(i)ω_kiθ_ki(p) if j=i,
where θ_ij is an average function defined as
θ_ij(p):=θ(p_i/π_i, p_j/π_j), for any i, j∈ I.
We also denote
L(p):=-div_ω (θ(p)∇_ω)=-div_ω (θ∇_ω).
From now on, we call L(p) the probability weighted Laplacian matrix.
When θ_ij(p)>0, L(p) is a symmetric matrix, whose diagonalization satisfies
L(p)=U(p)[ 0 ; λ_1(p) ; ⋱ ; λ_n-1(p) ]U(p)^⊤,
where 0<λ_1(p)≤⋯≤λ_n-1(p) are the eigenvalues of L(p) in ascending order,
and U(p)=(u_0(p),u_1(p),⋯, u_n-1(p))∈ℝ^n× n is the orthogonal matrix of eigenvectors, with u_0=1/√(n)(1,⋯, 1)^⊤. We also denote the pseudo-inverse of L(p) as L(p)^†. In other words,
L(p)^†=U(p)[ 0 ; 1/λ_1(p) ; ⋱ ; 1/λ_n-1(p) ]U(p)^⊤.
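A small numerical sketch (ours; the graph data, the probability vector, and the choice of the logarithmic-mean activation are illustrative) assembles L(p) and its pseudo-inverse.

```python
# Probability-weighted Laplacian L(p) of the definition above and its
# pseudo-inverse L(p)^dagger, on an illustrative three-node graph.
import numpy as np

omega = np.array([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 2.0],
                  [0.5, 2.0, 0.0]])
pi = omega.sum(axis=1) / omega.sum()       # volume vector

def theta_log(a, b):                       # logarithmic-mean activation
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def L_of_p(p):
    n = len(p)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and omega[i, j] > 0:
                L[i, j] = -omega[i, j] * theta_log(p[i] / pi[i], p[j] / pi[j])
    np.fill_diagonal(L, -L.sum(axis=1))    # diagonal makes each row sum to zero
    return L

p = np.array([0.5, 0.3, 0.2])
L = L_of_p(p)
L_dagger = np.linalg.pinv(L)               # pseudo-inverse entering g^W
print(np.linalg.eigvalsh(L))               # one zero eigenvalue, the rest positive
```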
The finite state Wasserstein-2 metric is defined as follows.
The inner product g^W:𝒫(I)× T_p𝒫(I)× T_p𝒫(I)→ℝ is given as
g^W(p)(σ_1,σ_2):= σ_1^⊤L(p)^†σ_2=Φ_1^⊤L(p)Φ_2
= 1/2∑_(i,j)∈ E(∇_ωΦ_1)_ij(∇_ωΦ_2)_ijθ_ij(p),
where
σ_k=L(p)Φ_k=-div_ω (θ∇_ωΦ_k)∈ T_p𝒫(I), k=1,2.
The inner product g^W defines a Wasserstein-2 metric on the simplex set 𝒫(I). From now on, we name (𝒫(I), g^W) the probability manifold (PM).
We last present gradient, divergence, and Laplace-Beltrami operators in probability manifold (𝒫(I), g^W).
The volume form in (𝒫(I), g^W) satisfies
dvol_W:=Π(p)^-1/2dp, with Π(p):=Π_i=1^n-1λ_i(p),
where λ_i(p) are positive eigenvalues of the matrix function L(p) and dp is the Euclidean volume form in ℝ^n. Denote ∇_p, ∇_p·, ∫· dp as gradient, divergence, integration operators in Euclidean space ℝ^n, respectively.
Denote 𝔽∈ C^∞(𝒫(I); ℝ), and denote a vector function ℍ=(ℍ_i)_i=1^n∈ C^∞(𝒫(I); ℝ^n).
(i) The gradient operator grad_W: C^∞(𝒫(I); ℝ)→ C^∞(𝒫(I); ℝ^n) satisfies
grad_W𝔽(p):= L(p)∇_p𝔽(p)
= (-div_ω (θ∇_ω∇_p𝔽(p))_i)_i=1^n
= (-∑_j∈ N(i)√(ω_ij)θ_ij(p)(∇_ω∇_p)_i,j𝔽(p))_i=1^n,
where
(∇_ω∇_p)_i,j𝔽(p):=√(ω_ij)(∂/∂ p_j-∂/∂ p_i)𝔽(p).
(ii) The divergence operator div_W: C^∞(𝒫(I); ℝ^n)→ C^∞(𝒫(I);ℝ) satisfies
div_Wℍ(p):= Π(p)^1/2∇_p·(Π(p)^-1/2ℍ(p)).
(iii) The Laplace-Beltrami operator Δ_W: C^∞(𝒫(I);ℝ)→ C^∞(𝒫(I);ℝ) satisfies
Δ_W 𝔽(p):= div_W(grad_W𝔽(p))
= Π(p)^1/2∇_p·(Π(p)^-1/2 L(p)∇_p𝔽(p))
= -1/4∑_(i,j)∈ E(∇_ω∇_p)_i,j𝔽(p)(∇_ω∇_p)_i,jlogΠ(p)θ_ij(p)
+1/2∑_(i,j)∈ E(∇_ω∇_p)_i,j(∇_ω∇_p)_i,j𝔽(p)θ_ij(p)
+1/2∑_(i,j)∈ E(∇_ω∇_p)_i,j𝔽(p)(∇_ω∇_p)_i,jθ_ij(p),
where
(∇_ω∇_p)_i,j(∇_ω∇_p)_i,j𝔽(p):= (√(ω_ij)(∂/∂ p_j-∂/∂ p_i))^2𝔽(p)
= ω_ij(∂^2/∂ p_i^2-2∂^2/∂ p_i∂ p_j+∂^2/∂ p_j^2)𝔽(p),
and
(∇_ω∇_p)_i,jθ_ij(p):=√(ω_ij)(∂/∂ p_j-∂/∂ p_i)θ_ij(p).
§.§ Finite state cannocial Wasserstein common noises
We are ready to introduce a canonical diffusion process on a manifold (𝒫(I),g^W).
Consider an Ito stochastic differential equation
dp_t=div_ω(θ(p_t) ∇_ω∇_plogΠ(p_t)^1/2/θ(p_t))dt+√(2)div_ω(√(θ(p_t))dB^E_t),
where p_t=p(t) is the solution of SDE (<ref>), B_t^E:=(B^E_ij(t))_1≤ i,j≤ N with B_ij^E(t)=B_ij^E=1/√(2)(B_ij-B_ji), and B_ij, 1≤ i,j≤ N, are standard independent Brownian motions in ℝ^n× n with mean zero and unity rate variance. In details, for any i∈ I, equation (<ref>) satisfies
dp_i(t)= ∑_j∈ N(i)√(ω_ij)(∇_ω∇_p)_i,jlogΠ(p(t))^1/2/θ_ij(p(t))θ_ij(p(t)) dt
+∑_j∈ N(i)√(ω_ijθ_ij(p(t))) (dB_ij(t)-dB_ji(t)),
where
(∇_ω∇_p)_i,jlogΠ(p)^1/2/θ(p):=√(ω_ij)(∂/∂ p_j-∂/∂ p_i)(logΠ(p)^1/2/θ_ij(p)).
We call the solution of (<ref>) the √(2)-Wasserstein common noise on finite states.
We next present Kolmogorov forward and backward operators for SDE (<ref>).
Denote the probability density function and the test function as
ℙ(p)∈ C^∞(𝒫(I); ℝ), Φ(p)∈ C^∞(𝒫(I); ℝ).
Then the Kolmogorov forward operator of SDE (<ref>) satisfies
𝖫^*_Wℙ(p)= 1/2∇_p·(ℙ(p) L(p) ∇_plogΠ(p))+∇_p·(L(p) ∇_pℙ(p))
= 1/2(∇_pℙ(p), L(p) ∇_plogΠ(p))+1/2ℙ(p)∇_p·(L(p)∇_plogΠ(p))+∇_p·(L(p) ∇_pℙ(p)).
And the Kolmogorov backward operator of SDE (<ref>) satisfies
𝖫_WΦ(p)= -1/2(∇_pΦ(p), L(p)∇_plogΠ(p))+∇_p·(L(p)∇_pΦ(p)).
In details,
𝖫^*_Wℙ(p)= 1/4∑_(i,j)∈ E (∇_ω∇_p)_i,jℙ(p) (∇_ω∇_p)_i,jlogΠ(p)θ_ij(p)
+1/4ℙ(p)∑_(i,j)∈ E(∇_ω∇_p)_i,jlogΠ(p)(∇_ω∇_p)_i,jθ_ij(p)
+1/4ℙ(p)∑_(i, j)∈ E(∇_ω∇_p)_i,j(∇_ω∇_p)_i,jlogΠ(p)θ_ij(p)
+1/2∑_(i,j)∈ E(∇_ω∇_p)_i,j(∇_ω∇_p)_i,jℙ(p)θ_ij(p)
+1/2∑_(i,j)∈ E (∇_ω∇_p)_i,jℙ(p)(∇_ω∇_p)_i,jθ_ij(p),
and
𝖫_WΦ(p)
= -1/4∑_(i,j)∈ E (∇_ω∇_p)_i,jΦ(p)(∇_ω∇_p)_i,jlogΠ(p)θ_ij(p)
+1/2∑_(i,j)∈ E(∇_ω∇_p)_i,j(∇_ω∇_p)_i,jΦ(p)θ_ij(p)
+1/2∑_(i,j)∈ E(∇_ω∇_p)_i,jΦ(p)(∇_ω∇_p)_i,jθ_ij(p).
The derivations of L_W^* and L_W are provided in appendix.
§.§ Langevin dynamics in finite state Wasserstein space
We next derive the overdamped Langevin dynamics in finite state Wasserstein space. It forms a gradient drift diffusion processes in probability simplex set.
Given 𝕍∈ C^∞(P(I); ℝ), consider the gradient drift diffusion process
dp_t=div_ω(θ(p_t) ∇_ω∇_p[𝕍(p_t)+βlogΠ(p_t)^1/2/θ(p_t)])dt+√(2β)div_ω(√(θ(p_t))dB^E_t),
where β>0 is a scalar. In details, for any i∈ I, equation (<ref>) satisfies
dp_i(t)= ∑_j∈ N(i)√(ω_ij)(∇_ω∇_p)_i,j(𝕍(p(t))+βlogΠ(p(t))^1/2/θ_ij(p(t)))θ_ij(p(t))dt
+√(β)∑_j∈ N(i)√(ω_ijθ_ij(p(t))) (dB_ij(t)-dB_ji(t)).
The Fokker-Planck equation of SDE (<ref>) forms
∂/∂ tℙ(t,p)= ∇_p·(ℙ(t,p)L(p)∇_p𝕍(p))+β𝖫^*_Wℙ(t,p),
where the solution ℙ(t, p) represents the probability density function of SDE (<ref>).
Assume that Z:=∫_𝒫(I)e^-1/β𝕍(p)Π(p)^-1/2dp<+∞. Then the stationary solution of equation (<ref>) satisfies
ℙ^*(p)=1/Ze^-1/β𝕍(p)Π(p)^-1/2.
The Kolmogorov forward equation of SDE (<ref>) satisfies
∂𝕡(t,p)/∂ t= div_W(𝕡(t,p)grad_W𝕍(p))+βΔ_W𝕡(t,p)
= Π(p)^1/2∇_p·(L(p)[𝕡(t,p)∇_p𝕍(p)+ β∇_p𝕡(t,p)] Π(p)^-1/2).
Again, denote ℙ(t,p)=𝕡(t,p)Π(p)^-1/2, then we have
∂ℙ(t,p)/∂ t
= ∇_p·(ℙ(t,p)L(p)∇_p𝕍(p))+β𝖫^*_Wℙ(t,p)
= β∇_p·(ℙ(t,p)L(p)∇_plogℙ(t,p)/e^-1/β𝕍(p)Π(p)^-1/2).
This finishes the proof.
We note that the dynamical behaviors of SDEs (<ref>) or (<ref>) are often complicated when p_i, p_j are close to zero. They are degenerate SDEs at the boundary of the simplex set. In modeling finite state population games, one needs to construct suitable reflecting boundary conditions to ensure the well-posedness of SDE (<ref>). We leave their study to future work.
§ STOCHASTIC REVERSIBLE MARKOV JUMP PROCESSES
In this section, we present an important example of gradient drift diffusion process (<ref>). This is the main result of this paper.
We first review the fact that the gradient flow in (𝒫(I), g^W) forms Kolmogorov forward equations for finite state reversible Markov jump processes. In other words, there exists a Q-matrix, the generator of finite reversible Markov jumping process, which is a gradient descent direction of entropy in (𝒫(I), g^W). We next demonstrate that the proposed SDE adds geometric diffusions in the transition equations of finite reversible Markov jumping processes. In particular, we derive a Wasserstein Q-matrix function for modeling common noises and individual noises towards finite reversible Markov jumping processes.
In this section, we always consider an activation function:
θ(x,y)=x-y/ϕ'(x)-ϕ'(y),
where ϕ∈ C^1(ℝ;ℝ) is a convex function with ϕ(1)=0. Let the functional 𝕍 in equation (<ref>) be the ϕ-divergence:
𝕍(p)=D_ϕ(pπ):=∑_i=1^nϕ(p_i/π_i)π_i,
where π∈ℝ^n is defined in (<ref>). One example of ϕ-divergence is the Kullback–Leibler (KL) divergence. E.g., ϕ(x)=xlog x-x, then D_ϕ(pπ)=D_KL(pπ)=∑_i=1^np_ilogp_i/π_i.
§.§ Reversible Markov jumping process
We first review that gradient flows of ϕ-divergences in (𝒫(I), g^W) form reversible Markov jump processes; shown in <cit.> and strong Onsager gradient flows <cit.>. In other words, let β=0. In this case, SDE (<ref>) forms an ordinary differential equation, which is the gradient flow of ϕ-divergence in (𝒫(I), g^W):
dp_i(t)/dt= div_ω (θ(p(t))∇_ω∇_pD_ϕ(p(t)π))_i
= ∑_j∈ N(i)ω_ijθ_ij(p(t))(∂/∂ p_j-∂/∂ p_i)D_ϕ(p(t)π)
= ∑_j∈ N(i)ω_ijp_j(t)/π_j-p_i(t)/π_i/ϕ'(p_j(t)/π_j)-ϕ'(p_i(t)/π_i)(ϕ'(p_j(t)/π_j)-ϕ'(p_i(t)/π_i))
= ∑_j∈ N(i)ω_ij(p_j(t)/π_j-p_i(t)/π_i),
where we use the fact that θ_ij(p)=θ(p_i/π_i, p_j/π_j)=p_j/π_j-p_i/π_i/ϕ'(p_j/π_j)-ϕ'(p_i/π_i) and θ_ij(p)(ϕ'(p_j/π_j)-ϕ'(p_i/π_i))=p_j/π_j-p_i/π_i.
In fact, the gradient flow equation (<ref>) is a Kolmogorov forward equation for a time-continuous reversible Markov chain. We now translate between the notation of reversible Markov chains and that of the finite weighted graph G=(I, E, ω). In other words, denote
Q_ij:=ω_ij/π_i if j≠ i;
-∑_k∈ N(i)ω_ik/π_i if j=i.
Thus equation (<ref>) satisfies
dp_i(t)/dt=∑_j=1^n [Q_jip_j(t)-Q_ijp_i(t)].
The Q-matrix is the generator of a reversible Markov chain in I. It satisfies the row sum zero condition:
∑_j=1^n Q_ij = 0, Q_ij≥ 0, for j≠ i.
And π=(π_i)_i=1^n∈ℝ^n defined in (<ref>) is an invariant measure for ODE (<ref>) with the detailed balance relation
Q_ijπ_i = Q_jiπ_j.
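As a sanity check (our sketch; the graph weights and the forward Euler time stepping are illustrative choices), one can assemble the Q-matrix from the graph data and verify numerically that the Kolmogorov forward equation drives p toward the invariant measure π.

```python
# Build Q_ij = omega_ij / pi_i (j != i) with zero row sums and integrate
# dp_i/dt = sum_j (Q_ji p_j - Q_ij p_i), i.e. dp/dt = Q^T p, by forward Euler.
import numpy as np

omega = np.array([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 2.0],
                  [0.5, 2.0, 0.0]])
pi = omega.sum(axis=1) / omega.sum()

Q = omega / pi[:, None]                # off-diagonal entries
np.fill_diagonal(Q, -Q.sum(axis=1))    # row sums are zero
assert np.allclose(Q * pi[:, None], (Q * pi[:, None]).T)   # detailed balance

p, dt = np.array([1.0, 0.0, 0.0]), 1e-3
for _ in range(20_000):
    p = p + dt * (Q.T @ p)
print(p, pi)                           # p(t) approaches the invariant measure pi
```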
§.§ Stochastic reversible Markov jumping process
We next demonstrate that the gradient drift diffusion process in (𝒫(I), g^W) forms a stochastic reversible Markov jumping process on finite states.
Let β>0 be a positive scalar. Consider SDE (<ref>) as the gradient drift-diffusion flow of ϕ-divergence in (𝒫(I), g^W):
dp_i(t)= ∑_j∈ N(i)ω_ijθ_ij(p(t))(∂/∂ p_j-∂/∂ p_i)D_ϕ(p(t)π)dt
+β∑_j∈ N(i)ω_ijθ_ij(p(t))(∂/∂ p_j-∂/∂ p_i)logΠ(p(t))^1/2/θ_ij(p(t))dt
+√(β)∑_j∈ N(i)√(ω_ijθ_ij(p(t))) (dB_ij(t)-dB_ji(t)).
From equation (<ref>) and the definition of Q-matrix in (<ref>), we rewrite SDE (<ref>) as follows:
dp_i(t)= ∑_j=1^n [Q_jip_j(t)-Q_ijp_i(t)]dt
+β∑_j∈ N(i)ω_ijθ_ij(p(t))(∂/∂ p_j-∂/∂ p_i)logΠ(p(t))^1/2/θ_ij(p(t))dt
+√(β)∑_j∈ N(i)√(ω_ijθ_ij(p(t))) (dB_ij(t)-dB_ji(t)).
We next study several properties of SDE (<ref>). We define a Wasserstein diffusion perturbed Q-matrix, namely the Wasserstein Q-matrix.
Assume that p_i>0 for all i∈ I. Define a matrix function Q^W=(Q^W_ij)_1≤ i,j≤ N∈ℝ^n× n, where Q^W_ijℝ^n×ℝ×ℝ^n× n→ℝ, such that
Q^W_ij(p, β, Ḃ):=
Q_ij+a_ij(p) if j≠ i;
-∑_k∈ N(i)(Q_ik+a_ik(p)) if j=i,
where
a_ij(p):=1/p_imax{0, A_ji(p)},
and
A_ij(p):=βω_ijθ_ij(p)(∂/∂ p_j-∂/∂ p_i)logΠ(p)^1/2/θ_ij(p)+√(βω_ijθ_ij(p)) (Ḃ_ij(t)-Ḃ_ji(t)).
Using the matrix function Q^W, we rewrite SDE (<ref>) as follows.
SDE (<ref>) satisfies
ṗ_i(t)=∑_j=1^n [Q^W_ji(p(t), β, Ḃ)p_j-Q^W_ij(p(t), β, Ḃ)p_i].
In addition, Q^W satisfies the row sum zero condition:
∑_j=1^n Q^W_ij(p,β, Ḃ) = 0, Q^W_ij(p, β, Ḃ)≥ 0, for j≠ i.
If β=0, then the Wasserstein Q-matrix forms the Q-matrix. I.e.,
Q^W(p,0,Ḃ)=Q.
We check that
Q^W_ji(p)p_j-Q^W_ij(p)p_i= Q_jip_j-Q_ijp_i+1/p_jmax{0, A_ij(p)}p_j-1/p_imax{0, A_ji(p)}p_i
= Q_jip_j-Q_ijp_i+max{0, A_ij(p)}-max{0, A_ji(p)}.
From the fact that A_ij(p)=-A_ji(p), we have
A_ij(p)=max{0, A_ij(p)}-max{0, -A_ij(p)}=max{0, A_ij(p)}-max{0, A_ji(p)}.
This finishes the proof.
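The proposition can also be checked numerically; in the sketch below (ours), the antisymmetric edge field A is a random stand-in for the Wasserstein drift and Brownian increments of the definition, and the graph data are illustrative.

```python
# Numerical check of the proposition: with a_ij(p) = max(0, A_ji(p)) / p_i the
# Wasserstein Q-matrix has zero row sums and satisfies
#   sum_j [Q^W_ji p_j - Q^W_ij p_i] = sum_j [Q_ji p_j - Q_ij p_i] + sum_j A_ij.
import numpy as np

rng = np.random.default_rng(1)
omega = np.array([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 2.0],
                  [0.5, 2.0, 0.0]])
pi = omega.sum(axis=1) / omega.sum()
Q = omega / pi[:, None]
np.fill_diagonal(Q, -Q.sum(axis=1))

p = rng.dirichlet(np.ones(3))                  # interior point of the simplex
B = rng.normal(size=(3, 3))
A = B - B.T                                    # antisymmetric, A_ij = -A_ji
A[omega == 0.0] = 0.0                          # supported on the edge set only

QW = Q + np.maximum(0.0, A.T) / p[:, None]     # Q^W_ij = Q_ij + max(0, A_ji)/p_i
np.fill_diagonal(QW, 0.0)
np.fill_diagonal(QW, -QW.sum(axis=1))          # zero row sums, as in the proposition

lhs = QW.T @ p                                 # sum_j [Q^W_ji p_j - Q^W_ij p_i]
rhs = Q.T @ p + A.sum(axis=1)
print(np.allclose(lhs, rhs))                   # True
```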
We last demonstrate the Fokker-Planck equations for SDE (<ref>). We also present an invariant distribution of SDE (<ref>).
Denote ℙ(t, p) as the solution of the probability density function of SDE (<ref>).
Then
∂/∂ tℙ(t,p)+∇_p·(ℙ(t,p) (∑_j=1^n [Q_jip_j-Q_ijp_i] )_i=1^n)=β𝖫^*_Wℙ(t,p).
Assume that Z=∫_𝒫(I)e^-1/βD_ϕ(pπ)Π(p)^-1/2dp<+∞, then the stationary solution of equation (<ref>) satisfies
ℙ^*(p)=1/Ze^-1/βD_ϕ(pπ)Π(p)^-1/2.
The proof directly follows from Proposition <ref>. We have
∂ℙ(t,p)/∂ t= -∇_p·(ℙ(t,p) (∑_j=1^n [Q_jip_j-Q_ijp_i] )_i=1^n)+β𝖫^*_Wℙ(t,p)
= ∇_p·(ℙ(t,p)L(p)∇_pD_ϕ(pπ))+βL^*_Wℙ(t,p)
= ∇_p·(ℙ(t,p)L(p)∇_pD_ϕ(pπ))+β∇_p·(ℙ(t,p)L(p)∇_plogℙ(t,p)/Π(p)^-1/2)
= β∇_p·(ℙ(t,p)L(p)∇_plogℙ(t,p)/e^-1/βD_ϕ(pπ)Π(p)^-1/2).
Clearly, the stationary density of equation (<ref>) satisfies
ℙ^*(p)=1/Ze^-1/βD_ϕ(pπ)Π(p)^-1/2,
where Z<+∞ is a normalization constant.
§ EXAMPLES ON A TWO POINT SPACE
In this section, we present several examples of Wasserstein gradient drift diffusion processes (<ref>) on a two-point state.
Consider a two-point graph I={1,2}, with ω_12=ω_21>0, ω_11=ω_22=0, and π_1=π_2=1/2.
Denote p=(p_1, p_2)^⊤∈𝒫(I)⊂ℝ^2 as the probability vector. In this case,
L(p)=[ θ_12(p)ω_12 -θ_12(p)ω_12; -θ_12(p)ω_12 θ_12(p)ω_12 ].
The eigenvalue of L(p) can be computed explicitly. In other words,
Π(p)=λ_1(p)=2ω_12θ_12(p).
The Wasserstein gradient drift-diffusion (<ref>) satisfies
{ dp_1(t) =ω_12θ_12(p(t))(∂/∂ p_2-∂/∂ p_1)[𝕍(p)-β/2logθ_12(p(t))]dt
+√(βω_12θ_12(p(t))) (dB_12(t)-dB_21(t)),
dp_2(t)=ω_12θ_12(p(t))(∂/∂ p_1-∂/∂ p_2)[𝕍(p)-β/2logθ_12(p(t))]dt
+√(βω_12θ_12(p(t))) (dB_21(t)-dB_12(t)),
.
where (B_12, B_21)∈ℝ^2 are standard independent Brownian motions.
The two dimensional SDE (<ref>) can be further simplified into a one dimensional equation. Denote x(t): =p_1(t)∈ [0,1], p_2(t)=1-x(t), h=√(ω_12)>0, V(x):=𝕍(p)=𝕍(x,1-x), and θ(x):=θ_12(p). Note that ∂/∂ p_1θ_12(p)-∂/∂ p_2θ_12(p)=d/dxθ(x)=θ'(x), and ∂/∂ p_1𝕍(p)-∂/∂ p_2𝕍(p)=d/dxV(x)=V'(x). Write B(t)=1/√(2)(B_12(t)-B_21(t)). Then SDE (<ref>) satisfies
dx_t=-h^2[θ(x_t)V'(x_t)-β/2θ'(x_t)]dt+h√(2βθ(x_t))dB_t,
where x_t∈ [0,1] is the solution. Thus the Fokker-Planck equation of SDE (<ref>) satisfies
∂_tρ(t,x)= h^2∂_x(ρ(t,x)[θ(x)V'(x)-β/2θ'(x)])+β h^2∂_xx(ρ(t,x)θ(x))
= β h^2∂_x(ρ(t,x)θ(x)∂_xlogρ(t,x)/e^-1/βV(x)θ(x)^-1/2).
And the stationary density of SDE (<ref>) satisfies
ρ^*(x)=1/Ze^-V(x)/βθ(x)^-1/2,
where we assume that Z=∫_0^1e^-V(y)/βθ(y)^-1/2 dy<+∞.
[Wasserstein common noises on a two point space]
Let β=1 and 𝕍(p)=0. The SDE (<ref>) forms the canonical Wasserstein common noise:
dx_t =h^2/2θ'(x_t)dt+h√(2θ(x_t))dB_t.
In this case, assume that Z=∫_0^1θ(y)^-1/2 dy<+∞, the stationary density in simplex set satisfies
ρ^*(x)=1/Zθ(x)^-1/2.
In particular, let θ be a geometric mean, i.e., θ(x)=2√(x(1-x)). Then SDE (<ref>) forms
dx_t=h^2(1-2x_t)/(2x_t^1/2(1-x_t)^1/2)dt+ 2hx_t^1/4(1-x_t)^1/4dB_t.
We simulate the above SDE numerically on the time interval t∈ [0,1] by the Euler–Maruyama scheme, with parameters h=0.1 and x_0=0.5.
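A minimal Euler–Maruyama sketch for this example is given below (our implementation; the step size and the crude clipping away from the simplex boundary are our choices, made because the SDE degenerates at the boundary).

```python
# Euler-Maruyama sketch for Example 1: theta(x) = 2 sqrt(x(1-x)), beta = 1, V = 0,
# so dx = (h^2/2) theta'(x) dt + h sqrt(2 theta(x)) dB.  Step size and boundary
# clipping are illustrative choices, not part of the model.
import numpy as np

rng = np.random.default_rng(0)
h, x0, n_steps = 0.1, 0.5, 10_000
dt = 1.0 / n_steps                                  # time horizon T = 1

def theta(x):
    return 2.0 * np.sqrt(x * (1.0 - x))

def theta_prime(x):
    return (1.0 - 2.0 * x) / np.sqrt(x * (1.0 - x))

x, path = x0, [x0]
for _ in range(n_steps):
    drift = 0.5 * h**2 * theta_prime(x)
    diffusion = h * np.sqrt(2.0 * theta(x))
    x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    x = min(max(x, 1e-9), 1.0 - 1e-9)               # keep the state inside (0, 1)
    path.append(x)
```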
[Individual and Wasserstein common noises on a two point space]
Let β=1 and 𝕍(p)=p_1logp_1/π_1+p_2logp_2/π_2, i.e., V(x)=xlog x+(1-x)log(1-x)+log 2.
Then SDE (<ref>) satisfies
dx_t =h^2[-θ(x_t)(log x_t-log (1-x_t))+1/2θ'(x_t)]dt+h√(2θ(x_t))dB_t.
And the stationary density in simplex set satisfies
ρ^*(x)=1/Z(1/x)^x(1/1-x)^1-xθ(x)^-1/2, Z=∫_0^1(1/y)^y(1/1-y)^1-yθ(y)^-1/2 dy<+∞.
In particular, let θ be a logarithm mean, i.e., θ(x)=2(2x-1)/log x-log(1-x). Then SDE (<ref>) forms
dx_t= h^2[2(1-2x_t)+(1-2x_t)/(x_t-x_t^2)(log x_t-log(1-x_t))^2+2/(log x_t-log(1-x_t))]dt
+ 2h√(2x_t-1/log x_t-log(1-x_t))dB_t.
Again, we simulate the above SDE numerically on the time interval t∈ [0,1] by the Euler–Maruyama scheme, with parameters h=0.1 and x_0=0.5.
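The same Euler–Maruyama loop can be reused for this example by swapping in the drift and diffusion below (our transcription of the closed form above; the values at x = 1/2 are set by continuity).

```python
# Drift and diffusion of Example 2 (KL potential, logarithmic-mean activation),
# to be plugged into the Euler-Maruyama loop of the previous sketch.
import numpy as np

def theta_log(x):
    # theta(x) = 2(2x - 1) / (log x - log(1 - x)), equal to 1 at x = 1/2 by continuity
    if np.isclose(x, 0.5):
        return 1.0
    return 2.0 * (2.0 * x - 1.0) / (np.log(x) - np.log(1.0 - x))

def drift(x, h=0.1):
    # h^2 [ 2(1-2x) + (1-2x)/((x - x^2) L^2) + 2/L ],  L = log x - log(1-x);
    # the closed form is 0/0 at x = 1/2, where the drift vanishes by continuity
    if np.isclose(x, 0.5):
        return 0.0
    L = np.log(x) - np.log(1.0 - x)
    return h**2 * (2.0 * (1.0 - 2.0 * x)
                   + (1.0 - 2.0 * x) / ((x - x**2) * L**2)
                   + 2.0 / L)

def diffusion(x, h=0.1):
    return h * np.sqrt(2.0 * theta_log(x))     # equals 2 h sqrt((2x-1)/L)
```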
§ DISCUSSIONS
In this paper, we present Wasserstein common noises in probability simplex set, which is built from the Laplace-Beltrami operator in finite state Wasserstein space. We also derive a drift-diffusion process in probability simplex. Extending the equivalent relationship between gradient flows and reversible Markov jumping processes, we introduce a class of stochastic reversible Markov jumping processes. The stochastic perturbation is added from the canonical Wasserstein common noise on finite states.
We remark that equation (<ref>) is known as the strong Onsager gradient flow <cit.>:
dp/dt=-L(p)∇_pD_ϕ(pπ)=div_ω(θ(p) ∇_ω∇_pD_ϕ(pπ)),
where ∇_pD_ϕ(pπ) is the generalized force and L(p) is the Onsager response matrix. In this sense, the proposed SDE (<ref>) satisfies Onsager gradient drift diffusions:
dp_t= div_ω(θ(p_t) ∇_ω∇_pD_ϕ(p_tπ))dt+βdiv_ω(θ(p_t) ∇_ω∇_plogΠ(p_t)^1/2/θ(p_t))dt+√(2β)div_ω(√(θ(p_t))dB^E_t),
where the first term collects the individual noises (the reversible Markov jump drift), the remaining two terms collect the Wasserstein common noises, β>0 is a scalar for the canonical Wasserstein diffusion, and Π(p_t) is the product of positive eigenvalues of the Onsager response matrix L(p_t).
In future work, we shall investigate properties of Wasserstein drift-diffusion processes on discrete states. In particular, the boundary set and corners of the probability simplex bring difficulties for the existence of strong solutions of SDEs (<ref>). One has to assume that the square root of the activation function √(θ) is Lipschitz, which is often not satisfied for many divergence-induced activation functions. Another interesting question concerns entropy dissipation analysis for probability density functions supported on a simplex set; see <cit.>. In applications, we remark that finite state Wasserstein drift-diffusion processes are essential in the modeling and computation of population games in social dynamics. Typical examples include stochastic evolutionary dynamics <cit.>, mean field games <cit.>, and estimation problems in data sciences <cit.>. More importantly, we shall develop fast and accurate algorithms to compute and model Wasserstein drift diffusion processes arising in social sciences, biology, evolutionary game theory, and Bayesian and AI sampling problems.
10
am2006
L. Ambrosio, N. Gigli, and G. Savaré.
Gradient flows: in metric spaces and in the space of probability
measures.
Springer Science & Business Media, 2006.
cardaliaguet2019master
P. Cardaliaguet, F. Delarue, J. Lasry, and P. Lions.
The master equation and the convergence problem in mean field games.
Princeton University Press, 2019.
WD3
O. Chodosh.
A lack of Ricci bounds for the entropic measure on
Wasserstein space over the interval.
Journal of Functional Analysis, 262(10):4570–4581, 2012.
chow2012
S. N. Chow, W. Huang, Y. Li, and H. Zhou.
Fokker–Planck equations for a free energy
functional or Markov jumping process on a graph.
Archive for Rational Mechanics and Analysis, 203(3):969–1008,
2012.
CG
Y.T. Chow, and W. Gangbo.
A partial Laplacian as an infinitesimal generator on the Wasserstein space.
Journal of Differential Equations, v.267, 2019
Dean
D. Dean.
Langevin equation for the density of a system of interacting Langevin processes.
Journal of Physics A: Mathematical and General, Volume 29, Number 24, 1996.
WD2
M. Döring, and W. Stannat.
The logarithmic Sobolev inequality for the Wasserstein
diffusion.
Probability Theory and Related Fields, 145(1-2):189–209, 2009.
WWG
Y. Dukler, W. Li, A. Lin, and G. Montufar.
Wasserstein of Wasserstein Loss for Learning Generative Models.
ICML, 2019.
FY
D. Foster, and P. Young,
Stochastic evolutionary game dynamics.
Theoretical Population Biology, Volume 38, Issue 2, 219–232, 1990.
FAV
P. Fuchs, A. Jungel, and M. von Renesse.
On the Lagrangian structure of quantum fluid model.
Discrete and Continuous Dynamical Systems series A, 34(4): 1375-1396, 2014.
G
W. Gangbo, W. Li, and C. Mou.
Geodesic of minimal length in the set of probability measures on
graphs.
ESAIM: COCV, Volume 25, 2019.
GLL
Y. Gao, W. Li, and J.G. Liu.
Master equations for finite state mean field games with nonlinear activations
arXiv:2212.05675, 2022.
Hanggi84
P. Hanggi, H. Grabert, P. Talkner, and H. Thomas.
Bistable systems: Master equation versus Fokker-Planck modeling.
Physical Review A, 29(1), 371, 1984.
hofbauer1988theory
J. Hofbauer, and K. Sigmund.
The theory of evolution and dynamical systems: mathematical aspects of selection.
Cambridge University Press Cambridge, 1988.
Hopf
E. Hopf.
Statistical hydromechanics and functional calculus.
J. Rat. Mech. Anal., 1(1):87–123, 1952.
KK
K. Kawasaki.
Stochastic model of slow dynamics in supercooled liquids and dense colloidal suspensions.
Physica A: Statistical Mechanics and its Applications, Volume 208, Issue 1, Pages 35-64, 1994.
KLR
V. Konarovskyi, T. Lehmann, and M. von Renesse.
On Dean–Kawasaki Dynamics with Smooth Drift Potential.
Journal of Statistical Physics, 2020.
WD1
V. Konarovskyi, and M. von Renesse.
Modified Massive Arratia flow and Wasserstein diffusion.
Communications on pure and applied mathematics, 2018.
PL1
J. Lasry, and P. Lions.
Mean field games.
Jpn. J. Math, 2, 229–260, 2007.
LiG
W. Li.
Transport information geometry: Riemannian calculus on probability simplex.
Information Geometry, 5, 161–207, 2022.
EM1
J. Maas.
Gradient Flows of the Entropy for Finite Markov Chains.
Journal of Functional Analysis, 261(8):2250–2292, 2011.
MM
J. Maas, and A. Mielke.
Modeling of Chemical Reaction Systems with Detailed Balance Using Gradient Structures.
J Stat Phys, 181, 2257–2303, 2020.
M
A. Mielke.
A Gradient Structure for Reaction–diffusion
Systems and for Energy-Drift-Diffusion Systems.
Nonlinearity, 24(4):1329, 2011.
WD
M.K. von Renesse, and K.-T. Sturm.
Entropic measure and Wasserstein diffusion.
The Annals of Probability, 37(3):1114–1191, 2009.
ON
L. Onsager.
Reciprocal relations in irreversible processes,
I+II. Physical Review, 37,
405–426, 1931.
vil2008
C. Villani.
Optimal Transport: Old and New.
Number 338 in Grundlehren der mathematischen Wissenschaften.
Springer, Berlin, 2009.
abbrv
§ APPENDIX: DERIVATIONS OF L^*_W AND L_W
The derivation of the Fokker–Planck equation for SDE (<ref>) is standard and we omit it here.
We only show the derivation from the Wasserstein Laplace–Beltrami operator on the simplex set to the Kolmogorov forward and backward operators on the simplex set.
The Laplace–Beltrami operator in (𝒫(I), g^W) satisfies
Δ_W𝕡(p)= Π(p)^1/2∇_p·(Π(p)^-1/2L(p) ∇_p𝕡(p)),
where 𝕡∈ C^∞(𝒫(I); ℝ) is a probability density function on the simplex set w.r.t. vol_W. Here
∫_𝒫(I)𝕡(p)dvol_W(p)=1.
Denote a probability density function on the simplex set w.r.t. the Lebesgue measure in ℝ^n as
ℙ(p)=𝕡(p) Π(p)^-1/2.
Then operator (<ref>) takes the form
L_W^*ℙ(p)= ∇_p·(Π(p)^-1/2L(p) ∇_p𝕡(p))
= ∇_p·(𝕡(p)Π(p)^-1/2L(p) ∇_plog𝕡(p))
= ∇_p·(ℙ(p) L(p)∇_plogℙ(p)/Π(p)^-1/2)
= 1/2∇_p·(ℙ(p) L(p) ∇_plogΠ(p))+∇_p·(L(p) ∇_pℙ(p)),
where we use the fact
∇_p 𝕡(p)=𝕡(p)∇_plog𝕡(p), ∇_pℙ(p)=ℙ(p)∇_plogℙ(p).
In detail, we have
∇_p·(L(p) ∇_pℙ(p))= ∑_i=1^n∑_j=1^n∂/∂ p_j(L(p)_ij∂/∂ p_iℙ(p))
= ∑_i=1^n∑_j=1^n(∂/∂ p_jL(p)_ij∂/∂ p_iℙ(p)+L(p)_ij∂^2/∂ p_i∂ p_jℙ(p))
= ∑_i=1^n∑_j∈ N(i)ω_ij(∂/∂ p_i-∂/∂ p_j)θ_ij(p)∂/∂ p_iℙ(p)
-∑_i=1^n∑_j∈ N(i)ω_ijθ_ij(p)∂^2/∂ p_i∂ p_jℙ(p)+∑_i=1^n∑_k∈ N(i)ω_ikθ_ik(p)∂^2/∂ p_i∂ p_iℙ(p)
= 1/2∑_(i,j)∈ Eω_ij(∂/∂ p_i-∂/∂ p_j)θ_ij(p)(∂/∂ p_i-∂/∂ p_j)ℙ(p)
+1/2∑_(i,j)∈ Eω_ijθ_ij(p)(∂^2/∂ p_j∂ p_j+∂^2/∂ p_i∂ p_i-2∂^2/∂ p_i∂ p_j)ℙ(p).
In the above derivation, we use the fact that
∑_j=1^n∂/∂ p_jL(p)_ij= ∑_j≠ i∂/∂ p_jL(p)_ij+∂/∂ p_iL(p)_ii
= -∑_j∈ N(i)ω_ij∂/∂ p_jθ_ij(p)+∑_k∈ N(i)ω_ki∂/∂ p_iθ_ki(p)
= ∑_j∈ N(i)ω_ij(∂/∂ p_i-∂/∂ p_j)θ_ij(p).
Similarly, we can derive
∇_p·(ℙ(p) L(p) ∇_plogΠ(p))=(∇_pℙ(p), L(p)∇_plogΠ(p))+ℙ(p)∇_p·(L(p) ∇_plogΠ(p)).
This finishes the proof.
We next derive the Kolmogorov backward operator 𝖫_W for SDE (<ref>), which is defined by the duality relation
∫_𝒫(I)Φ(p)𝖫^*_Wℙ(p)dp=∫_𝒫(I)ℙ(p)𝖫_WΦ(p)dp.
Clearly, we have
𝖫_WΦ(p)= -1/2(∇_pΦ(p), L(p)∇_plogΠ(p))+∇_p·(L(p)∇_pΦ(p)).
This finishes the proof.
|
http://arxiv.org/abs/2307.01785v1 | 20230704154601 | Old but Not Obsolete: Dimensional Analysis in Nondestructive Testing and Evaluation | [
"Tamburrino Antonello",
"Sardellitti Alessandro",
"Milano Filippo",
"Mottola Vincenzo",
"Laracca Marco",
"Ferrigno Luigi"
] | eess.SP | [
"eess.SP"
] |
§ OLD BUT NOT OBSOLETE: DIMENSIONAL ANALYSIS IN NONDESTRUCTIVE TESTING AND EVALUATION
Antonello Tamburrino1,2, Alessandro Sardellitti1, Filippo Milano1, Vincenzo Mottola1, Marco Laracca3,
and Luigi Ferrigno1,4
1Dept. of Electrical and Information Engineering, University of Cassino and Southern Lazio,
03043 Cassino (FR), Italy
e-mail: {tamburrino, alessandro.sardellitti, filippo.milano, ferrigno}@unicas.it
2Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA
3Dept. of Astronautics, Electrical and Energy Engineering, Sapienza University of Rome,
00186 Rome, Italy
e-mail: [email protected]
4Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), Italy
Abstract
This paper introduces dimensional analysis in Non–Destructive Testing & Evaluation (NDT&E) problems. This is the first time that this approach is adopted in the framework of NDT&E, and the paper opens to the development of probes and methods to simultaneously estimate several parameters with a simple approach.
The most important theorem of dimensional analysis is the Buckingham's theorem, based on the concept that the laws of the physics do not depend on the particular set of units chosen. The core of this theorem is a systematic reduction in the number of variables describing a physical problem. This reduction is equal to k, the number of fundamental dimensions required to describe the variables of the physical problem in its original setting. This makes the approach ideal when the number of variables of the physical problem is not much greater than k.
In this work, we demonstrate the effectiveness of the approach for the simple problem of the simultaneous estimation of the thickness and electrical conductivity of a conducting plate, via Eddy Current Testing. The approach is original, effective, efficient, and currently patent-pending. All the aspects, from theory to experimental validation, are provided, and it is proved that the proposed method achieves a very good accuracy over a wide range of thicknesses and electrical conductivities. Moreover, the proposed method is compatible with the in-line and real-time estimation of thickness and electrical conductivity in an industrial environment.
Keywords: Dimensional analysis; Buckingham’s theorem; Non–Destructive Testing & Evaluation (NDT&E); Eddy Current Testing (ECT); Multi-parameter simultaneous estimation; Thickness estimation; Electrical conductivity estimation.
§ INTRODUCTION
Problems related to Non–Destructive Testing and Evaluation (NDT&E) generally involve several variables. As a matter of fact, the outcome of a
NDT&E test depends on (i) the parameters describing the probe (geometry, materials, ...), (ii) the physical and geometrical parameters of the sample under testing, (iii) the geometrical parameters describing the position of the probe with respect to the sample under testing and (iv) environmental factors. The number of variables involved and the correlated nature of these variables (e.g., excitation frequency and thickness estimation <cit.>) make an NDT&E problem difficult to handle. To this end, a methodology that can systematically reduce the complexity of a problem by decreasing the number of variables involved plays a very important role. For instance, this reduction in the number of variables has a major impact when a physical problem is modelled either via a numerical approach or via a machine learning approach <cit.>. In both cases there is an exponential reduction in the number of required numerical simulations or in the size of the training database.
To this purpose, dimensional analysis is a mathematical technique for analyzing problems involving physical quantities <cit.>.
Dimensional analysis can be used to simplify complex equations by highlighting the fundamental quantities describing a problem. Specifically, by analyzing only the physical dimensions of the variables involved in an equation, it is possible to determine a smaller number of fundamental quantities describing the original problem. This simplifies the computation of the solution of the original problem <cit.>. Dimensional analysis is commonly used in physics, engineering and other sciences to derive equations and verify experimental results <cit.>. This contribution is the first systematic study of the beauty and effectiveness of dimensional analysis in NDT&E, apart from <cit.>, where dimensional analysis was merely applied to a thickness estimation problem.
Within dimensional analysis, Buckingham's theorem plays a key role. Buckingham's theorem has its roots in the concept that the equations of physics cannot be affected by the choice of the units of the physical quantities <cit.>. It states that any physical law can be written in terms of dimensionless groups and it provides a procedure to find these dimensionless parameters, also called π groups. The key point is that the number of groups is smaller than the number of the original variables. For instance, if a physical problem is modelled by an equation of the type g( q_1, …, q_n )=0 and the physical dimensions of the q_i's are expressed by a set of k fundamental dimensions, then Buckingham's theorem allows the original physical problem to be cast as G ( π_1,…,π_p )=0, where p=n-k and π_1,…,π_p are the dimensionless groups. Buckingham's theorem brings a problem to its fundamental form through the groups <cit.>, reducing the quantities involved <cit.> and decreasing the mathematical complexity of the problem of interest.
In the scientific literature, there are many original applications where Buckingham's theorem has been successfully applied. Although this theorem was proposed a long time ago, it is still applied in science and engineering (... old but not obsolete ..., paraphrasing the sci-fi movie Terminator Genisys) <cit.>. In <cit.>, dimensional analysis was applied to the processing of biological cells using microfluidic devices. In <cit.>, using dimensionless groups, the authors studied the characteristics of different bearing parameters as the temperature varies. Furthermore, in <cit.> π groups were adopted for a more effective description of the characteristic parameters of the thermal balance for the energy demand evaluation of a high-performance non-residential building. The creation of a rapid impedance model for proton exchange membrane fuel cells using physical and geometric parameters was analyzed in <cit.>. There, it was suggested to define dimensionless groups according to Buckingham's theorem so that the relationships between the fundamental dimensions and the physical variables involved in the process under discussion can be adequately described. This strategy was helpful in solving issues where first-principles models are unknown, challenging to build, or impossible to compute. In <cit.>, methods based on Buckingham's theorem were developed for optimizing tests inside wind tunnels. In <cit.>, a comparative study on the wire electrical discharge machining of reinforcement materials was carried out by means of Buckingham's theorem. The theorem was used to model the influence of the input variables and thermophysical characteristics of wire electrical discharge machining on the material removal rate and surface roughness of aluminum and steel.
Although this approach has been widely adopted in various physical applications, to the best of our knowledge this methodology has never been applied in the NDT&E context. In this paper, we propose a new methodology to simultaneously estimate the thickness and the electrical conductivity of conductive plates by means of Eddy Current Testing (ECT). The proposed approach is based on dimensionless groups derived from the celebrated Buckingham's theorem <cit.>.
This specific application is motivated by recognizing that the measurement of the thickness and electrical conductivity of conductive materials is a crucial factor in all production and manufacturing processes (e.g., heat treatment, rolling and pressing). Indeed, these two quantities directly affect the quality properties of finished products, such as hardness, toughness, and tensile strength <cit.>. In this scenario, accurate and real-time monitoring of the thickness and electrical conductivity of conductive materials is essential to improve production quality and efficiency. In-line measurement techniques are essential because they enable automatic quality control during the production phase, ensuring products and materials with proper accuracy, reasonable prices and short inspection times, as required by the Industry 4.0 paradigm.
The possibility of applying ECT methods to the simultaneous estimation of several parameters, such as thickness, electrical conductivity and lift-off, has been widely studied in the literature. ECT methods are characterized by low-cost hardware and experimental set-ups, contactless measurements, and insensitivity to non-conductive materials such as paints, dust, etc. In <cit.>, a method was proposed to simultaneously measure the thickness and electrical conductivity of a conductive sample, based on a single-frequency ECT technique analyzing the phase of the mutual impedance. An ECT sensing system using anisotropic magnetoresistive sensors for the simultaneous estimation of thickness and electrical conductivity was proposed in <cit.>, while in <cit.> a new eddy current sensing method with a material-independent model for coupled-parameter estimation was proposed. An improved Newton iterative method to detect the thickness, electrical conductivity, permeability, and lift-off of a conductive sample based on multi-frequency excitation ECT was developed in <cit.>. Pulsed Eddy Current (PEC) techniques were also analyzed for multi-parameter estimation. For instance, in <cit.> the possibility of determining the thickness and electrical conductivity of conductive coatings on conductive plates was investigated using a PEC method, while in <cit.> a transient eddy-current measurement approach was proposed.
In this paper, we use the Buckingham’s theorem to simultaneously estimate the thickness and electrical conductivity of conductive plates. In particular, the major contributions of the paper are:
* a proper relationship, in terms of dimensionless groups, between the measured quantity and the physical variables affecting it, in a frequency-domain ECT experiment. We assume that the measured quantity is the self- or mutual impedance of the probe. This assumption does not restrict the generality of the method;
* a method together with its algorithm counterpart for the simultaneous estimation of the thickness and electrical conductivity by using dimensionless groups;
* a versatile experimental set–up for the simultaneous estimation of the thickness and electrical conductivity, based on single-frequency or multi-frequency measurements.
Compared with methods already established in the literature, the methodology proposed in this paper offers several advantages. First, Buckingham's theorem makes it possible to reduce the number of variables to be considered, thus leading to a reduction in the computational complexity of the problem. Second, it makes it possible to establish the structure of the relationships existing between the variables involved. Third, the proposed procedure is compliant with applications where the simultaneous estimation is required under in-line and real-time industrial conditions. Fourth, the proposed approach guarantees excellent accuracy.
The paper is organized as follows. In Section <ref> we briefly summarize the Buckingham’s theorem and show an example of application for modelling a simple RLC circuit. In Section <ref> we apply the Buckingham’s theorem to the specific problem and we derive the essential structure of the relationship between the relevant variables in terms of groups. Section <ref> provides the method for the simultaneous estimation of the thickness and electrical conductivity of a nonmagnetic plate. Section <ref> contains descriptions of the experimental set–up, case studies and experimental results. Finally, the conclusions are drawn in Section <ref>.
§ NOTATIONS
In this work we adopt the following standard notation:
Ẋ represents a complex number;
Ż (upper-case) represents a complex impedance value;
𝐯 represents a real-valued vector.
§ THE BUCKINGHAM’S THEOREM
Dimensional analysis encompasses the set of all methods useful to reduce the complexity of a physical problem before carrying out its quantitative analysis. Buckingham's theorem (1914) is a fundamental tool to achieve this result <cit.>. Its roots lie in previous publications by Lord Rayleigh (1877), J. Bertrand (1878), A. Vaschy (1892) and D. Riabouchinsky (1911). In its essence, Buckingham's theorem states that any physical law can be expressed in terms of dimensionless parameters, called π groups, since physical laws are independent of the system of units.
Buckingham's theorem is stated as follows.
Let a physical problem involving n dimensional scalar variables be modeled by a scalar equation of the type:
g (q_1, q_2, q_3, … ,q_n ) = 0.
Let the physical dimensions of all variables be expressed in terms of a set of k fundamental dimensions D_1,…,D_k:
q_i= D_1^a_i1×…× D_k^a_ik, i=1,…,n,
then there exist p=n-k dimensionless groups π_1, π_2, … , π_p such that (<ref>) can be cast in the form
G (π_1, π_2, π_3, …, π_p ) = 0.
Theorem <ref> extends trivially to other cases where the laws of physics are described by vector (tensor) quantities and/or by multiple equations.
Buckingham's theorem does not give the explicit expression of G, given g; rather, G has to be derived explicitly after the groups have been computed.
Examples of fundamental dimensions are those associated with the SI base units: T (time), L (length), M (mass), I (electric current), Θ (absolute temperature), N (amount of substance) and J (luminous intensity). However, any set of fundamental dimensions can be used in Theorem <ref>.
For the special case when there is one dependent variable, i.e. q_1=f ( q_2,…,q_n ), Buckingham's theorem gives
π_1=F ( π_2,…,π_p ).
Buckingham's theorem extends similarly to the case of two or more dependent variables.
A possible method to find the dimensionless groups is described in detail in Appendix <ref>. The RLC series circuit provides an example of application to make the concepts crystal clear. In an RLC circuit, the phasor I of the electrical current circulating in the elements (see Figure <ref>) is a function of the phasor of the voltage generator and of the passive components, i.e.
I=f ( E,ω,R,L,C)
that, as shown in Appendix <ref>, can be cast in dimensionless form as:
π_1=F ( π_2,π_3)
where
π_1=RI/E, π_2=ω L/R, π_3=1/ω R C.
The total number of physical variables required to describe the RLC series circuit reduces from n=6 of equation (<ref>) to p=3 of equation (<ref>), because the dimensions of I,E,ω,R,L,C can be expressed through a set of k=3 physical dimensions related to time, voltage and current.
The derivation of (<ref>) and (<ref>) does not rely on the explicit knowledge of f. However, the Buckingham's theorem does not provide the explicit expression for F, but rather, it must be derived from the knowledge of f and the related groups. For the RLC circuit, from
I=E/( R+jω L -j/(ω C) )
and the groups in (<ref>), it can be easily found that
R I/E=1/( 1+jω L/R -j/(ω R C) ),
i.e. F(π_2,π_3)=1/[1+ j ( π_2 - π_3) ].
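As a quick numerical sanity check, the following short Python sketch (ours, not part of the original derivation) verifies that the normalized current R I/E computed from the full circuit law collapses onto F(π_2,π_3), whichever values of E, R, L, C and ω realize the same groups:

import numpy as np

def current(E, R, L, C, omega):
    # Phasor current of the series RLC circuit: I = E / (R + j*omega*L - j/(omega*C))
    return E / (R + 1j * omega * L - 1j / (omega * C))

def F(pi2, pi3):
    # Dimensionless law pi_1 = F(pi_2, pi_3)
    return 1.0 / (1.0 + 1j * (pi2 - pi3))

rng = np.random.default_rng(0)
for _ in range(5):
    E, R, L, C, omega = rng.uniform(0.1, 10.0, size=5)
    pi1 = R * current(E, R, L, C, omega) / E
    pi2 = omega * L / R
    pi3 = 1.0 / (omega * R * C)
    assert np.isclose(pi1, F(pi2, pi3))   # both descriptions give the same value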
It is worth noting that the groups are not unique. The same law of physics can be expressed by means of many different sets of groups.
The main advantage of Buckingham's theorem consists in the reduction of the number of relevant variables describing a problem from n to p, with p=n-k. This approach is very effective especially when n is of the order of a few units. This is because k is of the order of a few units (k ≤ 7 in the SI), thus making p a fraction of n when n is of the order of a few units. For instance, in analyzing the RLC circuit we have n=6 and k=3, thus yielding p=3, that is, one-half of n. This has a major impact in reducing the amount of experimental and/or numerical data required to correlate physical variables. Indeed, to fully characterize the function f appearing in (<ref>), the parameter array ( E,ω,R,L,C ) has to be varied in ℂ×ℝ^4, whereas the full characterization of the function F in (<ref>) requires the parameter array ( π_2,π_3 ) to be varied in ℝ^2. This latter option (evaluation of F) is definitely less expensive, in terms of the number of experimental tests or numerical simulations, than the first one (evaluation of f). Summing up, dimensional analysis is a very powerful technique for formulating physics problems in their most basic forms, by minimizing the degrees of freedom of the problem.
Another advantage offered by Buckingham's theorem is that its application does not require a priori knowledge of the law relating the key physical quantities. It can be applied starting only from the knowledge of the physical variables describing the phenomenon. This is very important when the laws constraining the physical quantities are unknown, and it provides a guide for finding such laws.
§ DIMENSIONAL ANALYSIS IN EDDY CURRENT TESTING
The possibility of applying the ECT for the simultaneous estimation of multiple parameters, such as thickness and electrical conductivity, has attracted the interest of many researchers for a long time.
An Eddy Current Probe (ECP) is commonly made of a driving coil, which generates a time-varying magnetic flux density that induces, in turn, a current density in the conductive material, and of a receiver coil or field sensor that senses the reaction magnetic flux density due to the induced eddy currents. Probes of different shapes and arrangements can be adopted, based on a single coil or multiple coils for producing the driving field and for measuring the response of the material under testing <cit.>, on one coil for the driving field and a magnetic flux density sensor for sensing the response <cit.>, and so on. The tests can be carried out by means of either single-frequency or multiple-frequency approaches, analyzing different measured quantities such as the variation of the sensed magnetic flux density due to the presence of the material under test, or the variation of the self-impedance (for a single-coil application) or of the mutual impedance (for a multiple-coil application). In all these ECT scenarios, Buckingham's theorem can be suitably applied for a deep understanding of the structure of the relationships between the variables describing the physical problem and for simplifying the model thanks to the reduction of the number of degrees of freedom required to describe the system.
To demonstrate the effectiveness of Buckingham's theorem for the simultaneous evaluation of thickness and conductivity via ECT data, a specific case study was considered. Without loss of generality, an ECP made of two coaxial coils (T/R configuration) was considered: the upper coil is used as the driving coil and the lower coil as the pick-up coil. Coaxial coils and nonmagnetic materials are assumed throughout. Figure <ref> shows the geometry of the problem, with both the conductive plate and the ECP.
In the case of interest (T/R coil configuration), the measured quantity is ΔŻ_m = Ż_m,plate-Ż_m,air, i.e. the difference between the mutual impedance of the coils when the ECP is located on the plate (Ż_m,plate) and when it is in air (Ż_m,air), at a prescribed angular frequency. The key physical quantities determining ΔŻ_m ( ω ) are (see Figure <ref>):
* the parameters describing the geometry of the probe: the internal r_1 and external r_2 radii of the coils, the height h_1 and number of turns N_1 of the receiving coil, the height h_2 and number of turns N_2 of the driving coil and the separation between the coils d;
* the angular frequency ω of the driving current applied for the test;
* the thickness Δ h and the electrical conductivity σ of the conductive plate;
* the magnetic permeability of the vacuum μ_0 and the corresponding magnetic reluctivity ν_0=1/μ_0;
* the lift-off l_0 between the plate and the ECP and the tilting θ of the ECP with respect to the perpendicular to the plate.
The geometrical parameters of the ECP are normalized with respect to a length D=r_2 and are grouped in a dimensionless vector 𝐭=( r_1/D, h_1/D, h_2/D, d/D ). The normalization constant D represents the size of the probe. Other choices for D can be equally made.
All the listed parameters (geometry of the probe, number of turns of the coils, conductivity and thickness of the plate, lift-off and tilting of the ECP with respect to the plate) affect the mutual impedance between the transmitting and receiving coils, i.e.
ΔŻ_m/N_1 N_2 = f ( ω,σ,ν_0,Δ h,D,t,l_0,θ).
The evaluation of f, either by a numerical method or an experimental campaign, is not a trivial task, because its cost/time increases exponentially with the number of the arguments.
Equation (<ref>) involves a total of nine variables (n=9), with seven real-valued and scalar independent variables ω,σ,Δ h, D, l_0, θ,ν_0, one real-valued vector independent variable 𝐭 and one complex-valued dependent variable ΔŻ_m/N_1 N_2. Variables D and 𝐭 correspond to five real-valued and scalar variables (r_1,r_2,h_1,h_2,d).
All variables can be expressed in terms of three fundamental dimensions (k=3), namely length L, time T and impedance Ω, as shown in Table <ref>.
Buckingham's theorem allows six dimensionless groups (p=n-k=6) to be obtained, such as those listed in Table <ref> (see Appendix <ref> for details), where ν_0, ω and D have been assumed as repeating variables. It is worth noting that each dimensionless variable of the original problem, such as 𝐭 and θ, is assigned to a dimensionless group of its own, i.e. π_5=𝐭 and π_6=θ. Consequently, (<ref>) can be expressed as π_1 = F ( π_2,π_3,π_4,π_5,π_6 ), that is
ΔŻ_mν_0/N_1 N_2 ω D = F ( D √(ωσ/2 ν_0),Δ h/D,l_0/D,𝐭,θ)
= F ( D/δ,Δ h/D,l_0/D,𝐭,θ),
where F is a proper function and the skin-depth δ is equal to
δ = √(2 ν_0/ωσ).
By taking into account that the purpose of this study is to measure the thickness and the electrical conductivity of a conductive plate, given the characteristics 𝐭 of the ECP, the lift-off l_0 and the tilting θ, it is convenient to focus on the groups π_1, π_2 and π_3, since π_4, π_5 and π_6 are known. Consequently, the final dimensionless relationship under analysis is:
ΔŻ_mν_0/N_1 N_2 ω D = F ( D √(ωσ/2 ν_0),Δ h/D),
where, with a slight abuse of notations, the values of π_4, π_5 and π_6 are understood. Equation (<ref>) has to be compared with the counterpart of (<ref>) for prescribed (understood) 𝐭, l_0 and θ:
ΔŻ_m/N_1 N_2 = f ( ω,σ,ν_0,Δ h,D ).
The impact of Buckingham's theorem is relevant. First, starting from (<ref>), which involves a complex function of five real arguments, it is possible to get an equivalent relationship requiring a complex function F of only two real arguments, without the explicit knowledge of the original function f. The new (reduced) function F can be easily computed numerically or measured experimentally, since it is defined in ℝ^2 rather than in ℝ^5. Moreover, by combining (<ref>) with the groups of Table <ref>, it is easy to realize that f and F are tightly related:
f ( ω,σ,ν_0,Δ h,D ) = ω D/ν_0 F ( D √(ωσ/2 ν_0),Δ h/D),
thus the computation or experimental evaluation of F gives the values of f.
Second, Buckingham's theorem allows the inverse problem to be represented in the two-dimensional (π_2,π_3) plane, rather than in a five-dimensional space. This last remark is at the foundation of the method for the simultaneous estimation of σ and Δ h via level curves, described in Section <ref>.
Finally, we highlight that the same approach can be applied to the case of a single coil ECP or to the case when the reaction magnetic flux density is measured by a field sensor.
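Before moving on, a minimal Python sketch (ours; variable names and values are illustrative and not taken from the paper) of the mapping from the physical quantities of the T/R configuration to the groups π_1,…,π_4 of Table <ref> is given below:

import numpy as np

MU_0 = 4e-7 * np.pi        # vacuum magnetic permeability [H/m]
NU_0 = 1.0 / MU_0          # vacuum reluctivity nu_0 = 1/mu_0

def dimensionless_groups(delta_Zm, omega, sigma, thickness, lift_off, D, N1, N2):
    # pi_1: dimensionless (complex) measured impedance
    pi1 = delta_Zm * NU_0 / (N1 * N2 * omega * D)
    # pi_2 = D/delta, with delta the skin depth sqrt(2*nu_0/(omega*sigma))
    skin_depth = np.sqrt(2.0 * NU_0 / (omega * sigma))
    pi2 = D / skin_depth
    # pi_3 and pi_4: thickness and lift-off normalized by the probe size
    pi3 = thickness / D
    pi4 = lift_off / D
    return pi1, pi2, pi3, pi4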
§ SIMULTANEOUS ESTIMATE OF THICKNESS AND ELECTRICAL CONDUCTIVITY VIA DIMENSIONLESS GROUPS
In this Section the effectiveness of the concept of dimensional analysis is demonstrated with reference to the problem of the simultaneous estimation of the electrical conductivity and thickness of a metallic plate. This is a relevant problem, from the practical perspectives.
This Section is organized in two parts. In the first one, an approach based on level curves is proposed; level curves can be easily introduced thanks to the π groups, which play a key role. The second part is devoted to translating the physical limits of the method into the language of level curves.
Without loss of generality, a planar geometry has been considered for demonstrating the effectiveness of dimensional analysis. The same treatment can be applied to non-planar geometries like tubes.
§.§ π groups, level curves and estimation method
Thanks to the abstract representation of (<ref>), where the (complex) measured quantity π_1 is a function defined in the (π_2,π_3) plane, it is possible to introduce a set of level curves with respect to π_1. This is possible because the measured quantity Ż_m and the unknowns σ and Δ h do not mix within the groups.
To get a level curve in the (π_2,π_3) plane, it is required to prescribe the value of a real-valued quantity. For instance, it is possible to choose either the real part, the imaginary part, the magnitude, or the phase of π_1, or any other real function of π_1. Figure <ref> shows the level curves for the four basic quantities ℜ(π_1), ℑ(π_1), |π_1| and ∠(π_1) in the (π_2,π_3) plane. The parameters of the underlying ECP are provided in Table <ref>.
The level curves of Figure <ref> have been obtained from a numerical evaluation of F(π_2,π_3) in a range of values for π_2 and π_3.
The numerical evaluation has been carried out by evaluating ΔŻ_m via the semi-analytic model by Dodd and Deeds <cit.>. π_2 has been varied in the range [2.82; 28.2], whereas π_3 in the range [4.2 × 10^-3; 42 × 10^-3]. These ranges have been obtained by keeping σ constant and varying Δ h and ω, as in Table <ref>.
The level curves are a powerful tool to solve the equation
F(π_2,π_3) = π_1.
Specifically, from the measurement of ΔŻ_m at a prescribed angular frequency ω, it is possible to compute π_1 as ΔŻ_mν_0/(N_1 N_2 ω D). Then, from the specific value of π_1, it is possible to solve (<ref>) by finding the intersection point between the level curves from two or more different plots of Figure <ref>, as shown in Figure <ref>.
Once π_2 and π_3 have been evaluated, the unknown electrical conductivity σ and thickness Δ h can be evaluated as
σ =2 ν_0/ω( π_2/D)^2
Δ h = D π_3.
The step-by-step algorithm is:
* Measure ΔŻ_m at a prescribed ω;
* compute π_1=ΔŻ_mν_0/(N_1 N_2 ω D);
* find the level curves for at least two plots of Figure <ref>;
* find the intersection point (π_2,π_3) for the selected level curves;
* compute σ and Δ h via (<ref>) and (<ref>).
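The following Python sketch illustrates one possible (grid-search) implementation of steps 2–5; it assumes that F has been pre-computed off-line on a grid of (π_2,π_3) values, and it exploits the fact that intersecting the level curves of the real part, imaginary part, magnitude and phase of π_1 is equivalent to matching the full complex value of π_1. It is an illustrative sketch, not the authors' code.

import numpy as np

NU_0 = 1.0 / (4e-7 * np.pi)   # vacuum reluctivity

def estimate_sigma_and_thickness(pi1_meas, F_grid, pi2_axis, pi3_axis, omega, D):
    # F_grid[i, j] = F(pi2_axis[i], pi3_axis[j]), pre-computed and stored once and for all
    i, j = np.unravel_index(np.argmin(np.abs(F_grid - pi1_meas)), F_grid.shape)
    pi2, pi3 = pi2_axis[i], pi3_axis[j]
    sigma = 2.0 * NU_0 / omega * (pi2 / D) ** 2   # sigma from pi_2
    thickness = D * pi3                           # thickness from pi_3
    return sigma, thickness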
Summing up, dimensional analysis allows the problem of retrieving σ and Δ h to be cast in very simple terms, as the intersection of level curves in a plane. This is because the five primary parameters (ω,σ,ν_0,Δ h, D) influencing the measured data combine in the very compact form given by the groups π_2 and π_3, rather than individually as in (<ref>). For the same reason, i.e. that the influence parameters combine in a compact form, it is computationally feasible to evaluate numerically the function F(·,·), which depends on two parameters, rather than f(·,·,·,·,·), which depends on five parameters. Finally, it is worth noting that F(·,·) can be pre-computed and stored once and for all, given 𝐭, l_0 and θ.
§.§ Processing multiple measurements
Noise is a major issue when dealing with experimental data. To increase the accuracy of the method, it is appropriate to process the impedance measured at multiple frequencies. In this case, the level-curve intersection procedure has to be repeated at each angular frequency, thus obtaining a set of points in the (π_2,π_3) plane, as shown in Figure <ref>.
Since π_3 is independent of ω, whereas π_2 depends on ω, being proportional to √(ω), the intersection points are distributed along a horizontal line. To each intersection point related to the impedance measured at the i-th angular frequency ω_i there corresponds an estimate σ_i and Δ h_i of the electrical conductivity and thickness of the plate. The final estimate of σ and Δ h can be obtained by processing all the σ_i's and the Δ h_i's, with improved robustness since it combines the information from different frequencies.
Alternatively, it is possible to plot the level curves on the (σ,Δ h) plane, by means of (<ref>) and (<ref>). This latter strategy is extremely convenient because all the level curves, regardless of the angular frequency, intersect at the same point, as shown in Figure <ref>. The intersection point directly gives the estimate of σ and Δ h.
§.§ Regions of operation
The (π_2,π_3) plane can be divided into different regions, corresponding to the different information that can be inferred from the measured data.
There are three basic conditions that have to be considered:
* the skin-depth δ(ω) is sufficiently smaller than Δ h, i.e. π_2 π_3 is sufficiently larger than 1;
* the size of the probe D is much smaller than Δ h, i.e. π_3 ≫ 1;
* the skin-depth δ(ω) is much larger than Δ h, i.e. π_2 π_3 ≪ 1 and the size of the probe D is much larger than Δ h, i.e. π_3 ≪ 1.
In the first case (regions (c), (f) and (i) of Figure <ref>), the skin-depth is smaller than Δ h and this prevents the thickness Δ h from being retrieved from the data (see <cit.>), i.e. from the dimensionless impedance π_1. This behaviour can be easily recognized in the plots of Figure <ref>, where the level curves become almost vertical, meaning that π_3 does not affect the measured data π_1, i.e. Δ h cannot be retrieved from the knowledge of π_1. We found numerically that the region where the level curves are almost vertical corresponds to π_2 π_3 > 3, as shown in Figure <ref>. However, in these regions the electrical conductivity can still be retrieved from π_1, since a change in the electrical conductivity, i.e. in π_2, determines a change of π_1.
In the second case (regions (a), (b) and (c) of Figure <ref>), the probe is geometrically too small to interact with the bottom of the plate, regardless of the skin-depth, i.e. regardless of the value of π_2=D/δ. This prevents the thickness Δ h from being retrieved from π_1, but the electrical conductivity σ can still be retrieved. As in the previous case, the level curves are almost vertical in regions (a), (b) and (c).
In the third case (region (g) of Figure <ref>), the probe is much larger than the thickness Δ h and the plate is fully penetrated by the electromagnetic field. In this case it is possible to retrieve only the surface electrical conductivity of the plate, i.e. the σΔ h product <cit.>.
In the remaining regions (d), (e) and (h) of Figure <ref>, it is possible to retrieve both the electrical conductivity σ and the thickness Δ h, starting from the dimensionless impedance π_1. This is the so-called feasibility region, where the largest amount of information can be retrieved from the measured data. Region (h) deserves to be highlighted, because it opens up the possibility of measuring both the electrical conductivity and the thickness of thin and very thin plates.
The plots of the level curves provide another precious but less recognized piece of information. Specifically, at the higher frequencies where the thickness cannot be retrieved (regions (c), (f) and (i) of Figure <ref>), the spacing between the level curves increases, as shown in Figure <ref>. This means that the gradient of the measured data π_1 with respect to π_2 decreases for increasing π_2. In other terms, at large angular frequencies and/or electrical conductivities, the sensitivity of the measurement with respect to σ decreases, although some authors claimed that it is convenient to estimate the electrical conductivity under such conditions because the data do not depend on the thickness Δ h <cit.>.
In frequency domain operations, a proper selection of the frequency of the excitation signal is a task of paramount relevance.
As is well known, the strength of eddy current signals increases with the frequency of the driving signal, thus improving the Signal-to-Noise Ratio (SNR) of the measured signals. At the higher frequencies, i.e. at the smaller skin-depths, the electromagnetic field does not penetrate the plate completely; thus the thickness of the plate does not affect the measurements and, consequently, cannot be estimated from the measured data <cit.>. This well-known limitation appears clearly from the structure of the level curves. Indeed, as the thickness increases at a prescribed angular frequency, i.e. π_3 is increased at constant π_2, the level curves become vertical (see Figure <ref>), meaning that at such thicknesses the (dimensionless) measured quantity π_1 does not depend on π_3. The region where the level curves are vertical corresponds to the region where the data π_1 do not allow Δ h to be estimated and, therefore, measurements from this region have to be discarded.
This region is coherent with the limit set by the skin-depth defined in (<ref>). Indeed, the physical requirement that the plate has to be fully penetrated by the electromagnetic field, i.e. that Δ h has to be sufficiently smaller than δ(ω), can be easily brought into the (π_2,π_3) plane. Specifically, by translating this condition as k Δ h ≤δ(ω), where k is a constant of the order of unity, it results that
1/π_2 π_3 > k.
Therefore, the feasibility region, i.e. the region of the (π_2,π_3) plane where the electrical conductivity and the thickness can be retrieved from the measured data, consists of the points below a branch of the hyperbola 1/(π_2 π_3)=k, as shown in Figure <ref> for k=1. To estimate both the electrical conductivity and the thickness, the intersection between the different level curves must appear below this hyperbola.
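As an illustration, the feasibility check can be coded in a few lines (sketch only; k and the upper bound on π_2π_3 are the values discussed above):

def in_feasibility_region(pi2, pi3, k=1.0, pi2pi3_max=3.0):
    # fully penetrated plate: k * thickness <= skin depth, i.e. 1/(pi2*pi3) > k;
    # level curves not yet vertical: pi2*pi3 below the numerically found bound
    return (1.0 / (pi2 * pi3) > k) and (pi2 * pi3 < pi2pi3_max)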
§ THE PROPOSED MEASUREMENT PROCEDURE
The measurement procedure consists of three main phases subdivided into elementary steps as sketched in Figure <ref>.
The first phase is carried out off-line, once the parameters of interest are defined (e.g. thickness and electrical conductivity ranges to be estimated, probe characteristics, and frequency range to be analyzed). It is characterized by three steps, namely: parameter definition, numerical simulations, and experimental calibration. This phase is performed once and for all, and is repeated only if any of the defined characteristics (e.g. probe, frequency range, etc.) changes.
The second phase is carried out in-line and is characterized by two main steps: the experimental execution of the test at a defined excitation frequency (fixed inside the frequency range defined in Phase 1), and the processing of the measured quantities to estimate the thickness and electrical conductivity. With the aim of improving the quality of the thickness and electrical conductivity estimation, Phase 2 can be repeated for different values of the frequency.
Finally, in the third phase, the final estimate of the thickness and electrical conductivity is provided by means of the level curves of Figure <ref> or their equivalent of Figure <ref>, in the case of measurements at multiple frequencies.
In the following the details for each individual phase are provided.
Phase 1: Off-line activities
Step # 1.1 Parameters definition
In this step, the characteristics of both the Sample Under Test (SUT) and the ECP are defined. In particular, the following parameters are prescribed: (i) geometry, dimensions and physical parameters of the ECP; (ii) thickness and electrical conductivity ranges to be explored; (iii) range of the excitation frequencies to be analysed.
The main parameters used in this paper are summarized in Tables <ref> and <ref>.
Step # 1.2 Numerical simulations
This step consists in an off-line numerical simulation to evaluate the level curves, given the parameters defined in Step # 1.1.
In detail, for each value of thickness and frequency, the corresponding simulated value of ΔŻ_m (ΔŻ_m,sim) is obtained using the semi-analytic model developed by Dodd and Deeds <cit.> (other simulation tools can be used without limiting the generality of the proposal). Then, the resulting values of the dimensionless groups (π_1, π_2, π_3) are calculated, according to the definitions listed in Table <ref>.
In this phase, it is important to generate dimensionless curves for the largest possible number of different thicknesses and electrical conductivities. This means running several simulations, changing the thickness and the electrical conductivity (the product ωσ) inside the ranges defined in Step # 1.1 with a suitable resolution step, so as to have one level curve for each thickness and each ωσ product. In this phase, the desired resolution capability in the Δ h - σ estimation is defined. The time needed for this task is not an issue, since it is performed only once.
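A sketch of how this off-line sweep could be organized is given below; dodd_deeds_mutual_impedance is a placeholder for whatever forward model is available (the paper uses the Dodd–Deeds semi-analytic model) and is not implemented here.

import numpy as np

NU_0 = 1.0 / (4e-7 * np.pi)

def build_F_grid(dodd_deeds_mutual_impedance, sigma, D, N1, N2, thicknesses, omegas, lift_off=0.0):
    # Sweep thickness and omega (i.e. the omega*sigma product) and store pi_1 = F on the grid
    omegas = np.asarray(omegas, dtype=float)
    thicknesses = np.asarray(thicknesses, dtype=float)
    pi2_axis = D * np.sqrt(omegas * sigma / (2.0 * NU_0))
    pi3_axis = thicknesses / D
    F = np.empty((len(omegas), len(thicknesses)), dtype=complex)
    for i, w in enumerate(omegas):
        for j, dh in enumerate(thicknesses):
            dZ = dodd_deeds_mutual_impedance(w, sigma, dh, lift_off)
            F[i, j] = dZ * NU_0 / (N1 * N2 * w * D)
    return pi2_axis, pi3_axis, F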
Step # 1.3 Experimental calibration
In order to use the level curves simulated in Step # 1.2 to estimate unknown thicknesses and electrical conductivities from experimental results, a suitable experimental calibration phase is needed. The aim is to account for the discrepancies between the simulated and experimental data due to the adopted simulation model, the uncertainty in the knowledge of the geometrical and physical characteristics of the ECP, the experimental noise, the measurement uncertainty, and so on. In particular, considering a number of reference conductive plates with known electrical conductivity and thickness and using the ECP defined in Step # 1.1, several experimental tests have been carried out for each excitation frequency used to create the simulated level curves (Step # 1.2), obtaining the experimental values of ΔŻ_m (ΔŻ_m,exp). A calibration factor c is then evaluated as the ratio between numerical and experimental results for each considered frequency:
c(f) = ΔŻ_m,sim(f)/ΔŻ_m,exp(f).
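In code, the calibration step is straightforward (illustrative sketch, with complex arrays indexed by frequency):

import numpy as np

def calibration_factor(delta_Zm_sim, delta_Zm_exp):
    # c(f) = simulated / measured mutual-impedance variation on the reference plates
    return np.asarray(delta_Zm_sim) / np.asarray(delta_Zm_exp)

def calibrate(delta_Zm_exp, c):
    # applied to the in-line measurements before computing pi_1
    return np.asarray(delta_Zm_exp) * np.asarray(c)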
Phase 2: In-line activities
Step # 2.1 Experimental execution of the test at a defined excitation frequency
This step consists of the experimental acquisition of the data needed to evaluate ΔŻ_m,exp. This is an in-line experimental measurement activity performed at a defined excitation frequency (f^*), chosen from the values adopted in Step # 1.1, on a SUT with unknown thickness and electrical conductivity (inside the ranges defined in Step # 1.1). This step can be performed at only one frequency or repeated at other frequency values in order to improve the quality of the measurement of the unknown thickness and conductivity.
Step # 2.2 Data processing to estimate thickness and electrical conductivity
Firstly, the calibration factor c(f^*) is applied to the obtained experimental ΔŻ_m,exp(f^*), and the check of compliance with the feasibility region defined in Figure <ref> is made. Then, according to the dimensionless groups defined in Table <ref>, the values ℜ(π_1,exp)^*, ℑ(π_1,exp)^*, |π_1,exp|^* and ∠(π_1,exp)^* are calculated. Each of these values corresponds to a dimensionless curve on the simulated contour map obtained in Step # 1.2. Finally, the point of intersection of the four level curves is found in order to estimate the value π_2^* of the x-axis coordinate and the value π_3^* of the y-axis coordinate (as represented in Figures <ref> and <ref>).
Given the estimated quantities π_2^* and π_3^*, the thickness can be evaluated as Δ h^* = D π_3^*, while the electrical conductivity can be evaluated as σ^* = 2 ν_0/(2 π f^*) ( π_2^*/D)^2.
It is worth noting that Phase 2 is repeated at different frequencies in two possible cases: either after a failure of the check of compliance with the feasibility region, or to improve the accuracy of the estimate of the thickness and the electrical conductivity by leveraging measurements from different frequencies.
Phase 3: Data processing for thickness and electrical conductivity estimation in case of multi-frequency approach
This last phase is carried out only if the operator has decided to use more than one frequency for the measurement of thickness and conductivity. This choice is made with the purpose of reducing the measurement uncertainty, since a Δ h - σ estimate can be obtained for each frequency. It is thus possible to make suitable choices for the definition of the final results, such as evaluating them as the mean of the Δ h and σ values obtained at the different frequencies.
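A minimal sketch of this aggregation (assuming the simple averaging mentioned above) is:

import numpy as np

NU_0 = 1.0 / (4e-7 * np.pi)

def multi_frequency_estimate(pi2_list, pi3_list, omega_list, D):
    # Map each per-frequency intersection point (pi_2, pi_3) back to (sigma, thickness),
    # then average over the frequencies used in Phase 2.
    sigmas = [2.0 * NU_0 / w * (p2 / D) ** 2 for p2, w in zip(pi2_list, omega_list)]
    thicknesses = [D * p3 for p3 in pi3_list]
    return float(np.mean(thicknesses)), float(np.mean(sigmas))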
§ EXPERIMENTAL CHARACTERIZATION OF THE PROPOSED MEASUREMENT METHOD
§.§ Experimental set–up
To carry out the experimental characterization of the proposed measurement method, an experimental set-up was developed for the estimation of the Ż_m,air and Ż_m,plate values. In particular, Steps # 1.3 and # 2.1 of the procedure described above are involved.
The experimental set-up is composed of an ECP, a waveform generator, a current probe, two signal amplifiers, a data acquisition board and a Personal Computer (PC). A schematic block diagram of the experimental set-up is shown in Figure <ref>.
The ECP consists of two coaxial coils, one used as exciting coil and one as receiver coil. Details about the coils geometry and their dimensions are provided in Figure <ref> and Table <ref>, respectively. An Agilent 33120A waveform generator is used to provide the excitation current to the ECP exciting coil. The excitation current is sensed by means of a Tektronix TCP202A current probe. Both the voltage proportional to the excitation current (output of the current probe) and the output voltage on the receiver coil are conditioned by means of two SR560 Stanford Research System low-noise signal amplifiers. A TIE-PIE Engineering Handyscope HS5-540XMS-W5^TM data acquisition board is adopted to digitize both the output signals provided by the two conditioning units (the signal amplifiers).
Even if the proposal can be exploited with a single-frequency approach, with the aim of carrying out a metrological characterization of the proposed measurement method the tests have been carried out at different frequencies, by applying a swept-sine signal to feed the exciting coil (so the Phase 3 described in Section <ref> has also been carried out). In particular, the considered frequency range was from 300 Hz to 3 kHz, with a frequency step equal to 50 Hz. For each applied sine signal, an RMS current value of 115 mA is considered. The conditioning unit is characterized by a bandpass filter with a bandwidth between 30 Hz and 10 kHz and an amplification gain of 200 for the voltage proportional to the excitation current and of 100 for the output voltage of the receiver coil. In order to optimize the data processing in the time domain, the signal digitization is performed by adopting a sampling frequency 1000 times the considered signal frequency and acquiring 4 periods of each signal. A script developed in the MATLAB^TM environment, running on a dual-core PC, manages the automation of the whole measurement station and performs the signal processing to evaluate the desired quantities. The final outputs are the mutual impedances Ż_m,air and Ż_m,plate for the tests carried out in air and on the conductive plate, respectively.
§.§ Results and discussion
The experimental set-up described in Section <ref> is used for both the off-line activity (Step # 1.3) and the in-line activity (Step # 2.1). As for Step # 2.1, the tests have been carried out on six plates with known thicknesses and electrical conductivities. The main characteristics of the considered plates are listed in Table <ref> and are contained in the parameter ranges (see Step # 1.1) used to create the simulated dimensionless curves (see Step # 1.2).
For each considered plate and for each adopted excitation frequency value, 20 repeated measurements were carried out to investigate the repeatability in the estimation of both Δ h and σ.
To quantitatively analyze the goodness of the proposed method, the following figures of merit have been defined.
* The average of both the thickness (Δ h_f) and the electrical conductivity (σ_f) at each frequency, calculated as the average of all the thicknesses and electrical conductivities estimated over the 20 repeated measurements carried out at each considered frequency.
* The mean absolute relative error of the estimated thickness (ϵ_rf, Δ h) and electrical conductivity (ϵ_rf, σ) at each frequency with respect to the corresponding known values (see equations (<ref>) and (<ref>) respectively).
ϵ_rf, Δ h = |Δ h_f-Δ h|/Δ h· 100
ϵ_rf, σ = |σ_f-σ|/σ· 100
* The standard deviation of the 20 obtained relative errors at each frequency std_ϵ_rf, Δ h and std_ϵ_rf, σ.
* The overall average thickness (Δ h) and electrical conductivity (σ), calculated as the average of all the thicknesses and electrical conductivities estimated over all the excitation frequencies and all the repetitions.
* The mean absolute relative error of the estimated thickness (ϵ_r, Δ h) and electrical conductivity (ϵ_r, σ) with respect to the nominal thickness and electrical conductivity respectively (see equations (<ref>) and (<ref>) respectively).
ϵ_r, Δ h = |Δ h-Δ h|/Δ h· 100
ϵ_r, σ = |σ-σ|/σ· 100
* The estimated absolute standard deviations (std_Δ h, std_σ) on all the estimated thicknesses and electrical conductivities.
* The estimated relative standard deviations (std_ϵ_r, Δ h and std_ϵ_r, σ) on overall absolute relative errors.
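For clarity, the per-frequency figures of merit can be computed as in the following sketch (ours), where estimates collects the 20 repeated estimates of Δ h (or σ) at a given frequency and nominal is the corresponding known value:

import numpy as np

def figures_of_merit(estimates, nominal):
    estimates = np.asarray(estimates, dtype=float)
    rel_err_of_mean = abs(estimates.mean() - nominal) / nominal * 100.0   # eps_rf [%]
    per_rep_rel_err = np.abs(estimates - nominal) / nominal * 100.0
    return rel_err_of_mean, float(per_rep_rel_err.std())                  # (eps_rf, std_eps_rf)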
In detail, for all the considered frequencies that fall inside the feasible operating area, Figure <ref> shows the behaviour of the mean absolute relative errors ϵ_rf, Δ h (a), and the corresponding standard deviation std_ϵ_rf, Δ h (b) related to the thickness estimation. The corresponding figures of merit in the case of conductivity estimation are reported in Figure <ref> (a) (ϵ_rf,σ) and Figure <ref> (b) (std_ϵ_rf, σ).
As expected, for all the analysed plates, both the errors and the standard deviations are generally larger at low excitation frequencies for both Δ h and σ (due to the weakness of the eddy currents at those frequency values). Increasing the excitation frequencies, the relative error assumes suitable values, always lower than 5 % for the thickness and 4 % for the electrical conductivity. Similar behaviours can be observed for the standard deviations (see Figure <ref> (b) and Figure <ref> (b)). The obtained results prove the suitability of the proposed method to measure both thickness and electrical conductivity with a single-frequency measurement. This is confirmed by the minimum values obtained for the errors and standard deviations of both thickness and conductivity, which can reach values lower than 0.1 %.
Several considerations can be made on the suitable choice of the optimal frequency to be used for the single-frequency approach, or of the range of frequencies to be adopted if the multi-frequency solution is considered. This kind of analysis is outside the scope of this first experimental validation of the proposed method; it will be carried out in a future work, where a deep metrological characterization will be proposed. To complete the analysis of the data related to this first experimental validation, Tables <ref> and <ref> show the considered figures of merit in terms of error and standard deviation on the overall measured data (all frequencies and all repetitions) for the thickness and the electrical conductivity estimation, respectively. These results can be seen as a possible output of Phase 3 of the proposed method in the case of a multi-frequency approach. Obviously, the mean values shown in the tables are affected by the poor results at the low frequency values, which could be suitably avoided by restricting the frequency ranges.
§ CONCLUSION
This paper introduces dimensional analysis in Non–Destructive Testing & Evaluation problems for the first time.
The use of dimensional analysis allows a physical system to be represented via a minimal set of variables, the so-called π groups. The groups are dimensionless and, moreover, can be found from the knowledge of only the physical dimensions of the variables describing the original problem. This allows (i) the computational cost to be reduced when numerically simulating the physical system of interest, as required for quantitative inversion of the data, training of Artificial Intelligence algorithms, repeated simulations to design a new probe, etc., and (ii) an inverse problem to be represented in a reduced dimensional space, as in our application, where a simple inverse problem was represented in a plane.
To present the approach in a crystal clear manner, an Eddy Current Testing method for the simultaneous estimation of the thickness and electrical conductivity of conductive plates was proposed. The method has been presented from the concept to a successful experimental validation carried out on metallic plates with different thicknesses (from 1 to 4 mm) and electrical conductivities (from 17 to 58 MS/m). The method can be applied to either single- or multi-frequency data, and its negligible computational cost makes it suitable for industrial in-line inspections.
A complete metrological characterization of the proposed method is currently in progress. Future work will address the optimization of the frequency range for the inspection, as well as the introduction of multi-frequency excitation signals.
§ CONSTRUCTION OF THE DIMENSIONLESS GROUPS
Here the method to compute the dimensionless groups is sketched. We refer to <cit.> and <cit.> for details.
From a general perspective, each group can be expressed as
π_i = q_1^α_i1⋯ q_j^α_ij⋯ q_n^α_in
where the exponents α_ij, j = 1,2,…,n, are rational numbers such that π_i is a dimensionless quantity.
To find the α_ij's, a set of k so-called repeating variables (see <cit.>), chosen among the n dimensional quantities q_1,…,q_n, has to be defined. The repeating variables must satisfy the following constraints: (i) their products, with proper exponents, give all the physical dimensions of the underlying problem; (ii) they are independent; (iii) their arbitrary nontrivial products do not generate a dimensionless quantity; (iv) they should not be dependent variables of the problem, if any. Assuming the k repeating variables are q_1,…,q_k, each group is expressed as
π_i=q_1^α_i1×…× q_k^α_ik q_k+i, i=1,…,n-k,
and coefficients α_ijs are found by imposing each group to be dimensionless.
As an example of application, the procedure for the construction of the dimensionless groups for the simple RLC series circuit of Figure <ref>, operated in the frequency domain, is proposed. The n=6 physical quantities are listed in Table <ref>.
The electrical current I is assumed to depend on the other quantities, i.e.
I = f ( E,R,L,C,ω).
A set of possible repeating variables is R, E, ω. Indeed, (i) the fundamental dimensions can be expressed in terms of monomial products of the repeating variables (see Table <ref>), (ii) the repeating variables are independent, (iii) arbitrary nontrivial monomial products of them do not generate dimensionless quantities, and (iv) they are not dependent variables. Condition (iii) can be checked algebraically. Indeed, such a condition is satisfied if and only if the kernel of the matrix made of the coefficients of the fundamental dimensions for the repeating variables is {0}. In this specific case, the exponents of the fundamental dimensions for ω, E and R give the matrix
[ 0 0 -1; 0 1 0; -1 1 0 ],
that is invertible and, therefore, its kernel is {0}.
The groups arising from these choices, are of the form
π_1 = R^α_11 E^α_12ω^α_13 I
π_2 = R^α_21 E^α_22ω^α_23 L
π_3 = R^α_31 E^α_32ω^α_33 C.
To compute coefficients α_ij, it suffices to write a system of equations imposing that each group is dimensionless. For instance, with reference to π_1, this condition is
A^0V^0T^0=( A^-1V^1T^0 ) ^α_11( A^0V^1T^0 ) ^α_12( A^0V^0T^-1) ^α_13( A^1V^0T^0 ),
that gives α_11=1, α_12=-1 and α_13=0 as solution of (<ref>).
0 = -α_11 + 1
0 = +α_11 + α_12
0 = -α_13
The complete list of groups is given in (<ref>).
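The same computation can be carried out by linear algebra, as in the following Python sketch (ours): for each non-repeating variable q, the exponents of R, E and ω solve a 3×3 linear system obtained by imposing that R^a E^b ω^c q is dimensionless, with dimensions expressed in the fundamental set (A, V, T). Note that for C the procedure naturally returns the group ωRC, i.e. the reciprocal of the π_3 chosen above, which is an equally valid choice since the groups are not unique.

import numpy as np

DIM = {                                      # exponents of (A, V, T)
    "R":     np.array([-1.0,  1.0,  0.0]),   # V/A
    "E":     np.array([ 0.0,  1.0,  0.0]),   # V
    "omega": np.array([ 0.0,  0.0, -1.0]),   # 1/T
    "I":     np.array([ 1.0,  0.0,  0.0]),   # A
    "L":     np.array([-1.0,  1.0,  1.0]),   # V*T/A
    "C":     np.array([ 1.0, -1.0,  1.0]),   # A*T/V
}

M = np.column_stack([DIM["R"], DIM["E"], DIM["omega"]])
assert abs(np.linalg.det(M)) > 1e-12         # condition (iii): the kernel of M is {0}

for q in ("I", "L", "C"):
    a, b, c = np.linalg.solve(M, -DIM[q])    # exponents of R, E, omega for the group of q
    print(q, "->", a, b, c)                  # yields R*I/E, omega*L/R and omega*R*C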
|
http://arxiv.org/abs/2307.01178v1 | 20230703174422 | Learning Mixtures of Gaussians Using the DDPM Objective | [
"Kulin Shah",
"Sitan Chen",
"Adam Klivans"
] | cs.DS | [
"cs.DS",
"cs.LG",
"stat.ML"
] |
Recent works have shown that diffusion models can learn essentially any distribution provided one can perform score estimation.
Yet it remains poorly understood under what settings score estimation is possible, let alone when practical gradient-based algorithms for this task can provably succeed.
In this work, we give the first provably efficient results along these lines for one of the most fundamental distribution families, Gaussian mixture models.
We prove that gradient descent on the denoising diffusion probabilistic model (DDPM) objective can efficiently recover the ground truth parameters of the mixture model in the following two settings:
* We show gradient descent with random initialization learns mixtures of two spherical Gaussians in d dimensions with 1/poly(d)-separated centers.
* We show gradient descent with a warm start learns mixtures of K spherical Gaussians with Ω(√(log(min(K,d))))-separated centers.
A key ingredient in our proofs is a new connection between score-based methods and two other approaches to distribution learning, the EM algorithm and spectral methods.
§ INTRODUCTION
In recent years diffusion models <cit.> have emerged as a powerful framework for generative modeling and now form the backbone of notable image generation systems like DALL·E 2 <cit.>, Imagen <cit.>, and Stable Diffusion <cit.>. At the heart of this framework is a reduction from distribution learning to denoising or score estimation. That is, in order to generate new samples from a data distribution q given a collection of independent samples, it suffices to learn the score function, i.e., the gradient of the log-density of the data distribution when convolved with varying levels of noise (see Section <ref>). A popular and well-studied objective for score matching is the denoising diffusion probabilistic model (DDPM) objective due to <cit.>. Optimizing this objective amounts to solving the following type of problem: given a noisy observation x of a sample x from q, estimate the mean of the posterior distribution over x.
While a number of theoretical works <cit.> have established rigorous convergence guarantees for diffusion models under mild assumptions on the data distribution, these works assume the existence of an oracle for score estimation and leave open whether one can actually provably implement such an oracle for interesting families of data distributions. In practice, the algorithm of choice for score estimation is simply to train a student network via gradient descent (GD) to fit a set of examples (x,x). We thus ask:
Are there natural data distributions under which GD provably achieves accurate score estimation?
In this work, we consider the setting where q is given by a mixture of Gaussians. Concretely, we assume that there exist centers μ_1^*,…,μ_K^*∈^d such that
q = 1/K∑^K_i=1𝒩(μ_i^*, I) .
We answer the above question in the affirmative for this class of distributions:
Gradient descent on the DDPM objective with random initialization efficiently learns the parameters of an unknown mixture of two spherical Gaussians with 1/poly(d)-separated centers.
When there is a warm start of the centers, gradient descent on the DDPM objective efficiently learns the parameters of an unknown mixture of K spherical Gaussians with Ω(√(log(min(K,d))))-separated centers.
The DDPM objective is described in Algorithm <ref>. The term “efficiently” above means that both the running time and sample complexity of our algorithm are polynomial in the dimension d, the inverse accuracy 1/ϵ, and the number of components K. In the informal discussion, we often work with population gradients for simplicity, but in our proofs we show that empirical estimates of the gradient suffice (full details can be found in the Appendix).
We refer to Section <ref> for a formal description of the quantities used in Algorithm <ref>. Note that there are by now a host of different algorithms for provably learning mixtures of Gaussians (see Section <ref>).
For instance, it is already known that expectation-maximization (EM) achieves the quantitative guarantees of Theorems <ref> and <ref> <cit.>, and in fact even stronger guarantees are known via the method of moments. Unlike works based on the method of moments however, our algorithm is practical. And unlike works based on EM, it is based on an approach which is empirically successful for a wide range of realistic data distributions. Furthermore, as we discuss in Section <ref>, the analysis of Algorithm <ref> leverages an intriguing and, to our knowledge, novel connection from score estimation to EM, as well as to another notable approach for learning mixture models, namely spectral methods. Roughly speaking, at large noise levels, the gradient updates in Algorithm <ref> are essentially performing a type of power iteration, while at small noise levels, the gradient updates are performing the “M” step in the EM algorithm.
§.§ Related work
Theory for diffusion models. A number of works have given convergence guarantees for DDPMs and variants <cit.>. These results show that, given an oracle for accurate score estimation, diffusion models can learn essentially any distribution over ^d (e.g. <cit.> show this for arbitrary compactly supported distributions). Additionally, two recent works <cit.> have used Eldan's stochastic localization <cit.>, which is a reparametrization in time and space of the reverse SDE for DDPMs, to give sampling algorithms for certain distributions arising in statistical physics.
As we discuss next, these works are end-to-end in that they also give provable algorithms for score estimation via approximate message passing, though the statistical task they address is not distribution learning.
Provable score estimation. There is a rich literature giving Bayes-optimal algorithms for various natural denoising problems via methods inspired by statistical physics, like approximate message passing (AMP) (e.g. <cit.>) and natural gradient descent (NGD) on the TAP free energy <cit.>. The abovementioned works <cit.> (see also <cit.>) build on these techniques to give algorithms for the denoising problems that arise in their implementation of stochastic localization. These works on denoising via AMP or NGD are themselves part of a broader literature on variational inference; a suitable literature review would be beyond the scope of this work, see e.g. <cit.>.
We are not aware of any provable algorithms for score estimation explicitly in the context of distribution learning. That said, it may be possible to extract a distribution learning result from <cit.>. While their algorithm was for sampling from the Sherrington-Kirkpatrick (SK) model given the Hamiltonian rather than training examples as input, if one is instead given training examples drawn from the SK measure, then at sufficiently high temperature one can approximately recover the Hamiltonian <cit.>. In this case, a suitable modification <cit.> should be able to yield an algorithm for approximately generating fresh samples from the SK model given training examples.
Learning mixtures of Gaussians. The literature on provable algorithms for learning Gaussian mixture models is vast, dating back to the pioneering work of Pearson <cit.>, and we cannot do justice to it here. We mention only works whose quantitative guarantees are closest in spirit to ours and refer to the introduction of <cit.> for a comprehensive overview of recent works in this direction. For mixtures of identity-covariance Gaussians in high dimensions, the strongest existing guarantee is a polynomial-time algorithm <cit.> for learning the centers as long as their pairwise separation slightly exceeds Ω(√(log K)) based on a sophisticated instantiation of method of moments inspired by the quasipolynomial-time algorithms of <cit.>. By the lower bound in <cit.>, this is essentially optimal. In contrast, our Theorem <ref> only applies given one initializes in a neighborhood of the true parameters of the mixture. We also note the exponential-time spectral algorithm of <cit.> and quasipolynomial-time tensor-based algorithm of <cit.>, which achieve density estimation even in the regime where the centers are arbitrarily closely spaced and learning the centers is information-theoretically impossible.
A separate line of work has investigated the “textbook” algorithm for learning Gaussian mixtures, namely the EM algorithm <cit.>. Notably, for balanced mixtures of two Gaussians with the same covariance, <cit.> showed that finite-sample EM with random initialization converges exponentially quickly to the true centers. For mixtures of K Gaussians with identity covariance, <cit.> showed that from an initialization sufficiently close to the true centers, finite-sample EM converges exponentially quickly to the true centers as long as their pairwise separation is Ω(√(log K)). In particular, <cit.> establish this local convergence as long as every center estimate is initialized at distance at most Δ/2 away from the corresponding true center, where Δ is the minimum separation between any pair of true centers; this radius of convergence is provably best possible for EM.
Lastly, we note that there are many works giving parameter recovery algorithms for mixtures of Gaussians with general mixing weights and covariances, all of which are based on the method of moments <cit.>. Unfortunately, for general mixtures of K Gaussians, these algorithms run in time at least d^O(K), and there is strong evidence <cit.> that this is unavoidable for computationally efficient algorithms.
§.§ Technical overview
We begin by describing in greater detail the algorithm we analyze in this work. For the sake of intuition, in this overview we will focus on the case of mixtures of two Gaussians (K=2) where the centers are well-separated and symmetric about the origin, that is, the data distribution is given by
q = 1/2𝒩(μ^*,) + 1/2𝒩(-μ^*,) .
At the end of the overview, we briefly discuss the key challenges for handling smaller separation and general K.
Loss function, architecture of the score function and student network. The algorithmic task at the heart of score estimation is that of denoising. Formally, for some noise level t > 0, we are given a noisy sample
X_t = exp(-t) X_0 + √(1 - exp(-2t)) Z_t ,
where X_0 is a clean sample drawn from the data distribution q, and Z_t∼(0,). Conditioning on X_t induces some posterior distribution over the noise Z_t, and our goal is to form an estimate s for the mean of this posterior which achieves small error on average over the randomness of X_0 and Z_t. That is, we would like to minimize the DDPM objective, which up to rescaling is
given by[The real DDPM objective is slightly different, see (<ref>). The latter is what we actually consider in this paper, but this distinction is unimportant for the intuition in this overview.]
L_t(s) = 𝔼_X_0, Z_ts(X_t) - Z_t^2 .
As discussed in the introduction, the algorithm of choice for minimizing this objective in practice is gradient descent on some student network. To motivate our choice of architecture, note that when the data distribution is given by (<ref>), the true minimizer of L_t is, up to scaling,
tanh(⟨μ^*_t, x⟩)μ^*_t - x , where μ^*_t ≜μ^* exp(-t) .
See Appendix <ref> for the derivation. Notably, Eq. (<ref>) is exactly a two-layer neural network with tanh activation. As a result, we use the same architecture for our student network when running gradient descent.
That is, given weights μ∈^d, our student network is given by s_μ(x) ≜tanh(μ^⊤ x)μ - x. The exact gradient updates on μ are given in Lemma <ref>.
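To make the optimization problem concrete, the following is a minimal numerical sketch (not the authors' implementation) of empirical gradient descent on the DDPM objective with this tanh student network. It assumes PyTorch, uses illustrative values of d, n, η, and the noise scale t, and for simplicity absorbs the exp(-t) rescaling of μ into the trained parameter.

import torch

def ddpm_loss(mu, x0, z, t):
    # Empirical DDPM objective at noise scale t for the student s_mu(x) = tanh(mu^T x) mu - x,
    # with the exp(-t) rescaling of mu absorbed into the trained parameter.
    alpha = torch.exp(-t)
    beta = torch.sqrt(1.0 - torch.exp(-2.0 * t))
    xt = alpha * x0 + beta * z                        # noisy samples X_t
    s = torch.tanh(xt @ mu).unsqueeze(1) * mu - xt    # student score s_mu(X_t)
    return ((s + z / beta) ** 2).sum(dim=1).mean()

torch.manual_seed(0)
d, n, eta = 16, 20000, 0.05                            # illustrative sizes (assumptions)
t = torch.tensor(0.5)                                  # a fixed noise scale (assumption)
mu_star = torch.randn(d)
mu_star = 2.0 * mu_star / mu_star.norm()               # synthetic ground truth
signs = (torch.randint(0, 2, (n, 1)) * 2 - 1).float()
x0 = signs * mu_star + torch.randn(n, d)               # samples from 0.5 N(mu*, I) + 0.5 N(-mu*, I)
z = torch.randn(n, d)                                  # the noise Z_t paired with each sample
mu = torch.randn(d, requires_grad=True)                # random initialization
for _ in range(500):
    loss = ddpm_loss(mu, x0, z, t)
    loss.backward()
    with torch.no_grad():
        mu -= eta * mu.grad
        mu.grad.zero_()

This is only meant to make the objective and the gradient dynamics concrete; the analysis below explains when and why such a procedure succeeds.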
As we discuss next, depending on whether the noise level t is large or small, this update closely approximates the update in one of two well-studied algorithms for learning mixtures of Gaussians: power method and EM respectively.
Learning mixtures of two Gaussians. We first provide a brief overview of the analysis and then go into the details of the analysis. We start with mixtures of two Gaussians of the form (<ref>) where μ^* is Ω(1). In this case, we analyze the following two-stage algorithm. We first use gradient descent on the DDPM objective with large t starting from random initialization. We show that gradient descent in this “high noise” regime resembles a type of power iteration and gives μ that has a nontrivial correlation with μ^*_t. Starting from this μ, we then run gradient descent with small t. We show that the gradient descent in this “small noise” regime corresponds to the EM algorithm and converges exponentially quickly to the ground truth.
Large noise level: connection to power iteration. When t is large, we show that gradient descent on the DDPM objective is closely approximated by power iteration. More precisely, in this regime, the negative gradient of L_t(s_μ) is well-approximated by
-∇_μ L_t(s_μ) ≈ (2μ_t^* μ_t^*⊤ - r ) μ ,
where r is a scalar that depends on μ (See Lemma <ref>). So the result of a single gradient update with step size η starting from μ is given by
μ' ≜μ - η∇_μ L_t(s_μ) ≈ ((1 - η r ) + 2 ημ_t^* μ_t^*⊤ ) μ .
This shows us that each gradient step can be approximated by one step of power iteration (without normalization) on the matrix (1 - η r ) + 2 ημ_t^* μ_t^*⊤. It is known that running enough iterations of the latter from a random initialization will converge in angular distance to the top eigenvector, which in this case is given by μ^*_t. This suggests that if we can keep the approximation error in (<ref>) under control, then gradient descent on μ will also allow us to converge to a neighborhood of the ground truth. We implement this strategy in Lemma <ref>. Next, we argue that once we are in a neighborhood of the ground truth, we can run GD on the DDPM objective at low noise level to refine our estimate.
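The surrogate dynamics can be simulated directly. The following sketch (illustrative only, with arbitrary dimension and step size of our choosing) iterates the matrix update above and tracks the cosine of the angle to μ^*_t, which approaches 1 as the angle shrinks.

import numpy as np

def surrogate_step(mu, mu_star_t, eta):
    # One step of the (unnormalized) power-iteration surrogate:
    # mu' = ((1 - 3*eta*||mu||^2) I + 2*eta * mu_t* mu_t*^T) mu
    M = (1.0 - 3.0 * eta * np.dot(mu, mu)) * np.eye(len(mu)) \
        + 2.0 * eta * np.outer(mu_star_t, mu_star_t)
    return M @ mu

rng = np.random.default_rng(0)
d, eta = 8, 0.05
mu_star_t = rng.standard_normal(d)
mu_star_t /= np.linalg.norm(mu_star_t)        # illustrative ground-truth direction
mu = 0.1 * rng.standard_normal(d)             # random initialization
for _ in range(200):
    mu = surrogate_step(mu, mu_star_t, eta)
cos_angle = abs(mu @ mu_star_t) / np.linalg.norm(mu)   # approaches 1 as the angle shrinks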
Low noise level: connection to the EM algorithm. When t is small, we show that gradient descent on the DDPM objective is closely approximated by EM. Here, our analysis uses the fact that μ^* is sufficiently large and requires that we initialize μ to have sufficiently large correlation with the true direction μ^*_t. We can achieve the latter using the large-t analysis in the previous section.
Provided we have this, when t is small it turns out that the negative gradient is well-approximated by
-∇_μ L_t(s_μ) ≈E_X ∼N(μ_t^*, ) [ tanh ( ⟨μ, X⟩ ) X ] - μ .
Note that the expectation is precisely the “M”-step in the EM algorithm for learning mixtures of two Gaussians (see e.g. Eq. (2.2) of <cit.>).
We conclude that a single gradient update with step size η starting from μ is given by mixing the old weights μ with the result of the “M”-step in EM:
μ' ≜μ - η∇_μ L_t(s_μ) ≈ (1 - η) μ + ηE_X ∼N(μ_t^*, ) [ tanh ( ⟨μ, X ⟩ ) X ]_“M” step in the EM algorithm .
<cit.> and <cit.> showed that EM converges exponentially quickly to the ground truth μ^*_t from a warm start, and we leverage ingredients from their analysis to prove the same guarantee for gradient descent on the DDPM objective at small noise level t (see Lemma <ref>).
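As a rough illustration of this approximation (our own sketch, not from the paper), the following code estimates the "M"-step expectation by Monte Carlo and forms the mixed update above; the sample size n and step size η are arbitrary choices.

import numpy as np

def em_m_step(mu, mu_star, n=100_000, seed=1):
    # Monte Carlo estimate of the "M"-step E_{x ~ N(mu*, I)}[tanh(<mu, x>) x].
    rng = np.random.default_rng(seed)
    x = mu_star + rng.standard_normal((n, len(mu)))
    return (np.tanh(x @ mu)[:, None] * x).mean(axis=0)

def approx_gradient_step(mu, mu_star, eta=0.05):
    # Low-noise approximation of one DDPM gradient step: mix the old weights with the "M"-step.
    return (1.0 - eta) * mu + eta * em_m_step(mu, mu_star)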
Extending to small separation. Next, suppose we instead only assume that μ^* is Ω(1/poly(d)), i.e. the two components in the mixture may have small separation. The above analysis breaks down for the following reason: while it is always possible to show that gradient descent at large noise level converges in angular distance to the ground truth, if μ^* is small, then we cannot translate this to convergence in Euclidean distance.
We circumvent this as follows. Extending the connection between gradient descent at large t and power iteration, we show that a similar analysis where we instead run projected gradient descent over the ball of radius μ^* yields a solution arbitrarily close to the ground truth, even without the EM step.[Note that although μ^* is unknown, we can estimate its norm from samples.] The projection step can be thought of as mimicking the normalization step in power iteration.
It might appear to the reader that this projected gradient-based approach is strictly superior to the two-stage algorithm described at the outset. However, in addition to obviating the need for a projection step when separation is large, our analysis for the two-stage algorithm has the advantage of giving much more favorable statistical rates. Indeed, we can show that the sample complexity of the two-stage algorithm has optimal dependence on the target error (1/ϵ^2), whereas we can only show a suboptimal dependence (1/ϵ^8) for the single-stage algorithm.
Extending to general K. The connection between gradient descent on the DDPM objective at small t and the EM algorithm is sufficiently robust that for general K, our analysis for K = 2 can generalize once we replace the ingredients from <cit.> and <cit.> with the analogous ingredients in existing analyses for EM with K Gaussians. For the latter, it is known that if the centers of the Gaussians have separation Ω(√(logmin(K,d))), then EM will converge from a warm start <cit.>. By carefully tracking the error in approximating the negative gradient with the “M”-step in EM, we are able to show that gradient descent on the DDPM objective at small t achieves the same guarantee.
§.§ Preliminaries
Diffusion models.
Throughout the paper, we use either q or q_0 to denote the data distribution and X or X_0 to denote the corresponding random variable on R^d. The two main components in diffusion models are the forward process and the reverse process. The forward process transforms samples from the data distribution into noise, for instance via the Ornstein-Uhlenbeck (OU) process:
d X_t = - X_t d t + √(2) d W_t with X_0 ∼ q_0 ,
where (W_t)_t≥ 0 is a standard Brownian motion in ^d.
We use q_t to denote the law of the OU process at time t. Note that for X_t ∼ q_t,
X_t = exp(-t) X_0 + √(1 - exp(-2t)) Z_t with X_0 ∼ q_0, Z_t ∼N(0, ) .
The reverse process then transforms noise into samples, thus performing generative modeling. Ideally, this could be achieved by running the following stochastic differential equation for some choice of terminal time T:
d X^←_t = {X^←_t + 2∇_x ln q_T-t(X^←_t)} dt + √(2) dW_t with X^←_0 ∼ q_T ,
where now W_t is the reversed Brownian motion. In this reverse process, the iterate X^←_t is distributed according to q_T - t for every t∈[0,T], so that the final iterate X^←_T is distributed according to the data distribution q_0. The function ∇_x ln q_t is called the score function, and because it depends on q, which is unknown, in practice one estimates it by minimizing the score matching loss
min_s_t E_X_t ∼ q_t[ s_t(X_t) - ∇_x ln q_t(X_t) ^2 ] .
A standard calculation (see e.g. Appendix A of <cit.>) shows that this is equivalent to minimizing the DDPM objective in which one wants to predict the noise Z_t from the noisy observation X_t, i.e.
min_s_t L_t(s_t) = E_X_0, Z_t[ s_t(X_t) + Z_t/√(1 - exp(-2t))^2 ] .
While we have provided background on diffusion models for context, in this work we focus specifically on the optimization problem (<ref>).
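For reference, here is a minimal sketch of how the training pairs (X_t, Z_t) entering this objective can be simulated from clean samples X_0 (the function name and the NumPy-based setup are ours, not the paper's):

import numpy as np

def noisy_sample(x0, t, rng):
    # Draw (X_t, Z_t) with X_t = e^{-t} X_0 + sqrt(1 - e^{-2t}) Z_t, as in the OU forward process.
    z = rng.standard_normal(x0.shape)
    xt = np.exp(-t) * x0 + np.sqrt(1.0 - np.exp(-2.0 * t)) * z
    return xt, z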
Mixtures of Gaussians.
We consider the case of learning mixtures of K equally weighted Gaussians:
q = q_0 = 1/K∑_i=1^K N(μ_i^*, ),
where μ_i^* denotes the mean of the i^th Gaussian component. We define θ^* = {μ_1^*, μ^*_2 … , μ_K^* }.
For the mixtures of two Gaussians, we can simplify the data distribution as
q = q_0 = 1/2N(μ^*, ) + 1/2N(-μ^*, ).
Note that the distribution in Eq. (<ref>) is equivalent to the distribution in Eq. (<ref>) with K=2, because shifting the latter by its mean gives the former distribution, and furthermore the necessary shift can be estimated from samples. The following is immediate:
If q_0 is a mixture of K Gaussians as in Eq. (<ref>), then for any t > 0, q_t is the mixture of K Gaussians given by
q_t = 1/K∑_i=1^K N(μ_i, t^*, ) where μ_i, t^* ≜μ_i^* exp(-t) .
See Appendix <ref> for a proof of this fact. We can see that the means of q_t get rescaled according to the noise level t. We also define θ_t^* = {μ_1,t^*, μ_2,t^*, …, μ_K,t^* }.
The score function for distribution q_t, for any t > 0, is given by
∇_x ln q_t(x) = ∑_i=1^K w^*_i, t(x) μ_i, t^* - x , where w_i, t^*(x) = exp(- x-μ_i, t^* ^2/2 ) /∑_j=1^K exp(- x-μ_j, t^* ^2/2 ) .
For a mixture of two Gaussians, the score function simplifies to
∇_x log q_t(x) = tanh( μ^*⊤_t x ) μ_t^* - x , whereμ_t^* ≜μ^* exp(-t)
See Appendix <ref> for the calculation.
Recall that ∇_x log q_t(x) is the minimizer for the score-matching objective given in Eq. (<ref>). Therefore, we parametrize our student network architecture similarly to the optimal score function. Our student architecture for mixtures of K Gaussians is
s_θ_t(x) = ∑_i=1^K w_i, t(x) μ_i, t - x , where w_i, t(x) ≜exp(- x-μ_i, t^2 / 2 )/∑_j=1^K exp(- x-μ_j, t^2 / 2 )
μ_i, t ≜μ_i exp(-t).
where θ_t = {μ_1,t, μ_2,t, …, μ_K, t} denotes the set of parameters at the noise scale t.
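A direct transcription of this student architecture into code might look as follows (an assumed sketch; the numerically stabilized softmax is an implementation choice of ours, not part of the paper):

import numpy as np

def student_score(x, mus, t):
    # Student score s_{theta_t}(x) = sum_i w_{i,t}(x) mu_{i,t} - x with mu_{i,t} = mu_i * exp(-t).
    # x: (n, d) points; mus: (K, d) center parameters.
    mus_t = mus * np.exp(-t)
    logits = -0.5 * ((x[:, None, :] - mus_t[None, :, :]) ** 2).sum(-1)   # (n, K)
    logits -= logits.max(axis=1, keepdims=True)                          # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                                    # weights w_{i,t}(x)
    return w @ mus_t - x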
For mixtures of two Gaussians, we simplify the student architecture as follows:
s_θ_t(x) = tanh(μ_t^⊤ x)μ_t - x ,
where μ_t≜μexp(-t).
As θ_t only depends on μ_t in the case of mixtures of two Gaussians, we simplify the notation of the score function from s_θ_t(x) to s_μ_t(x) in that case. We use μ̂_t and μ̂_t^* to denote the unit vector along the direction of μ_t and μ_t^* respectively. Note that we often use μ_t (or θ_t) to denote the current iterate of gradient descent on the DDPM objective and μ'_t to denote the iterate after taking a gradient descent step from μ_t.
Expectation-Maximization (EM) algorithm. The EM algorithm is composed of two steps: the E-step and the M-step. For mixtures of Gaussians, the E-step computes the expected log-likelihood based on the current mean parameters and the M-step maximizes this expectation to find a new estimate of the parameters.
[See e.g., <cit.> for more details]
When X is the mixture of K Gaussian and {μ_1, μ_2, …, μ_K } are current estimates of the means, the population EM update for all i ∈{1,2,…,K} is given by
μ_i' = E_X[w_i(X) X]/E_X[w_i(X)] , where w_i(X) = exp(- X-μ_i^2 / 2 ) /∑_j=1^K exp(- X-μ_j^2 / 2 ) .
The EM update for mixtures of two Gaussians given in Eq. (<ref>) simplifies to
μ' = E_X ∼N(μ^*, )[ tanh(μ^⊤ X) X].
An analogous version of the EM algorithm, called the gradient EM algorithm, takes a gradient step in the direction of the M-step instead of optimizing the objective in the M-step fully.
[See e.g., <cit.> for more details]
For all i ∈{1,2,…,K},
the gradient EM update for mixtures of K Gaussians is given by
μ_i' = μ_i + η E_X[ w_i(X)(X - μ_i) ],
where η is the learning rate.
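For concreteness, here is a Monte Carlo sketch (our own, with samples standing in for the population expectation) of a single gradient EM step in the form above:

import numpy as np

def gradient_em_step(mus, x, eta=0.1):
    # One gradient EM step mu_i' = mu_i + eta * E[w_i(X)(X - mu_i)], with the expectation
    # replaced by an average over the samples x of shape (n, d); mus has shape (K, d).
    logits = -0.5 * ((x[:, None, :] - mus[None, :, :]) ** 2).sum(-1)
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                                    # responsibilities w_i(x)
    update = (w[:, :, None] * (x[:, None, :] - mus[None, :, :])).mean(axis=0)
    return mus + eta * update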
§ WARMUP: MIXTURES OF TWO GAUSSIANS WITH CONSTANT SEPARATION
In this section, we formally state our result for learning mixtures of two Gaussians with constant separation. This case highlights the main proof techniques, namely viewing gradient descent on the DDPM objective as power iteration and as the EM algorithm.
§.§ Result and algorithm
There is an absolute constant c > 0 such that the following holds. Suppose a mixture of two Gaussians with mean parameter μ^* satisfies ‖μ^*‖ > c. Then, for any ϵ > 0, there is a procedure that calls Algorithm <ref> at two different noise scales t and outputs μ such that ‖μ - μ^*‖ ≤ ϵ with high probability. Moreover, the algorithm has time and sample complexity poly(d)/ϵ^2 (see Theorem <ref> for more precise quantitative bounds).
Algorithm.
The algorithm has two stages. In the first stage, we run gradient descent on the DDPM objective described in Algorithm <ref> from a random Gaussian initialization at noise scale t_1 for a fixed number of iterations H, where t_1 = O(log d) ("high noise") and H = poly(d, 1/ϵ). In the second stage, the procedure uses the output of the first stage as initialization and runs Algorithm <ref> at a "low noise" scale t_2 = O(1).
§.§ Proof outline of Theorem <ref>
We provide a proof sketch of correctness of the above algorithm and summarize the main technical lemmas here. All proofs of the following lemmas can be found in Appendix <ref>.
Part I: Analysis of high noise regime and connection to power iteration. We show that in the large noise regime, the negative gradient -∇ L_t(s_t) is well-approximated by 2μ_t^* μ_t^*⊤μ_t - 3‖μ_t‖^2 μ_t. Recall that this result is the key to showing the resemblance between gradient descent and power iteration. Concretely, we show the following lemma:
For t=O(log d), the gradient descent update on the DDPM objective L_t(s_t) can be approximated with 2μ_t^* μ_t^*⊤μ_t - 3‖μ_t‖^2 μ_t:
‖ -∇ L_t(s_t) - ( 2μ_t^* μ_t^*⊤μ_t - 3‖μ_t‖^2 μ_t ) ‖ ≤ poly(1/d).
From Lemma <ref>, it immediately follows that μ'_t, the result of taking a single gradient step starting from μ_t, is well-approximated by the result of taking a single step of power iteration for a matrix whose leading eigenvector is μ^*_t:
μ'_t = μ_t - η∇ L_t(s_μ) ≈ ( (1 - 3η‖μ_t‖^2 ) I + 2ημ_t^* μ_t^*⊤ ) μ_t .
The second key element is to show that as a consequence of the above power iteration update, the gradient descent converges in angular distance to the leading eigenvector. Concretely, we show the following lemma:
Suppose μ_t' is the iterate after one step of gradient descent on the DDPM objective from μ_t. Denote the angle between μ_t and μ_t^* by θ and the angle between μ'_t and μ_t^* by θ'. In this case, we show that
tanθ' ≤ max{ κ_1 tanθ, κ_2 } ,
where κ_1 < 1 and κ_2 ≤ 1 / poly(d).
Note that tanθ' < tanθ implies θ' < θ, or equivalently ⟨μ̂_t', μ̂_t^* ⟩ > ⟨μ̂_t, μ̂_t^* ⟩.
Thus, the above lemma shows that by taking a gradient step in the DDPM objective, the angle between μ_t and μ_t^* decreases. By iterating this, we obtain the following lemma:
Running gradient descent from a random initialization on the DDPM objective L_t(s_μ) for t = O(log d) gives μ_t for which ⟨μ̂_t, μ̂_t^* ⟩ is Ω(1).
Note that we cannot keep running gradient descent at this high noise scale and hope to achieve μ such that ‖μ - μ^*‖ is O(ϵ). This is because Lemma <ref> can only guarantee that the angle between μ_t and μ_t^* is O(ϵ), but this does not imply ‖μ - μ^*‖ is O(ϵ). Instead, as described in Part II, we will proceed with a smaller noise scale.
Part II: Analysis of low noise regime and connection to EM. In the low noise regime, we run Algorithm <ref> using the output from Part I as our initialization. Our analysis here shows that whenever the initialization μ_t satisfies the condition of μ̂_tμ̂_t^* being Ω(1), μ_t - μ_t^* contracts after every gradient step. To start with, we show that the result of a population gradient step on the DDPM objective L_t(s_μ) results in the following:
μ'_t = (1 - η) μ_t + η E_x ∼N(μ_t^*, )[ tanh (μ_t^⊤ x) x ] + η G(μ_t, μ_t^*),
where μ'_t is the parameter after a gradient step, η is the learning rate, and function G is given by
G(μ, μ^*) = E_x ∼N(μ^*, ) [ - 1/2tanh”( μ^⊤ x ) μ^2 x + tanh'( μ^⊤ x ) μ^⊤ x x - tanh'( μ^⊤ x ) μ ].
Note we use the population gradient here only for simplicity; in the Appendix we show that empirical estimates of the gradient suffice.
After some calculation, we can show that
μ'_t - μ^*_t ≤ (1 - η) μ_t - μ^*_t + ηE_x ∼N(μ_t^*, )[ tanh (μ_t^⊤ x) x ] - μ_t^* + η G(μ_t, μ^*_t) .
Using Fact <ref>, we know that E_x ∼N(μ_t^*, )[ tanh (μ_t^⊤ x) x ] is precisely the result of one step of EM starting from μ_t, and it is known <cit.> that the EM update contracts the distance between μ_t and μ_t^* as follows:
E_x ∼N(μ_t^*, )[ tanh (μ_t^⊤ x) x ] - μ_t^* ≤λ_1 μ_t - μ_t^* for some λ_1 < 1
It remains to control the second term in Eq. (<ref>), for which we prove the following:
When ‖μ^*‖ = Ω(1) and the noise scale t = O(1), then for every μ with ⟨μ̂, μ̂^* ⟩ being Ω(1), the following inequality holds:
‖ G(μ_t, μ_t^*) ‖ ≤ λ_2 ‖μ_t - μ_t^*‖ for some λ_2 < 1 .
Combining Eq. (<ref>) and Lemma <ref> with Eq. (<ref>), we have
μ'_t - μ^*_t ≤ (1 - η(1 - λ_1 - λ_2) ) μ_t - μ^*_t .
We can set parameters to ensure that λ_1 + λ_2 < 1 and therefore that μ_t - μ^*_t contracts with each gradient step. Applying Lemma <ref> and Eq. (<ref>), we obtain the following lemma summarizing the behavior of gradient descent on the DDPM objective in the low noise regime.
For any ϵ > 0 and for the noise scale t = O(1), starting from an initialization μ_t for which μ̂_tμ̂_t^*=Ω(1), running gradient descent on the DDPM objective L_t(s_μ) will give us mean parameter μ such that μ - μ^* ≤ O(ϵ).
Combining Lemma <ref> and Lemma <ref>, we obtain our first main result, Theorem <ref>, for learning mixtures of two Gaussians with constant separation. For the full technical details, see Appendix <ref>.
§ EXTENSIONS: SMALL SEPARATION AND MORE COMPONENTS
§.§ Mixtures of two Gaussians with small separation
In this section, we briefly sketch how the ideas from Section <ref> can be extended to give our second main result, namely on learning mixtures of two Gaussians even with small separation. We defer the full technical details to Appendix <ref>.
Suppose a mixture of two Gaussians has mean parameter μ^* that satisfies ‖μ^*‖ = Ω( 1/poly(d) ). Then, for any ϵ > 0, there exists a modification of Algorithm <ref> that provides an estimate μ such that ‖μ - μ^*‖ ≤ O(ϵ) with high probability. Moreover, the algorithm has time and sample complexity poly(d)/ϵ^8 (see Theorem <ref> for more precise quantitative bounds).
Algorithm modification. The algorithm that we analyze runs projected gradient descent on the DDPM objective but only in the high noise scale regime where t = O(log d). At each step, we project the iterate μ to the ball of radius R, where R is an empirical estimate for ‖μ^*‖ obtained by drawing samples x_1,…,x_n from the data distribution and forming R ≜ (1/n∑_i=1^n ‖x_i‖^2 - d)^1/2.
Proof sketch.
Lemma <ref> and Lemma <ref> apply even when the components of the mixture have small separation, and they show that running gradient descent on the DDPM objective results in μ_t and μ_t^* being O(1) close in angular distance. Although our analysis can be extended to show that gradient descent can achieve O(ϵ) angular distance, this does not guarantee that ‖μ_t - μ_t^*‖ is O(ϵ). If in addition to being O(ϵ) close in angular distance, we also have that ‖μ_t‖ ≈ ‖μ_t^*‖, then it is easy to see that ‖μ_t - μ_t^*‖ is indeed O(ϵ).
Observe that if R is approximately equal to ‖μ_t^*‖, then the projection step in our algorithm ensures that our final estimate μ_t satisfies this additional condition of ‖μ_t‖ ≈ ‖μ^*_t‖. It is not hard to show that R^2 is an unbiased estimate of ‖μ^*_t‖^2, so standard concentration shows that taking n = poly(d, 1/ϵ) suffices to ensure that R is sufficiently close to ‖μ_t^*‖.
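A minimal sketch of this plug-in estimate of ‖μ^*‖ (the function name is ours) is:

import numpy as np

def estimate_norm(x):
    # Plug-in estimate R of ||mu*|| from samples x of shape (n, d),
    # using E||X||^2 = d + ||mu*||^2 for the symmetric two-Gaussian mixture.
    n, d = x.shape
    return float(np.sqrt(max((x ** 2).sum(axis=1).mean() - d, 0.0)))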
§.§ Mixtures of K Gaussians, from a warm start
In this section, we state our third main result, namely for learning mixtures of K Gaussians given by Eq. (<ref>) from a warm start, and provide an overview of how the ideas from Section <ref> can be extended to obtain this result.
(Separation)
For a mixture of K Gaussians given by Eq. (<ref>), for every pair of components i, j ∈{1,2, …, K} with i ≠ j, we assume that the separation between their means satisfies ‖μ_i^* - μ_j^*‖ ≥ C√(log (min(K, d))) for a sufficiently large absolute constant C > 0.
(Initialization)
For each component i ∈{1,2,…,K}, we have an initialization μ_i^(0) with the property that ‖μ_i^(0) - μ_i^*‖ ≤ C'√(log (min(K, d) ) ) for a sufficiently small absolute constant C' > 0.
Suppose a mixture of K Gaussians satisfies Assumption <ref>. Then, for any ϵ = Θ(1/poly(d)), running gradient descent on the DDPM objective (Algorithm <ref>) at low noise scale t=O(1) and with initialization satisfying Assumption <ref> results in mean parameters {μ_i }_i=1^K such that with high probability, the mean parameters satisfy ‖μ_i - μ_i^*‖ ≤ O(ϵ) for each i ∈{1, 2, …, K}. Additionally, the runtime and sample complexity of the algorithm is poly(d, 1/ϵ) (see Theorem <ref> for more precise quantitative bounds).
We provide a brief overview of the proof here. The full proof can be found in Appendix <ref>.
Proof sketch. For learning mixtures of two Gaussians, we have already established the connection between gradient descent on the DDPM objective and the EM algorithm. For mixtures of K Gaussians, however, in a local neighborhood around the ground truth parameters θ^*, we show an equivalence between gradient EM (recall gradient EM performs one step of gradient descent on the "M"-step objective) and gradient descent on the DDPM objective. In particular, our main technical lemma (Lemma <ref>) shows that for noise scale t=O(1) and for any μ_i that satisfies ‖μ_i - μ_i^*‖ ≤ O(√(log (min(K, d) ) ) ), we have
-∇_μ_i,t L_t( s_θ_t ) ≈E_X_t[ w_i, t(X_t)(X_t - μ_i, t) ].
Therefore, the iterate μ_i, t' resulting from a single gradient step on the DDPM objective L_t( s_θ_t ) with learning rate η is given by
μ_i, t' = μ_i,t - η∇_μ_i,t L_t( s_θ_t ) ≈μ_i,t + η E_X_t[ w_i, t(X_t)(X_t - μ_i, t) ].
Comparing Fact <ref> with Eq. (<ref>), we see the correspondence in this regime between gradient descent on the DDPM objective and gradient EM. Using this connection and an existing local convergence guarantee from the gradient EM literature <cit.>, we obtain our main theorem for mixtures of K Gaussians. Full details can be found in Appendix <ref>.
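As an informal numerical sanity check of this correspondence (a sketch under arbitrary synthetic parameters of our choosing, not part of the paper's proofs), one can compare the autograd gradient of the DDPM objective with the gradient-EM direction near the ground truth; assuming PyTorch:

import math
import torch

def ddpm_loss_K(mus_t, xt, z, beta):
    # DDPM objective at one noise scale, parametrized directly by the rescaled centers mu_{i,t}.
    logits = -0.5 * ((xt[:, None, :] - mus_t[None, :, :]) ** 2).sum(-1)
    w = torch.softmax(logits, dim=1)              # weights w_{i,t}(x_t)
    s = w @ mus_t - xt                            # student score s_{theta_t}(x_t)
    return ((s + z / beta) ** 2).sum(dim=1).mean()

torch.manual_seed(0)
K, d, n, t = 3, 10, 50_000, 0.1                   # illustrative sizes and noise scale (assumptions)
alpha, beta = math.exp(-t), math.sqrt(1.0 - math.exp(-2.0 * t))
mus_star = 5.0 * torch.randn(K, d)                # well-separated synthetic ground-truth centers
x0 = mus_star[torch.randint(0, K, (n,))] + torch.randn(n, d)
z = torch.randn(n, d)
xt = alpha * x0 + beta * z
mus_t = (alpha * mus_star + 0.2 * torch.randn(K, d)).requires_grad_()    # warm start
loss = ddpm_loss_K(mus_t, xt, z, beta)
(grad,) = torch.autograd.grad(loss, mus_t)
with torch.no_grad():
    logits = -0.5 * ((xt[:, None, :] - mus_t[None, :, :]) ** 2).sum(-1)
    w = torch.softmax(logits, dim=1)
    em_dir = (w[:, :, None] * (xt[:, None, :] - mus_t[None, :, :])).mean(dim=0)
# Up to a positive rescaling, -grad and em_dir should be closely aligned row by row.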
§ ACKNOWLEDGMENTS
SC would like to thank Sinho Chewi, Khashayar Gatmiry, Frederic Koehler, and Holden Lee for enlightening discussions on sampling and score estimation.
alpha
equationsection
theoremsection
Roadmap. In Appendix <ref>, we provide proofs of some simple lemmas from Section <ref> and some basic inequalities. In Appendix <ref> we give additional notation and preliminaries. In Appendix <ref>, we provide the proof details for Theorem <ref>, our result on learning mixtures of two Gaussians with constant separation. In Appendix <ref>, we extend this analysis to give a proof of Theorem <ref>, our result on learning mixtures of two Gaussians with small separation. In Appendix <ref>, we provide the proof details for Theorem <ref>, our result on learning mixtures of K Gaussians. Finally, in Appendix <ref> we give further deferred proofs.
§ PROOFS FROM SECTION <REF>
§.§ X_t is a mixture of Gaussians
Suppose X_0 is mixture of K Gaussians with density function given by
q_0 = 1/K∑_i=1^K N(μ_i, 0^*, )
We know that X_t = exp(-t) X_0 + √(1 - exp(-2t)) Z_t where Z_t ∼N(0, ). Then, by change of variable of probability density, we have
pdf of exp(-t) X_0 = 1/K∑_i=1^K N( μ_i, 0^* exp(-t) , exp(-2t)· I )
pdf of √(1 - exp(-2t)) Z_t = N( 0, (1 - exp(-2t))· I ) .
Combining these, we have
q_t(X_t) = 1/K∑_i=1^K N( μ_i, t^* , I) whereμ_i, t^* = μ_i, 0^* exp(-t) ,
as claimed.
§.§ Derivation of score function
For mixtures of K Gaussians in the form of Eq. (<ref>), the score function at time t is given by
∇log q_t(x) = -∑_i=1^K e^ - x - μ_i, t^* ^2 /2 (x - μ_i, t^* ) /∑_j=1^K e^ -x - μ_j, t^* ^2 /2
= ∑_i=1^K w_i, t^*(x) μ_i, t^* - x where w_i, t^*(x) = e^ - x - μ_i, t^* ^2 /2/∑_j=1^K e^ -x - μ_j, t^* ^2 /2.
For mixtures of two Gaussians in the form of Eq. (<ref>), we have μ_1,t^* = μ_t^* and μ_2,t^* = -μ_t^*, so the score function is given by
∇log q_t(x) = w_1, t^*(x) μ_1,t^* + w_2, t^*(x) μ_2, t^* - x
= w_1, t^*(x) μ_t^* - (1 - w_1, t^*(x)) μ_t^* - x
= (2w_1, t^*(x) - 1) μ_t^* - x
By simplifying w_1, t^*(x), we obtain
w_1, t^*(x) = 1/1 + exp( ‖x - μ_t^*‖^2/2 - ‖x + μ_t^*‖^2/2 )
= 1/1 + exp( -2 μ_t^*⊤ x )
= σ( 2 μ_t^*⊤ x )
where σ( · ) denotes the sigmoid function. Using Eq. (<ref>) in Eq. (<ref>) together with the identity 2σ(2a) - 1 = tanh(a), we obtain
∇log q_t(x) = tanh( μ_t^*⊤ x ) μ_t^* - x.
§ ADDITIONAL NOTATIONS AND PRELIMINARIES
In this section, we provide additional notations and preliminaries for the proofs to follow. Recall that we use L_t(s_θ_t) to denote the population denoising loss at noise scale t.
L_t(s_θ_t) = E[ s_θ_t(X_t) + Z_t/√(1 - exp(-2t))^2 ].
We use L_t(s_θ_t(x_0, z_t)) to denote the denoising loss at noise scale t on a sample x_0 from the data distribution and z_t from the standard Gaussian distribution:
L_t(s_θ_t(x_0, z_t)) = s_θ_t(x_t) + z_t/√(1 - exp(-2t))^2,
where x_t = exp(-t) x_0 + √(1 - exp(-2t)) z_t. We use α_t as shorthand notation for exp(-t) and β_t as shorthand notation for √(1 - exp(-2t)).
For mixtures of two Gaussians, we use B to denote the upper bound on μ^* ^2, that is,
μ^* ^2 ≤ B .
Throughout, we assume that B = poly(d).
For any vector v, we use v̂ to denote the unit vector along the direction of v. For a vector v, we use [v]_i to denote the i^th coordinate of v. Similarly, for a matrix X, we use [X]_i to denote the i^th row of the matrix. For any positive integer n, we use [n] to denote the set {1, 2, …, n}. We use N(μ, σ^2 ·) to denote the standard Gaussian with mean μ and covariance σ^2 ·. Sometimes, we use a shorter notation N_μ to denote N(μ, ). For any two quantities X and Y that are both implicitly functions of some parameter a over _≥ 0, we use the shorthand X ≲ Y and X = O(Y) interchangeably to denote that there exists absolute constant C > 0 such that for all a sufficiently large, X(a) ≤ C Y(a). We also use the shorthand X ≳ Y and X = Ω(Y), defined in the obvious way.
Finally, we will use the following standard bounds.
The sub-Gaussian norm of a random variable X ∈R, denoted by X_ψ_2, is defined as
X_ψ_2 = inf{ t > 0 : E[ exp(X^2 / t^2) ] ≤ 2 }.
The sub-Gaussian norm has the following properties:
* (Bounded): Any bounded random variable X (i.e., there is a finite A for which |X| ≤ A with probability 1) is sub-Gaussian:
X_ψ_2 ≤ A /√(ln 2)
* (Centering): If X is a sub-Gaussian random variable, then X - E[X] is also a sub-Gaussian random variable. Specifically, the following holds for some absolute constant C.
X - E[X]_ψ_2 ≤ C X_ψ_2
* (Moment generating function bound): If X is a sub-Gaussian random variable with E[X] = 0, then
E[ exp( λ X ) ] ≤exp( C λ^2 X_ψ_2^2 ) for all λ∈R,
where C is some absolute constant.
* (Sum of sub-Gaussian random variables): If X_1 and X_2 are mean zero sub-Gaussian random variables, then
X_1 + X_2_ψ_2 ≤ X_1_ψ_2 + X_2_ψ_2 .
* (Product with a bounded random variable): If X is a sub-Gaussian random variable and Y is a bounded random variable Y ∈ [0, 1], then
X Y_ψ_2 ≤ X_ψ_2 .
The sub-exponential norm of a random variable X ∈R, denoted by X_ψ_1 is defined as
X_ψ_1 = inf{ t > 0 : E[ exp( |X| / t ) ] ≤ 2 }.
The sub-exponential norm has the following properties:
* (Sum of sub-exponential distributions): If X_1 and X_2 are mean-zero sub-exponential random variables, then X_1 + X_2 is also a mean-zero sub-exponential variable. Specifically,
X_1 + X_2 _ψ_1≤√(2) ( X_1 _ψ_1 + X_2 _ψ_1 ) .
* (Centering) If X is a sub-exponential random variable, then X - E[X] is sub-exponential with
X - E[X]_ψ_1≤ C X_ψ_1,
where C is some absolute constant.
The proof follows from following the equivalent definition of a sub-exponential random variable: If any random variable X satisfies
E[ exp(λ X) ] ≤exp( C X _ψ_1^2 λ^2 ) for all λ such that λ≤1/ C X _ψ_1^2,
for some constant C, then X is sub-exponential random variable with sub-exponential norm X_ψ_1.
Then, for any λ≤1/2C max( X_1 _ψ_1^2, X_2 _ψ_1^2 ), the MGF of X_1 + X_2 is given by
E[ exp( λ(X_1 + X_2) ) ] ≤E[ exp(2 λ X_1) ]^1/2E[ exp(2 λ X_2) ]^1/2
≤exp( C X_1 _ψ_1^2 2λ^2 ) exp( C X_2 _ψ_1^2 2λ^2 )
≤exp( C λ^2 (2 X_1 _ψ_1^2 + 2 X_2 _ψ_1^2 ) ) .
Using X_1 _ψ_1 + X_2 _ψ_1≥max( X_1 _ψ_1, X_2 _ψ_1 ), we know that above inequality is true for any λ with | λ | ≤1/2C ( X_1 _ψ_1 + X_2 _ψ_1 )^2 ≤1/2C max( X_1 _ψ_1^2, X_2 _ψ_1^2 ). This completes the proof.
(Bernstein's inequality for sub-exponential random variable) Let X_1, X_2, …, X_N be independent, mean zero, sub-exponential random variables. Then, for every ϵ≥ 0, we have
[ | 1/N∑_i=1^N X_i | ≥ϵ] ≤ 2 exp[ -c N min( ϵ/max_i X_i_ψ_1, ϵ^2/ (max_i X_i_ψ_1 )^2 ) ]
where c > 0 is some absolute constant.
§ LEARNING MIXTURES OF TWO GAUSSIANS WITH CONSTANT SEPARATION
In this section, we provide the details and proofs for learning mixtures of two Gaussians with constant separation. Our results in this section can be summarized in the following theorem statement.
Let q be a mixture of two Gaussians (in the form of Eq. (<ref>)) with mean parameter μ^* satisfying ‖μ^*‖ > c for some absolute constant c > 0. Recalling that B denotes an a priori upper bound on ‖μ^*‖, we have that for any ϵ≤ϵ' where ϵ' ≲1/d^2 B^9, there exists a procedure satisfying the following. If the procedure is run for at least Ω(B^6log(d/ϵ)) iterations with at least poly(d,B)/ϵ^2 samples from q, then it outputs μ such that ‖μ - μ^*‖ ≤ϵ with high probability.
As described earlier, the procedure first runs gradient descent on the DDPM objective described in Algorithm <ref> from a random Gaussian initialization in a high noise scale regime with noise scale t_1 = O(log d). It then uses the output of the first step as initialization and runs the Algorithm <ref> in a low noise scale regime with noise scale t_2 = O(1).
We begin by calculating the form of the gradient updates:
For any noise scale t > 0, the gradient update for the mixture of two Gaussians on the DDPM objective is given by
-∇_μ_t L_t(s_μ_t) = E_x ∼N(μ_t^*, )[ ( tanh (μ_t^⊤ x) - 1/2tanh”( μ_t^⊤ x ) μ_t ^2 + tanh'( μ_t^⊤ x ) μ_t^⊤ x ) x ]
- μ_t - E_x ∼N(μ_t^*, )tanh'( μ_t^⊤ x ) μ_t .
The proof of Lemma <ref> is given in Appendix <ref>.
§.§ High noise regime–connection to power iteration
Here we show that running population gradient descent on the DDPM objective at high noise scale behaves like power iteration on the covariance matrix of the data and thus reaches an iterate μ with constant correlation with μ^*.
For any noise scale t > t' and number of samples n > n' where t' ≲log d and n'=Θ( d^4 B^3/ϵ^2), with high probability, the negative gradient of the diffusion model objective L_t(s_t) can be approximated by 2μ_t^* μ_t^*⊤μ_t - 3μ_t^2 μ_t. More precisely, given independent samples {x_i,t}_i=1,…,n from q_t generated using noise vectors {z_i,t}_i=1,…,n sampled from N(0,), we have
-∇( 1/n∑_i=1^n L_t(s_μ_t(x_i, t, z_i, t)) ) - 2μ_t^* μ_t^*⊤μ_t - 3μ_t^2 μ_t ≤ 250 √(d)μ_t ^5 + 10 μ_t^3 μ_t^*^2 + ϵ .
Recall that the population gradient update on the DDPM objective is given by
-∇ L_t(s_μ_t) = E_x ∼N(μ_t^*, ) [ tanh (μ_t^⊤ x) x - 1/2tanh”( μ_t^⊤ x ) μ_t ^2 x + tanh'( μ_t^⊤ x ) μ_t^⊤ x x ]
- μ_t - E_x ∼N(μ_t^*, ) [tanh'( μ_t^⊤ x ) μ_t]
= E_x ∼N(μ_t^*, ) [ tanh (μ_t^⊤ x) x - 1/2tanh”( μ_t^⊤ x ) μ_t ^2 x + tanh'( μ_t^⊤ x ) μ_t^⊤ x μ_t^*
+ tanh”( μ_t^⊤ x ) μ_t^⊤ x μ_t ] - μ_t ,
where the last equality follows from Stein's lemma applied to E_x ∼N(μ_t^*, ) [ tanh'( μ_t^⊤ x ) μ_t^⊤ x x ], as
E_x ∼N(μ_t^*, ) [ tanh'( μ_t^⊤ x ) μ_t^⊤ x x ] = E_x ∼N(μ_t^*, ) [ tanh'( μ_t^⊤ x ) μ_t^⊤ x μ_t^* + tanh'( μ_t^⊤ x ) μ_t + tanh”( μ_t^⊤ x ) μ_t^⊤ x μ_t ] .
Using Taylor's theorem, we know that
tanh(μ_t^⊤ x) = μ_t^⊤ x - 2/3 ( μ_t^⊤ x )^3 + O( ξ(x)^5 ) where ξ (x) ∈ [0, μ_t^⊤ x]
tanh(μ^⊤ x) x = μ^⊤ x x - 2/3 ( μ_t^⊤ x )^3 x + O( ξ(x)^5 x )
E_x ∼N(μ_t^*, ) [ tanh (μ_t^⊤ x) x ] - E_x ∼N(μ_t^*, ) [ μ_t^⊤ x x - 2/3 ( μ_t^⊤ x )^3 x ] ≤E[ ξ(x)^5 x ] ≲√(d)μ_t^5
where the last inequality follows from E[ ξ(x)^5 x ] ≤E[ | μ_t^⊤ x |^5 x ] ≤E[ | μ_t^⊤ x|^10 ] ^1/2E[ x^2 ] ^1/2 ≲μ_t^5 √(d + μ_t^*^2)≲√(d)μ_t^5. Similarly, using Taylor's theorem, we get
tanh”(μ_t^⊤ x) = -2 μ_t^⊤ x + O( ξ(x)^3 ) where ξ (x) ∈ [0, μ_t^⊤ x]
tanh”(μ_t^⊤ x) - 1/2μ_t^2 x + μ_t^⊤ x μ_t = -2 μ_t^⊤ x + O( ξ(x)^3 ) -1/2μ_t^2 x + μ_t^⊤ x μ_t
E[ tanh” (μ_t^⊤ x) ( -1/2μ_t^2 x + μ_t^⊤ x μ_t ) ] - E[ -2 μ_t^⊤ x -1/2μ_t^2 x + μ_t^⊤ x μ_t ]
≤ -1/2μ_t^2 E_x ∼N( μ_t^*, I ) [ O( ξ(x)^3 ) x ] + E_x ∼N( μ_t^*, I ) [ O( ξ(x)^3 ) μ_t^⊤ x μ_t ]
≤1/2μ_t^2 E [ | μ_t^⊤ x |^3 x ] + μ_tE[ | μ_t^⊤ x |^4 ]
≤1/2μ_t^2 √( E [ | μ_t^⊤ x |^6 ] E[ x^2 ] ) + μ_tE[ | μ_t^⊤ x |^4 ]
≤ 10 μ_t^5 √(d) + 6 μ_t^5
Using Taylor's theorem for tanh', we get
tanh'(μ_t^⊤ x) = 1 - (μ_t^⊤ x)^2 + O( ξ(x)^4 ) where ξ(x) ∈ [0, μ_t^⊤ x]
tanh'(μ_t^⊤ x) μ_t^⊤ x μ_t^* = μ_t^⊤ x μ_t^* - (μ_t^⊤ x)^3 μ_t^* + O( ξ(x)^4 μ_t^⊤ x μ_t^* ) where ξ(x) ∈ [0, μ_t^⊤ x]
E[ tanh'(μ_t^⊤ x) μ_t^⊤ x μ_t^* ] - E[ μ_t^⊤ x μ_t^* - (μ_t^⊤ x)^3 μ_t^* ] ≤E[ ξ(x)^4 (μ_t^⊤ x) μ_t^* ]
≤E[ | μ_t^⊤ x |^5 ] μ_t^*≲μ_t^*μ_t ^5
Additionally, we have
E_x ∼N(μ_t^*, )[ x x^⊤μ_t (1 + μ_t^2) - 2/3 (μ_t^⊤ x)^3 x - 2 μ_t ( μ_t^⊤ x )^2 + μ_t^⊤ x μ_t^* - (μ_t^⊤ x)^3 μ_t^* ]
= (I + μ_t^* μ_t^*⊤) μ_t (1 + μ_t^2) - 5/3E[ (μ_t^⊤ x)^3 μ_t^* ] + μ_t^* μ_t^*⊤μ_t - 4 E[ μ_t ( μ_t^⊤ x )^2 ]
= (I + μ_t^* μ_t^*⊤) μ_t (1 + μ_t^2) - 5 μ_t^* /3 ( ( μ_t^⊤μ_t^* )^3 + 3 ( μ_t^⊤μ_t^* ) μ_t^2 )
+ μ_t^* μ_t^*⊤μ_t - 4 μ_t ( μ_t^2 + ( μ_t^⊤μ_t^* )^2 )
= μ_t^* μ_t^*⊤μ_t (2 - 4 μ_t^2) + μ_t (1 - 3 μ_t^2) - 5 μ_t^* ( μ_t^⊤μ_t^* )^3 /3 - 4 μ_t ( μ_t^⊤μ_t^* )^2
where the second equality uses Stein's lemma on E[ (μ_t^⊤ x)^3 x ] and E[xx^⊤] = + μ_t^* μ_t^*⊤ and the third equality uses Gaussian moments for E[ (μ_t^⊤ x)^2 ] and E[ (μ_t^⊤ x)^3 ].
Putting it all together and using triangle inequality, we obtain the desired bound on -∇ L_t(s_μ_t) - (2μ_t^* μ_t^*⊤μ_t - 3μ_t^2 μ_t ).
-∇ L_t(s_μ_t) - (2μ_t^* μ_t^*⊤μ_t - 3μ_t^2 μ_t )
≤ -∇ L_t(s_μ_t) - E[ x x^⊤μ_t (1 + μ_t^2) - 2/3 (μ_t^⊤ x)^3 x - 2 μ_t ( μ_t^⊤ x )^2 + μ_t^⊤ x μ_t^* - (μ_t^⊤ x)^3 μ_t^* - μ_t ]
+ E[ x x^⊤μ_t (1 + μ_t^2) - 2/3 (μ_t^⊤ x)^3 x - 2 μ_t ( μ_t^⊤ x )^2 + μ_t^⊤ x μ_t^* - (μ_t^⊤ x)^3 μ_t^* - μ_t ]
- 2μ_t^* μ_t^*⊤μ_t - 3μ_t^2 μ_t
≤ 200 √(d)μ_t^5 + 10 μ_t^5 √(d) + 6 μ_t^5 + 20 μ_t^*μ_t ^5 + 10 μ_t^3 μ_t^*^2
≤ 250 √(d)μ_t ^5 + 10 μ_t^3 μ_t^*^2
Using Lemma <ref> and triangle inequality, we obtain the result.
We will use the following simple bound on the correlation between the ground truth and a random initialization:
A randomly initialized μ_0 ∼N(0, ) satisfies that μ̂_0 μ̂^* ≥1/2d with probability at least 1 - O(d^-1/2).
For μ_0 ∼N(0, I), we know that μ_0μ̂^* ∼N(0, I). Using Gaussian anti-concentration, with probability at least 1 - 1/√(d) , we have μ_0 μ̂^* ≥ 1/√(d). Because the L_2 norm of a Gaussian vector is sub-exponential, with probability at least 1 - exp(-Ω(d)), we have μ_0≤ 2 √(d). Using the norm bound, with probability at least 1 - 1/√(d) - exp( - O(d)) = 1 - O(d^-1/2), we obtain the claimed bound on μ̂_̂0̂μ̂^*.
We can now track the correlation between the iterates of gradient descent and the ground truth:
Suppose that the vector μ_t satisfies |⟨μ̂_t, μ̂^*_t⟩| ≥1/2d, and let μ'_t denote the iterate resulting from a single empirical gradient step with learning rate η starting from μ_t. Suppose that the empirical gradient and the population gradient differ by at most ϵ. Denote the angle between μ_t (resp. μ'_t) and μ_t^* by θ (resp. θ'). Then
tanθ' ≤ max{ κ_1 tanθ, κ_2 }
for
κ_1 = 1 - 3 ημ_t^2 / 1 -3ημ_t ^2 + η( μ_t^*^2 - 500 √(d^3)μ_t ^4 - 20 d μ_t^2 μ_t^*^2 - ηϵ ) ,
κ_2 = 500 η√(d^3)μ_t ^4 + 20 η d μ_t ^2 μ_t^*^2 + ηϵ/μ_t^* ^2 and ϵ≲d ϵ/μ_t .
Define μ̂^*⊥_t as the orthogonal vector to μ_t^* in the plane of μ_t and μ_t^*. Note that μ'_t still lies in this plane, so the orthogonal vector to μ_t^* in the plane of μ'_t and μ_t^* is also given by μ̂^*⊥_t.
We have
tanθ' = μ̂^*⊥μ̂'_t /μ̂^*_t μ̂'_t = μ̂^*⊥_t μ'_t /μ̂^*_t μ'_t
= μ̂^*⊥_t μ_t + η F(μ_t, μ_t^*) + μ̂^*⊥_t - η∇ L_t(s_t) - η F(μ_t,
μ_t^*) + ηϵ/μ̂^*_t μ_t + η F(μ_t, μ_t^*) + μ̂^*⊥_t - η∇ L_t(s_t) - η F(μ_t,
μ_t^*) - ηϵ
where F(μ, μ^*) = 2μ^*_t μ^*⊤_t μ_t- 3μ_t^2 μ_t
≤ σ_2 μ̂^*⊥_t μ_t + η∇ L_t(s_t) + F(μ_t, μ_t^*) + ηϵ/σ_1 μ̂^*_t μ_t - η∇ L_t(s_t) + F(μ_t, μ_t^*) - ηϵ
where σ_1 and σ_2 are the first and second eigenvalues of + F(μ_t,μ^*_t) = (1-3 ημ_t^2) + 2 ημ_t^* μ_t^*⊤,
given by
σ_1 = 1 + η(2 μ_t^*^2 -3μ_t^2)
σ_2 = 1 - 3 ημ_t^2 .
The last inequality (<ref>) follows from the fact that
μ̂^*_t μ_t + η F(μ_t, μ_t^*) = μ̂^*⊤_t ( (1- 3 ημ_t^2) + 2 ημ_t^* μ_t^*⊤ )μ_t
= μ^⊤_t ( (1- 3 ημ_t^2) + 2 ημ_t^* μ_t^*⊤ )μ̂^*_t = σ_1 μ^⊤_t μ̂^*_t
because μ̂^* is the first eigenvector of (1- 3 ημ_t^2) + 2 ημ_t^* μ_t^*⊤.
Recall from Lemma <ref> that the deviation between the negative population gradient and the power iteration update F(μ_t,μ^*_t) is bounded by
∇ L_t(s_t) + F(μ_t, μ_t^*) /μ_t μ̂^*_t ≤ 250 η√(d)μ_t ^4 + 10 ημ_t^2 μ_t^*^2 /μ̂_t μ̂^*_t ≤ 500 η√(d^3)μ_t ^4 + 20 d ημ_t^2 μ_t^*^2 .
Substituting this into Eq. (<ref>), we get
tanθ'
≤σ_2 μ̂^*⊥_t μ_t + η∇ L_t(s_t) + F(μ_t, μ_t^*) + ηϵ/μ̂^*_t μ_t (σ_1 - 500 η√(d^3)μ_t ^4 - 20 d ημ_t^2 μ^*_t^2 - ηϵ ) where ϵ≲d ϵ/μ
≤σ_2 /σ_1 tanθ + 1/σ_1 500 η√(d^3)μ^4 + 20 d ημ^2 μ^*_t^2 + ηϵ
where σ_1 ≜σ_1 - 500 η√(d^3)μ^4 - 20 d ημ^2 μ^*_t^2 - ηϵ
≤( 1 - ημ^*_t^2 /σ_1 ) σ_2 /σ_1 - ημ^*_t ^2 tanθ + ( ημ^*_t^2 /σ_1 ) 500 η√(d^3)μ_t ^4 + 20 d ημ_t^2 μ^*_t^2 + ηϵ/ημ^*_t ^2
≤max( σ_2 /σ_1 - ημ^*_t ^2 tanθ, 500 η√(d^3)μ_t ^4 + 20 η d μ_t^2 μ^*_t^2 + ηϵ/μ^*_t^2 )
where the last inequality uses the fact that convex combinations of two values is less than the maximum of two values.
Finally, we obtain the following bound on the correlation between the ground truth and the final iterate of gradient descent:
For any h ∈ℕ, let μ^(h)_t denote the iterate after h empirical gradient steps with learning rate η = 1/20 starting from random initialization, where the empirical gradients are estimated from at least Θ(d^4 B^3/ϵ^2) samples. Let θ^(h) denote the angle between μ^(h)_t and μ^*_t. For any ϵ≲1/d^2 B^9, there exists H' ≲ B^6 log d such that for any H ≥ H', if
1/B^3≤μ_t^*≤1/B^2, we have
tanθ^(H)≲ 1 .
Denote the h-th iterate of gradient descent by μ^(h)_t. In Lemma <ref> we show that μ^(h)_t≤1/B^2 for all h. We would like to apply the bound in Lemma <ref> to argue that the angle with μ^*_t decreases when going from μ^(h)_t to μ^(h+1)_t. Using that 1/B^3≤μ^*_t≤1/B^2 and μ_t≤1/B^2, we can bound the quantity κ_1 that appears in Lemma <ref> by
κ_1 ≤1 - 3ημ_t^2/1 - 3ημ_t^2 + η/B^6(1 - 500√(d^3)/B^2 - 20d/B^2 - ϵ d B^9)
≤1/1 + η/B^6(1 - 500√(d^3)/B^2 - 20d/B^2 - ϵ d B^9)≤1/1 + η/2B^6 .
On the other hand, for B a sufficiently large polynomial in d, we can again use that 1/B^3≤μ^*_t≤1/B^2 and μ_t≤1/B^2 to bound the quantity κ_2 that appears in Lemma <ref> by
κ_2 ≤500η√(d^3)/B^2 + 20η d/B^4 + B^9η dϵ≲η/d .
As μ̂μ̂^* ≥1/2d, this implies |tanθ^(h)| ≤ 2d. Without loss of generality assume that tanθ^(h)≤ 2d.
By Lemma <ref>, for any h we either have tanθ^(h)≲η/d ≪ 1, in which case we are done as this bound will also hold for subsequent iterates, or tanθ^(h)≲ (1 + η/2B^6)^-1tanθ^(h-1). If the latter happens consecutively for H ≥log d/log(1 + η/2B^6) steps, then because (1 + η/2B^6)^-H = 1/d, the angle θ will satisfy tanθ≤ 2d· (1/d) ≲ 1. The proof is complete because, by hypothesis, H ≥4B^6log d/η≥log d/log(1 + η/2B^6) (the last inequality follows from log (1 + x) ≥x/2 for any 0 < x < 1).
When parameter μ_t satisfies μ_t≤1/B^2 for the noise scale t = O(log d) and μ'_t is the new parameter after performing a gradient descent update on the DDPM objective at noise scale t = O(log d), then parameter μ'_t satisfies μ'_t≤1/B^2.
When μ_t≤ 0.9 μ_t^*≤0.9/B^2, we have
μ_t' ≤μ_t + η F(μ_t, μ_t^* ) + η ( - ∇ L_t(s_μ_t) - F(μ, μ^* ) ) + ηϵ≤ (1 + 2 ημ_t^*^2 ) μ_t + 1/d B^9
≤ 1.05 μ_t + 1/d B^9≤1/B^2.
When μ_t≥ 0.9 μ_t^*, then maximum eigenvalue of F(μ_t, μ_t^* ) is negative. Therefore, μ'_t is less than 1/B^2. Specifically, we have
μ_t' ≤μ_t + η F(μ_t, μ_t^* ) + η ( - ∇ L_t(s_μ_t) - F(μ, μ^* ) ) + ηϵ
≤ (1 + η (2 μ_t^*^2 - 3 μ_t^2 ) ) μ_t + 1/d B^9≤ (1 - 0.01 μ_t^*^2 ) μ_t + 1/d B^9≤1/B^2.
§.§ Low noise regime - connection to EM algorithm
In the previous section we showed how to obtain a warm start by running gradient descent on the DDPM objective at high noise. We now focus on proving the contraction of μ_t - μ_t^* starting from this warm start, by running gradient descent at low noise. We first prove the contraction for population gradient descent and then, we argue that the empirical gradient descent concentrates well around the population gradient descent.
As before, we denote μ_t as the current iterate and μ'_t as the next iterate obtained by performing (population) gradient descent on the DDPM objective with step size η. We upper bound μ_t' - μ_t^* as follows:
μ_t' - μ_t^* = μ_t - η∇_μ_t L_t(s_μ_t) - μ_t^*
= (1 - η) (μ_t - μ_t^*) +η E_x ∼N(μ_t^*, 1)[ ( tanh (μ_t^⊤ x) - 1/2tanh”( μ_t^⊤ x ) μ_t ^2
+ tanh'( μ_t^⊤ x ) μ_t^⊤ x ) x ] - η E_x ∼N(μ_t^*, 1) [tanh'( μ_t^⊤ x ) μ_t] - ημ_t^*
≤ (1 - η) μ_t - μ_t^* +ηE_x ∼N(μ_t^*, 1) [ tanh (μ_t^⊤ x) x ] - μ_t^* + η G(μ_t, μ_t^*) ,
where
G(μ_t, μ_t^*) ≜E_x ∼N(μ_t^*, )[ - 1/2tanh”( μ_t^⊤ x ) μ_t ^2 x + (tanh'( μ_t^⊤ x ) μ_t^⊤ x) x - tanh'( μ_t^⊤ x ) μ_t ] .
Recall that E_x ∼N(μ_t^*, 1) [tanh (μ_t^⊤ x) x ] is the EM update for mixtures of two Gaussians (See Fact <ref>). If we can show that the G(μ_t,μ^*_t) term above is “contractive” in the sense that it is decreasing in μ_t - μ^*_t, then we can invoke existing results on convergence of EM to show that the distance between the current iterate and μ^*_t contracts in a single gradient step <cit.>. Our goal is thus to control G(μ_t, μ_t^*).
For this, we start with the 1D case in Lemma <ref>. We then extend to the multi-dimensional case in Lemma <ref>.
Let μ, μ^* > 0, and consider μ∈ [c, 4μ^*/3] for some constant c. In this one-dimensional case, the function G specializes to
G(μ, μ^*) = E_x ∼N(μ^*, 1)[ - 1/2tanh”( μ x ) μ^2 x + tanh'( μ x ) μ x^2 - tanh'( μ x ) μ] ,
and we have
G(μ, μ^*) ≤ 0.01 μ - μ^*
The proof uses the fact that the function G only contains first- or higher-order derivatives of the tanh function, and all the derivatives of tanh decay exponentially quickly as μ increases. Therefore, when μ is at least a constant, we obtain the result. The complete proof of Lemma <ref> is given in Appendix <ref>.
For any noise scale t, when the current parameter at noise scale t, μ_t, satisfies μ_t ∈ [c, 4 μ̂_t μ_t^* /3] for some sufficiently large constant c, then the following inequality holds:
G(μ_t, μ_t^*)≤ 0.01 μ_t - μ_t^*
Suppose {v_1, v_2, …, v_d} are d orthonormal directions such that v_1 = μ̂_t and v_2 is either of the two unit vectors _t which are orthogonal to μ̂_t in the plane of μ_t and μ_t^*. Recall that
G(μ_t, μ_t^*) = E_x ∼N(μ_t^*, )[ - 1/2tanh”( μ_t^⊤ x ) μ_t ^2 x + (tanh'( μ_t^⊤ x ) μ_t^⊤ x) x - tanh'( μ_t^⊤ x ) μ_t ]
= E_x ∼N(0, I)[ - 1/2tanh”( μ_t^⊤ (x + μ_t^*) ) μ_t ^2 (x+ μ_t^*)
+ tanh'( μ_t^⊤ (x+ μ_t^*) ) (μ_t^⊤ (x + μ_t^*)) (x + μ_t^*) - tanh'( μ_t^⊤ (x + μ_t^*) ) μ_t ]
= E_α_1, α_2, …, α_d ∼N(0, 1)[ - 1/2tanh”( μ_t ( α_1 + μ̂_t^⊤μ_t^*) ) μ_t ^2 ( ∑_i α_i v_i + μ_t^*)
+ tanh'( μ_t ( α_1 + μ̂_t^⊤μ_t^*) ) μ_t ( α_1 + μ̂_t^⊤μ_t^*) ( ∑_i α_i v_i + μ_t^*)
- tanh'( μ_t ( α_1 + μ̂_t^⊤μ_t^*) ) μ_t ] ,
where in the last equality we rewrote x ∼N(0, I) as ∑_i=1^d α_i v_i for α_i ∼N(0, 1). Therefore, we have
μ̂_̂t̂ G(μ_t, μ_t^*)
= E_α_1, α_2, …, α_d ∼N(0, I)[ - 1/2tanh”( μ_t (α_1 + μ̂_t^⊤μ_t^*) ) μ_t ^2 (α_1 + μ̂_̂t̂^⊤μ_t^*)
+ tanh'( μ_t (α_1 + μ̂_̂t̂^⊤μ_t^*) ) μ_t (α_1 + μ̂_t^⊤μ_t^*)^2 - tanh'( μ_t (α_1 + μ̂_̂t̂^⊤μ_t^*) ) μ_t ]
= E_α_1 ∼N(μ̂^⊤_t μ_t^*, 1)[ -1/2tanh”(μ_tα_1) μ_t^2 α_1 + tanh'( μ_tα_1 ) μ_tα_1^2 - tanh'( μ_tα_1 ) μ_t ] .
By taking μ_t to be μ and ⟨μ̂_t, μ^*_t⟩ to be μ^*, we observe the similarity between the right side of the above equation and the one-dimensional definition of G defined in Eq. (<ref>). Using Lemma <ref> and if μ_t∈ [c, 4 μ̂_t μ_t^* /3], we have
μ̂_̂t̂ G(μ_t, μ_t^*) ≤ 0.01 μ̂_̂t̂μ_t - μ̂_̂t̂μ_t^*
Taking the dot product of G(μ_t, μ_t^*) with v_2 = _t, we have
_t G(μ_t, μ_t^*) = E_α_1, α_2, …, α_d ∼N(0, 1)[ - 1/2tanh”( μ_t ( α_1 + μ̂_t^⊤μ_t^*) ) μ_t ^2 ( α_2 + _t μ_t^* )
+ tanh'( μ_t ( α_1 + μ̂_t^⊤μ_t^*) ) μ_t ( α_1 + μ̂_t^⊤μ_t^*) ( α_2 + _t μ_t^* ) ]
= E_α_1 ∼N( μ̂_t^⊤μ_t^* , 1)[ - 1/2tanh”( μ_t α_1 ) μ_t ^2 _t μ_t^*
+ tanh'( μ_t α_1 ) μ_t α_1 _t μ_t^*]
= _t μ_t^* E_α_1 ∼N( μ̂_t^⊤μ_t^* , 1)[ - 1/2tanh”( μ_t α_1 ) μ_t ^2 + tanh'( μ_t α_1 ) μ_t α_1 ] .
In Lemma <ref> below, we show that when μ_t ∈ [c, 4 μ̂_t μ_t^* /3], the expectation in the last expression is upper bounded by 0.01. Therefore, we have
| v_2^⊤ G(μ_t, μ_t^*) | ≤ 0.01 | v_2^⊤μ_t^* | ⟹ | v_2^⊤ G(μ_t, μ_t^*) | ≤ 0.01 | v_2^⊤ (μ_t - μ_t^*) |
Observe that for i=3,…,d, G(μ_t, μ_t^*) v_i = 0. Therefore, we have
G(μ_t, μ_t^*)^2 = ∑_i=1^d v_i G(μ_t, μ_t^*) ^2 ≤ 0.01^2 μ_t - μ_t^* ^2 .
The next Lemma ensures that the parameter μ_t after a few steps of gradient descent on the DDPM objective stays in the region where the function G satisfies G(μ_t, μ_t^*)≤ 0.01 μ_t - μ_t^*. Recall that the condition of the Lemma is satisfied because we initialize at the warm start obtained by gradient descent in the high noise regime.
Suppose the angle between initialization μ̂_t^(0) and optimal parameter μ_t^* is Θ(1), then for any h, we have μ_t^(h)∈ [c, 4 μ̂_t^(h)μ_t^* /3].
The proof of Lemma <ref> is given in Appendix <ref>. Finally, we are ready to prove the main result of this section:
To obtain the contraction of μ_t^(h) - μ_t^* after a gradient descent step on the DDPM objective, we write μ_t^(h+1) - μ_t^* in terms of μ_t^(h) - μ_t^* as follows:
μ_t^(h+1) - μ_t^* = μ_t^(h) - η∇ L_t(s_μ_t^(h)) - μ_t^* + η( 1/n∑_i=1^n ∇ L_t(s_μ_t^(h) (x_i, z_i) ) ) - ∇ L_t(s_μ_t^(h) )
≤ (1 - η) μ_t^(h) - μ_t^* +η E_x ∼N(μ_t^*, 1) [ (tanh (μ_t^(h)^⊤ x) ) x ] - μ_t^* + η G(μ_t^(h), μ_t^*) + ηϵ ,
where in the last step we used Lemma <ref> below to bound the distance between the population and empirical gradient.
Recall that gradient descent in the low noise regime was initialized using the output of the gradient descent in the high noise regime. Therefore, μ̂_t^(0)μ̂_t^*≳ 1. Using Lemma <ref>, we know that the condition on Lemma <ref> is always satisfied. Using the contractivity of G established in Lemma <ref> combined with <cit.>, and choosing η = 0.05, we conclude that the distance to the ground truth contracts:
μ_t^(h+1) - μ_t^* ≤ (1 - 0.05) μ_t^(h) - μ_t^* + 0.01 μ_t^(h) - μ_t^* + 0.01 μ_t^(h) - μ_t^* + ηϵ
≤ 0.97 μ_t^(h) - μ_t^* + ηϵ.
Applying the above for all h ∈ [H], we obtain
μ_t^(H) - μ_t^* ≤ 0.97^H μ_t^(0) - μ_t^* + 50ϵ.
The choice of H given in the Theorem statement proves the result.
§ LEARNING MIXTURES OF TWO GAUSSIANS WITH SMALL SEPARATION
In this section, we extend the analysis for learning mixtures of two Gaussians with constant separation, provided in Section <ref>, to the low-separation regime and prove the following:
For any L > 0, let q be a mixture of two Gaussians (in the form of Eq. (<ref>)) with mean parameter μ^* satisfying ‖μ^*‖ > L. Recalling that B denotes an a priori upper bound on ‖μ^*‖, we have that for any ϵ≤ϵ', where ϵ' ≲1/d^2 B^9, there exists a procedure satisfying the following. If the procedure is run for at least poly(d, B, 1/L)·(1/ε^3) iterations with at least poly(d, B, 1/L)·(1/ε^8) samples from q, then it outputs μ such that ‖μ - μ^*‖ ≤ϵ with high probability.
As described in Section <ref>, the algorithm is a simple modification of Algorithm <ref> in which gradient descent is replaced by projected gradient descent. We start in Lemma <ref> by showing that the projection step in the algorithm ensures that the norm of the current iterate μ_t is approximately that of μ^*_t. Then in Lemma <ref>, we extend the analysis of Lemma <ref> to show that every projected gradient step contracts the distance to the ground truth. Combined with Lemma <ref>, this allows us to conclude the proof of Theorem <ref>.
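For completeness, the projection step referred to above is just Euclidean projection onto the ball of radius R; a one-line sketch (our own naming):

import numpy as np

def project_to_ball(mu, R):
    # Euclidean projection of mu onto the ball of radius R (the estimate of ||mu*||).
    norm = np.linalg.norm(mu)
    return mu if norm <= R else (R / norm) * mu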
Let x_1,…,x_n be independent samples from q, and define the radius parameter R by R^2 ≜ (1/n)∑^n_i=1 ‖x_i‖^2 - d. For any ε > 0, provided that n ≳ (B^4 + d^2)/(ε^2 L^2),
we have |R - ‖μ^*‖| ≤ε with high probability.
Observe that we can write the random variable corresponding to the mixture of two Gaussians X_0 = X = Z + p μ^* where Z∼N(0, I) and p is a Rademacher random variable. Using Theorem 3.1.1 (concentration of norms) from <cit.>, we know that Z - √(d)_ψ_2≲ 1. Therefore, sub-Gaussian norm X_0_ψ_2≲Z_ψ_2 + p μ^*_ψ_2≲ B + √(d). Using Lemma 2.7.4 from <cit.>, we have X_0^2 _ψ_1≲X_0_ψ_2^2 ≲ B^2 + d. Therefore, using number of samples n specified in the Lemma statement, with high probability, we have
| 1/n∑_i=1^n ‖x_i‖^2 - E[‖X_0‖^2 ] | ≤εL ⟹ | R^2 - ‖μ^*‖^2 | ≤εL ⟹ | R - ‖μ^*‖ | ≤ε
where the penultimate implication uses the fact that E_X_0[ X_0^2 ] = E[ Z^2 + μ^*^2 ] = d + μ^*^2.
Assume that L ≤ ‖μ^*‖ ≤ B. Then, for any small ε > 0, running projected GD on the DDPM objective with step size η = 1/20 at noise scale t = log(d/ε) for a number of steps H > H' with a number of samples n > n' will achieve
‖μ^(H) - μ^*‖ ≲ d^2 B^4 ε,
where H' = d^2/(L^2 ε^3) and n' = d^10 B^3/(ε^8 L^6).
Recalling that μ^*_t = μ^*_0 exp(-t), note that for t = log(d/ε) we have εL/d ≤ ‖μ_t^*‖ ≤ ε B/d.
We would like to apply Lemma <ref>. Note that we may apply this even though it is only stated for gradient descent (without projection). The reason is that it bounds the change in angle between the iterate and the ground truth after a single gradient step, and this angle is unaffected by projection.
Suppose we take one projected gradient step with learning rate η starting from an iterate μ_t. As μ_t was the result of a projection, by Lemma <ref> we have εL/d≲μ_t^(h)≲ε B/d.
We now bound κ_2 in Lemma <ref>:
κ_2 = 500 η√(d^3)μ_t ^4 + 20 η d μ_t ^2 μ_t^*^2 + ηϵ/μ_t^* ^2
≲ 500 η√(d^7)μ_t ^2 + 20 η d μ_t ^2 + d^2 ϵ/μ_t^*^3
≤ 550 d^7/2 B^2 exp(-2t) + d^5 ϵ/ε^3 L^3
≲ d^2 B^2 ε,
where the last inequality follows by choosing population gradient estimation error parameter ϵ = ε^4 L^3 / d^3 with the number of samples n' = d^11 B^6/ϵ^8 L^6. Additionally, κ_1 in Lemma <ref> is given by
κ_1 = 1 - 3 ημ_t^2 / (1 -3ημ_t ^2) + η( μ_t^*^2 - 500 √(d^3)μ_t ^4 - 20 d μ_t^2 μ_t^*^2 - ϵ )
= 1 - 3 ημ_t^2 / (1 -3ημ_t ^2) + ημ_t^*^2 ( 1 - κ_2 )
≲ 1 - 3 ημ_t^(h)^2 / (1 -3ημ_t^(h)^2) + ημ_t^*^2 ( 1 - d^2 B^2ε )
≤ 1 / 1 + L^2 ε^2/20 d^2 ( 1 - d^2 B^2ε ) .
Using bounds on κ_1 and κ_2 and Lemma <ref>, we conclude that if θ (resp. θ') is the angle between μ_t (resp. the next iterate of projected gradient descent after μ_t) and μ^*_t
tanθ' ≤max( 1 / 1 + L^2 ε^2/20 d^2 ( 1 - B^2ε ) tanθ, d^2 B^2 ε) .
Doing projected gradient descent for H = 20 d^2/L^2 ϵ^3 steps, if θ^(h) denotes the angle between the h-th iterate and μ^*_t, we obtain
tanθ^(H) ≤tanθ^(h+1)≤max( ( 1 / 1 + L^2 ε^2/20 d^2 ( 1 - d^2 B^2ε ) )^H tanθ^(0), d^2 B^2 ε)
≤max( tanθ^(0)/ 1 + H L^2 ε^2/20 d^2 ( 1 - B^2ε ) , d^2 B^2 ε) ≤ d^2 B^2 ε ,
where the last inequality uses 1 + H L^2 ε^2/20 d^2 ( 1 - B^2ε ) ≥ 1/ε for ε ≲ 1/B^3. Additionally, for a random initialization, Lemma <ref> shows that cosθ^(0) ≥ 1/2d, which implies tanθ^(0) ≤ √(sec^2θ^(0) - 1) ≲ d. Using Lemma <ref>, we have ‖μ^(H)‖ ≥ ‖μ^*‖ - ϵ, which implies -2‖μ^(H)‖ ‖μ^*‖ cosθ^(H) ≤ -2‖μ^*‖^2 cosθ^(H) + 2B ϵ and ‖μ^(H)‖^2 ≤ ‖μ^*‖^2 + 3 B ϵ. Using this result, we obtain
μ^(H) - μ^* ^2 = μ^(H)^2 + μ^* ^2 - 2 μ^(H)μ^* cosθ^(H)
≲ 2 μ^* ^2 - 2 μ^* ^2 cosθ^(H) + 5 B ε≲ 2B^2 (1 - 1/√(1 + d^4 B^4 ε^2)) + 5 B ε≲ d^2 B^4 ε,
where the last inequality follows from the fact that √(1+x)≤ 1+√(x) for any x > 0.
§ LEARNING MIXTURES OF K GAUSSIANS FROM A WARM START
In this section, we provide details about our main result on learning mixtures of K Gaussians. We start by describing our main theorem in this case.
Let q be a mixture of Gaussians (in the form of Eq. (<ref>)) with center parameters θ^* = {μ_1^*, μ_2^*, …, μ_K^*}∈R^d satisfying the separation Assumption <ref>, and suppose we have estimates θ for the centers such that the warm initialization Assumption <ref> is satisfied. For any ϵ > ϵ_0 and noise scale t where
ϵ_0 = 1 / poly(d) and t = Θ(ϵ) ,
gradient descent on the DDPM objective at noise scale t' (Algorithm <ref>) outputs θ = {μ_1, μ_2, …, μ_K } such that min_i μ_i - μ_i^* ≤ϵ with high probability. The algorithm runs for H ≥ H' iterations and uses n ≥ n' number of samples where
H' = Θ(log( ϵ^-1log d)) and n'= Θ(K^4 d^5 B^6 / ϵ^2) .
We first give an overview of the proof for population gradient descent, and then show that the empirical gradients concentrate well around the population gradients. We start by simplifying the population gradient update for mixtures of K Gaussians using Stein's lemma in Lemma <ref>, which yields
- ∇_μ_1,t L_t( s_θ_t ) = E [ w_1, t(X_t) (X_t - μ_1, t) ] + [extra terms] ,
recalling the notation of Eq. (<ref>).
As discussed in the body of the paper, E [ w_1, t(X_t) (X_t - μ_1, t) ] is precisely the update for the gradient EM algorithm (see Fact <ref>) and known results for the latter <cit.> can be used to show that the distance μ_1, t - μ_1, t^* contracts in each step when the separation Assumption <ref> and the warm initialization Assumption <ref> are satisfied. Therefore, showing that the “extra terms” do not disturb the progress coming from the gradient EM update is sufficient. We prove that the “extra terms” are 1/poly(d) in Lemma <ref> when the separation Assumption <ref> and warm initialization Assumption <ref> hold.
The intuition behind Lemma <ref> is as follows: We start with a key observation that each of the “extra terms” either contains w_1, t(X_t)(1 - w_1, t(X_t)) or w_1, t(X_t) w_j, t(X_t) where j ≠ 1. Note that the w_1, t(X_t) can be interpreted as the conditional probability of the underlying component being N(μ_1, t, I) given X_t. When Assumption <ref> and Assumption <ref> are satisfied, Proposition 4.1 of <cit.> shows that
E_X_t ∼N(μ_1, t^*, I)[ w_j, t(X_t) ] ≲ 1/poly(d) for any j ≠ 1 .
This result can be extended to show both E_X_t [w_1, t(X_t)(1 - w_1, t(X_t)) ] ≲ 1/poly(d) as well as E_X_t[ w_1, t(X_t) w_j, t(X_t)]≲ 1/poly(d) for any j ≠ 1 (see Lemma <ref> for the proof). Using these bounds, we conclude that [“extra terms”] ≲ 1/poly(d) in Lemma <ref>.
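To fix notation in a computable form, the following numpy sketch (our own illustration; the helper names are not from the paper) evaluates the posterior weights w_{i,t}, the mixture score, and the empirical gradient-EM direction E[w_{i,t}(X_t)(X_t - μ_{i,t})] that the DDPM gradient tracks. The softmax form of the weights uses the fact that all components share the identity covariance.

import numpy as np

def weights(x, mus):
    """w_{i,t}(x) for an equal-weight mixture of unit-covariance Gaussians with
    centers given by the rows of mus: softmax of -||x - mu_i||^2 / 2."""
    logits = mus @ x - 0.5 * np.sum(mus ** 2, axis=1)
    logits -= logits.max()                  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def score(x, mus):
    """Mixture score s_theta(x) = sum_i w_i(x) mu_i - x."""
    return weights(x, mus) @ mus - x

def gradient_em_direction(samples_t, mus_t, i=0):
    """Empirical estimate of E[ w_{i,t}(X_t) (X_t - mu_{i,t}) ]."""
    return np.mean([weights(x, mus_t)[i] * (x - mus_t[i]) for x in samples_t],
                   axis=0)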
§.§ EM and population gradient descent on DDPM objective
We begin by writing out the gradient update explicitly:
For any noise scale t > 0, the gradient of the population DDPM objective E [ L_t( s_θ_t(X_t) ) ] with respect to parameter μ_1, t is given by
∇_μ_1, t L_t( s_θ_t ) = E[ - w_1, t(X_t) (X_t - μ_1, t) + w_1, t(X_t) (X_t - μ_1, t) ∑^K_i=1 w_i, t(X_t) μ_i, t^⊤ (X_t - μ_1, t)
+ w_1, t(X_t) μ_1, t - w_1, t(X_t) (X_t - μ_1, t)^⊤μ_1, t (X_t - μ_1, t) - w_1, t(X_t) ∑^K_i=1 w_i, t(X_t) μ_i, t
- w_1, t(X_t) ∑^K_i=1∇_x w_i, t (X_t)^⊤μ_i, t (X_t - μ_1, t) ]
where w_1, t(x) and μ_1, t are defined in Eq. (<ref>).
Recall that the score function of mixture of Gaussians is given by
s_θ_t(X_t) = ∑_i w_i, t( X_t ) μ_i, t - X_t
Finding the gradient ∇_μ_1, t w_i,t(X_t), we have
∇_μ_1, t w_i, t(X_t) = w_1, t(X_t) (1 - w_1,t (X_t)) (X_t - μ_1, t) if i=1
- w_1, t(X_t) w_i, t(X_t) (X_t-μ_1, t) otherwise.
The gradient of the score function is given by
∇_μ_1, t s_θ_t(X_t) = ∇_μ_1, t w_1, t(X_t) μ_1,t + ∑_i=2^K ∇_μ_1, t w_i, t(X_t) μ_i, t
= w_1, t(X_t)(1 - w_1, t(X_t)) μ_1, t (X_t - μ_1, t)^⊤ + w_1, t(X_t) I - w_1, t(X_t) ∑_i=2^K w_i, t(X_t) μ_i, t (X_t - μ_1, t)^⊤
= w_1, t(X_t) μ_1, t (X_t - μ_1, t)^⊤ + w_1, t(X_t) I - w_1, t(X_t) ∑_i=1^K w_i, t(X_t) μ_i, t (X_t - μ_1, t)^⊤ .
The gradient of 1/2s_θ_t^2 is given by
1/2∇s_θ_t(X_t)^2 = ∑_j=1^d [ s_θ_t(X_t) ]_j [∇_μ_1, t s_θ_t(X_t) ]_j = ∇_μ_1, t s_θ_t(X_t)^⊤ s_θ_t(X_t)
where [∇_μ_1, t s_θ_t(X_t) ]_j is j^th row of ∇_μ_1, t s_θ_t(X_t) .
The gradient of this is given by
∇_μ_1,t s_θ_t(X_t)^⊤ Z_t/β_t = 1/β_t ( w_1, t(X_t) (X_t - μ_1, t) μ_1, t^⊤ Z_t + w_1, t(X_t) Z_t
- w_1, t(X_t) ∑_i=1^K w_i, t(X_t) (X_t - μ_1, t) μ_i, t^⊤ Z_t )
Applying Stein's lemma to the expectation of the first term in Eq. (<ref>), we have
E_X_0, Z_t [ w_1, t(X_t) (X_t - μ_1, t) μ_1, t^⊤ Z_t ] = ∑_j=1^d E_X_0, Z_t [ w_1, t(X_t) (X_t - μ_1, t) μ_1, t, j Z_t, j ]
= ∑_j=1^d E_X_0, Z_t [ w_1, t(X_t) β_t e_j μ_1, t, j + β_t ∇_x w_1, t(X_t)^⊤ e_j (X_t - μ_1, t) μ_1, t, j ]
= E_X_0, Z_t [ w_1, t(X_t) β_t μ_1, t + β_t ∇_x w_1, t(X_t)^⊤μ_1, t (X_t - μ_1, t) ]
The expectation of the second term in Eq. (<ref>) simplifies to β_t E_X_t[ ∇_x w_1, t(X_t) ] by Stein's Lemma. Each summand in the third term in Eq. (<ref>) simplifies as following:
E_X_0, Z_t w_1, t(X_t) w_i, t(X_t) (X_t - μ_1, t) μ_i, t^⊤ Z_t
= ∑_j=1^d E_X_0, Z_t w_1, t(X_t) w_i, t(X_t) (X_t - μ_1, t) μ_i, t, j Z_t, j
= ∑_j μ_i, t, jE_X_0, Z_t[ w_1, t(X_t) w_i, t(X_t) β_t e_j + β_t w_1, t(X_t) ∇_x w_i, t (X_t)^⊤ e_j (X_t - μ_1, t)
+ β_t ∇_x w_1, t(X_t)^⊤ e_j w_i, t(X_t) (X_t - μ_1, t)]
= β_t E_X_0, Z_t[ w_1, t(X_t) w_i, t(X_t) μ_i, t + w_1, t(X_t) ∇_x w_i, t (X_t)^⊤μ_i, t (X_t - μ_1, t)
+ ∇_x w_1, t(X_t)^⊤μ_i, t w_i, t(X_t) (X_t - μ_1, t)]
Combining the gradients of all the terms of Eq. (<ref>), we have
∇_μ_1,t L_t( s_θ_t )
= E[ w_1, t(X_t) (X_t - μ_1, t) μ_1, t^⊤ s_θ_t(X_t) + w_1, t(X_t) s_θ_t(X_t) - w_1, t(X_t) (X_t - μ_1, t) ∑_i w_i, t(X_t) μ_i, t^⊤ s_θ_t(X_t)
+ ∇_x w_1, t(X_t) + w_1, t(X_t) μ_1, t + ∇_x w_1, t(X_t)^⊤μ_1, t (X_t - μ_1, t) - w_1, t(X_t) ∑_i w_i, t(X_t) μ_i, t
- w_1, t(X_t) ∑_i ∇_x w_i, t (X_t)^⊤μ_i, t (X_t - μ_1, t) - ∑_i ∇_x w_1, t(X_t)^⊤μ_i, t w_i, t(X_t) (X_t - μ_1, t) ]
= E[ - w_1, t(X_t) (X_t - μ_1, t) + w_1, t(X_t) (X_t - μ_1, t) ∑_i w_i, t(X_t) μ_i, t^⊤ (X_t - μ_1, t)
+ w_1, t(X_t) μ_1, t - w_1, t(X_t) (X_t - μ_1, t)^⊤μ_1, t (X_t - μ_1, t) - w_1, t(X_t) ∑_i w_i, t(X_t) μ_i, t
- w_1, t(X_t) ∑_i ∇_x w_i, t (X_t)^⊤μ_i, t (X_t - μ_1, t) ] ,
where the last equality uses Lemma <ref>. Specifically, it uses
∇_x w_1, t(X_t) + w_1, t(X_t) s_θ_t(X_t) = - w_1, t(X_t) (X_t - μ_1, t)
(∇_x w_1, t(X_t) + w_1, t(X_t)s_θ_t(X_t) )^⊤μ_1, t (X_t - μ_1, t) = -w_1, t(X_t)( X_t - μ_1,t )^⊤μ_1, t (X_t - μ_1, t) .
We will also need the following intermediate calculation:
For any i ∈ [K], the gradient of w_i, t(X_t) with respect to X_t is given by
∇_x w_i, t(X_t) = -w_i, t(X_t) (X_t - μ_i, t) - w_i, t(X_t) s_θ_t(X_t)
= - w_i, t(X_t) (1 - w_i, t(X_t) ) (X_t - μ_i, t) + w_i, t(X_t) ·∑_j∈[K]: j≠ i w_j, t(X_t) (X_t - μ_j, t ) .
By taking the gradient of w_i, t(X_t) and simplifying it, we get the result:
∇_x w_i, t(X_t) = - exp( -X_t - μ_i, t^2 / 2 ) (X_t - μ_i, t) /∑_j=1^K exp( -X_t - μ_j, t^2 / 2σ^2 )
+ exp( -X_t - μ_i, t^2 / 2 )·∑_j=1^K exp( -X_t - μ_j, t^2 / 2 ) (X_t - μ_j, t) /∑_j=1^K exp( -X_t - μ_j, t^2 / 2 ) ^2
= -w_i, t(X_t) (X_t - μ_i, t) + w_i, t(X_t) ∑_j=1^K w_j, t(X_t) (X_t - μ_j, t )
= - w_i, t(X_t) (1 - w_i, t(X_t) ) (X_t - μ_i, t) + w_i, t(X_t) ∑_j=1, j≠ i^K w_j, t(X_t) (X_t - μ_j, t ) .
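The closed form above is easy to sanity-check numerically. The snippet below is an illustrative check we add here (not part of the argument); it compares the formula against a central finite difference, and the softmax weight computation is repeated so that the snippet is self-contained.

import numpy as np

def weights(x, mus):
    logits = mus @ x - 0.5 * np.sum(mus ** 2, axis=1)
    logits -= logits.max()
    w = np.exp(logits)
    return w / w.sum()

def grad_x_weight(x, mus, i):
    """Formula from the lemma:
    -w_i(1 - w_i)(x - mu_i) + w_i * sum_{j != i} w_j (x - mu_j)."""
    w = weights(x, mus)
    cross = sum(w[j] * (x - mus[j]) for j in range(len(mus)) if j != i)
    return -w[i] * (1.0 - w[i]) * (x - mus[i]) + w[i] * cross

rng = np.random.default_rng(0)
mus, x, i, eps = rng.normal(size=(3, 4)), rng.normal(size=4), 0, 1e-6
numeric = np.array([(weights(x + eps * e, mus)[i] - weights(x - eps * e, mus)[i]) / (2 * eps)
                    for e in np.eye(4)])
assert np.allclose(numeric, grad_x_weight(x, mus, i), atol=1e-5)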
We are now ready to establish the connection between gradient descent on the DDPM objective and the gradient EM update, for mixtures of K Gaussians:
Suppose the centers of the mixture of K Gaussians are well-separated according to Assumption <ref>, and the parameters θ = {μ_1, μ_2, …, μ_K } that the student network is initialized to satisfy the warm start Assumption <ref>. Then, for noise scale t = O(1), gradient descent on the DDPM objective is close to the gradient EM update:
∇_μ_1,t L_t( s_θ_t ) + E [ w_1, t(X_t) (X_t - μ_1, t) ] ≲K^2 B^2/ d^ c_r^2/4000 = 1/poly(d) ,
where c_r is a large constant.
Observe that the first term in the expression for the population gradient of the DDPM objective in Lemma <ref> is exactly the gradient EM update for the mixture of K Gaussian in Fact <ref>. To prove the closeness between the GD update and the gradient EM update, we will show that the additional terms in Lemma <ref> are small.
Note that when the ground truth parameters θ^* = {μ_1^*, μ_2^*, …, μ_K^* } satisfy Assumption <ref>, θ_t^* also satisfies Assumption <ref> for t = O(1). Similarly, it is straightforward to show that when the parameters θ satisfy Assumption <ref>, θ_t = {μ_1, t, μ_2, t, …, μ_K, t} also satisfies the assumption.
We focus on the d ≤ K case for this proof. A similar calculation with projection onto O(K) dimensional subspace of μ_i,t^* will give the result for d ≥ K case <cit.>.
Using Lemma <ref> below, we have
E[ w_1, t(X_t)(1 - w_1, t(X_t)) (X_t - μ_1,t)(X_t - μ_1,t)^⊤] μ_1, t≤d^2 c_r^2 B/ d^ c_r^2/1000 ,
for any i ∈ [K]. We can simplify additional terms as
∑_i=2^K E [ w_1, t(X_t) w_i, t(X_t) (X_t - μ_1, t) (X_t - μ_1,t)^⊤μ_i, t ]
≤∑_i=2^K E [ w_1, t(X_t) w_i, t(X_t) (X_t - μ_1, t) (X_t - μ_1,t)^⊤μ_i, t ]
≤∑_i=2^K √(E[ | w_1, t(X_t) w_i, t (X_t) |^2 ] ·E[ (X_t - μ_1, t) (X_t - μ_1,t)^⊤μ_i, t^2 ] )
≤K B^2/ d^ c_r^2/2000 ,
where in the last step we used the second part of Lemma <ref>.
This will allow us to prove that E[w_1, t(X_t) (X_t - μ_1, t) ∑_i=1^K w_i, t(X_t) μ_i, t^⊤ (X_t - μ_1, t) - w_1, t(X_t) (X_t - μ_1, t)^⊤μ_1, t (X_t - μ_1, t) ] is small.
Using the expression for ∇_x w_i, t (X_t) from Lemma <ref>, we have
∑_i=1^K w_1, t(X_t) ∇_x w_i, t (X_t)^⊤μ_i, t (X_t - μ_1, t)
= - ∑_i=1^K w_1,t(X_t) w_i, t(X_t) (1 - w_i, t(X_t) ) (X_t - μ_1, t) (X_t - μ_i, t)^⊤μ_i, t
+ ∑_i=1^K ∑_j=1, j≠ i^K w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) (X_t - μ_1, t ) (X_t - μ_j, t )^⊤μ_i, t .
The first term can be simplified as follows:
∑_i=1^K E[ w_1,t(X_t) w_i, t(X_t) (1 - w_i, t(X_t) ) (X_t - μ_1, t) (X_t - μ_i, t)^⊤μ_i, t]
≤∑_i=1^K E[ w_1,t(X_t) w_i, t(X_t) (1 - w_i, t(X_t) ) (X_t - μ_1, t) (X_t - μ_i, t)^⊤μ_i, t]
≤∑_i=2^K √(E [ w_1,t(X_t)^2 w_i, t(X_t)^2 ] ·E[ (1 - w_i, t(X_t) )^2 · X_t - μ_1, t^2 · X_t - μ_i, t^2 ·μ_i, t^2 ] )
≲K B^2/ d^ c_r^2/4000 ,
where the last inequality follows from
E[ X_t - μ_1, t^2 X_t - μ_i, t^2 ] ≤√(E[ X_t - μ_1, t^4] E[ X_t - μ_i, t^4] )≲ B^2 .
Similarly, by simplifying the second term, we get
∑_i=1^K ∑_j=1, j≠ i^K E[ w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) (X_t - μ_1, t ) (X_t - μ_j, t )^⊤μ_i, t]
≤∑_i=1^K ∑_j=1, j≠ i^K √(E[ w_i, t^2(X_t) w_j, t^2(X_t) ] E[ w_1, t^2(X_t) (X_t - μ_1, t ) (X_t - μ_j, t ) μ_i, t^2 ] )≲K^2 B^2/ d^ c_r^2/4000 ,
where the last inequality uses Lemma <ref>. Simplifying the following term using Lemma <ref>, we have
E[ w_1, t(X_t) μ_1, t - w_1, t(X_t) ∑_i=1^K w_i, t(X_t) μ_i, t ]
≤∑_i=2^K E[ w_1, t(X_t) w_i, t(X_t) μ_i, t] + ∑_i=2^K E[ w_1, t(X_t) w_i, t(X_t) μ_1, t] ≤2 K B/ d^ c_r^2/200 .
Combining all the results, we obtain the theorem statement.
The above proof made use of the following two helper lemmas which follow from prior work analyzing EM for learning mixtures of Gaussians:
There is some absolute constant c_r > 0 for which the following holds. For any θ = {μ_1, μ_2, …, μ_K } such that μ_i - μ_i^* ≤c_r/4√(log d) for all i ∈ [K] and any j such that j ≠ i, we have
E_X_t ∼N( μ^*_i, t, I ) [ w_j, t(X_t) ] ≤1/ d^ c^2_r / 100 .
Additionally, for any j ≠ k such that j ∈ [K] and k ∈ [K], we have
E_X_t[ w_j, t(X_t) w_k, t(X_t) ] ≤1/ d^ c^2_r / 200 .
Using Proposition 4.1 from <cit.>, for any θ = {μ_1, μ_2, …, μ_K } such that μ_i - μ_i^* ≤c_r/4√(log d) for all i ∈ [K] and j ≠ i, we have
E_X_t ∼N( μ^*_i, t, I ) [ w_j, t(X_t) ] ≤1/ d^ c^2_r / 100 .
Computing the expectation of the product of the weights w_j,t and w_k,t for any distinct j,k, we have
E_X_t[ w_j, t(X_t) w_k, t(X_t) ] = ∑_i=1^K 1/KE_x ∼N( μ^*_i, I )[ w_j, t(x) w_k, t(x) ]
≤1/K∑_i=1^K √(E_x ∼N( μ^*_i, I )[ w_j, t(x)^2 ] E_x ∼N( μ^*_i, I )[ w_k, t(x)^2 ] )
≤1/ d^ c_r^2/200
where the last inequality uses the fact that either i ≠ j or i ≠ k and w_j, t(x)^2 ≤ w_j, t(x) ≤ 1.
Suppose X is distributed according to a mixture of K Gaussians with centers θ^* = {μ^*_1,…,μ^*_K} as in Eq. (<ref>). For any θ = {μ_1, μ_2, …, μ_K } such that μ_i - μ_i^* ≤c_r/4√(log d) for all i ∈ [K], then for any distinct i, j ∈ [K], we have
E_X[ w_i(X, μ) (1 - w_i(X, μ)) (X - μ_i)(X - μ_i)^⊤ ] _𝗈𝗉 ≤d^2 c_r^2/d^c_r^2/1000
E_X[ w_i(X, θ) w_j(x, θ) (X - μ_i) (X - μ_j)^⊤ ] _𝗈𝗉 ≤d^2 c_r^2/d^c_r^2/1000
§.§ Closeness between population gradient descent and empirical gradient descent
In this section, we show that the population gradient descent on the DDPM objective is close to the empirical gradient descent for mixtures of K Gaussians.
For any ϵ that is Θ(1/poly(d)) and noise scale t > t' where t' ≲ 1, the empirical estimate of gradient descent update on the DDPM objective with the number of samples n > n' concentrates well to the population gradient descent update where n' = O(K^4 d^5 B^6/ϵ^2). More specifically, the following inequality holds with probability at least 1 - exp(-d^0.99):
∇_μ_1,t( 1/n∑_i=1^n L_t(s_θ_t( x_i, 0, z_i, t )) ) - ∇_μ_1,t L_t(s_θ_t ) ≤ϵ.
Recall that the population gradient is given by
∇_μ_1,t L_t(s_θ_t) = E[ 1/2∇_μ_1,t s_θ_t(X_t) ^2 + ∇_μ_1,t s_θ_t(X_t)^⊤ Z_t/β_t ] ,
where
E[ 1/2∇_μ_1,t s_θ_t(X_t) ^2 ] = E[ ( w_1, t(X_t) (X_t - μ_1, t) μ_1, t^⊤ + w_1, t(X_t) ·
- w_1, t(X_t) ∑_i=1^K w_i, t(X_t) (X_t - μ_1, t) μ_i, t^⊤) ·∑_i=1^K (w_i, t( X_t ) μ_i, t - X_t) ] ,
and
E[ ∇_μ_1,t s_θ_t(X_t)^⊤ Z_t ] = E[ ( w_1, t(X_t) (X_t - μ_1, t) μ_1, t^⊤ Z_t
+ w_1, t(X_t) Z_t - w_1, t(X_t) ∑_i=1^K w_i, t(X_t) (X_t - μ_1, t) μ_i, t^⊤ Z_t ) ] .
We will prove that the sample estimate of each coordinate in Eq. (<ref>) concentrates well around the expectation. We prove the concentration of the first coordinate; a similar analysis holds for the other coordinates. For the rest of the proof, we use x_t to denote the first coordinate of X_t and, abusing notation, μ_i, t to denote the first coordinate of μ_i, t.
For any random variable Y ∈R, we use Y _ψ_1 to denote the sub-exponential norm of Y and Y _ψ_2 to denote the sub-gaussian norm of Y (See lemma <ref> for details). Using properties of a sub-Gaussian random variable from Lemma <ref>, we get
∑_j=1^K w_1, t(X_t) w_j, t (X_t) (x_t - μ_1, t) μ_1, t^⊤μ_j, t_ψ_2
≲ ∑_j=1^K w_1, t(X_t) w_j, t (X_t) (x_t - μ_1, t) μ_1, t^⊤μ_j, t_ψ_2*(Using sum of sub-Gaussian random variables property in Lemma <ref>)
≲ ∑_j=1^K w_1, t(X_t) w_j, t (X_t) μ_1, t^⊤μ_j, t z _ψ_2 + w_1, t(X_t) w_j, t (X_t) μ_1, t^⊤μ_j, t (τ - μ_1, t ) _ψ_2
≲ KB^2 + K B^3 ≲ K B^3,
where the third inequality follows by writing x_t = z + τ where z ∼N(0, 1) and τ is a random variable that takes μ_i, t^* for every i ∈ [K] with probability 1/K. The fourth inequality follows from the sub-Gaussian property of a bounded random variable and the product of a sub-Gaussian random variable with bounded random variable property in Lemma <ref>.
Using the sum of sub-Gaussian random variable property in Lemma <ref>, we have
∑_i=1^K w_1, t(X_t) w_i, t(X) μ_i, t_ψ_2≲∑_i=1^K w_1, t(X_t) w_i, t(X) μ_i, t≲ K B.
Using properties of the sub-Gaussian random variable from Lemma <ref> in a similar way of Eq. (<ref>), we have
∑_i=1^K ∑_j=1^K w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) μ_i, t^⊤μ_j, t ( x_t - μ_1, t) _ψ_2
≤ ∑_i=1^K ∑_j=1^K w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) μ_i, t^⊤μ_j, t ( x_t - μ_1, t) _ψ_2
≤ ∑_i=1^K ∑_j=1^K w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) μ_i, t^⊤μ_j, t z _ψ_2 + w_1, t(X_t) w_i, t(X_t) w_j, t(X_t) μ_i, t^⊤μ_j, t (τ - μ_i,t) _ψ_2
≤ K^2 B^2 + K^2 B^3
≲ K^2 B^3
We know that w_1, t(X_t) μ_1, t^⊤ X_t ≤∑_i=1^d μ_1, t(i) X_t(i) ≲ d B^2 and x_t - μ_1,t≲ B. Using the fact that the product of two sub-Gaussian random variables is a sub-exponential random variable, we have
w_1, t(X_t) μ_1, t^⊤ X_t (x_t - μ_1,t) _ψ_1 ≤x_t - μ_1,t_ψ_2 w_1, t(X_t) μ_1, t^⊤ X_t _ψ_2≲ dB^3
The sub-gaussian norm of w_1, t(X_t) x_t term in the gradient is given by
w_1, t(X_t) x_t ≤ X_t ≲ Z + τ≲ B
Using the property that the product of two sub-Gaussian random variables is a sub-exponential random variable, we obtain
w_1, t(X_t) (x_t - μ_1,t ) ( ∑_i=1^K w_i, t(X_t) μ_i, t^⊤ X_t ) _ψ_1
≲ w_1, t(X_t) (x_t - μ_1,t ) ( ∑_i=1^K w_i, t(X_t) μ_i, t^⊤ X_t ) _ψ_2
≲ K d B^3
For any random variable Y, we know that X_ψ_1≤X. Therefore, combining Eq. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have
[∇_μ_1, t s_θ_t(X_t)^⊤ s_θ_t(X_t) ]_1 - E[∇_μ_1, t s_θ_t(X_t)^⊤ s_θ_t(X_t) ]_1 _ψ_1 ≲ [∇_μ_1, t s_θ_t(X_t)^⊤ s_θ_t(X_t) ]_1 _ψ_1
≲ K^2 d B^3
Now, we shift our focus on obtaining the sub-exponential norm of ∇_μ_1,t s_θ_t(X_t)^⊤ Z_t. Using w_1, t(X_t) (x_t - μ_1, t) _ψ_2≲ B and μ_1, t^⊤ Z_t _ψ_2≲ d B, we obtain
w_1, t(X_t) (x_t - μ_1, t) μ_1, t^⊤ Z_t _ψ_1≤ w_1, t(X_t) (x_t - μ_1, t) _ψ_2μ_1, t^⊤ Z_t _ψ_2≲ d B^2
Using Lemma <ref>, we have w_1, t(X_t) z_t ≤ z_t ≲ 1. For the last term, we have
w_1, t(X_t) (x_t - μ_1, t) ∑_i=1^K w_i, t(X_t) μ_i, t^⊤ Z_t _ψ_1 ≤ w_1, t(X_t) (x_t - μ_1, t) ∑_i=1^K w_i, t(X_t) μ_i, t^⊤ Z_t _ψ_2
≲ K d B^2
Combining Eq. (<ref>), (<ref>), we have
[∇_μ_1,t s_θ_t(X_t)^⊤ Z_t]_1 /β_t - E [∇_μ_1,t s_θ_t(X_t)^⊤ Z_t]_1 /β_t _ψ_1≲ [∇_μ_1,t s_θ_t(X_t)^⊤ Z_t]_1 /β_t _ψ_1≲K d B^2/β_t ,
where [∇_μ_1,t s_θ_t(X_t)^⊤ Z_t]_1 denotes the first coordinate of ∇_μ_1,t s_θ_t(X_t)^⊤ Z_t. Combining Eq. (<ref>) and Eq. (<ref>), we have
[∇_μ_1,t L_t(s_θ_t( X_t ))]_1 - [∇_μ_1,t L_t(s_θ_t )]_1 _ψ_1≲K^2 d B^3/β_t
For each i.i.d. sample x_i, t, the term [∇_μ_1,t L_t(s_θ_t( x_i, t ))]_1 - [∇_μ_1,t L_t(s_θ_t )]_1 is also independent and identically distributed. Therefore, using Lemma <ref>, for any ϵ that is Θ(1/poly(d)), we have
Pr[ | 1/n∑_i=1^n [∇_μ_1,t L_t(s_θ_t( x_i, t ))]_1 - [∇_μ_1,t L_t(s_θ_t )]_1 | ≥ϵ] ≤ 2 exp( -n ϵ^2 β_t^2 / K^4 d^2 B^6 ).
A similar analysis will give the concentration for each coordinate. Using the union bound and rescaling ϵ as ϵ/d, with probability at least 1 - 2 d exp( -n ϵ^2 β_t^2 / K^4 d^4 B^6 ), we have
∇_μ_1,t( 1/n∑_i=1^n L_t(s_θ_t( x_i, t )) ) - ∇_μ_1,t L_t(s_θ_t) ≤ϵ
Note that for any t = Ω(1), β_t ≥ c for some constant c. Therefore, choosing n provided in the Lemma <ref> statement, we obtain the result.
§.§ Proof of Theorem <ref>
For any training iteration h, assume that the parameters θ_t^(h) are such that ‖μ_i, t^(h) - μ_i, t^*‖ ≤ c_r/4√(log d). Then we can write the update on the DDPM objective as follows:
μ_1,t^(h+1) - μ_1, t^* = μ_1,t^(h) - η∇( 1/n∑_i=1^n L_t( s_θ_t^(h)(x_i, 0, z_i, t) ) ) - μ_1, t^*
≤ μ_1, t^(h) + η E [ w_1, t(X_t) (X_t - μ_1, t^(h)) ] - μ_1, t^*
+ η - ∇_μ_1,t L_t( s_θ_t ) - E [ w_1, t(X_t) (X_t - μ_1, t^(h)) ]
+ η∇_μ_1,t L_t( s_θ_t ) - ∇_μ_1,t( 1/n∑_i=1^n L_t( s_θ_t^(h)(x_i, 0, z_i, t) ) ) .
Using Lemma <ref>, Lemma <ref> and Theorem 3.2 from <cit.>, for any η∈ (0, K), we have
μ_1,t^(h+1) - μ_1, t^* ≤ 1 - 3 η/8Kμ_1, t^(h) - μ_1, t^* + η K^2 B^2/ d^c_r^2/4000 + ηϵ.
Choosing η = 2 K/3, c_r to be a sufficiently large constant, and ϵ to be Θ(1/poly(d)), we have
μ_1,t^(h+1) - μ_1, t^* ≤3/4μ_1,t^(h) - μ_1, t^* + ϵ
By Assumption <ref>, ‖μ_1,t^(0) - μ_1, t^*‖ ≤ O(√(log d)), and therefore, choosing H to be Ω( log (log d /ϵ ) ), we obtain the result.
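Schematically, the warm-start procedure analyzed in this section can be summarized by the following loop. For brevity the sketch (our own, with illustrative names and arguments) uses the gradient-EM direction E[w_{i,t}(X_t)(X_t - μ_{i,t})] in place of the full empirical DDPM gradient; by Lemma <ref> the two agree up to a 1/poly(d) error under Assumptions <ref> and <ref>.

import numpy as np

def weights(x, mus):
    logits = mus @ x - 0.5 * np.sum(mus ** 2, axis=1)
    logits -= logits.max()
    w = np.exp(logits)
    return w / w.sum()

def warm_start_gd(samples_t, mus0_t, eta, H):
    """Update every center by the gradient-EM direction,
    mu_i <- mu_i + eta * (1/n) * sum_x w_i(x) (x - mu_i), for H iterations."""
    mus = np.array(mus0_t, dtype=float)
    samples_t = np.asarray(samples_t, dtype=float)
    for _ in range(H):
        W = np.array([weights(x, mus) for x in samples_t])   # shape (n, K)
        for i in range(len(mus)):
            mus[i] = mus[i] + eta * np.mean(
                W[:, i, None] * (samples_t - mus[i]), axis=0)
    return mus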
§ ADDITIONAL PROOFS
§.§ Proof of Lemma <ref>
By calculating the negative gradient of the DDPM objective in Eq. (<ref>), we obtain
-∇_μ_t L_t(s_μ_t)
= - E_X_0, Z_t [ ( tanh(μ_t^⊤ X_t) I + tanh'( μ_t^⊤ X_t ) X_t μ_t^⊤ ) ( s_μ_t(X_t) + Z_t/β_t ) ]
= - E[ ( tanh(μ_t^⊤ X_t) I + tanh'( μ_t^⊤ X_t ) X_t μ_t^⊤ ) ( tanh( μ_t^⊤ X_t ) μ_t - X_t + Z_t/β_t ) ]
= E[ - tanh^2(μ_t^⊤ X_t) μ_t - tanh( μ_t^⊤ X_t ) tanh'( μ_t^⊤ X_t ) X_t μ_t ^2 + tanh (μ_t^⊤ X_t) X_t
+ tanh'( μ_t^⊤ X_t ) μ_t^⊤ X_t X_t - tanh(μ_t^⊤ X_t) Z_t/β_t - tanh'( μ_t^⊤ X_t ) X_t μ_t^⊤Z_t/β_t ]
By simplifying the gradient terms involving Z_t by the Stein's identity as in Lemma <ref> and plugging it back in the gradient, we obtain
-∇_μ_t L_t(s_μ_t) = E[ tanh (μ_t^⊤ X_t) - tanh( μ_t^⊤ X_t ) tanh'( μ_t^⊤ X_t ) μ_t^2 + tanh'( μ_t^⊤ X_t ) μ_t^⊤ X_t X_t ]
- μ_t - Etanh”( μ_t^⊤ X_t ) μ_t ^2 X_t - Etanh'( μ_t^⊤ X_t ) μ_t
= E[ tanh (μ_t^⊤ X_t) - 0.5 tanh”( μ_t^⊤ X_t ) μ_t ^2 + tanh'( μ_t^⊤ X_t ) μ_t^⊤ X_t X_t ]
- μ_t - Etanh'( μ_t^⊤ X_t ) μ_t
Observe that ( tanh (μ^⊤ x) - 1/2tanh”( μ^⊤ x ) ‖μ‖^2 + tanh'( μ^⊤ x ) μ^⊤ x ) x and tanh'( μ^⊤ x ) are even functions of x, and the distribution of X_t is symmetric. Therefore, for any even function f, we can write E_X_t[ f( X_t ) ] = 1/2E_X_t ∼N(μ_t^*, I)[ f( X_t ) ] + 1/2E_X_t ∼N(-μ_t^*, I)[ f( X_t ) ] = E_X_t ∼N(μ_t^*, I)[ f( X_t ) ]. Applying this property of even functions to the gradient update, we obtain the result.
When random variable X_t = α_t X_0 + β_t Z_t where Z_t ∼N(0, I), α_t = exp(-t) and β_t = √(1 - exp(-2t)), then for any t>0, the following two equations hold.
E_X_0, Z_t [ tanh( μ_t^⊤ X_t ) Z_t/β_t + tanh^2 ( μ_t^⊤ X_t ) μ_t ] = μ_t
E_X_0, Z_t [ tanh'( μ_t^⊤ X_t ) μ_t^⊤ Z_t /β_t X_t ] = E_X_0, Z_ttanh”( μ_t^⊤ X_t ) μ_t ^2 X_t + tanh'( μ_t^⊤ X_t ) μ_t
Applying Stein's lemma on the first term, we get the first equation of the statement in the Lemma.
E_X_0, Z_ttanh( μ_t^⊤ X_t ) Z_t/β_t = E_X_0, Z_ttanh( μ_t^⊤ ( α_t X_0 + β_t Z_t ) ) Z_t/β_t = E_X_0, Z_ttanh' ( μ_t^⊤ X_t ) μ_t
= E_X_0, Z_t 1 - tanh^2 ( μ_t^⊤ X_t ) μ_t
For the second term, we have
E [ tanh'( μ_t^⊤ X_t ) μ_t^⊤ Z_t /β_t X_t ] = E[ tanh'( μ_t^⊤ X_t ) μ_t^⊤ Z_t /β_t α_t X_0 ] + Etanh'( μ_t^⊤ X_t ) μ_t^⊤ Z_t Z_t
= ∑_i=1^d E[ α_t X_0 tanh'( μ_t^⊤ X_t ) μ_t(i) Z_t(i) /β_t ] + Etanh'( μ_t^⊤ X_t ) μ_t + Etanh”( μ_t^⊤ X_t ) μ_t^⊤ Z_t β_t μ_t
= ∑_i=1^d E[ α_t X_0 tanh”( μ_t^⊤ X_t ) μ_t(i) μ_t(i) ] + Etanh'( μ_t^⊤ X_t ) μ_t + Etanh”( μ_t^⊤ X_t ) μ_t^⊤ Z_t β_t μ_t
where the second equality follows from the Stein's lemma on the E[tanh'( μ_t^⊤ X_t ) μ_t^⊤ Z_t Z_t ] and the last equality follows from the Stein's lemma on E [ α_t X_0 tanh”( μ_t^⊤ X_t ) μ_t(i) Z_t(i) ]. Applying Stein's inequality on the Etanh”( μ_t^⊤ X_t ) μ_t^⊤ Z_t β_t μ_t, we obtain
= Eα_t X_0 tanh”( μ_t^⊤ X_t ) μ_t ^2 + Etanh'( μ_t^⊤ X_t ) μ_t + ∑_i=1^d β_t μ_t Etanh”'( μ_t^⊤ X_t ) μ_t(i) β_t μ_t(i)
= E X_t tanh”( μ_t^⊤ X_t ) μ_t ^2 - Eβ_t Z_t tanh”( μ_t^⊤ X_t ) μ_t ^2 + Etanh'( μ_t^⊤ X_t ) μ_t
+ β_t^2 μ_t ^2 μ_t Etanh”'( μ_t^⊤ X_t )
= E X_t tanh”( μ_t^⊤ X_t ) μ_t ^2 + Etanh'( μ_t^⊤ X_t ) μ_t .
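Both identities are consequences of Stein's lemma and can also be verified by direct Monte Carlo simulation. The following self-contained snippet is our own illustration of the first identity for a randomly chosen test parameter μ_t; the choice of dimension, noise scale, and sample size is arbitrary.

import numpy as np

rng = np.random.default_rng(1)
d, t, n = 3, 0.5, 200_000
alpha, beta = np.exp(-t), np.sqrt(1.0 - np.exp(-2.0 * t))
mu_star = rng.normal(size=d)                 # mixture centers are +/- mu_star
mu_t = rng.normal(size=d)                    # arbitrary test parameter

sign = rng.choice([-1.0, 1.0], size=(n, 1))
x0 = sign * mu_star + rng.standard_normal((n, d))
z = rng.standard_normal((n, d))
xt = alpha * x0 + beta * z
u = np.tanh(xt @ mu_t)                       # tanh(mu_t^T X_t)

lhs = np.mean(u[:, None] * z / beta + (u ** 2)[:, None] * mu_t, axis=0)
assert np.allclose(lhs, mu_t, atol=0.05)     # equals mu_t up to Monte Carlo error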
§.§ Proof of Lemma <ref>
Recall that the gradient update for any μ_t^* is given by
- ∇_μ^*_t L_t(s_μ^*_t)
= G(μ_t^*, μ_t^*) + ηE_x ∼N(μ_t^*, )[ tanh (μ^*⊤_t x) x ] - ημ_t^*
We know that E_x ∼N(μ_t^*, )[ tanh (μ^*⊤_t x) x ] = μ_t^* (Eq.(2.1) of <cit.>) and ∇_μ^*_t L_t(s_μ^*_t) = 0 because μ^*_t is a stationary point of the regression objective of diffusion model. This implies that G(μ^*_t, μ^*_t) = 0 for any μ^*_t.
Note that this proof concerns only the 1D case; therefore, for the purpose of this proof, we use a to denote μ and b to denote μ^*. In 1D, using the mean value theorem, we have
( G( a, b) - G( a, a ) ) / ( b - a ) = d G(a, ξ)/ d ξ for some ξ∈ [a, b] (if a < b).
Using the fact that G(a, a) = 0 in Eq. (<ref>), we have
G(a, b) = ( d G(a, ξ)/ d ξ ) ( b - a ).
Observe that it suffices to prove | d G(a, ξ)/ d ξ | ≤ 0.01 to obtain the lemma. By computing the gradient of G, we obtain
d G(a, ξ) / d ξ = ηE_x ∼N(ξ, 1)[ 2 tanh'(a x) a x + tanh”(a x) -3 a^2/2 + a^2 x^2 - 1/2 a^3 x tanh”'( a x ) ]
For the first term, we have
E_x ∼N(ξ, I)[ tanh'(a x) a x ] = 1/√(2 π)∫_-∞^∞tanh'(a x) a x e^- (x - ξ)^2 /2 dx
= 1/√(2 π)∫_0^∞tanh'(a x) a x e^- (x - ξ)^2 /2 - e^- (x + ξ)^2 /2 dx
≤1/√(2 π)∫_0^∞ e^-a x a x e^- (x - ξ)^2 /2 dx
≤a e^a^2 - 2 a ξ/2/√(2 π)∫_0^∞ x e^-(x - ξ + a)^2 /2 dx
≤ a e^a^2 - 2 a ξ/2 ( √(2/π)e^ - (ξ - a)^2 /2 + (ξ - a) erfξ - a /√(2) )
≤ a e^-ξ^2/2 + a ξ - a e^-2a(ξ - a) - a^2 /2
Using Lemma 1 of <cit.>, we know that E_x ∼N(ξ, I)[ tanh'(a x) a x ] > 0. Therefore, we have
E_x ∼N(ξ, I)[ tanh'(a x) a x ] ≤ a e^-ξ^2/2 + aξ - a e^-2a(ξ - a) - a^2 /2
For the second term, we have
E_x ∼N(ξ, 1)[ tanh”(a x) ( -3 a^2/2 + a^2 x^2 ) ]
= 1/√(2π)∫_0^∞ a^2 tanh”( a x ) ( -3/2 + x^2 ) exp( - (x - ξ)^2 /2 ) - exp( - (x + ξ)^2 /2 ) dx
≤1/√(2π)∫_0^√(3/2) a^2 e^-2 a x ( 3/2 - x^2 ) exp( - (x - ξ)^2 /2 ) dx
≤3/√(2π) a^2 exp( -a^2/16 )
Assuming a ≥√(6), then when ξ≥ a ≥√(6), we have exp( - (x - ξ)^2 /2 ) ≤exp( - a^2/4 ) and when ξ≤ a, using ξ≥3 a/4, we have exp( - (x - ξ)^2 /2 ) ≤exp( - a^2/16 ). For the lower bound, we have
E_x ∼N(ξ, 1)[ tanh”(a x) ( -3 a^2/2 + a^2 x^2 ) ]
= 1/√(2 π)∫_0^∞tanh”( a x ) ( -3 a^2/2 + a^2 x^2 ) exp( - (x - ξ)^2 /2 ) - exp( - (x + ξ)^2 /2 ) dx
≥1/√(2 π)∫_√(3/2)^∞tanh”( a x ) ( -3 a^2/2 + a^2 x^2 ) exp( - (x - ξ)^2 /2 ) - exp( - (x + ξ)^2 /2 ) dx
≥1/√(2 π)∫_√(3/2)^∞tanh”( a x ) a^2 x^2 exp( - (x - ξ)^2 /2 ) - exp( - (x + ξ)^2 /2 ) dx
≥ - 8 a^2/√(2 π)∫_√(3/2)^∞ e^-2 a x x^2 exp( - (x - ξ)^2 /2 ) - exp( - (x + ξ)^2 /2 ) dx
≥ - 8 a^2 e^- √(6) a /√(2π)∫_√(3/2)^∞ x^2 exp( - (x - ξ)^2 /2 ) dx ≥ - 8 a^2 e^- √(6) a
Using upper bound and lower bound, we have
E_x ∼N(ξ, 1)[ tanh”(a x) a^2 ( -3/2 + x^2 ) ] ≤ 8 a^2 e^- √(6) a
For the third term, we have
| E_x ∼N(ξ, 1) [ a^3 x/2tanh”'( a x ) ] |
= | 1/32 √(2 π)∫_0^∞ a^3 x σ(2 a x)(1 - σ(2 a x)) 1 - 6 σ(2a x)(1 - σ(2 a x) ) ( exp( - (x - ξ)^2 /2)
- exp( - (x + ξ)^2 /2) ) dx |
≤| 3a^3/16 √(2 π)∫_0^∞ x σ^2(2 a x)(1 - σ(2 a x))^2
( exp( - (x - ξ)^2 /2) - exp( - (x + ξ)^2 /2) ) dx |
≤3a^3/16 √(2 π)∫_0^∞ x e^-a xexp( - (x - ξ)^2 /2) dx
≤a^3/10 e^-ξ^2/2 + a^3/10ξ - a e^-2a(ξ - a) - a^2 /2 .
We can lower bound the third term as follows:
E_x ∼N(ξ, 1) [ a^3 x/2tanh”'( a x ) ]
≥1/2 √(2 π)∫_0^c a^3 x tanh”'(ax) ( exp( - (x + ξ)^2 /2) - exp( - (x - ξ)^2 /2) ) dx
≥a^3/2 √(2 π)∫_0^c x exp( - (x - ξ)^2 /2) exp -2 ξ x - 1 dx
≥ -a^3 ξ/√(2 π)∫_0^c x^2 exp( - (x - ξ)^2 /2) dx ≥ -ξexp( - ξ^2/4) /√(2 π)
Using all the bounds, we have
d G(a, ξ) / d ξ ≤a^3/10 e^-ξ^2/2 + a^3/10ξ - a e^-2a(ξ - a) - a^2 /2 + 8 a^2 e^- √(6) a + a e^-ξ^2/2 + a ξ - a e^-2a(ξ - a) - a^2 /2
When ξ≥ a and a ≥ c for some sufficiently large constant c (for example, c=25), then, we have
d G(a, ξ) / d ξ ≤a^3/10 e^-a^2/2 + a^3/10ξ - a e^ - a^2 /2 + 8 a^2 e^- √(6) a + a e^-a^2/2 + a ξ - a e^ - a^2 /2≤ 0.01
When 3 a/4≤ξ≤ a and a > c for sufficiently large constant c (for example, c=25), we have
d G(a, ξ) / d ξ ≤a^3/10 e^-9 a^2/32 + a^4/40 e^- a^2 /4 + 8 a^2 e^- √(6) a + a e^-a^2/2 + a^2/4 e^- a^2 /4≤ 0.01
Plugging the bound on | d G(a, ξ) / d ξ| into Eq. (<ref>), we obtain the final result.
§.§ Proof of Lemma <ref>
We will prove this by induction. For h=0, this is true because the algorithm initializes the gradient descent on the low noise regime with the output of gradient descent on the high noise regime, and the output is guaranteed to have μ̂_t^(0)μ̂_t^* to be Ω(1) and by assumption μ_t^* > c', therefore μ_t^(0)∈ [c, 4 μ̂_t^(0)μ_t^* /3].
Suppose μ_t^(h)∈ [c, 4 μ̂_t^(h)μ_t^* /3]; then we know that μ_t^(h+1) - μ_t^* < μ_t^(h) - μ_t^*. To prove μ_t^(h+1)∈ [c, 4 μ̂_t^(h+1)μ_t^* /3], we first prove that μ̂_t^(h)μ_t^(h+1)∈ [c, 6 μ̂_t^(h)μ_t^* /5 ]. Note that the update in the direction of μ̂_t^(h) works like the 1D case. Therefore, we have a contraction for it as follows.
μ̂_t^(h)μ_t^(h+1) - μ̂_t^(h)μ_t^* < μ̂_t^(h)μ_t^(h) - μ̂_t μ_t^*
If μ_t^(h)≤μ̂_t^(h)μ_t^*, then using Lemma <ref>, we know μ̂_t^(h)μ_t^(h+1)≤6 μ̂_t^(h)μ_t^* /5 and μ̂_t^(h)μ_t^(h+1)≥μ_t^(h)≥ c because of the contraction. If μ_t^(h)≥μ̂_t^(h)μ_t^* and μ̂_t^(h)μ_t^(h+1)≥μ̂_t^(h)μ_t^*, then μ̂_t^(h)μ_t^(h+1)≤μ_t^(h) because of the contraction. If μ_t^(h)≥μ̂_t^(h)μ_t^* and μ̂_t^(h+1)μ_t^(h)≤μ̂_t^(h)μ_t^*, then using μ̂_t^(h+1)μ_t^(h)≥μ_t^(h) - U(μ̂_t^(h)μ_t^(h), μ̂_t^(h)μ_t^* )≥4 μ̂_t^(h)μ_t^* /5≥4 μ̂_t^(0)μ_t^* /5≥ c from Lemma <ref>, we get the result that μ̂_t^(h)μ_t^(h+1)∈ [c, 6 μ̂_t^(h)μ_t^* /5 ]. Now, using Lemma <ref>, we get
μ̂_t^(h)μ_t^(h+1)∈ [c, 6 μ̂_t^(h)μ_t^* /5 ] μ_t^(h+1)∈[ c/cosα_h , 6 μ_t^* cosβ_h /5 cosα_h ]
μ_t^(h+1)∈[ c, 4 μ_t^* cosβ_h+1/3]
μ_t^(h+1)∈[ c, 4 μ̂_t^(h+1)μ_t^* /3]
Suppose the angle between μ^(r) and μ^* is β_r and α_r is the angle between μ^(r) and μ^(r+1) and assume the contraction is true at time r. Assume that β_0 ∈ (0, π/2). Then:
α_r ∈ (0, π/2) ∀ r and cosβ_r ≤cosβ_r+1
which implies that
cosβ_r ≤cosβ_r+1 ∀ r μ̂^(r)μ^* ≥μ̂^(0)μ^*
First, we will prove that if β_r ∈ (0, π/2) and μ^(r)∈ [c, 4 μ̂_t^(r)μ_t^* /3], then α_r ∈ (0, β_r) for any r. We denote α_r > 0 if μ^(r) moves towards μ^(r)⊥ and hence towards μ^*. The following simple observation of μ̂^(r)⊥μ^(r+1)≥ 0 proves that α_r > 0.
μ̂^(r)⊥μ^(r+1)
= E_x ∼N(μ^*, 1)[ η( tanh (μ^(r)⊤ x) - 1/2tanh”( μ^(r)⊤ x ) μ^(r)^2 + tanh'( μ^(r)⊤ x ) μ^(r)⊤ x )·μ̂^(r)⊥ x ]
= E_x ∼N(0, 1)[ η( tanh (μ^(r)⊤ (x + μ^*)) - 1/2tanh”( μ^(r)⊤ (x + μ^*) ) μ^(r)^2
+ tanh'( μ^(r)⊤ (x + μ^*) ) μ^(r)⊤ (x + μ^*) ) ·μ̂^(r)⊥ (x + μ^*) ]
= E_α_1, α_2 ∼N( μ̂^(r)μ^* , 1)[ η( tanh ( μ^(r)α_1) - 1/2tanh”(μ^(r)α_1 ) μ^(r)^2
+ tanh'(μ^(r)α_1 ) μ^(r)α_1 ) (α_2 + μ̂^(r)⊥μ^* ) ]
= E_α_1, α_2 ∼N( μ̂^(r)μ^* , 1)[ η( tanh ( μ^(r)α_1) - 1/2tanh”(μ^(r)α_1 ) μ^(r)^2
+ tanh'(μ^(r)α_1 ) μ^(r)α_1 ) ·μ̂^(r)⊥μ^* ] > 0 ,
where in the last step we used the fact that μ̂^(r)μ^* > 0 and μ̂^(r)⊥μ^* > 0.
Now, we will prove that α_r > β_r which will prove that α_r ∈ (0, β_r). Note that
α_r = μ̂^(r)μ^(r+1)/μ̂^(r)⊥μ^(r+1)where
μ̂^(r)μ^(r+1) = (1-η) μ^(r) + ηE_α_1 ∼N(μ̂^(r)⊤μ^*, 1) [ tanh(μ^(r)α_1 ) α_1 ]
+ ηE_α_1 ∼N(μ̂^(r)⊤μ^*, 1)[ -1/2tanh”(μ^(r)α_1) μ^(r)^2 α_1 + tanh'(μ^(r)α_1 ) μ^(r)α_1^2
- tanh'(μ^(r)α_1 ) μ^(r) ]
μ̂^(r)⊥μ^(r+1) = ημ̂^(r)⊥μ^*E_α_1 ∼N( μ̂^(r)⊤μ^* , 1) [ tanh(μ^(r)α_1 ) - 1/2tanh”( μ^(r)α_1 ) μ^(r)^2
+ tanh'( μ^(r)α_1 ) μ^(r)α_1 ]
and β_r = μ̂^(r)μ^* /μ̂^(r)⊥μ^*
Observe the fact that to prove a+c'/b+c - a/b > 0, it is sufficient to prove c' > ac/b for b, c > 0. Using this observation, to prove α_r > β_r, it is sufficient to prove
(1 - η - ηE [ tanh'(μ^(r) x)] ) μ^(r) + ηE_x[ -1/2tanh”(μ^(r) x ) μ^(r)^2 (x - μ̂^(r)μ^* )
+ tanh'(μ^(r) x )(x^2 - μ̂^(r)μ^* x) + tanh(μ^(r) x )(x - μ̂^(r)μ^* ) ] > 0,
where the expectation is wrt N( μ^(r)μ^* , 1 ). Lemma <ref> shows that this is indeed true.
For any η = 1/20, assuming a ∈ [30, 4b/3], we have
(1 - η - ηE_x ∼N(b, 1)[ tanh'(ax) ] ) a
+ η E_x ∼N(b, 1)[ -1/2tanh”( a x ) a^2 (x - b) + tanh'(ax)(x^2 - bx) + tanh(ax) (x - b) ] > 0 .
First, we will find the upper bound on E[tanh”(ax)(x - b)].
E[ tanh”(ax) (x - b) ] = ∫_-∞^∞tanh”(ax) (x - b) exp(-(x - b)^2/2) dx
≤∫_0^b tanh”(ax)(x - b) exp(-(x - b)^2/2) dx
≤∫_0^b tanh”(ax) x exp(-(x - b)^2/2) dx
≤∫_0^b exp(-ax) x exp(-(x - b)^2/2) dx
≤exp(a^2 - 2ab/2) ∫_0^b x exp(-(x - b)^2 + 2a(x-b) + a^2 /2) dx
≤exp(a^2 - 2ab/2) ∫_0^∞ x [ exp(-(x - b + a)^2 /2) + exp(-(x + b - a)^2 /2) ] dx
≤exp(-b^2/2) + a - b·exp(a^2 - 2ab/2) .
Now, for the second term, we have
E_x ∼N(b, 1)[tanh'(ax)(x^2 - bx)]
= ∫_-∞^∞tanh'(ax) x (x - b) exp( -(x-b)^2/2) dx
≥ - b ∫_0^b x e^-axexp( -(x-b)^2/2) dx
≥ - b exp( a^2 - 2ab /2) ∫_0^∞ x [ exp(-(x - b + a)^2 /2) + exp(-(x + b - a)^2 /2) ] dx
≥ -b exp(-b^2/2) - b a-b·exp( a^2 - 2ab /2)
We can rewrite the last term as E_x ∼N(0, 1)[ tanh(a(x + b))x ]. Using the fact that tanh(a(x+b)) > tanh(a(-x + b)), we get that E_x ∼N(0, 1)[ tanh(a(x + b))x ] > 0. Finally, using the upper bound on E[tanh'(ax)], we get the following lower bound.
(1 - η - η E_x ∼N(b, 1)[ tanh'(ax) ] ) a + ηE_x ∼N(b, 1)[ -1/2tanh”( a x ) a^2 (x - b) + tanh'(ax)(x^2 - bx) ]
≥a/20(19 - 4 e^a^2 - 2 a b/2) + 1/20( -a^2/2[ exp(-b^2/2) + a - bexp(a^2 - 2ab/2) ]
- b exp(-b^2/2) - b a-bexp( a^2 - 2ab /2) ) ≥ 1 .
For any a, b > 0 and a ∈ [30, 4b/3], the following holds. Define
U(a, b) ≜ηE_x ∼N(b, 1)[ ( tanh (a x) - 1/2tanh”( a x ) a^2 + tanh'( a x ) a x ) x ] - ηE_x ∼N(b, 1)tanh'( a x ) a - η a .
When the learning rate is η = 1/20, we have
U(a, b)≤a+b/10
We upper bound each term in U(a, b) and then apply the triangle inequality to get the result. We start with |E_x ∼N(b, 1)[ tanh”(a x) a^2 x ]|:
-E_x ∼N(b, 1) tanh”(a x) a^2 x = a^2/8 √(2 π)∫_0^∞ x σ(2 a x) (1 - σ(2 a x))(2 σ(2 a x) - 1) e^-(x - b)^2/2 + e^-(x + b)^2/2 dx
≤a^2/4 √(2 π)∫_0^∞ x e^-2a x e^-(x - b)^2/2 dx
≤a^2/4 √(2 π)∫_0^∞ e^-a x x e^-(x - b)^2/2 dx
≤a^2/2 e^-b^2/2 + a^2/2b - a e^--2a(b - a) - a^2 /2
E_x ∼N(b, 1) [ tanh'(a x) a x^2 ] = 1/√(2π)∫_0^∞tanh'(a x) a x^2 e^- (x - b)^2/2 + e^- (x + b)^2/2 dx
≤ a ∫_0^∞ e^-a x x^2 e^-(x - b)^2/2 dx
≤ a e^ a^2 - 2 a b /2∫_0^∞ x^2 e^-(x - b + a)^2/2 dx
≤ 2a (a - b)^2 e^ a^2 - 2 a b /2
-E_x ∼N(b, 1)[ a tanh'(a x) ] = -a/√(2π)∫_0^∞tanh'(a x) e^- (x - b)^2/2 + e^- (x + b)^2/2 dx
≥ -a ∫_0^∞ e^-a x e^-(x - b)^2/2 dx
≥ - a e^ a^2 - 2 a b /2∫_0^∞ e^-(x - b + a)^2/2 dx
≥ - 4a e^ a^2 - 2 a b /2 .
Now, using the fact that tanh'(x) and -tanh”(x)x are always positive, we have the following upper bound.
U(a, b) ≤η | E_x ∼N(b, 1)[ ( tanh (a x) - 1/2tanh”( a x ) a^2 + tanh'( a x ) a x )· x ] |
+ η a + η | - E_x ∼N(b, I)tanh'( a x ) a |
≤η(2 b + a + a^2/2 e^-b^2/2 + a^2/2b - a e^-2a(b - a) - a^2 /2 + 2a (b - a)^2 e^ a^2 - 2 a b /2 + 2a e^ a^2 - 2 a b /2)
If b ≥ a and a ≥ 30, then we have
U(a, b) ≤η2 b + a + 0.1
If b ≤ a ≤4b/3 and a ≥ 30, then
U(a, b) ≤η2 b + a + 0.1
Using η = 1/20 and for any a > 30, we have
U(a, b) ≤a + b/10.
§.§ Additional proofs for mixtures of two Gaussians
Suppose a, b > 0 satisfy a ∈ [30, 4 b /3], then the following inequality holds:
| E_x ∼N( b , 1) [ - 0.5 tanh”( a x ) a^2 + tanh'( a x ) a x ] | ≤ 0.01
We first show that E_x ∼N( b , 1) [ - 0.5 tanh”( a x ) a^2 ] > 0 for any a, b > 0.
E_x ∼N( b , 1) [ - 0.5 tanh”( a x ) a^2 ] = -0.5 a^2 ∫_-∞^∞tanh”(ax) exp( -0.5( x - b )^2 ) dx
= -0.5 a^2 ∫_0^∞tanh”(ax) ( exp( -0.5( x - b )^2 ) - exp( -0.5( x + b )^2 ) ) dx > 0
where the last inequality follows from exp( -0.5( x - b )^2 ) > exp( -0.5( x + b )^2 ) and tanh”(ax) < 0 for x > 0. We can upper bound E_x ∼N( b , 1) [ - 0.5 tanh”( a x ) a^2 ] as follows:
E_x ∼N( b , 1) [ - 1/2tanh”( a x ) a^2 ] ≤ -1/2 a^2 ∫_0^∞tanh”(ax) exp( - 1/2 ( x - b )^2 ) dx
≤ a^2 ∫_0^∞exp(-ax) exp( - 1/2 ( x - b )^2 ) dx
≤ a^2 exp( 1/2(a^2 - 2 a b) ) ∫_0^∞exp( -1/2 ( x - b + a )^2 ) dx
≤ a^2 exp( 1/2(a^2 - 2 a b) )
When a ≤ b, by writing a^2 - 2 a b = -2a(b - a) - a^2 ≤ - a^2, we have E [ - 1/2tanh”( a x ) a^2 ] ≤ 0.005 for a ≥ 30. When a ∈ [b, 4b/3], since a^2 - 2 a b ≤ - 2b^2/9, we have | E [ - 1/2tanh”(a x) a^2 ] | ≤ 0.005. Similarly to the bound for E_x ∼N( b , 1) [ - 1/2tanh”( a x ) a^2 ], we prove E_x ∼N( b , 1) [ tanh'( a x ) ax ] > 0 and E_x ∼N( b , 1) [ tanh'( a x ) ax ] < 0.005. Combining the bounds for | E [ tanh'( a x ) ax ] | and | E [ - 1/2tanh”( a x ) a^2 ] | using the triangle inequality, we obtain the result.
|
http://arxiv.org/abs/2307.02655v2 | 20230705210142 | Hamiltonian fragmentation in dimension four with application to spectral estimators | [
"Habib Alizadeh"
] | math.SG | [
"math.SG"
] |
Hamiltonian fragmentation in dimension four with application to spectral estimators
Habib Alizadeh
August 1, 2023
======================================================
We prove a new Hamiltonian extension and consequently a fragmentation result in dimension 4 for the symplectic manifold ^2× S^2. Polterovich and Shelukhin have recently constructed a family of functionals on the space of time dependent Hamiltonian functions on S^2× S^2(a) for certain rational 0 < a < 1, called Lagrangian spectral estimators. Using our fragmentation result we prove that the restriction of their functionals to the subdomain ^2(c)× S^2(a) is a uniformly C^0-continuous functional where 0 < c < 1. As an application of our results, we show that the complement of a Hofer ball in the group of compactly supported Hamiltonian diffeomorphisms of ^2(c)× S^2(a) contains a C^0-open subset.
Finally, we show that the aforementioned group equipped with the Hofer distance admits an isometric embedding of an infinite dimensional flat space for suitable values of parameters c and a.
§ INTRODUCTION
A symplectic manifold is an even dimensional smooth manifold that admits a closed non-degenerate 2-form which is called a symplectic form. A Hamiltonian diffeomorphism of a symplectic manifold (M,) is a diffeomorphism that is the time-one map of the flow of a time dependent vector field X_H where H is a smooth compactly supported time-dependent Hamiltonian function on M. The vector field X_H is determined by the Hamiltonian H by the equation X_H⌟ = -dH. If (M,) is a compact symplectic manifold, then (M,) is the set of all Hamiltonian diffeomorphisms of M that are compactly supported in the interior of M. The set (M,) is a normal subgroup of (M,) where (M,) is the set of all diffeomorphisms of M that preserve the symplectic structure . This remarkable group (of Hamiltonian diffeomorphisms) has been extensively studied from different points of view, in particular, its geometry and algebraic structure. Other than the natural topologies that one can imagine on this group, such as C^0, C^1, C^∞-topologies, one could also consider natural topologies coming from Finsler structures. The tangent space of (M,) at identity is 𝒜 := C_c^∞(M) and it is C^∞_0(M) (the space of mean zero smooth functions) when M is a closed manifold. A norm on it defines a Finsler structure on (M,) and consequently a pseudo-distance between the points of the group. It was proved by Eliashberg-Polterovich <cit.> that the norm L_p for all finite p≥ 1 defined by
H_L_p:= (∫_M|H|^p ω^n)^1/p
defines a degenerate, indeed the zero, pseudo-distance for all finite p. But it turns out that the norm L_∞ defined by,
H_L_∞:= max H - min H
results in a non-degenerate pseudo-distance. This highly non-trivial fact was first proved by Hofer <cit.> for M = ^2n, see also an alternative proof by Viterbo <cit.>, then it was extended by Polterovich <cit.> to a wide class of symplectic manifolds with a nice behaviour at infinity, in particular for all closed symplectic manifolds with []∈ H^2(M,), and finally Lalonde-McDuff <cit.> proved it in full generality using the theory of pseudo-holomorphic curves of Gromov. This metric is called the Hofer metric.
§.§ Main theorem
The interaction of the two topologies, the C^0-topology and the Hofer topology induced by the Hofer metric, on the group (M,) is very subtle and has become interesting due to its applications. Recently, Cristofaro Gardiner, Humiliére and Seyfaddini <cit.> presented the first proof of the simplicity conjecture <cit.> using the PFH spectral invariants. Along the proof, they prove a key lemma which shows an interesting interaction of the C^0-topology and the Hofer topology.The lemma states that, for a given ϵ > 0 and a disk B ⊂ S^2, any C^0-small enough Hamiltonian diffeomorphism of S^2 supported in the upper hemisphere is ϵ-close to a Hamiltonian diffeomorphism supported in B with respect to the Hofer metric, see <cit.>. The proof of the lemma, boils down to the symplectic extension and fragmentation results of Entov-Polterovich-Py <cit.> in dimension 2. The extension lemmas are very technical and specific to dimension 2. In Section <ref> we will use the theory of pseudo-holomorphic curves of Gromov to prove some analogous 4-dimensional Hamiltonian extension lemmas. In Section <ref> we prove some fragmentation lemmas for the 4-dimensional symplectic manifold ^2× S^2. These lemmas will be used to prove the following Hofer approximation result for ^2× S^2 in Section <ref>:
Let M = (S^2× S^2, σ⊕σ) and N = (^2× S^2, 1/2_0⊕σ) where ^2 is the standard unit disk in ^2 identified with the upper hemisphere of the first factor in M, and _0 and σ are normalized area forms on ^2 and S^2 respectively, with total area 1. Let B be a topological-disk in S^2. Then, for every ϵ > 0 there exists δ > 0 so that the following holds; for every g∈_N(M) satisfying d_C^0(g,id) < δ there exist ψ∈_B× S^2(M) with
d_H(g, ψ) < ϵ.
Here, by _X(Y) we mean the group of Hamiltonian diffeomorphisms of Y compactly supported in the interior of X. The area of the factors in M and the area of the disk in N are irrelevant and could be any other positive real numbers, see Remark <ref>.
§.§ Outline of the proof
To prove our Hamiltonian extension lemma for ^2× S^2 we use pseudo-holomorphic theory of Gromov. Namely, let D_1⊂ D_2⊂ D_3⊂^2 be some horizontal strips in ^2 that contain the line {y = 0}. Let g∈(^2× S^2) be a Hamiltonian diffeomorphism that is C^0-close enough to identity. Then, we would like to find a Hamiltonian diffeomorphism ψ such that it coincides with g on D_1× S^2 and it is supported in D_3× S^2. Restrict g to D_1× S^2 and extend it by identity to 𝒰^u× S^2 where 𝒰^u is a carefully chosen large disk in S^2 which contains every point of the sphere except a disk in the upper half of the area enclosed between ∂ D_1 and ∂ D_3 in ^2. (Here, we think of ^2 as the upper hemisphere of S^2.) Then, use Theorem <ref> to extend it to an element ψ_u of (S^2× S^2). Construct another extension ψ_d of the restriction of g to D_1× S^2 where this time we exclude a disk from the lower part of the area enclosed between ∂ D_1 and ∂ D_3. Then, the diffeomorphism defined by ψ:= ψ_u∘ψ_d∘ g^-1 restricted to ^2× S^2 will have the desired properties.Let us now sketch the proof of our fragmentation result. Divide ^2 into N horizontal strips. Consider the covering of ^2× S^2 by D_i× S^2, i = 1,…, N. Around each intersection line D_i∩ D_i+1, consider very thin horizontal strips D_i,1⊂ D_i,2⊂ D_i,3 and execute the extension lemma on each of these sets of strips and this will fragment g into g_1∘…∘ g_N∘θ where g_i is supported in D_i× S^2 for i = 1,…, N and θ is supported in a disjoint union of arbitrary small disks.It would be a natural question to ask whether the similar fragmentation result holds for any give cover of the ^2× S^2 with closed subsets:
Let ^2× S^2 = ⊔_1≤ i ≤ N D_i be an arbitrary cover of ^2× S^2 by finitely many connected closed subsets D_i. Does there exist a neighborhood Ν of id in (^2× S^2) such that for every g ∈Ν there exist g_1,…, g_N, θ∈(^2× S^2) with g = g_1∘…∘ g_N∘θ, supp(g_i)⋐ D_i, and supp(θ) arbitrarily small?
See Remark <ref> for when the cover is in the form ^2× S^2 = ⊔_iD_i× S^2 where ^2 = ⊔_iD_i is any cover of the disk.

To prove the main theorem, Theorem <ref>, first consider large enough integers k,N > 0 and the covering ⊔_i=1^kN D_i× S^2 consisting of small stabilized horizontal strips. Choose δ > 0 small enough so that any g∈ B_C^0(id, δ) can be fragmented into g_1∘…∘ g_kN∘θ where g_i is supported in D_i× S^2 and θ is supported in a disjoint union of disks with small enough total area. Since the supports of the g_i's are disjoint, they commute. We now partition them into N groups of cardinality k and denote the composition of the elements of the ith group by f_i. Hence, we have g = f_1∘…∘ f_N∘θ. One can find Hamiltonian diffeomorphisms h_i, i = 1,…, N and h_θ of S^2× S^2 with small Hofer norm that map the supports of f_i, i=1,…,N and θ into B× S^2 respectively. Then, the Hamiltonian diffeomorphism ψ:= Π_i=1^N h_i f_i h_i^-1∘ h_θθ h_θ^-1 will be Hofer close to g and supported in B× S^2.
§.§ Lagrangian spectral estimators
Recently, Polterovich and Shelukhin <cit.> showed that a certain family of Lagrangian tori in M_a:= S^2(1)× S^2(a) is non-displaceable, where 0 < a < 1. Associated to this non-displaceable family of Lagrangian tori they, in particular, constructed a new functional on the space of time-dependent Hamiltonian functions on M_a (a rational) called Lagrangian spectral estimators. These functionals satisfy a long list of remarkable properties, see Theorem <ref>. Using these spectral estimators they proved many interesting results including the existence of an infinite dimensional flat space in the group (S^2) and presented an alternative proof of the Simplicity conjecture. The value of these spectral estimators only depends on the homotopy class of the flow of a mean-zero Hamiltonian. Therefore, they define a functional on the universal cover of (S^2× S^2(2a)). In Section <ref>, following the strategy of Evans <cit.>, which was heavily inspired by Abreu's works <cit.>, we will prove that the group (^2× S^2) is a weakly contractible space. In particular, this means that it has trivial fundamental group. Hence, restricting the spectral estimators to the subspace (^2(1/2)× S^2(a)) we derive a well-defined functional on the Hamiltonian group (^2(1/2)× S^2(a)) satisfying many remarkable properties. These functionals, denoted by c_k,B where k is a positive integer and B > 0 is a positive rational number, will not be C^0-continuous. However, as an application of our main theorem, we show in Section <ref> that their difference τ_k,k',B,B':= c_k,B - c_k',B' for any two different data k,B and k',B' is a uniformly C^0-continuous functional.
The functional τ_k,k',B,B': (^2(1/2)× S^2(a)) →ℝ is a uniformly C^0-continuous functional, for a small enough rational number 0 < a < 1.
The area 1/2 for the disk in the first factor is irrelevant and could be any number in (0,1].
§.§ Applications
In the last two sections of the paper we show some applications of Theorem <ref>. In Section <ref> we show an application to an interesting question initially posed by Le Roux <cit.> which concerns the interaction of the C^0-topology and Hofer topology:
Let (M,) be a symplectic manifold and let (M,) be the group of compactly supported Hamiltonian diffeomorphisms of M. Let A > 0 be a fixed positive number and d_H be the Hofer metric, see Definition <ref>. Define the following subset of (M,),
E_A(M,):= {ϕ∈(M,) : d_H(ϕ, id) > A}.
Does the set E_A(M,) have a non-empty C^0 interior ?
For symplectically aspherical manifolds with infinite spectral diameter, Buhovsky, Humilière and Seyfaddini <cit.> proved that the set E_A(M,) contains a non-empty C^0-interior. In <cit.> Y. Kawamoto in particular constructed a C^0-continuous homogeneous quasimorphism on the Hamiltonian group of S^2(1)× S^2(1) and used it to give a positive answer to the above question in this case. For the product symplectic manifold (M× M, ⊕ -) where (M,) is a closed symplectically aspherical manifold, Mailhot <cit.> positively answered Le Roux's question. In Section <ref> we give a positive answer to Le Roux's question for the 4-dimensional symplectic manifold ^2(c)× S^2(a) where 0 < a < 1 is any rational number and 0 < c < 1 is any positive number. As another application of our results, following <cit.>, we will prove in Section <ref> that an infinite dimensional flat space isometrically embeds into the group of compactly supported Hamiltonian diffeomorphisms of ^2(c)× S^2(a).
The space (C^∞_c(0,b), d_C^0) isometrically embeds into ( (^2(c) × S^2(a)), d_H) where 0 < a < 1 is any rational number, c is any positive number satisfying 0 < 1/2 + b < c < 1 and b satisfies 0 < b < 1/6(1 - a). Here, d_C^0, d_H are the C^0-distance and the Hofer distance respectively.
Acknowledgement
This research is part of my PhD program at the Université de Montreal under the supervision of Egor Shelukhin. I would like to thank him for proposing the project and guiding me through it. I also thank him for many useful discussions and also pointing out the possible applications of our main theorem. I am grateful to Marcelo S. Atallah, Filip Broćić and Dylan Cant for helpful discussions, and I thank Pierre-Alexander Mailhot for creating the pictures. This research was partially supported by Fondation Courtois.
§ THEORY OF PSEUDO-HOLOMORPHIC CURVES
In Section <ref> we will prove some Hamiltonian extension lemmas for which we shall use a drop of the theory of pseudo-holomorphic curves of Gromov in dimension 4. We shall also use this theory in Section <ref> to study the topology of the group (^2× S^2). In this section we review some of its results that we shall need in the next sections.
<cit.>
Let (M,) be a 4-dimensional compact connected symplectic manifold and let J be any -tamed almost complex structure. Let A∈ H_2(M,) be an integer homology class with A.A = p ≥ 0 which is represented by a symplectically embedded 2-sphere. Suppose that there are no symplectically embedded 2-sphere S in M with -1 ≤ S.S < p-1 and every J-holomorphic sphere has positive Chern number. Then every J-holomorphic sphere representing A is embedded, the unparametrized moduli space ℳ_0,0(A,J) is compact and the evaluation map
ev: ℳ_0,p + 1(A,J) → M^p + 1\Δ^p + 1,
is a diffeomorphism, where ℳ_g,k(A,J) is the unparametrized moduli space of holomorphic surfaces of genus g with k marked points and Δ^p+1 is the fat diagonal, i.e. all the tuples with at least two of the components being equal.
Using Theorem <ref> one can prove the following theorem which will be used later in Section <ref>:
<cit.>
Let (M,) be a compact connected symplectic 4-manifold that does not contain any symplectically embedded 2-sphere with self-intersection number -1. Let A,B ∈ H_2(M,) be two integer homology classes that are represented by symplectically embedded 2-spheres and satisfy the following:
A.B = 1, A.A = 0, B.B = 0.
Let σ∈Ω^2(S^2) be an area form with ∫_S^2σ = 1. Then the following holds:
* There is a diffeomorphism ψ: S^2× S^2→ M such that,
ψ^* = aπ_1^*σ + bπ_2^*σ, a = ∫_A, b = ∫_B
* If 𝒰⊂ S^2 is an open disk and ι: 𝒰× S^2∪ S^2×𝒰→ M is an embedding such that:
ι^* = aπ_1^*σ + bπ_2^*σ, a = ∫_A, b = ∫_B
ι_*([S^2×{w}]) = A, ι_*([{z}× S^2])= B
for all z,w∈𝒰, then for any compact set D ⊂𝒰 the diffeomorphism ψ in (1) can be chosen to agree with ι on D × S^2∪ S^2× D.
In Section <ref>, we will have only an embedding of 𝒰× S^2 as in part (2) of the theorem above, but, as can be observed from the proof of the theorem (though it is not quite obvious), see <cit.>, one does not need both embeddings 𝒰× S^2 and S^2×𝒰 to extend the embedding. We will need some other inputs from the theory of pseudo-holomorphic curves in Section <ref>, which we discuss in the following.
Let M_a = (S^2× S^2, ω_a) where ω_a = σ⊕ aσ, σ is a standard area form with total area 1, and 0 < a < 1 is a positive number. Let J be any regular ω_a-tamed almost complex structure. Then, there is a unique J-holomorphic curve through any point of M_a representing the class [S^2×{*}].
Let A = [S^2×{*}]. Then, A.A = 0 and A is represented by a symplectically embedded 2-sphere by its definition. The condition on the existence of symplectically embedded spheres S with self-intersection -1 ≤ S.S < 0 - 1 holds tautologically. For any J-holomorphic sphere u: S^2→ M_a representing a homology class C, we have,
dim(ℳ_0,0(C,J)) = 4 + 2c_1(C) - 6 ≥ 0,
hence we have c_1(C)≥ 1. Therefore, by Theorem <ref> the evaluation map
ev: ℳ_0,1(A,J) → M_a
is a diffeomorphism. This exactly means that through any point of M_a there is a unique J-holomorphic curve that represents the class A and goes through the point.
The above corollary will be used in the proof of the weak contractibility of the space of configurations in Section <ref>; see Definition <ref> for the definition of configurations.
§ EXTENSION LEMMAS FOR LG
In the following, whenever M is a compact manifold with boundary, by Ham(M,) we shall mean the group of Hamiltonian diffeomorphisms of M that are compactly supported in the interior of M. For any subset B⊂ M, by _B(M) we shall mean the group of Hamiltonian diffeomorphisms of M that are compactly supported inside B.
Let ^2 be the standard disk in ^2 with area 1. A topological disk is a subset of the plane that is the image of a disk D⊂^2 under a homeomorphism of the standard disk ^2.
Let M = (^2× S^2, _0⊕σ) where _0 and σ are some area forms on ^2 and S^2 respectively. Let f ∈(M) be a Hamiltonian diffeomorphism of M. We define the size of f as follows:
s(f):= inf{ρ: there exists a topological disk D⊂^2 with area less than ρ and supp(f) ⊂ D× S^2}.
§.§ Extension Lemma 1
In the following we prove a Hamiltonian extension lemma. The reader seeking to understand the main result of the paper may skip the lemma below as it is not used in the rest of the paper except in the proof of Lemma <ref> which is independent of the main theorem as well. The following lemma is a 4-dimensional analogue of the extension lemma <cit.> proved by Le Roux.
(Extension Lemma 1)
Let M = (^2× S^2, 1/2_0⊕σ) where ^2 is the standard unit disk in ^2 and _0 and σ are normalized area forms on ^2 and S^2 with total area 1 respectively. Let D_1, D_2⊂^2 be two disjoint topological disks such that ∂ D_i∩∂^2 is connected and non-empty for i = 1,2. (^2\ (D_1∪ D_2) is still a disk.) Let ψ∈(M) be a Hamiltonian diffeomorphism satisfying:
ψ(D_1× S^2) ∩ (D_2× S^2) = ∅.
Then there exist a Hamiltonian diffeomorphism ϕ∈(M) that coincides with ψ on D_1× S^2 and it is identity on D_2× S^2.
Think of ^2 as the upper hemisphere of a sphere S^2(1) and extend ψ to a diffeomorphism of S^2× S^2 by the identity. Restrict ψ to 𝒰_1× S^2∪ S^-× S^2 where 𝒰_1 is a small neighborhood of D_1. Then, extend the restriction by identity to 𝒰× S^2 and call it ι where 𝒰:= 𝒰_1∪𝒰_2∪ S^-_ϵ, 𝒰_2 is a small neighborhood of D_2 and S^-_ϵ is a small neighborhood of the lower hemisphere on which ψ is identity. The above extension is possible because ψ(D_1× S^2) ∩ (D_2× S^2) = ∅. Therefore, we obtain a symplectic embedding ι: 𝒰× S^2→ S^2× S^2. Since ^2\ (D_1∪ D_2) is a disk, we know that 𝒰 is a disk as well. Hence, we are able to apply Theorem <ref> to the embedding ι and extend it to a symplectomorphism ϕ. Gromov <cit.> proved that the group of symplectomorphisms of (S^2× S^2, σ⊕σ) has two connected components which are classified by their action on the second homology group. Since ϕ restricts to the inclusion ι and it preserves the symplectic form, it must induce identity on the second homology. Therefore, it is in the connected component of the identity, and since S^2× S^2 is simply connected, we deduce that ϕ is a Hamiltonian diffeomorphism. The Hamiltonian diffeomorphism ϕ is identity on a neighborhood of D_2× S^2, it is identity on a neighborhood of the lower hemisphere and coincides with ψ on a neighborhood of D_1× S^2.
§.§ Extension Lemma 2
In the following we prove another extension lemma which will be used in the next section to prove a Hamiltonian fragmentation result.
(Extension Lemma 2)
Let M = (^2× S^2, 1/2_0⊕σ) where ^2 is the standard unit disk in ^2 and _0 and σ are normalized area forms on ^2 and S^2 respectively with total area 1. Define
D(a,b) := {(x,y)∈^2 : a ≤ y ≤ b}, a,b∈ [-1,1],
D_1:= D(-_1, _1), D_2:= D(-_2, _2), D_3:= D(-_3, _3)
where 0 < _1 < _2 < _3 < 1. Let g: ^2× S^2→^2× S^2 be a compactly supported Hamiltonian diffeomorphism that satisfies the following,
g(D_1× S^2) ∪ g^-1(D_1× S^2)⋐ D_2× S^2,
g(D_2× S^2) ∪ g^-1(D_2× S^2)⋐ D(-a,a)× S^2,
g^-1(D(_3,1)× S^2) ⊂ D(a,1)× S^2
g^-1(D(-1, -_3)× S^2) ⊂ D(-1, -a)× S^2,
where _2 < a < _3 is a fixed real number. Then there exist a Hamiltonian diffeomorphism ψ∈(M) compactly supported in D_3× S^2 that restricts to g on D_1× S^2.
identifying ^2 with the upper hemisphere of S^2, consider the following subsets of S^2,
𝒰^u := S^-_ϵ∪ D(-1,-a) ∪ D(-_2,1)
𝒰^d := S^-_ϵ∪ D(a,1) ∪ D(-1,_2)
where S^-_ϵ is a small enough neighborhood of the lower hemisphere on which g is identity. See Figure <ref>.
Restrict g to D(-_2,1)× S^2 and extend the restriction to a symplectic embedding of 𝒰^u× S^2 into S^2× S^2 by identity. Use Theorem <ref> to extend the resulting embedding to a symplectomorphism ψ_u: S^2× S^2→ S^2× S^2. Restrict g to D(-1,_2)× S^2 and extend it by identity to 𝒰^d× S^2, and do the same construction to obtain a symplectomorphism ψ_d of S^2× S^2. The resulting symplectomorphisms are Hamiltonian diffeomorphisms. To see this, note that they induce identity on the second homology, then use a theorem of Gromov that states the group π_0((S^2× S^2, σ⊕σ)) is isomorphic to _2, and finally use the fact that S^2× S^2 is simply connected. The Hamiltonian diffeomorphism ψ := ψ_u∘ψ_d∘ g^-1 satisfies the desired properties. Namely, if p ∈ D(_3,1)× S^2 then since g^-1(p)∈ D(a,1)× S^2 we have
ψ(p) = ψ_u∘ψ_d∘ g^-1(p) = ψ_u∘ g^-1(p) = g∘ g^-1(p) = p,
if p∈ D(-1,-_3)× S^2 then
ψ(p) = ψ_u∘ψ_d∘ g^-1(p) = ψ_u∘ g ∘ g^-1(p) = ψ_u(p) = p
and finally if p ∈ D(-_1,_1) × S^2, then
ψ(p) = ψ_u∘ψ_d∘ g^-1(p) = ψ_u∘ g ∘ g^-1(p) = ψ_u(p) = g(p).
Clearly ψ is identity on S^-_ϵ× S^2. Therefore the restriction of ψ to ^2× S^2 is the desired Hamiltonian diffeomorphism.
§ FRAGMENTATION LEMMAS FOR ^2× S^2
In this section we shall prove some fragmentation lemmas for the symplectic 4-manifold ^2× S^2. In the next section, we shall use them to prove a Hofer approximation theorem (the main theorem), which will be an essential part of the proof of Theorem <ref>.
§.§ Fragmentation Lemma 1
A reader who seeks the proof of the main theorem may skip Lemma <ref> below and go straight to Lemma <ref>, since it shall not be used in the proof of the main theorem. The following lemma is a 4-dimensional analogue of a fragmentation result <cit.> proved by Le Roux.
(Fragmentation Lemma 1)
Let M = (^2× S^2, 1/2_0⊕σ) where ^2 is the standard unit disk in ^2 and _0 and σ are normalized area forms with total area 1 on ^2 and S^2 respectively. Let ϵ > 0 be a positive number. Then there exist a C^0-neighborhood Ν_ϵ of the identity in (M) and a positive integer N such that for every f∈Ν_ϵ there exist ϕ_1,…, ϕ_N∈(M) with s(ϕ_i) < ϵ, i=1,…, N, and f = ϕ_1∘ϕ_2∘…∘ϕ_N.
Let m > 0 be a big positive integer such that 1/m < ϵ. Divide the unit disk ^2 into m horizontal strips with equal areas 1/2m and call the strips D_1,…, D_m. Define the following C^0-neighborhood of the identity in (M,),
Ν_ϵ:= {f∈(M,) : f(D_i× S^2)∩ D_j× S^2 = ∅, |i-j| > 1 }.
We claim that this neighborhood satisfies the property stated in the lemma with N = m - 1. Let f∈Ν_ϵ. Then, f satisfies:
f(D_1× S^2)∩((∪_i ≥ 3D_i)× S^2) = ∅.
By Lemma <ref>, there exists a Hamiltonian diffeomorphism ϕ_1 that coincides with f on D_1× S^2 and is the identity on (∪_i ≥ 3D_i)× S^2. Define f_2:= ϕ_1^-1f. Then f_2∈Ν_ϵ as well, and we can repeat the same argument for the map f_2 and the two disjoint sets (D_1∪ D_2)× S^2 and (∪_i≥ 4D_i)× S^2, obtaining a Hamiltonian diffeomorphism ϕ_2 that coincides with f_2 on (D_1∪ D_2)× S^2 and is the identity on (∪_i≥ 4D_i)× S^2. We proceed for m-1 steps until we obtain m-1 Hamiltonian diffeomorphisms ϕ_1,…, ϕ_m-1 such that,
ϕ_m-1^-1∘…∘ϕ_1^-1∘ f = id
supp(ϕ_i)⊂ (D_i∪ D_i+1)× S^2.
Note that for every i the area of (D_i∪ D_i+1)× S^2 is equal to 1/m, which is less than ϵ. So we have s(ϕ_i) < ϵ for all i=1,…, m-1.
Entov-Polterovich-Py proved a C^0-small fragmentation result for all surfaces <cit.>, where they use extension lemmas <cit.> on cylinders and strips to reduce the statement to the case of the disk, which was proved by Le Roux <cit.>. We showed that an analogous Hamiltonian extension result holds in Lemma <ref> and proved an analogous C^0-small fragmentation result for ^2× S^2. Now, one may ask whether the following holds: Let Σ be a compact oriented surface (possibly with boundary). Given a positive real number a, there exist a neighborhood Ν of id in (Σ× S^2) and an integer N such that for every f∈Ν there exist ϕ_1,…, ϕ_N∈(Σ× S^2) satisfying the following:
* f = ϕ_1∘ϕ_2∘…∘ϕ_N
* for all i, supp(ϕ_i) ⊂ D_i× S^2 where D_i is some topological disk in Σ with area less than a.
Our method seems to fail when one tries to imitate the proof of Entov-Polterovich-Py for surfaces. The issue is the following: in Theorem <ref>, which was crucial to the proof of Lemma <ref>, the fact that 𝒰 is a disk in S^2 is important. But if, for instance, one considers a disk in the interior of a surface and restricts a Hamiltonian diffeomorphism g to a tubular neighborhood U of the boundary of the disk, then since g is not the identity close to the boundary of U (nor can it be connected to the identity within a slightly bigger neighborhood), we cannot argue as in Lemma <ref>.
§.§ Fragmentation Lemma 2
In Lemma <ref>, we proved a fragmentation where a Hamiltonian diffeomorphism is decomposed into Hamiltonian diffeomorphisms that have small supports. But in fact what one would need to prove the Hofer approximation theorem in the next section, Theorem <ref>, is a decomposition into Hamiltonian diffeomorphisms with small and disjoint supports. Thus, in the following we prove a fragmentation lemma where a Hamiltonian diffeomorphism decomposes into Hamiltonian diffeomorphisms with small and disjoint supports.
(Fragmentation Lemma 2)
Let M = (^2× S^2, 1/2_0⊕σ) where ∫_^2_0 = ∫_S^2σ = 1. Let ρ > 0 be a positive number and m > 0 be a positive integer. Divide the unit disk ^2 into m horizontal strips with equal area with respect to the area form 1/2_0 and denote them by D_i. Define U_i to be the interior of D_i for all i. Then, there exists δ > 0 such that for every g∈(M) with d_C^0(g,id) < δ there are g_i∈_U_i× S^2(M) for i = 1, …, m, and θ∈_U× S^2(M) where U is a disjoint union of topological disks in ^2 with total area less than ρ, so that, g = g_1∘…∘ g_m∘θ.
For every i ∈{ 1,…, m-1}, define V_i,1⊂ V_i,2⊂ V_i,3⊂ D_i∪ D_i+1 to be small enough horizontal strips so that D_i∩ D_i+1⊂ V_i,1 and V_i,3∩ V_j,3 = ∅ for all i ≠ j, see Figure <ref>. Let δ > 0 be a small enough positive number such that every g∈(M) with d_C^0(g, id) < δ satisfies the hypotheses of Lemma <ref> for all the collections of strips {V_i,j}_j=1^3. Let g∈(M) with d_C^0(g,id) < δ. Applying Lemma <ref> to g, we find, for each i, a Hamiltonian diffeomorphism ψ_i of ^2× S^2 that is compactly supported in V_i,3× S^2 and restricts to g on V_i,1× S^2. Define θ:= ψ_1ψ_2…ψ_m-1. Then we have gθ^-1 = g_1… g_m, where g_i is compactly supported in U_i× S^2 for all i = 1,…, m.
Given a cover ^2 = ⊔_i=1^N D_i of the standard disk by some topological disks, one can find a C^0-neighborhood Ν of the identity in (^2× S^2) so that the following holds: for every g∈Ν there exist g_1,…, g_N, θ∈(^2× S^2) such that supp(g_i) ⊂ D_i× S^2 and the support of θ is arbitrarily small. To see this, take one of the covering disks that intersects the boundary of ^2, say D_1. Consider the decomposition (∂ D_1\ (∂ D_1∩∂^2)) ∪{p_1,…, p_l} = γ_1∪…∪γ_k∪ L_1∪…∪ L_l_1 into disjoint arcs, where the p_i's are the isolated intersection points of ∂ D_1 and ∂^2, the L_i's are the arcs that intersect ∂^2 in at least one of the points p_i, and l_1≤ l.
Fix some small enough tubular neighborhoods U_1⊂ U_2⊂ U_3 around each arc. Remove D_1 from the disk ^2 and repeat the process for the closure of each connected component of ^2\ D_1, which is a disk with an induced cover. Now define the neighborhood Ν to consist of the elements g that satisfy the conditions of Lemma <ref> for all of these tubular neighborhoods. Finally, for any g∈Ν, apply the extension lemma as before to one tubular neighborhood at a time (in order) and obtain a decomposition g = f_1∘ f_2∘…∘ f_M∘θ, where M is a large but finite fixed number, each f_i is supported in only one of the disks of the original covering, and θ has arbitrarily small support. Since the f_i's have disjoint supports, we can group them as follows: define g_i:= Π_supp(f_r)⊂ D_i f_r for i = 1,…, N; then we have g = g_1∘…∘ g_N∘θ.
§ HOFER APPROXIMATION BY FIXED SUPPORTS FOR ^2× S^2
(Hofer norm)
Let (M,) be a symplectic manifold, denote by C^∞_c(M× [0,1],) the space of compactly supported Hamiltonian functions and let (M,) be the group of compactly supported Hamiltonian diffeomorphisms. Let H∈ C^∞_c(M× [0,1],), and define,
H:= ∫_0^1(maxH_t - minH_t) dt.
For a Hamiltonian diffeomorphism ϕ∈(M,) we define the Hofer-norm of ϕ by the following:
ϕ_H:= inf_H↦ϕ H,
where the infimum is taken over all compactly supported Hamiltonian functions H whose time-one map is ϕ. For two Hamiltonian diffeomorphisms ϕ, ψ we define their Hofer distance by:
d_H(ϕ, ψ) := ϕψ^-1_H.
Non-degeneracy of this distance function is highly non-trivial, see introduction.
Let (M,) be a symplectic manifold and let (M,) be its group of compactly supported Hamiltonian diffeomorphisms. Let X ⊂(M,) be a subset and ϕ∈(M,) be an element. We define the Hofer distance of ϕ from the subset X as follows:
d_H(ϕ, X) := inf_g∈ Xd_H(ϕ, g).
For any two subsets X, Y ⊂(M,) we define their Hofer distance as follows,
d_H(X,Y):= sup_x∈ Xd_H(x, Y).
In the following lemma _B× S^2(M) and _N(M) denote the group of Hamiltonian diffeomorphisms of M compactly supported in B× S^2 and N respectively.
Let M := (S^2× S^2, σ⊕σ) and N := (^2× S^2, 1/2_0⊕σ) where ^2 is the standard unit disk in ^2 identified with the upper hemisphere of the first factor in M, and _0 and σ are normalized area forms on ^2 and S^2 respectively, with total area 1. Let B be a topological disk in S^2. Then, for every ϵ > 0 there exists δ > 0 so that the following holds: for every g∈_N(M) satisfying d_C^0(g,id) < δ there exists ψ∈_B× S^2(M) with
d_H(g, ψ) < ϵ.
To prove Theorem <ref> we will be using Lemma <ref> and Lemma <ref>. Recall that we proved those lemmas using Lemmas <ref> and <ref>, for which we used a theorem of Gromov on the topology of the symplectomorphism group of (S^2× S^2, σ⊕σ), where σ is an area form with total area 1 on both factors. Later, we will need to consider the more general case M_a := (S^2× S^2, σ⊕ aσ), where a > 0 is a small positive number and σ is still a standard area form on the sphere with total area 1. In <cit.>, Abreu studied the topology of the group Symp_h(M_a) of symplectomorphisms that act as the identity on the second homology of M_a for 1 < a ≤ 2 and proved that,
H^*(Symp_h(M_a)/(SO(3)× SO(3)); ℚ) =
ℚ, if * = 4k or 4k + 1 for some k ≥ 0,
0, otherwise.
Later, McDuff and Abreu <cit.> extended his work to the general case and calculated the rational cohomology of the group of symplectomorphisms of (M_a) for all a > 1. In particular, they proved that the group (M_a) is path-connected for all a > 1, and consequently for all a > 0 by rescaling the symplectic form. The connectedness of the group Symp_h(M_a) is exactly what we need for the proof of Lemma <ref> and Lemma <ref> to work for M_a, a > 0 as well. Hence, it is easy to see that Theorem <ref> holds for M_a for all a > 0.
In section <ref> we work with an alternative definition of the Hofer norm which is denoted by ._h and defined as follows:
ϕ_h:= inf_H↦ϕ∫_0^1max|H_t| dt,
where the infimum is taken over all mean zero Hamiltonian functions whose time-one map is ϕ. Here by mean zero we mean ∫_M H_tω^n = 0 for all t∈ [0,1]. It is an easy observation that for every Hamiltonian diffeomorphism ϕ we have,
ϕ_h≤ϕ_H≤ 2ϕ_h.
Namely, it follows from the following two facts: for a function b∈ C^∞([0,1]) we have H + b = H, and for a mean zero Hamiltonian H we have max H_t ·min H_t≤ 0 and consequently,
max|H_t|≤max H_t - min H_t≤ 2max|H_t|.
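Spelling the argument out (a routine verification, included only for convenience): for any mean zero H generating ϕ one has
∫_0^1max|H_t| dt ≤ H ≤ 2∫_0^1max|H_t| dt,
and since, by the first fact, the infimum defining ϕ_H may be taken over mean zero Hamiltonians only, taking infima over such H yields ϕ_h≤ϕ_H≤ 2ϕ_h.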
Let a∈ (0,1/2). Let B_1,…, B_k⊂ S^+ be disjoint open topological disks with area less than a. Let B ⊂ S^2\{p} be a topological disk with area(B) > ka. Then there exists a Hamiltonian diffeomorphism h∈(S^2× S^2,σ⊕σ) supported in (S^2\{p_-})× S^2 such that h(∪_iB_i× S^2) ⊂ B× S^2 and h_H≤ 2a, where ._H is the Hofer norm and p is the south pole.
It is known that there exists an h_1∈(S^2,σ) supported in S^2\{p_-} so that h_1(∪_iB_i) ⊂ B and h_1_H≤ 2a, see <cit.>. If G_1: S^2× [0,1]→ is a Hamiltonian function generating h_1 then G: S^2× S^2× [0,1]→ defined by G(x,y,t):= G_1(x,t) generates the Hamiltonian diffeomorphism h = h_1× id of (S^2× S^2, σ⊕σ). We have that,
∫_0^1(max_S^2× S^2G(.,.,t) - min_S^2× S^2G(.,.,t))dt = ∫_0^1(max_S^2G_1(.,t) - min_S^2G_1(.,t))dt.
Thus, we obtain, h_H≤h_1_H≤ 2a, h is supported in (S^2\{p_-})× S^2 and h(∪_iB_i× S^2)⊂ B× S^2.
We are now ready to prove Theorem <ref>:
(proof of Theorem <ref>).
Let N be a positive integer so that 1/2N < area(B) and let m be a multiple of N so that 2(N+1)/m < ϵ. Let ρ = 1/2m. For the choices of ρ and m defined above, let δ > 0 be given by Lemma <ref>. Here we identify (S^+,σ_|_S^+) with the disk (^2,1/2_0). Let U_i be the image of the open strips defined in Lemma <ref> under the identification. So, by Lemma <ref> there exist g_1,…, g_m and θ in _S^+× S^2(S^2× S^2, σ⊕σ) such that g = g_1∘…∘ g_m∘θ, supp(g_i)⊂ U_i× S^2, and θ is supported in V_N+1× S^2, where V_N+1 is a disjoint union of topological disks with total area less than ρ = 1/2m. Since the supports of the g_i's are disjoint, they must commute. Hence, defining
f_j = ∏_i≡ j mod(N) g_i, j= 1,…, N
f_N+1:= θ
we must have, g = f_1∘…∘ f_N+1. The support of f_j for j∈{1,…, N} lies in V_j× S^2 where
V_j:= _i≡ j mod(N)U_i.
The area of V_j for all j∈{1,…, N} is (m/N)·(1/2m) = 1/2N < area(B), and area(V_N+1) < area(B) as well. Therefore, by Lemma <ref> there exist h_1,…,h_N+1∈(S^2× S^2,σ⊕σ) supported in (S^2\{p})× S^2 such that
h_j(V_j× S^2)⊂ B× S^2,
moreover h_j_H≤ 2·(1/2m) = 1/m. Define the following Hamiltonian diffeomorphism,
ϕ:= ∏_j=1^N+1h_j∘ f_j∘ h_j^-1.
Then, the support of ϕ lies inside the subset B× S^2 and we obtain,
d_H(g,ϕ)≤∑_j=1^N+1d_H(f_j,h_jf_jh_j^-1) ≤∑_j=1^N+1 2h_j_H≤2(N+1)/m < ϵ.
Note that in the first inequality we have used the bi-invariance of the Hofer distance; namely, for Hamiltonian diffeomorphisms a,b,c,d we have,
d_H(ab,cd)≤ d_H(ab, cb) + d_H(cb, cd) = d_H(a,c) + d_H(b,d).
§ TOPOLOGY OF Symp_c(N_a)
In this section we study the topology of the group Symp_c(N_a) of compactly supported symplectomorphisms of the 4-dimensional symplectic manifold N_a = (^2× S^2, 1/2_0⊕ a) where _0, are standard area forms on ^2, S^2 respectively, with total area 1 and a > 1 is a number. Here we equip the group with the C^∞-topology. We shall prove that the aforementioned group is weakly contractible. This will in particular prove that the group is connected, and since the symplectic manifold N_a is simply connected, we deduce that the group _c(N_a) = Symp_c(N_a) is also weakly contractible. In section <ref>, we use this fact, in particular the fact that π_1(_c(N_a)) = 0, to prove that the spectral estimator c_k,B defined on the universal cover _c(N_a) descends to the Hamiltonian group _c(N_a). We shall follow the lines of the proof of the weak contractibility of Symp_c(^*×) in <cit.> and try to keep our notation consistent. Let us first set up some notation. We shall think of (^2, 1/2_0) as (S^2, 1/2) minus the point z = ∞. Denote (S^2× S^2, 1/2⊕ a) by M'_a and {∞}× S^2⊂ M'_a by C_2; then we have N_a = M'_a\ C_2. Let C_3 denote the sphere S^2×{∞} and define U' := M'_a\ (C_2∪ C_3).
[Figure: M'_a drawn as a square; the top edge (dashed) represents C_3 = S^2×{∞}, the right edge (dashed) represents C_2 = {∞}× S^2, and the interior is U' = M'_a\ (C_2∪ C_3).]
(Stein manifolds)
A Stein manifold consists of a smooth complex manifold (W,J) and a smooth function ϕ: W→ [0,∞) such that the 2-form := -d(dϕ∘ J) is a positive (1,1)-form. Associated to a Stein manifold we consider the vector field Z that is dual to θ:= -dϕ∘ J with respect to this 2-form, which we call the Liouville vector field. We say that a Stein manifold is of finite type if there exists K > 0 so that dϕ(Z) > 0 on ϕ^-1((K, ∞)).
<cit.>
Let (W,J) be a complex manifold and ϕ_1,ϕ_2 be two finite type Stein structures on W. Then,
Symp_c(W,-dd^cϕ_1) ≃ Symp_c(W,-dd^cϕ_2),
where d^c(.):= d(.) ∘ J and ≃ is the weak homotopy equivalence.
Let U' be as above. The group Symp_c(U') is weakly contractible.
It is not hard to see that U' admits a compatible complex structure J with respect to which it is biholomorphic to ×. One can show that there is a function ϕ: U'→ such that -d(dϕ∘ J) = _a. It is deduced from Theorem <ref> that,
Symp_c(U')≃ Symp_c(×).
Finally, it is a theorem of Gromov that the latter group is weakly contractible, and this finishes the proof.
(Standard configuration)
Let M'_a, C_2,C_3 be as above. A configuration in M'_a is a symplectically embedded sphere S⊂ M'_a that satisfies the following:
* S is homologous with C_3.
* there is an almost complex structure J∈𝒥(M'_a,_a) that makes S and C_2 holomorphic.
* the two spheres S and C_2 intersect symplectically orthogonally.
A standard configuration in M'_a is a configuration that satisfies the following additional property:
* there is a neighborhood ν of C_2 such that ν∩ S = ν∩ C_3.
Denote the set of all configurations by 𝒞 and the set of all standard configurations by 𝒞_0. The spaces are topologized as subsets of the quotient space
C^∞(S^2, M_a) / Diff(S^2).
(See <cit.>)
Let S be a configuration in M_a'. Denote by ℋ(S) the space of all -tamed almost complex structures that make both S and C_2 holomorphic. Then the space ℋ(S) is weakly contractible if it is not empty.
The space of configurations 𝒞 is weakly contractible and the inclusion map ι: 𝒞_0→𝒞 is a weak homotopy equivalence. The argument for the latter is analogous to the one in <cit.>, which uses the Gompf isotopy to turn an isotopy inside the space of configurations into an isotopy inside the space of standard configurations. For the proof of the weak contractibility of the space of configurations 𝒞 see the proof of Proposition 5.1 in <cit.>, where they use Remark <ref> above and Corollary <ref>. The group Symp_c(N_a) acts on the space of standard configurations by ϕ . S:= ϕ(S) where ϕ∈ Symp_c(N_a) and S∈𝒞_0. Here Symp_c(N_a) is considered as a subgroup of Symp_c(M'_a) consisting of maps that are supported away from C_2. This action is transitive, see <cit.>. This can be proved using the path connectedness of the space of standard configurations, the symplectic neighborhood theorem, and finally the generalized Banyaga isotopy extension theorem. Given the facts above, one observes that the following map is a fibration:
Symp_c(N_a)→𝒞_0, ϕ↦ϕ(C_3).
Let Symp_c(N_a, C_3) denote the fiber of the above fibration, which consists of the compactly supported symplectomorphisms of M'_a that fix C_3 set-wise and are supported away from C_2. Therefore,
Symp_c(N_a, C_3) ≃ Symp_c(N_a).
Symp_c(N_a) is weakly contractible.
By Remark <ref> it is enough to prove that Symp_c(N_a,C_3) is weakly contractible. This group fits into a short exact sequence of group homomorphisms
0→ Stab(N_a, C_3)→ Symp_c(N_a, C_3) → Symp_c(S^2\{∞})→ 0,
where the group Stab(N_a, C_3) is the group of compactly supported symplectomorphisms of M'_a that fix the configuration C_3 point-wise and are supported away from C_2, the first map is the inclusion, and the second map is the restriction map to C_3 (identified with S^2 via a fixed parametrization). The group Symp_c(S^2\{∞}) is a contractible space. Hence, we have,
Stab(N_a, C_3) ≃ Symp_c(N_a, C_3).
Therefore, it remains to prove that the group Stab(N_a, C_3) is weakly contractible. Denote by 𝒢 the group of symplectic gauge transformations of the symplectic normal bundle to C_3 inside M'_a which are the identity at the point ∞. Here by a symplectic gauge transformation we just mean a bundle isomorphism that preserves the symplectic structure fiber-wise. The group 𝒢 is a weakly contractible space, see <cit.>. Consider the fibration below,
Stab^0(N_a, C_3) → Stab(N_a, C_3)→𝒢,
where the second map is defined by taking ϕ∈ Stab(N_a, C_3) to dϕ : TC_3^⊥→ TC^⊥_3, where TC_3^⊥ is the symplectic normal bundle to C_3, and the fiber Stab^0(N_a, C_3) is the group of all symplectomorphisms of M'_a that fix C_3 point-wise, have differential equal to the identity along the configuration C_3, and are supported away from C_2. Since the base 𝒢 of this fibration is weakly contractible, we obtain,
Stab^0(N_a, C_3) ≃ Stab(N_a, C_3).
Using a Moser-type argument one proves that Stab^0(N_a, C_3) is weakly homotopy equivalent to the group Symp_c(U'), and the latter group was proved to be weakly contractible in Corollary <ref>; this finishes the proof.
§ LAGRANGIAN SPECTRAL ESTIMATORS
Let M_a = S^2× S^2 with the symplectic form ⊕ a where is the standard area form on S^2 with total area 1, and a < 1 be a positive rational number. In <cit.>, Polterovich and Shelukhin constructed new functionals on the space of time dependent Hamiltonian functions (spectral estimators), on the group of Hamiltonian diffeomorphisms (group estimators) and on the Lie algebra of functions on a symplectic manifold (algebra estimators), for the symplectic manifold M_a, that satisfy a number of remarkable properties. In this section we recall the existence theorem of their functional on the space of time dependent Hamiltonian functions (spectral estimators) and list their properties, see <cit.> for more details. We will define a real-valued invariant τ_k,k',B,B' as the difference of two of the spectral estimators and we prove that its restriction to the Hamiltonian functions that are compactly supported in the subdomain N_a:= ^2× S^2 descends to the Hamiltonian group _c(N_a), where ^2 is identified with the upper hemisphere. In section <ref> we prove that the map
τ_k,k',B,B': _c(N_a)→
is uniformly C^0-continuous.
§.§ Existence of spectral estimators
Before we state the existence theorem of the spectral estimators we need a few definitions and some notation. We shall follow the notation in <cit.>. Let z : S^2→ be the height function on S^2, where we think of S^2 as the standard sphere of radius 1/2 in ^3, equipped with its Euclidean area form rescaled by 1/π (so that the total area is 1). Let 0 < C < B be two positive numbers and k > 0 be an integer such that 2B + (k-1)C = 1. Denote l^0,j_k,B := z^-1(-1/2 + B + jC) for j = 0,…, k-1 and let l^0_k,B be the union of the circles l^0,j_k,B. Let a be a positive number that satisfies 0 < a/2 < B-C. Let S be the equator of the second factor S^2 in M_a. Define the following subsets,
L_k,B:= l^0_k,B× S
L^j_k,B:= l^0,j_k,B× S, j = 0,…, k-1.
Let us now recall an important invariant called the Calabi invariant,
(Calabi invariant)
Let (M,) be a 2n-dimensional symplectic manifold and U⊂ M be a displaceable open subset. The Calabi invariant is the map Cal : _c(U)→ defined by,
Cal({ϕ^t_H}):= ∫_0^1∫_U H_tω^n dt,
where H is a Hamiltonian supported in [0,1]× U and generates the class [{ϕ^t_H}]. The Calabi invariant is a well-defined homomorphism.
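As a simple illustration of the definition (an immediate specialisation, recorded here for convenience): if H is a time-independent Hamiltonian supported in U, then
Cal({ϕ^t_H}) = ∫_0^1∫_U Hω^n dt = ∫_U Hω^n,
so for autonomous flows the Calabi invariant is simply the total ω^n-integral of the generating function.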
We say that U is displaceable from V, where U,V are subsets of (M,), if there exists a Hamiltonian diffeomorphism ϕ of M such that ϕ(U)∩ V = ∅.
We are now ready to state the existence theorem of the spectral estimators,
<cit.>
Let k,B,a be as above, where B,a are rational numbers, and let M_a = (S^2× S^2, ⊕ a). There exists a map c_k,B: C^∞([0,1]× M_a)→ that satisfies the following properties:
* (Hofer-Lipschitz) For each G,H ∈ C^∞([0,1]× M_a) we have,
|c_k,B(G) - c_k,B(H)|≤∫_0^1max|G_t - H_t|dt.
* (Monotonicity) For G,H ∈ C^∞([0,1]× M_a) and G ≤ H we have
c_k,B(G)≤ c_k,B(H).
* (Normalization) For G ∈ C^∞([0,1]× M_a) and b ∈ C^∞([0,1]) we have
c_k,B(G + b) = c_k,B(G) + ∫_0^1b(t) dt.
* (Lagrangian control) If H ∈ C^∞([0,1]× M_a) and H(t,-)_|_L^j_k,B≡ c_j(t)∈, then we have
c_k,B(H) = 1/k∑_0≤ j< k∫_0^1c_j(t) dt.
* (Independence of Hamiltonian) The following function is well defined:
(M_a)→, [ϕ] ↦ c_k,B(H),
where H is a mean zero Hamiltonian that generates the class [ϕ]. By mean zero we mean ∫_M H_tω^n = 0 for all t∈ [0,1].
* (Sub-additivity) For all ϕ,ψ∈(M_a) we have,
c_k,B(ϕψ) ≤ c_k,B(ϕ) + c_k,B(ψ).
* (Calabi property) If H ∈ C^∞([0,1]× M_a) is supported in [0,1]× U for an open subset U disjoint from L_k,B, then,
c_k,B(ϕ_H) = -1/vol(M_a)Cal(ϕ_H).
* (Controlled additivity) Let ψ∈(M_a) such that there is a Hamiltonian H that is supported in [0,1]× U for an open subset U disjoint from L_k,B, and generates ψ. Then for all ϕ∈(M_a) we have,
c_k,B(ψϕ) = c_k,B(ψ) + c_k,B(ϕ).
We shall consider the restriction of the functions c_k,B to the Hamiltonian functions that are compactly supported in N_a := (^2× S^2, 1/2_0⊕ a) where we identify ^2 with the upper hemisphere of the first factor S^2 and _0 is the standard area form on ^2 with total area 1. By the "Independence of Hamiltonian" property of the spectral estimators, to show that they descend to the group _c(N_a), it is enough to prove that the fundamental group of _c(N_a) is trivial. This was proved in Section <ref>, see Theorem <ref>. Hence, we finally have a well-defined map c_k,B: _c(N_a)→ that in particular satisfies the following properties:
* (Hofer-Lipschitz) For ϕ, ψ∈_c(N_a) we have
|c_k,B(ϕ) - c_k,B(ψ)| ≤ϕψ^-1_h.
See Remark <ref> for the definition of ._h and its relation with the other Hofer-norm defined in Definition <ref>.
* (Calabi property) For ϕ∈_c(N_a) that is supported in an open subset U disjoint from L_k,B we have,
c_k,B(ϕ) = -1/vol(M_a)Cal(ϕ),
where ϕ is a lift of ϕ inside _c(U).
* (Controlled additivity) Let ψ∈_c(N_a) be supported in an open subset U disjoint from L_k,B and let ϕ∈_c(N_a) be any Hamiltonian diffeomorphism, then,
c_k,B(ψϕ) (*)= c_k,B(ψ) + c_k,B(ϕ)
To see this, choose a lift ψϕ∈_c(N_a) of ψϕ and let H ∈ C^∞([0,1]× M_a) be a mean zero Hamiltonian supported in [0,1]× N_a that generates the chosen lift. Choose a lift ψ of ψ in _c(U) and let G∈ C^∞([0,1]× M_a) be supported in [0,1]× U, have mean zero and generate the lift. Then, the Hamiltonian function G# H will have mean zero and generates a lift of ϕ in _c(N_a) which we call ϕ. Hence, by the "Controlled additivity property" in Theorem <ref> and the choices we made, the equality (*) is implied.
§.§ C^0-continuity of τ_k,k',B,B'
Here, we prove the C^0-continuity of the invariant τ_k,k',B,B': _c(N_a)→ defined below. To prove this, we will be using our main theorem that we proved in Section <ref>, see Theorem <ref>, the Hofer-Lipschitz property of c_k,B invariants that was stated in Section <ref>, see Properties <ref>, and the fact that τ is invariant under some specific perturbations, see Lemma <ref> below.
Let k,B, C and k',B', C' be as above, 0 < a/2 < min{B - C, B' - C'}, and B,B',a be rational numbers. Let M_a, N_a be defined as before. Then we define τ_k,k',B,B': _c(N_a)→ by the following:
τ_k,k',B,B'(ϕ):= c_k,B(ϕ) - c_k',B'(ϕ).
We will be omitting the indices unless necessary.
To prove the C^0-continuity of the invariant τ in Section <ref>, we use a property of the invariant proved in <cit.>, which for the convenience of the reader we write it in the lemma below,
Let τ = τ_k,k',B,B': _c(N_a)→ be the invariant defined in Definition <ref>. Let U⊂ N_a be an open subset that is disjoint from L_k,B and L_k',B'. Let ϕ∈_U(N_a) be a Hamiltonian diffeomorphism of N_a compactly supported in U and let θ∈_c(N_a) be any Hamiltonian diffeomorphism. Then, τ(θϕ) = τ(θ). In particular, τ(ϕ) = 0.
τ(θϕ) = c_k,B(θϕ) - c_k',B'(θϕ)
(1)= (c_k,B(θ) + c_k,B(ϕ)) - (c_k',B'(θ) + c_k',B'(ϕ))
(2)= c_k,B(θ) - c_k',B'(θ) = τ(θ).
The equalities (1) and (2) follow from the controlled additivity property and the Calabi property, respectively, applied to both c_k,B and c_k',B'. (See the Properties <ref>.) For the last part it is enough to set θ = id.
(C^0-Continuity)
The invariant τ_k,k',B,B': _c(N_a)→ is C^0-continuous.
Let ϵ > 0 be a given positive number and B be an open topological disk in ^2 that is disjoint from l^0_k,B and l^0_k',B', and define U:= B× S^2. (Recall that we think of ^2 as the upper hemisphere of the first factor of M_a.) Let δ > 0 be given by Theorem <ref> for ϵ/2 and B. Note that Theorem <ref> and other lemmas used in its proof, are stated in the case a = 1, but see Remark <ref> to make sure that Theorem <ref> holds for all a > 0 as well. Let θ∈_c(N_a) be any Hamiltonian diffeomorphism. Define the following C^0-neighborhood of θ,
Ν_δ(θ):= {θϕ : d_C^0(ϕ, id) < δ}.
Choose any element θϕ∈Ν_δ(θ) and let ψ∈_c(N_a) be the Hamiltonian supported in U that is given by the Theorem <ref> for ϕ, then we have,
|τ(θϕ) - τ(θ)| = |τ(θϕ) - τ(θψ)|
≤ 2(θϕ)(θψ)^-1_h = 2 ϕψ^-1_h≤ 2 ϕψ^-1_H≤ 2ϵ/2 = ϵ.
The first equality is proved in Lemma <ref>, the first inequality is followed by the Hofer-Lipschitz property of both c_k,B, c_k',B', see Properties <ref>, the equality afterwards is by the conjugation invariance of the Hofer norm, and for the inequality between two different Hofer norms see Remark <ref>.
§ APPLICATIONS
§.§ C^0-open sets in the complement of Hofer balls in _c(N_a)
In this section we show a simple application of our C^0-continuity result to the following question which was initially posed by Le Roux in <cit.>:
Let (M,) be a symplectic manifold and let (M,) be the group of compactly supported Hamiltonian diffeomorphisms of M. Let A > 0 be a fixed positive number and d_H be the Hofer metric on the group, see Definition <ref>. Define the following subset of (M,),
E_A(M,):= {ϕ∈(M,) : d_H(ϕ, id) > A}.
Does the set E_A(M,) have a non-empty C^0 interior ?
Here we consider the symplectic manifold ^2(1/2)× S^2(a) where 0 < a < 1 is a rational number.
Let N_a = (^2× S^2, 1/2_0⊕ a), where _0, are standard area forms with total area 1 on the disk and sphere respectively and 0 < a < 1 is a rational number. Then the set E_A(N_a) has non-empty C^0-interior for every A > 0.
Let A > 0 be a positive number. Let B,B' > a be some rational numbers and let C, k, C', k' be some positive numbers that satisfy the assumptions of Definition <ref>. Consider the functional τ_k,k',B,B', which was proved to be C^0-continuous in Theorem <ref>. Let ϕ∈(N_a) be a Hamiltonian diffeomorphism that satisfies |τ(ϕ)| > 2A + 1. Now consider the C^0-ball around ϕ with radius δ > 0, B_C^0(ϕ, δ), where δ is such that, if d_C^0(ψ, ϕ) < δ then
|τ(ψ) - τ(ϕ)| < 1.
Now for every ψ∈ B_C^0(ϕ, δ) we have |τ(ψ)| ≥ |τ(ϕ)| - |τ(ψ) - τ(ϕ)| > (2A + 1) - 1 = 2A. Moreover, by the Hofer-Lipschitz property of c_k,B and c_k',B' we have |τ(ψ)| = |τ(ψ) - τ(id)| ≤ 2ψ_H = 2 d_H(ψ, id), hence
d_H(ψ, id) ≥1/2|τ(ψ)| > 1/2(2A) = A.
Therefore we have,
B_C^0(ϕ, δ)⊂ E_A(N_a).
Here, the area 1/2 of the disk is irrelevant and it could be any positive number c∈ (0,1].
§.§ Infinite dimensional flats in (N'_a)
In this section we answer the question of whether one can isometrically embed a flat space into (N'_a) equipped with the Hofer distance where N'_a = (^2× S^2, (1/2 + b)_0⊕ a), 0 < b < 1/6(1 - a) and 0 < a < 1, and _0, are standard area forms on the disk and the sphere respectively, with total area form 1. In the following theorem we show that one can isometrically embed an infinite dimensional flat space into the group of compactly supported Hamiltonian diffeomorphisms of N'_a. Here, the factor 1/2 + b can be replaced by any real number c satisfying 1/2 + b < c < 1.
The space (C^∞_c(0,b), d_C^0) isometrically embeds into ((N'_a), d_H) where 0 < a < 1 is any rational number and 0 < b < 1/6(1 - a) is any positive number, and d_C^0, d_H are the C^0-distance and the Hofer distance respectively.
Let h∈ C^∞_c(0,b) and define h^#: [-1/2, 1/2]→ as follows:
h^#(z):=
h(z), if z∈ (0,b),
-h(2b - z), if z∈ (b, 2b),
h(-z), if z ∈ (-b, 0),
-h(-2b - z), if z ∈ (-2b, -b),
0, otherwise,
see Figure <ref> on the left. Let z: S^2→ [-1/2, 1/2] be the normalized moment map of the natural Hamiltonian action of S^1 on S^2. (When S^2 is the standard sphere in ^3 with radius 1/2 and area form 1/π, the map z is just the height function.) Define the following homomorphism:
Ψ : C^∞_c(0,b)→_c(N_a')
h↦ϕ^1_h^#∘ z∘π_1,
where ϕ^1_h^#∘ z∘π_1 is the restriction to the subdomain N'_a of the time-one map of the autonomous Hamiltonian h^#∘ z∘π_1 of M_a; here π_1 is the projection to the first factor of M_a. We will argue as in the proof of <cit.>. Let Ψ: C^∞_c(0,b)→_c(N'_a) be the lift of Ψ that takes h to the homotopy class [{ϕ^t_Γ(h)}], where Γ(h) := h^#∘ z∘π_1 restricted to N'_a. Then, we have,
Ψ(h)_h≤∫_0^1max|Γ(h)| dt = h_C^0,
where ._h is defined as the infimum of the Hofer norm where the infimum is taken over all mean zero Hamiltonian functions generating the same homotopy class, see Remark <ref> for the definition of Hofer norm. To finish the proof we shall prove that the reverse inequality also holds. Since, the map Ψ is a homomorphism and the Hofer norm is invariant under the inverse operation, without loss of generality, we assume that h_C^0 = h(x_0) > 0 for some x_0∈ (0,b). For a,b as in the statement we consider the invariant c_2,B_i: _c(N'_a)→ from Theorem <ref> where B_i = 1/2 - x_i and {x_i}_i≥ 1 is an increasing sequence of rational numbers converging to x_0 and lie in (0,b), see Figure <ref> on the right.
Note that, in order to have such an invariant we need the parameters to satisfy the following,
2B_i + C_i = 1 a/2 < B_i - C_i,
which hold since we have,
b < 1/6(1 - a) a/2 < 1/2 - 3b < 1/2 - 3x_i = B_i - C_i.
By the Hofer-Lipschitz property of the invariant c_2,B_i we have,
c_2,B_i([{ϕ^t_Γ(h)}]) ≤Ψ(h)_h,
and by the Lagrangian-control property we have,
c_2,B_i([{ϕ^t_Γ(h)}]) = 1/2(h^#(1/2 - B_i) + h^#(-1/2 + B_i))
= 1/2(h^#(x_i) + h^#(-x_i)) = h(x_i),
since h^#(x_i) = h(x_i) and h^#(-x_i) = h(x_i) by the definition of h^#.
So, we derive the following inequality:
Ψ(h)_h≥ c_2,B_i([{ϕ^t_Γ(h)}]) = h(x_i) for all i≥ 1,
and letting i→∞, so that h(x_i)→ h(x_0) = h_C^0 by continuity, we obtain
Ψ(h)_h≥h_C^0.
Therefore, the map Ψ is an isometric embedding, i.e. for all h∈ C^∞_c(0,b) we have,
Ψ(h)_h = h_C^0.
Since the space _c(N'_a) is weakly contractible, see Section <ref>, it has in particular a trivial fundamental group; hence the map Ψ descends to an isometric embedding of C^∞_c(0,b) into (N'_a).
§ REMARKS ON ^2×^2
In this section we explain how the analogous results hold for the 4-dimensional symplectic manifold (^2×^2, c_0⊕ a_0), where _0 is the standard area form on ^2 and 0 < a,c < 1 are certain real numbers.
Let P_a = (^2×^2, 1/2_0⊕ a_0) where a > 0 is a positive real number and _0 is the standard area form on ^2. Let D_1,D_2, D_3 be defined as in Lemma <ref>. Suppose g∈(P_a) is a Hamiltonian diffeomorphism that satisfies the following:
g(D_1×^2) ∪ g^-1(D_1×^2)⋐ D_2×^2,
g(D_2×^2) ∪ g^-1(D_2×^2)⋐ D(-a,a)×^2,
g^-1(D(_3,1)×^2) ⊂ D(a,1)×^2
g^-1(D(-1, -_3)×^2) ⊂ D(-1, -a)×^2,
where a is as in Lemma <ref>. Then, there exists a Hamiltonian diffeomorphism ψ∈(P_a) that is compactly supported in D_3×^2 and coincides with g on D_1×^2.
The proof is analogous to the proof of Lemma <ref>. For the notation used below see Lemma <ref>. The only difference is the following: first we think of the second factor ^2(a) as the upper hemisphere of S^2(2a). Pick a small neighborhood 𝒱 of the lower hemisphere S^-(2a) such that g is supported away from ^2×𝒱. Restrict g to D(-ϵ_1, 1) and extend it by identity to 𝒰^u× S^2∪ S^2×𝒱. Then, use Theorem <ref> to extend it to a symplectomorphism ψ_u of S^2× S^2. Restrict g to D(-1, ϵ_1) and extend it by identity to 𝒰^d× S^2∪ S^2×𝒱. Then, use Theorem <ref> to extend it to a symplectomorphism ψ_d of S^2× S^2. Then, the map ψ:= ψ_u∘ψ_d∘ g^-1 is supported away from S^-× S^2∪ S^2× S^-(2a) and its restriction to ^2(1/2)×^2(a) is the desired map. Note that the only difference is that this time we have used the fact that in part (2) of Theorem <ref> one can extend both embeddings 𝒰× S^2 and S^2×𝒰 at the same time.
Therefore, the argument for the Fragmentation Lemma <ref> works for P_a as well. Namely, we have the following,
Let P_a = (^2×^2, 1/2_0⊕ a_0) where _0 is the standard area form with total area 1 and a > 0 is a real number. Let ρ > 0 be a positive number and m > 0 be a positive integer. Divide the unit disk ^2 into m horizontal strips with equal area with respect to the area form 1/2_0 and denote them by D_i. Define U_i to be the interior of D_i for all i. Then, there exists δ > 0 such that for every g∈(P_a) with d_C^0(g,id) < δ there are g_i∈_U_i×^2(P_a) for i = 1, …, m, and θ∈_U×^2(P_a), where U is a disjoint union of topological disks in ^2 with total area less than ρ, so that g = g_1∘…∘ g_m∘θ.
See the proof of Lemma <ref> and whenever needed use Lemma <ref> instead of Lemma <ref>.
Analogously, Theorem <ref> holds for P_a as well. So, we have,
Let M_a = (S^2× S^2, σ⊕ aσ) and P_a = (^2×^2, 1/2_0⊕ a_0) where σ, _0 are the standard area forms on S^2, ^2 with total area 1 respectively, and a > 0 is a real number. Here, we think of ^2(a) as the sphere S^2(a) minus the south pole. Let B be a topological disk in ^2(1/2). Then, for every ϵ > 0 there exists δ > 0 such that,
d_H(B_C^0(id, δ), Ham_B×^2(M_a)) < ϵ,
where B_C^0(id, δ) is the C^0-ball of radius δ around id in _P_a(M_a).
See the proof of Theorem <ref>. Note that in the proof of Theorem <ref>, although the Hamiltonian diffeomorphisms h_j constructed by Lemma <ref> are not supported in P_a, once one conjugates the g_i by these diffeomorphisms, the resulting Hamiltonian diffeomorphisms are compactly supported in P_a since the g_i are.
Finally, in Corollary <ref> we proved that the group (^2×^2) is weakly contractible and in particular has trivial fundamental group. This yields a well-defined spectral estimator τ_k,k',B,B': (^2(1/2)×^2(a))→ for any small enough rational number 0 < a < 1 and k,k',B,B' satisfying the properties in Definition <ref>. Similarly to the proof of Theorem <ref>, it can be proved that this invariant is also uniformly C^0-continuous and enjoys all the remarkable properties discussed in Section <ref>. Therefore, we have the following results for P_a, where 0 < a < 1 is any rational number. See the corresponding results for ^2× S^2 in Sections <ref> and <ref>.
Let P_a be as before and 0 < a < 1 be a rational number. Then, the set E_A(P_a) contains a C^0-open subset for every A > 0.
See the proof of Theorem <ref>.
The space (C^∞_c(0,b), d_C^0) isometrically embeds into ((P'_a), d_H) where P_a':= (^2×^2, (1/2 + b)_0⊕ a_0), 0 < a < 1 is a rational number, b satisfies 0 < b < 1/6(1 - a) and d_C^0, d_H are the C^0-distance and the Hofer distance respectively.
See the proof of Theorem <ref>. Think of ^2(a) as a sphere with area a minus its south pole. We only need to make a slight perturbation of the embedding Ψ. Let β: ^2(a)→ be a smooth radial cut off function that is 1 on {r ≤ R - ϵ} and is 0 on {r ≥ R - ϵ/2} where ϵ > 0 is a small number depending on a and R is the radius of ^2(a). Set Γ(h)'(x,y):= β(y) h^#∘ z∘π_1(x,y). Then, define the embedding Ψ as follows:
Ψ: C^∞_c(0,b)→(P_a')
h↦ϕ^1_Γ(h)',
where ϕ^1_Γ(h)' is the time-one map of the autonomous flow corresponding to Γ(h)' restricted to the subdomain P_a'. The rest of the argument in the proof of Theorem <ref> goes through with this perturbed embedding.
|
http://arxiv.org/abs/2307.01085v1 | 20230703150710 | Some challenges of calibrating differentiable agent-based models | [
"Arnau Quera-Bofarull",
"Joel Dyer",
"Anisoara Calinescu",
"Michael Wooldridge"
] | cs.MA | [
"cs.MA",
"cs.AI",
"q-fin.TR",
"stat.ML"
] |
Some challenges of calibrating differentiable agent-based models
Arnau Quera-Bofarull (equal contribution), Department of Computer Science, University of Oxford
Joel Dyer (equal contribution), Department of Computer Science, University of Oxford, and Institute for New Economic Thinking, Oxford
Anisoara Calinescu, Department of Computer Science, University of Oxford
Michael Wooldridge, Department of Computer Science, University of Oxford
Correspondence: Arnau Quera-Bofarull <[email protected]>, Joel Dyer <[email protected]>
Agent-based models (ABMs) are a promising approach to modelling and reasoning about complex systems, yet their application in practice is impeded by their complexity, discrete nature, and the difficulty of performing parameter inference and optimisation tasks. This in turn has sparked interest in the construction of differentiable ABMs as a strategy for combatting these difficulties, yet a number of challenges remain. In this paper, we discuss and present experiments that highlight some of these challenges, along with potential solutions.
§ INTRODUCTION
Agent-based models (ABMs, see <ref> for a brief overview) have gained considerable popularity across a range of disciplines, due to their ability to accurately simulate complex systems at a granular level. While these models offer unique advantages, their complexity presents significant challenges, for example in terms of parameter calibration <cit.>. For such tasks, multiple factors contribute to their difficulty, including the intractability of the ABM's likelihood function, and the often black-box and non-differentiable nature of the ABM.
These drawbacks of ABMs have motivated research into the construction of differentiable ABMs <cit.>, for example through the use of differentiable programming and by exploiting automatic differentiation (AD) frameworks. AD – a methodological cornerstone in machine learning, largely underpinning the success of deep learning paradigms due to its ability to accurately compute derivatives within models – circumvents issues present in alternative approaches to model differentiation by applying the chain rule of differentiation at a computational level, resulting in exact derivatives.
Despite recent progress, the challenges involved in building and benefitting from differentiable ABMs remain under-explored, and there exists little guidance to practitioners interested in implementing and exploiting differentiable ABMs.
The aim of this paper is therefore to discuss some central challenges in applying AD to ABMs.
§ CHALLENGES
§.§ Discrete randomness
The issue of differentiating through discrete structures is inherent in ABMs, which simulate discrete events, transitions, and interactions that are incompatible with conventional AD. Initial efforts to implement AD within ABMs have primarily centred on transforming the ABMs' discrete control flow structure with continuous approximations <cit.>. Furthermore, the use of the Gumbel-Softmax (GS) reparametrisation trick <cit.> allows for the differentiation of discrete randomness, and has been deployed effectively in epidemiological ABMs <cit.>. However, this approach does not provide an ideal solution. While it allows for gradient calculations, GS does not guarantee unbiased or low-variance gradients <cit.>. Developing unbiased and lower variance methods such as StochasticAD <cit.> in the Julia programming language <cit.> is currently an active field of research, but we limit the scope of our discussion here to GS-based methods for discrete ABMs.
Despite the potential lack of robustness of GS, GS-based differentiable ABM implementations have shown great success in improving the calibration <cit.> and sensitivity analyses <cit.> of ABMs. In the Experiments section below, we further show that gradients obtained using the GS trick are robust enough to enable fast and accurate Bayesian inference (see <ref>).
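To illustrate the kind of relaxation involved, the following minimal sketch shows how a single Bernoulli infection decision inside an ABM time-step can be made differentiable with the GS trick in PyTorch. The function and variable names are ours and purely illustrative; this is not code from any of the cited implementations, and, as discussed above, the resulting gradients are biased and sensitive to the chosen temperature.

import torch
import torch.nn.functional as F

def gs_bernoulli(p, tau=0.1, hard=True):
    # Differentiable relaxation of a Bernoulli(p) draw via Gumbel-Softmax.
    # With hard=True the forward pass returns a value in {0, 1} (straight-through),
    # while gradients w.r.t. p flow through the softmax relaxation at temperature tau.
    logits = torch.stack([torch.log(p + 1e-10), torch.log(1.0 - p + 1e-10)], dim=-1)
    sample = F.gumbel_softmax(logits, tau=tau, hard=hard)
    return sample[..., 0]  # component corresponding to "infected"

# Toy transmission step: agents are infected with probability 1 - exp(-beta * contacts).
beta = torch.tensor(0.3, requires_grad=True)
contacts = torch.tensor([2.0, 0.0, 5.0, 1.0])
p_infect = 1.0 - torch.exp(-beta * contacts)
new_infections = gs_bernoulli(p_infect)  # discrete outcomes, yet differentiable in beta
new_infections.sum().backward()
print(beta.grad)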
§.§ Reverse- vs. Forward-mode AD
In Reverse-mode AD (RMAD), a computation graph must be stored that records all operations performed within the model, such that the gradients of the model outputs with respect to the input parameters can be obtained. This contrasts with Forward-mode AD (FMAD), where the gradients are computed during the forward simulation. There are two important computational considerations when comparing FMAD vs. RMAD. The first is that the computational time associated with FMAD scales with the number of model inputs, while that of RMAD scales with the number of model outputs. In machine learning, the latter option is more prevalent, since machine learning models often have many more inputs than outputs.
However, the computation graph that RMAD must store in (often GPU) memory can be extremely large, hindering the possibility of differentiating through large models. This is particularly pertinent for ABMs: the size of the computation graph grows with the number of agents and time-steps, which can pose a challenge to the use of RMAD for ABMs with a large number of agents and time-steps.
To address this, in <ref> we discuss a differentiating strategy that alternates between FMAD and RMAD when calibrating ABMs, and we apply it to an epidemiological simulation involving 8 million agents.
§.§ Monte Carlo gradient estimation
Since ABMs are typically stochastic models, it can often be the case that practitioners are interested in performing an optimisation problem of the form
min_ω∈Ω𝔼_z ∼ p_ω[ℒ(z)],
where p_ω∈{ p_ω' : ω' ∈Ω} is a probability distribution on some domain 𝒵 indexed by a parameter ω belonging to some set Ω, and ℒ : 𝒵→ℝ is a loss function. For example, certain parameter calibration procedures that seek to identify the parameters in some set that minimise some discrepancy 𝒟(·, y) between the model output x and real-world data y can be cast in the form
min_∈𝔼_x∼ p(·|)[𝒟(x, y)],
where p(·|) is the ABM's likelihood function. The gradients of a differentiable ABM can then be exploited by gradient-assisted methods for minimising the objective in (<ref>), by finding a Monte Carlo estimate of the expression
∇_ω𝔼_z ∼ p_ω[ℒ(z)].
For differentiable ABMs, a Monte Carlo estimate of this gradient can be obtained using the path-wise derivative via reparametrisation tricks <cit.>. In such cases, derivatives of the form ∂x_t / ∂ω_i will contribute to the estimate. To properly benefit from access to the differentiable ABM's gradients in these settings, it is critical that low-variance, low-bias Monte Carlo estimates of (<ref>) are available. However, as we will demonstrate in <ref>, naively estimating these gradients by accounting for both (a) the explicit dependency of each x_t on ω_i, and (b) the implicit dependency on ω_i, mediated by the x_1:t-1 that result from the recursive structure of ABMs, can result in unusable gradient estimates with prohibitively large variances. Consequently, modifications to vanilla AD can become necessary, as illustrated in <ref>.
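To make this concrete, the following sketch computes a reparameterised (pathwise) Monte Carlo estimate of the gradient above for a toy differentiable simulator in PyTorch; the simulator and loss are placeholders of our own and are not one of the models studied in this paper.

import torch

def simulate(omega, n_steps=50):
    # Toy differentiable simulator: each x_t is a deterministic, differentiable
    # function of omega and of reparameterised standard Normal noise.
    x, xs = torch.zeros(()), []
    for _ in range(n_steps):
        eps = torch.randn(())
        x = omega[0] * x + omega[1] * eps
        xs.append(x)
    return torch.stack(xs)

def loss(x, target=1.0):
    return ((x - target) ** 2).mean()

omega = torch.tensor([0.5, 0.2], requires_grad=True)
n_samples = 32
# Pathwise Monte Carlo estimate of grad_omega E[ L(z) ].
estimate = torch.stack([loss(simulate(omega)) for _ in range(n_samples)]).mean()
estimate.backward()
print(omega.grad)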
§ EXPERIMENTS
In this section, we present experiments on the use of gradient-assisted calibration methods for two ABMs, where each experiment serves to highlight different combinations of the challenges described in Section <ref>.
While there exist
many different gradient-assisted calibration methods,
we focus
on a variational approach to Bayesian parameter inference termed Generalised Variational Inference <cit.> – a likelihood-free Bayesian inference approach that has previously been used to calibrate the parameters θ∈ℝ^d of a differentiable ABM <cit.>. Here, a variational procedure targets a “generalised” posterior <cit.>
π_w, y(θ) ∝ e^-w·ℓ(y, θ)π(θ),
where π(θ) is a prior distribution, ℓ(y, θ) is a loss function capturing the compatibility between the observed data y and the behaviour of the ABM at parameter vector θ, and w > 0 is a hyperparameter. To target this posterior, we train a normalising flow q_ϕ with trainable parameters ϕ to minimise the Kullback-Leibler divergence KL(q_ϕ‖π_w, y) from q_ϕ to π_w, y, yielding the minimising parameters
ϕ_w,y,π
= arg min_ϕ{ w 𝔼_q_ϕ[ℓ(y, θ)] + KL(q_ϕ‖π)}.
Further details
are provided in <ref>. Code to reproduce the results and perform GVI on differentiable ABMs can be found at <https://github.com/arnauqb/blackbirds>.
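A minimal sketch of the resulting training step is given below, in PyTorch, mirroring the structure of the objective above. The flow interface (a sample_and_log_prob method, as in common normalising-flow libraries), the prior, simulator, and loss are placeholders; the implementation in the repository linked above may differ in its details.

import torch

def gvi_step(flow, prior, simulator, loss_fn, y, w, optimiser, n_samples=8):
    # One gradient step on  w * E_q[ loss(y, theta) ] + KL(q_phi || prior).
    optimiser.zero_grad()
    theta, log_q = flow.sample_and_log_prob(n_samples)  # reparameterised samples from q_phi
    expected_loss = torch.stack([loss_fn(y, simulator(t)) for t in theta]).mean()
    kl = (log_q - prior.log_prob(theta)).mean()          # Monte Carlo estimate of KL(q_phi || prior)
    objective = w * expected_loss + kl
    objective.backward()
    optimiser.step()
    return objective.item()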
§.§ The Brock & Hommes model
The Brock & Hommes model <cit.> is a heterogeneous agent model for the price x_t ∈ℝ of an asset over time 1 ≤ t ≤ T. At each time step, the agents in the model subscribe to one of a set of J > 1 trading strategies, each of which is characterised by a trend-following parameter g_j and bias parameter b_j, j ∈{1, …, J}. Following <cit.>, we note that
the price x_t may be written as deterministic transformations f_t of the input parameters = (g_1, …, g_J, b_1, …, b_J), auxiliary parameters , and standard Normal random variables:
x_t = f_t(ϵ_1, …, ϵ_t, , ), ϵ_t ∼𝒩(0,1).
Further details are provided in <ref>. Thus, provided ℓ is chosen to be a differentiable function of , this enables us to employ gradient-based approaches to minimising the objective (<ref>) that exploit the reparameterisation trick.
Fixing g_1 = b_1 = b_4 = 0 and g_4 = 1.01, we consider the task of calibrating parameters g_2, g_3, b_2, b_3 given synthetic data y = (y_1, …, y_T) generated from the model at (g_2, g_3, b_2, b_3) = (0.9, 0.9, 0.2, -0.2) with T=100. We follow <cit.> and
target the generalised posterior (<ref>) given by the choice
ℓ(y, θ) = MMD^2(ℙ_T, ℙ_θ),
where MMD^2(ℙ_T, ℙ_θ) is the maximum mean discrepancy between the empirical measure ℙ_T of the observed returns (y_1, …, y_T) and the distribution ℙ_θ of returns implied by the simulator at parameters θ. Using a Gaussian kernel within the MMD computation, the operations comprising evaluation of ℓ(y, θ) are also all differentiable and deterministic, enabling evaluation of the term ∇_ϕ𝔼_θ∼ q_ϕℓ(y, θ) in (<ref>) using the reparameterisation trick (see Appendix <ref>).
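For concreteness, a differentiable estimate of this MMD^2 loss with a Gaussian kernel can be written as below; this is a sketch of our own (the bandwidth choice and exact estimator used in the experiments may differ), with the within-sample diagonal terms excluded to match the sums given in the appendix.

import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    # a: (n,), b: (m,) -> (n, m) Gram matrix of k(a_i, b_j).
    d2 = (a[:, None] - b[None, :]) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Estimate of MMD^2 between simulated returns x and observed returns y;
    # differentiable in x, so gradients can flow back into the simulator parameters.
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    n, m = x.shape[0], y.shape[0]
    xx = (kxx.sum() - kxx.diagonal().sum()) / (n * (n - 1))
    yy = (kyy.sum() - kyy.diagonal().sum()) / (m * (m - 1))
    return xx + yy - 2.0 * kxy.mean()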
Despite our ability to compute the partial derivatives ∂x_t / ∂ϕ_i exactly,
the posterior estimator q_ϕ struggles to train with a gradient-assisted approach to minimising (<ref>). This can be seen in <ref>, in which the objective function decreases slowly with the number of epochs when trained with AdamW <cit.> and using the vanilla pathwise derivative (red curve). Indeed, we see in this case that the access to the simulator's gradients appears to offer no improvement over the score-based gradient, shown with the purple curve and obtained as
∇_ϕ𝔼_q_ϕ[ℓ(y, θ)] = 𝔼_q_ϕ[ℓ(y, θ) ∇_ϕlog q_ϕ(θ)].
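A sketch of this score-function (log-derivative) estimator for a simple diagonal Gaussian variational density is given below; for a flow one would differentiate its log-density instead. The loss_fn stands for the simulation-and-loss evaluation and is a placeholder of ours. No simulator gradients are required, which is what makes this the gradient-free baseline.

import torch

mu = torch.zeros(4, requires_grad=True)
log_sigma = torch.zeros(4, requires_grad=True)

def score_gradient(loss_fn, n_samples=16):
    # Monte Carlo estimate of grad_{mu, log_sigma} E_q[loss] via the log-derivative trick.
    total = 0.0
    for _ in range(n_samples):
        q = torch.distributions.Normal(mu, log_sigma.exp())
        theta = q.sample()                # .sample() carries no pathwise gradient
        with torch.no_grad():
            weight = loss_fn(theta)       # simulator loss, treated as a constant weight
        total = total + weight * q.log_prob(theta).sum()
    return torch.autograd.grad(total / n_samples, (mu, log_sigma))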
Drawing inspiration from the literature on backpropagation-through-time (e.g. truncated back-propagation in the context of RNN training, see <cit.>), we consider pruning a subset of the paths in the computation graph that contribute to each of the ∂x_t / ∂_i
as a possible solution to this problem. We achieve this by invoking an appropriate graph-truncating operation (e.g. detach in PyTorch <cit.>) on terms x_t' that (a) contribute directly/explicitly to the evaluation of x_t and (b) for which t' < t - H for some “gradient horizon” H ≥ 0. As is evident from <ref>, we observe that a finite gradient horizon can dramatically improve the gradient-assisted training of q_ϕ. In this experiment, the best performance was observed when using a gradient horizon of H = 0.
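The sketch below illustrates the pruning for a generic recursive simulator in PyTorch: lagged states older than the gradient horizon are detached before entering the current update, which removes the corresponding paths from the computation graph. The names are illustrative and are not taken from the paper's code; step_fn stands for whatever map produces x_t from the parameters and the (possibly truncated) history.

import torch

def simulate_truncated(theta, step_fn, x0, n_steps, horizon=0):
    # Run x_t = step_fn(theta, inputs), detaching states x_{t'} with t' < t - horizon.
    # horizon=0 keeps only the explicit dependence of each x_t on theta;
    # horizon=n_steps recovers full backpropagation-through-time.
    history = [x0]
    xs = []
    for _ in range(n_steps):
        inputs = []
        for lag, x_prev in enumerate(reversed(history), start=1):  # lag=1 is x_{t-1}
            inputs.append(x_prev if lag <= horizon else x_prev.detach())
        x_t = step_fn(theta, inputs)
        history.append(x_t)
        xs.append(x_t)
    return torch.stack(xs)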
We posit that this is a manifestation of a bias-variance trade-off in the Monte Carlo gradient estimation step: pruning a subset of paths in the computation graph with the use of a finite gradient horizon may introduce some bias in, but can significantly reduce the variance of, Monte Carlo estimates of the gradient ∇_ϕ𝔼_q_ϕ[ℓ(y, )] when employing the pathwise derivative. This hypothesis is supported by <ref>, which shows histograms of the standard deviation of the estimates of ∂𝔼_q_ϕ[ℓ(y, )] / ∂ϕ_j across all j for gradient horizons H ∈{ 0, 1, 2, 100 }, and for the score-based estimator. There, we see that the histogram shifts towards larger values as H increases. Further results supporting this hypothesis are given in Appendix <ref>.
§.§ The JUNE model
The JUNE model <cit.> is a large-scale epidemiological ABM of England based on a realistic synthetic population constructed from the English census. Calibrating the original implementation required the construction of a surrogate model due to its high computational cost <cit.>. A differentiable implementation of JUNE <cit.> employs the GS reparameterisation trick to differentiate through discrete randomness. Compared to its non-differentiable counterpart, it has been used to more efficiently generate parameter point estimates, as well as sensitivity analyses <cit.>.
§.§.§ Reducing memory consumption through forward-mode AD
JUNE can simulate the entire English population at a scale of 1:1 (53 million people at the time of the 2011 census). As discussed in <ref>, differentiating through this model using RMAD is challenging due to the high memory demand of storing the computation graph. To perform GVI for this model, we implement a hybrid AD technique: we use FMAD to obtain the Jacobian J_θ of the ABM outputs with respect to the ABM parameters θ, and combine it with RMAD through the flow q_ϕ, yielding
∇_ϕ𝔼_q_ϕ[ℓ(y, θ)] = J_θ(𝔼_q_ϕ[ℓ (𝐲, θ)]) ·∇_ϕθ, with
J_θ(𝔼_q_ϕℓ (𝐲, θ)) = ∂𝔼_q_ϕℓ (𝐲, θ)/∂θ∈ℝ^1 × d.
Here, (<ref>) is the Jacobian obtained through FMAD and ∇_ϕθ∈ℝ^d× F is the Jacobian, obtained with RMAD, of the d ABM parameters generated by the normalising flow with parameters ϕ∈ℝ^F.
In <ref>, we plot the memory costs of employing FMAD and RMAD to compute the ABM's Jacobian. We see that the cost of FMAD is independent of the number of time-steps, since no computation graph is stored. In contrast, the cost of RMAD scales linearly with the number of time-steps and agents. Simulating the entire English population for 300 time-steps with RMAD would require 5TB of memory, while doing so with FMAD would require merely 18GB, regardless of the number of time-steps. The memory savings of FMAD come at an increased computational cost, since it requires d model evaluations for θ∈ℝ^d; however, since these evaluations are embarrassingly parallelisable, the impact on performance can be minimal.
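A minimal sketch of this hybrid estimator in PyTorch is given below, using torch.func.jacfwd for the forward-mode Jacobian through the (large, many-agent) simulator and ordinary reverse-mode autodiff through the (small) normalising flow; the flow interface and the simulator_loss function are illustrative placeholders rather than code from the actual codebase.

import torch
from torch.func import jacfwd

def hybrid_gradient(flow, simulator_loss, n_samples=4):
    # grad_phi E_q[loss] = J_theta(E_q[loss]) . grad_phi theta
    # (FMAD through the ABM, RMAD through the flow).
    theta, _ = flow.sample_and_log_prob(n_samples)        # shape (n_samples, d), built on the RMAD graph
    # Forward-mode Jacobian of the scalar expected loss w.r.t. theta (no graph stored in the ABM).
    jac = jacfwd(lambda t: simulator_loss(t).mean())(theta.detach())
    # Reverse-mode pass through the flow only, seeded with the FMAD Jacobian.
    theta.backward(gradient=jac)
    return [p.grad for p in flow.parameters()]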
With the above in mind, we set up an experiment with London's population (8.1 million people) in the differentiable JUNE implementation. We generate a synthetic time-series of daily infections for 50 days using some assumed parameters that we aim to recover through our calibration process. The parameters that we vary are the contact intensities at 10 different locations, as well as the number of initial cases. Further details of the experimental setup are shown in <ref>.
We apply the GVI procedure to calibrate the model with 11 free parameters. The flow converges after approximately 3,000 model evaluations, highlighting the potential for simulation-efficient calibration with gradient-assisted methods. In <ref>, we show a comparison of runs obtained by sampling the ABM parameters from the trained flow, the untrained flow, and the prior. This demonstrates that the trained flow generates parameters that result in close agreement between the simulator and ground truth data, while providing useful uncertainty quantification.
§ CONCLUSIONS AND DISCUSSION
This study examines some challenges that arise from the application of vanilla AD to ABMs, such as overcoming the inherent discreteness of ABMs and the high variance and computational requirements of passing gradients through large simulators. We have shown that these challenges can be overcome to some extent with different modifications to vanilla AD. As supporting evidence, we successfully calibrate differentiable implementations of the Brock & Hommes and JUNE models with these modifications, the latter involving over eight million agents and discrete randomness. In this way, this study helps to pave the way towards robust calibration of large-scale agent-based models.
icml2023
§ AGENT-BASED MODELS
Agent-based modelling is the name given to a broad approach to modelling complex systems that consist of multiple discrete, autonomous, and heterogeneous interacting components – the “agents” of the system. Examples of such complex systems include the housing market <cit.>, in which a large collection of renters, homeowners, financial institutions etc. interact and take actions which affect, for example, the availability of housing and mortgage rates. An agent-based approach to modelling such a system would model the system at the level of these individual agents in the system, often with the intention of observing how aggregate, macroscopic properties of the system emerge from the microscopic details of the system.
While this is often a natural approach to modelling systems of this kind, the inherently discrete nature of the model's components and dynamics give rise to difficulties in applying gradient-based optimisation and calibration techniques. We expand on these difficulties in <ref>.
§ THE BROCK & HOMMES MODEL
The dynamics of the Brock and Hommes model are often expressed as the following system of coupled equations:
x_t = 1/R[∑_j=1^J(g_j x_t-1 + b_j)n_j, t + σϵ_t], ϵ_t∼𝒩(0, 1),
n_j, t = exp(β U_j,t-1)/∑_j' = 1^Jexp(β U_j',t-1),
U_j,t-1 = (x_t-1 - R x_t-2)(g_j x_t-3 + b_j - R x_t-2),
where R, β, σ are auxiliary parameters. We fix J=4, R = 1.01, σ = 0.04, g_1 = b_1 = b_4 = 0, g_4 = 1.01, and β = 120 for the experiment presented in the main body of the paper.
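A compact differentiable implementation of these dynamics in PyTorch might look as follows; this is a sketch of our own (the parameter values match those fixed above), not code taken from the paper's repository.

import torch

def brock_hommes(g, b, T=100, R=1.01, beta=120.0, sigma=0.04):
    # Simulate prices x_1, ..., x_T; the output is differentiable in g and b.
    x = [torch.zeros(()), torch.zeros(()), torch.zeros(())]   # x_{-2} = x_{-1} = x_0 = 0
    for _ in range(T):
        x1, x2, x3 = x[-1], x[-2], x[-3]                       # x_{t-1}, x_{t-2}, x_{t-3}
        U = (x1 - R * x2) * (g * x3 + b - R * x2)              # strategy fitnesses U_{j, t-1}
        n = torch.softmax(beta * U, dim=0)                     # strategy fractions n_{j, t}
        x_new = ((n * (g * x1 + b)).sum() + sigma * torch.randn(())) / R
        x.append(x_new)
    return torch.stack(x[3:])

g = torch.tensor([0.0, 0.9, 0.9, 1.01])
b = torch.tensor([0.0, 0.2, -0.2, 0.0], requires_grad=True)
prices = brock_hommes(g, b)
prices.sum().backward()   # pathwise gradients with respect to the bias parameters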
By rewriting the above system of equations, we are able to find the transition density for observation x_t+1 as
p(x_t+1|x_1:t, , ) = 𝒩(f(x_t-2:t, , ), σ^2/R^2 )
where
f(x_t-2:t, , ) = 1/R∑_j=1^Jexp[β(x_t - R x_t-1)(g_j x_t-2 + b_j - R x_t-1)]/∑_j' = 1^Jexp[β(x_t - R x_t-1)(g_j'x_t-2 + b_j' - R x_t-1)](g_j x_t + b_j)
and = (R, β, σ). The model is taken to be initialised with x_-2 = x_-1 = x_0 = 0.
§.§ The asset prices as differentiable and deterministic transformations of input noise
We claim in the main body that we may rewrite the x_t as deterministic transformations of standard Normal random variables. By exploiting the autoregressive structure of the model, we explicitly provide these forms for x_1 and x_2 below to demonstrate this claim. Throughout, ϵ_t ∼𝒩(0,1) are iid standard Normal random variables. They are as follows:
x_1 = 1/R(∑_j=1^Jb_j/J + σϵ_1) := f_1(ϵ_1, , )
x_2 = 1/R∑_j = 1^J exp[β b_j/R( ∑_j” = 1^J b_j”/J + σϵ_1 )]/∑_j' = 1^J exp[β b_j'/R( ∑_j” = 1^J b_j”/J + σϵ_1 )] + σ/Rϵ_2 := f_2(ϵ_1:2, , ).
Repeating this process, we find that the x_t may all be expressed in the form x_t = f_t(ϵ_1:t, , ) for a deterministic mapping f_t : ℝ^t×ℝ^2J×ℝ^3 →ℝ.
Taking
ℓ(y, ) = MMD^2(ℙ_T, ℙ_)
= 𝔼_x, x' ∼ℙ_[k(x, x')] + 𝔼_y, y' ∼ℙ_T[k(y, y')] - 2𝔼_x ∼ℙ_, y∼ℙ_T[k(x, y)]
≈1/T(T-1)∑_t≠ t' k(x_t, x_t') + 1/T(T-1)∑_t≠ t' k(y_t, y_t') - 2/T^2∑_t, t' = 1^T k(x_t, y_t')
with a Gaussian kernel k, the loss ℓ(y, ) is a deterministic and differentiable transformation of the noise drawn from the base distribution ρ of the normalising flow and of the separate noise source with distribution ν given as input to the simulator. This permits us to estimate the gradient of the first term in (<ref>) as
∇_ϕ𝔼_q_ϕ[ ℓ(y, )]
= ∇_ϕ𝔼_u∼ρ[ℓ(y, _ϕ(u))]
= 𝔼_u∼ρ[∇_ϕℓ(y, _ϕ(u))]
= 𝔼_u∼ρ[1/T(T-1)∑_t≠ t'∇_ϕ k(f_t(ϵ_1:t, _ϕ(u), ), f_t'(ϵ_1:t', _ϕ(u), )) - .
. 2/T^2∑_t, t' = 1^T ∇_ϕ k(f_t(ϵ_1:t, _ϕ(u), ), y_t')]
where in the first line we use the Law of the Unconscious Statistician and assume throughout that the order of derivatives and integrals can be exchanged freely.
§.§ Further experimental results for the Brock & Hommes model
§.§.§ Calibration results with gradient horizon H=0
In Figure <ref>, we show the generalised posterior approximated by the converged normalising flow with gradient horizon H=0, which achieved a loss close to 0. Since the objective function is lower-bounded by 0, this posterior can be taken to be a good approximation to the generalised posterior it targets.
§.§.§ Further evidence in support of the bias-variance trade-off in the reparameterised Monte Carlo gradient estimator at different gradient horizons
To further test the hypothesis that the Monte Carlo gradient estimators at different gradient horizons induce a bias-variance trade-off that can yield favourable performance at finite gradient horizons, we inspect the empirical distribution of the estimates η_N of the gradient (<ref>) based on N Monte Carlo samples,
η_N := 1/N∑_n=1^N[ 1/T(T-1)∑_t≠ t'∇_ϕ k(x_t, x_t') - 2/T^2∑_t, t' = 1^T ∇_ϕ k(x_t, y_t') ]
where here we take a diagonal Gaussian distribution over ℝ^4 as the posterior estimator q_ϕ. In this experiment, therefore, ϕ = (μ_1, …, μ_4, σ_1, …, σ_4), where μ_i and σ_i are the mean and standard deviation in each dimension of this choice of q_ϕ.
When implemented in the form given by Equations (<ref>) and (<ref>) – as is necessary to avoid the cumbersome task of finding the explicit form of f_t(ϵ_1:t, , ) for each t – the x_t depend on both explicitly and implicitly via x_t-3:t-1. Thus in general we have
∂x_t/∂_i = ∑_l = 1^∞ ∑_𝐯 = (v_0, v_1, …, v_l) : v_0 = x_t, v_l = _i ∏_m=0^l-1∂ v_m/∂ v_m+1,
where we abuse notation by taking the derivative on the left-hand side to mean “holding only (_1, …, _i-1, _i+1, …, _d) constant” while the partial derivatives on the right-hand side are partial derivatives in the true sense of the term (or equivalently the x_t on the left-hand side is viewed only as a function of , while on the right-hand side they are viewed as functions of both and x_1:t). Choosing a gradient horizon of H > 0 then amounts to retaining paths of length greater than 1 if their first edge corresponds to an edge connecting x_t to any node in 𝒳_H := {x_t-h : h ∈ℋ_H}, where ℋ_0 = ∅ and ℋ_j = {1, …, j} when j > 0. In this way, the derivative (<ref>) is then taken instead as
∂x_t/∂_i = ∂x_t/∂_i + ∑_l = 2^∞ ∑_𝐯 = (v_0, v_1, …, v_l) : v_0 = x_t, v_1 ∈𝒳_H, v_l = _i ∏_m=0^l-1∂ v_m/∂ v_m+1,
where the same abuse of notation is once again used. This elimination of terms from the summation can be expected to reduce the variance since for two random variables X_0, X_1, it is the case that
Var(X_0 + X_1) = Var(X_0) + Var(X_1) + 2 Cov(X_0, X_1),
which can be greater than Var(X_0) if Var(X_1) + 2 Cov(X_0, X_1) > 0. It may also be preferable to a stricter truncation of the computation graph – for example, pruning all paths beyond a certain length – as it retains information on long-range dependencies while still potentially reducing variance.
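In an automatic-differentiation implementation, retaining exactly those paths whose first edge lands in 𝒳_H can be achieved by detaching lagged states that fall outside the horizon, as in the PyTorch-style sketch below; the step function, its arguments, and the lag depth of three are illustrative assumptions rather than the actual implementation.

import torch

def simulate_with_horizon(theta, eps, H, step):
    # Run a lag-3 simulator, detaching lagged states beyond the gradient horizon H.
    # step(lags, theta, eps_t) is assumed to compute x_t from (x_{t-1}, x_{t-2}, x_{t-3}).
    T = eps.shape[0]
    xs = [torch.zeros(()) for _ in range(3)]  # x_{-2} = x_{-1} = x_0 = 0
    for t in range(T):
        lags = []
        for h in (1, 2, 3):
            lag = xs[-h]
            # keep the edge from x_t to x_{t-h} only when h <= H; otherwise cut the path here
            lags.append(lag if h <= H else lag.detach())
        xs.append(step(lags, theta, eps[t]))
    return torch.stack(xs[3:])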
We plot in <ref> boxplots of the distribution of η_N obtained with N=5 and different values of H, at a fixed value of ϕ (the results were qualitatively similar for the different ϕ we tried, and so we show only the results from one setting). We also show the same boxplots for the gradient estimate obtained with the score-based estimator,
∇_ϕ𝔼_q_ϕ[ℓ(y, )] = 𝔼_q_ϕ[ℓ(y, )∇_ϕlog q_ϕ()] ≈1/N∑_n=1^N ℓ(y, ^(n)) ∇_ϕlog q_ϕ(^(n)).
Orange (green) dashed lines show the mean (median) of the distributions. The blue crosses show the mean of the distribution of the gradient estimate (<ref>) obtained using N=1000, which provides a good estimate of the target value (since the score-based estimator is unbiased). We see from this that, generally speaking, the variance of these estimates increases as H increases, while the bias in the estimates does not grow substantially. This highlights the possibility that using a finite
gradient horizon can be beneficial when performing Monte Carlo gradient estimation for differentiable time series simulation models, such as ABMs, when reparameterisation is possible. Further work will be required to establish the general applicability and suitability of this technique.
§ THE JUNE MODEL
The JUNE model <cit.> is an agent-based epidemiological model that generates a synthetic population at a highly detailed level using the English census data. This model has been applied in various scenarios, including analyzing the impact of the first and second waves of SARS-CoV-2 in England <cit.> and devising strategies to control disease transmission in refugee settlements <cit.>.
To enhance its performance and enable gradient-based calibration, the JUNE model has been incorporated into the GradABM framework <cit.>. This integration allows for faster execution and more efficient parameter calibration. The JUNE model offers a wide range of configurable parameters related to disease transmission and progression, vaccination, and non-pharmaceutical interventions.
Given a susceptible agent exposed to an infection at location L, the probability of that agent becoming infected is given by
p = 1 - exp(-ψ_s β_L Δ t∑_i∈ gℐ_i(t) ) ,
where the summation is conducted over all contacts an agent has with infected individuals at the given location L. The term ℐ_i(t) represents the time-dependent infectious profile of each infected agent, while Δ t is the duration of the interaction. Additionally, β_L corresponds to a location-specific parameter that captures the variation in the nature of interactions across different locations.
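A minimal sketch of this infection step is given below; the function name, arguments, and example numbers are illustrative and do not reproduce the actual GradABM-JUNE implementation.

import numpy as np

def infection_probability(psi_s, beta_L, dt, infectiousness):
    # p = 1 - exp(-psi_s * beta_L * dt * sum_i I_i(t)) for one susceptible agent,
    # where `infectiousness` lists I_i(t) for the infected contacts at location L.
    return 1.0 - np.exp(-psi_s * beta_L * dt * np.sum(infectiousness))

# Example: a susceptible agent meeting three infected contacts in a household
p = infection_probability(psi_s=1.0, beta_L=0.6, dt=1.0, infectiousness=[0.2, 0.5, 0.1])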
Since the β_L parameters are not directly measurable physical quantities, they are typically calibrated using available data on the number of cases or fatalities over a specific time period. For the current work we consider the calibration of 11 β_L parameters corresponding to the contact intensity at households, companies, schools, universities, pubs and restaurants, gyms, cinemas, shops, care homes, and residence visits. Additionally, we also calibrate the initial number of infections, I_0, which are distributed randomly across the population. The synthetic ground truth data is generated by using I_0 = 10^-3.5 N_a where N_a is the number of agents, and β_household = β_care home = 0.6, β_school = β_company = β_university = 0.4, β_pub =β_shop =β_gym =β_cinema =β_visit =0.1.
The normalising flow is trained by setting ℓ(𝐲, ) to be the squared distance between the log_10 of the infection time-series. This choice of loss function keeps the training robust against outliers, since the number of infections can vary over several orders of magnitude. The specific training parameters are described in <ref>.
§.§ Further experimental results for the JUNE model
We show in <ref> the loss as a function of epoch when performing SVI. We observe rapid convergence after 600 epochs; since we draw 5 Monte Carlo samples to estimate <ref>, this amounts to 3,000 model evaluations. We also show in <ref> a corner plot of the trained and untrained normalising flows, where the solid black line denotes the prior density. We observe that the flow is very confident about the values of the more sensitive parameters, such as the initial number of infections and the contact intensity at companies, while it is less certain for venues that have a low impact on the overall number of infections, such as cinemas. It is worth noting that this calibration problem is highly underdetermined: it is difficult to infer the contact intensities at each location from the overall number of infections over time alone. Nonetheless, the flow fits the synthetic ground-truth data well.
§ FURTHER EXPERIMENTAL DETAILS
We use the normalizing-flows library <cit.> to implement the normalising flows in PyTorch. All models are trained using the AdamW optimizer <cit.> with a learning rate of 10^-3.
To calibrate the Brock & Hommes and JUNE models, we employ a masked affine autoregressive flow <cit.> with 16 transformations, each parametrized by 2 blocks with 20 hidden units. We also set the regularisation weight to w = 10^-3 for both models and estimate <ref> using 5 Monte Carlo samples.
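For reference, a stripped-down sketch of one variational update is given below. To keep it short we use a diagonal Gaussian q_ϕ rather than the masked affine autoregressive flow used in the paper, and we write the regularisation term as a Monte Carlo estimate of a KL-type penalty against the prior; the exact objective is the one in (<ref>), and all names here are our own.

import torch

# Variational parameters phi = (mu, log_std) of a diagonal Gaussian over theta in R^4
mu = torch.zeros(4, requires_grad=True)
log_std = torch.zeros(4, requires_grad=True)
opt = torch.optim.AdamW([mu, log_std], lr=1e-3)

def svi_step(loss_fn, log_prior, n_mc=5, w=1e-3):
    # loss_fn and log_prior are assumed to accept a batch of parameter draws
    opt.zero_grad()
    q = torch.distributions.Normal(mu, log_std.exp())
    theta = q.rsample((n_mc,))                      # reparameterised samples, shape (n_mc, 4)
    expected_loss = loss_fn(theta).mean()           # estimate of E_q[ l(y, theta) ]
    reg = w * (q.log_prob(theta).sum(-1) - log_prior(theta)).mean()
    objective = expected_loss + reg
    objective.backward()
    opt.step()
    return objective.item()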
|
http://arxiv.org/abs/2307.01385v1 | 20230703224349 | Recovering coefficients in a system of semilinear Helmholtz equations from internal data | [
"Kui Ren",
"Nathan Soedjak"
] | math.AP | [
"math.AP",
"cs.NA",
"math.NA",
"math.OC",
"35R30, 49M41, 65N21, 78A46"
] |
Recovering coefficients in a system of semilinear Helmholtz equations from internal data
Kui Ren and Nathan Soedjak
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We study an inverse problem for a coupled system of semilinear Helmholtz equations where we are interested in reconstructing multiple coefficients in the system from internal data measured in applications such as thermoacoustic imaging. We derive results on the uniqueness and stability of the inverse problem in the case of small boundary data based on the technique of first- and higher-order linearization. Numerical simulations are provided to illustrate the quality of reconstructions that can be expected from noisy data.
Inverse problems, semilinear Helmholtz equation, uniqueness, stability, thermoacoustic imaging, nonlinear physics, second harmonic generation
35R30, 49M41, 65N21, 78A46
§ INTRODUCTION
Let Ω⊂^d (d≥ 2) be a bounded domain with smooth boundary ∂Ω. We consider the following system of coupled semilinear Helmholtz equations
[ Δ u+ k^2 (1+η) u +ikσ u = -k^2 γ u^* v, Ω; Δ v+ (2k)^2 (1+η) v +i2kσ v = -(2k)^2 γ u^2, Ω; u = g, v = h, ∂Ω ]
where Δ denotes the standard Laplacian operator, and u^* denotes the complex conjugate of u. This system serves as a simplified model of the second harmonic generation process in a heterogeneous medium excited by an incident wave source g <cit.>. The fields u and v are, respectively, the incident field (with wave number k) and the generated second-harmonics (with wave number 2k). The medium has first- and second-order susceptibility η and γ, respectively, and an absorption coefficient σ.
We are interested in inverse problems to system (<ref>) where the objective is to reconstruct the coefficients in the system from data of the form:
H() = Γ() σ() (|u|^2+|v|^2), ∈Ω .
where Γ() is an additional physical coefficient that appears in the data generation process. This inverse problem is motivated by applications in thermoacoustic imaging, a hybrid imaging modality where the thermoacoustic effect is used to couple high-resolution ultrasound imaging to microwave imaging to achieve high-resolution and high-contrast imaging of physical properties of heterogeneous media in the microwave regime. In thermoacoustic imaging, H is the initial pressure field of the ultrasound generated by the thermoacoustic effect. It is proportional to the local energy absorbed by the medium from the microwave illumination, that is, σ(|u|^2+|v|^2). The proportionality constant Γ() is called the Grüneisen coefficient <cit.>. We refer interested readers to <cit.> and references therein for recent developments in the modeling and computational aspects of thermoacoustic imaging.
There are two main differences between the inverse problem we study here and those that exist in the literature. First, our model (<ref>) takes into account second-harmonic generation, a nonlinear mechanism that is often used for the imaging of molecular signatures of particular proteins in biomedical applications. Second, the objective of our inverse problem includes the Grüneisen coefficient Γ, which is mostly ignored in the previous studies of quantitative thermoacoustic imaging <cit.>.
The fact that the absorbed energy is in the form of σ()(|u|^2+|v|^2) has to be understood from the physics of the thermoacoustic process. In a nutshell, consider the full time-dependent forms of the incident (at frequency ω) and generated (at frequency 2ω) electric wave of the form:
u(,t) = 2(u()e^-iω t) = 2|u()|cos(φ_u() - ω t) ,
v(,t) = 2(v()e^-i2ω t) = 2|v()|cos(φ_v() - 2ω t) ,
where φ_u (resp. φ_v) is the phase of u (resp. v). Let I() denote the energy density of the total electric field at the location , averaged over a period of length T:=2π/ω. It is then clear that
I() = 1/T∫_0^T 1/2 |u(,t)+v(,t)|^2 dt = 1/2T∫_0^T u(,t)^2 + v(,t)^2 + 2u(,t)v(,t) dt
=4|u()|^21/2T∫_0^Tcos^2(φ_u() - ω t) dt + 4|v()|^21/2T∫_0^Tcos^2(φ_v() - 2ω t) dt
+ 8|u()||v()|1/2T∫_0^T cos(φ_u() - ω t)cos(φ_v() - 2ω t) dt
=(|u()|^2+|v()|^2) ,
where we have used the standard trigonometric identity cos(x)cos(y)=1/2(cos(x+y)+cos(x-y)) to simplify the integrals.
The cross-term vanishes because cos(φ_u() - ω t) and cos(φ_v() - 2ω t) are orthogonal over a period. The absorbed radiation is thus σ()I()=σ()(|u|^2+|v|^2). This simple calculation provides a (perhaps oversimplified) justification of (<ref>) as the internal data in thermoacoustic imaging with second-harmonic generation.
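The vanishing of the cross-term is also easy to check numerically; the short sketch below, with arbitrarily chosen complex amplitudes, verifies that the period-averaged energy density reduces to |u|^2+|v|^2 up to quadrature error.

import numpy as np

omega = 2.0 * np.pi
T = 2.0 * np.pi / omega                  # one period of the fundamental frequency
u = 0.7 * np.exp(1j * 0.3)               # arbitrary complex amplitudes
v = 0.2 * np.exp(-1j * 1.1)
t = np.linspace(0.0, T, 200001)

E = 2.0 * np.real(u * np.exp(-1j * omega * t)) + 2.0 * np.real(v * np.exp(-2j * omega * t))
I = np.trapz(0.5 * E ** 2, t) / T        # period-averaged energy density
print(I, abs(u) ** 2 + abs(v) ** 2)      # the two values agree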
The main objective of this paper is to study the problem of determining information on (Γ, η, γ, σ) from information encoded in the map:
Λ_Γ, η, γ, σ: (g,h) ↦ H.
We will show that under appropriate conditions, the data (<ref>) allow unique (and stable, in an appropriate sense) reconstruction of the coefficients (Γ, η, γ, σ). Moreover, there is an explicit reconstruction method to recover (Γ, η, σ) (see the proof of Theorem <ref>), and another explicit method to reconstruct γ (see the remarks below (<ref>)).
The paper is organized as follows. We first review in Section <ref> some of the elementary properties of the model (<ref>) that we will use in our analysis. We also introduce the multilinearization method as the basis of the study of the inverse problems. We then derive the uniqueness and stability of reconstructing (Γ, η, σ) in Section <ref> and study the problem of reconstructing γ in Section <ref>. Numerical simulations based on synthetic data will be provided in Section <ref> to demonstrate the quality of the reconstructions that can be achieved in such an inverse problem before we conclude the paper with additional remarks in Section <ref>.
§ THE FORWARD MODEL AND ITS LINEARIZATION
Throughout the paper, we make the following assumptions on the domain Ω and the physical coefficients involved in the inverse problem:
* The domain Ω is bounded with smooth boundary ∂Ω.
* The coefficients Γ,η,σ,γ all lie in the set
:= {f∈^2(Ω; ): c_1≤ f≤ c_2 Ω}
for some c_1>0 and c_2>0.
While it is clear that such assumptions can be slightly relaxed for the technical results in the rest of the paper to still hold, we choose the current form to make the presentation of the paper easy to follow.
§.§ Well-posedness of the forward model
We start with the well-posedness of the semilinear system (<ref>) for small boundary data.
Let α∈ (0,1). Under the assumptions in <ref> and <ref>, there exist ϵ>0 and δ>0 such that for all g,h∈^2,α(∂Ω;) with g_^2,α(∂Ω)<ϵ and h_^2,α(∂Ω)<ϵ, the boundary value problem (<ref>) has a unique solution
(u,v)∈{f∈^2,α(Ω;): f_^2,α(Ω)≤δ}^2.
Moreover, there exists a constant C=C(α,Ω,η,σ,γ) such that this unique solution satisfies the estimates
u_^2,α(Ω) ≤ C( g_^2,α(∂Ω) + h_^2,α(∂Ω)),
v_^2,α(Ω) ≤ C( g_^2,α(∂Ω) + h_^2,α(∂Ω)).
This result comes as a more-or-less straightforward application of the Banach fixed point theorem in a standard setting. For the convenience of the readers, we provide the proof in Appendix <ref>.
The above well-posedness result is not satisfactory as it requires that the boundary data to be small. Currently, we do not have a stronger result. This result, however, is sufficient for the inverse problem we want to study as our method in the next sections will be mainly based on the linearization of the forward model with small boundary data.
In the engineering literature, it is often the case that one drops the γ u^* v term in the first equation of system (<ref>). In this case, the system is only one-way coupled. The solution to the first equation only appears in the second equation as the source term. In such a case, well-posedness of the system can be easily established for general boundary conditions. The corresponding inverse problems are also simplified. We will comment more on this issue in the next sections.
§.§ First- and higher-order linearizations
To deal with the challenge caused by the nonlinearity of the forward model (<ref>), we use the technique of linearization <cit.>. We now document the linearization process.
For a given small number ε>0, let (u_ε, v_ε) be the solution to the system
[ Δ u_ε+ k^2 (1+η) u_ε +ikσ u_ε = -k^2 γ u_ε^* v_ε, Ω; Δ v_ε+ (2k)^2 (1+η) v_ε +i2kσ v_ε = -(2k)^2 γ u_ε^2, Ω ]
with boundary conditions
u_ε = ε g_1 + 1/2 ε^2 g_2, v_ε = ε h_1 + 1/2 ε^2 h_2, ∂Ω .
We denote by (u_0, v_0)=(0, 0) the solution for the case of ε=0, and by H_ε the data of the form (<ref>) corresponding to (u_ε, v_ε), that is
H_ε = Γσ (|u_ε|^2 + |v_ε|^2) .
We expect that the solution (u_ε, v_ε) varies sufficiently smoothly with respect to ε when ε is adequately small. Therefore, formally we have expansions of the solution and the data of the form:
u_ε() = ε u^(1)() + 1/2 ε^2 u^(2)() + o(ε^2),
v_ε() = ε v^(1)() + 1/2 ε^2 v^(2)() + o(ε^2),
H_ε() = ε H^(1)() + 1/2 ε^2 H^(2)() + 1/6 ε^3 H^(3)() + o(ε^3),
as ε→ 0. When this expansion is well-defined, we have that
u^(1)() := lim_ε→ 0 u_ε()/ε, u^(2)() := lim_ε→ 0 (u_ε() - ε u^(1)())/(1/2 ε^2),
v^(1)() := lim_ε→ 0 v_ε()/ε, v^(2)() := lim_ε→ 0 (v_ε() - ε v^(1)())/(1/2 ε^2),
H^(1)() := lim_ε→ 0 H_ε()/ε, H^(2)() := lim_ε→ 0 (H_ε() - ε H^(1)())/(1/2 ε^2),
H^(3)() := lim_ε→ 0 (H_ε() - ε H^(1)() - 1/2 ε^2 H^(2)())/(1/6 ε^3).
Assuming for the moment that all the derivatives are well-defined, straightforward formal calculations show that, to first order, (u^(1), v^(1)) solves the boundary value problem:
[ Δ u^(1)+ k^2 (1+η) u^(1) +ikσ u^(1) = 0, Ω; Δ v^(1)+ (2k)^2 (1+η) v^(1) +i2kσ v^(1) = 0, Ω; u^(1) = g_1, v^(1) = h_1, ∂Ω ]
while H^(1) satisfies
H^(1)=0 .
To second order, we can formally verify that (u^(2), v^(2)) solves the boundary value problem:
[ Δ u^(2)+ k^2 (1+η) u^(2)+ikσ u^(2) = -2 k^2 γ u^(1)*v^(1), Ω; Δ v^(2)+ (2k)^2 (1+η) v^(2)+i2kσ v^(2) = -2 (2k)^2γ (u^(1))^2, Ω; u^(2) = g_2, v^(2) = h_2, ∂Ω . ]
The corresponding perturbative data H^(2) can be expressed as
H^(2)=2Γσ ( u^(1)* u^(1) + v^(1)* v^(1)) .
A little more algebra shows that the third-order data perturbation is in the form:
H^(3) = 3Γσ(u^(1)*u^(2)+u^(1)u^(2)*+v^(1)*v^(2)+v^(1)v^(2)*) .
The whole linearization process can be justified mathematically. We summarize the result here.
Let α∈ (0,1) and g_1,g_2,h_1,h_2∈^2,α(∂Ω;). For sufficiently small ε, let (u_ε, v_ε) denote the unique small solution in ^2,α(Ω;)×^2,α(Ω;) to the system (<ref>). Then the derivatives (<ref>) are all well-defined. Moreover, (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) hold.
The proof of this differentiability result is provided in Appendix <ref>.
The multilinearization procedure outlined here is quite standard. It has been refined by many authors and utilized to solve various inverse problems for nonlinear models; see, for instance, <cit.> and references therein for examples of such results.
§ THE RECONSTRUCTION OF (Γ, η, σ)
The first inverse problem is therefore to reconstruct (Γ,σ,η) from the data H^(2) in (<ref>) with the model for u^(1) and v^(1) given in (<ref>). By taking h_1=0, the problem reduces to reconstructing (Γ,σ,η) from the data
H^(2)=2Γσ u^(1)* u^(1),
with the model for u^(1) given in (<ref>).
When Γ and η are known, this problem was analyzed in <cit.>. It was shown that σ can be uniquely recovered with a fixed point iteration. More precisely, for the model
[ Δ u + k^2(1+η()) u + ikσ() u = 0, Ω; u = g, ∂Ω ]
with internal data
H()=Γσ()|u|^2 ,
it is shown in <cit.> that when Γ and η are known, one can reconstruct σ uniquely and stably (in appropriate metrics) from one dataset, provided that the boundary illumination g is appropriately chosen. (More specifically, the proof requires that g is sufficiently close to a function of the form e^ρ· x|_∂Ω, for some ρ∈^n with ρ·ρ=0 and |ρ| sufficiently large.)
In <cit.>, an explicit procedure for reconstructing σ is given (again, assuming that η and Γ are known). Here, we modify the method in order to deal with the case of unknown refractive index η and unknown Grüneisen coefficient Γ. We use the procedure to develop a uniqueness and stability result from two well-chosen datasets.
Let g_1 and g_2 be two incident sources. We measure data corresponding to the illuminations g_1+g_2 and g_1+ig_2 in addition to those corresponding to g_1 and g_2. The linearity of (<ref>) means that solutions corresponding to g_1+g_2 and g_1+ig_2 are u_1+u_2 and u_1+iu_2 respectively. The corresponding data are Γσ |u_1+u_2|^2 and Γσ |u_1+iu_2|^2 respectively.
We may now apply the polarization identity to get:
u_1 u_2^* = 1/2(|u_1+u_2|^2+i|u_1+iu_2|^2-(1+i)|u_1|^2-(1+i)|u_2|^2)
on the inner product space ℂ (applied pointwise). This gives us that the quantity
Γσ u_1 u_2^* = 1/2(Γσ|u_1+u_2|^2+iΓσ|u_1+iu_2|^2-(1+i)Γσ|u_1|^2-(1+i)Γσ|u_2|^2)
is known.
Henceforth, given illuminations {g_j}_j=1^2, we can reconstruct from the measured internal data the new data:
E_j = Γσ u_j u_1^*,
for j=1,2.
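In discrete form, assembling these data is a pointwise operation on the measured arrays; a small Python sketch (with hypothetical names for the four measurements) reads:

import numpy as np

def cross_data(H11, H22, H_sum, H_isum):
    # Recover Gamma*sigma*u_1*u_2^* pointwise from the four interior measurements
    #   H11    = Gamma*sigma*|u_1|^2,         H22    = Gamma*sigma*|u_2|^2,
    #   H_sum  = Gamma*sigma*|u_1 + u_2|^2,   H_isum = Gamma*sigma*|u_1 + i u_2|^2.
    # The datum E_2 = Gamma*sigma*u_2*u_1^* is the complex conjugate of the result.
    return 0.5 * (H_sum + 1j * H_isum - (1.0 + 1j) * (H11 + H22))

# Sanity check on synthetic values; gs plays the role of Gamma*sigma at one point
u1, u2, gs = 1.0 + 0.5j, 0.3 - 0.2j, 2.0
E = cross_data(gs*abs(u1)**2, gs*abs(u2)**2, gs*abs(u1+u2)**2, gs*abs(u1+1j*u2)**2)
print(E, gs * u1 * np.conj(u2))          # the two values agree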
The above construction can be used to develop a uniqueness result straightforwardly.
Let {g_j}_j=1^2 be a set of incident source functions, and suppose that the measured data {E_j}_j=1^2 satisfy the following two conditions:
* E_1()≥α_0>0 for some α_0, a.e. ∈Ω.
* The vector field
β():=∇E_2/E_1
is at least W^1,∞, and |β|≥β_0>0 for some β_0, a.e. ∈Ω.
Then Γ, η, and σ are uniquely determined from the data {E_j}_j=1^2.
We follow the procedures developed in <cit.>. We multiply the equation for u_1 by u_2 and multiply the equation for u_2 by u_1. We subtract the results to have
u_1Δ u_2-u_2Δ u_1=0 .
We can then rewrite this into
∇· u_1^2 ∇u_2/u_1 = ∇· u_1^2 ∇E_2/E_1 =∇· (u_1^2 β)=0 .
The vector field β is known from the data. Therefore the above equation is a transport equation for u_1, that is,
∇· (u_1^2 β)=0, Ω, u_1=g_1, ∂Ω .
With the assumption in condition (ii) of the theorem, classical results in <cit.> show that there exists a unique weak solution u_1 to (<ref>). This gives us the unique reconstruction of u_1.
Now that we have reconstructed u_1, we can use the equation (<ref>) to reconstruct the potential q:
q():=k^2(1+η) + ikσ = -Δ u_1/u_1 .
This gives us η and σ (which are obtained by taking real and imaginary parts of q). The last step is to reconstruct Γ as
Γ=H_1/σ|u_1|^2 .
The proof is complete.
This uniqueness result shows a dramatic difference between the inverse problem defined by (<ref>) and (<ref>) and a similar inverse problem in quantitative photoacoustic tomography in <cit.>, where it is shown that the multiplicative coefficient Γ causes non-uniqueness in the reconstructions, independent of the amount of data available.
The proof of the above uniqueness result is constructive in the sense that it provides an explicit way to solve the inverse problem: solving (<ref>) for u_1, computing q using (<ref>) and then computing Γ as in (<ref>).
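Once u_1 has been obtained from the transport equation, the remaining two steps are pointwise. A finite-difference sketch of the formulas for q and Γ above could read as follows; the transport solve itself is not shown, the grid and all names are our own, and the formulas are only applied at interior grid points.

import numpy as np

def recover_eta_sigma_gamma(u1, H1, k, h):
    # u1, H1 : reconstructed complex field and measured datum Gamma*sigma*|u1|^2
    #          on a uniform 2D grid with spacing h
    lap = (u1[2:, 1:-1] + u1[:-2, 1:-1] + u1[1:-1, 2:] + u1[1:-1, :-2]
           - 4.0 * u1[1:-1, 1:-1]) / h ** 2          # 5-point Laplacian at interior points
    u_in, H_in = u1[1:-1, 1:-1], H1[1:-1, 1:-1]
    q = -lap / u_in                                  # q = k^2 (1 + eta) + i k sigma
    eta = np.real(q) / k ** 2 - 1.0
    sigma = np.imag(q) / k
    Gamma = H_in / (sigma * np.abs(u_in) ** 2)
    return eta, sigma, Gamma                         # values on the interior grid points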
In fact, the above explicit reconstruction procedure also leads to partial (weighted) stability results for the inverse problem.
Let E=(E_1,E_2) and Ẽ=(Ẽ_1, Ẽ_2) be the data corresponding to the coefficients (Γ, η, σ)∈^3 and (Γ̃, η̃, σ̃)∈^3 respectively, generated from the illumination source pair (g_1, g_2). Under the assumption that E and Ẽ satisfy conditions (i)-(ii), we assume further that g_1 and g_2 are selected such that E_2/E_1 is sufficiently small. Then we have that, for some constant c>0,
Γσ-Γ̃σ̃_L^2(Ω)≤ c(H_1- H̃_1_L^2(Ω)+ E- Ẽ_(^2(Ω))^2) .
We first observe that
Γσ-Γσ=H_1/|u_1|^2- H_1/| u_1|^2 =H_1- H_1/|u_1|^2+ H_1/| u_1|^2| u_1|^2-|u_1|^2/|u_1|^2 .
This, together with the Triangle Inequality and the fact that |u_k|^2 is bounded from below, gives us
Γσ-Γσ_L^2(Ω)≤ c (H_1- H_1_L^2(Ω) + u_1^2-u_1^2_L^2(Ω)) ,
for some c>0.
To bound the second term in (<ref>) by the data, let ξ=u_1^2 and ξ= u_1^2. Then we have from the equations ∇·ξβ=0 and ∇·ξβ=0 that
∇·((ξ-ξ) β) + ∇·(ξ (β-β)) =0 .
This can be further rewritten into
β·∇(ξ-ξ)=-(ξ-ξ)∇·β - ∇·(ξ (β-β)) ,
which immediately leads to the bound
β·∇(ξ-ξ)_L^2(Ω)≤(ξ-ξ)∇·β_L^2(Ω)^2+∇·(ξ (β-β))_L^2(Ω) .
With the same algebra, we can derive the bound
β·∇(ξ-ξ)_L^2(Ω)≤(ξ-ξ)∇·β_L^2(Ω)+∇·(ξ (β-β))_L^2(Ω) .
We now multiply (<ref>) by (ξ-ξ)^* to have the equation, after a little algebra,
∇·(|ξ-ξ|^2 β) - (ξ-ξ)β·∇(ξ-ξ)^* + (ξ-ξ)^*∇·ξ(β-β) =0 .
Integrating this equation against a test function ϕ∈^1(Ω) and using integration-by-parts on the last term lead us to the identity
∫_Ω|ξ-ξ|^2 β·∇ϕ d + ∫_Ω (ξ-ξ)β·ϕ∇ (ξ-ξ)^* d
+∫_Ω (ξ-ξ)^* ξ (β-β) ·∇ϕ d +∫_Ωϕξ (β-β)·∇ (ξ-ξ)^* d =0 .
To simplify the presentation, we combine the second and the fourth terms in the equation to have
∫_Ω|ξ-ξ|^2 β·∇ϕ d + ∫_Ω (βξ-βξ)·ϕ∇ (ξ-ξ)^* d +∫_Ω (ξ-ξ)^* ξ (β-β) ·∇ϕ d =0 .
Taking the test function ϕ=E_2^*/E_1^* (hence ∇ϕ=∇E_2^*/E_1^*=β^*), we have
∫_Ω|ξ-ξ|^2 |β|^2 d = - ∫_Ω (βξ-βξ)·E_2^*/E_1^*∇ (ξ-ξ)^* d -∫_Ω (ξ-ξ)^*β^* ·ξ (β-β) d .
This gives us the bound
(ξ-ξ)β_L^2(Ω)^2≤∫_Ω|(βξ-βξ)·E_2^*/E_1^*∇ (ξ-ξ)^*| d + ∫_Ω|(ξ-ξ)^*β^* ·ξ (β-β)| d .
The first term on the right-hand side of (<ref>) can be bounded as follows:
∫_Ω|(βξ-βξ)·E_2^*/E_1^*∇ (ξ-ξ)^*| d≤E_2^*/E_1^*_L^∞(Ω)∫_Ω|(βξ-βξ)·∇ (ξ-ξ)^*| d
≤1/2E_2^*/E_1^*_L^∞(Ω)[ ∫_Ω|βξ·∇ (ξ-ξ)^*|^2 d +∫_Ω|βξ·∇ (ξ-ξ)^*|^2 d]
≤ 2E_2^*/E_1^*_L^∞(Ω)[(ξ-ξ)∇·β_L^2(Ω)^2+∇·(ξ (β-β))_L^2(Ω)^2] ,
where we have used (<ref>) and (<ref>) to get the last inequality. The second term on the right-hand side of (<ref>) can be bounded as:
∫_Ω|(ξ-ξ)^*β^* ·ξ (β-β)| d≤1/2[1/κ^2(ξ-ξ)β_L^2(Ω)^2 +κ^2 (β-β)ξ_L^2(Ω)^2] ,
for any κ>0.
Under the assumption that |E_2/E_1| is sufficiently small, we can take κ to be sufficiently large so that (<ref>) now implies that
ξ-ξ_L^2(Ω)^2≲(β-β)ξ_L^2(Ω)^2 +∇·(ξ (β-β))_L^2(Ω)^2 .
The next step is to bound β-β_L^2(Ω) and ∇·ξ(β-β)_L^2(Ω). To this end, we use the expansion
β-β=( E_1-E_1)∇ E_2/E_1 E_1+ E_2/E_1 E_1∇( E_1-E_1)
+(E_2- E_2)∇1/E_1+1/E_1^2∇(E_2- E_2)+1/E_1 E_1( E_1-E_1)∇(E_2- E_2) ,
to derive the bound
β-β_L^2(Ω)≲E- E_(^1(Ω))^2 .
In a similar manner, we find that
∇·ξ(β-β)_L^2(Ω)≲E- E_(^2(Ω))^2 .
Plugging these results into (<ref>) will give us
ξ-ξ_L^2(Ω)≲E- E_(^2(Ω))^2 .
This, together with the bound (<ref>), will lead us to the stability results of (<ref>).
With the standard technique of complex geometric optics solutions, one can show that for every value of the true coefficients η, σ∈^m(Ω), where m>1+d/2, there exists a set of illuminations (g_j)_j=1^d+1 such that the corresponding measured data (E_j)_j=1^d+1 satisfy both conditions (i) and (ii) <cit.>.
In fact, following <cit.>, it may be possible to ensure that conditions (i) and (ii) hold with high probability by drawing the boundary illuminations (g_j)_j=1^d+1 independently at random from a sub-Gaussian distribution on ^1/2(∂Ω).
We observe that the above reconstruction procedure also works in the case when the internal datum is of the form H=|u| (in which case |u|^2 is known), that is, the datum is independent of Γσ.
§ THE RECONSTRUCTION OF γ
The remaining problem is to reconstruct γ using third-order perturbation of the data. In the rest of this section, we assume that in addition to the internal data (<ref>), we also have access to the Dirichlet-to-Neumann map
Π_γ: (g, h) ↦ (∂ u/∂ν|_∂Ω, ∂ v/∂ν|_∂Ω) ≡ (J_u,J_v) .
Note that we omit the dependence of Π on Γ, η, and σ intentionally here since those coefficients are already known.
The multilinearization of (J_u, J_v) can be established with the calculations in Appendix (<ref>). We will directly use the derivatives (J_u^(1), J_v^(1)) and (J_u^(2), J_v^(2)).
Let us recall that the third-order derivative of the data H^(3) is given in (<ref>). This implies that
u^(1)*u^(2)+u^(1)u^(2)*+v^(1)*v^(2)+v^(1)v^(2)* = H^(3)/3Γσ,
where (u^(1), v^(1)) and (u^(2), v^(2)) are respectively the solutions to (<ref>) and (<ref>), is known in Ω.
From now on, we set g_2=h_2=0 in (<ref>). Consequently, the system (<ref>) for (u^(2), v^(2)) reduces to
[ Δ u^(2)+ k^2 (1+η) u^(2)+ikσ u^(2) = -2 k^2 γ u^(1)*v^(1), Ω; Δ v^(2)+ (2k)^2 (1+η) v^(2)+i2kσ v^(2) = -2 (2k)^2γ (u^(1))^2, Ω; u^(2) = 0, v^(2) = 0, ∂Ω ]
We can now take the complex conjugate of (<ref>) and leverage the fact that γ is real-valued to write down the following system of linear equations for (u^(2), v^(2)), (u^(2)*, v^(2)*), and γ:
[ (Δ + q_1)u^(2) + 2k^2 u^(1)*v^(1)γ = 0, Ω; (Δ + q_1^*)u^(2)* + 2k^2 u^(1)v^(1)*γ = 0, Ω; (Δ + q_2)v^(2) + 2(2k)^2(u^(1))^2γ = 0, Ω; (Δ + q_2^*)v^(2)* + 2(2k)^2(u^(1)*)^2γ = 0, Ω; u^(1)*u^(2)+u^(1)u^(2)*+v^(1)*v^(2)+v^(1)v^(2)* = H^(3)/3Γσ, Ω; (u^(2),u^(2)*,v^(2),v^(2)*) = (0,0,0,0), ∂Ω; (∂u^(2)/∂ν, ∂u^(2)*/∂ν, ∂v^(2)/∂ν, ∂v^(2)*/∂ν) = (J_u^(2),J_u^(2)*,J_v^(2),J_v^(2)*), ∂Ω , ]
where we have used the notation
q_1 := k^2(1+η) + ikσ, q_2 := (2k)^2(1+η) + i2kσ .
If we can solve (<ref>), we can reconstruct γ (and the associated (u^(2), v^(2))). This is a non-iterative reconstruction method. In the rest of this section, we show that γ can be uniquely reconstructed from available data by analyzing the uniqueness of the solution to the linear system (<ref>). The analysis is based on the uniqueness theory for redundant elliptic systems reviewed in <cit.>, which we summarize briefly in Appendix <ref> for the convenience of the readers.
Let X⊂^n be an open set and Γ, η, σ be given positive ^2 functions on X. For every bounded open subset Ω⊂ X with smooth boundary ∂Ω, we denote by (Λ_γ^(Ω), Π_γ^(Ω)) and (Λ_γ̃^(Ω), Π_γ̃^(Ω)) the data corresponding to admissible coefficients γ and γ̃ respectively.
Let _0∈ X be arbitrary. Then there exists ϵ=ϵ(η,σ,_0)>0 such that
for every Ω⊂ B(_0, ϵ), there exist g_1 and h_1 in (<ref>) such that
(H^(3), J_u^(2), J_v^(2))=(H̃^(3), J̃_u^(2), J̃_v^(2)) if and only if γ=γ̃ ,
and for all p>1,
γ-γ̃_L^p(Ω)≤ C(H^(3) - H̃^(3)_W^2,p(Ω) + J_u^(2) - J̃_u^(2)_W^1-1/p, p(∂Ω) + J_v^(2) - J̃_v^(2)_W^1-1/p, p(∂Ω)),
for some constant C=C(Ω,Γ,η,σ)>0.
Let us define, with a slight abuse of notation in which the same symbols now denote the differences,
u := u - ũ, v := v - ṽ, γ := γ - γ̃, H := H - H̃, J_u := J_u - J̃_u, J_v := J_v - J̃_v.
The linear system (<ref>) then implies that
[ (Δ + q_1)u^(2) + 2k^2 u^(1)*v^(1)γ = 0, Ω; (Δ + q_1^*)u^(2)* + 2k^2 u^(1)v^(1)*γ = 0, Ω; (Δ + q_2)v^(2) + 2(2k)^2(u^(1))^2γ = 0, Ω; (Δ + q_2^*)v^(2)* + 2(2k)^2(u^(1)*)^2γ = 0, Ω; u^(1)*u^(2)+u^(1)u^(2)*+v^(1)*v^(2)+v^(1)v^(2)* = H^(3)/3Γσ, Ω; (u^(2),u^(2)*,v^(2),v^(2)*) = (0,0,0,0), ∂Ω; (∂u^(2)/∂ν, ∂u^(2)*/∂ν, ∂v^(2)/∂ν, ∂v^(2)*/∂ν) = (J_u^(2), J_u^(2)*, J_v^(2), J_v^(2)*), ∂Ω . ]
We first eliminate γ by plugging in γ = -(Δ+q_2^*)v^(2)*/2(2k)^2(u^(1)*)^2 from the fourth equation into the first three equations. We then take the Laplacian of the last equation. These procedures lead us to the linear system
(Δ + q_1)u^(2) -v^(1)/4u^(1)*(Δ + q_2^*)v^(2)* = 0,
(Δ + q_1^*)u^(2)* - u^(1)v^(1)*/4(u^(1)*)^2(Δ + q_2^*)v^(2)* = 0,
(Δ + q_2)v^(2) - (u^(1))^2/(u^(1)*)^2(Δ + q_2^*)v^(2)* = 0,
Δ(u^(1)*u^(2))+Δ(u^(1)u^(2)*)+Δ(v^(1)*v^(2))+Δ(v^(1)v^(2)*) = Δ(H^(3)/3Γσ),
in the unknowns u^(2), u^(2)*, v^(2), and v^(2)*. The system may be written in the following matrix form, for the quantity w:=(u^(2),u^(2)*,v^(2),v^(2)*):
[ (,D) w = S, Ω; w = (0,0,0,0), ∂Ω; ∂ w/∂ν = ( J_u^(2), J_u^(2)*, J_v^(2), J_v^(2)*), ∂Ω , ]
where
(,D) := [ Δ+q_1 0 0 -v^(1)/4u^(1)*(Δ + q_2^*); 0 Δ+q_1^* 0 -u^(1)v^(1)*/4(u^(1)*)^2(Δ + q_2^*); 0 0 Δ+q_2 -(u^(1))^2/(u^(1)*)^2(Δ + q_2^*); Δ(u^(1)*·) Δ(u^(1)·) Δ(v^(1)*·) Δ(v^(1)·) ] ,
and
S := (0,0,0,Δ(H^(3)/3Γσ)) .
In the rest of the proof, we show that is an elliptic operator in the sense of Douglis and Nirenberg <cit.>. For the convenience of the reader, we provide a brief review of elliptic system theory in Appendix <ref>. We choose the Douglis-Nirenberg numbers
(s_1,s_2,s_3,s_4) = (0,0,0,0), (t_1,t_2,t_3,t_4) = (2,2,2,2).
The principal part _0(,D) of has symbol
_0(,) := [ ||^2 0 0 -v^(1)/4u^(1)*||^2; 0 ||^2 0 -u^(1)v^(1)*/4(u^(1)*)^2||^2; 0 0 ||^2 -(u^(1))^2/(u^(1)*)^2||^2; u^(1)*||^2 u^(1)||^2 v^(1)*||^2 v^(1)||^2 ] .
One readily sees that _0(_0,) has full rank 4 for all ≠ 0 if and only if the following condition holds at _0:
-v^(1)/4u^(1)*· u^(1)* -u^(1)v^(1)*/4(u^(1)*)^2· u^(1) -(u^(1))^2/(u^(1)*)^2· v^(1)*≠ v^(1),
or equivalently
-(u^(1))^2/(u^(1)*)^2≠v^(1)/v^(1)*
at _0. This condition on u^(1)(_0) and v^(1)(_0) is easily achieved by selecting g_1 and h_1 appropriately. To be precise, let us consider some ball B(_0,ϵ_0) ⊂ X and let u_0 and v_0 be any ^2 functions on B(_0,ϵ_0) satisfying
Δ u_0+ q_1 u_0 = 0, B(_0,ϵ_0),
Δ v_0 + q_2 v_0 = 0, B(_0,ϵ_0),
-u_0^2/(u_0^*)^2(_0) ≠v_0/v_0^*(_0).
The existence of such u_0 and v_0 is obvious as we can take u_0 and v_0 to be any solutions to the first two equations and rescale them by a suitable complex constant to satisfy the condition -u_0^2/(u_0^*)^2(_0) ≠v_0/v_0^*(_0).
It is also useful to observe that u_0 and v_0 depend only on η and σ, not γ.
Now suppose Ω⊂ B(_0,ϵ_0), and select g_1=u_0|_∂Ω and h_1=v_0|_∂Ω in (<ref>), so that
u^(1)=u_0|_Ω, v^(1)=v_0|_Ω .
This means we can write as
(,D) := [ Δ+q_1 0 0 -v_0/4u_0^*(Δ + q_2^*); 0 Δ+q_1^* 0 -u_0 v_0^*/4(u_0^*)^2(Δ + q_2^*); 0 0 Δ+q_2 -u_0^2/(u_0^*)^2(Δ + q_2^*); Δ(u_0^*·) Δ(u_0·) Δ(v_0^*·) Δ(v_0·) ] .
By construction, the constant-coefficient operator (_0,D) with coefficients frozen at x_0 is elliptic. Additionally, from the continuity of u_0 and v_0 we see that there exists ϵ_1=ϵ_1(η,σ,_0)>0 such that -u_0^2/(u_0^*)^2≠v_0/v_0^* on B(_0,ϵ_1). That is, for every Ω⊂ B(_0,ϵ_1), the operator (,D) is elliptic on Ω.
Moreover, observe that the Douglis-Nirenberg numbers (<ref>) of satisfy s_i=0 for all i and t_j = 2 is independent of j. This means that the uniqueness theory for elliptic systems presented in <cit.> applies. More specifically, by Theorem <ref>, we conclude that there exists ϵ_2=ϵ_2(η,σ,_0)>0 such that for every Ω⊂ B(_0,ϵ_2), the boundary value problem (<ref>) has a unique solution.
Now set ϵ = min{ϵ_1,ϵ_2}>0 and let Ω⊂ B(_0,ϵ), so that is an elliptic operator on Ω and the problem (<ref>) has a unique solution.
By elliptic regularity estimate Theorem <ref> applied to (<ref>) with ℓ=0, there exist constants C=C(Ω,η,σ,u_0) = C(Ω,η,σ) and C_2=C_2(Ω,η,σ) such that
u^(2)_W^2,p(Ω) + v^(2)_W^2,p(Ω) ≤ C(Δ(H^(3)/3Γσ)_L^p(Ω)
+ J_u^(2)_W^1-1/p,p(∂Ω)
+ J_v^(2)_W^1-1/p,p(∂Ω)) + C_2(u^(2)_L^p(Ω) + v^(2)_L^p(Ω)).
In fact, since the solution is unique we may drop the last term on the right-hand side, i.e. set C_2=0. This gives
u^(2)_W^2,p(Ω) + v^(2)_W^2,p(Ω)
≤ C(Δ(H^(3)/3Γσ)_L^p(Ω)
+ J_u^(2)_W^1-1/p,p(∂Ω) + J_v^(2)_W^1-1/p,p(∂Ω))
≤ C(H^(3)_W^2,p(Ω)
+ J_u^(2)_W^1-1/p,p(∂Ω) + J_v^(2)_W^1-1/p,p(∂Ω)),
where C=C(Ω,Γ,η,σ).
Finally we substitute this estimate into (<ref>) to obtain
γ_L^p(Ω) = -(Δ + q_1)u^(2)/2k^2 u_0^* v_0_L^p(Ω)≤ Cu^(2)_W^2,p(Ω)
≤ C(H^(3)_W^2,p(Ω)
+ J_u^(2)_W^1-1/p,p(∂Ω) + J_v^(2)_W^1-1/p,p(∂Ω)),
where once again C=C(Ω,Γ,η,σ). This is precisely the desired stability estimate (<ref>).
The above theory on the reconstruction of the coefficient γ requires both the availability of the additional boundary data (<ref>) and the assumption that Ω is sufficiently small. We made these assumptions merely to simplify the proof. We believe that they can be removed without breaking the uniqueness and stability results.
§ NUMERICAL EXPERIMENTS
We now present some numerical simulations to demonstrate the quality of reconstructions that can be achieved for the inverse problem.
We will perform numerical reconstructions with a slightly simplified version of model (<ref>):
[ Δ u+ k^2 (1+η) u +ikσ u = 0, Ω; Δ v+ (2k)^2 (1+η) v +i2kσ v = -(2k)^2 γ u^2, Ω; u = g, v+i2k·∇ v = 0, ∂Ω ]
In other words, we omit the backward coupling term on the right-hand side of the first equation in (<ref>).
This model is connected to the linearized problem in (<ref>). Indeed, if we take the boundary condition of v^(1) to be 0, that is, h_1=0 in (<ref>), then the first equation in (<ref>) and the second equation in (<ref>) can be combined to get (<ref>).
Note that we intentionally changed the Dirichlet boundary condition for v to the more realistic Robin boundary condition. Moreover, due to the fact that the equations in the model (<ref>) are only one-way coupled, we are not limited to the usage of small boundary data g.
The measured interior data still take the form (<ref>). We will use data generated from N_s≥ 1 different boundary conditions {g_j}_j=1^N_s: {H_j}_j=1^N_s.
The numerical reconstructions are performed using standard least-squares optimization procedures that we outline below. The computational implementation of the numerical simulations in this section can be found at <https://github.com/nsoedjak/Imaging-SHG>. All of the following numerical experiments can be reproduced by simply running the corresponding example file for each numerical experiment.
Numerical Experiment I: reconstructing γ. We start with reconstructing the coefficient γ, assuming all other coefficients are known. The reconstruction is achieved with an optimization algorithm that finds γ by minimizing the functional
Φ(γ) := 12∑_j=1^N_sΓσ(|u_j|^2 + |v_j|^2) -H_j_L^2(Ω)^2+1/2β∇γ_L^2(Ω)^2 ,
where we assume that we have collected data from N_s different boundary conditions {g_j}_j=1^N_s. The regularization parameter β will be selected with a trial and error approach. Following the standard adjoint-state method, we introduce the adjoint equations
[ Δ w_j+ (2k)^2 (1+η) w_j +i2kσ w_j = -[Γσ(|u_j|^2+|v_j|^2) - H_j]Γσ v_j^*, Ω; w_j+i2k·∇ w_j = 0, ∂Ω ]
It is then straightforward to verify that the Fréchet derivative of Φ in direction δγ can be written as:
Φ'(γ)[δγ] = ∫_Ω(2(2k)^2 ∑_j=1^N_s w_j u_j^2)δγ d
-β∫_Ω (Δγ)δγ d + β∫_∂Ωγνδγ dS
Once we have the gradient of the objective function with respect to γ, we feed it into a quasi-Newton optimization algorithm with the BFGS updating rule on the Hessian, implemented in MATLAB.
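The implementation in the repository above is written in MATLAB. Purely as an illustration of the functional being minimized, a discrete version of Φ(γ) on a uniform grid can be evaluated as below, where the fields u_j, v_j are assumed to come from a forward Helmholtz solver that is not shown and all names are our own.

import numpy as np

def objective(gamma, fields, data, Gamma, sigma, beta, h):
    # Discrete Phi(gamma): L^2 data misfit over all sources plus gradient regularisation.
    # fields : list of (u_j, v_j) complex arrays solved with the current gamma
    # data   : list of measured arrays H_j;  h : grid spacing;  beta : regularisation weight
    misfit = 0.0
    for (u, v), H in zip(fields, data):
        r = Gamma * sigma * (np.abs(u) ** 2 + np.abs(v) ** 2) - H
        misfit += 0.5 * np.sum(r ** 2) * h ** 2
    gx, gy = np.gradient(gamma, h)
    reg = 0.5 * beta * np.sum(gx ** 2 + gy ** 2) * h ** 2
    return misfit + reg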
Figure <ref> shows the reconstruction of a simple profile of γ from both noise-free and noisy data. The regularization parameter is set to β=10^-7 for this particular case. The quality of the reconstructions is reasonable by visual inspection, and similar levels of reconstruction quality are observed for the various γ profiles we tested. The regularization parameter is selected in a trial-and-error manner, so the value of β used in the simulations may not be the one that gives the best reconstructions. However, since we are not interested in fine-tuning the regularization parameter for small improvements in reconstruction quality, we do not discuss this issue further.
Numerical Experiment II: reconstructing (η, σ,γ). In the second numerical example, we consider the case where Γ is known but η, σ, and γ are unknown. The inversions are done with a least-squares minimization algorithm that is similar to the one used in Numerical Experiment I. Figure <ref> shows that we are still able to obtain good numerical reconstructions, at least in the case when the profiles of η and γ are simple.
Numerical Experiment III: reconstructing (η,γ,Γ). In this example, we assume that σ is known and we are interested in reconstructing η, γ and Γ. Due to the fact that Γ only appears in the measurement, not the PDE model, a naive least-squares minimization formulation like the ones in the previous examples will lead to unbalanced sensitivity between Γ and the rest of the parameters. Hence we instead take a two-step reconstruction approach. In the first step, we use the ratio between measurements to eliminate Γ. That is, we minimize the functional
Ψ(η,γ) := 12∑_j=2^N_s|u_j|^2 + |v_j|^2/|u_1|^2+|v_1|^2 - H_j/H_1_L^2(Ω)^2 + 1/2β_1∇η_L^2(Ω)^2 + 1/2β_2∇γ_L^2(Ω)^2 ,
where we assume that we have collected data from N_s different boundary conditions {g_j}_j=1^N_s. It is clear that Ψ only depends on η and γ, not Γ. The Fréchet derivatives of Ψ can again be found using the standard adjoint-state method. For example, for the derivative with respect to γ, we introduce the adjoint equations
[ Δ w_j+ (2k)^2 (1+η) w_j +i2kσ w_j = -(|u_j|^2+|v_j|^2|u_1|^2+|v_1|^2-H_jH_1)1|u_1|^2+|v_1|^2v_j^*, Ω; w_j+i2k·∇ w_j = 0, ∂Ω ]
and
[ Δ z_j+ (2k)^2 (1+η) z_j +i2kσ z_j = (|u_j|^2+|v_j|^2|u_1|^2+|v_1|^2-H_jH_1)|u_j|^2+|v_j|^2(|u_1|^2+|v_1|^2)^2v_1^*, Ω; z_j+i2k·∇ z_j = 0, ∂Ω. ]
It is then straightforward to verify that the Fréchet derivative of Ψ with respect to γ in direction δγ can be written as:
Ψ'_γ(η,γ)[δγ] = ∫_Ω(2(2k)^2∑_j=1^N_s[w_j u_j^2 + z_j u_1^2])δγ d
-β_2∫_Ω (Δγ)δγ d + β_2∫_∂Ωγνδγ dS .
The Fréchet derivative with respect to η can be computed in a similar fashion.
Once η and γ are reconstructed, we can reconstruct Γ as
Γ = 1/N_s∑_j=1^N_sH_j/σ (|u_j|^2+|v_j|^2) .
A typical reconstruction is shown in Figure <ref>. The reconstructions are highly accurate in this case.
Numerical Experiment IV: reconstructing (η,σ,γ,Γ).
Figure <ref> shows a typical reconstruction of all four coefficients simultaneously. The reconstruction quality is high in the eyeball norm and can be characterized more precisely with numbers such as the relative L^2 error. Note from the reconstruction formula (<ref>) that any inaccuracies in the reconstruction of σ will directly translate into artifacts in the reconstruction of Γ. This can be observed in Figure <ref> (see columns 2 and 4), most notably near the edges of the square anomaly in σ.
§ CONCLUDING REMARKS
We performed a systematic study on inverse problems to a system of coupled semilinear Helmholtz equations as the model for second harmonic generation in thermoacoustic imaging. We developed uniqueness and stability theory for the inverse problems utilizing the multilinearization technique. We showed, via both mathematical analysis and numerical simulations, that it is possible to reconstruct all four coefficients of interest from noisy interior data.
While our results show great promise for the solution of the inverse problems, several aspects of our study's technical side still need to be significantly improved. For instance, we have assumed the Dirichlet boundary condition for the generated second harmonic wave v in model (<ref>). This should certainly be replaced with homogeneous Robin-type boundary conditions that are more physical (as what we did in the computational experiments). Moreover, in Theorem <ref>, we should be able to relax the requirement that the domain Ω is sufficiently small. In the same theorem, we should be able to remove the requirement on the additional Neumann boundary data to have a unique reconstruction of γ.
We have a few future directions in mind to continue the investigation from the perspective of practical applications. First, our mathematical results are mainly based on the assumption that the incident wave, that is, the Dirichlet boundary condition in system (<ref>), is weak since this is the case where we can establish the well-posedness of the mathematical model. This assumption, however, severely limits the applicability of the analysis for practical applications as one needs to have a sufficiently strong boundary source to generate strong second-harmonic waves in order to see its impact on the data used for inversion. Second, the linearization method requires access to a sequence of datasets generated from -dependent boundary source. This is a large amount of data. It would be interesting to see if our uniqueness and stability results can be reproduced for a finite number of measurements. Third, it would be of great interest to see if one can perform a similar analysis on the same inverse problem to the Maxwell model of second-harmonic generation, such as the model introduced in <cit.>. In fact, the linearization machinery for the Maxwell model has already been built in <cit.>. However, it is not obvious whether or not our results can be generalized to the Maxwell model with the same type of data in a straightforward way.
§ ACKNOWLEDGMENTS
This work is partially supported by the National Science Foundation through grants DMS-1913309 and DMS-1937254.
§ WELL-POSEDNESS OF SYSTEM
In this appendix, we establish the well-posedness of the boundary value problem (<ref>) for sufficiently small boundary illuminations g and h using a standard contraction mapping theorem argument.
We begin by recording a result on the well-posedness of the Helmholtz problem (<ref>).
Let α∈ (0,1), q,f∈^0,α(Ω; ), and g∈^2,α(∂Ω; ). If q >0, then the boundary value problem
[ Δ u + qu = f, Ω; u = g, ∂Ω ]
has a unique solution u∈^2,α(Ω;). Moreover, there exists a constant C=C(α,Ω,q) such that the following Schauder estimate holds:
u_^2,α(Ω)≤ C(f_^0,α(Ω) + g_^2,α(∂Ω)) .
The first task is to employ an energy method to show that (<ref>) has at most one solution u∈^2,α(Ω;). To this end, suppose that u∈^2,α(Ω;) solves the homogeneous problem
[ Δ u + qu = 0, Ω; u = 0, ∂Ω . ]
Multiplying both sides of the PDE by u^* and integrating over Ω results in
∫_Ω -|∇ u|^2 + q|u|^2 d = 0 ,
whereupon taking imaginary parts yields ∫_Ω q |u|^2 d = 0. The assumption q > 0 then leads to u≡ 0, as desired. This completes the proof of uniqueness.
Now that we have shown uniqueness, the existence of a solution u∈^2,α(Ω;) to the elliptic problem (<ref>) follows from the Fredholm alternative: see <cit.> (which applies to elliptic operators with not only real-valued but also complex-valued coefficients).
Finally, from <cit.> we have the Schauder estimate
u_^2,α(Ω)≤ C(f_^0,α(Ω) + g_^2,α(∂Ω)+u_^0(Ω)) .
On account of the problem having a unique solution in ^2,α(Ω;), the last term u_^0(Ω) may be dropped (see Remark 2 following <cit.>) to arrive at the desired (<ref>).
The proof is a standard argument based on the Banach fixed point theorem.
Before proceeding, we establish some notation. Let us define q_1 := k^2(1+η) + ikσ and q_2 := (2k)^2(1+η) + i2kσ for the sake of brevity of notation. If X and Y are two metric spaces, we shall equip the Cartesian product X× Y with any of the standard metrics, say the metric d_X× Y((x_1,y_1),(x_2,y_2)) := d_X(x_1,x_2) + d_Y(y_1,y_2). If X and Y are both complete, then so is X× Y. Finally, for δ>0 define the complete metric space
_δ := {f∈^2,α(Ω): f_^2,α(Ω)≤δ} .
To start, let ϵ>0, δ>0 and let g,h∈^2,α(∂Ω;) with g_^2,α(∂Ω)<ϵ and h_^2,α(∂Ω)<ϵ. We shall determine how small ϵ and δ need to be later.
In order to formulate the problem in terms of fixed points, define the operator L^-1: ^0,α(Ω) ×^0,α(Ω) →^2,α(Ω) ×^2,α(Ω) by setting L^-1(f_1,f_2) as the unique solution (U,V)∈^2,α(Ω)×^2,α(Ω) to the problem
[ Δ U+ q_1U = f_1, Ω; Δ V+ q_2V = f_2, Ω; U = g, V = h, ∂Ω . ]
Then a solution (u,v)∈_δ×_δ to (<ref>) is precisely the same as a fixed point in _δ×_δ of the operator T defined by
T(ϕ_1,ϕ_2) := L^-1(-k^2γϕ_1^* ϕ_2, -(2k)^2γϕ_1^2) .
It remains to show that for sufficiently small ϵ>0 and δ>0,
* T is a well-defined operator from _δ×_δ to itself, and
* T is a contraction on _δ×_δ.
In order to perform the next several calculations, recall that ^0,α(Ω) is a Banach algebra, meaning that
fg_^0,α(Ω)≤f_^0,α(Ω)g_^0,α(Ω)
for all f,g∈^0,α(Ω).
Proof of (i). For all (ϕ_1,ϕ_2)∈_δ×_δ, we compute
-k^2γϕ_1^* ϕ_2_^0,α(Ω)≤ Cϕ_1_^0,α(Ω)ϕ_2_^0,α(Ω)≤ Cϕ_1_^2,α(Ω)ϕ_2_^2,α(Ω)≤ Cδ^2
and similarly
-(2k)^2γϕ_1^2_^0,α(Ω)≤ Cϕ_1_^0,α(Ω) ^2
≤ Cϕ_1_^2,α(Ω) ^2
≤ Cδ^2
Let (U,V) := T(ϕ_1,ϕ_2). Combining the above estimates with the Schauder estimate (<ref>) for the Helmholtz equation then leads to
U_^2,α(Ω)≤ C(-k^2γϕ_1^* ϕ_2_^0,α(Ω) + g_^2,α(∂Ω))
≤ C(δ^2 + ϵ)
and
V_^2,α(Ω)≤ C(-(2k)^2γϕ_1^2_^0,α(Ω) + h_^2,α(∂Ω))
≤ C(δ^2 + ϵ).
We can force these quantities to be less than δ by choosing ϵ and δ sufficiently small. This implies that T(ϕ_1,ϕ_2)∈_δ×_δ as desired.
Proof of (ii). Let (ϕ_1,ϕ_2), (ϕ_1',ϕ_2')∈_δ×_δ. Then we compute
-k^2γϕ_1^*ϕ_2 - (-k^2γ (ϕ_1')^*ϕ_2')_^0,α(Ω) ≤ Cϕ_1^*ϕ_2 - (ϕ_1')^*ϕ_2'_^0,α(Ω)
= Cϕ_1^*(ϕ_2-ϕ_2') + (ϕ_1^*-(ϕ_1')^*)ϕ_2'_^0,α(Ω)
≤ C(ϕ_1_^0,α(Ω)ϕ_2-ϕ_2'_^0,α(Ω) + ϕ_1-ϕ_1'_^0,α(Ω)ϕ_2'_^0,α(Ω))
≤ C(ϕ_1_^2,α(Ω)ϕ_2-ϕ_2'_^2,α(Ω) + ϕ_1-ϕ_1'_^2,α(Ω)ϕ_2'_^2,α(Ω))
≤ Cδ(ϕ_1,ϕ_2) - (ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω)
and similarly
-(2k)^2γϕ_1^2 - (-(2k)^2γ(ϕ_1')^2)_^0,α(Ω) ≤ Cϕ_1^2-(ϕ_1')^2_^0,α(Ω)
≤ Cϕ_1-ϕ_1'_^0,α(Ω)ϕ_1+ϕ_1'_^0,α(Ω)
≤ Cδϕ_1-ϕ_1'_^2,α(Ω)
≤ Cδ(ϕ_1,ϕ_2) - (ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω).
Let (U,V) := T(ϕ_1,ϕ_2) and (U',V') := T(ϕ_1',ϕ_2'). Then U-U' and V-V' satisfy
[ Δ (U-U')+ q_1(U-U') = -k^2γϕ_1^*ϕ_2 - (-k^2γ (ϕ_1')^*ϕ_2'), Ω; Δ (V-V')+ q_2(V-V') = -(2k)^2γϕ_1^2 - (-(2k)^2γ(ϕ_1')^2), Ω; U-U' = 0, V-V' = 0, ∂Ω . ]
Combining the above estimates with the Schauder estimate (<ref>) for the Helmholtz equation then leads to
U-U'_^2,α(Ω)≤ C-k^2γϕ_1^*ϕ_2 - (-k^2γ (ϕ_1')^*ϕ_2')_^0,α(Ω)≤ Cδ(ϕ_1,ϕ_2) - (ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω)
and
V-V'_^2,α(Ω)≤ C-(2k)^2γϕ_1^2 - (-(2k)^2γ(ϕ_1')^2)_^0,α(Ω)≤ Cδ(ϕ_1,ϕ_2) - (ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω).
We conclude that
T(ϕ_1,ϕ_2) - T(ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω) = U-U'_^2,α(Ω) + V-V'_^2,α(Ω)
≤ Cδ(ϕ_1,ϕ_2) - (ϕ_1',ϕ_2')_^2,α(Ω)×^2,α(Ω).
The factor Cδ can be made strictly less than 1 when δ is sufficiently small. This makes T into a contraction, as desired.
Having proved that T is a contraction on the complete metric space _δ×_δ, the Banach fixed point theorem guarantees that there exists a unique (u,v)∈_δ×_δ such that T(u,v) = (u,v). As discussed earlier, this is equivalent to saying that there exists a unique (u,v)∈_δ×_δ satisfying the boundary value problem (<ref>). This completes the proof of the first part of the theorem.
Proof of the estimates (<ref>). We perform a calculation similar to those in the proof of (i) to obtain
u_^2,α(Ω) ≤ C(-k^2γ u^* v_^0,α(Ω) + g_^2,α(∂Ω))
≤ C(u_^0,α(Ω)v_^0,α(Ω) + g_^2,α(∂Ω))
≤ C(u_^2,α(Ω)δ + g_^2,α(∂Ω))
When δ is sufficiently small, this implies that
u_^2,α(Ω)≤ Cg_^2,α(∂Ω) ,
as desired. To get the estimate for v, we calculate
v_^2,α(Ω) ≤ C(-(2k)^2 γ u^2_^0,α(Ω) + h_^2,α(∂Ω))
≤ C(u_^0,α(∂Ω)^2 + h_^2,α(∂Ω))
≤ C(δu_^2,α(Ω) + h_^2,α(∂Ω))
≤ C(δg_^2,α(∂Ω) + h_^2,α(∂Ω))
≤ C(g_^2,α(∂Ω) + h_^2,α(∂Ω)) ,
as desired. The proof is complete.
§ DIFFERENTIABILITY RESULT FOR LINEARIZATION
We provide here the mathematical justification of the linearization process we outlined in Section <ref>. More precisely, we prove Theorem <ref> by showing that (u_, v_) and therefore H_ are differentiable with respect to .
Let us define q_1 := k^2(1+η) + ikσ and q_2 := (2k)^2(1+η) + i2kσ to ease notation.
To start the proof, let u^(1), v^(1), u^(2), v^(2) denote the unique functions in ^2,α(Ω;) which satisfy equations (<ref>) and (<ref>). That is,
[ Δu^(1)+ q_1u^(1) = 0, Ω; Δv^(1)+ q_2v^(1) = 0, Ω; u^(1) = g_1, v^(1) = h_1, ∂Ω ]
and
[ Δu^(2)+ q_1u^(2) = -2k^2γu^(1)*v^(1), Ω; Δv^(2)+ q_2v^(2) = -2(2k)^2γ (u^(1))^2, Ω; u^(2)= g_2, v^(2) = h_2, ∂Ω . ]
(Note that the existence and uniqueness of these functions is guaranteed by Theorem <ref>.) Define now the “remainder" terms
μ_ := u_ - u^(1) - 1/2^2u^(2),
ν_ := v_ - v^(1) - 1/2^2v^(2).
We wish to show that μ_ and ν_ are in a certain sense “o(^2)" as → 0. This will be accomplished in two rounds of estimates on μ_, ν_, u_ and v_.
Round 1 estimates. We begin by using the linearity of the operators Δ + q_1 and Δ + q_2 to find that
[ Δμ_ + q_1μ_ = -k^2γ [u_^*v_ - ^2 u^(1)*v^(1)], Ω; Δν_ + q_2ν_ = -(2k)^2γ [u_^2 - ^2(u^(1))^2], Ω; μ_ = 0, ν_ = 0, ∂Ω . ]
To obtain control on the size of the right hand sides, we utilize the well-posedness result Theorem <ref> to see that
u__^0,α(Ω)≤u__^2,α(Ω)≤ C( g_1 + 1/2^2 g_2_^2,α(∂Ω) + h_1 + 1/2^2 h_2_^2,α(∂Ω)) ≤ C ,
and similarly for v_. Here, C=C(α,Ω,η,σ,γ,g_1,g_2) is a constant not depending on . We can write these bounds succinctly as
u_ = _^0,α(Ω)(), v_ = _^0,α(Ω)().
In order to perform the next several calculations, recall that ^0,α(Ω) is a Banach algebra, meaning that
fg_^0,α(Ω)≤f_^0,α(Ω)g_^0,α(Ω)
for all f,g∈^0,α(Ω). With the help of this property, we plug the bounds (<ref>) into the right hand sides of (<ref>) to discover that
-k^2γ [u_^*v_ - ^2 u^(1)*v^(1)] = _^0,α(Ω)(^2), -(2k)^2γ [u_^2 - ^2(u^(1))^2] = _^0,α(Ω)(^2).
The Schauder estimate (<ref>) for the Helmholtz equation applied to (<ref>) then gives
μ_ = _^2,α(Ω)(^2), ν_ = _^2,α(Ω)(^2),
and in particular
μ_ = _^0,α(Ω)(^2), ν_ = _^0,α(Ω)(^2).
Round 2 estimates. Using the estimates from Round 1 and recalling the definition (<ref>) of the remainder terms μ_ and ν_, we can now refine the bounds (<ref>) on the right hand sides of (<ref>):
-k^2γ [u_^*v_ - ^2 u^(1)*v^(1)]
= -k^2γ[(u^(1)* + 1/2^2u^(2)*+_^0,α(Ω)(^2))(v^(1) + 1/2^2v^(2)+_^0,α(Ω)(^2)) - ^2 u^(1)*v^(1)]
= _^0,α(Ω)(^3)
and
-(2k)^2γ [u_^2 - ^2(u^(1))^2] = -(2k)^2γ[(u^(1) + 1/2^2u^(2) + _^0,α(Ω)(^2))^2 - ^2(u^(1))^2] = _^0,α(Ω)(^3).
With these improved bounds in hand, we again apply the Schauder estimate (<ref>) to (<ref>) to obtain the following bounds for the remainder terms μ_ and ν_:
μ_ = _^0,α(Ω)(^3), ν_ = _^0,α(Ω)(^3).
Note that this is a refinement over the previous remainder bounds (<ref>). This concludes the Round 2 estimates.
Therefore from the definition (<ref>) of μ_ and ν_ we conclude that
u_ = u^(1) + 1/2^2u^(2) + _^0,α(Ω)(^3),
v_ = v^(1) + 1/2^2v^(2) + _^0,α(Ω)(^3),
and in particular for each ∈Ω we have the pointwise estimates
u_() = u^(1)() + 1/2^2u^(2)() + (^3),
v_() = v^(1)() + 1/2^2v^(2)() + (^3),
as → 0.
Asymptotic expansion of H_. To finish the proof, for each ∈Ω we compute:
H_() = Γ()σ()[u_()u^*_() + v_()v^*_()]
=Γ()σ()[(u^(1)() + 1/2^2u^(2)() + (^3))(u^(1)*() + 1/2^2u^(2)*() + (^3))
+ (v^(1)() + 1/2^2v^(2)() + (^3))(v^(1)*() + 1/2^2v^(2)*() + (^3)) ]
= H^(1)() + 1/2^2 H^(2)() + 1/6^3 H^(3)() + (^4),
where
H^(1) := 0,
H^(2) :=2Γσ (u^(1)*u^(1) + v^(1)*v^(1)),
H^(3) := 3Γσ(u^(1)*u^(2)+u^(1)u^(2)*+v^(1)*v^(2)+v^(1)v^(2)*).
These are precisely the equations (<ref>), (<ref>), and (<ref>), respectively. This completes the proof.
§ ELLIPTIC SYSTEM THEORY
For the sake of completeness, in this appendix we review the elliptic system theory that appears in the proof of Theorem <ref>. We mostly follow the presentation of <cit.>.
Let M≥ N be positive integers and consider the following system of M partial differential equations in N unknown functions v_1,…, v_N defined on an open set Ω:
(,D) v = Ω.
Here v=(v_1,…, v_N), D=(∂_x_1,…,∂_x_n), and (,D) is an M× N matrix linear partial differential operator. That is, for each 1≤ i≤ M and 1≤ j≤ N, the entry _ij(,D) is a linear partial differential operator, and the above matrix equation means that
∑_j=1^N _ij(,D)v_j = _i, 1≤ i≤ M.
To each row 1≤ i≤ M let us now associate an integer s_i, and to each column 1≤ j≤ N let us associate an integer t_j. Let us choose the numbers in such a way that the order of the partial differential operator _ij(,D) is no greater than s_i+t_j. (If s_i+t_j<0, then we require that _ij(,D)=0.)
The principal part _0(,D) of (,D) is defined as the M× N matrix linear partial differential operator such that (_0)_ij(,D) consists of the terms in _ij(,D) of order exactly s_i+t_j.
The matrix partial differential operator is called elliptic if such Douglis-Nirenberg numbers (s_i)_1≤ i≤ M and (t_j)_1≤ j≤ N exist, and the matrix _0(,) has full rank N for each ∈Ω and ∈^n-1. (The matrix _0(,) is called the symbol of the operator _0.)
We now summarize the parts of <cit.> that are relevant for the proof of Theorem <ref>. Assume from now on that is an elliptic operator with continuous coefficients and Douglis-Nirenberg numbers (s_i)_1≤ i≤ M and (t_j)_1≤ j≤ N such that
[ s_i = 0, 1≤ i≤ M,; t_j = τ, 1≤ j≤ N, ]
for some positive integer τ. In the following, we will consider the Dirichlet boundary value problem
[ (,D)v = Ω,; (ν)^q v_j = ϕ_qj ∂Ω, 0≤ q≤τ-1, 1≤ j≤ N. ]
For this boundary value problem, we have the following estimate.
<cit.> Let p>1 and ℓ≥ 0. Assume that _i∈ W^ℓ,p(Ω) for all 1≤ i≤ M and ϕ_qj∈ W^ℓ+τ-q-1/p,p(∂Ω) for all 0≤ q≤τ-1 and 1≤ j≤ N. Also, assume that all the coefficients in are in ^ℓ(Ω) <cit.>. Then the following elliptic regularity estimate holds for the boundary value problem (<ref>):
∑_j=1^Nv_j_W^ℓ+τ,p(Ω) ≤ C(∑_i=1^M _i_W^ℓ,p(Ω) + ∑_q,jϕ_qj_W^ℓ+τ-q-1/p,p(∂Ω)) + C_2∑_j=1^N v_j_L^p(Ω)
The following result on uniqueness of solutions in sufficiently small domains is used in the proof of Theorem <ref>.
<cit.> Let _0∈Ω. Then there exists ϵ>0 such that for every small domain Ω'⊂ B(_0,ϵ), the only solution to the homogeneous Dirichlet boundary value problem
[ (,D)v = 0 Ω',; (ν)^q v_j = 0 ∂Ω', 0≤ q≤τ-1, 1≤ j≤ N. ]
in Ω' is the trivial solution v=0. In particular, (<ref>) holds with C_2=0.
§ DERIVATION OF THE WAVE PROPAGATION MODEL
E(t,x) = 2[E^ω(x)e^-iω t + E^2ω(x)e^-i2ω t],
H(t,x) = 2[H^ω(x)e^-iω t + H^2ω(x)e^-i2ω t]
Assuming the second-order nonlinear susceptibility 3-tensor χ_2 is isotropic (i.e. vector-valued), the macroscopic Maxwell's equations give
∇× E^ω - iωμ H^ω = 0,
∇× E^2ω - i2ωμ H^2ω = 0,
∇× H^ω + iωε E^ω - σ E^ω + iωχ_2 E^ω*· E^2ω = 0,
∇× H^2ω + i2ωε E^2ω - σ E^2ω + i2ωχ_2 E^ω· E^ω = 0,
where ε(x) and μ(x) are the electric permittivity and magnetic permeability of the material, respectively. We will assume henceforth that μ(x)=μ is constant and known, which is almost always the case in practice. Eliminating H^ω and H^2ω by plugging the first two equations into the latter two equations produces the system
-∇×∇× E^ω + ω^2εμ E^ω + iωμσ E^ω + ω^2μχ_2 E^ω*· E^2ω = 0,
-∇×∇× E^2ω + (2ω)^2εμ E^2ω + i2ωμσ E^2ω + (2ω)^2μχ_2 E^ω· E^ω = 0.
Formal scalar approximation:
Δ u+ ω^2εμ u +iωμσ u = -ω^2 μχ_2 u^* v,
Δ v+ (2ω)^2εμ v +i2ωμσ v = -(2ω)^2 μχ_2 u^2.
§ REFERENCES
AgDoNi-CPAM59
S. Agmon, A. Douglis, and L. Nirenberg, Estimates near the boundary
for solutions of elliptic partial differential equations satisfying general
boundary conditions. I, Comm. Pure Appl. Math., 12 (1959), pp. 623–727.
AkBeDaElLiMi-JIIP17
H. Akhouayri, M. Bergounioux, A. Da Silva, P. Elbau, A. Litman, and
L. Mindrinos, Quantitative thermoacoustic tomography with microwaves
sources, J. Inverse Ill-Posed Probl., 25 (2017), pp. 703–717.
JeEl-IP20
H. Al Jebawy and A. El Badia, Direct algorithm for reconstructing
small absorbers in thermoacoustic tomography problem from a single data,
Inverse Problems, 36 (2020), p. 065010.
Alberti-arXiv22
G. S. Alberti, Non-zero constraints in elliptic PDE with random
boundary values and applications to hybrid inverse problems,
arXiv:2205.00994, (2022).
Ambrosio-IM04
L. Ambrosio, Transport equation and Cauchy problem for BV vector
fields, Inventiones Mathematicae, 158 (2004), p. 227.
AmGaJiNg-ARMA12
H. Ammari, J. Garnier, W. Jing, and L. Nguyen, Quantitative
thermo-acoustic imaging: An exact reconstruction formula, Submitted to
Archive for Rational Mechanics and Analysis, (2011).
AsZh-JDE21
Y. M. Assylbekov and T. Zhou, Inverse problems for nonlinear
maxwell's equations with second harmonic generation, Journal of Differential
Equations, 296 (2021), pp. 148–169.
Bal-CM13
G. Bal, Hybrid inverse problems and redundant systems of partial
differential equations, in Inverse Problems and Applications, P. Stefanov,
A. Vasy, and M. Zworski, eds., vol. 615 of Contemporary Mathematics, American
Mathematical Society, 2013, pp. 15–48.
BaRe-IP11
G. Bal and K. Ren, Multi-source quantitative PAT in diffusive regime, Inverse Problems, 27 (2011), 075003.
BaRe-IP12
G. Bal and K. Ren, On multi-spectral quantitative photoacoustic tomography in diffusive regime, Inverse Problems, 28 (2012), 025010.
BaReUhZh-IP11
G. Bal, K. Ren, G. Uhlmann, and T. Zhou, Quantitative thermo-acoustics and related problems, Inverse Problems, 27 (2011), 055007.
BaZh-IP14
G. Bal and T. Zhou, Hybrid inverse problems for a system of Maxwell’s equations, Inverse Problems, 30 (2014), 055013.
BeBrPr-IP19
M. Bergounioux, É. Bretin, and Y. Privat, How to position
sensors in thermo-acoustic tomography, Inverse Problems, 35 (2019),
p. 074003.
BoLiMaSc-IP17
L. Borcea, W. Li, A. Mamonov, and J. C. Schotland, Second-harmonic imaging in random media, Inverse Problems, 33 (2017), 065004.
BoWo-Book99
M. Born and E. Wolf, Principles of Optics: Electromagnetic
Theory of Propagation, Interference and Diffraction of Light,
Cambridge University Press, New York, 1999.
BoCr-SIAM06
F. Bouchut and G. Crippa, Uniqueness, renormalization and smooth
approximations for linear transport equations, SIAM J. Math. Anal., 38
(2006), pp. 1316–1328.
Boyd-Book20
R. W. Boyd, Nonlinear optics, Academic press, 2020.
Choulli-arXiv2022
M. Choulli, Stable determination of the nonlinear term in a
quasilinear elliptic equation by boundary measurements, arXiv:2205.16000,
(2022).
CoLe-DMJ02
F. Colombini and N. Lerner, Uniqueness of continuous solutions for
BV vector fields, Duke Math. J., 111 (2002), pp. 357–384.
CrLiSh-M2AS23
M. Cristofol, S. Li, and Y. Shang, Carleman estimates and some
inverse problems for the coupled quantitative thermoacoustic equations by
partial boundary layer data. part ii: Some inverse problems, Mathematical
Methods in the Applied Sciences, (2023).
DiLi-AM89
R. J. DiPerna and P.-L. Lions, On the Cauchy problem for
Boltzmann equations: global existence and weak stability, Ann. Math., 130
(1989), pp. 321–366.
FeLiLi-arXiv21
A. Feizmohammadi, T. Liimatainen, and Y.-H. Lin, An inverse problem
for a semilinear elliptic equation on conformally transversally anisotropic
manifolds, arXiv:2112.08305, (2021).
FrTeGaBlNeMa-JOSA15
J. Francés, J. Tervo, S. Gallego, S. Bleda, C. Neipp, and
A. Márquez, Split-field finite-difference time-domain method for
second-harmonic generation in two-dimensionally periodic structures, J. Opt.
Soc. Am. B, 32 (2015), pp. 664–669.
HaLi-NA23
B. Harrach and Y.-H. Lin, Simultaneous recovery of piecewise
analytic coefficients in a semilinear elliptic equation, Nonlinear Analysis,
228 (2023), p. 113188.
Isakov-ARMA93
V. Isakov, On uniqueness in inverse problems for semilinear
parabolic equations, Arch. Rational Mech. Anal., 124 (1993), pp. 1–12.
Isakov-Book06
V. Isakov, Inverse Problems for Partial Differential Equations, Springer-Verlag, New York, second ed., 2006.
Kian-Nonlinearity23
Y. Kian, Lipschitz and hölder stable determination of nonlinear
terms for elliptic equations, Nonlinearity, 36 (2023), p. 1302.
KrUh-arXiv19
K. Krupchyk and G. Uhlmann, Partial data inverse problems for
semilinear elliptic equations with gradient nonlinearities,
arXiv:1909.08122v1, (2019).
KrUh-PAMS20
K. Krupchyk and G. Uhlmann, A remark on partial data inverse problems for semilinear elliptic equations, Proceedings of the AMS, (2019).
LaLi-NA22
R.-Y. Lai and Y.-H. Lin, Inverse problems for fractional semilinear
elliptic equations, Nonlinear Analysis, 216 (2022), p. 112699.
LaReZh-SIAM22
R.-Y. Lai, K. Ren, and T. Zhou, Inverse transport and diffusion problems in photoacoustic imaging with nonlinear absorption, SIAM J. Appl. Math., 82 (2022), pp. 602–624. arXiv:2107.08118.
LaLiLiSa-JMPA21
M. Lassas, T. Liimatainen, Y.-H. Lin, and M. Salo, Inverse problems
for elliptic equations with power type nonlinearities, Journal de
mathématiques pures et appliquées, 145 (2021), pp. 44–82.
LuZh-arXiv23
S. Lu and J. Zhai, Increasing stability of a linearized inverse
boundary value problem for a nonlinear schrödinger equation on
transversally anisotropic manifolds, arXiv:2301.07875, (2023).
Solonnikov-JSM73
V. A. Solonnikov, Overdetermined elliptic boundary-value problems,
J. Sov. Math., 1 (1973), pp. 477–512.
SzKi-JOSA18
T. Szarvas and Z. Kis, Numerical simulation of nonlinear second
harmonic wave generation by the finite difference frequency domain method,
J. Opt. Soc. Am. B, 35 (2018), pp. 731–740.
UhZh-JMPA21
G. Uhlmann and J. Zhai, On an inverse boundary value problem for a
nonlinear elastic wave equation, Journal de Mathématiques Pures et
Appliquées, 153 (2021), pp. 114–136.
YuYa-JOSA13
J. Yuan and J. Yang, Computational design for efficient
second-harmonic generation in nonlinear photonic crystals, J. Opt. Soc. Am.
B, 30 (2013), pp. 205–210.
ZeHoLiKoMo-PRB09
Y. Zeng, W. Hoyer, J. Liu, S. W. Koch, and J. V. Moloney, Classical theory for second-harmonic generation from metallic nanoparticles, Phys. Rev. B, 79 (2009), 235109.
A Note on Ising Network Analysis with Missing Data
Siliang Zhang, Yunxiao Chen
===================================================================
The Ising model has become a popular psychometric model for analyzing item response data. The statistical inference of the Ising model is typically carried out via a pseudo-likelihood, as the standard likelihood approach suffers from a high computational cost when there are many variables (i.e., items). Unfortunately, the presence of missing values can hinder the use of
pseudo-likelihood, and a listwise deletion approach for missing data treatment may introduce a substantial bias into the estimation and sometimes yield misleading interpretations. This paper proposes a conditional Bayesian framework for Ising network analysis with missing data, which integrates a pseudo-likelihood approach with iterative data imputation.
An asymptotic theory is established for the method.
Furthermore, a computationally efficient Pólya-Gamma data augmentation procedure is proposed to streamline the
sampling of model parameters. The method's performance is shown through simulations and a real-world application to data on major depressive and generalized anxiety disorders from the National Epidemiological Survey on Alcohol and Related Conditions (NESARC).
KEYWORDS: Ising model, iterative imputation, full conditional specification, network psychometrics, mental health disorders, major depressive disorder, generalized anxiety disorder
§ INTRODUCTION
Recent years have witnessed the emergence of network psychometrics <cit.>, a family of statistical graphical models and related inference procedures, for analyzing and interpreting the dependence structure in psychometric data. These models embed psychometric items as nodes in an undirected or directed network (i.e., graph) and visualize their interrelationships through the network edges, which represent certain probabilistic conditional dependencies. Network psychometric methods concern the learning of the network structure. They have been developed under various
settings, including undirected graphical models for cross-sectional data <cit.>, directed networks for longitudinal data <cit.>, and extended networks with latent variables for time-series data or panel data <cit.>. These methods have received wide applications in education <cit.>,
psychology <cit.>, and health sciences <cit.>.
Analyzing cross-sectional binary item response data with the Ising model <cit.> is common in network psychometric analysis. This analysis is typically performed based on a conditional likelihood <cit.> because the standard likelihood function
is computationally infeasible when many variables are involved. In this direction, Bayesian and frequentist methods have been developed, where sparsity-inducing priors or penalties are combined with the conditional likelihood for learning a sparse network structure <cit.>. In addition, the Ising model has been shown to be closely related to Item Response Theory (IRT) models <cit.>. The log-multiplicative association models <cit.>, which are special cases of the Ising model, can be used as IRT models and yield results very similar to those of IRT models. Furthermore, the Ising model and the conditional likelihood have been used for modeling the local dependence structure in locally dependent IRT models <cit.>.
Due to its construction, the conditional likelihood does not naturally handle data with missing values, despite the omnipresence of missing data in psychometric data.
To deal with missing values in an Ising network analysis, listwise deletion <cit.> and single imputation <cit.> are typically performed, which arguably may not be the best practice.
In particular, it is well-known that
listwise deletion is statistically inefficient and requires
the Missing Completely At Random (MCAR) assumption <cit.> to ensure consistent estimation.
Moreover, a naïve imputation procedure, such as mode imputation, likely introduces bias into parameter estimation. A sophisticated imputation procedure must be developed to ensure statistical validity and computational efficiency.
In this note, we propose an iterative procedure for learning an Ising network. The proposed procedure combines iterative imputation via Full Conditional Specification (FCS) <cit.> and Bayesian estimation of the Ising network. We show that the FCS leads to estimation consistency when the conditional models are chosen to take logistic forms. In terms of computation, we propose a joint Pólya-Gamma augmentation procedure by extending the
Pólya-Gamma augmentation procedure for logistic regression <cit.>. It allows us to efficiently sample parameters of the Ising model. Simulations are conducted to compare the proposed procedure with estimations based on the listwise deletion and single imputation. Finally, the proposed procedure and a complete-case analysis are applied to study the network of Major Depressive Disorder (MDD) and Generalised Anxiety Disorders (GAD) based on data from the National Epidemiological Survey on Alcohol and Related Conditions <cit.>. Both analyses suggest that the symptoms are densely connected within each mental health disorder while only loosely connected between the two disorders. However, the two methods estimate a strong edge to be of opposite signs, leading to substantially different interpretations. A close scrutiny of the item content and the data missingness mechanism suggests that the result from the proposed method is more sensible.
§ PROPOSED METHOD
§.§ Ising Model
Consider a respondent answering J binary items. Let 𝐘 = (Y_1, ..., Y_J)^⊤∈{0,1}^J be a binary random vector representing the respondent's responses. We say 𝐘 follows an Ising model if its probability mass function satisfies
P(𝐘 = 𝐲|𝐒) = 1/c(𝐒)exp[1/2𝐲^⊤𝐒𝐲] = 1/c(𝐒)exp[∑_j=1^J s_jjy_j/2+∑_j=1^J-1∑_k=j+1^Js_jky_jy_k],
where 𝐒 = (s_ij)_J× J is a J by J symmetric matrix that contains the parameters of the Ising model
and
c(𝐒) = ∑_𝐲∈{0,1}^Jexp[∑_j=1^J s_jjy_j/2+∑_j=1^J-1∑_k=j+1^Js_jky_jy_k]
is a normalizing constant. The parameter matrix 𝐒 encodes a network with the J items being the nodes. More specifically, an edge is present between nodes i and j if and only if the corresponding entry s_ij is nonzero.
If an edge exists between nodes i and j, then Y_i and Y_j are conditionally dependent given the rest of the variables. Otherwise, the two variables are conditionally independent.
In Ising network analysis, the goal is to estimate the parameter matrix 𝐒.
The standard likelihood function is computationally intensive when J is large, as it
requires computing the normalizing constant c(𝐒), which involves a summation over all 2^J response patterns. To address this computational issue, <cit.> proposed a conditional likelihood, which is obtained by aggregating the conditional distributions of Y_j given 𝐘_-j = (Y_1, ..., Y_j-1, Y_j+1, ..., Y_J)^⊤, for j=1, ..., J, where the conditional distribution of Y_j given 𝐘_-j takes a logistic regression form. More precisely, the conditional likelihood with one observation is defined as
p^*(𝐲|𝐒) = ∏_j=1^J p(y_j|𝐲_-j,𝐒) = ∏_j=1^J exp[(s_jj/2 + ∑_k≠ j s_jky_k)y_j]/(1+exp(s_jj/2 + ∑_k≠ j s_jky_k)).
A disadvantage of the conditional likelihood is that it requires a fully observed dataset because missing values cannot be straightforwardly marginalized out from (<ref>). In what follows, we discuss how missing data can be treated in the conditional likelihood.
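To make the computational contrast concrete, the following sketch (an illustrative Python snippet, not the implementation accompanying the paper; the function names are ours) evaluates the normalizing constant c(𝐒) by brute force for a small J, and evaluates the conditional (pseudo) log-likelihood for a fully observed data matrix, which avoids c(𝐒) entirely.

```python
import numpy as np
from itertools import product

def ising_log_normalizer(S):
    # Brute-force log c(S): feasible only for small J, since it sums over 2^J patterns.
    J = S.shape[0]
    terms = []
    for y in product([0, 1], repeat=J):
        y = np.array(y, dtype=float)
        terms.append(0.5 * y @ S @ y)
    return np.log(np.sum(np.exp(terms)))

def pseudo_loglik(S, Y):
    # Conditional (pseudo) log-likelihood: a sum of J logistic-regression terms per row,
    # with no normalizing constant required.
    n, J = Y.shape
    ll = 0.0
    for j in range(J):
        # Linear predictor s_jj / 2 + sum_{k != j} s_jk y_k for each observation.
        eta = S[j, j] / 2 + Y @ S[:, j] - Y[:, j] * S[j, j]
        ll += np.sum(Y[:, j] * eta - np.log1p(np.exp(eta)))
    return ll
```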
§.§ Proposed Method
Consider a dataset with N observations. Let Ω_j⊂{1, ..., N} be the subset of observations whose data on item j are missing. For each observation i and item j, y_ij denotes the observed response if i ∉Ω_j, and otherwise, y_ij is missing. Thus, the observed data include Ω_j and y_ij, for i∈{1, ..., N}∖Ω_j and j = 1, ..., J.
The proposed procedure iterates between two steps – (1) imputing
the missing values of y_ij for i ∈Ω_j, j=1, ..., J, achieved via a full conditional specification,
and (2) sampling the posterior distribution of 𝐒 given the most recently imputed data.
Let t be the current iteration number. Further, let
𝐲^(t-1)_i = (y_i1^(t-1), ..., y_iJ^(t-1))^⊤, i=1, ..., N, be the imputed data from the previous iteration, where y^(t-1)_ij = y_ij for i ∉Ω_j and y^(t-1)_ij is imputed in the (t-1)th iteration for i ∈Ω_j.
For the tth iteration, the imputation and sampling steps are described as follows.
Imputation. We initialize the imputation in the tth iteration by setting 𝐲_i^(t,0) = 𝐲_i^(t-1). Then, we run a loop over all the items, j = 1, ..., J. In step j of the loop, we impute y_ij for all i ∈Ω_j,
given the most recently imputed data, denoted by 𝐲^(t, j-1)_i, i = 1, ..., N. We then obtain 𝐲^(t, j)_i by updating 𝐲^(t, j-1)_i with the imputed values of y_ij.
The imputation of each variable j is based on the conditional distribution of Y_j given 𝐘_-j. Under the Ising model, this conditional distribution takes a logistic regression form. For computational reasons to be discussed in the sequel, we introduce an auxiliary parameter vector β_j = (β_j1, ..., β_jJ)^⊤ as the coefficients in this logistic regression, instead of directly using 𝐒 from the previous iteration to sample the missing y_ijs. Unlike the constraint s_ij = s_ji imposed on the symmetric matrix 𝐒, no constraints are imposed on β_j, j=1, ..., J, which makes the sampling computationally efficient; see the discussion in Section <ref>.
The imputation of variable j consists of the following two steps:
* Sample the auxiliary parameter vector β_j^(t) from the posterior distribution
p^(t,j)(β_j) ∝ π_j(β_j)∏_i=1^N exp[(β_jj/2+∑_k≠ jβ_jky_ik^(t,j-1))y_ij^(t,j-1)]/(1+exp(β_jj/2+∑_k≠ jβ_jky_ik^(t,j-1))),
where π_j(β_j) is the prior distribution for the auxiliary parameters β_j.
* Sample y_ij^(t) for each i ∈Ω_j from a Bernoulli distribution with success probability
exp(β_jj^(t)/2+∑_k≠ jβ_jk^(t)y_ik^(t,j-1))/(1+exp(β_jj^(t)/2+∑_k≠ jβ_jk^(t)y_ik^(t,j-1))).
After these two steps, we obtain 𝐲^(t, j)_i by updating the jth element of 𝐲^(t, j-1)_i with y_ij^(t), for i ∈Ω_j.
We emphasize that only the missing values are updated. For i ∉Ω_j, the jth element of 𝐲^(t, j)_i is always the observed value of y_ij. After the loop over all the items, we set 𝐲_i^(t) = 𝐲^(t, J)_i as the output from this imputation step.
Sampling 𝐒. Given the most recently imputed data 𝐲_i^(t), i=1, ..., N,
update 𝐒^(t) by sampling from the pseudo-posterior distribution
p(𝐒|𝐲_1^(t),…,𝐲_N^(t)) ∝ π(𝐒)∏_i=1^N p^*(𝐲_i^(t)|𝐒),
where π(𝐒) is the prior distribution for the Ising parameter matrix 𝐒 and recall that ∏_i=1^N p^*(𝐲_i^(t)|𝐒) is the conditional likelihood.
Figure <ref> visualizes the steps performed in the proposed method. Note that it is unnecessary to sample the parameter matrix 𝐒 during the burn-in period and in
every iteration after the burn-in period; thus, we employ a thinning step after the burn-in period. This is done to both decrease the computational cost and reduce the auto-correlation in the imputed data.
Moreover, we outline the proposed algorithm in Algorithm <ref>. The final estimate of 𝐒 is obtained by averaging all the 𝐒^(t) obtained after the burn-in period.
The computational details, including the sampling of auxiliary parameters and Ising parameter matrix and discussions of the computational complexity, are given in Section <ref>.
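For illustration, the sketch below shows one possible organization of the iteration described above; the helper samplers sample_beta_posterior and sample_S_posterior are hypothetical placeholders for the Pólya-Gamma updates discussed in the computational details, and all names are ours rather than part of the accompanying R package.

```python
import numpy as np

def run_chain(Y, missing_mask, T, T0, t0, sample_beta_posterior, sample_S_posterior, rng):
    """Illustrative organization of the proposed chain.

    Y            : (N, J) array with arbitrary initial fill-in for missing entries
    missing_mask : (N, J) boolean array, True where y_ij is missing
    """
    J = Y.shape[1]
    Y = Y.copy().astype(float)
    S_draws = []
    for t in range(1, T + 1):
        # Imputation step: loop over items, imputing missing entries one column at a time.
        for j in range(J):
            beta_j = sample_beta_posterior(j, Y)           # draw auxiliary coefficients
            idx = np.where(missing_mask[:, j])[0]
            eta = beta_j[j] / 2 + Y[idx] @ beta_j - Y[idx, j] * beta_j[j]
            p = 1.0 / (1.0 + np.exp(-eta))
            Y[idx, j] = rng.binomial(1, p)                 # Bernoulli imputation
        # Thinned sampling of S after the burn-in period.
        if t > T0 and t % t0 == 0:
            S_draws.append(sample_S_posterior(Y))
    return Y, np.mean(S_draws, axis=0)                     # final estimate: average of the draws
```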
§.§ Statistical Consistency
As our method is not a standard Bayesian inference procedure, we provide an asymptotic theory under the frequentist setting to justify its validity. In particular, we show that
the parameter 𝐒 sampled from the pseudo-posterior distribution converges to the true parameter 𝐒_0, under the assumptions that the Ising model is correctly specified and the data are Missing At Random <cit.>.
Consider one observation with a complete data vector 𝐘 = (Y_1, ..., Y_J)^⊤. Further, let 𝐙 = (Z_1, ..., Z_J)^⊤ be a vector of missing indicators, where Z_j = 1 if Y_j is observed and Z_j = 0 otherwise. We further let 𝐘_obs = {Y_j: Z_j = 1, j = 1, ..., J} and 𝐘_mis = {Y_j: Z_j = 0, j = 1, ..., J} be the observed and missing entries of 𝐘, respectively. Consider the joint distribution of the observable data (𝐘_obs, 𝐙), taking the form
P(𝐘_obs = 𝐲_obs, 𝐙 = 𝐳|𝐒, ϕ) = ∑_y_j:z_j=0( exp(𝐲^⊤𝐒𝐲/2)/c(𝐒)) q(𝐳|𝐲,ϕ),
where exp(𝐲^⊤𝐒𝐲/2)/c(𝐒) is the probability of 𝐘 = 𝐲 under the Ising model,
q(𝐳|𝐲,ϕ) denotes the conditional probability of 𝐙 = 𝐳 given 𝐘 = 𝐲, and ϕ denotes the unknown parameters of this distribution. The MAR assumption, also known as the ignorable missingness assumption, means that the conditional distribution q(𝐳|𝐲,ϕ) depends on 𝐲 only through the observed entries, i.e., q(𝐳|𝐲,ϕ) = q(𝐳|𝐲_obs,ϕ). In that case, (<ref>) can be factorized as
P(𝐘_obs = 𝐲_obs, 𝐙 = 𝐳|𝐒, ϕ) = q(𝐳|𝐲_obs,ϕ) ×(∑_y_j:z_j=0 exp(𝐲^⊤𝐒𝐲/2)/c(𝐒)).
Consequently, the inference of 𝐒 does not depend on the unknown distribution
q(𝐳|𝐲_obs,ϕ).
As shown in <cit.>, the MAR assumption, together with additional regularity conditions, ensures that the iterative imputation of the missing responses converges to the imputation distribution under a standard Bayesian procedure as the number of iterations and the sample size N go to infinity. A key to this convergence result is the compatibility of the conditional models in the imputation step – the logistic regression models are compatible with the Ising model as a joint distribution.
The validity of the imputed samples further ensures the consistency of the estimated Ising parameter matrix. We summarize this result in Theorem <ref>.
Assume the following assumptions hold: 1) the Markov chain for the missing data, generated by the iterative imputation algorithm (Algorithm <ref>), is positive Harris recurrent and thus admits a unique stationary distribution; 2) the missing data process is ignorable; 3) a regularity condition holds for the prior distributions of the Ising model parameters and auxiliary parameters, as detailed in Appendix <ref>.
Let π_N^*(𝐒) be the posterior density of 𝐒 implied by the stationary distribution of the proposed method. Given the true parameter matrix 𝐒_0 of the Ising model and any ε >0, π_N^*(𝐒) concentrates at 𝐒_0 in the sense that
∫_B_ε(𝐒_0)π_N^*(𝐒)d𝐒→ 1
in probability as N→∞, where B_ε(𝐒_0)={𝐒:‖𝐒-𝐒_0‖<ε} is the open ball of radius ε centered at 𝐒_0.
We note that the regularity condition on the prior distributions holds for the normal priors adopted in the current paper; see Section <ref> for the specification of the priors and Appendix <ref> for the verification of the condition under the normal priors.
§.§ Computational Details
In what follows, we discuss the specification of the prior distributions and the sampling of the auxiliary parameters β_j and the Ising model parameters 𝐒.
Sampling β_j.
We set independent mean-zero normal priors for the entries of β_j. For the intercept parameter β_jj, we use a weakly informative prior by setting the variance to 100. For the slope parameters β_jk, k≠ j, we set a more informative prior by setting the variance to 1, given that these parameters correspond to the off-diagonal entries of 𝐒, which are sparse and typically do not take extreme values.
The sampling of the auxiliary parameters β_j, following (<ref>), is essentially a standard Bayesian logistic regression problem. We achieve it by a Markov chain Monte Carlo (MCMC) sampler called the Pólya-Gamma sampler <cit.>.
To obtain β^(t)_j, this sampler starts with β^(t-1)_j from the previous step. It constructs an MCMC transition kernel by a data augmentation trick. More precisely, the following two steps are performed.
* Given β_j^(t-1), independently sample N augmentation variables, each from a Pólya-Gamma distribution <cit.>.
* Given the N augmentation variables, sample β_j^(t) from a J-variate normal distribution.
The details of these two steps are given in Appendix <ref>, including the forms of the Pólya-Gamma distributions and the mean and covariance matrix of the J-variate normal distribution. We choose the Pólya-Gamma sampler because it is very easy to construct and computationally efficient.
It is much easier to implement than Metropolis-Hastings samplers which often need tuning to achieve good performance.
We comment on the computational complexity of the sampling of β_j. The sampling from the Pólya-Gamma distribution has a complexity of O(NJ), and the sampling from the J-variate normal distribution has a complexity of O(NJ^2)+O(J^3). Consequently, a loop over all the β_j, j=1, ..., J, has a complexity of
O((N+J)J^3).
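As an illustration of a single update of β_j, the sketch below uses the gamma-sum representation of the PG(1, c) distribution, truncated to finitely many terms, in place of an exact Pólya-Gamma sampler; this truncation, the function names, and the NumPy implementation are our own simplifications, while the prior variances follow the specification above.

```python
import numpy as np

def rpolyagamma_1(c, rng, n_terms=200):
    # Truncated gamma-sum approximation of PG(1, c):
    # omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4 pi^2)), g_k ~ Gamma(1, 1).
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=1.0, scale=1.0, size=(c.shape[0], n_terms))
    denom = (k - 0.5) ** 2 + (c[:, None] ** 2) / (4.0 * np.pi ** 2)
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

def update_beta_j(j, Y, beta_j, rng, var_intercept=100.0, var_slope=1.0):
    """One Polya-Gamma augmented draw of beta_j for item j (illustrative sketch)."""
    N, J = Y.shape
    X = Y.copy().astype(float)
    X[:, j] = 0.5                          # the intercept enters as beta_jj / 2
    c = X @ beta_j                         # current linear predictors
    omega = rpolyagamma_1(c, rng)          # N augmentation variables
    kappa = Y[:, j] - 0.5
    prior_prec = np.full(J, 1.0 / var_slope)
    prior_prec[j] = 1.0 / var_intercept
    V = np.linalg.inv(X.T @ (X * omega[:, None]) + np.diag(prior_prec))
    m = V @ (X.T @ kappa)
    return rng.multivariate_normal(m, V)   # beta_j^(t) given the augmentation variables
```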
Sampling 𝐒. Since 𝐒 is a symmetric matrix, we reparameterize it by vectorizing its lower triangular part (including the diagonal entries). Specifically, the reparameterization is done by half-vectorization, denoted by 𝐬 = vech(𝐒) = (s_11, ..., s_J1, s_22, ..., s_J2, ..., s_JJ)^⊤∈ℝ^J(J+1)/2. It is easy to see that vech(·) is a one-to-one mapping between ℝ^J(J+1)/2 and J× J symmetric matrices. Therefore, we impose a prior distribution on 𝐬 and sample 𝐬^(t) in the tth iteration when 𝐒 is sampled. Then we let 𝐒^(t) = vech^-1(𝐬^(t)).
Recall that a thinning step is performed, and t_0 is the gap between two samples of 𝐒. Let t be a multiple of t_0 and 𝐬^(t-t_0) = vech(𝐒^(t-t_0)) be the previous value of 𝐬. The sampling of 𝐬^(t) is also achieved by a Pólya-Gamma sampler, which involves the following two steps, similar to the sampling of β_j.
* Given 𝐬^(t-t_0), independently sample NJ augmentation variables, each from a Pólya-Gamma distribution.
* Given the NJ augmentation variables, sample 𝐬^(t) from a J(J+1)/2-variate normal distribution.
The details of these two steps are given in Appendix <ref>. We note that the computational complexity of sampling the NJ augmentation variables is O(NJ^2), and that of sampling 𝐬^(t) is O(NJ^5)+O(J^6), resulting in an overall complexity of O((N+J)J^5). Comparing the complexities of the imputation and sampling steps, we notice that the latter is computationally much more intensive.
This is the reason why we choose to impute data by introducing the auxiliary parameters β_j rather than using the Ising network parameters 𝐒, so that the iterative imputation mixes much faster in terms of computation time. In addition, we only sample 𝐒 every t_0 iterations for a reasonably large t_0 to avoid a high computational cost and also reduce the auto-correlation between the imputed data.
We remark that <cit.> considered a similar Ising network analysis problem based on fully observed data, in which they proposed a Bayesian inference approach based on a spike-and-slab prior for learning 𝐒. Their Bayesian inference is also based on a Pólya-Gamma sampler. However, they combined Gibbs sampling with a Pólya-Gamma sampler, updating one parameter in 𝐒 at a time. This Gibbs scheme often mixes more slowly than the joint update of 𝐒 in the proposed method and, thus, is computationally less efficient. The proposed Pólya-Gamma sampler may be integrated into the framework of <cit.> to improve computational efficiency.
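For concreteness, the half-vectorization vech(·) and its inverse can be written as follows; this is a small utility sketch, and the exact ordering and implementation used by the authors may differ.

```python
import numpy as np

def vech(S):
    # Stack the lower-triangular part of a symmetric matrix, column by column:
    # (s_11, ..., s_J1, s_22, ..., s_J2, ..., s_JJ).
    J = S.shape[0]
    return np.concatenate([S[j:, j] for j in range(J)])

def vech_inv(s, J):
    # Rebuild the symmetric matrix from its half-vectorization.
    S = np.zeros((J, J))
    pos = 0
    for j in range(J):
        S[j:, j] = s[pos:pos + J - j]
        S[j, j:] = S[j:, j]
        pos += J - j
    return S
```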
§ NUMERICAL EXPERIMENTS
We illustrate the proposed method and show its power via two simulation studies and a real-world data application.
§.§ Simulation
Study I: A six-node case.
We generate data from an Ising model with J=6 variables. Missing values are generated under an MAR setting that is not MCAR. The proposed method is then compared with Bayesian inference based on (1) listwise deletion and (2) a single imputation, where the single imputation is based on the imputed data from the Tth iteration of Algorithm <ref>, recalling that T_0 is the burn-in size.
We configure the true parameter matrix 𝐒_0 as follows. Since 𝐒_0 is a symmetric matrix, we only need to specify its upper triangular entries and its diagonal entries. For the upper triangular entries (i.e., s_jl, j<l),
we randomly assign 50% of them to zero to introduce a moderately sparse setting. In addition, the nonzero parameters are then generated by
sampling from a uniform distribution over the set [-1, -0.4] ∪ [0.4, 1]. The intercept parameters s_jj,j=1,…, J are set to zero.
The true parameter values are given in
Appendix <ref>.
Missing data are simulated by masking particular elements under an MAR mechanism. In particular, we have z_i6=1, so that the sixth variable is always observed. We further allow the missingness probabilities of the first five variables (i.e., z_ij=0,j=1,…,5) to depend on the values of y_i6.
The specific settings on p(z_ij=0| y_i6),j=1,…, 5 are detailed in Appendix <ref>.
Data are generated following the aforementioned Ising model and MAR mechanism for four different sample sizes, N = 1,000, 2,000, 4,000, and 8,000, respectively.
For each sample size, 50 independent replications are created.
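As an illustration of this data-generating process, the sketch below draws responses from an Ising model by single-site Gibbs sampling and then applies an MAR mask in which all but the last item may be missing with a probability that depends on y_i6; the probabilities in p_miss are hypothetical placeholders, since the exact values used in the study are given in the appendix of the paper, and successive Gibbs states are kept without thinning for simplicity.

```python
import numpy as np

def gibbs_sample_ising(S, n, burn=500, rng=None):
    """Draw n response vectors from the Ising model by single-site Gibbs (sketch)."""
    rng = rng or np.random.default_rng()
    J = S.shape[0]
    Y = np.zeros((n, J))
    y = rng.binomial(1, 0.5, size=J).astype(float)
    for it in range(burn + n):
        for j in range(J):
            eta = S[j, j] / 2 + y @ S[:, j] - y[j] * S[j, j]
            y[j] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
        if it >= burn:
            Y[it - burn] = y
    return Y

def apply_mar_mask(Y, rng, p_miss=(0.3, 0.6)):
    # Hypothetical MAR mechanism: each of the first J-1 items is missing with probability
    # p_miss[0] if the last item equals 0 and p_miss[1] if it equals 1; the last item
    # (item 6 in the six-node study) is always observed.
    n, J = Y.shape
    mask = np.zeros((n, J), dtype=bool)
    probs = np.where(Y[:, -1] == 1, p_miss[1], p_miss[0])
    for j in range(J - 1):
        mask[:, j] = rng.binomial(1, probs).astype(bool)
    return mask   # True marks a masked (missing) entry
```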
Three methods are compared – the proposed method, Bayesian inference with a single imputation, and Bayesian inference based on complete cases from listwise deletion.
The Bayesian inference for complete data is performed by sampling parameters from the posterior implied by the pseudo-likelihood and a normal prior, which is a special case of the proposed method without iterative imputation steps.
All these methods shared the same initial values s_jl^(0)∼ U(-0.1,0.1),1≤ j≤ l≤ J.
For our proposed method, we set the length of the Markov chain Monte Carlo (MCMC) run to T = 5,000 and the burn-in size to T_0 = 1,000. This setup leads to an effective total of 400 MCMC samples for the Ising parameter matrix 𝐒. Notably, an identical MCMC length and burn-in configuration are applied during parameter inference in the single imputation and complete-case analyses.
Figure <ref> gives the plots for the mean squared errors (MSE) of the estimated edge parameters and intercept parameters under different sample sizes and for different methods.
The MSE for each parameter s_jl is defined as
1/50∑_k=1^50(ŝ_k,jl-s_0,jl)^2.
Here, ŝ_k,jl denotes the estimated value from the kth replication while s_0,jl refers to the true value.
Each box in panel (a) corresponds to the 15 edge parameters, and each box in panel (b) corresponds to the 6 intercept parameters.
We notice that the listwise deletion procedure introduces biases into the edge and intercept estimation, resulting in the MSEs for certain parameters not decaying toward zero as the sample size grows. Additionally, both the proposed method and the single imputation method offer accurate parameter estimation, with MSEs decaying toward zero as the sample size increases. Notably, the proposed method is substantially more accurate than the single imputation method, suggesting that aggregating over multiple imputed datasets improves the estimation accuracy. Furthermore,
for smaller sample sizes, the complete-case analysis
seems to yield slightly more accurate estimates of the edge parameters than the single imputation method.
Study II: A fifteen-node case.
We further simulate a fifteen-node Ising model to demonstrate the performance of the proposed method in terms of parameter estimation and edge selection.
Similar to the six-node scenario, we generate model parameters by randomly setting 70% of the edge parameters s_jl to zero to create a sparse network. The true parameter values can be found in Appendix <ref>.
To simulate missing data, we implement the MCAR mechanism and randomly label 50% of the data entries as missing.
Data are generated for four sample sizes of N= 1,000, 2,000, 4,000, and 8,000, following the specified Ising model parameters and MCAR mechanism. For each sample size, 50 independent replications are generated.
Algorithm <ref> is applied to these datasets, where a random starting point is used as in the six-node example. We set the MCMC chain length to T = 5,000 and the burn-in size to T_0 = 1,000.
The resulting MSEs for edge parameter estimation under various sample sizes are displayed in Figure <ref>(a), where each box corresponds to the MSEs for 105 edge parameters.
As we can see, the MSEs decrease toward zero as the sample size increases. Furthermore, by employing a hard thresholding step after edge parameter estimation, a receiver operating characteristic (ROC) curve is created for edge selection under each setting, and the corresponding Area Under the Curve (AUC) is calculated to evaluate the performance of the proposed method. That is, given a hard threshold τ, the True Positive Rate (TPR) and False Positive Rate (FPR) are calculated as
TPR(τ) = ∑_k=1^50∑_j<l1_{|ŝ_k,jl|>τ and s_0,jl≠ 0}/50×∑_j<l1_{s_0,jl≠ 0}, FPR(τ) = ∑_k=1^50∑_j<l1_{|ŝ_k,jl|>τ and s_0,jl=0}/50×∑_j<l1_{s_0,jl=0}.
A ROC curve is obtained by varying the value of τ.
The ROC curves and the corresponding AUC values are given in Figure <ref>(b).
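For completeness, the following sketch traces TPR(τ) and FPR(τ) over a grid of thresholds and approximates the AUC by the trapezoidal rule; S_hat_list is assumed to hold the replication-wise estimates, S_true the true parameter matrix, and the names are ours.

```python
import numpy as np

def roc_from_thresholds(S_hat_list, S_true, taus):
    """Aggregate TPR/FPR over thresholds tau across replications (illustrative)."""
    iu = np.triu_indices(S_true.shape[0], k=1)      # upper-triangular edge entries
    truth = S_true[iu] != 0
    tpr, fpr = [], []
    for tau in taus:
        tp = fp = 0
        for S_hat in S_hat_list:
            sel = np.abs(S_hat[iu]) > tau
            tp += np.sum(sel & truth)
            fp += np.sum(sel & ~truth)
        tpr.append(tp / (len(S_hat_list) * truth.sum()))
        fpr.append(fp / (len(S_hat_list) * (~truth).sum()))
    # Sort by FPR and integrate with the trapezoidal rule to approximate the AUC.
    order = np.argsort(fpr)
    auc = np.trapz(np.array(tpr)[order], np.array(fpr)[order])
    return np.array(fpr), np.array(tpr), auc
```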
§.§ A Real Data Application
We analyze the dataset for the 2001-2002 National Epidemiological Survey of Alcohol and Related Conditions (NESARC), which offers valuable insights into alcohol consumption and associated issues in the U.S. population <cit.>. The dataset consists of 43,093 civilian non-institutionalized individuals aged 18 and older. In this analysis, we focus on two specific sections of the survey
that concern two highly prevalent mental health disorders – Major Depressive Disorder (MDD) and Generalized Anxiety Disorder (GAD). Because MDD and GAD have high symptom overlap <cit.>
and often co-occur <cit.>,
it is important to perform a joint analysis of the symptoms of the two mental health disorders and study their separation. In particular, <cit.> performed factor analysis based on the same data and found that the two mental health disorders have distinct latent structures. We reanalyze the data, hoping to gain some insights from the network perspective of the two mental health disorders.
Following <cit.>, we consider data with nine items measuring MDD and six items measuring GAD. These items are designed according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) <cit.>.
These items ask the participants if they have recently
experienced certain symptoms; see
Table <ref> for their short descriptions.
After eliminating samples with entirely absent values across the 15 items, a total of 42,230 cases remain in the dataset. Note that any “Unknown” responses in the original data are converted into missing values.
The dataset exhibits a significant degree of missingness, with only 2,412 complete cases for the 15 items, representing approximately 6% of the total cases. Specifically, the missing rate for each item is given in Table <ref>.
Importantly, items D1 and D2 function as screening items and, thus, have a very low missing rate. The respondents did not need to answer items D3-D9 if the responses to D1 and D2 were “No” or “Unknown”, resulting in high missing rates for these items. This pattern suggests that the missing data in this study is not MCAR. The GAD items A1-A6 also have a screening item, which results in the high missing rates in A1 through A6. Following the treatment in <cit.>, these screening items are not included in the current analysis.
We apply the proposed method and the complete-case analysis to the data. For each method, 10 MCMC chains with random starting values are used, each having 10,000 MCMC iterations and a burn-in size of 5,000. The Gelman-Rubin statistics are always below 1.018, confirming the satisfactory convergence of all 120 parameters for both methods. The estimated network structures for the MDD and GAD items are presented in Figure <ref>, where an edge is shown between two variables when the estimated parameter has an absolute value greater than the hard threshold of 0.5. This hard threshold is chosen to ensure a clear visualization of the network. The nine MDD items are shown as blue nodes at the bottom, and the six GAD items are shown as red nodes at the top.
The edges are colored red and blue, which represent
positive and negative parameter estimates, respectively.
In addition, the line thickness of the edges indicates their magnitude. A clear difference between the two methods is the edge between D1 “depressed mood most of the day, nearly every day," and D2 “markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day", which are two screening questions in the survey that all the participants responded to. The estimated parameter for this edge has a large absolute value under each of the two methods, but the estimated parameter is negative in the complete-case analysis, while it is positive according to the proposed method.
Given that data are not MCAR
and considering the content of the two items, we believe that the estimate from the proposed method is more sensible. Furthermore, we also see that the complete-case analysis yields more edges than the proposed method; for example, the edges of A4-A5, A1-D5, D1-D6, D1-D7, D3-D4, and D8-D9 appear in the estimated network from the complete-case analysis but not in that of the proposed method. They are likely false positives due to the higher estimation variance of the complete-case analysis, where the high variance is due to the relatively small sample size.
Finally, our analysis shows that the symptoms of each mental health disorder tend to densely connect with each other in the Ising network, while the symptoms are only loosely but positively connected between the two mental health disorders. The edges between the two mental health disorders identify the overlapping symptoms, including “D4: Insomnia or hypersomnia"
and “A6: Sleep disturbance", “A2: Easily fatigued" and “D6: Fatigue/loss of energy", and “A3: Difficulty concentrating" and “D8: Diminished concentration". These results suggest that MDD and GAD are two well-separated mental health disorders, despite their high symptom overlap and frequent co-occurrence. This result confirms the conclusion of <cit.> that GAD and MDD are closely related but different nosological entities.
§ CONCLUDING REMARKS
In this paper, we propose a new method for Ising network analysis in the presence of missing data. The proposed method integrates iterative imputation into a Bayesian inference procedure based on conditional likelihood. An asymptotic theory is established that guarantees the consistency of the proposed estimator. Furthermore, a Pólya-Gamma machinery is proposed for the sampling of Ising model parameters, which yields efficient computation.
The power of the proposed method is further shown via simulations and a real-data application. An R package has been developed that will be made publicly available upon the acceptance of the paper.
The current work has several limitations that require future theoretical and methodological developments. First, we did not investigate using sparsity-inducing priors to better explore the Ising network structure when it is sparse. We believe that the proposed method, including the iterative imputation and the Pólya-Gamma machinery, can be adapted when we replace the normal prior with the spike-and-slab prior considered in <cit.>.
This adaptation can be done by adding some Gibbs sampling steps. Second, from the frequentist perspective, the asymptotic normality of the Bayesian estimator remains to be established through a result similar in form to the Bernstein-von Mises theorem. This analysis is challenging and is left for future investigation. Finally, the computational cost of the Ising network analysis is still quite high, as discussed in Section <ref>. When sparsity-inducing priors are incorporated, more scalable algorithms may be developed, following recent advances in high-dimensional Bayesian model selection <cit.>.
§ ACKNOWLEDGEMENT
The research was supported in part by the Shanghai Science and Technology Committee Rising-Star Program (22YF1411100).
Define the ignorable likelihood
L_ign(𝐒|𝐲_obs)=∫ p(𝐲_obs,𝐲_mis|𝐒)d𝐲_mis.
“Ignorable” in this context means that valid Bayesian estimation of the model parameters can be achieved without modeling the missingness mechanism. That is, the posterior distribution of 𝐒 based on p(𝐒|𝐲_obs)∝ p(𝐒)L_ign(𝐒|𝐲_obs) is the same as the posterior distribution based on p(𝐒,ϕ|𝐲_obs,𝐳)∝ p(𝐒,ϕ)L_full(𝐒,ϕ|𝐲_obs,𝐳). Conditions such as Missing At Random (MAR), along with a priori independent parameters, i.e., p(𝐒,ϕ)=p(𝐒)p(ϕ), are sufficient for ignorable missingness.
We assume ignorable missingness throughout and assume the existence of a unique stationary distribution for the Markov chain of 𝐲_mis produced by the proposed iterative imputation.
In particular, let {𝐲_mis^(k), k∈ℤ^+} be the Markov chain of the missing data produced by <ref>. It possesses a unique stationary distribution, henceforth referred to as μ^*_𝐲_obs.
Define p(𝐲_mis|𝐲_obs) = ∫ p(𝐲_mis|𝐲_obs,𝐒)p(𝐒|𝐲_obs)d𝐒 to be the true posterior predictive distribution of the missing data given the observed data.
As detailed in <ref>, we demonstrate that μ^*_𝐲_obs converges in total variation to the true posterior predictive distribution.
Let d_TV denote the total variation distance between two measures, that is
d_TV(μ^*,μ) = sup_A∈ℱ|μ^*(A) - μ(A)|,
where μ^*,μ are defined on the same probability space.
<Ref> emphasizes a critical consistency aspect of the iterative imputation process. It demonstrates conclusively that the stationary distribution of the Markov chain for the missing data gradually converges to the true posterior. This essential insight underscores the validity of the iterative imputation approach and sets a fundamental basis for ensuring consistent model estimation. We note that the regularity condition is applicable for the normal prior used in this paper, as well as for others such as the spike-and-slab prior. We further demonstrate in below the consistency of Ising model parameters.
<Ref> establishes the consistency of the model parameters. This conclusion is built on the consistency of the stationary distribution of the missing data stated in <ref> and the consistency of composite likelihood. This finding substantiates the use of the iterative imputation method in <ref>, for the estimation of parameters in the Ising model under ignorable missingness.
The pseudo likelihood, as outlined in <ref>, mitigates the computational complexity inherent to the Ising model expressed in <ref>. However, its application is impeded when analyzing datasets with missing values. Specifically, the conditional models within the pseudo likelihood framework become undefined when 𝐲_-j incorporates missing values. To counter this obstacle, we propose an iterative imputation strategy. This strategy is integrated within our model, enabling both missing data imputation and model estimation. The primary contribution of our research is the development of a methodology for simultaneous data imputation and model estimation in the context of network psychometrics with missing data, bolstered by theoretical justifications and an efficient computational algorithm for the proposed method.
We further introduce some additional notation for the missing data setting. Let 𝐘_mis and 𝐘_obs represent the missing and observed entries of the response data, with 𝐘_mis={𝐘_j,mis | j=1,…,J} and 𝐘_obs={𝐘_j,obs | j=1,…,J}. Under the missing at random (MAR) assumption, the missing data 𝐘_mis can be imputed one variable at a time by drawing 𝐘_j,mis iteratively from the conditional distributions p(y_ij|𝐲_i,-j,𝐬_j), j=1,…,J, i∈{i | y_ij is missing}. This process depends on all other variables and the current estimated parameters. We propose <ref> to perform model estimation while iteratively imputing the missing data.
In the domain of psychometric data analysis, traditional methods have been firmly grounded in Spearman's g-factor theory <cit.>, employing item factor analysis and the fundamental assumption of latent constructs.
However, a paradigm shift has been observed in recent years with the introduction of network models <cit.>. This innovative approach suggests a departure from the concept of latent constructs, postulating instead that measurement items comprise a complex network that elucidates their conditional associations <cit.>. Under this approach, each variable within the network is denoted by a node. Two nodes are interconnected by an edge if the variables they represent are conditionally dependent, meaning they maintain an association after adjusting for all other variables in the network. Conversely, if the relationship between two variables can be accounted for by other variables in the network, these variables are considered conditionally independent and the edge between them is removed.
This method, known as “network psychometrics”, brings a novel viewpoint to the analysis of multivariate psychometric data, garnering increasing recognition within the psychometric community <cit.>.
Early development research on the subject of network modeling in psychometrics area includes network visualization <cit.> and software development <cit.>. These visualization methods and software implementations populate the application of network modeling to practitioners in psychometric area.
Apart from modeling multivariate data at a single time point, called the cross-sectional data, dynamic network modeling for longitudinal data have been included in the network psychometrics framework <cit.>. <cit.> extends the Gaussian graphical model with latent variables to handle measurement error and random effect for the time-series or panel data analysis. The dynamic network modeling is a promising approach to model the time-varying associations between variables, which is a common phenomenon in psychometric data. For ordinal data which is prevalent in psychometric surveys, <cit.> introduces the graphical model for the analysis where ordinal variables are described by discretizing continuous latent variables that follow Gaussian graphical model. An approximate EM-like algorithm is proposed to handle the computational challenge. <cit.> looked into two graphical modeling approaches for ordinal data that originated from two parameterization of the univariate ordinal distribution. Theoretical properties and computational efficient estimators are derived. <cit.> uses copula model with latent Gaussian variables to model mixed type data with ordinal and continuous variables. A rank-based ensemble approach is introduced for model estimation. <cit.> proposed a modified ordinal graphical models with finite mixtures to model grouped ordinal data, where a generalized expectation-maximization algorithm is developed.
Despite the significant advancements observed in the field of network psychometrics, there exist a number of challenges and gaps. There is a discernible disconnect between the available network modeling methodologies, theories, computation algorithms to the actual requirements of practical problems. Consequently, the resolution of these practical needs remains a work in progress.
A key challenge, as highlighted by <cit.>, is the task of network structure selection, often termed as model or edge selection.
This problem originates from the vast parameter space related to the graphical model, which expands exponentially with an augmentation in the number of nodes, thus potentially inducing model instability - an element frequently overlooked in network modeling.
Numerous studies have been undertaken to measure this instability and accompanying uncertainty <cit.>. <cit.> conducted an analysis of the reliability and replicability of previous Gaussian graphical model results, inciting concerns about the trustworthiness of reproduced outcomes; see <cit.> for additional comments.
Incorporating sparsity in network models enhances their interpretability. This is often achieved using penalties or regularization techniques, like Lasso, from a frequentist perspective. For example, graphical Lasso (gLasso) and eLasso are leveraged to eliminate trivial model estimates by shrinking them to zero <cit.>. On the other hand, Bayesian solutions for network structure selection often involve introducing hyperparameters with specific priors, including the graphical horseshoe prior <cit.> and Laplacian priors <cit.>. Recently, <cit.> proposed employing spike-and-slab hyperparameters for the Ising model edges, preceded by a screening step to limit the parameter space. The consistency of edge selection has been thoroughly investigated. <cit.> offers a comparative review of this Bayesian method and the frequentist gLasso approach. In addition to these, there is another category of solutions that are built on non-regularized approaches. For high-dimensional Gaussian graphical models, <cit.> proposed a method based on multiple hypothesis testing that relies on a new measure of partial correlation coefficients. Moreover, non-regularized methods like multiple regression and non-regularized maximum likelihood have been introduced within the context of psychological networks <cit.>.
Another concern that often complicates current network psychometrics is the presence of missing data, a common phenomenon in survey data during the gathering process. The problem of missing data, originated from <cit.>, continues to be an enduring and important subject in the statistical literature <cit.>. Three missingness mechanisms are established. Specifically, the data is called Missing Completely At Random (MCAR) if the missingness is independent of both the observed and unobserved data; A less stringent mechanism is Missing At Random (MAR), in which the missingness depends on the observed data but is independent of the unobserved data; Lastly, Missing Not At Random (MNAR) refers to the situation where the missingness depends on elements of the unobserved data.
Statistical methods for handling missing data have evolved alongside the development of new modeling methods, transitioning from basic to advanced and sophisticated, and from restrictive to more relaxed assumptions.
Basic strategies for dealing with missing data encompass complete-case analysis, which omits records containing any missing values. However, complete-case analysis is generally viewed as a rudimentary method, as it depends on the Missing Completely At Random (MCAR) assumption. This is typically a strong assumption for data, which can lead to inefficiency and substantial bias (refer to <ref> for a demonstration). Other basic methods encompass weighting methods and single imputation methods, further details of which are provided in sections 3 and 4 of <cit.>.
More advanced likelihood-based methods include Expectation-Maximization (EM) techniques <cit.> and multiple imputation <cit.>, the latter of which provides valid standard errors for parameters and accommodates more general missing data mechanisms.
In particular, chained-equation is a special likelihood-based imputation methods that is typically combined with multiple imputation, referred to as Chained-Equation Multiple Imputation <cit.>.
This technique, also known as Fully Conditional Specification (FCS) or sequential regression, imputes missing data from a series of specified conditional distributions <cit.>. The advantage of FCS over methods based on joint distributions lies in its flexibility, allowing for each variable with missing data to be modeled separately using its own model.
This offers the possibility of employing different types of models for different variables, if necessary, while joint distribution methods require the specification of a joint distribution which can be vulnerable. Other benefits of Chained-Equation methods include computational efficiency and robustness <cit.>.
Furthermore, <cit.> established the convergence of the iterative imputation under the compatible and incompatible cases. Despite the advancement in methods and theories concerning missing data analysis, their incorporation into network psychometrics has been inadequate. And literature in network psychometrics is sparse.
The last challenge concerns computational efficiency. Computation for network models tends to be difficult due to the intricacies involved in model specification and optimization over a vast parameter space. For instance, the Ising network model for binary data incorporates an intractable normalizing constant, leading to a heavy computational burden during direct optimization.
A convenient solution lies in the use of pseudo likelihood, where the multiplication of each node conditioned on the others forms a node-conditional model <cit.>. This node-wise conditional form conveniently eliminates the normalizing constant, thereby addressing the computational load while still ensuring estimation consistency <cit.>. Additionally, the Pólya-Gamma augmentation has been proposed, which establishes a connection between the logistic form and the normal distribution. This allows for efficient Bayesian analysis of binary data with a logistic probability form <cit.>.
In response to the challenges and gaps outlined above, this study embarks on answering the following research inquiries: How can network model analysis be conducted in the presence of missing data? What is the theoretical justification for valid imputation and model estimation? How can we boost computational efficiency? With this work, we address the critical gap between the urgent requirement for solutions and the current paucity of available methods.
In this study, we introduce a pioneering approach for Ising model analysis on psychometric missing data, grounded in iterative imputation.
The new method addresses the issue of network modeling with data missingness, determination of network structure, and efficient computation within the comprehensive Bayesian framework.
This paper offers several key contributions. First, we propose a unique modeling framework, based on the Ising model and incorporating fully conditional specification iterative imputation to analyse binary responses in the presence of missing values. Second, we assert theoretical outcomes that validate both the consistency of the missing data imputation and model parameter estimates inherent in our proposed procedure. Third, we develop an efficient computational algorithm leveraging Pólya-Gamma augmentation and spike-and-slab priors. Lastly, we assess the effectiveness of our proposed framework, underlying theories, and the developed algorithm through extensive simulations and analysis of real-world data.
The subsequent sections of this paper are arranged in the following order: In Section <ref>, we introduce the employed models and approaches. We derive the asymptotic characteristics of our proposed procedure in Section <ref>. Section <ref> is devoted to the discussion of edge selection. The computational algorithm for model estimation is detailed in Section <ref>. We evaluate our method's performance through simulation studies in Section <ref>. Section <ref> is dedicated to the application of our new method to a real data set. The paper concludes with Section <ref>.
We remark that <ref> comprises two general parts, with two distinct sets of parameters (i.e., {𝐬_j, j=1,…,J} and 𝐒) being introduced. The first part encompasses a Gibbs sampler for a series of conditional distributions, wherein 𝐬_j contains the parameters of the jth conditional model. In particular, the conditional models adopt the logistic form, which is compatible with the joint Ising model. A rigorous definition of compatibility can be found in <cit.> and <cit.>. Through iterative sampling of the missing data and model parameters from the conditional distributions, Markov chains are established.
However, a complication arises because the matrix formed by the sampled 𝐬_js can be non-symmetric, thereby violating the structure of the Ising model. In order to resolve this issue and forge a connection between the imputed missing data and the Ising model parameters, we introduce the second part, the thinning step. Specifically, we sample 𝐒 given the imputed missing data, i.e., 𝐒∼ p(𝐲_mis,𝐲_obs|𝐒)p(𝐒). Nevertheless, due to the intractable normalizing constant in p(𝐲_mis,𝐲_obs|𝐒) as per <ref>, sampling from this distribution poses significant challenges. To circumvent this, we employ the pseudo likelihood when drawing 𝐒. This crucial substitution greatly enhances computational speed (as detailed in <ref>), whilst maintaining the beneficial theoretical properties of data imputation and parameter estimation (<ref>).
§ CONSISTENCY FOR IMPUTATION AND MODEL PARAMETERS
§ EDGE SELECTION
Model selection is crucial in Ising model analysis, owing to the large parameter space and high model complexity. How to restrict the model to a parsimonious structure while maintaining a good fit to the data has been a popular and vital question in the statistical literature <cit.>. In this section, we propose a Bayesian model selection method for the Ising model, which is a special case of the model stated in <ref>.
In particular, to induce a sparse network structure, we adopt the spike-and-slab prior <cit.> for the network edge parameters. Specifically, we use a special prior p(α) for the edge parameters, comprising a mixture of two distributions: a distribution concentrated at zero (the “spike”) and a diffuse distribution spanning the parameter space (the “slab”). This prior effectively acts as Bayesian regularization for the network edges. That is,
α_l|δ_l ∼ (1-δ_l)N(0, σ_0^2) + δ_l N(0, σ_1^2),
δ_l ∼Bernoulli(q), for l∈ E,
α_l' ∼ N(0, σ_2^2), for l'∈ D,
where q∈ (0,1), σ_1^2≫σ_0^2>0, E contains indices for the edge parameters (i.e., off-diagonal parameters) and D contains indices for the intercept parameters (diagonal parameters). Here N(0,σ_0^2) and N(0,σ_1^2) correspond to the “spike” and the “slab” of the prior, respectively. If δ_l=0, then a “spike” prior is used for α_l, representing the belief that the edge between variables j and k might be neglected in the model. If δ_l=1, then a large-variance “slab” prior allows non-zero values for α_l. Furthermore, we apply a weakly informative prior (i.e., a normal distribution with a large variance σ_2^2) to the intercept parameters instead of using the spike-and-slab prior. The parameter q controls the proportion of “spike” and “slab” priors. So the joint distribution is
p^*(|,Ω)p(|)p()p(Ω).
The hyperparameters have to follow certain rules to ensure the consistency of edge selection when applying the spike-and-slab prior. It is suggested that the variance of the “spike” distribution, denoted σ_0^2, diminish at the rate n^-1 rather than remain fixed.
In this research, we set σ_0^2=σ_1^2/(10nlog(n)) and σ_1^2=1. For the prior distribution of the diagonal parameters, we let σ_2^2=100.
In terms of the network structure prior q, we allow it to vary. We direct readers to <cit.> for further discussion of strategies for choosing the hyperparameters.
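To make the role of these hyperparameters concrete, the following Python sketch (our illustration only; the function name and default settings are hypothetical) draws edge parameters from the spike-and-slab prior with the variance rule above, so that the spike variance shrinks as the sample size n grows.

```python
import numpy as np

def sample_spike_slab_edges(n_edges, n, q=0.5, sigma1_sq=1.0, rng=None):
    """Draw edge parameters alpha_l from the spike-and-slab prior.

    Hyperparameters follow the choices in the text:
    sigma0^2 = sigma1^2 / (10 n log n), sigma1^2 = 1 (slab variance).
    """
    rng = np.random.default_rng(rng)
    sigma0_sq = sigma1_sq / (10.0 * n * np.log(n))   # spike variance shrinks with n
    delta = rng.binomial(1, q, size=n_edges)          # edge inclusion indicators
    scale = np.where(delta == 1, np.sqrt(sigma1_sq), np.sqrt(sigma0_sq))
    alpha = rng.normal(0.0, scale)                    # mixture of two normals
    return alpha, delta

# Example: 15 edges (a 6-node graph), sample size n = 200.
alpha, delta = sample_spike_slab_edges(n_edges=15, n=200, rng=0)
print(delta, np.round(alpha, 3))
```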
§ COMPUTATION
For the Ising model analysis with missing entries in , we employ a Gibbs sampler. This approach iterates between sampling the missing data from the Ising conditional models and sampling the model parameters, together with augmentation random variables, from the complete-data pseudo likelihood once the missing data have been properly imputed.
To boost computational efficiency within our modeling framework, we turn to the Pólya-Gamma augmentation strategy <cit.> during the parameter sampling process. The computation of the proposed method is then divided into two parts: 1) an imputation step that draws missing data and auxiliary parameters from standard logistic models with latent variables of Pólya-Gamma augmentation and spike-and-slab prior; 2) a thinning step that draws model parameters given the imputed missing data.
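As an illustration of the imputation step, the short Python sketch below (our own illustrative code with hypothetical variable names, not the implementation used in the paper) redraws each missing entry from its Ising conditional p(y_j|_-j,_j), which has the logistic form used throughout this section.

```python
import numpy as np

def impute_missing(Y, miss_mask, S, rng):
    """One imputation sweep: redraw missing y_ij from the conditional
    P(y_ij = 1 | y_i,-j) = logistic(s_jj / 2 + sum_{k != j} s_jk * y_ik)."""
    N, J = Y.shape
    for j in range(J):
        phi = Y @ S[:, j] - Y[:, j] * S[j, j] + 0.5 * S[j, j]
        p = 1.0 / (1.0 + np.exp(-phi))
        rows = np.where(miss_mask[:, j])[0]
        Y[rows, j] = rng.binomial(1, p[rows])
    return Y

rng = np.random.default_rng(1)
N, J = 100, 6
S = np.zeros((J, J)); S[0, 1] = S[1, 0] = 0.8       # one positive edge, zero intercepts
Y = rng.binomial(1, 0.5, size=(N, J)).astype(float)
miss_mask = rng.random((N, J)) < 0.2                 # 20% of the entries treated as missing
Y = impute_missing(Y, miss_mask, S, rng)
```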
§.§ Pólya-Gamma augmentation
Upon observation of the logistic form present in the pseudo likelihood as indicated by equation (<ref>), we employ the Pólya-Gamma augmentation to implement an intuitive and effective procedure for imputing missing data. This procedure originates from the crucial equation shown below,
(e^ϕ)^a/(1+e^ϕ)^b = 2^-b e^κϕ∫_0^∞ e^-ωϕ^2/2 p(ω)dω,
where κ = a - b/2, ω∼ PG(b, 0), and ω|ϕ∼ PG(b,ϕ). As a result, we can rephrase the conditional distribution density p(y_j|_-j,_j) in the manner depicted by the following equation,
exp[s_j^⊤y_-j^*]^y_j/(1 + exp[s_j^⊤y_-j^*])
= 2^-1exp(κ_j(s_j^⊤y_-j^*))𝔼_p(ω_j| 1,0)[exp(-ω_j(s_j^⊤y_-j^*)^2/2)],
where κ_j = y_j - 1/2, ω_j|ϕ_j∼ PG(1, ϕ_j), ϕ_j=s_j^⊤y_-j^*.
Supposing we have N independent observations as represented by = (_1,…,_N)^⊤, the subsequent conditional distribution for _j is illustrated below,
p(_j|_j,_-j,_j,ω_j) ∝exp[-1/2(_j-_s_j)^⊤Σ_s_j^-1(_j-_s_j)]
where Σ_s_j = (𝐘_-j^*⊤Ω_j𝐘_-j^*+D_s_j)^-1, _s_j = Σ_s_jY_-j^*⊤_j, D_s_j is a diagonal matrix whose lth diagonal element equals (1-δ_jl)σ_0^-2+δ_jlσ_1^-2 for l≠ j and whose jth diagonal element equals σ_2^-2, _j=_j-(1/2)1, ^*_-j = (^*_1,-j,…,^*_N,-j)^⊤ = - _je_j^⊤ (e_j is a J-dimensional vector with the jth element equal to 1 and all others 0).
Furthermore, we deduce the conditional distribution for ,
p(|,,Ω) ∝exp[-1/2(-_α)^⊤Σ_α^-1(-_α) ],
with
Σ_α = [ M^⊤Ω_D M + T^⊤ D_D T]^-1, μ_α = Σ_α M^⊤vec(K),
where M = ((𝐘_-1^* T_1)^⊤,…,(𝐘_-J^* T_J)^⊤)^⊤ is a NJ× J(J+1)/2 matrix, T = (T_1^⊤,…,T_J^⊤)^⊤ is a J^2× J(J+1)/2 matrix, and vec(K)=(_1^⊤,…,_J^⊤)^⊤, Ω_D = diag(Ω_1,…,Ω_J)=diag(vec(Ω)), D_D = diag(D_s_1,…,D_s_J). A thorough elucidation of the posterior distribution for _j and is provided in the supplementary material <ref>, which follows from the standard logistic regression model with Pólya-Gamma augmentation. Finally, we have the block-wise Gibbs sampling procedure given in <ref>.
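For concreteness, the Gaussian full conditional for _j can be sampled as in the following Python sketch (our illustration with hypothetical names, not the authors' code). Following the convention in the supplementary material, the jth column of the design matrix is set to 1/2 so that the intercept enters the linear predictor as s_jj/2, and the Pólya-Gamma draws ω_j are taken as given here.

```python
import numpy as np

def draw_s_j(Y, j, omega_j, delta_j, sigma0_sq=1e-3, sigma1_sq=1.0, sigma2_sq=100.0, rng=None):
    """Draw s_j from its Gaussian full conditional.

    Design matrix X: row i is y_i with the jth entry replaced by 1/2, so that
    X s_j = s_jj/2 + sum_{k!=j} s_jk y_ik (the logistic linear predictor).
    Posterior: Sigma = (X^T diag(omega_j) X + D)^(-1), mu = Sigma X^T kappa_j,
    with kappa_j = y_j - 1/2, D from the spike-and-slab (edges) / weak prior (intercept).
    """
    rng = np.random.default_rng(rng)
    X = Y.copy()
    X[:, j] = 0.5
    kappa = Y[:, j] - 0.5
    d = np.where(delta_j == 1, 1.0 / sigma1_sq, 1.0 / sigma0_sq)
    d[j] = 1.0 / sigma2_sq
    prec = X.T @ (X * omega_j[:, None]) + np.diag(d)
    cov = np.linalg.inv(prec)
    mu = cov @ (X.T @ kappa)
    return rng.multivariate_normal(mu, cov)

# toy usage with hypothetical data
rng = np.random.default_rng(0)
Y = rng.binomial(1, 0.5, size=(50, 4)).astype(float)
omega_j = rng.gamma(1.0, 1.0, size=50)   # stand-in for Polya-Gamma draws
delta_j = np.ones(4, dtype=int)
s_j = draw_s_j(Y, 0, omega_j, delta_j, rng=rng)
```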
To elucidate further, within <ref> we are dealing with two distinct sets of parameters. The first, the _js and the related augmented latent variables ω,δ, serve as auxiliary parameters inherent to the conditional Ising models. The incorporation of these parameters is driven by several motives. The _js are updated in each iteration alongside the _js, which facilitates the convergence of the imputed missing data given the observed data, similar to the scenario in <cit.>. The second, = vech(), represents the half-vectorization of the joint parameters of the Ising model, namely . It undergoes intermittent updates for the estimation of the joint Ising model parameters , which form a symmetric positive definite matrix. By the consistency results in <ref>,  consistently estimates the joint Ising model parameters .
§ SIMULATION STUDY
§ REAL DATA
§ CONCLUSION
§ APPENDIX
§ OLD STATISTICAL CONSISTENCY
On imputation distribution. Note that our imputation step follows the iterative imputation procedure given in <cit.>, which provides a theoretical framework for analyzing the imputation distribution.
The idea is to compare the proposed procedure with the posterior distribution based on the standard likelihood, where the latter is a standard Bayesian inference procedure for missing data imputation. We remark that the Bayesian inference procedure based on the standard likelihood is introduced for the purpose of theoretical analysis. As discussed earlier, it is computationally infeasible when J is large due to the high-computational burden of computing the normalizing constant c().
Specifically, consider one observation with a complete data vector = (Y_1, ..., Y_J)^⊤. Further, let = (Z_1, ..., Z_J)^⊤ be a vector of missing indicators, where Z_j = 1 if Y_j is observed and Z_j = 0 otherwise. We further let _obs = {Y_j: Z_j = 1, j = 1, ..., J} and _mis = {Y_j: Z_j = 0, j = 1, ..., J} be the observed and missing entries of , respectively. Consider the joint distribution of the observable data (_obs, ), taking the form
P(_obs = _obs, = 𝐳|, ϕ) = ∑__mis( exp(^⊤/2)/c()) q(𝐳|,ϕ),
where exp(^⊤/2)/c() is the distribution of = under the Ising model,
q(𝐳|,ϕ) denotes the conditional probability of = 𝐳 given =, and ϕ denotes the unknown parameters of this distribution.
The MAR assumption, also known as the ignorable missingness assumption, means that the conditional distribution q(𝐳|,ϕ) depends on only through the observed entries, i.e., q(𝐳|,ϕ) = q(𝐳|_obs,ϕ). In that case, (<ref>) can be factorized as
P(_obs = _obs, = 𝐳|, ϕ) = q(𝐳|_obs,ϕ) ×(∑__misexp(^⊤/2)/c()).
Consequently, the inference of only needs to depend on the second term in the product
∑__misexp(^⊤/2) /c().
In particular, given a dataset with N observations with missing indicators 𝐳_i and observed responses _i,obs,
the likelihood function for takes the form
L_ign()= ∏_i=1^N (∑__i,misexp(_i^⊤_i/2)/c()).
Note that this likelihood function only involves the parameters and the observed data 𝐳_i and _i, obs.
Ignoring the computational cost for the moment, one can set a prior distribution π() on the parameter matrix and perform standard Bayesian inference.
On parameter estimation. We further show that the stationary distribution of from the proposed method concentrates in a neighbourhood of the true model parameter _0. Consequently, the trajectory average from Algorithm <ref> consistently estimates _0.
We summarize this result in Theorem <ref> below.
Assume the conditions in Theorem 1 hold. Denote by _0 the true parameters of the Ising model and by π_N^*() the posterior density of  implied by the stationary distribution of the proposed method. We have, for any ε >0,
∫_B(_0,ε)π_N^*()d→ 1,
in probability as N→∞. B(_0,ε) is a ball with radius ε in parameter space.
<Ref> establishes the consistency of the model parameters,
and further substantiates the use of the proposed iterative imputation method for the estimation of parameters in the Ising model under ignorable missingness.
§ TECHNICAL PROOFS
§.§ A lemma for imputation consistency
Following the derivation of Section 2.3 in the main text, under ignorable missingness assumption, the posterior distribution for satisfies
π_N() ∝ p(_obs|) π(). Under the same Bayesian model, one can impute the missing values from the posterior predictive distribution. That is, the posterior predictive distribution for _i,mis, i=1, ..., N, takes the form
p_N(_1,mis, ..., _N,mis) = ∫π_N()(∏_i=1^N exp(_i^⊤_i/2)/(c()p_i(_i,obs|))) d,
where p_i(_i,obs|) = ∑_y_ij:z_ij=0exp(_i^⊤_i/2)/c(). Further, suppose that the
Algorithm <ref> converges to a stationary distribution, and let p^*_N(_1,mis, ..., _N,mis) be the implied posterior predictive distribution given the observed data.
Then Lemma <ref>, which is an adaptation of Theorem 1 of <cit.>, shows that p_N(_1,mis, ..., _N,mis) and p^*_N(_1,mis, ..., _N,mis) converge to each other in the total variation sense.
Assume the following assumptions hold: 1) The Markov chain for missing data, generated by the iterative imputation algorithm Algorithm <ref>, is positive Harris recurrent and thus admits a unique stationary distribution denoted by p_N^*; 2) The missing data process is ignorable; 3) A regularity condition holds for prior distributions of Ising model parameters and auxiliary parameters, as detailed in <Ref>.
Then the implied posterior predictive distribution p^*_N is consistent with the true posterior predictive distribution, p_N, i.e.,
d_TV(p^*_N,p_N) = max__1,mis,…,_N,mis| p^*_N(_1,mis, ..., _N,mis) - p_N(_1,mis, ..., _N,mis)|→ 0,
in probability as N→∞.
To prove Lemma <ref>, we start by defining a Gibbs sampling process for the joint Ising model, as outlined in Algorithm <ref>. This algorithm is constructed for theoretical purposes, since the step of sampling  is intractable.
The aim of our proof is to show that the posterior predictive distribution of the missing data given the observed data, p_N^*, implied by Algorithm <ref>, converges in total variation to the true posterior predictive distribution p_N.
We first establish that p_N^* in fact converges to the stationary distribution of the Gibbs chain, denoted as p_N', implied by Algorithm <ref>. By corroborating the convergence of p_N' and p_N, the proof of Lemma <ref> is thereby completed. In the following proof, we reparameterize =vech() for convenience. We define the following for the proof.
* We denote by 𝒴 the data matrix with N samples and J variables, by 𝒴_j its jth column, and by 𝒴_-j the remaining J-1 columns.
* Define A_N={𝒴|‖(𝒴)‖≤γ}, where (𝒴) is the complete-data maximum likelihood estimator, where γ can be sufficiently large so that
p_N^*(A_N) → 1, and
p_N'(A_N) → 1,
in probability as N→∞.
* Let
K(ω,dω') = (𝒴_mis^(k+1)∈ dω'|𝒴_mis^(k) = ω)
be the transition kernels for the missing data chain, which depend on 𝒴_obs.
* Let K^*(ω,dω') and K'(ω,dω') be the transition kernels for the missing data chains from Algorithm <ref> and Algorithm <ref>, respectively.
* We further define the transition kernels conditional on A_N by
K̃(ω,B) = K(ω,B∩ A_N)/K(ω,A_N).
So we have K̃^*(ω,·),K̃'(ω,·) are two transition kernels for the missing data chains conditional on A_N. And let p̃_N^*, p̃_N' be their stationary distributions, respectively.
* Define ‖μ‖_1 = sup_| h|≤ 1∫ h(x)μ(dx).
[A regularity condition for priors]
Let (𝒴) be the complete data maximum likelihood estimator and A_N = {𝒴: ‖(𝒴)‖≤γ}. Since the logistic models are compatible with the Ising model, we also have the maps _j=T_j(), j=1,…,J. Let π_j(_j) and π() be prior distributions. Further define _j^* = T_j^*() such that T̃_j() = {T_j(),T_j^*()} is a one-to-one invertible map (_j^* can be ∖_j).
Define
π_j^*(_j,_j^*) = (∂T̃_j/∂)^-1π(T̃_j^-1(_j,_j^*)).
Let L_j(_j) = π_j(_j)/π_j,𝒴_-j(_j), where
π_j,𝒴_-j(_j) = ∫ p(𝒴_-j|_j,_j^*)π_j^*(_j,_j^*)d_j^*
=∫∑_y_1j,…,y_Nj p(𝒴_j,𝒴_-j|_j,_j^*)π_j^*(_j,_j^*)d_j^*.
The assumption requires that on the set A_N,
sup_‖_j‖<γ∂log L_j(_j)<∞.
We remark that the above assumption holds for the Ising model with the normal priors adopted in the current paper. Specifically, π_j(_j) and π() are J-variate and J(J+1)/2-variate normal distributions, respectively. Moreover, π_j^* can also be a J(J+1)/2-variate normal distribution.
Since, on A_N, L_j(_j) is a continuously differentiable function defined in ℝ^J(J+1)/2, it is then Lipschitz on any compact set in ℝ^J(J+1)/2. That is, on A_N, sup_‖_j‖<γ∂log L_j(_j) = sup_‖_j‖<γ[∂logπ_j(_j) - ∂logπ_j,𝒴_-j(_j)]<∞.
According to the assumptions, the Markov chain for the missing data produced by the Gibbs sampling procedure in Algorithm <ref> is positive Harris recurrent and thus admits a unique stationary distribution p_N'.
We verify that the conditions hold. First, on A_N, the Fisher information of the Ising model has a lower bound of ϵ n for some ϵ. So, according to Proposition 1 of <cit.>, we have ‖ K^*(ω,·)-K'(ω,·)‖_1→ 0 uniformly for ω∈ A_N, that is,
lim_N→∞‖K̃^*(ω,·) - K̃'(ω,·)‖_1 = 0.
According to the standard bound for Markov chain convergence rates, there exists a common starting value ω∈ C and a bound r_k such that (ii) of Lemma 2 holds. Then Lemma 2 gives us
d_TV(p̃_N^*, p̃_N')→ 0,
Further combining with conclusions in Lemma 1 in <cit.> that d_TV(p_N^*, p̃_N^*)→ 0, and d_TV(p_N', p̃_N')→ 0, we have the convergence of iterative imputation of compatible models,
d_TV(p_N^*, p_N') → 0,
in probability as N→∞.
Next, based on the construction of the Gibbs sampling procedure Algorithm <ref>, we have the sequence converges to the target distribution, that is,
d_TV(p_N', p_N) → 0,
in probability as N→∞.
Based on (<ref>) and (<ref>), we have
d_TV(p_N^*, p_N) = sup_A∈ℱ| p_N^*(A) - p_N(A)|
≤sup_A∈ℱ| p_N^*(A) - p_N'(A)| + sup_A∈ℱ| p_N'(A) - p_N(A)|→ 0,
in probability as N→∞.
<Ref> emphasizes the consistency of the proposed iterative imputation process. It implies that the implied posterior predictive distribution gradually converges to the posterior predictive distribution under standard Bayesian inference, underscoring the validity of the iterative imputation.
§.§ Proof of Theorem <ref>
We will use , the half-vectorization of in the proof. Denote π_N^*() the posterior density of implied by the stationary distribution of the proposed method. Let 𝒴 be the data matrix with 𝒴_mis and 𝒴_obs being the missing and observed parts, respectively. We have
∫_B_ε(_0)π_N^*()d
= ∫_B_ε(_0)[ ∑_𝒴_mis p^*(𝒴_mis,𝒴_obs|)p^*(𝒴_mis|𝒴_obs) ]π() / c_N d
= ∑_𝒴_mis[∫_B_ε(_0) p^*(𝒴_mis,𝒴_obs|) π() / c_N d] p^*(𝒴_mis|𝒴_obs)
= ∑_𝒴_mis[∫_B_ε(_0)exp(-N f_N()) π() / c_N d] p^*(𝒴_mis|𝒴_obs),
where f_N() = -1/N∑_i=1^Nlog p^*(_i|) given in (<ref>), c_N = ∫exp(-N f_N()) π()d. Further let π_N() = ∑_𝒴_mis[exp(-N f_N())π()/c_N] p(𝒴_mis|𝒴_obs), we have
∫_B_ε(_0)π_N()d
= ∑_𝒴_mis[∫_B_ε(_0)exp(-N f_N()) π() / c_N d] p(𝒴_mis|𝒴_obs).
Let Θ⊂ℝ^J(J+1)/2, and let E⊂Θ be open and bounded. It can be verified that: 1) the f_N have continuous third derivatives; 2) f_N→ f pointwise for some f; 3) f”(_0) is positive definite; 4) f”'(_0) is uniformly bounded on E; 5) each f_N is convex and f'(_0)=0.
Then, according to the generalized posterior concentration theorem <cit.>, we have for any ε > 0,
∫_B_ε(_0)exp(-N f_N()) π() / c_N d→ 1
in probability as N→∞. Consequently,
∫_B_ε(_0)π_N()d→ 1,
in probability as N→∞.
Finally, by employing the convergence of imputation from Lemma <ref>, specifically
d_TV(p^*(𝒴_mis|𝒴_obs) - p(𝒴_mis|𝒴_obs))→ 0
in probability as N→∞, we arrive at
∑_𝒴_mis[∫_B_ε(_0)exp(-N f_N()) π() / c_N d] (p^*(𝒴_mis|𝒴_obs) - p(𝒴_mis|𝒴_obs))→ 0
in probability as N→∞. This concludes the proof, given that
∫_B_ε(_0)π_N^*()d = ∫_B_ε(_0)π_N()d + ∫_B_ε(_0)(π_N^*()-π_N())d,
where the first term converges to 1 (i.e., (<ref>)) and the second term converges to 0 (i.e., (<ref>)) in probability as N→∞.
§ COMPUTATION DETAILS FOR SAMPLING _J AND
Upon observing the logistic form of the conditional distribution when sampling the auxiliary parameters _j, we employ the Pólya-Gamma augmentation for effective sampling. A random variable ω follows the Pólya-Gamma distribution PG(b,c) with parameters b>0 and c∈ℝ if it is a weighted sum of independent Gamma random variables
ω = 1/2π^2∑_k=1^∞g_k/(k-1/2)^2+c^2/(4π^2),
where g_k∼Γ(b,1), which is the Gamma distribution with shape and rate parameters as b and 1, respectively.
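This definition suggests a simple, if approximate, way to draw PG(b,c) variates by truncating the infinite sum; the Python sketch below (our illustration, with a user-chosen truncation level K) is sufficient for experimentation, although dedicated samplers are preferable in practice.

```python
import numpy as np

def rpolyagamma(b, c, K=200, rng=None):
    """Approximate draw from PG(b, c) by truncating
    omega = 1/(2 pi^2) * sum_{k>=1} g_k / ((k - 1/2)^2 + c^2/(4 pi^2)),  g_k ~ Gamma(b, 1)."""
    rng = np.random.default_rng(rng)
    k = np.arange(1, K + 1)
    g = rng.gamma(shape=b, scale=1.0, size=K)
    denom = (k - 0.5) ** 2 + c ** 2 / (4.0 * np.pi ** 2)
    return np.sum(g / denom) / (2.0 * np.pi ** 2)

# sanity check: E[PG(1, 0)] = 1/4
draws = [rpolyagamma(1.0, 0.0, rng=i) for i in range(2000)]
print(np.mean(draws))   # should be close to 0.25
```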
§.§ Derivation of posterior distribution of _j
By introducing Pólya-Gamma latent variables ω_ij∼ PG(1,0),i=1,…,N, we establish a connection between the logistic form and the normal distribution.
We rephrase the jth conditional distribution p(y_ij|_i,-j,_j) by the following equation,
exp(ϕ_ij)^y_ij/(1 + exp(ϕ_ij))
= 2^-1exp(κ_ijϕ_ij)𝔼_ω_ij[exp(-ω_ijϕ_ij^2/2)],
where κ_ij = y_ij - 1/2, ω_ij∼ PG(1,0), ω_ij|ϕ_ij∼ PG(1, ϕ_ij), ϕ_ij=β_jj/2+∑_k≠ jβ_jky_ik.
Denote 𝒴=(_1,…,_N)^⊤, 𝒴_j the jth column of 𝒴, and 𝒴_-j the remaining J-1 columns. Given _j, sample N augmentation variables ω_ij, i=1,…,N, each from a Pólya-Gamma distribution
ω_ij|_j,_i∼ PG(1,β_jj/2+∑_k≠ jβ_jky_ik),
based on <ref>. Moreover, for the jth variable, we have
p(𝒴_j|𝒴_-j,_j,ω_j) = ∏_i=1^N p(y_ij|_i,-j,_j,ω_ij)
= ∏_i=1^N 2^-1exp(κ_ij(β_jj/2+∑_k≠ jβ_jky_ik))exp(-ω_ij(β_jj/2+∑_k≠ jβ_jky_ik)^2/2)
∝exp[-1/2(_j^⊤(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)_j-2_j^⊤(𝒴 - _je_j^⊤)_j) ],
where _j=(κ_1j,…,κ_Nj)^⊤, κ_ij = y_ij - 1/2, D_ω_j = diag(ω_j). We further have the following conditional distribution for _j
p(_j |𝒴,ω_j)∝ p(𝒴_j|𝒴_-j,_j,ω_j)π_j(_j)
∝exp[-1/2(_j^⊤(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)_j-2_j^⊤(𝒴 - _je_j^⊤)_j)-1/2_j^⊤ D_β_j_j]
=exp[-1/2(_j-_β_j)^⊤Σ_β_j^-1(_j-_β_j)],
where Σ_β_j = [(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_β_j]^-1, _β_j = Σ_β_j(𝒴 - _je_j^⊤)^⊤_j.
Here, e_j is a J-dimensional vector with the jth element equal to one and all others zero, D_ω_j=diag(ω_j), D_β_j = diag(τ_j), where τ_jl=σ_1^-2 for l≠ j and τ_jj=σ_2^-2. A weakly informative prior is placed on the intercept parameter by letting σ_2^2>σ_1^2.
To summarize, the introduced Pólya-Gamma latent variables ω_ij establish a connection between the logistic form and the normal distribution that lead to a normal form of the posterior _j, i.e.,
_j |𝒴,ω_j ∼ N(_β_j, Σ_β_j),
Σ_β_j = [(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_β_j]^-1,
_β_j = Σ_β_j(𝒴 - _je_j^⊤)^⊤_j.
§.§ Derivation of posterior distribution of
Observe that there is a constraint between the edge parameters, s_jk = s_kj for j≠ k, which is reflected by the symmetry of the matrix 𝐒. It is sufficient to examine the lower triangular elements of 𝐒 while adhering to the equality constraint. To impose this symmetry constraint on , we reparameterize by α=vech(), the half-vectorization of . Specifically,
α=(s_11,…,s_J1,s_22,…,s_J2,…,s_J-1,J-1,s_J,J-1,s_J,J)^⊤ = (α_1,…,α_J(J+1)/2)^⊤.
To establish a relationship between and , we first define the following equation,
_j = E_jvec().
In this equation, vec()=(_1^⊤,…,_J^⊤)^⊤ represents the vectorization of the matrix . The matrix E_j = (0_J,…,I_J,…,0_J) is a J× J^2 matrix, where the jth block is the identity matrix I_J and all other blocks are zero matrices.
Next, we can express vec() as follows,
vec() = D_J,
where D_J is a J^2× J(J+1)/2 duplication matrix, which can be explicitly defined as,
D_J^⊤ = ∑_i≥ j u_ij(vec T_ij)^⊤.
Here, u_ij is a unit vector of order J(J+1)/2 with a one in position (j-1)J + i - j(j-1)/2 and zeros elsewhere. The matrix T_ij is a J× J matrix with ones in positions (i,j) and (j,i) and zeros in all other positions.
By combining equations (<ref>) and (<ref>), we obtain,
s_j = T_jα,
where T_j=E_jD_J is a J× J(J+1)/2 transformation matrix.
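The construction of D_J and T_j=E_jD_J can be made explicit in a few lines; the following Python sketch (our illustration, with hypothetical function names) builds them and numerically checks the identity s_j = T_jα under the vech ordering used in the text.

```python
import numpy as np

def duplication_matrix(J):
    """Build D_J with vec(S) = D_J vech(S); column (j-1)J + i - j(j-1)/2 is vec(T_ij)."""
    D = np.zeros((J * J, J * (J + 1) // 2))
    for j in range(1, J + 1):
        for i in range(j, J + 1):
            col = (j - 1) * J + i - j * (j - 1) // 2 - 1   # 0-based version of the text's index
            T_ij = np.zeros((J, J))
            T_ij[i - 1, j - 1] = T_ij[j - 1, i - 1] = 1.0
            D[:, col] = T_ij.reshape(-1, order="F")        # column-stacking vec
    return D

J = 4
rng = np.random.default_rng(0)
S = rng.normal(size=(J, J)); S = (S + S.T) / 2.0
alpha = np.concatenate([S[j:, j] for j in range(J)])       # vech ordering (s_11,...,s_J1,s_22,...)
D_J = duplication_matrix(J)
assert np.allclose(D_J @ alpha, S.reshape(-1, order="F"))  # vec(S) = D_J vech(S)
for j in range(J):
    E_j = np.zeros((J, J * J)); E_j[:, j * J:(j + 1) * J] = np.eye(J)
    T_j = E_j @ D_J
    assert np.allclose(T_j @ alpha, S[:, j])               # s_j = T_j alpha
```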
Given , we first sample NJ augmentation variables Ω=(ω_ij)_N× J from
ω_ij|𝒴,∼ PG(1,σ_ω,ij^2),
where σ_ω,ij^2 is the (i,j)th entry of Σ_ω = 𝒴 - (𝒴-1/21_N1_J^⊤)∘1_Ndiag()^⊤. =vech^-1(), diag() is the diagonal vector of the matrix , and ∘ is the Hadamard product. Furthermore, given the above transformation, the sampling of can be done instead by sampling from its posterior with a similar Pólya-Gamma augmentation procedure. Specifically, the pseudo likelihood with Pólya-Gamma augmentation is
p^*(𝒴|,Ω) = ∏_j=1^J p(𝒴_j|𝒴_-j,_j,ω_j)
∝exp[- 1/2∑_j=1^J ( _j^⊤(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)_j -2_j^⊤(𝒴 - _je_j^⊤)_j )].
Then we have the posterior of
p (|𝒴,Ω) ∝ p^*(𝒴|,Ω)π()
∝exp[- 1/2∑_j=1^J ( _j^⊤(𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)_j -2_j^⊤(𝒴 - _je_j^⊤)_j )-1/2∑_j=1^J_j^⊤ D_s_j_j]
= exp{-1/2∑_j=1^J[_j^⊤((𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_s_j)_j - 2_j^⊤(𝒴 - _je_j^⊤)_j]}.
Plugging (<ref>) into (<ref>) we have,
p( |𝒴,Ω)
∝exp{-1/2∑_j=1^J[^⊤ T_j^⊤((𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_s_j)T_j - 2_j^⊤(𝒴 - _je_j^⊤)T_j]}
∝exp{-1/2[^⊤(∑_j=1^JT_j^⊤((𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_s_j)T_j)
-2(∑_j=1^J((𝒴 - _je_j^⊤)T_j)^⊤_j)^⊤]}
∝exp[-1/2(-_α)^⊤Σ_α^-1(-_α) ],
where
Σ_α = [∑_j=1^J T_j^⊤((𝒴 - _je_j^⊤)^⊤ D_ω_j(𝒴 - _je_j^⊤)+D_s_j)T_j]^-1, μ_α = Σ_α[∑_j=1^J ((𝒴 - _je_j^⊤)T_j)^⊤κ_j],
which can be further simplified as below.
In summary, the posterior of is
|𝒴,D_ω∼ N(μ_α,Σ_α),
Σ_α = [ M^⊤ D_ω M + T^⊤ D_S T]^-1,
μ_α = Σ_α M^⊤,
where 𝒴 = (_1,…,_N)^⊤, M = ([(𝒴 - _1e_1^⊤) T_1]^⊤,…,[(𝒴 - _je_j^⊤) T_J]^⊤)^⊤, D_ω = diag(ω), ω=(ω_11,…,ω_N1,ω_12,…,ω_NJ)^⊤, T = (T_1^⊤,…,T_J^⊤)^⊤, =(_1^⊤,…,_J^⊤)^⊤, and D_S = diag(τ), where τ=(τ_11,…,τ_J1,τ_12,…,τ_JJ)^⊤, τ_jl=σ_1^-2, for l≠ j and τ_jj=σ_2^-2.
Instead of conventional matrix inversion for calculating Σ_α, we use Cholesky decomposition of a symmetric positive definite (SPD) matrix, which offers enhanced efficiency and numerical stability <cit.>. More precisely, we start by performing the Cholesky decomposition of Σ_α^-1 = LL^⊤, and then proceed to solve two triangular systems: i) LY = I, and ii) L^⊤ X = Y, thus deriving X = Σ_α.
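A brief Python sketch of this Cholesky-based computation (an illustration using SciPy's triangular solvers, not the authors' code) is as follows.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def covariance_from_precision(prec):
    """Compute Sigma = prec^{-1} via Cholesky: prec = L L^T, then solve L Y = I and L^T X = Y."""
    L = cholesky(prec, lower=True)
    Y = solve_triangular(L, np.eye(prec.shape[0]), lower=True)
    return solve_triangular(L.T, Y, lower=False)

# quick check on a random symmetric positive definite matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
prec = A @ A.T + 5.0 * np.eye(5)
Sigma = covariance_from_precision(prec)
assert np.allclose(Sigma @ prec, np.eye(5), atol=1e-8)
```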
§ DETAILED SETTINGS FOR SIMULATIONS
§.§ True parameters used in the six-node example
§.§ MAR settings in the six-node example
§.§ True parameters used in the fifteen-node example
Xushan Huang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
Yi Wang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
We investigate the global existence and optimal time decay rate of solutions to the one-dimensional (1D) two-phase flow described by the compressible Euler equations coupled with the compressible Navier-Stokes equations through the relaxation drag force on the momentum equations. First, we prove the global existence of the strong solution to the 1D Euler-Navier-Stokes system by using the standard continuity argument for small H^1 data, while the second order derivatives can be large. Then we derive the optimal time decay rate to the equilibrium state (ρ_*, 0, n_*, 0). Compared with the multi-dimensional case, it is much harder to get the time decay rate by the direct spectrum method due to the slower convergence rate of the fundamental solution in the 1D case. To overcome this main difficulty, we need to first carry out time-weighted energy estimates for higher order derivatives, and based on these time-weighted estimates, we can close the a priori assumptions and get the optimal time decay rate by the spectrum analysis method. Moreover, due to the non-conserved form and insufficient decay rate of the coupled drag force terms between the two-phase flows, we essentially need to use the momentum variables (m= ρ u, M=nω), not the velocity variables (u, ω), in the spectrum analysis to fully cancel out those non-conserved and insufficiently decaying drag force terms.
Global existence and optimal time decay rate to one-dimensional two-phase flow model
August 1, 2023
====================================================================================
§ INTRODUCTION AND MAIN RESULTS
§.§ Introduction
Two-phase flow models appear in a large number of important applications in nature and engineering <cit.>. In this paper, we are concerned with the two-phase flow model
described by the following compressible Euler equations coupled with compressible Navier-Stokes equations with density-dependent viscosities
{[ ρ_t+ div_x (ρ u)=0,; (ρ u)_t+ div_x (ρ u⊗ u) +∇_x p(ρ)=
ρ n(ω-u),; n_t+ div_x (n ω)=0,; (n ω)_t+ div_x (n ω⊗ω) +∇_x n=div_x(n 𝔻(ω))-ρ n(ω-u), ].
through the relaxation drag force on the momentum equations, where the spatial variable x∈ℝ^3, the time variable t>0, and (ρ(t,x), u(t,x)) and (n(t,x), ω(t,x)) are respectively the density and velocity of the two fluids. The pressure p is given by the γ-law:
p(ρ)= aρ^γ, where a>0 is the fluid constant and γ> 1 is the adiabatic exponent, and 𝔻(ω):=(∇ω+(∇ω)^t)/2 is the deformation tensor, with (∇ω)^t the transpose of the matrix ∇ω.
The two-fluid system (<ref>) can be derived from Chapman-Enskog expansion of the fluid-particle model consisting of the compressible Euler equations for fluids coupled with the Vlasov-Fokker-Planck equation for particles, through the relaxation drag force on the momentum equation and the Vlasov force on the Fokker-Planck equation (<cit.>):
{[ ρ_t + div_x(ρ u) = 0,; (ρ u)_t + div_x(ρ u⊗ u) + ∇_xp(ρ) = ∫_ℝ^3ρ(v - u)f dv,; f_t + v·∇_xf =div_v(ρ(v-u)f + ∇_vf). ].
A large amount of literature is dedicated to the well-posedness and asymptotic behavior of the isentropic compressible Navier-Stokes equations and the isentropic Euler equations with damping, respectively. We first recall some results on the existence of solutions to the isentropic compressible Navier-Stokes equations in the one-dimensional case. Kazhikhov and Shelukhin <cit.> first studied global weak solutions for smooth initial data tending to the equilibrium state. In <cit.> Shelukhin investigated the case when the initial data is discontinuous, and Serre <cit.> and Hoff <cit.> continued these works. First results dealing with vanishing initial density were also obtained by Shelukhin <cit.>. Hoff <cit.> extended the previous results by proving the existence of global weak solutions with large discontinuous initial data having different limits at x=±∞.
For the multi-dimensional case, the local existence and uniqueness of classical solutions was proved in <cit.> in the absence of vacuum and in <cit.> for the case in which the initial density need not be positive and may vanish in open sets. Global smooth solutions were first obtained by Matsumura-Nishida <cit.> for initial data close to a non-vacuum equilibrium in H^s. Later, Hoff <cit.> studied global weak solutions for discontinuous initial data. The global weak solution was first obtained by Lions in <cit.>, and the results were later refined by Feireisl et al. (<cit.> and <cit.>). Jiang-Zhang <cit.> examined the initial value problem for the compressible Navier-Stokes equations in two and three space dimensions and proved the existence of global-in-time spherically symmetric weak solutions. Huang-Li-Xin <cit.> established the global existence and uniqueness of classical solutions to the Cauchy problem for the isentropic compressible Navier-Stokes equations in three spatial dimensions with smooth initial data that are of small energy but possibly large oscillations, with a constant state as far field, which could be either vacuum or non-vacuum. Concerning the uniqueness of the solution, Solonnikov <cit.> obtained the existence of a strong solution for smooth initial data in finite time. However, the regularity may blow up when the density approaches the vacuum.
There is an extensive literature on both the Cauchy problem and the initial-boundary value problem for the compressible Euler equations with damping. For the one-dimensional case, the readers are referred to <cit.> and the references therein. For the multi-dimensional case, Wang-Yang <cit.> proved the global existence and asymptotic behavior for the Cauchy problem by the Green function method. Sideris-Thomases-Wang <cit.> showed that the damping term prevents the development of singularities for small-amplitude classical solutions in three-dimensional space, using an equivalent reformulation of the Cauchy problem to obtain effective energy estimates.
When these two types of equations are coupled together, there are only a few results on the global existence and large time behavior of the two-phase flow. We recall some results on the two-phase fluid model in the multi-dimensional case. Choi <cit.> proved the global existence of a unique strong solution for the two-phase flow model in ℝ^3 and 𝕋^3 and obtained the exponential decay rate of the solution to a given constant equilibrium in 𝕋^3, but this method cannot be applied to the whole space. Recently, based on the Hodge decomposition, a low-frequency and high-frequency decomposition, delicate spectral analysis and energy methods, Wu-Zhang-Zou <cit.> proved the optimal convergence rates of the solutions to the two-phase fluid model in the whole space. Wu-Tang-Zhang <cit.> analyzed the Green's function and used the classical H^s energy method to derive the optimal decay rate of the classical solution. However, to the best of our knowledge, there are few results for the one-dimensional case, especially for the asymptotic behavior.
Moreover, as far as we know, there are no results on the optimal time decay rate of the solution to the one-dimensional (1D) two-fluid system. In the 1D setting, the compressible Euler-Navier-Stokes system takes the form
{[ ρ_t + (ρ u)_x = 0,; (ρ u)_t + (ρ u^2+p(ρ))_x = ρ n(ω-u),; n_t + (nω)_x=0,; (nω)_t + (nω^2+n)_x =(nω_x)_x + ρ n(u-ω), ].
where (ρ(x,t), u(x,t)) and (n(x,t), ω(x,t)) are respectively the density and velocity of the two fluids; they are the unknown functions of x∈ℝ and t>0, and the pressure p satisfies the γ-law:
p(ρ)= ρ^γ, with γ> 1 the adiabatic exponent.
We study the coupled system (<ref>) with initial data:
(ρ,u,n,ω)(x,0)=(ρ_0,u_0,n_0,ω_0)(x) ⟶ (ρ_*, 0, n_*, 0), x→±∞,
where the constant equilibrium state (ρ_*, 0, n_*, 0) with ρ_*>0, n_*>0 is prescribed.
In this paper, under the hypothesis that the initial value is a small perturbation, in the H^1 norm, of a constant equilibrium state, with no restriction on the higher order derivatives, we first obtain the entropy estimates and then the derivative estimates; in particular, the density n has lower and upper bounds, which is crucial for the subsequent estimates. Then, by virtue of the local well-posedness and the a priori estimates, using the standard continuity argument, we get the global-in-time solution to the system (<ref>), (<ref>). As mentioned above, for the asymptotic behavior of the solution, compared with the multi-dimensional case, it is much harder to get the time decay rate by the direct spectrum method due to the slower convergence rate of the fundamental solution in the 1D case. To overcome this main difficulty, we need to carry out time-weighted energy estimates for higher order derivatives, and based on these time-weighted estimates, we can close the a priori assumptions and get the optimal time decay rate by the spectrum analysis method. This is motivated by <cit.>, but we point out that our model is not a special case of the previous one, because our model does not satisfy the assumptions in <cit.>. Moreover, due to the non-conserved form and insufficient decay rate of the coupled drag force terms between the two-phase flows in the spectrum analysis, we essentially need to use the momentum variables (m= ρ u, M=nω), not the velocity variables (u, ω), to fully cancel out those non-conserved drag force terms.
We organize the article as follows. In Section <ref>, some notations, auxiliary lemmas and the main results are provided. In Section <ref>, we present the entropy estimate, the a priori estimates and the proof of Theorem <ref>. The spectrum analysis of the linear system and the time-weighted energy estimates are established in Section <ref>, and the proof of Theorem <ref> is given in Section <ref>.
§.§ Notations and Basic Lemmas
If f ∈ L^p(ℝ) we define the L^p norm of f by
f_L^p(ℝ) = (∫_ℝ |f(x)|^p dx)^1/p.
For p=2, we simply write · and for p=∞, ·_∞:=·_L^∞(ℝ).
Next we give two lemmas which play an important role in subsequent parts of this paper. The proof can be found in <cit.> and <cit.> respectively.
(Sobolev inequality)
For f∈ H^1(ℝ),
f_∞≤√(2)f^1/2f_x^1/2.
(Gagliardo-Nirenberg)
Let 1≤ q ≤ +∞ be a positive extended real quantity. Let j and m be non-negative integers such that j<m. Furthermore, let 1≤ r ≤ +∞ be a positive extended real quantity, p≥ 1 be real and θ∈ [0,1] such that the relations
1/p = j/n + θ(1/r- m/n)+ 1-θ/q, j/m≤θ≤ 1
hold. Then,
D^ju_L^p(ℝ^n)≤ CD^mu^θ_L^r(ℝ^n)u^1-θ_L^q(ℝ^n)
for any u∈ L^q(ℝ^n) such that D^mu ∈ L^r(ℝ^n), where the constant C≥ 0 depends on the parameters j, m, n, q, r, θ, but not on u.
§.§ Reformulation of The Problem
The main purpose of this subsection is to obtain a symmetric system. Introduce the sound speed
σ (ρ) = √(p'(ρ)) and set σ_* = σ (ρ_*). Following <cit.>, define
v = 2/γ -1 (σ (ρ) - σ_*),
then the system (<ref>) are transformed into the following system for C^1 solutions:
{[ v_t + σ_*u_x = -uv_x - γ -1/2vu_x,; u_t + σ_*v_x = -uu_x - γ -1/2vv_x + n(ω-u),; n_t+(nω)_x=0,; (nω)_t+(nω^2+n)_x=(nω_x)_x+ρ n(u-ω). ].
The initial condition (<ref>) becomes
(v, u, n, ω)|_t=0 = (v_0(x), u_0(x), n_0(x), ω_0(x)).
The proof of the following lemma is straightforward.
For any T>0, if (ρ,u,n,ω) ∈ C^1(ℝ×[0,T]) is a solution of Equation (<ref>) with n>0, then (v,u,n,ω)∈ C^1(ℝ×[0,T]) is a solution of Equation (<ref>) with γ-1/2v + σ_* >0.
Conversely, if (v,u,n,ω)∈ C^1(ℝ×[0,T]) is a solution of Equation (<ref>) with γ-1/2v + σ_* >0 and ρ = σ^-1(γ-1/2v + σ_*), then (ρ,u,n,ω) ∈ C^1(ℝ×[0,T]) is a solution of Equation (<ref>) with n>0.
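A brief sketch of the computation behind this equivalence, for the first equation of (<ref>) (the remaining equations are treated similarly): since σ(ρ)=√(p'(ρ)) gives σ'(ρ)=γ -1/2σ(ρ)/ρ, we have v_t=σ/ρρ_t, v_x=σ/ρρ_x and γ -1/2v=σ(ρ)-σ_*, so that
v_t + σ_*u_x + uv_x + γ -1/2vu_x = σ/ρρ_t + σ/ρ uρ_x + σ u_x = σ/ρ[ρ_t + (ρ u)_x] = 0,
which is (<ref>)_1 multiplied by σ/ρ.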
§.§ Main Results
There are two main results in this paper. Theorem <ref> gives the global existence for the Cauchy problem (<ref>) and (<ref>), which is obtained by the standard continuity argument; the readers are referred to <cit.>, and the proof is given in Section 2. From Theorem <ref>, we see that the solution decays to the constant equilibrium state, so a natural question is: what is the optimal decay rate? Theorem <ref> gives the answer. The proof is based on a new idea combining the time-weighted energy estimates and the spectral analysis, and is given in Section <ref>.
Define the solution space on the time interval [0,T] by
X(0,T) = {(ρ, u, n, ω) |(ρ -ρ_*, u, n-n_*) ∈(C(0,T;H^2(ℝ))∩ L^2(0,T;H^2(ℝ)))^3,
ω∈
C(0,T;H^2(ℝ))∩ L^2(0,T;H^3(ℝ))} .
(Global existence)
There exists a suitably small positive constant ϵ_0 and a positive constant C_0 such that if the initial value satisfies (ρ_0-ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ϵ_0 and (ρ_0xx, u_0xx, n_0xx, ω_0xx)≤ C_0, then the two phase flow system (<ref>), (<ref>) has a unique global-in-time solution (ρ, u, n, ω)∈ X(0,∞) satisfying
sup_t∈ [0,∞)[(ρ -ρ_*, u, n-n_*, ω)(·,t)_H^1^2 +(ρ_t, u_t)(·,t)^2]
+ ∫_0^∞[(ρ_x, u_x, n_x, u_t)^2 +ω_xx^2 + u-ω^2] dt ≤ Cϵ_0^2,
sup_t∈ [0,∞](ρ_xx, u_xx,n_xx, ω_xx, ρ_xt, u_xt)(t,·)^2
+ ∫_0^∞ ((ρ_xx, u_xx, n_xx, u_xt)^2 + ω_xxx^2 )dt ≤ C(ϵ_0^2+ C_0^2),
where C is a uniform positive constant. Consequently, the following time-asymptotic behavior holds true,
(ρ-ρ_*, u, n-n_*, ω)(t,·)_L^∞ + (ρ_x, u_x, n_x, ω_x)(t,·)_H^1⟶ 0 as t→∞.
(Optimal time decay rate)
Under the assumptions of Theorem <ref>, if additionally (ρ_0-ρ_*, u_0, n_0-n_*, ω_0)_L^1≤ϵ_0 and (ρ_0, u_0, n_0, ω_0)_xx_H^2≤ C_0, then the global solution obtained in Theorem <ref> has the following time decay rate:
{[ ∂_x^k(ρ - ρ_*, u, n-n_*, ω)≤ Cϵ_0(1+t)^-1/4 - k/2, k=0, 1,; ∂_x^2(ρ - ρ_*, u, n-n_*, ω)≤ CC_0(1+t)^-5/4. ].
and
∂_x^k(u- ω)≤ C(ϵ_0 +C_0) (1+t)^-3/4- k/2, k=0, 1.
From <ref> we see that u-ω decays faster than ρ-ρ_*, u, n-n_*, ω; the reason for this phenomenon is most likely the friction between the two fluids.
§ ENERGY ESTIMATES
§.§ Entropy Estimate and A Priori Estimates
(entropy estimate)
Assume that (ρ_0-ρ_*, u_0,n_0-n_*,ω_0)∈ L^2(ℝ). There exists a constant ϵ_0>0 such that if
ρ_0-ρ_*,u_0,n_0-n_*,ω_0≤ϵ_0, then the following estimate holds:
∫_ℝ[ρu^2/2 + nω^2/2 + (ρ -ρ_*)^2 + (n-n_*)^2] dx
+ ∫_0^t∫_ℝ nω_x^2 dxdτ + ∫_0^t∫_ℝρ n(u-ω)^2 dxdτ≤ Cϵ_0^2.
By virtue of (<ref>)_1, (<ref>)_2
we get
ρ u_t+ρ u u_x+p(ρ)_x=ρ n(ω-u).
Multiplying (<ref>)_1, (<ref>) by u^2/2 and u respectively, then adding them together yields
(ρu^2/2)_t+(ρ u u^2/2)_x+(ρ^γ-ρ_*^γ)_x u=ρ n(ω-u)u.
Multiplying (<ref>)_1 by γ/γ - 1(ρ^γ
-1 - ρ_*^γ - 1), we have
1/γ-1(ρ^γ-ρ_*^γ)_t-γ/γ-1ρ_*^γ-1(ρ-ρ_*)_t+1/γ-1(ρ^γ-ρ_*^γ)_x u + ( ρ^γ - ρ_*^γ) u_x
+ 1/γ-1(ρ^γ-ρ_*^γ) u_x-γ/γ-1ρ_*^γ-1(ρ u_x+ρ_xu)
+γ/γ-1ρ_*^γ u_x=0.
Combining (<ref>) and (<ref>) gives
[ρu^2/2 + 1/γ-1ρ^γ -1/γ-1ρ_*^γ - γ/γ-1ρ_*^γ-1 (ρ-ρ_*)]_t
+ [ρ u u^2/2 + (ρ^γ-ρ_*^γ)u
+ 1/γ-1(ρ^γ-ρ_*^γ)u + γ/γ-1ρ_*^γ u -γ/γ-1ρ_*^γ-1ρ u]_x = ρ n(ω-u)u.
Similarly, in view of (<ref>)_3 and (<ref>)_4, this implies
nω_t + nωω_x + n_x = (nω_x)_x + ρ n(u-ω).
Multiplying (<ref>), (<ref>)_3 by ω and ω^2/2 respectively, then adding them together yields
(n ω^2/2)_t + (nωω^2/2)_x + (n-n_*)_xω - (nωω_x)_x + nω_x^2 = ρ n(u - ω) ω.
Multiplying (<ref>)_3 by [(1 + ln n) - (1+ ln n_*)] gives
[(n ln n - n_*ln n_*) - (1 + ln n_*)(n - n_*)]_t +(n ln n - n_*ln n_*)_xω
- (1 + ln n_*) (n_x ω + nω_x) + (n ln n-n_*ln n_*)ω_x + (n_* + n_*ln _*)ω_x + (n-n_*)ω_x = 0.
Adding (<ref>) and (<ref>) together, we have
[nω^2/2 + n ln n - n_*ln n_* - (1 + ln n_*)(n - n_*)]_t + [(n - n_*)ω
+(n ln n - n_*ln n_*)ω - nωω_x - (1 + ln n_*) nω]_x +n ω_x^2 = ρ n(u-ω)ω.
Combining (<ref>) and (<ref>) then integrating over ℝ×[0,t], we obtain
∫_ℝ[ρu^2/2 + 1/γ-1ρ^γ -1/γ-1ρ_*^γ - γ/γ-1ρ_*^γ-1 (ρ-ρ_*)
+ nω^2/2 + (n ln n - nln n_*) - (1 + ln n_*)(n - n_*)] dx
+ ∫_0^t∫_ℝ(nω_x^2 +ρ n(u-ω)^2) dxdτ
= ∫_ℝ [ρ_0u_0^2/2 + 1/γ-1ρ_0^γ -1/γ-1ρ_*^γ - γ/γ-1ρ_*^γ-1 (ρ_0-ρ_*)
+ n_0ω_0^2/2 + (n_0ln n_0 - n_0ln n_*) - (1 + ln n_*)(n_0 - n_*) ] dx.
Consequently, we have
∫_ℝ(ρu^2/2 + nω^2/2 + (ρ -ρ_*)^2 + (n-n_*)^2) dx
+ ∫_0^t∫_ℝ(nω_x^2 + ρ n(u-ω)^2) dxdτ
≤ C(u_0^2 + ω_0^2+ ρ_0 -ρ_*^2 + n_0 - n_*^2)
≤ Cϵ_0^2.
§.§ Estimate of n_x
Assume that (ρ_0-ρ_*,u_0,n_0-n_*,ω_0)∈ L^2(ℝ), n_0x∈ L^2(ℝ). There exists a constant ϵ_0>0 such that if
ρ_0-ρ_*,u_0,n_0-n_*,ω_0+ n_0x≤ϵ_0. Then the following estimate holds:
∫_ℝn_x^2/n dx + ∫_0^t∫_ℝn_x^2/n dxdτ≤ C ε_0^2
Divided (<ref>)_3 by n, then differentiating with x and multiplying by n we obtain
n(ln n)_xt + nω (ln n)_xx + (nω_x)_x = 0.
Adding (<ref>) and (<ref>) together yields
n(ω + (ln n)_x)_t + nω(ω +(ln n)_x)_x + n_x = ρ n(u-ω)
Multiplying (<ref>), (<ref>)_3 by ω + (ln n)_x and [ω + (ln n)_x]^2/2 respectively, then summing them up we have
[n (ω + (ln n)_x)^2/2]_t + [nω(ω + (ln n)_x)^2/2]_x + ω n_x + n_x^2/n
= ρ n(u-ω)[ω + (ln n)_x].
Multiplying (<ref>)_3 by (1 + ln n) gives
[n ln n - n_*ln n_* - (1+ln n_*)(n-n_*)]_t + [(ln n- ln n_*) (nω)]_x - n_xω = 0.
Combining (<ref>) and (<ref>) and integrating over ℝ× [0,t] by parts we have
∫_ℝ[n (ω + (ln n)_x)^2/2 + n ln n - n_*ln n_* - (1+ln n_*)(n-n_*)] dx
+ ∫_0^t∫_ℝn_x^2/n dxdτ = ∫_ℝ[n_0(ω_0 + (ln n_0)_x)^2/2 + n_0ln n_0 - n_*ln n_*
- (1+ln n_*)(n_0-n_*)] dx + ∫_0^t∫_ℝρ n(u-ω)[ω + (ln n)_x] dxdτ.
Combining Proposition <ref> and (<ref>), by virtue of Young's inequality leads to
∫_ℝ[n (ω + (ln n)_x)^2/2 + n ln n - n_*ln n_* - (1+ln n_*)(n-n_*)]dx
+ ∫_0^t∫_ℝn_x^2/n dxdτ≤∫_ℝ[n_0(ω_0 + (ln n_0)_x)^2/2 + n_0ln n_0 - n_*ln n_*
- (1+ln n_*)(n_0-n_*)]dx + C(u_0^2 + ω_0^2 + ρ_0 -ρ_*^2 + n_0 - n_*^2)
therefore,
∫_ℝn_x^2/n dx + ∫_0^t∫_ℝn_x^2/n dxdτ
≤ C(n_0)_x^2 + C(u_0^2 + ω_0^2 + ρ_0 -ρ_*^2 + n_0 - n_*^2)
≤ Cϵ_0^2
The density n has upper and lower bounds; in other words, there exist two positive constants B_1, B_2, independent of t, such that
B_1 ≤ n(t,x) ≤ B_2
Using Cauchy-Schwarz inequality and Lemma <ref> one gets
|(√(n)- √(n_*))^2| = |∫_-∞^x(√(n)-√(n_*))_x^2 dx| = ∫_-∞^x2(√(n)-√(n_*))(√(n))_x dx
≤ 2(∫_-∞^∞(√(n))_x^2 dx)^1/2(∫_-∞^∞(√(n)- √(n_*))^2 dx)^1/2
≤ C(∫_-∞^∞(√(n)- √(n_*))^2 dx)^1/2
≤ C(∫_|n-n_*|≥1/2n_*(√(n)- √(n_*))^2 dx)^1/2
+ C(∫_|n-n_*|≤1/2n_*(√(n)- √(n_*))^2 dx)^1/2.
If n ≥3/2n_* or n ≤1/2n_*,
since
lim_n→∞(√(n)-√(n_*))^2/(n-n_*)^2 = 0
and
lim_n→ 0(√(n)-√(n_*))^2/(n-n_*)^2 =1/n_*,
we have
∫_|n-n_*|≥1/2n_*(√(n)- √(n_*))^2 dx ≤ C∫_|n-n_*|≥1/2n_*(n-n_*)^2 dx
≤ C∫_ℝ (n-n_*)^2 dx ≤ Cϵ_0^2.
If 1/2n_* ≤ n ≤3/2n_*,
∫_|n-n_*|≤1/2n_*(√(n)- √(n_*))^2 dx ≤ C∫_|n-n_*|≤1/2n_*(n-n_*)^2 dx
≤ C∫_R (n-n_*)^2 dx ≤ Cϵ_0^2.
Plugging these estimates into (<ref>) and taking square root we can obtain
|(√(n)- √(n_*))| ≤ Cϵ_0.
If we choose ϵ_0 ≤√(n_*)/2C, we complete the proof.
§.§ Estimate of ω_x
Assume that (ρ_0-ρ_*,u_0,n_0-n_*,ω_0)∈ L^2(ℝ), (n_0, ω_0)_x ∈ L^2(ℝ), there exists a constant ϵ_0>0 such that if
(v_0,u_0,n_0-n_*,ω_0) + (n_0, ω_0)_x≤ϵ_0. Then the following estimate holds:
sup_t∈ [0,∞)ω_x^2(t,·) + ∫_0^∞ω_xx^2 dt ≤ Cϵ_0^2.
Dividing (<ref>) by n and multiplying by -ω_xx, a direct computation yields
-(ω_xω_t)_x + ω_x ω_xt - ωω_xω_xx + (ln n)_xω_xx = -ω_xx^2 +(ln n)_xω_xω_xx +ρ (u-ω)ω_xx.
Integrating over ℝ× (0,t) we have
∫_ℝω_x^2/2dx + ∫_0^t∫_ℝω_xx^2dxdτ = ∫_ℝω_0x^2/2dx -∫_0^t∫_ℝ (ln n)_xω_xω_xx dxdτ +
∫_0^t∫_ℝωω_xω_xxdxdτ
+ ∫_0^t∫_ℝ(ln n)_xω_xxdxdτ + ∫_0^t∫_ℝρ (u-ω)ω_xx dxdτ
With the aid of Corollary <ref>, Lemma <ref> and Young's inequality one gets
∫_0^t∫_ℝ (ln n)_xω_xω_xx dxdτ≤∫_0^tω_x_L^∞ω_xx(ln n)_x dτ
≤ C ∫_0^tω_x^1/2ω_xx^3/2≤δ∫_0^tω_xx^2dt + C∫_0^tω_x^2 dτ.
Similarly,
∫_0^t∫_ℝωω_xω_xxdxdτ≤δ∫_0^tω_xx^2 dt + C
∫_0^tω_x^2 dτ,
∫_0^t∫_ℝ(ln n)_xω_xxdxdτ≤δ∫_0^tω_xx^2 dτ + C∫_0^tn_x^2 dτ,
∫_0^t∫_ℝρ (u-ω)ω_xx dxdτ≤δ∫_0^tω_xx^2 dτ + C∫_0^tu - ω ^2 dτ.
Plugging these estimates into (<ref>), let δ small enough and by virtue of Proposition <ref> and Lemma <ref>, we deduce that
sup_t∈ [0,∞)ω_x^2(t,·) + ∫_0^∞ω_xx^2 dt ≤ Cϵ_0^2.
§.§ Estimate of u_x, v_x, u_t, v_t
Assume
(v_0,u_0,n_0-n_*,ω_0)_H^1 + (v_0, u_0)_t≤ϵ_0. For any given T>0, suppose that (v, u, n,ω) is the solution of the Cauchy problem (<ref>), (<ref>) defined for (x,t)∈ℝ× [0,T). There exist two positive constants E_0 and C_1, where E_0 ≤1/32√(C)(1+C_0), C_1 = 2√(C)(1 + C_0), such that if sup_t∈ [0,T]u_x,v_x,u_t,v_t≤ E_0, sup_t∈ [0,T]u_xx, v_xx≤ C_1, then the following a priori estimate holds for t∈ [0,T)
(u_x, v_x, u_t, v_t)^2 + ∫_0^t(u_x, u_t)^2 dτ≤ Cϵ_0^2.
Differentiating (<ref>)_1,(<ref>)_2 with x, multiplying them by v_x, u_x respectively, then adding them together and integrating by parts, we can get
d/dt∫_ℝ1/2(u_x^2 + v_x^2) dx + ∫_ℝ nu_x^2 dx = -1/2∫_ℝ u_x^3 dx
-γ/2∫_ℝ u_xv_x^2 dx
+ ∫_ℝ n_x(ω - u)u_x dx + ∫_ℝ nω_xu_x dx.
By using Lemma <ref>, we obtain
∫_ℝ u_x^3 dx ≤ C u_x_L^∞u_x^2≤ CE_0^1/2C_1^1/2u_x^2.
The equation (<ref>)_2 implies that
|v_x | ≤ C(|u_t| + |uu_x| + |n(ω - u)|) ≤ C(|u_t| + |u_x| + |(ω - u)|),
consequently,
v_x^2≤ C(u_t^2 + u_x^2 + (ω - u)^2),
therefore,
∫_ℝ u_xv_x^2 dx ≤u_x_∞u_t^2 + u_x_∞u_x^2 + u_x_∞ω -u^2
≤ C E_0^1/2C_1^1/2 (u_t^2 + u_x^2 + ω - u^2).
Similarly, by Lemma <ref> and Young's inequality, it follows that
∫_ℝ n_x(ω - u)u_x dx ≤δu_x^2 + Cϵ_0n_x^2,
∫_ℝ nω_xu_x dx ≤δu_x^2 + Cω_x^2.
Plugging these estimates into (<ref>) and choosing δ sufficiently small yields
d/dt∫_ℝ1/2(u_x^2 + v_x^2) dx + ∫_ℝ nu_x^2 dx ≤ CE_0^1/2C_1^1/2 (u_x^2 + u_t^2)
+ C(ω_x^2 + n_x^2 + ω - u^2).
To estimate u_t^2, differentiating (<ref>)_1, (<ref>)_2 with respect to the time variable t, multiplying them by v_t, u_t respectively, then adding them together and integrating by parts gives
d/dt∫_ℝ1/2(u_t^2 + v_t^2) dx + ∫_ℝ nu_t^2 dx = -1/2∫_ℝ u_xu_t^2 dx -
γ/2∫_ℝ u_xv_t^2 dx - ∫_ℝ u_tv_tv_x dx + ∫_ℝ n_t(ω - u)u_t dx + ∫_ℝ nω_tu_t dx.
Using Lemma <ref> and the hypothesis of Lemma <ref> leads to
∫_ℝ u_xu_t^2 dx ≤ Cu_x_∞u_t^2≤ Cu_x^1/2u_xx^1/2u_t^2≤ CE_0^1/2C_1^1/2u_t^2.
Owing to (<ref>)_1, we have
|v_t| ≤ C(|u_x| + |uv_x|) ≤ C(|u_x| + |v_x|),
and thus,
|v_t|^2≤ C(|u_x|^2 + |v_x|^2),
therefore,
∫_ℝ u_xv_t^2 dx ≤ C E_0^1/2C_1^1/2 (u_t^2 + u_x^2).
Similarly, with the aid of (<ref>), (<ref>) and Young's inequality, we can get
∫_ℝ u_tv_tv_x dx
≤ CE_0^1/2C_1^1/2u_t^2 + δu_t^2 + CE_0 C_1 u_x^2 + CE_0 C_1ω-u^2.
Noting that n_t = -(nω)_x = -(n_xω + nω_x), we have
∫_ℝ n_t(ω - u)u_t dx = - ∫_ℝ (n_xω + nω_x) (ω - u)u_t dx
= - ∫_ℝ [(n_xω(ω - u)u_t + nω_x)(ω - u)u_t] dx
≤δu_t^2 + Cϵ_0C_1ω -u^2.
Similarly, noting that
nω_t = -nωω_x - n_x - n_xω_x + nω_xx + ρ n(u - ω),
thus,
∫_ℝ nω_tu_t dx ≤δu_t^2 + C(ω_x^2 + n_x^2 + ω_xx^2 + u-ω^2).
Plugging these estimates into (<ref>) and choosing δ sufficiently small, we can get
d/dt∫_ℝ (u_t^2 + v_t^2) dx + ∫_ℝ nu_t^2 dx ≤ C E_0^1/2C_1^1/2 (u_t^2 +u_x^2 )
+ C(ω_x^2 + n_x^2 + ω_xx^2 + u-ω^2).
Adding (<ref>) and (<ref>) together integrating with t over [0,t] yields
∫_ℝ(u_x^2 + v_x^2 + u_τ^2 + v_τ^2) dx+ ∫_0^t∫_ℝ (u_x^2 +u_τ^2)dxdτ
≤∫_ℝ(u_0x^2 + v_0x^2 + u_0τ^2 + v_0τ^2) dx +CE_0^1/2C_1^1/2∫_0^t(u_τ^2 +u_x^2)dτ
+ C∫_0^t(ω_x^2 + n_x^2 + ω_xx^2 + u-ω^2)dτ.
By virtue of Proposition <ref>, Lemma <ref>, <ref>, and the assumption of Lemma <ref>, we complete the proof.
§.§ Estimate of n_xx, ω_xx
Assume
(v_0,u_0,n_0-n_*,ω_0)_H^1(ℝ) + (v_0, u_0)_t≤ϵ_0, (n_0, ω_0)_xx≤ C_0. For any given T>0, suppose that (v, u, n,ω) is the solution to Cauchy problem (<ref>), (<ref>) defined for (x,t)∈ℝ× [0,T). There exists a positive constant C_1, where C_1 ≤1/16ϵ_0, such that if
sup_t∈ [0,T](u_xx, v_xx,n_xx,ω_xx)≤ C_1,
then we have
n_xx^2 + ω_xx^2 + ∫_0^t (n_xx^2 + ω_xxx^2) dτ≤ C(ϵ_0^2 + C_0^2).
Differentiating (<ref>)_3 twice with respect to x, multiplying by n_xx, and then dividing by n yields
1/2n(n_xx^2)_t + 1/nn_xxn_xxxω + 3/nn_xx^2ω_x + 3/nn_xn_xxω_xx + n_xxω_xxx = 0.
Dividing (<ref>) by n, differentiating it with respect to x, then multiplying by n_xx, we can get
ω_xtn_xx + ω_x^2n_xx + ωω_xxn_xx + 1/nn_xx^2 - 1/nn_x^2n_xx = 1/nω_xn_xx^2 + ω_xxxn_xx
- 1/n^2ω_xn_x^2n_xx + ρ_x(u-ω)n_xx + ρ (u-ω)_xn_xx.
Adding (<ref>) with (<ref>) together gives
1/2n(n_xx^2)_t + 1/nn_xx^2 = -1/nn_xxn_xxxω - 2/nω_xn_xx^2 - 3/nn_xn_xxω_xx - ω_xtn_xx
- ω_x^2n_xx - ωω_xxn_xx + 1/nn_x^2n_xx
- 1/n^2ω_xn_x^2n_xx + ρ_x(u-ω)n_xx
+ ρ (u-ω)_xn_xx.
By a direct computation using (<ref>)_3, it is easy to see that
(1/2nn_xx^2)_t + 1/nn_xx^2 = - 1/nω_xn_xx^2 - 3/nn_xn_xxω_xx - (ω_xn_xx)_t + (ω_xn_xt)_x
+ ω_xxn_xt - ω_x^2n_xx - ωω_xxn_xx +1/nn_x^2n_xx - 1/n^2ω_xn_x^2n_xx
+ ρ_x(u-ω)n_xx + ρ (u-ω)_xn_xx.
Integrating with respect to x and t over ℝ× [0, t], we get
∫_ℝ1/2nn_xx^2 dx + ∫_0^t∫_ℝ1/nn_xx^2 dxdτ = ∫_ℝ1/2n_0n_0xx^2 dx + ∫_0^t∫_ℝ[-1/nω_xn_xx^2
-3/nn_xn_xxω_xx + ω_xxn_xt - ω_x^2n_xx - ωω_xxn_xx + 1/nn_x^2n_xx - 1/n^2ω_xn_x^2n_xx
+ ρ_x(u-ω)n_xx + ρ (u-ω)_xn_xx]dxdτ + ∫_ℝω_xn_xx dx + ∫_ℝω_0xn_0xx dx.
With the aid of Corollary <ref>, Lemma <ref> and <ref>, we have
∫_0^t∫_ℝ1/nω_xn_xx^2 dxdτ≤ C ∫_0^tω_x_∞n_xx^2 dτ
≤ C ∫_0^tω_x^1/2ω_xx^1/2n_xx^2 dτ≤ Cϵ_0^1/2 C_1^1/2∫_0^tn_xx^2 dτ.
Similarly,
∫_0^t∫_ℝ3/nn_xn_xxω_xx dxdτ≤∫_0^tδn_xx^2 dτ + Cϵ_0^2C_1^2∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝω_x^2n_xx dxdτ≤δn_xx^2 + Cϵ_0C_1∫_0^tω_x^2 dτ,
∫_0^t∫_ℝωω_xxn_xx dxdτ≤δn_xx^2 + Cϵ_0^2∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝ1/nn_x^2n_xx dxdτ≤δn_xx^2 + Cϵ_0^4∫_0^tn_x^2 dτ,
∫_0^t∫_ℝ1/n^2ω_xn_x^2n_xx dxdτ≤ Cϵ_0^2∫_0^tn_xx^2 dτ,
∫_0^t∫_ℝρ_x(u-ω)n_xx dxdτ≤δn_xx^2 + Cϵ_0C_1
∫_0^tu-ω^2 dτ,
∫_0^t∫_ℝρ (u-ω)_xn_xx dxdτ≤δ∫_0^tn_xx^2 dτ + C∫_0^t (u_x^2 + ω_x^2) dτ,
the last two terms have estimates
∫_ℝω_xn_xx dx ≤δn_xx^2 + Cω_x^2,
∫_ℝω_0xn_0xx dx ≤ C(ω_0x^2 + n_0xx^2),
where we used Cauchy-Schwarz inequality.
Finally we estimate the term ∫_0^t∫_ℝω_xxn_xτ dxdτ. By virtue of (<ref>)_3, we have
n_xt = -(nω)_xx = - (n_xxω + 2n_xω_x + nω_xx).
It is easy to see that
∫_0^t∫_ℝω_xxn_xτ dxdτ≤δ∫_0^tn_xx^2 dτ + C ϵ_0∫_0^tω_x^2 dτ + C∫_0^tω_xx^2 dτ.
Plugging these estimates into (<ref>), and choosing δ small enough we have
n_xx^2 + ∫_0^tn_xx^2 dτ≤ C(ω_0x^2 + n_0xx^2)+ C∫_0^t (u_x^2 +ω_x^2
+ω_xx^2) dτ + Cϵ_0^4∫_0^tn_x^2 dτ + C(ϵ_0^2 + ϵ_0^1/2C_1^1/2)∫_0^tn_xx^2 dτ
+ Cϵ_0C_1∫_0^tu-ω^2 dτ + Cω_x^2.
Dividing (<ref>) by n, differentiating it with respect to x, then multiplying by -nω_xxx, we can get
1/2n(ω_xx)_t^2 + nω_xxx^2 =
n(ω_xtω_xx)_x + nω_x^2ω_xxx + nωω_xxω_xxx
+ n_xxω_xxx - 1/nn_x^2ω_xxx
-ω_xn_xxω_xxx + 1/nω_xn_x^2ω_xxx
-nρ_x(u-ω)ω_xxx - nρ (u-ω)_xω_xxx.
Adding (<ref>) with (<ref>) together, by a direct computation we obtain
(1/2nn_xx^2)_t - 1/2n_xx^2(1/n)_t + (1/2nω_xx^2)_t - 1/2n_tω_xx^2 + nω_xxx^2 = - 1/nn_xxn_xxxω
- 3/nn_xx^2ω_x - 3/nn_xn_xxω_xx +
n(ω_xtω_xx)_x + nω_x^2ω_xxx + nωω_xxω_xxx
- 1/nn_x^2ω_xxx
-ω_xn_xxω_xxx + 1/nω_xn_x^2ω_xxx-nρ_x(u-ω)ω_xxx
- nρ (u-ω)_xω_xxx.
Integrating (<ref>) with x and t over ℝ× [0,t] and after integrating by parts yields
∫_ℝ (1/2nn_xx^2 + 1/2nω_xx^2) dx + ∫_0^t∫_ℝ nω_xxx^2 dxdτ = ∫_ℝ (1/2n_0n_0xx^2 + 1/2n_0ω_0xx^2) dx
∫_0^t∫_ℝ[ -2/nω_xn_xx^2 - 1/2n_xωω_xx^2
- 1/2nω_xω_xx^2 - 3/nn_xn_xxω_xx + nω_x^2ω_xxx
+ nωω_xxω_xxx - 1/nn_x^2ω_xxx - ω_xn_xxω_xxx + 1/nω_xn_x^2ω_xxx - nρ_x(u-ω)ω_xxx
- nρ (u-ω)_xω_xxx] dxdτ.
Using Corollary <ref>, Lemma <ref> and <ref> we have
∫_0^t∫_ℝ2/nω_xn_xx^2 dxdτ≤ C∫_0^tω_x_L^∞n_xx^2 dτ
≤ C∫_0^tω_x^1/2ω_xx^1/2n_xx^2 dτ≤ Cϵ_0^1/2C_1^1/2∫_0^tn_xx^2 dτ.
Similarly,
∫_0^t∫_ℝ1/2n_xωω_xx^2 dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^4∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝ1/2nω_xω_xx^2 dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^4/3∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝ nω_x^2ω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^2∫_0^t(ω_x^2 + ω_xx^2)dτ,
∫_0^t∫_ℝ3/nn_xn_xxω_xx dxdτ≤δ∫_0^tn_xx^2 dτ + Cϵ_0^2C_1^2∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝ nωω_xxω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^2∫_0^tω_xx^2 dτ,
∫_0^t∫_ℝ1/nn_x^2ω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^2∫_0^t(n_x^2 + n_xx^2)dτ,
∫_0^t∫_ℝω_xn_xxω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0C_1∫_0^tn_xx^2 dτ,
∫_0^t∫_ℝ1/nω_xn_x^2ω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0^4∫_0^tn_xx^2 dτ,
∫_0^t∫_ℝ nρ_x(u-ω)ω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + Cϵ_0C_1 ∫_0^tu-ω^2 dτ,
∫_0^t∫_ℝ nρ (u-ω)_xω_xxx dxdτ≤δ∫_0^tω_xxx^2 dτ + C∫_0^t (u_x^2 + ω_x^2) dτ.
Plugging these estimates into (<ref>), we have
n_xx^2 + ω_xx^2 + ∫_0^tω_xxx^2 dτ
≤ C(n_0xx^2 + ω_0xx^2)
+ C∫_0^t(u_x^2 + ω_x^2) dτ + Cϵ_0^2∫_0^tn_x^2 dτ
+ Cϵ_0^1/2C_1^1/2∫_0^tn_xx^2 dτ + Cϵ_0^2/3∫_0^tω_xx^2 dτ + Cϵ_0C_1 ∫_0^tu-ω^2 dτ.
Putting (<ref>), (<ref>), Proposition <ref>, Lemma <ref>, <ref> and <ref> together, we obtain the result.
§.§ Estimate of u_xx, v_xx, u_xt, v_xt
Assume
(v_0,u_0,n_0-n_*,ω_0)_H^1(ℝ)≤ϵ_0, (u_0, v_0)_xx + (u_0, v_0)_xt≤ C_0. For any given T>0, suppose that (v, u, n,ω) is the solution to Cauchy problem (<ref>), (<ref>) defined for (x,t)∈ℝ× [0,T). There exists a positive constant C_1, where C_1 ≤1/16ϵ_0, such that if
sup_t∈ [0,T](u_xx, v_xx,n_xx,ω_xx)≤ C_1,
then we have
(u_xx, v_xx,u_xt,v_xt)^2 + ∫_0^t ((u_xx,u_xτ)^2) dτ
≤ C(ϵ_0^2+ C_0^2).
Differentiating (<ref>)_1, (<ref>)_2 twice with respect to x, multiplying them by v_xx, u_xx respectively, then adding them together and integrating by parts, we can get
1/2∫_ℝ (u_xx^2 + v_xx^2) dx + ∫_0^t∫_ℝ nu_xx^2 dx
= 1/2∫_ℝ (u_0xx^2 + v_0xx^2) dx + ∫_0^t∫_ℝ[-5/2 u_xu_xx^2
- 1/2(γ + 2)u_xv_xx^2
- (2γ - 1) v_xu_xxv_xx + n_xx(ω - u)u_xx + 2n_x(ω - u)_xu_xx + nω_xxu_xx] dxdτ.
Using Lemma <ref>, <ref> and the hypothesis leads to
∫_0^t∫_ℝ u_xu_xx^2 dxdτ≤ C∫_0^tu_x_∞u_xx^2 dτ≤ C∫_0^tu_x^1/2u_xx^1/2u_xx^2 dτ
≤ Cϵ_0^1/2C_1^1/2∫_0^tu_xx^2 dτ
Noting that
v_x ∼ u_t + uu_x + n(ω - u) ∼ u_t + u_x + (ω - u),
hence,
v_xx∼ u_xt + u_xx + n_x(ω - u) + n(ω - u)_x,
therefore,
|v_xx| ≤ C(|u_xt| + |u_xx| + |n_x(ω - u)| + |n(ω - u)_x|),
thus a similar estimation gives
∫_0^t∫_ℝ u_xv_xx^2 dxdτ≤ϵ_0^1/2C_1^1/2∫_0^t (u_xt^2 + u_xx^2) dτ
+ Cϵ_0^3/42C_1^3/2∫_0^tω - u^2 dτ + Cϵ_0C_1∫_0^t (ω_x^2 + u_x^2) dτ.
Similarly, using (<ref>) we conclude that
∫_0^t∫_ℝ v_xu_xxv_xx dxdτ
≤δ∫_0^tu_xx^2 dτ + Cϵ_0C_1 ∫_0^tu_xτ^2 dτ + Cϵ_0^1/2C_1^1/2∫_0^t |u_xx^2 dτ
+ Cϵ_0^2C_1^2∫_0^t(ω - u)^2 dτ + Cϵ_0C_1 ∫_0^t (ω_x^2 + u_x^2) dτ.
Applying Lemma <ref> and Young's inequality, we have
∫_0^t∫_ℝ n_xx(ω - u)u_xx dxdτ≤δ∫_0^tu_xx^2 dτ + Cϵ_0^2∫_0^tn_xx^2 dτ,
∫_0^t∫_ℝ n_x(ω - u)_xu_xx dxdτ≤δ∫_0^tu_xx^2 dτ + Cϵ_0C_1 ∫_0^t (ω_x^2 + u_x^2) dτ,
∫_0^t∫_ℝ nω_xxu_xx dxdτ≤δ∫_0^tu_xx^2 dτ + C∫_0^tω_xx^2 dτ.
Plugging these estimates into (<ref>) yields
u_xx^2 + v_xx^2 + ∫_0^tu_xx^2 dτ
≤ (u_0xx^2 + v_0xx^2) + Cϵ_0C_1∫_0^t (ω_x^2 + u_x^2) dτ
+ δ∫_0^tu_xx^2 dτ + Cϵ_0^1/2C_1^1/2∫_0^t(u_xx^2 + u_xτ^2) dτ + Cϵ_0^2∫_0^tn_xx^2 dτ
+ C∫_0^tω_xx^2 dτ
+ Cϵ_0^3/2C_1^3/2∫_0^tω - u^2 dτ.
On the other hand, to estimate ∫_0^tu_xτ^2 dτ, differentiating (<ref>)_1, (<ref>)_2 with respect to x and then t, multiplying them by v_xt, u_xt respectively, then adding them together and integrating by parts, we can get
1/2∫_ℝ (u_xt^2 + v_xt^2) dx + ∫_0^t∫_ℝ nu_xτ^2 dxdτ = 1/2∫_ℝ ((u_0)_xt^2 + (v_0)_xt^2) dx
+∫_0^t∫_ℝ[ -3/2u_xu_xτ^2 - γ/2 u_xv_xτ^2 - γ v_xu_xτv_xτ
- u_τ v_xxv_xτ - u_τu_xxu_xτ - γ - 1/2 v_τ u_xxv_xτ
- γ - 1/2 v_τv_xxv_xτ + n_xτ(ω - u )u_xτ + n_x(ω - u)_τ u_xτ + n_τ(ω-u)_xu_xτ + nω_xτu_xτ]dxdτ.
By Lemma <ref>, <ref> and the hypothesis one gets
∫_0^t∫_ℝ u_xu_xτ^2 dxdτ≤ C∫_0^tu_x_∞u_xτ^2 dτ
≤ C∫_0^tu_x^1/2u_xx^1/2u_xτ^2 dτ≤ Cϵ_0^1/2C_1^1/2∫_0^tu_xτ^2 dτ.
To estimate ∫_0^t∫_ℝ u_xv_xτ^2 dxdτ, note that
v_t ∼ u_x + v_x,
consequently,
v_xt∼ u_xx + v_xx,
therefore,
v_xt^2∼ u_xx^2 + v_xx^2,
hence,
∫_0^t∫_ℝ u_xv_xτ^2 dxdτ≤ Cϵ_0^1/2C_1^1/2∫_0^t(u_xx^2 +v_xx^2) dτ.
Thus we need to estimate ∫_0^tv_xx^2 dτ, note that
v_xx∼ u_xt + u_xx + n_x(ω - u) + n(ω - u)_x,
therefore,
∫_0^tv_xx^2 dτ≤ C∫_0^t (u_xτ^2 + u_xx^2 + ϵ_0^1/2C_1^1/2(ω - u)^2 + ω_x^2 + u_x^2) dτ,
thus, we have
∫_0^t∫_ℝ u_xv_xτ^2 dxdτ
≤ Cϵ_0^1/2C_1^1/2∫_0^t(u_xx^2+u_xτ^2 + ω_x^2 + u_x^2) dτ
+ Cϵ_0C_1∫_0^t(ω - u)^2 dτ
Similarly, we have
∫_0^t∫_ℝ v_xu_xτv_xτ dxdτ
≤δ∫_0^tu_xτ^2 dτ + Cϵ_0C_1 ∫_0^t (u_xx^2 + u_xτ^2) dτ
+ ϵ_0C_1∫_0^t ((ω - u)^2 + ω_x^2+ u_x^2) dτ,
∫_0^t∫_ℝ u_τ v_xxv_xτ dxdτ
≤δ∫_0^tu_xx^2 dτ + C ϵ_0^1/2C_1^1/2∫_0^t (u_xτ^2 + u_xx^2) dτ
+ C ϵ_0C_1 ∫_0^t (ω - u^2 +ω_x^2 + u_x^2) dτ,
∫_0^t∫_ℝ u_τ u_xxu_xτ dxdτ≤δ∫_0^tu_xt^2 dt + Cϵ_0^1/2C_1^1/2u_xx^2 dτ,
∫_0^t∫_ℝ v_τ u_xxv_xτ dxdτ
≤δ∫_0^tu_xx^2 dτ + C ϵ_0^1/2C_1^1/2∫_0^t (u_xτ^2 + u_xx^2) dτ
+ ϵ_0C_1∫_0^t (ω - u^2 + ω_x^2+ u_x^2) dτ,
∫_0^t∫_ℝ v_τ v_xxv_xτ dxdτ
≤δ∫_0^tu_xx^2 dτ + C ϵ_0^1/2C_1^1/2∫_0^t (u_xτ^2+ u_xx^2) dτ
+ ϵ_0^1/2C_1^1/2∫_0^t (ω - u^2+ ω_x^2 + u_x^2) dτ,
∫_0^t∫_ℝ n_xτ(ω - u )u_xτ dxdτ≤δ∫_0^tu_xτ^2 + Cϵ_0^4∫_0^tn_xx^2 dτ
+ Cϵ_0^2C_1^2∫_0^tω -u ^2 dτ + Cϵ_0^2∫_0^tω_xx^2 dxdτ.
Since,
∫_0^t∫_ℝ n_x(ω - u)_τ u_xτ dxdτ = ∫_0^t∫_ℝ n_xω_τ u_xτ dxdτ - ∫_0^t∫_ℝ n_xu_τ u_xτ dxdτ,
To estimate ∫_0^t∫_ℝ n_xω_τ u_xτ dxdτ, we note that
ω_t = -ωω_x - 1/nn_x + 1/nn_xω_x + ω_xx + ρ (u - ω),
hence, we conclude that
∫_0^t∫_ℝ n_x(ω - u)_τ u_xτ dxdτ
≤δ∫_0^tu_xτ^2 dτ + Cϵ_0^3/2C_1 ∫_0^tω_x^2 dτ + Cϵ_0^1/2C_1∫_0^t (n_x^2 + ω_xx^2
+ (u - ω)^2) dxdτ + Cϵ_0^2∫_0^tn_xx^2 dτ.
Similarly,
∫_0^t∫_ℝ n_τ (ω-u)_xu_xτ dxdτ
≤δ∫_0^tu_xτ^2 dτ + Cϵ_0C_1∫_0^t (ω_x^2 + u_x^2) dτ.
To estimate ∫_0^t∫_ℝ nω_xτu_xτ dxdτ,
we note that
ω_xt = -ω_x^2 -ωω_xx - 1/nn_xx -
1/n^2n_x^2 + 1/nn_xxω_x + ω_xxx -1/n^2n_x^2ω_x + 1/nn_xω_xx + ρ_x(u -ω)n_xx + ρ(u-ω)_xn_xx,
hence, we obtain
∫_0^t∫_ℝ nω_xτu_xτ dxdτ
≤δ∫_0^tu_xτ^2 dτ + Cϵ_0C_1 ∫_0^t (ω_x^2 + n_x^2) dτ
+ C(ϵ_0^2 + 1)∫_0^tω_xx^2 dτ + C(1 + ϵ_0C_1) ∫_0^tn_xx^2 dτ.
Plugging these estimates into (<ref>) yields
(u_xt,v_xt)^2 + ∫_0^tu_xτ^2 dτ≤u_0xt^2 + v_0xt^2
+ δ∫_0^t(u_xx^2 + u_xτ^2) dτ + C( ϵ_0^1/2C_1^1/2 + ϵ_0^3C_1) ∫_0^t (n_x^2 +ω_x^2 + u_x^2) dτ
+ C ϵ_0^1/2C_1^1/2∫_0^t (u_xτ^2 + u_xx^2) dτ + C(ϵ_0^2 + 1 + ϵ_0C_1)∫_0^t (n_xx^2 + ω_xx^2) dxdτ
+ Cϵ_0C_1∫_0^t(ω - u)^2 dτ.
Adding (<ref>) with (<ref>) together and choosing δ sufficiently small, we obtain
(u_xx,v_xx,u_xt,v_xt)^2 + ∫_0^t(u_xx^2 + u_xτ^2) dτ
≤(u_0xx,v_0xx,u_0xt,v_0xt)^2 + C( ϵ_0^1/2C_1^1/2 + ϵ_0^3C_1) ∫_0^t ((n_x,ω_x,u_x)^2) dτ
+ C ϵ_0^1/2C_1^1/2∫_0^t ((u_xτ,u_xx)^2) dτ + C(ϵ_0^2 + 1 + ϵ_0C_1 )∫_0^t ((n_xx,ω_xx)^2) dxdτ
+ Cϵ_0C_1 ∫_0^t(ω - u)^2 dτ.
Combining (<ref>) and Proposition <ref>, Lemma <ref>, <ref>, <ref> and <ref> yields
(u_xx,v_xx,u_xt,v_xt)^2 + ∫_0^t(u_xx,u_xτ)^2dτ≤ C(ϵ_0^2+ C_0^2).
Proposition <ref> gives the local existence for the Cauchy problem (<ref>) and (<ref>). For the Euler equations, this can be obtained by using the arguments in <cit.>; for the compressible Navier-Stokes equations, this can be found in <cit.>. For our model, which consists of Euler equations coupled with Navier-Stokes equations, this can be achieved by using the interaction arguments.
(Local existence)
Assume (ρ_0-ρ_*, u_0, n_0 - n_*, ω_0) ∈ H^2(ℝ), and satisfying the estimates (ρ_0-ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ P_1, (ρ_0, u_0, n_0, ω_0)_xx+ (ρ_0,u_0)_xt≤ P_2 for P_1, P_2 are two positive constants, then there exists a positive constant T_0 small enough such that
the two phase flow system (<ref>), (<ref>) has a unique solution (ρ, u, n, ω)∈ X(0,T) satisfying
sup_t∈ [0,T_0](ρ_0-ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ 2P_1
and
sup_t∈ [0,T_0](ρ_0, u_0, n_0, ω_0)_xx≤ 2P_2.
§.§ Proof of Theorem <ref>
If the initial data satisfies ϵ_0 ≤ min(1/2E_0, 1/2CE_0) , C_0 ≤ min(1/2C_1, 1/2√(C)C_1-1), then by Proposition <ref>, the local solution of (<ref>), (<ref>) exists in C[0,T_0;H^2(ℝ)] and has the estimate
sup_t∈ [0,T_0](ρ_0 - ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ 2ϵ_0 ≤ E_0,
sup_t∈ [0,T_0](ρ_0xx, u_0xx, ρ_0xt,u_0xt, n_0xx, ω_0xx)≤ 2C_0 ≤ C_1,
therefore by Proposition <ref> and Lemma <ref>- <ref>, the solution satisfies a priori estimates
sup_t∈ [0,T_0](ρ_0 - ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ Cϵ_0 ≤1/2E_0
and
sup_t∈ [0,T_0](ρ_0xx, u_0xx, ρ_0xt, u_0xt, n_0xx, ω_0xx)≤√(C)(ϵ_0 + C_0)≤1/2C_1
provided ϵ_0 ≤1/2CE_0 , C_0 ≤1/2√(C)C_1-1. Thus by Proposition <ref> the initial value problem (<ref>) for t ≥ T_0 with the initial data (ρ, u, n, ω)(T_0) has again a unique solution (ρ, u, n, ω) ∈ C[0,2T_0;H^2] satisfying the estimates
sup_t∈ [T_0,2T_0](ρ_0 - ρ_*, u_0, n_0 - n_*, ω_0)_H^1≤ E_0,
sup_t∈ [T_0,2T_0](ρ_0xx, u_0xx, ρ_0xt,u_0xt, n_0xx, ω_0xx)≤ C_1.
Then by (<ref>), (<ref>) and Lemma <ref>- <ref>, we have
sup_t∈ [0,2T_0](ρ - ρ_*, u, n - n_*, ω)_H^1≤ Cϵ_0 ≤1/2E_0
and
sup_t∈ [0,2T_0](ρ_xx, u_xx, ρ_xt,u_xt, n_xx, ω_xx)≤√(C)(ϵ_0 + C_0) ≤1/2C_1
provided ϵ_0 ≤1/2CE_0, C_0 ≤1/2√(C)C_1-1. Thus we can continue the same process for 0 ≤ t≤ nT_0, n=3, 4, 5, ⋯ and finally get a global solution (ρ, u, n, ω) ∈ X(0, +∞) satisfying
sup_t∈ [0,∞)[(ρ -ρ_*, u, n-n_*, ω)(·,t)_H^1^2 +(ρ_t, u_t)(·, t)^2]
+ ∫_0^∞[(ρ_x, u_x, n_x, ω_x)^2 +(u_t, ω_xx)^2 + u-ω^2] dt ≤ Cϵ_0^2,
sup_t∈ [0,∞](ρ_xx, u_xx,n_xx, ω_xx, ρ_xt, u_xt)(·, t)^2
+ ∫_0^∞ ((ρ_xx, u_xx, n_xx, u_xt)^2 + ω_xxx^2 )dt ≤ C(ϵ_0^2+ C_0^2).
§ DECAY ESTIMATES
§.§ Spectral Analysis
In this section, we carry out the spectral analysis of the linear system. Based on it we derive estimates for the linearized system.
To obtain a refined estimate, namely a higher decay rate, which plays a crucial role in the proof of the optimal decay rate for the nonlinear terms, as in (<ref>), we split the nonlinear term G into two parts: a part in conserved form and a non-conserved part, which is the essential difficulty. In order to cancel the non-conserved part, we define m := ρ u, M := nω, so that we can make full use of the symmetric form of the system, i.e., the relaxation drag force terms have opposite signs. Thanks to the upper and lower bounds of ρ and n, m and M are equivalent to u and ω, respectively. We linearize the system (<ref>) around the equilibrium state (ρ_*, 0, n_*,0) and obtain
{[ ρ_t + m_x = 0,; m_t + P'(ρ_*)ρ_x - ρ_*M + n_*m = -(mu)_x - (P'(ρ) - P'(ρ_*))(ρ - ρ_*)_x; + (ρ - ρ_*)M + (n_* - n)m,; n_t + M_x = 0,; M_t + n_x - M_xx - n_*m + ρ_* M = -(Mω)_x - (n_xω)_x + (n- n_*)m; + (ρ_* - ρ)M. ].
Without loss of generality, we take ρ_* = 1, n_* = 1 in Eq.(<ref>) and consider the linear system
{[ ρ_t + m_x = 0,; m_t + ρ_x - M + m = 0; n_t + M_x = 0,; M_t + n_x - M_xx - m + M = 0. ].
The Fourier transform of Eq. (<ref>) yields ∂_tÛ(ξ,t) = A(ξ)Û(ξ,t), with
Û(ξ, t) = (ρ̂(ξ, t), m̂(ξ, t), n̂(ξ, t), M̂(ξ, t))^⊤
and
A(ξ) =
([ 0 -i ξ 0 0; -i ξ -1 0 1; 0 0 0 -i ξ; 0 1 -i ξ -1-ξ^2 ]).
Now let us analyze the spectrum of A(ξ). The characteristic equation of A(ξ) is given by
|λ I - A(ξ)| = λ^4 + (ξ^2 + 2)λ^3 + 3ξ^2λ^2 + ξ^2(ξ^2 + 2)λ + ξ^4
and denote by λ_i(ξ) (1≤ i ≤ 4) the eigenvalues of matrix A, by a direct computation, we obtain
(1)There exist positive constants r_1 ≤ r_2 such that λ_i(ξ) (1≤ i ≤ 3) has the Taylor series expansion
λ_1 = -1/2ξ^2 + o(ξ^2),
λ_2 = iξ -1/4ξ^2 + o(ξ^2),
λ_3 = -iξ -1/4ξ^2 + o(ξ^2),
λ_4 = -2 + o(ξ^2)
for |ξ| ≤ r_1,
and
λ_1 = -ξ^2 + o(1/ξ),
λ_2 = iξ - 1/2 + o(1/ξ),
λ_3 = - iξ - 1/2 + o(1/ξ),
λ_4 = - 1 + o(1/ξ)
for |ξ| ≥ r_2.
(2)The matrix exponential e^tA(ξ) has the spectral resolution
e^tA(ξ) = ∑_j=1^4 e^tλ_j(ξ)P_j,
(3)P_j (1 ≤ j ≤ 4) has the estimate
|P_j| ≤ C
for |ξ| ≤ r_1 and |ξ | ≥ r_2, where |·| denotes the matrix norm.
(4)There exists a positive constant β_1 such that for |ξ| ≤ r_1,
Re λ_j(ξ) ≤ - β_1|ξ|^2 (1 ≤ j ≤ 4 ),
(5)There exists a positive constant β_2 such that for |ξ| ≥ r_2,
Re λ_j(ξ) ≤ - β_2 (1 ≤ j ≤ 4 ).
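As a quick check of the algebra, one can verify the characteristic polynomial and the small-ξ eigenvalue expansions symbolically and numerically; a minimal sketch (assuming Python with sympy and numpy is available, as an illustration rather than part of the analysis) reads:

```python
# Minimal sketch: check the characteristic polynomial of A(xi) and compare the
# numerically computed eigenvalues at a small xi with the leading expansions
# lam_1 ~ -xi^2/2, lam_{2,3} ~ +-i*xi - xi^2/4, lam_4 ~ -2 quoted in the lemma.
import sympy as sp
import numpy as np

lam, xi = sp.symbols('lam xi')
I = sp.I
A = sp.Matrix([[0,     -I*xi, 0,     0],
               [-I*xi, -1,    0,     1],
               [0,      0,    0,    -I*xi],
               [0,      1,   -I*xi, -1 - xi**2]])

charpoly = sp.expand((lam*sp.eye(4) - A).det())
target   = lam**4 + (xi**2 + 2)*lam**3 + 3*xi**2*lam**2 + xi**2*(xi**2 + 2)*lam + xi**4
print(sp.simplify(charpoly - target))   # expected output: 0

x = 0.05   # a small frequency for the numerical comparison
Anum = np.array([[0, -1j*x, 0, 0],
                 [-1j*x, -1, 0, 1],
                 [0, 0, 0, -1j*x],
                 [0, 1, -1j*x, -1 - x**2]], dtype=complex)
eigs   = np.sort_complex(np.linalg.eigvals(Anum))
approx = np.sort_complex(np.array([-x**2/2, 1j*x - x**2/4, -1j*x - x**2/4, -2.0]))
print(eigs)     # numerical eigenvalues of A(xi)
print(approx)   # leading-order expansions for comparison
```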
By Lemma <ref>, A(ξ) is simple and has the following spectral representation <cit.>
A(ξ) = ∑_j=1^4λ_j(ξ)P_j(ξ),
where P_j is called the eigenprojection for the eigenvalue λ_j of A(ξ) and has the following property
P_j(ξ)P_k(ξ) = δ_jkP_j(ξ), ∑_j=1^4P_j(ξ) = I.
Taking Taylor expansions at ξ = 0 and by (<ref>) we get
A(ξ) = ∑_j=1^4[λ_j(0) + λ^'_j(0)ξ + ⋯][P_j(0) + P^'_j(0)ξ + ⋯],
we denote P_j0≡ P_j(0).
Comparing the constant terms on the both sides of (<ref>) yields
A(0) = ∑_j=1^4λ_j(0)P_j0.
Taking Taylor expansions in (<ref>), we compare the constant terms to get
P_j0P_k0 = δ_jkP_j0, ∑_j=1^4P_j0 = I.
A direct computation for the matrix A(0) gives its eigenvalues,
λ_10 = λ_20 = λ_30 = 0, λ_40 = -2.
Then ∑_j=1^3P_j0 is the eigenprojection of A(0) corresponding to the eigenvalue zero, it is obvious that
P_0 = ∑_j=1^3P_j0 =
([ 1 0 0 0; 0 1/2 0 1/2; 0 0 1 0; 0 1/2 0 1/2 ]).
Note that by (<ref>)
P_j0P_0 = P_0P_j0 = P_j0, 1≤ j ≤ 3.
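The projection P_0 displayed above can be checked directly: it is idempotent, A(0)P_0 = 0, and I - P_0 is the eigenprojection for the eigenvalue -2. A minimal numerical sketch (assuming numpy, for illustration):

```python
# Minimal sketch: P_0 is the eigenprojection of A(0) for the eigenvalue 0.
import numpy as np

A0 = np.array([[0,  0, 0,  0],
               [0, -1, 0,  1],
               [0,  0, 0,  0],
               [0,  1, 0, -1]], dtype=float)

P0 = np.array([[1, 0,   0, 0],
               [0, 0.5, 0, 0.5],
               [0, 0,   1, 0],
               [0, 0.5, 0, 0.5]])

print(np.allclose(P0 @ P0, P0))          # idempotent
print(np.allclose(A0 @ P0, 0))           # A(0) P_0 = 0, i.e. eigenvalue 0
P40 = np.eye(4) - P0                     # remaining projection P_40
print(np.allclose(A0 @ P40, -2 * P40))   # A(0) P_40 = -2 P_40, i.e. eigenvalue -2
```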
If h∈ L^1(ℝ) and ∂_x^kh ∈ L^2(ℝ), then
e^At(iξ)^kĥ(ξ)≤ C(1+t)^-1/4 - k/2h_L^1 + Ce^-ct∂_x^kh,
where C and c are positive constants. If in addition, h takes the form
h =
([ 0; g; 0; -g ]),
then
e^At(iξ)^kĥ(ξ)≤ C(1+t)^-3/4 - k/2h_L^1 + Ce^-ct∂_x^kh.
By virtue of Lemma <ref> we have
e^At(iξ)^kĥ(ξ)^2 = ∫_ℝ |e^At(iξ)^kĥ(ξ)|^2 dξ
= ∫_ℝ∑ _i=1^4e^tλ_i(ξ)P_i(ξ)^2|ξ|^2k|ĥ(ξ)|^2 dξ
≤∫_|ξ|≤ r_1 e^-2β_1|ξ|^2t|ξ|^2k|ĥ(ξ)|^2 dξ + ∫_|ξ|≥ r_2 e^-2β_2t|(iξ)^kĥ(ξ)|^2 dξ
≤ Cĥ_∞^2∫_|ξ|≤ r_1 e^-2β_1|ξ|^2t|ξ|^2k dξ + Ce^-2β_2t (iξ)^kĥ^2
≤ C(1+t)^-1/2-kh_L^1^2 + Ce^-2β_2t∂_x^kh^2.
Take square root on both sides and we obtain (<ref>).
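The rate (1+t)^-1/2-k in the low-frequency part comes from the Gaussian moment ∫_ℝ e^-2β_1|ξ|^2t|ξ|^2k dξ = Γ(k+1/2)(2β_1 t)^-1/2-k. A minimal sketch verifying this identity for the first few k (assuming sympy, for illustration):

```python
# Minimal sketch: the Gaussian moment behind the (1+t)^(-1/2-k) decay rate,
#   int_R exp(-2*beta*t*xi^2) * xi^(2k) dxi = Gamma(k + 1/2) * (2*beta*t)^(-k - 1/2).
import sympy as sp

xi, t, beta = sp.symbols('xi t beta', positive=True)
for k in (0, 1, 2):
    val = sp.integrate(sp.exp(-2*beta*t*xi**2) * xi**(2*k), (xi, -sp.oo, sp.oo))
    expected = sp.gamma(k + sp.Rational(1, 2)) * (2*beta*t)**(-(k + sp.Rational(1, 2)))
    print(k, sp.simplify(val - expected))   # expected output: 0 for each k
```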
If h further satisfies (<ref>), we refine the integral over |ξ| ≤ r_1 to obtain (<ref>) as follows. Recall that
e^tA(ξ) = ∑_j=1^4 e^tλ_j(ξ)P_j(ξ),
where λ_j(ξ) and P_j(ξ) are holomorphic at ξ = 0. Note that for h of the form (<ref>) we have P_0ĥ = 0, and hence P_j0ĥ = P_j0P_0ĥ = 0 for 1 ≤ j ≤ 3. Thus for |ξ| ≤ r_1 with r_1 small enough, taking Taylor expansions of P_j(ξ), 1 ≤ j ≤ 3, and applying (<ref>), (<ref>) and (<ref>), we have
e^tA(ξ)(iξ)^kĥ(ξ) = ∑_j=1^3e^λ_j(ξ)t[P_j0 + O(|ξ|)](iξ)^kĥ(ξ) + e^λ_4(ξ)tP_j(ξ)(iξ)^kĥ(ξ)
= ∑_j=1^3e^λ_j(ξ)tO(|ξ|)(iξ)^kĥ(ξ) + e^λ_4(ξ)tP_j(ξ)(iξ)^kĥ(ξ).
This implies
|e^tA(ξ)(iξ)^kĥ(ξ)| ≤ C( ∑_j=1^3e^Reλ_j(ξ)t|ξ|^k+1|ĥ(ξ)| + e^Reλ_4(ξ)tP_j(ξ)|iξ|^k|ĥ(ξ)|).
We also have
Reλ_4(ξ)≤1/2λ_4(0) ≤ -c_0
for small ξ. Using lemma <ref> and (<ref>), we refine the integral over |ξ|≤ r_1 as
∫_|ξ|≤ r_1 |e^tA(ξ)(iξ)^kĥ(ξ)|^2 dξ ≤ C∫_|ξ|≤ r_1 (e^-2β_1ξ^2t|ξ|^2k+2 + e^-2c_0t)|ĥ(ξ)|^2 dξ
≤ C(1+t)^-3/2-kĥ_∞^2≤ C(1+t)^-3/2-kh_L^1^2.
Replacing the corresponding integral in (<ref>) by (<ref>) we obtain (<ref>).
§.§ Weighted Energy Estimate
In this subsection, we derive the time weighted energy estimate.
We first introduce the following notation for m=0, 1, 2 and t ≥ 0:
N_m^2(t) = sup_τ∈[0,t](1+ τ)^1/2+m∂^m_x(ρ-ρ_*, u, n-n_*, ω)^2.
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4. Then there exists a constant ϵ_0 ≥ 0 such that if (ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0 )≤ϵ_0, N_0 and N_1 are small, and N_2 is bounded, then the solution of (<ref>), (<ref>) given in Theorem <ref> has the following estimates:
(ρ_x, u_x, n_x, ω_x )≤ C ϵ_0(1 + t)^-1/2
∫_0^t (1+τ)(ω_xx^2 + (u-ω)_x^2) dτ≤ Cϵ_0^2
For t ≥ 0, we define
M_1^2(t) = sup_0 ≤τ≤ t{(1+τ)(ρ_x, u_x, n_x, ω_x)^2 + ∫_0^t(1 + τ)(ω_xx^2 + (u - ω)_x^2) dτ},
our goal is to prove
M_1^2(t) ≤ Cϵ_0^2
where C is a positive constant.
We firstly use N_0, N_1 and N_2 to express some L^∞ norms needed in this section. Using Lemma <ref>, we can obtain
ρ - ρ_*_∞≤ Cρ - ρ_*^1/2(ρ - ρ_*)_x^1/2≤ CN_0^1/2N_1^1/2(1 + t)^-1/2,
n - n_*_∞≤ Cn - n_*^1/2(n - n_*)_x^1/2≤ CN_0^1/2N_1^1/2(1 + t)^-1/2,
u_x_∞≤ Cu_x^1/2u_xx^1/2≤ CN_1^1/2N_2^1/2(1 + t)^-1,
v_x_∞≤ Cv_x^1/2v_xx^1/2≤ CN_1^1/2N_2^1/2(1 + t)^-1,
n_x_∞≤ Cn_x^1/2n_xx^1/2≤ CN_1^1/2N_2^1/2(1 + t)^-1,
ω_x_∞≤ Cω_x^1/2ω_xx^1/2≤ CN_1^1/2N_2^1/2(1 + t)^-1.
To estimate u - ω_∞, we note that
u_t + σ_*v_x - n_*(ω - u) = - uu_x - γ - 1/2vv_x + (n - n_*)(ω - u),
ω_t - ω_xx - ρ_*(u - ω) = -ωω_x - 1/nn_x - 1/nn_xω_x + (ρ - ρ_*)(u - ω),
subtracting (<ref>) from (<ref>) yields,
(u - ω)_t + (ρ_* + n_*)(u - ω) = -σ_*v_x -uu_x - γ - 1/2vv_x
-ω_xx + ωω_x + 1/nn_x + 1/nn_xω_x + (n_* -n + ρ_* - ρ)(u - ω) :=R,
thus, we obtain
u - ω = e^-(ρ_* + n_*)t(u_0 - ω_0) + ∫_0^t e^-(ρ_* + n_*)(t - τ)R(x,τ) dτ,
therefore, there is a constant c ≥ 0 such that
u - ω_∞≤ e^-ct(u_0 - ω_0)_∞ + ∫_0^t e^-c(t - τ)R(x,τ)_∞ dτ.
Owing to the expression of R and Lemma <ref>,
R_∞≤ C(v_x_∞ + u_x_∞ + n_x_∞ + ω_xx_∞ + n-n_*_∞u-ω_∞
+ ρ - ρ_*_∞u-ω_∞)
≤ C(v_x^1/2v_xx^1/2 + u_x^1/2u_xx^1/2 + n_x^1/2n_xx^1/2
+ ω_xx^1/2ω_xxx^1/2 + n-n_*^1/2(n-n_*)_x^1/2u-ω_∞
+ ρ - ρ_*^1/2(ρ - ρ_*)_x^1/2u-ω_∞),
hence,
∫_0^t e^-c(t-τ)R_∞(τ) dτ
≤ CN_1^1/2N_2^1/2∫_0^t e^-c(t-τ)(1 + τ)^-1 dτ + C∫_0^t e^-(t-τ)ω_xx^1/2ω_xxx^1/2 dτ
+ CN_0^1/2N_1^1/2∫_0^t e^-c(t-τ)(1 + τ)^-1/2u -ω_∞ dτ.
We estimate the three terms on the right hand side of (<ref>)
CN_1^1/2N_2^1/2∫_0^t e^-c(t-τ)(1 + τ)^-1 dτ≤ CN_1^1/2N_2^1/2(1 + t)^-1,
∫_0^t e^-(t-τ)ω_xx^1/2ω_xxx^1/2 dτ = ∫_0^t e^-(t-τ)ω_xx^3/8ω_xx^1/8ω_xxx^1/2 dτ
≤ N_2^3/8∫_0^t e^-(t-τ)(1 + τ)^-31/32((1 + τ)^2ω_xxx^2(τ))^1/4ω_xx^1/8 dτ
≤ CN_2^3/8(∫_0^t (1 + τ)^2ω_xxx^2 dτ)^1/4(∫_0^t e^-(t-τ)(1 + τ)^-31/24ω_xx^1/6 dτ)^3/4
≤ CN_2^3/8M_2^1/2(∫_0^t e^-(t-τ)(1 + τ)^-33/24((1 + τ)ω_xx^2)^1/12 dτ)^3/4
≤ CN_2^3/8M_2^1/2(∫_0^t (1 + τ)ω_xx^2 dτ)^3/48( ∫_0^t e^- c(t - τ)(1 + τ)^-3/2 dτ)^11/16
≤ CN_2^3/8M_2^1/2M_1^3/24(1 + τ)^-33/32,
where we have used Hölder's inequality twice.
N_0^1/2N_1^1/2∫_0^t e^-c(t-τ)(1 + τ)^-1/2u -ω_∞ dτ
≤ CN_0^1/2N_1^1/2(1 + t)^-3/2sup_0 ≤τ≤ t[(1 + τ)u - ω_∞].
Plugging these estimates into (<ref>), we obtain
∫_0^t e^-c(t-τ)R_∞(τ) dτ
≤ CN_1^1/2N_2^1/2(t)(1 + t)^-1 + CN_2^3/8M_2^1/2M_1^3/24(1 + τ)^-33/32
+ CN_0^1/2N_1^1/2(1 + t)^-3/2sup_0 ≤τ≤ t[(1 + τ)u - ω_∞],
therefore, we obtain
u - ω_∞≤ e^-ct(u_0 - ω_0)_∞ + ∫_0^t e^-c(t - τ)R(x,τ)_∞ dτ
≤ e^-ct(u_0 - ω_0)_∞ + CN_1^1/2N_2^1/2(t)(1 + t)^-1 + CN_2^3/8M_2^1/2M_1^3/24(1 + τ)^-33/32
+ CN_0^1/2N_1^1/2(1 + t)^-3/2sup_0 ≤τ≤ t[(1 + τ)u - ω_∞],
which implies
sup_0 ≤τ≤ t[(1 + τ)u - ω_∞] ≤ C[(u_0 - ω_0)_∞ + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24]
for small N_0, N_1. Owing to Sobolev embedding, we have
u - ω_∞≤ C[(u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24](1 + τ)^-1.
Now, we start the weighted energy estimate. Differentiating (<ref>)_1, (<ref>)_2 with respect to x and multiplying them by ρ v_x, ρ u_x respectively,
differentiating (<ref>)_3 and (<ref>)_4 with respect to x, multiplying (<ref>)_3 by n_x and then dividing by n, multiplying (<ref>)_4 by nω_x, and adding them together, we obtain
1/2ρ (u_x^2 + v_x^2)_t + 1/2n(n_x^2)_t + 1/2n(ω_x^2)_t + σ_*ρ (u_xv_x)_x + (n_xω_xx)_x + ρ n(u-ω)_x^2
= -1/2ρ u_x^3 -γ/2ρ u_xv_x^2
+ ρ n_x(ω - u)u_x - 1/nn_xn_xxω - 2/nn_x^2ω_x - nω_x^3
- nωω_xω_xx + 1/nω_xn_x^2 + ω_x^2n_xx - 1/nω_x^2n_x^2 + n_xω_xω_xx + nω_xω_xxx
+ nρ_x(u-ω)ω_x.
We replace the time variable by τ, multiply the equation by the weighted function (1 + τ), and integrate the result over ℝ× [0, t]. After integrating by parts, we have
1/2∫_ℝ (1+t)(ρ u_x^2 + ρ v_x^2 + 1/nn_x^2 + nω_x^2) dx
+ ∫_0^t∫_ℝ (1+τ)(nω_xx^2 + ρ n(u-ω)_x^2) dxdτ
= 1/2∫_ℝ(ρ_0 u_0x^2 + ρ_0 v_0x^2 + 1/n_0n_0x^2 + n_0ω_0x^2) dx + ∫_0^t∫_ℝ (1+τ)[-ρ u_x^3
- 1/2(γ+1)ρ u_xv_x^2 - 1/2ρ_xuu_x^2 - 1/2ρ_xuv_x^2 + ρ n_x(ω - u)u_x - nω_x^3
+1/2n_xωω_x^2 - 1/nω_x^2n_x^2
-2n_xω_xω_xx + nρ_x(u-ω)ω_x] dxdτ
+ 1/2∫_0^t∫_ℝ(ρ u_x^2 + ρ v_x^2 + 1/nn_x^2 + nω_x^2) dxdτ.
By Lemma <ref> and (<ref>),
∫_0^t∫_ℝ (1+τ)ρ u_x^3 dxdτ≤∫_0^t (1+τ) ρ_∞u_x_∞u_x^2 dτ
≤ CN_1^1/2N_2^1/2∫_0^tu_x^2 dτ.
Similarly,
∫_0^t∫_ℝ (1+τ)ρ u_xv_x^2 dxdτ≤ CN_1^1/2N_2^1/2∫_0^tv_x^2 dτ,
∫_0^t∫_ℝ (1+τ)ρ_xuu_x^2 dxdτ≤ CN_0^1/2N_1N_2^1/2M_1^2,
∫_0^t∫_ℝ (1+t)ρ_xuv_x^2 dxdτ≤ CN_0^1/2N_1N_2^1/2M_1^2,
∫_0^t∫_ℝ (1+τ)ρ n_x(ω - u)u_x dxdτ≤ C((u_0 - ω_0)_∞ + N_1^1/2N_2^1/2
+ N_2^3/8M_2^1/2M_1^3/24) ∫_0^t(u_x^2 + n_x^2) dτ,
∫_0^t∫_ℝ (1+t)nω_x^3 dxdτ≤ CN_1^1/2N_2^1/2∫_0^tω_x^2 dτ,
∫_0^t∫_ℝ (1+t)n_xωω_x^2 dxdτ≤ CN_0^1/2N_1N_2^1/2M_1^2,
∫_0^t∫_ℝ (1+τ)1/nω_x^2n_x^2 dxdτ≤ CN_1N_2M_1^2.
By virtue of Lemma <ref>, Cauchy-Schwarz inequality and Young's inequality
∫_0^t∫_ℝ (1+τ)n_xω_xω_xx dxdτ
≤ C∫_0^t (1+τ)ω_x_∞n_xω_xx dτ
≤ C∫_0^t (1+τ)ω_x^1/2n_xω_xx^3/2 dτ
≤δ∫_0^t (1+τ)ω_xx^2 dτ + C∫_0^tω_x^2n_x^4 dτ
≤δ∫_0^t (1+τ)ω_xx^2 dτ + CM_1^4M_1^2.
Using Cauchy-Schwarz inequality and (<ref>),
∫_0^t∫_ℝ (1+τ)nρ_x(u-ω)ω_x dxdτ
≤∫_0^t (1+τ)n_∞(u-ω)_∞ρ_xω_x dτ
≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24) ∫_0^t (v_x^2 + ω_x^2) dτ.
Plugging these estimates into (<ref>) and choosing δ sufficiently small, then combining with Theorem <ref> we can obtain
M_1^2 ≤ Cϵ_0^2 + C(N_1N_2 + CN_0^1/2N_1N_2^1/2 + CM_1^4)M_1^2,
which implies
[1-C(N_1N_2 + CN_0^1/2N_1N_2^1/2 + CM_1^4)]M_1^2≤ Cϵ_0^2.
Therefore, if N_0, N_1, M_1 are small and N_2, M_2 are bounded, we get
M_1^2(t) ≤ Cϵ_0^2.
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4(ℝ). Then there exist two constants ϵ_0 ≥ 0 and C_0 ≥ 0 such that if (ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0)_H^1≤ϵ_0, (ρ_0xx,u_0xx , n_0xx, ω_0xx ,ρ_0xt, u_0xt)≤ C_0, N_0 and N_1 are small, and N_2 is bounded, then the solution of (<ref>), (<ref>) given in Theorem <ref> has the following estimates:
(ρ_xx, u_xx, n_xx, ω_xx)≤ C(ϵ_0 + C_0)(1+t)^-1
∫_0^t (1+τ)^2(ω_xxx^2 + (u-ω)_xx^2) dτ≤ C (ϵ_0^2 + C_0^2).
For t ≥ 0, we define
M_2^2(t) = sup_0 ≤τ≤ t[(1+τ)^2ρ_xx, u_xx, n_xx, ω_xx^2 + ∫_0^t(1 + τ)^2(ω_xxx^2 + (u - ω)_xx^2) dτ],
our goal is to prove
M_2^2(t) ≤ C(ϵ_0^2 + C_0^2),
where C is a positive constant.
Differentiating (<ref>)_1 and (<ref>)_2 twice with respect to x and multiplying by ρ v_xx, ρ u_xx respectively, differentiating (<ref>)_3 and (<ref>)_4 twice with respect to x, multiplying (<ref>)_3 by n_xx and then dividing by n, and multiplying (<ref>)_4 by nω_xx, then summing them up yields
1/2ρ (u_xx^2 + v_xx^2)_t + 1/2n(n_xx^2)_t + 1/2n(ω_xx^2)_t + σ_*ρ (u_xxv_xx)_x + (n_xxω_xx)_x
+ ρ n(u-ω)_xx^2 = -(3γ-2)ρ v_xu_xxv_xx - 5/2ρ u_xu_xx^2 - γ-2/2ρ u_xv_xx^2
+ ρ n_xx(ω-u)u_xx + 2ρ n_x(ω-u)_xu_xx - 1/nn_xxn_xxxω -3/nn_xx^2ω_x
-3/nn_xn_xxω_xx - 3nω_xω_xx^2 -nωω_xxω_xxx -2/n^2n_x^3ω_xx -3/nn_xn_xxω_xx
+ 2/n^2n_x^3ω_xω_xx - 2/nn_x^2ω_xx^2 - 3/n^2n_xω_xn_xxω_xx + 2n_xxω_xx^2 + n_xω_xxω_xxx
+ ω_xω_xxn_xxx + ω_xxω_xxxx + nω_xxρ_xx(u-ω) + 2nρ_x(u-ω)_xω_xx.
We replace the time variable by τ, multiply the equation by the weighted function (1 + τ)^α, and integrate the result over ℝ× [0, t]. After integrating by parts, we have
1/2∫_ℝ (1+t)^α(ρ u_xx^2 + ρ v_xx^2 + 1/nn_xx^2 + nω_xx^2) dx
+ ∫_0^t∫_ℝ (1+τ)^α(nω_xxx^2 + ρ n(u-ω)_xx^2) dxdτ
= 1/2∫_ℝ (ρ_0 u_0xx^2 + ρ_0 v_0xx^2 + 1/n_0n_0xx^2 + n_0ω_0xx^2) dx
+ ∫_0^t∫_ℝ (1+τ)^α(σ_*ρ_xu_xxv_xx - (3γ-2)ρ v_xu_xxv_xx - 3ρ u_xu_xx^2
- γ-1/2ρ u_xv_xx^2 + ρ n_xx(ω-u)u_xx + 2ρ n_x(ω-u)_xu_xx
- 2/nn_xx^2ω_x - 3/nn_xn_xxω_xx
- 3nω_xω_xx^2 + 1/2 n_xωω_xx^2 - 2/n^2n_x^3ω_xx
+ 2/n^2n_x^3ω_xω_xx - 2/nn_x^2ω_xx^2 - 3/n^2n_xω_xn_xxω_xx
- n_xω_xxω_xxx
- ω_xω_xxxn_xx
+ nω_xxρ_xx(u-ω) + 2nρ_x(u-ω)_xω_xx) dxdτ
+ ∫_0^t∫_ℝ (1+τ)^α-1(ρ u_xx^2 + ρ v_xx^2 + 1/nn_xx^2 + nω_xx^2) dxdτ.
By virtue of Lemma <ref> and Cauchy-Schwarz inequality,
∫_0^t∫_ℝ (1+τ)^αρ_xu_xxv_xx dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1(u_xx^2 + v_xx^2) dτ.
Similarly,
∫_0^t∫_ℝ (1+τ)^αρ v_xu_xxv_xx dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1(u_xx^2 + v_xx^2) dτ,
∫_0^t (1+τ)^αρ u_xu_xx^2 dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1u_xx^2 dτ,
∫_0^t∫_ℝ (1+τ)^αρ u_xv_xx^2 dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1v_xx^2 dτ,
∫_0^t∫_ℝ (1+τ)^αρ n_x(ω-u)_xu_xx dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1 ((ω-u)_x^2 + u_xx^2) dτ,
∫_0^t∫_ℝ (1+τ)^α2/nn_xx^2ω_x dxdτ≤ CN_1^1/2N_2^1/2∫_0^t(1+τ)^α - 1n_xx^2 dτ,
∫_0^t∫_ℝ (1+τ)^α1/nn_xn_xxω_xx dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α - 1 (n_xx^2 + ω_xx^2) dτ,
∫_0^t∫_ℝ (1+τ)^αnω_xω_xx^2 dxdτ≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1ω_xx^2 dτ,
∫_0^t∫_ℝ n_xωω_xx^2 dxdτ≤ CN_0^1/2N_1N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ)^α2/n^2n_x^3ω_xx dxdτ≤ CN_1N_2(M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ)^α2/n^2n_x^3ω_xω_xx dxdτ≤ N_1^3/2 N_2^3/2(M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ)^α2/nn_x^2ω_xx^2 dxdτ≤ N_1N_2M_2^2,
∫_0^t∫_ℝ (1+τ)^α3/n^2n_xω_xn_xxω_xx dxdτ≤ C N_1N_2M_2^2,
∫_0^t∫_ℝ (1+τ)^αnρ_x(u-ω)_xω_xx dxdτ
≤ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α - 1((u-ω)_x^2 + ω_xx^2) dτ.
Applying Lemma <ref>, Cauchy-Schwarz inequality and Young's inequality
∫_0^t∫_ℝ (1+τ)^αn_xω_xxω_xxx dxdτ
≤δ∫_0^t (1+τ)^αω_xxx^2 dτ + CN_1N_2M_2^2
∫_0^t∫_ℝ (1+τ)^αω_xω_xxxn_xx dxdτ≤δ∫_0^t (1+τ)^αω_xxx^2 dτ + CN_1N_2M_2^2.
Combining Lemma <ref>, Cauchy-Schwarz inequality and (<ref>), we deduce that
∫_0^t∫_ℝ (1+τ)^αρ n_xx(ω-u)u_xx dxdτ
≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2(t) + N_2^3/8M_2^1/2M_1^3/24)∫_0^t (1+τ)^α-1(n_xx^2 + u_xx^2) dτ.
∫_0^t∫_ℝ (1+τ)^αnω_xxρ_xx(u-ω) dxdτ
≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24)∫_0^t (1+τ)^α -1 (v_xx^2 + ω_xx^2) dτ.
Plugging these estimates into (<ref>) and choosing δ small enough, owing to the smallness of (u_0 - ω_0)_H^1, N_1, M_1 and the boundedness of N_2, M_2, we have
(1+t)^α ((u_xx,v_xx,n_xx,ω_xx)^2) + ∫_0^t (1+τ)^α(ω_xxx^2 + (u-ω)_xx^2) dτ
≤ C(u_0xx,v_0xx,n_0xx,ω_0xx)^2 + C∫_0^t(1+τ)^α-1((u_xx,v_xx,n_xx,ω_xx)^2) dτ
+ CN_1^1/2N_2^1/2∫_0^t (1+τ)^α-1(ω-u)_x^2 dτ + C(N_1N_2 + N_1^3/2 N_2^3/2)M_1^2
+ C(N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2)M_2^2.
Taking α = 1 in (<ref>), it follows from Proposition <ref> and Theorem <ref> that
(1+t)(u_xx,v_xx,n_xx,ω_xx)^2 + ∫_0^t (1+τ)(ω_xxx^2 + (u-ω)_xx^2) dτ
≤ C(ϵ_0^2 + C_0^2)+ C(N_1N_2 + N_1^3/2 N_2^3/2)M_1^2 + C(N_0^1/2N_1N_2^1/2
+ N_1N_2 + N_1^3/2 N_2^3/2)M_2^2.
Taking α = 2 in (<ref>), we obtain
(1+t)^2(u_xx,v_xx,n_xx,ω_xx)^2 + ∫_0^t (1+τ)^2(ω_xxx^2 + (u-ω)_xx^2) dτ
≤ C(u_0xx,v_0xx,n_0xx,ω_0xx)^2 + C∫_0^t(1+τ)(u_xx,v_xx,n_xx,ω_xx)^2dτ
+ CN_1^1/2N_2^1/2∫_0^t (1+τ) (ω-u)_x^2 dτ + C(N_1N_2 + N_1^3/2 N_2^3/2)M_1^2
+ C(N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2)M_2^2.
We still need an estimate of the integral on the right hand side of (<ref>). For this, we go back to (<ref>)
1/2 (u_xx^2 + v_xx^2)_t + σ_*(u_xxv_xx)_x +nu_xx^2 = -(3γ-2)v_xu_xxv_xx - 5/2 u_xu_xx^2
- γ-2/2 u_xv_xx^2 + n_xx(ω-u)u_xx + 2 n_x(ω-u)_xu_xx + nω_xxu_xx.
We replace the time variable by τ, multiply the equation by the weight function (1+τ), and integrate over ℝ× [0,t], after integrating by parts, we have
1/2(1+t)∫_ℝ (u_xx^2 + v_xx^2) dx + ∫_0^t∫_ℝ (1+τ) nu_xx^2 dxdτ
= 1/2∫_ℝ (u_0xx^2 + v_0xx^2) dx + 1/2∫_0^t∫_ℝ (u_xx^2 + v_xx^2) dxdτ
-(3γ-2)∫_0^t∫_ℝ (1+τ)v_xu_xxv_xx dxdτ - 5/2∫_0^t∫_ℝ (1+τ)u_xu_xx^2 dxdτ
- γ-2/2∫_0^t∫_ℝ (1+τ) u_xv_xx^2 dxdτ + ∫_0^t∫_ℝ (1+τ) n_xx(ω-u)u_xx dxdτ
+ 2∫_0^t∫_ℝ (1+τ)n_x(ω-u)_xu_xx dxdτ + ∫_0^t∫_ℝ (1+τ)nω_xxu_xx dxdτ.
Using Lemma <ref> and (<ref>) one gets
∫_0^t∫_ℝ (1+τ)v_xu_xxv_xx dxdτ
≤∫_0^t (1+τ)v_x_∞u_xxv_xx dτ
≤ C∫_0^t (1+τ)v_x^1/2v_xx^1/2u_xxv_xx dτ
≤ CN_1^1/2N_2^1/2M_2^2.
Similarly,
∫_0^t∫_ℝ (1+τ)u_xu_xx^2 dxdτ≤ CN_1^1/2N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ) u_xv_xx^2 dxdτ≤ CN_1^1/2N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ)n_x(ω-u)_xu_xx dxdτ≤ C N_1^1/2N_2^1/2∫_0^t ((ω-u)_x^2 + u_xx^2) dτ,
∫_0^t∫_ℝ (1+τ)nω_xxu_xx dxdτ≤δ∫_0^tu_xx^2 dτ + C∫_0^t (1+τ)ω_xx^2 dτ,
∫_0^t∫_ℝ (1+τ) n_xx(ω-u)u_xx dxdτ≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24)M_2^2.
Substituting these estimates into (<ref>), and choosing δ small enough, by virtue of Proposition <ref> and Theorem <ref> we can obtain
(1+t)(u_xx^2 + v_xx^2) + ∫_0^t (1+τ)u_xx^2 dτ≤ C(ϵ_0^2 + C_0^2)
+ C[CN_1^1/2N_2^1/2 + (u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24]M_2^2.
Similarly,
1/2(u_xt^2 + v_xt^2)_t + σ_* (u_xtv_xt)_x + nu_xt^2 = -3/2u_xu_xt^2 - γ/2 u_xv_xt^2 - γ v_xu_xtv_xt
- u_tv_xxv_xt - u_tu_xxu_xt - γ - 1/2 v_tu_xxv_xt - γ - 1/2 v_tv_xxv_xt + n_xt(ω - u )u_xt
+ n_x(ω - u)_tu_xt + n_t(ω-u)_xu_xt + nω_xtu_xt.
We replace the time variable by τ, multiply the equation by the weight function (1+τ), and integrate over ℝ× [0,t], after integrating by parts, we have
1/2∫_ℝ (1+t)(u_xτ^2 + v_xτ^2) dx + ∫_0^t∫_ℝ (1+τ) nu_xτ^2 dxdτ = 1/2∫_ℝ (u_0xτ^2 + v_0xτ^2) dx
+ 1/2∫_0^t∫_ℝ (u_xτ^2 + v_xτ^2) dxdτ + ∫_0^t∫_ℝ(1+τ)(-3/2 u_xu_xτ^2 - γ/2u_xv_xτ^2 - γ v_xu_xτv_xτ
- u_τ v_xxv_xτ - u_τ u_xxu_xτ
-γ - 1/2v_τ u_xxv_xτ - γ - 1/2v_τ v_xxv_xτ
+ n_xτ(ω - u )u_xτ
+ n_x(ω - u)_τ u_xτ + n_τ(ω-u)_xu_xτ + nω_xτu_xτ) dxdτ.
Combining Lemma <ref> and (<ref>) implies
∫_0^t∫_ℝ (1+τ)u_xu_xτ^2 dxdτ≤ CN_1^1/2N_2^1/2∫_0^tu_xτ^2 dτ.
Similarly, note that v_xt∼ u_xx + v_xx, we have
∫_0^t∫_ℝ (1+τ)u_xv_xτ^2 dxdτ≤ CN_1^1/2N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ)v_xu_xτv_xτ dxdτ≤δ∫_0^t (1+τ)u_xτ^2 dτ + CN_1N_2M_2^2.
Note that u_t ∼ u_x + v_x + (ω-u), by Lemma <ref> and (<ref>), we have
u_t_∞≤ C (u_x_∞ + v_x_∞ + ω-u_∞)
≤ C(u_x^1/2u_xx^1/2 + v_x^1/2v_xx^1/2 + ω-u_∞)
≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24)(1+t)^-1,
therefore,
∫_0^t∫_ℝ (1+τ)u_τ v_xxv_xτ dxdτ≤ C((u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2 + N_2^3/8M_2^1/2M_1^3/24)M_2^2,
∫_0^t∫_ℝ (1+τ)u_τ u_xxu_xτ dxdτ
≤δ∫_0^t (1+τ)u_xτ^2 dτ + C((u_0 - ω_0)_H^1^2 + N_1N_2 + N_2^3/4M_2M_1^3/12)M_2^2.
Note that v_t ∼ u_x + v_x,
v_t_∞≤ C(u_x_∞ + v_x_∞) ≤ C(u_x^1/2u_xx^1/2 + v_x^1/2v_xx^1/2)
≤ CN_1^1/2N_2^1/2(1+t)^-1,
hence,
∫_0^t∫_ℝ (1+τ)v_τ u_xxv_xτ dxdτ≤ CN_1^1/2N_2^1/2M_2^2.
Similarly,
∫_0^t∫_ℝ (1+τ)v_τ v_xxv_xτ dxdτ≤ CN_1^1/2N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ)n_xτ(ω - u )u_xτ dxdτ
≤δ∫_0^t (1+τ)u_xτ^2 dτ + ((u_0 - ω_0)_H^1^2 + N_1N_2 + N_2^3/4M_2M_1^3/12)M_2^2,
+ CN_1N_2((u_0 - ω_0)_H^1^2 + N_1N_2 + N_2^3/4M_2M_1^3/12)M_1^2,
∫_0^t∫_ℝ (1+τ)n_x(ω - u)_τu_xτ dxdτ
= ∫_0^t∫_ℝ (1+τ)n_xω_τu_xτ dxdτ - ∫_0^t∫_ℝ (1+τ)n_x u_τu_xτ dxdτ
≤δ∫_0^t (1+τ)u_xτ^2 dτ + C(N_0N_1^2N_2 + N_1N_2 +N_1^2N_2^2 + (u_0 - ω_0)_H^1^2
+ N_2^3/4M_2M_1^3/12)M_1^2 + CN_1N_2M_2^2,
∫_0^t∫_ℝ (1+τ)n_τ(ω-u)_xu_xτ dxdτ
≤δ∫_0^t (1+τ)u_xτ^2 dτ + C(N_0N_1^2N_2 + N_1N_2)M_1^2.
To estimate ∫_0^t∫_ℝ (1+τ)nω_xτu_xτ dxdτ, we note that
ω_xt = -ω_x^2 -ωω_xx - 1/nn_xx -
1/n^2n_x^2 + ω_xxx + 1/n^2n_x^2ω_x
+ 1/nn_xxω_x + 1/nn_xω_xx + ρ_x(u -ω)n_xx + ρ(u-ω)_xn_xx,
by a direct computation, we have
∫_0^t∫_ℝ (1+τ)nω_xτu_xτ dxdτ
≤δ∫_0^t (1+τ)u_xτ^2 dτ + C(N_1N_2 + N_1^2N_2^2 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_1^2
+ C(N_0N_1 + N_1N_2)M_2^2 + C∫_0^t (1+τ)n_xx^2 dτ + C∫_0^t (1+τ)ω_xxx^2 dτ
+ C ∫_0^t (1+τ)(u-ω)_x^2 dτ.
Substituting these estimates into (<ref>), and choosing δ sufficiently small, by virtue of theorem <ref> and (<ref>), we obtain
(1+t)(u_xt^2 + v_xt^2) + ∫_0^t (1+τ) u_xτ^2 dτ≤ C(ϵ_0^2 + C_0^2)
+ C(N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1^2N_2 + N_1^2N_2^2 + (u_0 - ω_0)_H^1^2
+ N_2^3/4M_2M_1^3/12)M_1^2+ C(N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1
+ N_1^1/2N_2^1/2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + N_2^3/4M_2M_1^3/12)M_2^2
+ C∫_0^t (1+τ)n_xx^2 dτ.
To estimate ∫_0^t (1+τ)n_xx^2 dτ, we replace the time variable by τ, multiply the equation (<ref>) by weight function (1+τ), and integrate the result over ℝ× [0,t], after integrating by parts, we have
(1+t)∫_ℝ1/2nn_xx^2 dx + ∫_0^t∫_ℝ (1+τ) 1/nn_xx^2 dxdτ = ∫_ℝ1/2n_0n_0xx^2 dx
+ ∫_0^t∫_ℝ (1+τ)(- 1/nω_xn_xx^2 - 3/nn_xn_xxω_xx - ω_xτn_xx
- ω_x^2n_xx
- ωω_xxn_xx
+ 1/nn_x^2n_xx - 1/n^2ω_xn_x^2n_xx
+ ρ_x(u-ω)n_xx + ρ (u-ω)_xn_xx) dxdτ.
Applying <ref> and (<ref>) gives
∫_0^t∫_ℝ (1+τ) 1/nω_xn_xx^2 dxdτ≤ CN_1^1/2N_2^1/2M_2^2,
∫_0^t∫_ℝ (1+τ) 3/nn_xn_xxω_xx dxdτ≤ CN_1^1/2N_2^1/2M_2^2.
A direct computation gives
∫_0^t∫_ℝ (1+τ) ω_xtn_xx dxdτ = ∫_0^t∫_ℝ((1+τ) ω_xn_xx)_τ dxdτ
- ∫_0^t∫_ℝω_xn_xx dxdτ - ∫_0^t∫_ℝ (1+τ) ω_xn_xxτ dxdτ,
By virtue of Newton-Leibniz formula, Cauchy-Schwarz inequality, Theorem <ref> and (<ref>), we conclude that
∫_0^t∫_ℝ((1+τ) ω_xn_xx)_τ dxdτ = ∫_ℝ (1+t) ω_xn_xx dx - ∫_ℝω_0xn_0xx dx
≤ (1+t)ω_xn_xx + Cω_0xn_0xx
≤ (1+t)ω_x^2 + (1+t)n_xx^2 + C(ω_0x^2 + n_0xx^2)
≤ C(ϵ_0^2 + C_0^2) + C(N_1N_2 + N_1^3/2 N_2^3/2)M_1^2+ C(N_0^1/2N_1N_2^1/2
+ N_1N_2 + N_1^3/2 N_2^3/2)M_2^2,
∫_0^t∫_ℝω_xn_xx dxdτ≤ C(ϵ_0^2 + C_0^2).
To estimate ∫_0^t∫_ℝ (1+τ) ω_xn_xxτ dxdτ, note that
n_xxt = -n_xxxω -3n_xxω_x - 3n_xω_xx - nω_xxx,
Thus, we have
∫_0^t∫_ℝ (1+τ) ω_xτn_xx dxdτ
≤ C(ϵ_0^2 + C_0^2) + C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2)M_1^2+ C(N_0^1/2N_1^1/2 + N_1^1/2N_2^1/2
+ N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2)M_2^2,
where we omit the details.
Similarly,
∫_0^t∫_ℝ (1+τ) ω_x^2n_xx dxdτ≤ CN_1^1/2N_2^1/2(M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ) ωω_xxn_xx dxdτ≤ CN_0^1/2N_1^1/2M_2^2,
∫_0^t∫_ℝ (1+τ) 1/nn_x^2n_xx dxdτ≤ CN_1^1/2N_2^1/2(M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ) 1/n^2ω_xn_x^2n_xx dxdτ≤ CN_1N_2(M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ) ρ_x(u-ω)n_xx dxdτ,
≤ C[(u_0 - ω_0)_H^1 + N_1^1/2N_2^1/2(t) + N_2^3/8M_2^1/2M_1^3/24](M_1^2 + M_2^2),
∫_0^t∫_ℝ (1+τ)ρ (u-ω)_xn_xx dxdτ
≤δ∫_0^tn_xx^2 dτ + ∫_0^t (1+τ) (u-ω)_x^2 dτ.
Plugging these estimates into (<ref>), we have
(1+t)n_xx^2 + ∫_0^t (1+τ)n_xx^2 dτ≤
C(ϵ_0^2 + C_0^2)
+ C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24)M_1^2
+ C(N_0^1/2N_1^1/2 + N_1^1/2N_2^1/2 + N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + (u_0 - ω_0)_H^1
+ N_2^3/8M_2^1/2M_1^3/24)M_2^2.
Combining (<ref>) and (<ref>) yields
(1+t)(u_xt^2 + v_xt^2) + ∫_0^t (1+τ) u_xτ^2 dt ≤ C(ϵ_0^2 + C_0^2)
+ C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1^2N_2 + N_1^2N_2^2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_1^2
+ C(N_0^1/2N_1^1/2 + N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1 + N_1^1/2N_2^1/2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_2^2.
Owing to equation (<ref>)_2, we have
v_x ∼ u_t + uu_x + n(ω - u),
therefore,
v_xx∼ u_xt + u_x^2 + uu_xx + n_x(ω - u) + n(ω - u)_x
which combining (<ref>) yields
∫_0^t (1+τ)v_xx^2 dτ
≤ C∫_0^t (1+τ)u_xτ^2 dτ + C(N_1N_2 + (u_0 - ω_0)_H^1^2 + N_1N_2
+ N_2^3/4M_2M_1^3/12)M_1^2 + CN_0N_1M_2^2 + C∫_0^t (1+τ)(ω - u)_x^2 dτ
≤ C(ϵ_0^2 + C_0^2) + C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1^2N_2 + N_1^2N_2^2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_1^2
+ C(N_0^1/2N_1^1/2 + N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1 + N_1^1/2N_2^1/2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_2^2.
Putting (<ref>), (<ref>) and (<ref>) together, we obtain
∫_0^t (1+τ)(u_xx^2 + v_xx^2 +n_xx^2) dτ
≤ C(ϵ_0^2 + C_0^2) + C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1^2N_2 + N_1^2N_2^2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_1^2
+ C(N_0^1/2N_1^1/2 + N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1 + N_1^1/2N_2^1/2
+ (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2 + N_2^3/4M_2M_1^3/12)M_2^2.
Combining (<ref>) and (<ref>) yields
M_2^2≤ C(ϵ_0^2 + C_0^2) + C(N_1^1/2N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1^2N_2
+ N_1^2N_2^2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2
+ N_2^3/4M_2M_1^3/12)M_1^2 + C(N_0^1/2N_1^1/2 + N_0^1/2N_1N_2^1/2 + N_1N_2 + N_1^3/2 N_2^3/2
+ N_0N_1 + N_1^1/2N_2^1/2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24 + (u_0 - ω_0)_H^1^2
+ N_2^3/4M_2M_1^3/12)M_2^2
Adding (<ref>) with (<ref>) gives
M_1^2 + M_2^2≤ C(ϵ_0^2 + C_0^2) + C(N_0^1/2N_1N_2^1/2 + CM_1^4 + N_1^1/2N_2^1/2 + N_1N_2
+ N_1^3/2 N_2^3/2 + N_0N_1^2N_2 + N_1^2N_2^2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24
+ (u_0 - ω_0)^2_H^1 + N_2^3/4M_2M_1^3/12)M_1^2 + C(N_0^1/2N_1^1/2 + N_0^1/2N_1N_2^1/2
+ N_1N_2 + N_1^3/2 N_2^3/2 + N_0N_1 + N_1^1/2N_2^1/2 + (u_0 - ω_0)_H^1 + N_2^3/8M_2^1/2M_1^3/24
+ (u_0 - ω_0)_∞^2 + N_2^3/4M_2M_1^3/12)M_2^2.
Therefore, if N_1, M_1 is small and N_2, M_2 are bounded, we have
M_2^2(t) ≤ C(ϵ_0^2 + C_0^2).
Hence, we complete the proof of Proposition <ref>.
Next we give four propositions. Propositions <ref> and <ref> are the basis for the proof of Propositions <ref> and <ref>, which, together with Propositions <ref> and <ref>, are needed in the proof of Proposition <ref>. Their proofs are similar to those of Lemmas <ref>, <ref> and Propositions <ref>, <ref>, respectively; we omit the details.
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4(ℝ).Then there exist constants ϵ_0 ≥ 0 and C_0≥ 0 such that if ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0 _H^1≤ϵ_0, ρ_0xx,u_0xx , n_0xx, ω_0xx_H^1 + ρ_0xt, u_0xt_H^1≤ C_0,
the solution of (<ref>), (<ref>) given in Theorem <ref> has the following estimates:
∂^3_x(ρ, u, n, ω)^2 + ∂^2_x(ρ_t, u_t)^2 + ∫_0^t((∂^3_x(ρ, u, n), ∂^2_xu_t)^2 + ∂^4_xω^2) dτ
≤ C (ϵ_0^2 + C_0^2)
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4(ℝ).Then there exist constants ϵ_0 ≥ 0 and C_0≥ 0 such that if ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0 _H^1≤ϵ_0, ρ_0xx,u_0xx , n_0xx, ω_0xx_H^2 + ρ_0xt, u_0xt_H^2≤ C_0, the solution of (<ref>), (<ref>) given in theorem <ref> has the following estimates :
∂^4_x(ρ, u, n, ω)^2 + ∂^3_x(ρ_t, u_t)^2 + ∫_0^t((∂^4_x(ρ, u, n), ∂^3_xu_t)^2 + ∂^5_xω^2) dτ
≤ C (ϵ_0^2 + C_0^2)
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4(ℝ).Then there exist constants ϵ_0 ≥ 0 and C_0≥ 0 such that if ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0 _H^1≤ϵ_0, ρ_0xx,u_0xx , n_0xx, ω_0xx_H^1 + ρ_0xt, u_0xt_H^1≤ C_0, N_1 is small and N_2 is bounded, the solution of (<ref>), (<ref>) given in theorem <ref> has the following estimates :
∂^3_x(ρ, u, n, ω) ≤ C(ϵ_0 + C_0)(1+t)^-3/2
∫_0^t (1+τ)^3(∂^4_xω^2 + ∂^3_x(u-ω)^2) dτ
≤ C (ϵ_0^2 + C_0^2)
Let (ρ_*, 0, n_*, 0) be the constant equilibrium state of (<ref>) and (ρ_0-ρ_*,u_0, n_0 - n_*, ω_0) ∈ H^4(ℝ).Then there exist constants ϵ_0 ≥ 0 and C_0≥ 0 such that if ρ_0-ρ_*,u_0 , n_0 - n_*, ω_0 _H^1≤ϵ_0, ρ_0xx, u_0xx, n_0xx, ω_0xx , ρ_0xt, u_0xt, ρ_0xxx, u_0xxx,
n_0xxx,
ω_0xxx, ρ_0xxt, u_0xxt≤ C_0, N_1 is small and N_2 is bounded, the solution of (<ref>), (<ref>) given in theorem <ref> has the following estimates :
∂^4_x(ρ, u, n, ω)≤ C(ϵ_0 + C_0)(1+t)^-2
∫_0^t (1+τ)^4(∂^5_xω^2 + ∂^4_x(u-ω)^2) dτ
≤ C (ϵ_0^2 + C_0^2).
§ OPTIMAL DECAY RATES
In this section, we use the results obtained in Section <ref> to obtain the optimal decay rate of the solution (ρ, u, n, ω) of the Cauchy problem (<ref>), (<ref>), which tends toward the constant equilibrium state (ρ_*, 0, n_*, 0).
Under the hypotheses of Theorem <ref>, if N_1 is bounded by a small positive constant and N_2 is bounded by a constant, both independent of T ≥ 0, then
N_1 ≤ Cϵ_0
and
N_2 ≤ C(ϵ_0 + C_0).
We can write (<ref>) in the form
U_t = AU + G(U),
where
G = ([ 0; -(mu)_x - (P'(ρ)-P'(ρ_*))(ρ - ρ_*)_x; 0; -(Mω)_x - (n_xω)_x ])
+
([ 0; (ρ -ρ_* )M + (n_* - n)m; 0; (ρ_* -ρ)M + (n-n_*)m ]),
or, taking the Fourier transform of (<ref>),
Û_t = A(ξ)Û + Ĝ(Û).
The solution of (<ref>) is
Û = e^tA(ξ)Û(0) + ∫_0^te^(t-τ)AĜ(Û)(ξ,τ) dτ.
Using the Plancherel theorem, (<ref>), and the triangle inequality, we have
∂_x^kU = (iξ)^kÛ
≤(iξ)^ke^tA(ξ)Û(0) + ∫_0^t(iξ)^ke^(t-τ)AĜ(Û)(ξ,τ) dτ,
from (<ref>) we have
(iξ)^ke^tA(ξ)Û(0)≤ C(1+t)^-1/4 - k/2(U(0)_L^1 + ∂_x^kU(0)),
Similarly, by virtue of (<ref>), (<ref>) and (<ref>), we obtain
∫_0^t(iξ)^ke^(t-τ)AĜ(Û)(ξ,τ) dτ≤ I_1 + I_2 + I_3,
where
I_1 =C∫_0^t/2(1+t-τ)^-1/4-k+1/2(mu_L^1 + (ρ - ρ_*)^2_L^1 + Mω_L^1 + n_xω_L^1
+ (ρ -ρ_*)M_L^1 + (n -n_*)m_L^1) dτ,
I_2 = C∫_t/2^t(1+t-τ)^-3/4[∂_x^k(mu)_L^1 + ∂_x^k(ρ - ρ_*)^2_L^1 + ∂_x^k(Mω)_L^1
+ ∂_x^k(n_xω)_L^1 + ∂_x^k((ρ -ρ_*)M)_L^1 + ∂_x^k((n -n_*)m)_L^1]dτ,
I_3 = C∫_0^t e^-c(t-τ)(∂_x^k+1(mu) + ∂_x^k+1(ρ - ρ_*)^2 + ∂_x^k+1(Mω)
+ ∂_x^k+1(n_xω) + ∂_x^k((ρ -ρ_*)M) + ∂_x^k((n -n_*)m)) dτ.
Firstly we have,
mu_L^1≤mu≤ CN_0^2(1+τ)^-1/2,
(ρ - ρ_*)^2_L^1≤ CN_0^2(1+τ)^-1/2,
Mω_L^1≤ CN_0^2(1+τ)^-1/2,
(ρ -ρ_*)M_L^1≤ CN_0^2(1+τ)^-1/2,
(n -n_*)m_L^1≤ CN_0^2(1+τ)^-1/2,
n_xω_L^1≤n_xω≤ CN_0N_1(1+τ)^-1.
Substituting these estimates into (<ref>) yields
I_1 ≤ C(N_0^2 + N_0N_1)∫_0^t/2 (1+t-τ)^-3/4- k/2(1+τ)^-1/2 dτ
≤ C(N_0^2 + N_0N_1)(1+t)^-1/4-k/2.
Taking k=0 in I_2 and applying the estimates above gives
I_2 ≤ C(N_0^2 + N_0N_1)(1+t)^-1/4.
To estimate I_3,
(mu)_x ≤m_xu + mu_x
≤ Cu_∞m_x + m_∞u_x
≤ CN_0^1/2N_1^3/2(1+τ)^-5/4.
Similarly,
(ρ - ρ_*)^2_x ≤ CN_0^1/2N_1^3/2(1+τ)^-5/4,
(Mω)_x ≤ CN_0^1/2N_1^3/2(1+τ)^-5/4,
(ρ -ρ_*)M ≤ρ - ρ_*_∞M
≤ CN_0^3/2N_1^1/2(1+τ)^-3/4,
(n -n_*)m ≤n-n_*_∞m
≤ CN_0^3/2N_1^1/2(1+τ)^-3/4.
By Lemma <ref>, <ref>, (<ref>), Proposition <ref>, Lemma <ref> and Lemma <ref>, one gets
(n_xω)_x ≤n_xxω + n_xω_x
≤ω_∞n_xx + n_x_∞ω_x
≤ Cω^1/2ω_x^1/2n_x^1/2n_xxx^1/2 + Cn_x^1/2n_xx^1/2ω_x
≤ CN_0^1/2N_1^1/2ϵ_0^1/2(ϵ_0 + C_0)^1/2(1+τ)^-3/2 + CN_1ϵ_0^1/2(ϵ_0 + C_0)^1/2(1+τ)^-3/2.
Substituting these estimates into (<ref>) yields
I_3 ≤ C(N_0^1/2N_1^3/2 + N_0^1/2N_1^1/2ϵ_0^1/2(ϵ_0 + C_0)^1/2 + N_1ϵ_0^1/2(ϵ_0 + C_0)^1/2
+ N_0^3/2N_1^1/2)∫_0^t e^-c(t-τ)(1+τ)^-3/4 dτ
≤ C(N_0^1/2N_1^3/2 + N_0^1/2N_1^1/2ϵ_0^1/2(ϵ_0 + C_0)^1/2 + N_1ϵ_0^1/2(ϵ_0 + C_0)^1/2 + N_0^3/2N_1^1/2)(1+t)^-3/4.
Combining (<ref>), (<ref>), (<ref>), (<ref>) we have
∫_0^te^(t-τ)AĜ(Û)(ξ,τ) dτ
≤ C(N_0^2 + N_1^2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_0 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_1)(1+t)^-1/4.
Substituting (<ref>), (<ref>) into (<ref>) implies that
U≤ C(ϵ_0 + N_0^2 + N_1^2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_0 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_1)(1+t)^-1/4.
Equivalently,
(1+t)^1/4U≤ C(ϵ_0 + N_0^2 + N_1^2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_0 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_1),
taking the supremum, we have
N_0 ≤ C(ϵ_0 + N_0^2 + N_1^2) + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1.
Taking k=1, we have
I_2 = C∫_t/2^t(1+t-τ)^-3/4((mu)_x_L^1 + [(ρ - ρ_*)^2]_x_L^1 + (Mω)_x_L^1
+ (n_xω)_x_L^1 + ((ρ -ρ_*)M)_x_L^1 + ((n -n_*)m)_x_L^1) dτ.
Using Cauchy-Schwarz inequality and (<ref>) gives
(mu)_x_L^1≤m_xu_L^1 + mu_x_L^1
≤m_xu + mu_x≤ CN_0N_1(1+τ)^-1.
Similarly, we have
[(ρ - ρ_*)^2]_x_L^1≤ CN_0N_1(1+τ)^-1,
(Mω)_x_L^1≤ CN_0N_1(1+τ)^-1,
((ρ -ρ_*)M)_x_L^1≤ CN_0N_1(1+τ)^-1,
((n -n_*)m)_x_L^1≤ CN_0N_1(1+τ)^-1.
By Cauchy-Schwarz inequality, Lemma <ref>, <ref> and Proposition <ref>, we have
(n_xω)_x_L^1 ≤n_xxω_L^1 + n_xω_x_L^1
≤n_xxω + n_xω_x
≤n_x^1/2n_xxx^1/2ω + n_xω_x
≤ N_0ϵ_0^1/2(ϵ_0 + C_0)^1/2(1+τ)^-5/4 + CN_1^2(1+τ)^-3/2.
Substituting these estimates into (<ref>) yields
I_2 ≤ C[N_0N_1 + N_1^2 + N_0ϵ_0^1/2(ϵ_0 + C_0)^1/2]∫_t/2^t(1+t-τ)^-3/4(1+τ)^-1 dτ
≤ C[N_0^2 + N_1^2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2 N_0](1+t)^-3/4.
When k=1,
I_3 = C∫_0^t e^-c(t-τ)((mu)_xx + ((ρ - ρ_*)^2)_xx
+ (Mω)_xx + (n_xω)_xx + ((ρ -ρ_*)M)_x + ((n -n_*)m)_x) dτ.
We estimate terms on the right hand side of (<ref>).
It follows from Lemma <ref>, Lemma <ref>, Proposition <ref> and (<ref>) that
(mu)_xx ≤m_xxu + 2m_xu_x + mu_xx
≤u_∞m_xx + 2u_x_∞m_x + m_∞u_xx
≤ Cu^1/2u_x^1/2m_x^1/2m_xxx^1/2 + Cu_x^1/2u_xx^1/2m_x
+ Cm^1/2m_x^1/2u_x^1/2u_xxx^1/2
≤ Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2(1+τ)^-3/2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1(1+τ)^-3/2.
Similarly,
((ρ - ρ_*)^2)_xx ≤ Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2(1+τ)^-3/2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1(1+τ)^-3/2
(Mω)_xx ≤ Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2(1+τ)^-3/2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1(1+τ)^-3/2.
Combining Lemma <ref>, <ref>, <ref>, Proposition <ref> and <ref>, we deduce that
(n_xω)_xx ≤n_xxxω + 2n_xxω_x + n_xω_xx
≤ω_∞n_xxx + ω_x_∞n_xx + n_x_∞ω_xx
≤ Cω^1/2ω_x^1/2n_x^1/3n_xxxx^2/3 + Cω_x^1/2ω_xx^1/2n_xx
+ Cn_x^1/2n_xx^1/2ω_xx
≤ Cω^1/2ω_x^1/2n_x^1/3n_xxxx^2/3 + Cω_x^1/2ω_x^1/4ω_xxx^1/4n_x^1/2n_xxx^1/2
+ Cn_x^1/2n_x^1/4n_xxx^1/4ω_x^1/2ω_xxx^1/2
≤ CN_0^1/2N_1^1/2ϵ_0^1/3(ϵ_0 + C_0)^2/3(1+τ)^-2 + CN_1ϵ_0^1/4(ϵ_0 + C_0)^3/4(1+τ)^-2.
Applying Lemma <ref>, (<ref>) and Proposition <ref> gives
((ρ -ρ_*)M)_x ≤(ρ -ρ_*)_xM + (ρ -ρ_*)M_x
≤M_∞(ρ -ρ_*)_x + (ρ -ρ_*)_∞M_x
≤ CM^1/2M_x^1/2(ρ -ρ_*)_x + C(ρ -ρ_*)^1/2(ρ -ρ_*)_x^1/2M_x
≤ CN_0^1/2N_1^1/2ϵ_0(1+τ)^-1.
Similarly,
((n -n_*)m)_x≤ CN_0^1/2N_1^1/2ϵ_0(1+τ)^-1.
Substituting these estimates into (<ref>) yields
I_3 ≤ C(ϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_1 + CN_0^1/2N_1^1/2ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ N_1ϵ_0^1/4(ϵ_0 + C_0)^3/4 + N_0^1/2N_1^1/2ϵ_0)∫_0^t e^-c(t-τ)(1+τ)^-3/4 dτ
≤ C[ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ](N_0 + N_1)(1+t)^-3/4.
Putting (<ref>), (<ref>), (<ref>), and (<ref>) together, we have
∫_0^t(iξ)e^(t-τ)AĜ(Û)(ξ,τ) dτ
≤ C[N_0^2 + N_1^2 + [ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0]
(N_0 + N_1)](1+t)^-3/4.
Substituting (<ref>), (<ref>) into (<ref>) gives
U_x ≤ C[ϵ_0 + N_0^2 + N_1^2 + (ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ) (N_0 + N_1)](1+t)^-3/4,
equivalently,
(1+t)^3/4U_x ≤ C[ϵ_0 + N_0^2 + N_1^2 + (ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ) (N_0 + N_1)]
taking the supremum, we have
N_1 ≤ C[ϵ_0 + N_0^2 + N_1^2 + (ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ) (N_0 + N_1)].
Adding (<ref>) with (<ref>), we obtain
N_0 + N_1 ≤ C[ϵ_0 + N_0^2 + N_1^2 + (ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ) (N_0 + N_1)]
≤ Cϵ_0 + C(N_0 + N_1)^2 + (ϵ_0^1/2(ϵ_0 + C_0)^1/2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/4(ϵ_0 + C_0)^3/4 + ϵ_0 ) (N_0 + N_1).
Choosing ϵ_0, N_0, N_1 suitably small, we arrive at
N_0 + N_1 ≤ Cϵ_0.
Taking k=2, we have
I_2 = C∫_t/2^t(1+t-τ)^-3/4((mu)_xx_L^1 + ((ρ - ρ_*)^2)_xx_L^1 + (Mω)_xx_L^1
+ (n_xω)_xx_L^1 + ((ρ -ρ_*)M)_xx_L^1 + ((n -n_*)m)_xx_L^1) dτ.
By Cauchy-Schwarz inequality and Proposition <ref> one gets
(mu)_xx_L^1 ≤m_xxu_L^1 + 2m_xu_x_L^1 + mu_xx_L^1
≤um_xx + 2m_xu_x + mu_xx
≤ϵ_0N_2(1+τ)^-3/2 + Cϵ_0^2(1+τ)^-3/2.
Similarly,
((ρ - ρ_*)^2)_xx_L^1≤ϵ_0N_2(1+τ)^-3/2 + Cϵ_0^2(1+τ)^-3/2,
(Mω)_xx_L^1≤ϵ_0N_2(1+τ)^-3/2 + Cϵ_0^2(1+τ)^-3/2,
((ρ -ρ_*)M)_xx_L^1≤ϵ_0N_2(1+τ)^-3/2 + Cϵ_0^2(1+τ)^-3/2,
((n -n_*)m)_xx_L^1≤ϵ_0N_2(1+τ)^-3/2 + Cϵ_0^2(1+τ)^-3/2.
With the aid of Cauchy-Schwarz inequality, Lemma <ref>, Proposition <ref> and Theorem <ref>, we get
(n_xω)_xx_L^1 ≤n_xxxω_L^1 + 2n_xxω_x_L^1 + n_xω_xx_L^1
≤n_xxxω + 2n_xxω_x + n_xω_xx
≤ωn_x^1/3n_xxxx^2/3 + 2n_xxω_x + n_xω_xx
≤ Cϵ_0^1/3(ϵ_0 + C_0)^2/3(1+τ)^-21/12 + Cϵ_0N_2(1+τ)^-2.
Substituting these estimates into (<ref>) yields
I_2 ≤( ϵ_0N_2 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3)∫_t/2^t(1+t-τ)^-3/4(1+τ)^-5/4 dτ
≤( ϵ_0N_2 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3)(1+τ)^-5/4.
When k=2,
I_3 = C∫_0^t e^-c(t-τ)((mu)_xxx + ((ρ - ρ_*)^2)_xxx
+ (Mω)_xxx + (n_xω)_xxx + ((ρ -ρ_*)M)_xx + ((n -n_*)m)_xx) dτ.
Applying Lemma <ref>, <ref> and Proposition <ref> gives
(mu)_xxx ≤m_xxxu + 3m_xxu_x + 3m_xu_xx + mu_xxx
≤u_∞m_xxx + 3 u_x_∞m_xx + 3 m_x_∞u_xx + m_∞u_xxx
≤ Cu^1/2u_x^1/2m_xxx + Cu_x^1/2u_xx^1/2m_xx
+ Cm_x^1/2m_xx^1/2u_xx +
Cm^1/2m_x^1/2u_xxx
≤ Cϵ_0^1/3(ϵ_0 + C_0)^2/3N_0^1/2N_1^1/2(1+τ)^-2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1^1/2N_2^1/2(1+τ)^-2.
Similarly,
((ρ - ρ_*)^2)_xxx ≤ Cϵ_0^1/3(ϵ_0 + C_0)^2/3N_0^1/2N_1^1/2(1+τ)^-2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1^1/2N_2^1/2(1+τ)^-2
(Mω)_xxx ≤ Cϵ_0^1/3(ϵ_0 + C_0)^2/3N_0^1/2N_1^1/2(1+τ)^-2 + Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_1^1/2N_2^1/2(1+τ)^-2.
By Lemma <ref>, Proposition <ref> and <ref>, we can obtain
(n_xω)_xxx ≤n_xxxxω + 3 n_xxxω_x + 3n_xxω_xx + n_xω_xxx
≤ Cω^1/2ω_x^1/2n_xxxx + Cω_x^1/2ω_xx^1/2n_xxx
+ Cn_xx^1/2n_xxx^1/2ω_xx + Cn_x^1/2n_xx^1/2ω_xxx
≤ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2(1+τ)^-19/8 + Cϵ_0^1/3(ϵ_0 + C_0)^2/3N_1^1/2N_2^1/2(1+τ)^-5/2
+ Cϵ_0^1/4(ϵ_0 + C_0)^3/4N_2(1+τ)^-5/2.
By Lemma <ref>, Theorem <ref>, Proposition <ref> and <ref>, we deduce that
((ρ -ρ_*)M)_xx ≤(ρ - ρ_*)_xxM + 2(ρ - ρ _*)_xM_x + (ρ -ρ_*)M_xx
≤M_∞(ρ - ρ_*)_xx + 2(ρ - ρ_*)_x_∞M_x + (ρ -ρ_*)_∞M_xx
≤ CM^1/2M_x^1/2(ρ - ρ_*)_x^1/2(ρ - ρ_*)_xxx^1/2
+ C(ρ - ρ_*)_x^1/2(ρ - ρ_*)_xx^1/2M_x + C(ρ -ρ_*)^1/2(ρ -ρ_*)_x^1/2M_xx
≤ Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2(1+τ)^-3/2 + Cϵ_0N_1^1/2N_2^1/2(1+τ)^-3/2.
Similarly,
((n -n_*)m)_xx ≤ Cϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2(1+τ)^-3/2 + Cϵ_0N_1^1/2N_2^1/2(1+τ)^-3/2.
Substituting these estimates into (<ref>), we have
I_3 ≤(ϵ_0^1/3(ϵ_0 + C_0)^2/3N_0^1/2N_1^1/2 + ϵ_0^1/2(ϵ_0 + C_0)^1/2N_1^1/2N_2^1/2
+ ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2 + Cϵ_0^1/3(ϵ_0 + C_0)^2/3N_1^1/2N_2^1/2
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2N_0^1/2N_1^1/2 + ϵ_0N_1^1/2N_2^1/2)∫_0^t e^-c(t-τ)(1+τ)^-5/4 dτ
≤[(ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2)
+ ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2](1+τ)^-5/4.
Combining (<ref>), (<ref>), (<ref>) and (<ref>), we have
∫_0^t(iξ)^ke^(t-τ)AĜ(Û)(ξ,τ) dτ
≤[ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2)
+ ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2](1+t)^-5/4.
Plugging (<ref>), (<ref>) into (<ref>), we obtain
U_xx ≤ C[ϵ_0 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2) + ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2](1+t)^-5/4,
that is,
(1+t)^5/4U_xx ≤ C[ϵ_0 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2) + ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2],
taking the supremum, we have
N_2 ≤ C[ϵ_0 + C_0 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2) + ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2].
Combining (<ref>) and (<ref>) yields
N_0 + N_1 + N_2 ≤ C[ϵ_0 + C_0 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2) + ϵ_0^1/2(ϵ_0 +C_0)N_0^1/2]
≤ C[ϵ_0 + ϵ_0^2 + ϵ_0^1/3(ϵ_0 + C_0)^2/3 + (ϵ_0 + ϵ_0^1/3(ϵ_0 + C_0)^2/3
+ ϵ_0^1/2(ϵ_0 + C_0)^1/2)(N_0 + N_1 + N_2)] + ϵ_0^1/2(ϵ_0 +C_0)(N_0 + N_1 + N_2)^1/2.
Owing to the smallness of ϵ_0 and Young's inequality, we obtain
N_0 + N_1 + N_2 ≤ C(ϵ_0 + C_0).
We now prove the decay estimates for ∂_x^k(u-ω) in Theorem <ref>. For this we recall (<ref>), which implies that
u - ω≤ e^-ct(u_0 - ω_0) + ∫_0^t e^-c(t - τ)R(x,τ) dτ,
where c is a positive constant and R is defined as in (<ref>). To estimate R, by virtue of (<ref>) and (<ref>), we obtain
v_x≤ Cϵ_0(1+τ)^-3/4,
uu_x≤ Cϵ_0^2(1+τ)^-5/4,
vv_x≤ Cϵ_0^2(1+τ)^-5/4,
ωω_x≤ Cϵ_0^2(1+τ)^-5/4,
ω_xx≤ C(ϵ_0 + C_0)(1+τ)^-5/4,
1/nn_x≤ Cϵ_0(1+τ)^-3/4,
1/nn_xω_x≤ C(ϵ_0 + C_0)^2(1+τ)^-7/4,
(n-n_*)(u-ω)≤ Cϵ_0(1+τ)^-1/2u-ω,
(ρ -ρ_*)(u-ω)≤ Cϵ_0(1+τ)^-1/2u-ω,
therefore, we can get
R≤(ϵ_0 + C_0 + C_0^2)(1+τ)^-3/4
+ Cϵ_0(1+τ)^-1/2u-ω.
Plugging (<ref>) into (<ref>), we obtain
u - ω≤ e^-ct(u_0 - ω_0) + (ϵ_0 + C_0 + C_0^2)(1+t)^-3/4
+ Cϵ_0(1+t)^-5/4sup_τ∈[0,t](1+τ)^3/4u-ω
≤ C(ϵ_0 + C_0 + C_0^2)(1+t)^-3/4 + Cϵ_0(1+t)^-5/4sup_τ∈[0,t](1+τ)^3/4u-ω,
furthermore
(1+t)^3/4u - ω≤ C(ϵ_0 + C_0 + C_0^2) + Cϵ_0(1+t)^-1/2sup_τ∈[0,t](1+τ)^3/4u-ω,
because of the smallness of ϵ_0, we have
u - ω≤ C(ϵ_0 + C_0)(1+t)^-3/4.
Similarly,
(u - ω)_x≤ e^-ct(u_0 - ω_0)_x + ∫_0^t e^-c(t - τ)R(x,τ)_x dτ
To estimate R_x, by virtue of (<ref>) and (<ref>), we obtain
v_xx ≤ C(ϵ_0 + C_0)(1+τ)^-5/4,
(uu_x)_x ≤ C(ϵ_0 + C_0)^2(1+τ)^-7/4,
(vv_x)_x ≤ C(ϵ_0 + C_0)^2(1+τ)^-7/4,
(ωω_x)_x ≤ C(ϵ_0 + C_0)^2(1+τ)^-7/4,
ω_xxx ≤ C(ϵ_0 + C_0)(1+τ)^-3/2,
(1/nn_xω_x)_x ≤ C(ϵ_0 + C_0)^2(1+τ)^-7/4,
((n-n_*)(u-ω))_x ≤(n-n_*)_x(u-ω) + (n-n_*)(u-ω)_x
≤ C(ϵ_0 + C_0)(ϵ_0 + C_0 + C_0^2)(1+τ)^-7/4
+ Cϵ_0(1+τ)^-1/2(u-ω)_x,
therefore, we can get
R_x ≤ C(ϵ_0 + C_0 + C_0^2 + (ϵ_0 + C_0)(ϵ_0 + C_0 + C_0^2))(1+τ)^-5/4
+ Cϵ_0(1+τ)^-1/2(u-ω)_x.
Plugging (<ref>) into (<ref>) we can obtain
(u - ω)_x ≤ e^-ct(u_0 - ω_0)_x + C(ϵ_0 + C_0 + C_0^2 + (ϵ_0 + C_0)(ϵ_0 + C_0 + C_0^2))(1+t)^-5/4
+ Cϵ_0(1+τ)^-7/4sup_0≤τ≤ t[(1+t)^5/4(u-ω)_x],
that is,
(1+t)^5/4(u - ω)_x
≤ C(ϵ_0 + C_0 + C_0^2 + (ϵ_0 + C_0)(ϵ_0 + C_0 + C_0^2))
+ Cϵ_0(1+t)^-1/2sup_0≤τ≤ t[(1+τ)^5/4(u-ω)_x].
Because of the smallness of ϵ_0, we obtain
(u - ω)_x≤ C(ϵ_0 + C_0)(1+t)^-5/4,
This completes the proof of Theorem <ref>.
aabe 2003 S. Berres, R. Bürger, K. Karlsen and E. Tory, Strongly degenerate parabolic-hyperbolic systems modeling polydisperse sedimentation with compression. SIAM J. Appl. Math. 64 (2003), pp. 41–80.
C2019 P. Constantin, Theodore D. Drivas, Huy Q. Nguyen, and Federico Pasqualotto, Compressible fluids and active potentials. Ann. Inst. H. Poincaré C Anal. Non Linéaire 37 (2020), pp. 145–180.
cho2004 Y. Cho, H. J. Choe, and H. Kim, Unique solvability of the initial boundary value problems for compressible viscous fluid. J Math Pures Appl, 83 (2004), pp. 243–275.
cho2006 Y. Cho and H. Kim, On classical solutions of the compressible Navier-Stokes equations with nonnegative initial densities. Manuscripta Math. 120 (2006), pp. 91–129.
cho2003 Y. Cho and H. Kim, Strong solutions of the Navier-Stokes equations for isentropic compressible fluids. J Differ Eqs. 190 (2003), pp. 504–523.
choi2016Y.-P. Choi, Global classical solutions and large-time behavior of the two-phase fluid model, SIAM J. Math. Anal. 48 (2016) pp. 3090–3122.
da1981 C M. Dafermos, Can dissipation prevent the breaking of waves? Transactions of the Twenty-Sixth Conference of Army Mathematicians, 187-198, ARO Rep 81, 1. Research Triangle Park, NC: US Army Res Office, 1981
fe2004 E. Feireisl, On the motion of a viscous compressible, and heat conducting fluid. Indiana Univ. Math. J. 53 (2004), pp. 1705–1738.
fe2001 E. Feireisl, A. Novotný, and H. Petzeltòva, On the existence of globally defined weak solutions to the Navier–Stokes equations. J. Math. Fluid. Mech. 3 (2001), pp. 358–392.
ho1987 D. Hoff, Global existence for 1D compressible isentropic Navier–Stokes equations with large initial data. Trans. Amer. Math. Doc. 303 (1987), pp. 169–181.
ho199501 D. Hoff, Global solutions of the Navier-Stokes equations for multidimensional compressible flow with discontinuous initial data. J. Differ Equ., 120 (1995), pp. 215–254.
ho199502 D. Hoff, Strong convergence to global solutions for multidimensional flows of compressible, viscous fluids
with polytropic equations of state and discontinuous initial data. Arch Rational Mech Anal, 132 (1995), pp. 1–14.
ho1998 D. Hoff, Global solutions of the equations of one-dimensional, compressible flow with large data and forces, and with differing end states. Z. Angew. Math. Phys. 49 (1998), pp. 774–785.
hs1998 L. Hsiao, Quasilinear Hyperbolic Systems and Dissipative Mechanisms. (World Scientific, 1998).
hs1992 L. Hsiao and T P. Liu, Convergence to nonlinear diffusion waves for solutions of a system of hyperbolic conservation laws with damping. Comm. Math. Phys., 143 (1992), pp. 599–605.
hs1993 L. Hsiao and T P. Liu, Nonlinear diffusive phenomena of nonlinear hyperbolic systems. Chinese Ann Math Ser B, 14 (1993), pp. 1–16.
hs2000 L. Hsiao and R H. Pan, The damped p-system with boundary effects. Contemporary Mathematics, 255 (2000), pp. 109–123.
hu2005 F M. Huang, P. Marcati and R H. Pan, Convergence to Barenblatt Solution for the Compressible Euler Equations
with Damping and Vacuum. Arch. Ration. Mech. Anal., 176 (2005), pp. 1–24.
hu2006 F M. Huang and R H. Pan, Asymptotic behavior of the solutions to the damped compressible Euler equations with vacuum. J Differ Equations, 220 (2006), pp. 207–233.
hu2003 F M. Huang and R H. Pan, Convergence rate for compressible Euler equations with damping and vacuum. Arch Ration Mech Anal, 166 (2003), pp. 359–376.
hlx-2012 X D. Huang, J. Li and Z P. Xin, Global well-posedness of classical solutions with large oscillations and vacuum to the three-dimensional isentropic compressible Navier-Stokes equations. Comm. Pure Appl. Math., 65 (2012), pp. 549–585.
ji200901 M N. Jiang, L Z. Ruan and J. Zhang, Existence of global smooth solution to the initial boundary value problem for p-system with damping. Nonlinear Anal., 70 (2009), pp. 2471–2479.
ji200902 M N. Jiang and C J. Zhu, Convergence rates to nonlinear diffusion waves for p-system with nonlinear damping on quadrant. Discrete Contin Dyn Syst, 23 (2009), pp. 887–918.
jz-2001 S. Jiang and P. Zhang, On spherically symmetric solutions of the compressible isentropic Navier-Stokes equations.
Comm. Math. Phys., 215 (2001), pp. 559–581.
K1968 Ya. I. Kanel, On a model system of equations for one-dimensional gas motion. Diff. Eq. (in Russian), 4 (1968), pp. 721-734.
K1975 T. Kato, The Cauchy problem for quasi-linear symmetric hyperbolic systems. Arch. Ration. Mech. Anal., 58 (1975), pp. 181-205.
k1976 T. Kato, Perturbation Theory for Linear Operators, 2nd edn. (Springer, 1976).
ka1977 A. V. Kazhikhov and V. V. Shelukhin, Unique global solution with respect to time of initial-boundary value problems for one-dimensional equations of a viscous gas. Prikl. Mat. Mech., 41 (1977), pp. 282–291.
W2022 H-L. Li, T. Wang, and Y. Wang, Wave Phenomena to the Three-Dimensional Fluid-Particle Model. Arch. Ration. Mech. Anal. 243 (2022), pp. 1019–1089.
li1998P.-L. Lions, Mathematical topics in fluid mechanics. vol. 2: Compressible models. (Clarendon Press, Oxford University Press, 1998).
liu1996 T P. Liu, Compressible flow with damping and vacuum. Japan J. Appl. Math., 13 (1996), pp. 25–32.
M1980 A. Matsumura and T. Nishida, The initial value problem for the equations of motion of viscous and heat-conductive gases. J. Math. Kyoto Univ. 20 (1980), pp. 67-104.
ma1980 A. Matsumura and T. Nishida, The initial value problem for the equations of motion of viscous and heat-conductive gases. J Math Kyoto Univ, 20, (1980), pp. 67–104.
ma1979 A. Matsumura and T. Nishida, The initial value problem for the equations of motion of compressible viscous
and heat conductive fluids. Proc Japan Acad Ser A Math Sci, 5 (1979), pp. 337–342.
ma1983 A. Matsumura and T. Nishida, Initial boundary value problems for the equations of motion of compressible viscous and heat-conductive fluids. Comm. Math. Phys. 89 (1983), pp. 445–464.
mar1990 P. Marcati and A. Milani, The one-dimensional Darcy’s law as the limit of a compressible Euler flow. J. Differential Equations, 84 (1990), pp. 129–147.
mar2000 P. Marcati and B. Rubino, Hyperbolic to Parabolic Relaxation Theory for Quasilinear First Order Systems. J. Differential Equations, 162 (2000), pp. 359–399.
na1962 J. Nash, Le problème de Cauchy pour les équations différentielles d'un fluide général. (French) Bull. Soc. Math. France 90 (1962), pp. 487–497.
N1959 L. Nirenberg, On elliptic partial differential equations. Ann. Scuola Norm. Sup. Pisa Cl. Sci., 13 (1959), pp. 115–162.
ni1968 T. Nishida, Global solutions for an initial-boundary value problem of a quasilinear hyperbolic systems. Proc. Japan Acad., 44 (1968), pp. 642–646.
se198601 D. Serre, Solutions faibles globales des équations de Navier–Stokes pour un fluide compressible. C. R. Acad. Sci. Paris Sér. I Math. 303 (1986), pp. 639–642.
ser1959 J. Serrin, On the uniqueness of compressible fluid motion. Arch. Ration. Mech. Anal., 3 (1959), pp. 271–288.
W2003 T. C. Sideris, B. Thomases, and D. Wang, Long time behavior of solutions to the 3D compressible Euler equations with damping. Comm. Partial Differential Equations 28 (2003), pp. 795–816.
se198602 D. Serre, Sur l'équation monodimensionnelle d'un fluide visqueux, compressible et conducteur de chaleur. C. R. Acad. Sci. Paris Sér. I Math. 303 (1986), pp. 703–706.
sh1982 V. V. Shelukhin, Motion with a contact discontinuity in a viscous heat conducting gas. Dinamika Sploshn. Sredy., 57 (1982), pp. 131–152.
sh1983V. V. Shelukhin, Evolution of a contact discontinuity in the baratropic flow of a viscous gas. Prikl. Mat. Mekh. 47 (1983), pp. 870–872.
sh1984 V. V. Shelukhin, On the structure of generalized solutions of the one-dimensional equations of a polytropic viscous gas. Prikl. Mat. Mekh. 48(1984), pp. 912–920.
sh1986 V. V. Shelukhin, Boundary value problems for equations of a baratropic viscous gas with nonnegative initial density. Dinamika Sploshn. Sredy. 74 (1986), pp. 108–125.
so1976 V. A. Solonnikov, The solvability of the initial-boundary value problem for the equations of motion of a viscous compressible fluid. Zap. Naucn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 56 (1976), pp. 128–142, 197. Investigations on linear operators and theory of functions, VI.
tang2021 Y. Wu, H. Tang and Y. Zhang, Large time behavior of solutions to a two phase fluid model in ℝ^3. J. Math. Anal. Appl. 503 (2021), pp. 35Q35.
yang2001 W. Wang and T. Yang, The point-wise estimates of solutions for Euler equations with damping in multi-dimensions. J Differ Equations, 173 (2001), pp. 410–450.
wu2020 G. Wu, Y. Zhang and L. Zou, Optimal large-time behavior of the two-phase fluid model in the whole space. SIAM J. Math. Anal. 52 (2020), pp. 5748–5774.
Z2019 Y. Zeng, L^p time asymptotic decay for general hyperbolic-parabolic balance laws with applications. J. Hyperbolic Differ. Equ. 16 (2019), pp. 663–700.
Z2015 Y. Zeng, Global existence theory for a general class of hyperbolic balance laws. Bulletin, Inst. Math. Acad. Sin. 10 (2015), pp. 143-170.
zhu2003 C J. Zhu, Convergence rates to nonlinear diffusion waves for weak entropy solutions to p-system with damping. Sci China, Ser A, 46 (2003), pp. 562–575.
zhu2006 C J. Zhu and M N. Jiang, L^p-decay rates to nonlinear diffusion waves for p-system with nonlinear damping. Sci China, Ser A, 49 (2006), pp. 721–739.
zhao2001 H J. Zhao, Convergence to strong nonlinear diffusion waves for solutions of p-system with damping. J. Differential Equations, 174 (2001), pp. 200–236.
zh2010 Y H. Zhang, Z. Tan, Existence and asymptotic behavior of global smooth solution for p-system with damping and boundary effect. Nonlinear Anal, 72 (2010), pp. 2499–2513.
|
http://arxiv.org/abs/2307.03224v1 | 20230706180001 | Gravitational Waves, Bubble Profile, and Baryon Asymmetry in the Complex 2HDM | [
"Dorival Gonçalves",
"Ajay Kaladharan",
"Yongcheng Wu"
] | hep-ph | [
"hep-ph"
] |
Dorival Gonçalves^a, Ajay Kaladharan^a, Yongcheng Wu^b,a
[a] Department of Physics, Oklahoma State University, Stillwater, OK, 74078, USA
[b] Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing, 210023, China
E-mail: [email protected]@[email protected]

This study explores the generation of the observed baryon asymmetry of the Universe within the complex Two Higgs Doublet Model (C2HDM) while considering theoretical and current experimental constraints. In our investigation, we analyze critical elements of the Higgs potential to understand the phase transition pattern. Specifically, we examine the formation of the barrier and the uplifting of the true vacuum state, which play crucial roles in facilitating a strong first-order phase transition. Furthermore, we explore the potential gravitational wave signals associated with this phase transition pattern and investigate the parameter space points that can be probed with LISA. Finally, we compare the impact of different approaches to describing the bubble profile on the calculation of the baryon asymmetry. We contrast the typically used kink profile approximation against the explicit solution of the tunneling profile. We find that a non-negligible range of the C2HDM parameter space results in significant discrepancies in the baryon asymmetry estimation between these two approaches. Through an examination of the parameter space, we identify a benchmark point that satisfies the observed baryon asymmetry.
Gravitational Waves, Bubble Profile, and Baryon Asymmetry in the Complex 2HDM
Received: XX, XX, XXXX. Accepted: YY, YY, YYYY. Report Number: NORDITA 2023-020
===================================================================================
§ INTRODUCTION
Understanding the origin of the matter-antimatter asymmetry of the Universe, known as the baryon asymmetry of the Universe (BAU), is a fundamental question in particle physics and cosmology. The asymmetry between baryons and antibaryons in the early Universe can be quantitatively evidenced through the
baryon-to-entropy ratio measurement n_B/s≃ 8.6× 10^-11 <cit.>, exceeding the expected value for a symmetric scenario by several orders of magnitude. As a consequence, the majority of antibaryons underwent annihilation during the thermal history, leaving behind a significant density of baryons in the present Universe. The essential ingredients required for generating this baryon asymmetry are theoretically well understood and encapsulated by the three Sakharov conditions <cit.>. These conditions demand the violation of baryon number, the presence of C and CP violation, and a departure from thermal equilibrium. While the Standard Model (SM) satisfies the requirements for baryon number violation and C violation, it falls short in providing a sufficiently robust source of CP violation. Additionally, the observed Higgs mass of m_h=125 GeV precludes the necessary out-of-equilibrium conditions through a strong first-order phase transition <cit.>. Thus, the quest for baryogenesis requires physics beyond the SM <cit.>.
Among possible extensions, the complex Two-Higgs Doublet Model (C2HDM) can potentially provide both of the missing ingredients: strong first-order electroweak phase transition and additional sources of CP-violation <cit.>. In this work, we explore the phase transition pattern and the feasibility of generating the observed baryon asymmetry within the context of the C2HDM. Central to our investigation is the shape of the Higgs potential, which plays a crucial role in determining the nature of the phase transition. We focus on the formation of the barrier and the upliftment of the true vacuum state, as these factors are instrumental in driving the phase transition from a smooth crossover to a strong first-order transition. Our analysis builds upon previous studies for other new physics extensions <cit.>, where it was observed that the intensity of the phase transition is closely linked to the elevation of the true vacuum relative to the symmetric one at zero temperature. The prevalence of one-loop effects over thermal corrections, particularly when ξ_c>1, enhances the strength of the phase transition <cit.>. However, it should be noted that if the one-loop correction is too large, the universe may become trapped in the electroweak symmetric vacuum, resulting in an incomplete phase transition <cit.>. Consequently, as we will show, a significant portion of parameter points with large ξ_c values become unphysical in this scenario.
The first-order phase transition in the early Universe can generate stochastic gravitational waves (GW) whose characteristic peak frequency is associated with the phase transition temperature. After redshifting to the present time, the GW spectrum would have a peak frequency at the mHz range for the phase transition at the electroweak scale <cit.>. This presents an exciting prospect to probe electroweak phase transition (EWPT) at LISA <cit.>, designed to be sensitive to mHz frequency signals. Hence, we also investigate the parameter space points in C2HDM that can be probed using LISA.
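To illustrate the redshift argument, the peak frequency today follows from scaling the frequency at the transition by the ratio of scale factors; with the standard relation f_0 ≈ 1.65×10^-5 Hz (f_*/H_*)(T_*/100 GeV)(g_*/100)^1/6, an electroweak-scale transition indeed lands in the mHz band. A minimal sketch (the numerical inputs below are illustrative assumptions, not values obtained in this work):

```python
# Minimal sketch: redshift the GW peak frequency from the phase transition to today,
#   f_0 ~ 1.65e-5 Hz * (f_*/H_*) * (T_*/100 GeV) * (g_*/100)^(1/6).
# Inputs (f_*/H_* = 1e2, T_* = 100 GeV, g_* = 100) are illustrative assumptions.

def peak_frequency_today(f_star_over_H_star, T_star_GeV, g_star=100.0):
    """Present-day peak frequency in Hz for a transition at temperature T_star."""
    return 1.65e-5 * f_star_over_H_star * (T_star_GeV / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)

if __name__ == "__main__":
    f0 = peak_frequency_today(f_star_over_H_star=1e2, T_star_GeV=100.0)
    print(f"peak frequency today: {f0:.2e} Hz")   # ~1.7e-3 Hz, in the mHz band probed by LISA
```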
Through an extensive exploration of the parameter space, we note that the C2HDM can describe the observed baryon asymmetry, although only for a limited set of parameter space points. In this regard, we compare two different approaches to describe the bubble profile, a key ingredient in the BAU estimation. The commonly adopted kink profile parameterization <cit.> and the explicit solution for the tunneling equation are examined to assess their impact on the resulting baryon asymmetry. Our analysis reveals relevant deviations in the BAU calculation between these two approaches. While the majority of parameter space points yield similar results using both methods, a notable fraction exhibits significant differences, sometimes varying by several orders of magnitude. To understand these discrepancies, we scrutinize the behavior of the source term in front of the bubble wall, which sheds light on the distinct asymmetry values obtained from the two profile assumptions.
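For reference, the kink parameterization mentioned above approximates the wall profiles by tanh shapes characterized by a wall width L_w, both for the Higgs field and for the CP-violating phase, while the alternative replaces this ansatz by the profile obtained from the explicit solution of the tunneling equation. A minimal sketch of the kink ansatz (the sign convention and the numerical values below are assumptions for illustration only):

```python
# Minimal sketch: kink (tanh) ansatz for the bubble-wall profiles entering the
# transport equations. Convention assumed here: z is the coordinate across the wall,
# with the broken phase reached as z -> -infinity.
import numpy as np

def kink_profile(z, h_n, L_w):
    """Higgs field across the wall: h -> h_n inside the bubble, h -> 0 outside."""
    return 0.5 * h_n * (1.0 - np.tanh(z / L_w))

def phase_profile(z, dtheta, L_w):
    """CP-violating phase varying across the wall by a total amount dtheta."""
    return 0.5 * dtheta * (1.0 - np.tanh(z / L_w))

# Illustrative assumptions: field value, phase variation, and wall width (GeV units).
z = np.linspace(-50.0, 50.0, 5)   # distance from the wall center, in GeV^-1
print(kink_profile(z, h_n=150.0, L_w=10.0))
print(phase_profile(z, dtheta=0.1, L_w=10.0))
```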
The paper is organized as follows. In <ref>, we provide a brief overview of the complex Two Higgs Double Model. <ref> discusses the one-loop finite temperature effective potential. It is followed by a discussion on electroweak phase transition and GW signals in <ref>. In <ref>, we study how the shape of the Higgs potential will affect the EWPT, focusing on the barrier formation and the vacuum upliftment. In <ref>, we present the details for the baryon asymmetry calculation. The results of the BAU are presented in <ref>, where we also contrast the results depending on the bubble profile estimation. Finally, we summarize in <ref>. Details of the parameterization for the C2HDM scan are presented in <ref>.
§ COMPLEX TWO HIGGS DOUBLET MODEL
The two Higgs doublet model (2HDM) lays out a compelling extension of the SM in line with current experimental constraints <cit.>. This work considers CP-violating 2HDM with a softly broken ℤ_2 symmetry. Within this framework, the tree-level potential is given by
V_0(Φ_1,Φ_2) = m_11^2Φ_1^†Φ_1 + m_22^2Φ_2^†Φ_2 - (m_12^2Φ_1^†Φ_2 + h.c.) + λ_1/2(Φ_1^†Φ_1)^2 + λ_2/2(Φ_2^†Φ_2)^2
+ λ_3(Φ_1^†Φ_1)(Φ_2^†Φ_2) + λ_4(Φ_1^†Φ_2)(Φ_2^†Φ_1) + (λ_5/2(Φ_1^†Φ_2)^2 + h.c.),
where the mass term m_12^2 and quartic coupling λ_5 are complex and all other mass terms and quartic couplings are taken to be real. However, one of the phases in m_12^2 and λ_5 can be removed by a phase redefinition of Φ_2. In this work, we always keep m_12^2 real, while λ_5 remains complex at zero temperature. Hence, in this setup there is only one independent physical CP-violating phase. To preclude dangerous tree-level Flavor Changing Neutral Currents (FCNC) <cit.>, we impose a ℤ_2 symmetry softly broken by the m_12^2 term, under which Φ_1→Φ_1 and Φ_2→ -Φ_2. Following electroweak symmetry breaking, the neutral components of Φ_1 and Φ_2 develop non-zero vacuum expectation values (VEVs).
Expanding around the VEVs ω_i, the scalar doublets Φ_i can be written as
Φ_1 = [ H_1^+; ω_1 + H_1^0 + i A_1^0/√(2) ] and Φ_2 = e^iω_θ[ H_2^+ + ω_ CB/√(2); ω_2 + H_2^0 + i A_2^0/√(2) ]
where at zero temperature VEVs v_i≡ω_i|_T=0, i=1,2 are linked to SM VEV by v_1^2+v_2^2=v^2≈ (246 GeV)^2.
Whereas any additional source of CP violation must be suppressed at zero temperature (ω_θ|_T=0→ 0) to comply with the stringent electric dipole moment (EDM) constraints <cit.>, the dynamical generation of CP violation at high temperatures offers a potential avenue for the CP-violating mechanism crucial to the success of electroweak baryogenesis.
To account for a more comprehensive scenario, we also incorporate a possible charge-breaking at high temperature, ω_CB. Since a non-zero charge-breaking VEV at zero temperature would lead to massive photons, we impose v_CB=0.
The scalar sector in the CP-violating 2HDM has five physical mass eigenstates: three CP-mixed neutral scalars H_i and one charged scalar pair H^±. The correspondence between mass eigenstates and gauge eigenstates is established by the mixing angle β in CP-odd and charged sectors and another three angles α, α_b and α_c mixing the CP-odd and CP-even scalars:
([ G^±; H^± ]) =
([ c_β s_β; -s_β c_β ]) ([ H_1^±; H_2^± ]), ([ G^0; A ])= ([ c_β s_β; -s_β c_β ])([ A_1^0; A_2^0 ]),
([ H_1; H_2; H_3 ]) = O[ H_1^0; H_2^0; A ]=
[ -s_α c_α_b c_α c_α_b s_α_b; c_α c_α_c + s_α s_α_bs_α_c s_α c_α_c - c_α s_α_b s_α_c c_α_bs_α_c; -c_α s_α_c + s_α s_α_bc_α_c -s_α s_α_c - c_α s_α_b c_α_c c_α_b c_α_c ]([ H_1^0; H_2^0; A ]).
The mixing angle β is defined as t_β≡tanβ = v_2/v_1 (c_β≡cosβ, s_β≡sinβ). We also define c_x ≡cos x and s_x ≡sin x. At zero temperature, the physical parameters in the scalar sector include the VEVs (v_1=vc_β, v_2=vs_β, θ≡⟨ω_θ⟩), the masses of the scalar eigenstates (m_H_i and m_H^±), the mixing angles (α, α_b, and α_c), and m_12^2. Note that, as we mentioned earlier, there is only one physical CP-violating phase, i.e., only one of θ, α_b, and α_c is independent. In this work, we keep α_c as an independent input while calculating θ and α_b from other parameters. Hence, we choose the input parameters to be
v = 246 GeV, t_β, c_β-α, α_c, m_12^2, m_h = 125 GeV, m_H_↑, m_H_↓, m_H^± ,
which match the 9 real parameters in the potential <ref>. Here, m_H_↑ and m_H_↓ represent the masses of heavier and lighter beyond the Standard Model (BSM) neutral scalars, respectively. The detailed mapping between the parameters in <ref> and those in <ref> can be found in <ref>. This parameterization for the CP-violating 2HDM is similar to the scan performed for CP-conserving 2HDM in our earlier works <cit.> in the sense that it provides the scans over all physical BSM scalar masses (m_H_↑, m_H_↓ and m_H^±) and CP-violating angle (α_c).
The phase transition pattern in 2HDM, to a large extent, depends on the masses of additional scalars and corresponding mass splittings <cit.>. Hence, the numerical scan performed over three scalar masses is more suitable than one of the scalar masses written as a function of other scan variables.
Within the Yukawa sector, there are four distinct ℤ_2 charge assignments that effectively preclude tree-level FCNC. In this study, we focus on two specific scenarios: type-I and type-II. In the type-I scenario, all fermions exclusively couple with Φ_2, while in the type-II scenario, only up quarks couple with Φ_2, with down quarks and charged leptons coupling with Φ_1. To thoroughly explore these possibilities, we conduct a random uniform scan, encompassing both type-I and type-II configurations, over the parameter space region
tanβ ∈ (0.8,25) , m_12^2 ∈(10^-3,5× 10^5) GeV^2 , m_H_↑/↓ ∈(30,1500) GeV ,
α_c ∈(-π/2,π/2) , cos (β-α)∈(-0.3,0.3) , m_H^± ∈(150,1500) GeV.
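For orientation, the scan can be reproduced schematically as a uniform draw over the ranges above; the following minimal Python sketch is ours (the interface to ScannerS and the subsequent constraint checks are not shown):

import numpy as np

rng = np.random.default_rng(1)

def draw_c2hdm_point():
    # Uniform draw over the scan ranges quoted in the text.
    return {
        "tan_beta": rng.uniform(0.8, 25.0),
        "m12_sq":   rng.uniform(1e-3, 5e5),      # GeV^2
        "m_H_up":   rng.uniform(30.0, 1500.0),   # GeV
        "m_H_down": rng.uniform(30.0, 1500.0),   # GeV
        "alpha_c":  rng.uniform(-np.pi/2, np.pi/2),
        "cos_b_a":  rng.uniform(-0.3, 0.3),
        "m_H_pm":   rng.uniform(150.0, 1500.0),  # GeV
    }

point = draw_c2hdm_point()
# Enforce the labelling m_H_up >= m_H_down used in the text.
if point["m_H_up"] < point["m_H_down"]:
    point["m_H_up"], point["m_H_down"] = point["m_H_down"], point["m_H_up"]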
We perform the parameter space scan by implementing the parametrization detailed in <ref> in ScannerS <cit.>. Using ScannerS, we impose constraints from perturbative unitarity <cit.>, boundedness from below <cit.>, vacuum stability <cit.>, electroweak precision, and flavor constraints.
EDM constraints are also imposed using the stringent limits from the ACME collaboration <cit.>. Furthermore, constraints from the 125 GeV Higgs boson measurements and additional scalar searches are carried out using HiggsBounds and HiggsSignals <cit.>.
§ ONE-LOOP FINITE TEMPERATURE EFFECTIVE POTENTIAL
We use loop-corrected finite temperature effective potential to determine the dynamics of electroweak symmetry breaking in the early Universe. Along with the tree-level potential V_0 from <ref>, we also include the Coleman-Weinberg potential V_ CW and counterterms V_ CT that encode one-loop corrections at zero temperature, and finite-temperature corrections V_T. The effective potential is given by
V_ eff = V_0 + V_ CW + V_ CT+ V_T .
The Coleman-Weinberg potential in the Landau gauge can be written, using MS renormalization prescription as <cit.>
V_ CW = ∑_i n_i/64π^2m_i^4(Φ_1,Φ_2)[log(m_i^2(Φ_1,Φ_2)/μ^2)-c_i] ,
where the index i runs over all particles in the thermal bath with field-dependent mass m_i(Φ_1,Φ_2), including Higgs bosons, massive gauge bosons, Goldstone bosons, longitudinal photon, and fermions. The parameter n_i represents the number of degrees of freedom for each particle, with n_i>0 for bosons and n_i<0 for fermions. In the MS renormalization procedure, the coefficient c_i takes the value of 5/6 for gauge bosons and 3/2 otherwise. Moreover, we set the renormalization scale μ to the zero-temperature VEV, μ=v(T=0)≈ 246GeV.[A renormalization group improved calculation can be taken into account for a further refined estimation <cit.>. For the renormalization scale μ^2 dependence of effective potential at finite temperature, we refer to Ref. <cit.>.]
The one-loop effects of the Coleman-Weinberg potential result in shifts of the mixing angle and scalar masses from their tree-level values. To perform a consistent parameter scan, we adopt an on-shell renormalization scheme, which enforces the parameters to match their tree-level values <cit.>, by proper counterterms determined according to
∂_ϕ_i(V_CW+V_CT)|_ω=ω_tree=0 ,
∂_ϕ_i∂_ϕ_j(V_CW+V_CT)|_ω=ω_tree=0 ,
where ϕ_i (i=1,...,8) represents scalar components from the Φ_1 and Φ_2 doublets, ω denotes the ω_i values, and ω_tree characterizes the minimum of the tree-level potential for the fields in Φ_1 and Φ_2. The first and second derivatives of V_ CW are consistently defined with an analytical expression in Ref. <cit.>. The first renormalization condition, given by <ref>, ensures that the minimum of the effective potential is not shifted from tree-level minimum, and the second condition, shown in <ref>, guarantees that mixing angles and scalar masses remain the same as their tree-level values.
The one-loop thermal correction V_T in <ref> is given by <cit.>
V_T =T^4/2π^2[
∑_f n_f J_+(m_f^2/T^2)+
∑_𝒱_T n_𝒱_T J_-(m_𝒱_T^2/T^2)
+∑_𝒱_L n_𝒱_L J_-(m_𝒱_L^2/T^2) ]
-T^4/2π^2∑_𝒱_Lπ/6(m̅^3_𝒱_L/T^3-m_𝒱_L^3/T^3) ,
where the sum extends over fermions f and bosons. The bosonic sector can be further divided into two categories: the transverse modes of gauge bosons, represented by 𝒱_T=W_T,Z_T, and the longitudinal modes of gauge bosons and scalars, denoted by 𝒱_L=W_L,Z_L,γ_L,Φ^0,Φ^±. The resummation of the n=0 Matsubara modes of the longitudinal components 𝒱_L leads to thermal corrections in their masses <cit.>. The second line in <ref> corresponds to the Daisy contributions, where m̅_𝒱_L denotes the thermal Debye mass calculated using the Arnold-Espinosa scheme <cit.>.
J_±(x)=∓∫_0^∞ dy y^2 log(1± e^-√(y^2+x^2)) .
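For concreteness, the thermal functions can be evaluated by direct numerical quadrature; the sketch below is ours (not part of BSMPT) and takes the argument x = m^2/T^2, so the square root reads sqrt(y^2 + x), which is the convention behind the arguments in <ref>:

import numpy as np
from scipy.integrate import quad

def J(x, sign):
    """Thermal function: sign=+1 gives J_+ (fermions), sign=-1 gives J_- (bosons).
    Here x = m^2/T^2 (non-negative)."""
    def integrand(y):
        return y**2 * np.log(1.0 + sign * np.exp(-np.sqrt(y**2 + x)))
    val, _ = quad(integrand, 0.0, 50.0)   # integrand decays exponentially
    return -sign * val

# High-temperature checks: J(0, -1) ~ -pi^4/45 ~ -2.165, J(0, +1) ~ -7 pi^4/360 ~ -1.894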
Whereas the effective potential in the electroweak phase transition is subject to theoretical uncertainties stemming from gauge parameter choices <cit.>, Nielsen identities offer a way to construct gauge-independent probes <cit.>. These identities ensure that the gauge dependence cancels out at the extrema of the potential
∂ V_eff(Φ _1,Φ _2,ξ )/∂ξ=-C_i(Φ _1,Φ _2,ξ)∂ V_eff(Φ _1,Φ _2,ξ )/∂ϕ_i ,
where ξ is the gauge fixing parameter. Inspired by the gauge independence guaranteed by Nielsen identities, we employ two distinct methods for phenomenological analyses. The first approach involves calculating the finite-temperature effective potential and performing a numerical scan. The second approach focuses on determining the gauge-invariant vacuum upliftment at T=0. In <ref>, we highlight that the upliftment of the true vacuum relative to the symmetric vacuum at zero temperature serves as an effective probe of the phase transition's strength.
While the first method carries uncertainties associated with gauge parameter choices, the latter approach is gauge invariant, as assured by Nielsen identities <cit.>. It is worth noting that we introduce additional counterterms at one-loop order to preserve the positions of the electroweak vacuum and masses. The agreement between our numerical scan and the profile derived from the vacuum upliftment serves to confirm the reliability of the numerical scan despite its inherent uncertainties.
§ ELECTROWEAK PHASE TRANSITION AND GRAVITATIONAL WAVES
The finite temperature effective potential dictates the phase-transition pattern. The two Higgs doublet model displays both single and multi-step phase transitions. The first-order phase transition occurs through tunneling from false to true vacua. It results in bubbles of the broken phase that pop up and expand in the surrounding region of the symmetric phase, transitioning from the false vacuum to the true vacuum. The tunneling probability is given by <cit.>
Γ (T)≈ T^4 (S_3/2π T )^3/2e^-S_3/T ,
where S_3 represents the three-dimensional Euclidean action associated with the critical bubble formation
S_3=4π∫_0^∞dr r^2 [ 1/2 ( dϕ(r)/dr )^2+V(ϕ,T) ] .
Here, the scalar field ϕ corresponds to the critical bubble profile, which is determined by solving the following differential equation
d^2ϕ/dr^2+2/rdϕ/dr=dV(ϕ,T)/dϕ , with lim_r→∞ϕ(r)=0
and lim_r→ 0dϕ(r)/dr=0.
We utilize the publicly available code CosmoTransitions <cit.> to solve the differential equation and compute the Euclidean action S_3.
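Independently of CosmoTransitions, the standard overshoot/undershoot shooting method for <ref> can be sketched as follows; this assumes a single-field, vectorized potential V(φ) at fixed temperature with the false vacuum at φ=0 and the true vacuum at φ_true>0, and all names are ours:

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def bounce(V, dV, phi_true, r_max=400.0, n_bisect=60):
    """Shooting solver for phi'' + (2/r) phi' = dV/dphi, phi'(0)=0, phi(inf)=0.
    Returns (r, phi, S_3). Illustrative sketch only; r_max must be large enough
    that one of the two stopping events fires for every trial release point."""
    def rhs(r, y):
        phi, dphi = y
        return [dphi, dV(phi) - 2.0 * dphi / max(r, 1e-12)]

    def overshoot(r, y):      # phi crosses past the false vacuum at 0
        return y[0]
    overshoot.terminal, overshoot.direction = True, -1.0

    def undershoot(r, y):     # phi turns around before reaching 0
        return y[1]
    undershoot.terminal, undershoot.direction = True, 1.0

    lo, hi = 0.0, phi_true    # bracket for the release point phi(0)
    for _ in range(n_bisect):
        phi0 = 0.5 * (lo + hi)
        sol = solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0],
                        events=[overshoot, undershoot], rtol=1e-10, atol=1e-12)
        if sol.t_events[0].size > 0:
            hi = phi0         # overshoot: release closer to the false vacuum
        else:
            lo = phi0         # undershoot (or still rolling): release higher
    r, (phi, dphi) = sol.t, sol.y
    # S_3 with the false-vacuum energy subtracted, so the integral converges.
    integrand = 4.0 * np.pi * r**2 * (0.5 * dphi**2 + V(phi) - V(0.0))
    return r, phi, trapezoid(integrand, r)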
The first-order phase transition is considered to be completed around the nucleation temperature T_n, which corresponds to the point where one bubble nucleates per unit horizon volume <cit.>.
∫_T_n^∞dT/TΓ (T)/H(T)^4=1 .
This condition ensures that the bubbles percolate even in the inflating Universe. For the electroweak phase transition, with a nucleation temperature of approximately T_n ≈ 100 GeV, this condition can be approximated as <cit.>
S_3(T)/T≈ 140 .
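Numerically, T_n can then be located by bracketing the criterion above; a minimal sketch (S3_over_T is a stand-in for the action computation described above):

from scipy.optimize import brentq

def nucleation_temperature(S3_over_T, T_lo, T_hi, target=140.0):
    """Solve S_3(T)/T = target; assumes the target is bracketed on [T_lo, T_hi]."""
    return brentq(lambda T: S3_over_T(T) - target, T_lo, T_hi)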
To preserve the baryon asymmetry generated through electroweak baryogenesis, it is crucial to suppress the sphaleron process inside the bubble. This requires the electroweak symmetry breaking to undergo a strong first-order phase transition <cit.>
ξ_c≡v_c/T_c≳ 1 ,
where v_c≡√(ω_1^2(T_c)+ω_2^2(T_c)+ω_CB^2(T_c))
is the Higgs VEV at the critical temperature T_c. This critical temperature corresponds to the point where the broken and unbroken vacua of the electroweak symmetry are degenerate. The approximate inequality in <ref> indicates the theoretical uncertainty in this condition <cit.>.
The production of stochastic gravitational waves is a significant consequence of a first-order phase transition. These GW originate from three main sources: the collision of vacuum bubbles, fluid motion resembling sound waves in the plasma, and turbulent motion within the plasma. Each source contributes to the GW spectrum, which can be described by numerical functions dependent on two parameters that capture the dynamics of the phase transition at the nucleation temperature T_n <cit.>. The first parameter is α, defined as the ratio of the latent heat released during the phase transition (ϵ) to the energy density of the vacuum radiation (ρ_rad), i.e., α≡ϵ/ρ_rad. The latent heat and the vacuum radiation energy density are expressed as
ϵ = Δ(- V_ eff + T∂ V_ eff/∂ T)_T=T_n and ρ_ rad = π^2/30g_⋆ T_n^4 ,
where Δ represents the difference between the true and false vacua, and
g_⋆ the number of relativistic degrees of freedom in the plasma.
The second important parameter is β/H_n, which characterizes the inverse time duration of the phase transition. This quantity is defined as
β/H_n ≡ T_nd/dT.(S_3/T)|_T=T_n ,
where H_n is the Hubble constant at the nucleation temperature T_n. Detectable GW signals are typically associated with a slow phase transition (small β/H_n) and a large latent heat release (large α).
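Both quantities can be estimated by finite differences once V_eff and S_3 are available numerically; a rough sketch (ours) with hypothetical callables delta_Veff(T), the true-minus-false vacuum free-energy difference, and S3(T):

import numpy as np

def gw_parameters(delta_Veff, S3, Tn, g_star=106.75, dT=1e-3):
    """alpha and beta/H_n at the nucleation temperature Tn.
    Here Delta is taken as (true - false); sign conventions vary in the literature."""
    dV_dT = (delta_Veff(Tn + dT) - delta_Veff(Tn - dT)) / (2.0 * dT)
    eps = -delta_Veff(Tn) + Tn * dV_dT            # latent heat
    rho_rad = np.pi**2 / 30.0 * g_star * Tn**4
    alpha = eps / rho_rad

    s = lambda T: S3(T) / T
    beta_over_H = Tn * (s(Tn + dT) - s(Tn - dT)) / (2.0 * dT)
    return alpha, beta_over_H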
Finally, to assess the detectability of GW signal in the detector, we employ the signal-to-noise ratio (SNR) measure <cit.>
SNR=√(𝒯∫_f_min^f_max d f[h^2Ω_GW(f)/h^2Ω_Sens(f)]^2) ,
where Ω_ Sens represents the sensitivity curve of the considered GW detector <cit.> and 𝒯 corresponds to the mission duration. For our analysis, we adopt the LISA gravitational wave detector as a benchmark, with 𝒯=5 years and a detection threshold of SNR=10 <cit.>.
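Given a tabulated spectrum and sensitivity curve on a common frequency grid, the SNR integral above reduces to a single quadrature; a minimal sketch (ours; the sensitivity curve must be supplied externally):

import numpy as np
from scipy.integrate import trapezoid

def snr(freqs, omega_gw_h2, omega_sens_h2, T_years=5.0):
    """SNR of a stochastic background; both spectra are h^2*Omega on grid `freqs`."""
    T_sec = T_years * 365.25 * 24 * 3600.0
    integrand = (omega_gw_h2 / omega_sens_h2) ** 2
    return np.sqrt(T_sec * trapezoid(integrand, freqs))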
§ BARRIER FORMATION AND VACUUM UPLIFTMENT
Introducing a second Higgs doublet to the SM Higgs sector can alter the behavior of electroweak symmetry breaking from a smooth crossover to a strong first-order phase transition. In Ref. <cit.>, the authors studied the key ingredients that trigger this transmutation in the EWPT by focusing on the barrier formation and upliftment of the true vacuum in the context of CP-conserving 2HDM <cit.>. In this model, the barrier is driven primarily by one-loop corrections and ξ_c can be correlated with Δℱ_0/|ℱ_0^ SM|, a gauge independent parameter calculated at zero temperature. The Δℱ_0/|ℱ_0^ SM| is defined as
Δℱ_0/|ℱ_0^ SM|≡ℱ_0-ℱ_0^ SM/|ℱ_0^ SM|,
where ℱ_0 is the zero-temperature vacuum energy density of the 2HDM defined as
ℱ_0≡ V_ eff(v_1,v_2,T=0)-V_ eff(0,0,T=0),
with ℱ_0^ SM=-1.25 × 10^8 GeV^4.
It is interesting to examine whether the phase transition features of the CP-conserving 2HDM prevail in the CP-violating 2HDM. In <ref> (left panel), we show that the fraction of one-loop contribution to the barrier height is correlated with the zero-temperature vacuum upliftment measure Δℱ_0/ | ℱ_0^ SM|. We observe that the larger the one-loop correction, the higher the value of vacuum upliftment. In particular, this correlation can be seen for ξ_c≳ 1. As one-loop effects are the dominant contributions, we can use Δℱ_0/ | ℱ_0^ SM| to shed light on the properties of the EWPT. We can approximately propose Δℱ_0/ | ℱ_0^ SM| ≳ 0.2 as a minimal condition for a strongly first-order EWPT in the CP-violating 2HDM.
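The measure itself is cheap to evaluate once the zero-temperature effective potential is available; schematically (Veff0 is a hypothetical callable returning V_eff at T=0 in GeV^4):

def vacuum_upliftment(Veff0, v1, v2, F0_SM=-1.25e8):
    """Delta F_0 / |F_0^SM| from the zero-temperature effective potential,
    following the definitions above (F0_SM in GeV^4)."""
    F0 = Veff0(v1, v2) - Veff0(0.0, 0.0)
    return (F0 - F0_SM) / abs(F0_SM)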
In the scenario where the vacuum upliftment measure Δℱ_0/ | ℱ_0^ SM| is extremely large, tunneling from the false vacuum to the true vacuum becomes challenging, translating into <ref> having no solution <cit.>. Thus, the Universe is trapped in a highly energetic electroweak-symmetric vacuum, and such points are unphysical. In <ref> (right panel), we denote these points with the orange color. Most of the parameter points with vacuum upliftment measure Δℱ_0/ | ℱ_0^ SM| ≳ 0.87 exhibit vacuum trapping. The above constraint excludes the bulk of ξ_c>2 points, which would otherwise serve as promising candidates for successful electroweak baryogenesis.
In <ref> (upper-panel), we show the scanned points in the (Δ m_H_↑, Δ m_H_↓) plane color-coded by ξ_c, where m_H_↑ (m_H_↓) represents the mass of the heaviest (lightest) BSM neutral scalar, and
Δ m_H_↑≡ m_H^±-m_H_↑ (Δ m_H_↓≡ m_H^±-m_H_↓). The gray points in the background pass all the theoretical and current experimental constraints. The black points also satisfy the first-order phase transition condition with 0<ξ_c<1. The preference to the region with m_H_↑≈ m_H^± or m_H_↓≈ m_H^± is induced predominantly by electroweak precision measurements <cit.>. The ξ_c>1 points favor a large value of |Δ m_H_↑| or |Δ m_H_↓| because a higher value of Δℱ_0/ | ℱ_0^ SM| requires a larger mass split, similarly to the CP-conserving scenario <cit.>. In <ref> (middle-panel), we present the parameter points in the (Δ m_H_↑, Δ m_H_↓) plane, color-coded by ξ_n. The points marked in orange correspond to locations where vacuum trapping occurs. The majority of parameter points with large values of ξ_c, where m_H_↑≈ m_H^± and m_H^±-m_H_↓>250 GeV, are trapped in the false vacuum state. As a result, the phase transition remains incomplete.
In <ref> (lower-panel), we show parameter points that can be probed by LISA in the (Δ m_H_↑, Δ m_H_↓) plane. The color coding in this case represents the logarithm (base 10) of the signal-to-noise ratio. We focus on points above the SNR threshold, SNR>10. Among these points, we highlight the benchmark point BP4 in <ref>, which serves as an example of a parameter point that can be probed by LISA. For the parameter points with ξ_n>1, 6% of Type-I points show a detectable GW signal by LISA, whereas it is around 2.5% for Type-II. These differences between type-I and type-II scenarios are driven by constraints from flavor physics <cit.>. More concretely, constraints from B-meson decays impose a lower bound on the charged scalar mass requiring m_H^±≳ 580 GeV in the type-II 2HDM. In <ref>, we show the correlation between SNR and zero-temperature vacuum upliftment measure Δℱ_0/ℱ_0^ SM for the Type I parameter points with ξ_n>1. The bulk of parameter points that exhibit strong GW signals are associated with large Δℱ_0/ℱ_0^ SM measure. In most cases, parameter points with Δℱ_0/ℱ_0^ SM<0.4 do not yield a detectable GW signal at LISA.
§ BARYON ASYMMETRY CALCULATION
§.§ Estimation of bubble wall profile
The bubble profile in the radial coordinate can be obtained by solving the tunneling equation <ref>. The baryon asymmetry calculation is performed in the bubble wall coordinate system z, where z=0 denotes the bubble wall. We obtain the position of the bubble wall in the radial coordinate system r_0, where the energy density obtains the maximum value. The energy density U_E is given by
U_E(r)=1/2 [ϕ'(r) ]^2+V_eff ( ϕ(r) ) ,
and the bubble wall coordinate z can be defined as
z=r-r_0.
A key ingredient for the baryogenesis is the complex mass of quarks and leptons, which couples to Φ_2,
m_i(z) = y_i/√(2)ω_2e^-iω_θ≡ |m_i(z)|e^iθ^i(z)
To illustrate these concepts, we present a graphical representation in <ref>. The left panel displays the energy density U_E(r) as a function of the radial distance r, specifically for benchmark point BP1 as defined in <ref>. The position of the bubble wall is identified as the barrier of the tunneling profile. On the right panel, we show the dynamic variation of the CP-violating angle of the top quark with respect to temperature in the broken phase for the BP1. As thermal effects come into play, additional CP violation is induced at higher temperatures. This effect becomes prominent, whereas at zero temperature, it is roughly seven orders of magnitude smaller. The oscillatory behavior observed between temperatures of 20 GeV and 35 GeV arises due to thermal contributions that lead to a change in sign of the CP angle, which we represent in terms of the absolute value |θ_t|.
In the literature, it is a customary practice to parameterize the tunneling profile θ^i(z) by kink profile <cit.>
θ^i(z)=θ^i_brk+θ^i_sym/2-θ^i_brk-θ^i_sym/2tanh ( z/L_W ) ,
where θ_brk^i (θ_sym^i) is the phase at the broken (symmetric) minimum. The thickness of the wall L_W is given by L_W=v_n/√(8 V_b) <cit.> with v_n representing the VEV at EWPT and V_b the height of the barrier that separates the two minima (at the nucleation temperature T_n).
Remarkably, this parameterization displays a tunneling profile that is symmetric with respect to the bubble wall. In Section <ref>, we compare the estimation of the baryon asymmetry of the universe using two different methods: the kink profile and the explicit solution from the bubble profile. In the latter case, the bubble profile is obtained by directly solving the tunneling equation. By comparing the results obtained from these two approaches, we can evaluate the consistency and reliability of the BAU estimation.
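For reference, the kink parameterization and the wall thickness used above are straightforward to code; a short sketch (ours):

import numpy as np

def kink_profile(z, theta_brk, theta_sym, L_w):
    """Kink parameterization of the CP phase across the wall (equation above)."""
    mean = 0.5 * (theta_brk + theta_sym)
    half_diff = 0.5 * (theta_brk - theta_sym)
    return mean - half_diff * np.tanh(z / L_w)

def wall_thickness(v_n, V_b):
    """L_W = v_n / sqrt(8 V_b), with V_b the barrier height at T_n."""
    return v_n / np.sqrt(8.0 * V_b)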
§.§ Semi-classical force method
The baryon asymmetry in the Universe can be estimated using the semi-classical force method. This framework utilizes the existence of a fermion with varying complex mass as it passes through the bubble wall. The particle interaction with the bubble wall can be formalized using the WKB approximation <cit.> or the closed-time-path formalism of thermal field theory <cit.>, where the force acting on the particle is given by
F_z=-(m^2)^'/(2E_0)± s (m^2θ^')^'/(2E_0E_0z)∓θ^' m^2(m^2)^'/(4E_0^3E_0z).
The coordinate z denotes the perpendicular distance from the wall in the rest frame of the wall, with the positive direction of z pointing towards the symmetric phase. E_0 is the conserved wall frame energy of the quasi-particle, E_0z^2=E_0^2-p_∥^2 and (..)^' denotes the derivative with respect to the z coordinate. The first term in <ref> conserves CP, whereas the second and third terms depend on the spin and nature of the particle, with the upper sign solution corresponding to the particle and the lower sign to the antiparticle. Thus, the presence of a non-zero value for θ^' generally indicates the appearance of CP violation <cit.>. Assuming that the kinetic momentum is conserved in collisions, the perturbation δ f_i from the equilibrium density f_i of species i caused by the movement of the bubble wall is given by
f_i=1/(e^β [ γ_W(E_0+v_wp_z)-μ_i ]±1)+δ f_i,
where β=1/T, γ_W= 1/√(1-v_w^2) is the boost factor of the wall, and + (-) refers to fermions (bosons).
In <ref>, the CP even term is first order in derivatives, while the CP odd term is second order in derivatives; thus we can solve the CP even and odd parts separately. Following Ref. <cit.>, we introduce the following definition
μ_i≡μ_i,1e+μ_i,2o+μ_i,2e, δ f_i≡δ f_i,1e+δ f_i,2o+δ f_i,2e.
The evolution of f_i is described by the Boltzmann equation
L[f_i]≡ ( v_g∂_z+ṗ_z∂_p_z )f_i=C[f_i],
where L[f_i] is the Liouville operator and v_g is the group velocity determined by WKB dispersion relation <cit.>
v_g=p_z/E_0 ( 1±θ^' m^2/(2E^2_0E_0z) ).
The C[f_i] is a model-dependent collision integral associated with the interaction rate of the thermal bath <cit.>. The terms in the fluid equation can be written as the average over-phase space of the form <cit.>
⟨ X ⟩=∫d^3p X(p)/∫d^3p f^'_0+(m=0), ⟨p_z/E_0X ⟩=∫d^3p p_z/E_0X(p)/∫d^3p f^'_0+(m=0),
where f^'_0+(m=0) can be written as
f^'_0+(m=0)≡ f_i|_fermion,μ_i=0,δ f_i=0,v_W=0.
Plasma velocities can be defined as
u_i≡⟨p_z/E_0δ f_i ⟩.
The second-order CP odd chemical potential is defined by the difference between the second-order chemical potential of the particle and its anti-particle, and a similar definition follows for corresponding plasma velocities,
μ_i,2≡μ_i,2o-μ̅_i,2o, u_i,2≡ u_i,2o-u̅_i,2o.
The zeroth and first momenta of the collision integral can be written in terms of inelastic rate Γ_inel and total interaction rate Γ_tot by <cit.>
⟨ C[f_i] ⟩=Γ_inel∑μ_i, ⟨p_z/E_0C[f_i] ⟩=-Γ_totu.
For the generation of the baryon asymmetry, the first step is to produce asymmetry in left-handed quarks. We consider the effects of the strong sphaleron process, W-scattering, top Yukawa interaction, helicity flip, and Higgs number violation with the rates Γ_ss, Γ_W, Γ_y, Γ_m, and Γ_h, respectively. The last two processes are relevant only in the broken phase. The transport equations for the chemical potentials of the left-handed top quark, the conjugate of the right-handed top quark, the left-handed bottom quark, the Higgs bosons, and the corresponding plasma velocities are given as follows <cit.>:
* Left-handed top quarks (t)
0 = 3 K_1,t( ∂_z μ_t,2) + 3 K_2,t( ∂_z m_t^2 ) μ_t,2 + 3 ( ∂_z u_t,2)
- 3Γ_y (μ_t,2 + μ_t^c,2 + μ_h,2) - 6Γ_M ( μ_t,2 + μ_t^c,2) - 3Γ_W ( μ_t,2 - μ_b,2)
- 3Γ_ss[ (1+9 K_1,t) μ_t,2 + (1+9 K_1,b) μ_b,2 + (1-9 K_1,t) μ_t^c,2] ,
S_t = -3K_4,t( ∂_z μ_t,2) + 3K̃_5,t( ∂_z u_t,2) + 3K̃_6,t( ∂_z m_t^2 ) u_t,2 + 3Γ_t^tot u_t,2 .
* Charge conjugation of right-handed top quarks (t^c)
0= 3 K_1,t( ∂_z μ_t^c,2) + 3 K_2,t( ∂_z m_t^2 ) μ_t^c,2 + 3 ( ∂_z u_t^c,2)
- 3Γ_y (μ_t,2 + μ_b,2 + 2μ_t^c,2 + 2μ_h,2) - 6Γ_M ( μ_t,2 + μ_t^c,2)
- 3Γ_ss[ ( 1+9 K_1,t) μ_t,2 + (1+9K_1,b) μ_b,2 + (1-9K_1,t) μ_t^c,2]
S_t = -3K_4,t( ∂_z μ_t^c,2) + 3K̃_5,t( ∂ u_t^c,2) + 3K̃_6,t( ∂_z m_t^2) u_t^c,2 + 3Γ_t^tot u_t^c,2 .
* Left-handed bottom quarks (b)
0 = 3 K_1,b(∂_z μ_b,2) + 3 (∂_z u_b,2) - 3Γ_y ( μ_b,2 + μ_t^c,2 + μ_h,2) - 3Γ_W ( μ_b,2 - μ_t,2)
- 3Γ_ss[ ( 1 + 9K_1,t) μ_t,2 + (1+9K_1,b) μ_b,2 + (1-9K_1,t) μ_t^c,2]
0 = -3K_4,b( ∂_z μ_b,2) + 3K̃_5,b(∂_z u_b,2) + 3Γ_b^tot u_b,2 .
* Higgs
0 = 4 K_1,h( ∂_z μ_h,2) + 4( ∂_z u_h,2) - 3Γ_y ( μ_t,2 + μ_b,2 + 2μ_t^c,2 + 2μ_h,2) - 4Γ_hμ_h,2 ,
0 = -4K_4,h( ∂_z μ_h,2) + 4K̃_5,h( ∂_z u_h,2) + 4Γ_h^tot u_h,2 ,
S_t denotes the source term of the top quark that can be written as
S_t=-v_WK_8,t∂_z(m^2_t ∂_z θ)+v_WK_9,t(∂_z θ)m^2_t(∂_z m^2_t).
The source term for the bottom quark can be neglected due to the suppression factor m_b^2/m_t^2 ∼ 10^-3.
Thermal transport coefficients are defined as
K_1,i =-⟨p^2_z/E_0^2∂_E^2f_i,0⟩, K_2,i =⟨∂^2_Ef_i,0/2E_0⟩,
K_4,i =⟨p^2_z/E_0^2∂_Ef_i,0⟩,
K̃_5,i = [ p^2_z/E_0^2∂_Ef_i,0 ],
K̃_6,i = [ E_0^2-p^2_z/2E_0^3∂_Ef_i,0 ],
K_8,i =⟨ | p_z |∂_Ef_i,0/2E_0^2E_0z⟩,
K_9,i =⟨|p_z|/4E_0^3E_0z ( ∂_E f_i,0/E_0-∂^2_E f_i,0 ) ⟩ ,
with the expectation values given by
⟨ X ⟩=∫d^3p X(p)/∫d^3p ∂_Ef_0+(m=0), [ X ]=∫d^3p X(p)/∫d^3p f_i,0,v_W=∫d^3p X(p)/∫d^3p f_i,0|_v_W=0,
and the distribution function defined as
f_i,0=f_i|_μ_i=0,δ f_i=0,v_w=0,
f_0+= f_i|_fermion,μ_i=0,δ f_i=0,v_w=0,
f_i,0,v_w=f_i,0+v_Wp_z∂_E_0f_i,0.
The third equation in <ref> is a Taylor expansion; hence, it is valid only for small values of v_W. The transport equation with full dependence on the wall velocity is provided in Ref. <cit.>. The values of the strong sphaleron rate, top Yukawa rate, Higgs number violating rate, and spin-helicity flipping rate for the top quark are given by <cit.>
Γ_ss =4.9× 10^-4T , Γ_y =4.2×10^-3T ,
Γ_m =m^2_t(z,T)/63T , Γ_h =m^2_W(z,T)/50T ,
where z is the distance from the bubble wall. The W exchange rate can be approximated by the total Higgs interaction rate, Γ_W=Γ_h^tot.
Finally, the asymmetry in left-handed quarks is converted into baryon asymmetry by electroweak sphaleron transition which can be calculated as <cit.>
η_B=n_B/s=405 Γ_ws/4π^2v_W g_⋆ T∫_0^∞dzμ_B_Lexp ( -45Γ_wsz/4v_W ),
where Γ_ws≃ 1× 10^-6T is the weak sphaleron rate estimated by lattice calculation <cit.> and g_⋆≃ 106.75 is the effective degrees of freedom at the electroweak scale. The chemical potential for left-handed quarks μ_B_L is given by
μ_B_L=1/2 ( 1+4K_1,t )μ_t+1/2 ( 1+4K_1,b )μ_b-2K_1,tμ_t^c.
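Once μ_B_L(z) has been obtained from the transport equations, the final integral is a one-dimensional quadrature; a minimal sketch (ours, with μ_B_L tabulated in the symmetric phase z>0):

import numpy as np
from scipy.integrate import trapezoid

def eta_B(z, mu_BL, T, v_w, g_star=106.75):
    """Weak-sphaleron conversion of the left-handed charge into eta_B = n_B/s,
    evaluating the washout integral above on the grid z >= 0."""
    Gamma_ws = 1.0e-6 * T
    prefac = 405.0 * Gamma_ws / (4.0 * np.pi**2 * v_w * g_star * T)
    washout = np.exp(-45.0 * Gamma_ws * z / (4.0 * v_w))
    return prefac * trapezoid(mu_BL * washout, z)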
We solve the top quark transport equations and estimate the baryon asymmetry of the Universe with BSMPT v2 <cit.>.
§ BARYON ASYMMETRY IN THE C2HDM
In this section, we estimate the baryon asymmetry generated via electroweak baryogenesis using the semi-classical force method. A key ingredient in this calculation is the estimation of the bubble profile. As discussed in <ref>, it is usual in the literature to parametrize bubble profile by the kink profile <cit.>. In this section, in addition to deriving the BAU in the C2HDM framework, we pay close attention to the viability of the kink profile by comparing it with the bubble profile obtained by explicitly solving the tunneling equation using CosmoTransitions <cit.>.
In <ref>, we compare the magnitude of baryon asymmetry estimated using these two bubble profiles at the nucleation temperature with the color code representing the distribution probability. First, we observe that the C2HDM can reproduce the observed baryon asymmetry η_obs. However, these points are rare in our parameter space scan. We highlight one of these points in <ref> as benchmark point 2 (BP2) and define it in detail in <ref>.[The VEV-insertion approximation (VIA) <cit.> has been found to produce baryon asymmetry values that are two to three orders of magnitude larger than the semi-classical force method adopted in the current study <cit.>, displaying a larger number of points satisfying η_obs. Several works have raised criticisms about the validity of the approximations used in this alternative method. One particular argument is that the expansion utilized in deriving the source term for the top quark in the VIA approach may encounter limitations due to the substantial mass of the top quark <cit.>. It is important to highlight that recent improvements have been made in treating the source term in the VIA method <cit.>.] Second, we observe in <ref> that for most of the points in the parameter space, the kink profile solution leads to larger values than the profile obtained by solving the tunneling equation. In addition, there is a non-negligible fraction of points where the kink profile overestimates the asymmetries by a few orders of magnitude in comparison to the profile from the solution of the tunneling equation.
In most cases, we can understand the difference in asymmetry using two profiles by looking at the behavior of the source term <ref> in front of the bubble wall. Specifically, the sign of ∂_z θ_t in front of the bubble wall determines the sign of the source term S_t, thereby influencing the overall asymmetry. In most cases, a negative (positive) ∂_z θ_t results in a positive (negative) source term S_t, leading to a positive (negative) asymmetry. The kink profile typically provides a higher value for the top mass around the bubble wall, and thereby a higher magnitude for the source term in <ref>. This feature is illustrated in <ref> using our benchmark point 1 (BP1) as defined in <ref>. Even when the change in phase of the top mass θ_t has a larger magnitude for the tunneling profile, the value of the top mass is higher for the kink profile, and subsequently, the kink profile has a larger asymmetry. Therefore, the behavior of the top mass is the dominant factor in estimating the magnitude of the asymmetry compared to the phase of top mass θ_t.
There are instances where the top mass in the tunneling profile is smaller but does not drop to zero as quickly as in the kink profile. This translates into the source term being active over a larger distance from the wall for the tunneling profile compared to the kink profile. In this case, the magnitude of the asymmetry calculated using the tunneling profile is larger than for the kink profile. The above feature is illustrated in the case of BP2 shown in <ref>, where the asymmetry differs by two orders of magnitude. Once again, we highlight that BP2 can explain the value of the observed baryon asymmetry η_obs when using the explicit solution for the tunneling equation. This characteristic of the tunneling profile permits a significant baryon asymmetry even though the change in the CP-violating phase is relatively small.
Finally, there are instances where even the sign of the derivative for the CP phase ∂_zθ_t near the bubble wall differs between the kink and tunneling profiles. We illustrate this scenario with the benchmark point 3 (BP3) presented in <ref>. Near the bubble wall, θ_t exhibits an increasing trend for the kink profile, while it displays a decreasing trend for the explicit solution for the tunneling profile. Consequently, the asymmetry is positive for the tunneling profile and negative for the kink profile, despite both profiles having identical endpoints. These findings emphasize the importance of accurately determining the bubble profile and highlight the discrepancies that can arise when relying on the kink profile approximation.
§ SUMMARY
In this work, we explored the phase transition pattern and the feasibility of generating the observed baryon asymmetry of the Universe within the C2HDM framework while considering the theoretical and experimental constraints. First, we carefully examined the essential elements in the shape of the Higgs potential, specifically focusing on the formation of the barrier and the upliftment of the true vacuum state. These factors are critical in facilitating the phase transition from a smooth crossover to a strong first-order phase transition. We observe that the intensity of the phase transition is linked to the elevation of the true vacuum relative to the symmetric vacuum state at zero temperature <cit.>. This phenomenon occurs due to the prevalence of one-loop effects over thermal corrections, particularly when ξ_c>1 <cit.>. However, if the vacuum upliftment measure is too large, the universe becomes trapped in the false vacuum state, leaving no solution for the nucleation temperature <ref>. This renders parameter points with Δℱ_0/ | ℱ_0^ SM| ≳ 0.87 unphysical, which excludes most of the ξ_c>2 points <cit.>. Therefore, in electroweak baryogenesis studies, it is crucial to look at the nucleation temperature T_n and not just at the critical temperature T_c.
When it comes to gravitational wave signals, only a small fraction of the parameter points in the Strong First-Order Electroweak Phase Transition parameter space of the C2HDM can be probed by LISA. However, among the accessible points, those with a higher value of the ξ_c parameter display particularly strong gravitational wave signals. Notably, the Type I parameter space points generally offer more promising gravitational wave signals compared to the Type II parameter points in the C2HDM. These differences can be traced to the more stringent flavor constraints imposed on the Type-II scenario that shape the parameter space.
We note that the C2HDM can describe the observed baryon asymmetry η_obs, albeit for a limited set of parameter space points. One specific point, BP2, was highlighted as a benchmark that satisfied the observed asymmetry value. Furthermore, we contrasted the impact on the baryon asymmetry calculation of two different approaches to describe the bubble profile, namely the usually adopted kink profile parameterization and the explicit solution for the tunneling equation. Our objective was to assess the dependence of the resulting value of η_B on these two approaches and evaluate their respective contributions to the baryon asymmetry calculation. We found that the majority of points in our parameter space scan yield similar results from both approaches. Nonetheless, a non-negligible portion of points exhibits significant discrepancies between these two methods. Specifically, the kink profile approximation often displays higher asymmetry values compared to the explicit solution obtained from the tunneling equation. In some cases, the discrepancy reached several orders of magnitude. The difference in the asymmetry value for the two profiles was scrutinized in terms of the behavior of the source term in front of the bubble wall.
Undoubtedly, the task of achieving a baryon asymmetry of the universe that aligns with the observed value poses significant challenges. The requirements of a strong first-order electroweak phase transition, substantial CP violation, and stringent theoretical and experimental constraints make the generation of a compatible BAU a formidable task. However, the discrepancies observed in calculations performed using different profile assumptions provide avenues for improving the accuracy of computing the BAU. The comparison between the kink profile and the explicit solution for the tunneling profiles provided valuable insights into the estimation of baryon asymmetry, emphasizing the importance of accurately determining the bubble profile for a more robust analysis of electroweak baryogenesis.
§ ACKNOWLEDGEMENTS
We would like to thank Margarete Mühlleitner and Jonas Wittbrodt for useful discussions about BSMPT and ScannerS. DG, AK, and YW thank the U.S. Department of Energy for the financial support, under grant number DE-SC 0016013. Some computing for this project was performed at the High Performance Computing Center at Oklahoma State University, supported in part through the National Science Foundation grant OAC-1531128.
§ PARAMETRIZATION FOR C2HDM SCAN
In this appendix, we present the detailed parameterization for the C2HDM adopted in our parameter space scan in <ref>. From the following minimization conditions at zero temperature
dV/dω_1 = ω_1(2m_11^2+λ_1ω_1^2+(λ_5^rc_2ω_θ-λ_5^is_2ω_θ)ω_2^2 + (λ_3+λ_4)ω_2^2)-2m_12^2ω_2c_ω_θ/2=0,
dV/dω_2 = ω_2(2m_22^2+λ_2ω_2^2+(λ_5^rc_2ω_θ-λ_5^is_2ω_θ)ω_1^2 + (λ_3+λ_4)ω_1^2)-2m_12^2ω_1c_ω_θ/2=0,
dV/dω_θ = ω_1ω_2(2m_12^2s_ω_θ - ω_1ω_2(λ_5^rs_2ω_θ + λ_5^ic_2ω_θ)) /2 = 0,
we can write the tree-level parameters as
m_11^2 = m_12^2t_β c_θ/c_2θ - 1/2v^2(c_β^2λ_1 + s_β^2(λ_3+λ_4+λ_5^r/c_2θ)),
m_22^2 = m_12^2c_θ/t_β c_2θ - 1/2v^2(s_β^2λ_2 + c_β^2(λ_3+λ_4+λ_5^r/c_2θ)),
λ_5^i = 2m_12^2s_θ/s_β c_β c_2θv^2 - λ_5^r t_2θ.
Note that in the limit θ→ 0, where α_c also goes to zero, we recover the CP-conserving 2HDM.
From the quadratic terms in the potential, we have the following relations for the charged scalar mass and neutral scalar mass matrix:
m_H^±^2 = m_12^2c_θ/s_β c_β c_2θ - 1/2v^2 (λ_4 + λ_5^r/c_2θ),
ℳ^2_N = ([ ℳ_11^2 ℳ_12^2 ℳ_13^2; ℳ_21^2 ℳ_22^2 ℳ_23^2; ℳ_31^2 ℳ_32^2 ℳ_33^2 ]),
ℳ_11^2 = m_12^2t_β c_θ + λ_1v^2c_β^2,
ℳ_22^2 = m_12^2c_θ/t_β + λ_2v^2s_β^2,
ℳ_33^2 = m_12^2/2s_β c_β c_2θ(3c_θ - c_3θ) - λ_5^rv^2/c_2θ,
ℳ_12^2 = ℳ_21^2 = 1/2(m_12^2(c_3θ-3c_θ)/c_2θ + s_2β(λ_3+λ_4 + λ_5^r/c_2θ)v^2),
ℳ_13^2 = ℳ_31^2 = -m_12^2s_θ/c_β,
ℳ_23^2 = ℳ_32^2 = -m_12^2s_θ/s_β.
From m_H^±^2 and ℳ, we obtain the expressions for λ_1,⋯,4,λ_5^r in terms of the physical parameters in <ref>:
λ_1 = ∑_i O_i1^2m_i^2/c_β^2 v^2 - m_12^2t_β c_θ/c_β^2 v^2,
λ_2 = ∑_i O_i2^2m_i^2/s_β^2 v^2 - m_12^2c_θ/t_β s_β^2 v^2,
λ_3 = ∑_i O_i1O_i2m_i^2/s_β c_β v^2 - m_12^2c_θ/s_β c_β v^2 + 2m_H^±^2/v^2,
λ_4 = ∑_i O_i3^2m_i^2/v^2 + m_12^2c_θ/s_β c_β v^2 - 2m_H^±^2/v^2,
λ_5^r = -c_2θ∑_i O_i3^2m_i^2/v^2 + m_12^2(3c_θ-c_3θ)/s_2βv^2,
where O is the rotation matrix in <ref> that diagonalizes ℳ_N^2, and m_i^2 for i=1,2,3 are the mass eigenvalues of ℳ_N^2 and can be identified with m_h, m_H_↑ and m_H_↓.
Note that the determinant of matrix O in <ref> is -1, which is chosen such that the definition of α follows the same convention as the counterpart in CP-conserving 2HDM.[To match the convention in ScannerS, extra permutations and multiplications will be added to the rotation matrix.]
The parameters α_b and θ can be obtained by using ℳ_13^2 and ℳ_23^2,
s_α_b = s_2α_c(m_2^2-m_3^2)/2(m_1^2-m_2^2s_α_c^2-m_3^2c_α_c^2)t_α+β,
s_θ = -c_β/m_12^2∑_i m_i^2 O_i1O_i3.
With the three mixing angles α, α_b, and α_c, we can evaluate the rotation matrix O in <ref> and subsequently obtain λ's using <ref>. In this parametrization we choose β, α and α_c as independent parameters. The remaining m_11^2, m_22^2 and λ_5^i can be calculated using <ref>.
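A direct transcription of the rotation matrix and the coupling relations above into code may help the reader check the parametrization; the sketch below (ours) assumes that α, α_b, α_c, θ and the assignment of (m_1, m_2, m_3) to (m_h, m_H_↑, m_H_↓) are already consistent with the relations above:

import numpy as np

def mixing_matrix(a, ab, ac):
    """Rotation matrix O (rows correspond to H_1, H_2, H_3), as displayed above."""
    sa, ca = np.sin(a), np.cos(a)
    sab, cab = np.sin(ab), np.cos(ab)
    sac, cac = np.sin(ac), np.cos(ac)
    return np.array([
        [-sa * cab,                   ca * cab,                   sab],
        [ ca * cac + sa * sab * sac,  sa * cac - ca * sab * sac,  cab * sac],
        [-ca * sac + sa * sab * cac, -sa * sac - ca * sab * cac,  cab * cac]])

def quartic_couplings(beta, O, m, m_Hpm, m12_sq, theta, v=246.0):
    """lambda_1..4 and lambda_5^r from the physical inputs (formulas above)."""
    sb, cb, tb = np.sin(beta), np.cos(beta), np.tan(beta)
    ct, c2t, c3t = np.cos(theta), np.cos(2*theta), np.cos(3*theta)
    m2 = np.asarray(m)**2
    l1 = np.sum(O[:, 0]**2 * m2) / (cb**2 * v**2) - m12_sq * tb * ct / (cb**2 * v**2)
    l2 = np.sum(O[:, 1]**2 * m2) / (sb**2 * v**2) - m12_sq * ct / (tb * sb**2 * v**2)
    l3 = (np.sum(O[:, 0] * O[:, 1] * m2) / (sb * cb * v**2)
          - m12_sq * ct / (sb * cb * v**2) + 2 * m_Hpm**2 / v**2)
    l4 = np.sum(O[:, 2]**2 * m2) / v**2 + m12_sq * ct / (sb * cb * v**2) - 2 * m_Hpm**2 / v**2
    l5r = (-c2t * np.sum(O[:, 2]**2 * m2) / v**2
           + m12_sq * (3*ct - c3t) / (np.sin(2*beta) * v**2))
    return l1, l2, l3, l4, l5r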
|
http://arxiv.org/abs/2307.01825v1 | 20230704165642 | Self-similar solution for fractional Laplacian in cones | [
"Krzysztof Bogdan",
"Piotr Knosalla",
"Łukasz Leżaj",
"Dominika Pilarczyk"
] | math.PR | [
"math.PR",
"Primary: 60G18, 60J35, secondary: 60G51, 60J50"
] |
K. Bogdan, P. Knosalla, Ł. Leżaj and D. Pilarczyk]Krzysztof Bogdan^1,*, Piotr Knosalla^2, Łukasz Leżaj^1,† and Dominika Pilarczyk^1,
^1Wrocław University of Science and Technology, Faculty of Pure and Applied Mathematics, wyb. Wyspiańskiego 27, 50-370 Wrocław, Poland
^*mailto:[email protected]@pwr.edu.pl, ^†[email protected]@pwr.edu.pl, ^[email protected]@pwr.edu.pl
^2University of Opole, Institute of Physics, ul. Oleska 48, 45-052 Opole, Poland
[email protected]@uni.opole.pl
Krzysztof Bogdan was partially supported by the National Science Centre (Poland):
grant 2017/27/B/ST1/01339. Łukasz Leżaj was partially supported by the National Science Centre (Poland): grant 2021/41/N/ST1/04139.
2020 MSC: Primary 60G18, 60J35; secondary 60G51, 60J50.
We construct a self-similar solution of the heat equation
for the fractional Laplacian with Dirichlet boundary conditions
in every fat cone. As applications, we give the Yaglom limit and entrance law for the corresponding killed isotropic stable Lévy process and precise large-time asymptotics for solutions of the Cauchy problem in the cone.
Self-similar solution for fractional Laplacian in cones
August 1, 2023
=======================================================
§ INTRODUCTION
Let d∈ℕ:={1,2,…}. Consider an arbitrary non-empty open set Γ⊂ℝ^d such that ry ∈Γ whenever y ∈Γ and r>0. Thus, Γ is a non-empty open cone in ℝ^d.
Let α∈ (0,2). For the fractional Laplacian Δ^α/2 we consider the Dirichlet heat kernel of the cone, p_t^Γ(x,y), t>0, x,y∈Γ. In other words, p^Γ is the transition density of the isotropic α-stable Lévy process in ℝ^d killed upon leaving Γ. Let M_Γ:ℝ^d→ [0,∞) be the Martin kernel of Γ with the pole at infinity (for definitions, see Section <ref>). The function M_Γ is homogeneous (or self-similar) of some degree β∈ [0,α).
Our first result captures the asymptotics of p_t^Γ at the vertex 0 of Γ:
If the cone Γ is fat, then for s,t >0, x ∈Γ,
Ψ_t(x):=lim_Γ∋ y → 0 p_t^Γ(x,y)/M_Γ(y)∈ (0,∞),
Ψ_t(x) = t^-(d+β)/αΨ_1(t^-1/αx),
and
∫_ΓΨ_s(y)p^Γ_t(x,y) dy=Ψ_s+t(x).
The proof of Theorem <ref> is given in Section <ref>. In perspective, the result is the next step in the development of the
potential theory of the isotropic α-stable processes after the boundary Harnack principle, Green function, and Dirichlet heat kernel estimates, suggested by the Introduction of Bogdan et al. <cit.>.
In view of (<ref>) and (<ref>), Ψ_t(x) may be called a self-similar semigroup solution of the heat equation for the fractional Laplacian with Dirichlet conditions. The property (<ref>) also
means that Ψ_t is an entrance law for p^Γ at the origin, see, e.g., Blumenthal <cit.>, Haas and Rivero <cit.> or Bañuelos et al. <cit.>.
Furthermore, in Theorems <ref> and <ref> below, we prove the existence of the Yaglom limit for Γ. Similar results were obtained in Bogdan et al. <cit.> for Lipschitz cones. Our approach is different and more versatile than that presented in <cit.>; we are able to cover more general cones, e.g. Γ=ℝ^d∖{0} or Γ=ℝ^2 ∖ ([0,∞) ×{0}), and much more general initial distributions for the Yaglom limit, including distributions with unbounded support.
We next present our second main result.
Let 1 ≤ q ≤∞ and L^q(Γ) := L^q(Γ, dx). For a weight function w>0, we denote L^q(w):= L^q(Γ, w(x) dx). For instance, L^1(M_Γ) = {f: f/M_Γ∈ L^1}. Then, for 1≤ q <∞, we define
‖f‖_q,M_Γ:=‖f/M_Γ‖_L^q(M_Γ^2)
=( ∫_Γ |f(x)|^q M_Γ^2-q(x) dx ) ^1/q
=‖f‖_L^q(M_Γ^2-q) ,
and, for q=∞, we let
‖f‖_∞, M_Γ:= ess sup_x∈Γ |f(x)|/M_Γ(x).
Of course, ‖f‖_1,M_Γ=‖f‖_L^1(M_Γ). For a non-negative or integrable function f we let
P_t^Γf(x):= ∫_Γ p_t^Γ(x,y)f(y) dy, t>0, x ∈Γ.
We say that the cone Γ is smooth if its boundary is C^1,1 outside of the origin, to wit, there is r>0 such that at every boundary point of Γ on the unit sphere 𝕊^d-1, there exist inner and outer tangent balls for Γ, with radii r. Put differently, for d≥ 2, the spherical cap, Γ∩𝕊^d-1, is a C^1,1 subset of 𝕊^d-1. For instance, the right-circular cones (see Section <ref>) are smooth.
The second result describes the large-time asymptotic behavior of the semigroup P_t^Γ.
Let q ∈ [1,∞). Assume that the cone Γ is smooth with β≥α/2. Then for every f ∈ L^1(M_Γ) and A=∫_Γf(x)M_Γ(x) d x we have
lim_t →∞ t^((d+2β)/α)((q-1)/q)‖P_t^Γf-AΨ_t‖_q,M_Γ=0.
Theorem <ref> follows from the more general Theorem <ref>, by means of
Corollary <ref>. As we shall see, the condition β≥α/2 is sharp.
Let us comment on our methods and previous developments in the literature. If Γ=ℝ^d, then β=0, M_Γ=1 and p^Γ_t (x,y)=p_t(x,y) is the transition density of the fractional Laplacian on ℝ^d (see below). In this case, Ψ_t(y)=p_t(0,y) and Theorem <ref> were resolved by Vázquez <cit.>, see also Bogdan et al. <cit.> when κ=0 in <cit.>; see also Example <ref> below. For general cones Γ, the behavior of p_t^Γ is intrinsically connected to properties of M_Γ, see, e.g., Bogdan and Grzywny <cit.>, <cit.>, or Kyprianou et al. <cit.>. The identification of the Martin kernel M_Γ was accomplished by Bañuelos and Bogdan <cit.>. Its crucial property is the homogeneity of order β∈ [0,α), which is also reflected in the behavior of the Green function studied by Kulczycki <cit.> and Michalik <cit.>, at least when Γ is a right-circular cone. As we see in Theorems <ref> and <ref>, the exponent β determines the self-similarity of the semigroup solution and the asymptotic behavior of the semigroup P_t^Γ, too. For more information on β we refer the reader to <cit.> and Bogdan et al. <cit.>.
If Γ is a Lipschitz cone, then Theorem <ref> follows from <cit.>. However, the method presented in <cit.> does not apply to fat cones, in particular, to Γ=∖{0} or Γ = ^2 ∖ ([0,∞) ×{0}), which are intrinsically interesting for α∈ (1,2). Therefore in this work, we follow the approach suggested by <cit.>, where the authors employ a stationary density of an Ornstein-Uhlenbeck type semigroup corresponding to a homogeneous (self-similar) heat kernel. Another key tool in their analysis is the so-called Doob conditioning using an invariant function or the heat kernel; see also Bogdan et al. <cit.>. In the present paper, we study the semigroup (P_t^Γ t ≥ 0) of the α-stable Lévy process killed when exiting the cone Γ. Its kernel is the (Dirichlet heat) kernel p_t^Γ. Although the setting is seemingly different than in <cit.>, due to Theorem <ref> (below) the Martin kernel M_Γ is invariant with respect to P_t^Γ, which allows for Doob conditioning. Then we form the corresponding Ornstein-Uhlenbeck semigroup and prove existence of a stationary density in Theorem <ref> by using the Schauder-Tychonoff fixed-point theorem. As we shall see in the proof of Theorem <ref>, the self-similar semigroup solution Ψ_t is directly expressed by and M_Γ.
In Subsection <ref> we obtain an asymptotic relation between the Martin kernel and the survival probability near the vertex of the cone (see Corollary <ref>). We also obtain a Yaglom limit (quasi-stationary distribution) in Theorem <ref>, which describes the behavior of the stable process starting from a fixed point x ∈Γ and conditioned to stay in the cone, generalizing Theorem 1.1 of <cit.>. In Theorem <ref> we extend both results to every initial distribution with finite moment of order α. Note that once the existence and properties of the stationary density φ are established, the results of Subsection <ref> follow by scaling. Notably, our approach applies to rather general self-similar transition densities, at least when they enjoy sharp positive (upper and lower) bounds and an invariant function exists. For an approach to entrance laws based on fluctuation theory of Markov additive processes, we refer to <cit.>, see also
Chaumont et al. <cit.>.
In passing, we also note that Yaglom limit for random walks in cones is discussed by Denisov and Wachtel <cit.>. For a broad survey on quasi-stationary distributions, we refer to van Doorn and Pollet <cit.>.
Self-similar solutions for general homogeneous semigroups are discussed in Cholewa and Rodriguez-Bernal <cit.>; see Patie and Savov <cit.> for a treatment of generalized Ornstein-Uhlenbeck semigroups, which they call generalized
Laguerre semigroups.
Results related to Theorem <ref>, but for fractal Burgers equation and fractional p-Laplacian can be found in Biler et al. <cit.> and Vázquez <cit.>, respectively.
§ PRELIMINARIES
For x,z ∈ℝ^d, the standard scalar product is denoted by x · z and |z| is the Euclidean norm. For x ∈ℝ^d and r∈ (0,∞), we let B(x,r) = {y ∈ℝ^d: |x-y|<r}, the ball centered at x with radius r, and we write B_r:=B(0,r). All the considered sets, functions and measures are Borel. For non-negative functions f,g, we write f ≈ g if there is a number c∈ (0,∞), i.e., a constant, such that c^-1f≤ g≤ c f, and write
f ≲ g if there is a constant c such that f ≤ cg.
Recall that α∈ (0,2) and let
ν(z) = c_d,α |z|^-d-α, z ∈ℝ^d,
where the constant c_d,α is such that
∫_ℝ^d( 1-cos (ξ· z) ) ν(z) dz = |ξ|^α, ξ∈ℝ^d.
For t>0 we let
p_t(x) := (2π)^-d∫_ℝ^d e^-t|ξ|^α e^-iξ· xdξ, x ∈ℝ^d.
By the Lévy-Khintchine formula, p_t is a probability density function and
∫_ℝ^d e^iξ· x p_t(x) dx
= e^-t|ξ|^α, ξ∈ℝ^d, t >0.
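For illustration (ours), in dimension d=1 the density p_t can be obtained by numerical inversion of this Fourier transform, and the scaling p_t(x) = t^-1/α p_1(t^-1/α x) below can be verified directly:

import numpy as np
from scipy.integrate import quad

def p_t_1d(x, t, alpha):
    """p_t(x) in d=1 by Fourier inversion; the integrand is even, hence the cosine form."""
    val, _ = quad(lambda xi: np.cos(xi * x) * np.exp(-t * xi**alpha),
                  0.0, np.inf, limit=200)
    return val / np.pi

alpha, t, x = 1.5, 2.0, 0.7
lhs = p_t_1d(x, t, alpha)
rhs = t**(-1/alpha) * p_t_1d(t**(-1/alpha) * x, 1.0, alpha)
assert abs(lhs - rhs) < 1e-6   # scaling check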
We consider the isotropic α-stable Lévy process X=(X_t,t≥ 0) in ℝ^d, with
p_t(x,y):=p_t(y-x), x,y ∈ℝ^d, t>0,
as transition density. Thus,
𝔼_x e^iξ· X_t =
∫_ℝ^d e^iξ· y p_t(x,y) dy
= e^iξ· x-t|ξ|^α, ξ∈ℝ^d, x∈ℝ^d, t >0.
The Lévy-Khintchine exponent of X is, of course, |ξ|^α and ν is the intensity of jumps.
By (<ref>),
p_t(x,y) = t^-d/αp_1 (t^-1/αx,t^-1/αy), x,y ∈ℝ^d, t>0,
and
p_t( Tx,Ty) = p_t(x,y), x,y ∈ℝ^d, t>0,
for every isometry T on ℝ^d.
It is well known that
p_t(x,y) ≈ t^-d/α∧ t|y-x|^-d-α, x,y ∈ℝ^d, t>0,
see, e.g., <cit.>.
We then consider the time of the first exit of X from the cone Γ,
τ_Γ := inf{t ≥ 0: X_t ∉Γ},
and we define
the Dirichlet heat kernel for Γ,
p_t^Γ(x,y) := p_t(x,y) -𝔼_x [ τ_Γ<t;p_t-τ_Γ( X_τ_Γ,y) ], x,y ∈Γ, t>0,
see <cit.>.
It immediately follows that p_t^Γ(x,y) ≤ p_t(x,y) for all x,y ∈Γ and t>0. The Dirichlet heat kernel is non-negative, and symmetric: p_t^Γ(x,y)=p_t^Γ(y,x) for x,y ∈Γ, t>0. It satisfies the Chapman-Kolmogorov equations:
p_t+s^Γ(x,y) = ∫_Γp_t^Γ(x,z)p_s^Γ(z,y) dz, x,y ∈Γ, s, t>0.
For nonnegative or integrable functions f we define the killed semigroup
by
P_t^Γ f(x) := 𝔼_x [ τ_Γ>t; f(X_t) ] = ∫_Γ p_t^Γ(x,y)f(y) dy, x ∈Γ, t>0.
In particular, for f ≡ 1 we obtain
the survival probability:
ℙ_x(τ_Γ>t) = ∫_Γ p_t^Γ(x,y) dy, x ∈Γ, t>0,
see <cit.>.
Since t^-1/αΓ=Γ, the scaling (<ref>) extends to the Dirichlet heat kernel:
p_t^Γ(x,y) = t^-d/αp_1^Γ( t^-1/αx,t^-1/αy ), x,y ∈Γ, t>0.
As a consequence,
ℙ_x (τ_Γ>t) = ℙ_t^-1/αx(τ_Γ>1), x ∈Γ, t>0.
Furthermore, by (<ref>),
p_t^TΓ(Tx,Ty) = p_t^Γ (x,y), x,y ∈Γ, t>0.
The operators P_t^Γ and the kernel p_t^Γ (x,y) are the main subject of the paper. In view of (<ref>), without loss of generality we may assume that 𝟏 := (0,…,0,1) ∈Γ.
By <cit.>, there is a unique non-negative function M_Γ on ℝ^d such that M_Γ(𝟏)=1, M_Γ=0 on Γ^c, and for every open bounded set B ⊂Γ,
M_Γ(x) = 𝔼_x M_Γ(X_τ_B), x ∈ℝ^d.
Moreover, M_Γ is locally bounded on ℝ^d and homogeneous of some order β∈ [0,α), i.e.,
M_Γ(x) = |x|^βM_Γ(x/|x|), x ∈Γ.
We call M_Γ the Martin kernel of Γ with the pole at infinity.
By <cit.>, β=α/2 if Γ is a half-space and β=α-1 if Γ=∖{0} and 1<α<2.
By <cit.>, β=(α-1)/2 if Γ=^2 ∖ ([0,∞) ×{0}) and 1<α<2.
Throughout the article, we often assume that Γ is fat, i.e., κ∈ (0,1) exists such that for all Q ∈Γ and r ∈ (0,∞), there is a point A = A_r(Q) ∈Γ∩ B(Q,r) such that B(A,κ r) ⊆Γ∩ B(Q,r), see <cit.>.
Recall that Γ is smooth if d=1 or d≥ 2 and Γ∩𝕊^d-1 is a C^1,1 subset of 𝕊^d-1.
Furthermore, a cone Γ is called right-circular, if Γ = {x=(x_1,…,x_d) ∈ℝ^d∖{0}: x_d>|x| cosη}. The parameter η∈ (0,π) is usually called the angle of the cone. Of course, every right-circular cone is smooth, and every smooth cone is fat.
By <cit.>, the following approximate factorization holds true for fat cones:
p_t^Γ(x,y) ≈ℙ_x(τ_Γ>t) p_t(x,y) ℙ_y(τ_Γ>t), x,y ∈Γ, t>0.
For R∈ (0,∞), we let Γ_R := Γ∩ B_R, the truncated cone.
§ DOOB CONDITIONING
The Martin kernel M_Γ is invariant for the semigroup P_t^Γ, as follows.
For all x ∈Γ and t>0, we have P_t^Γ M_Γ(x) = M_Γ(x).
Fix t>0 and x∈Γ. We have
P_t^Γ M_Γ(x) = _x [τ_Γ>t; M_Γ(X_t) ].
Let R>0 and τ_R:=τ_Γ_R. By (<ref>) and the strong Markov property,
M_Γ(x) = _x M_Γ(X_τ_Γ_R)= _x M_Γ( X_t ∧τ_R) = _x [ X_t ∧τ_R∈Γ; M_Γ( X_t ∧τ_R) ],
where the last equality follows from the fact that M_Γ = 0 outside Γ. We note that _x-a.s., τ_R →τ_Γ as R →∞ (see, e.g., <cit.>). We consider two scenarios. On {τ_Γ=∞}, for R large enough, we have: τ_R > t, _X_t ∧τ_R∈Γ = 1 = _t<τ_Γ, and
M_Γ( X_t ∧τ_R) _X_t ∧τ_R∈Γ = M_Γ(X_t) = M_Γ(X_t)_t<τ_Γ.
On {τ_Γ<∞}, for R large enough we have: τ_R = τ_Γ, _X_t ∧τ_R∈Γ = _t<τ_Γ, and
M_Γ( X_t ∧τ_R) _X_t ∧τ_R∈Γ = M_Γ(X_t) _t<τ_Γ,
too. In both cases, the integrand on the right-hand side of (<ref>) converges a.s. to the integrand on the right-hand side of (<ref>) as R →∞. By the local boundedness of M_Γ and (<ref>),
| M_Γ( X_t ∧τ_R) _X_t ∧τ_R∈Γ| ≤ c | X_t ∧τ_R|^β≤ c (X_t^*)^β,
where
X_t^* := sup_0 ≤ s ≤ t |X_s|.
Using <cit.> and the fact that β∈[0,α), we conclude that _x(X_t^*)^β<∞. An application of the dominated convergence theorem ends the proof.
§.§ Renormalized kernel
We define the renormalized (Doob-conditioned) kernel
ρ_t(x,y) = p_t^Γ(x,y)/M_Γ(x)M_Γ(y), x,y ∈Γ, t>0.
Note that ρ is jointly continuous. By Theorem <ref>,
∫_Γρ_t(x,y)M_Γ^2(y) dy=1, x ∈Γ, t>0,
and by (<ref>),
∫_Γρ_t(x,y)ρ_s(y,z) M_Γ^2(y) dy = ρ_t+s (x,z), x,z ∈Γ, s,t>0.
In other words, ρ_t is a symmetric transition probability density on Γ with respect to the measure M^2_Γ(y) y. Furthermore, the following scaling property holds true: for all x,y ∈Γ and all t>0,
ρ_t(x,y) = t^-d/αp_1^Γ(t^-1/αx,t^-1/αy)/t^2β/αM_Γ(t^-1/αx)M_Γ(t^-1/αy) = t^-(d+2β)/αρ_1 (t^-1/αx,t^-1/αy).
Therefore,
ρ_st(t^1/αx,t^1/αy) = t^-(d+2β)/αρ_s(x,y), x,y ∈Γ, s,t>0.
By (<ref>), for fat cones we have
ρ_t(x,y) ≈ℙ_x (τ_Γ>t)/M_Γ(x)p_t(x,y) ℙ_y (τ_Γ>t)/M_Γ(y), x,y ∈Γ, t>0.
The boundary behavior of ℙ_x(τ_Γ>t)/M_Γ (x) is important due to (<ref>), but it is rather elusive.
The next lemma strengthens the upper bound from <cit.>.
There exists a constant c depending only on α and Γ, such that
ℙ_x(τ_Γ>t) ≤ c(t^-β/α+t^-1|x|^α-β) M_Γ(x), t>0, x ∈Γ.
(1) For t=1, (<ref>) reads as follows,
ℙ_x(τ_Γ>1) ≤ c(1+|x|^α-β)M_Γ(x), x ∈Γ.
(2) The estimate (<ref>) applies to arbitrary cones and arguments t,x,
however, it is not optimal. For example, for the right-circular cones, we can confront (<ref>) with
M_Γ(x) ≈δ_Γ(x)^α/2|x|^β-α/2, x ∈Γ,
and
ℙ_x(τ_Γ>1) ≈(1∧δ_Γ(x))^α/2(1∧|x|)^β-α/2, x ∈Γ,
as provided by <cit.> and <cit.>.
(3) For the right-circular cones, the ratio
ℙ_x(τ_Γ>1)/M_Γ(x)≈(1+δ_Γ(x))^-α/2/(1+|x|)^β-α/2, x ∈Γ,
is bounded if and only if β≥α/2.
We slightly modify the proof of <cit.>. First, suppose that t=1. The case x ∈Γ_1 in (<ref>) is resolved by <cit.>, so we assume that x ∈Γ∖Γ_1. For every z ∈∖{0} we define its projection on the unit sphere z̃ := z/|z|. By (<ref>),
_x(τ_Γ>1) = _x̃(τ_Γ>|x|^-α).
Then we have
_x̃(τ_Γ>|x|^-α) ≤_x̃(τ_Γ_2>|x|^-α) + _x̃(τ_Γ_2<τ_Γ).
By the boundary Harnack principle (BHP), see Song and Wu <cit.>, and the homogeneity of M_Γ (<ref>),
_x̃(τ_Γ_2<τ_Γ) ≤_(τ_Γ_2<τ_Γ)M_Γ(x̃) = c_1 |x|^-βM_Γ(x).
We let
c_2 = inf_y ∈Γ_2∫_Γ∖Γ_2ν(y-z) z.
Clearly, c_2>0. We recall the Ikeda-Watanabe formula:
_x[τ_D∈ I, Y_τ_D-∈ A, Y_τ_D∈ B]=
∫_I ∫_A∫_B p_s^D(x,v)ν(v,z) z v s,
where x∈ D, I⊂ [0,∞), A⊂ D and B⊂ D^c, see, e.g., Bogdan et al. <cit.>. By Markov inequality and BHP,
_x̃(τ_Γ_2>|x|^-α) ≤ |x|^α_x̃τ_Γ_2 = |x|^α∫_Γ_2G_Γ_2(x̃,y) y
≤ c_2^-1 |x|^α∫_Γ∖Γ_2∫_Γ_2G_Γ_2(x̃,y) ν(y-z) y z
≤ c_2^-1 |x|^α_x̃(X_τ_Γ_2∈Γ)
≤ c_1c_2^-1 |x|^α_(X_τ_Γ_2∈Γ) M_Γ(x̃)
= c_1c_2^-1c |x|^α-βM_Γ(x).
By (<ref>), we get (<ref>) when x ∈Γ∖Γ_1. For arbitrary t>0, we use (<ref>) and (<ref>):
_x(τ_Γ>t) = _t^-1/αx(τ_Γ>1)
≤ c (1+(t^-1/α|x|)^α-β) M_Γ(t^-1/αx)
= c(t^-β/α+t^-1|x|^α-β)M_Γ(x).
By the proof of <cit.>, for every R ∈ (0,∞) there exists a constant c, depending only on α, Γ and R, such that
c^-1M_Γ(x)t^-β/α≤_x(τ_Γ>t) ≤ c M_Γ(x)t^-β/α, x ∈Γ_Rt^1/α, t>0.
In particular, for fat cones, in view of (<ref>) and eq:rho_factor,
ρ_1(x,y) ≈ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y), x∈Γ_R, y ∈Γ,
with comparability constant depending only on α, Γ and R.
Using Lemma <ref> we also conclude that for every R ≥ 1 there is a constant c depending only on R, α and Γ, such that
ρ_1(x,y) ≤ c(1+|y|)^-d-β, x ∈Γ_R, y ∈Γ.
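Indeed, spelling out how the last two displays combine with the t=1 case of Lemma <ref>: for x ∈Γ_R and y ∈Γ,
ρ_1(x,y) ≲ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y)≲ (1+|y|)^-d-α( 1+|y|^α-β) ≲ (1+|y|)^-d-β,
since β∈[0,α).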
§.§ Ornstein-Uhlenbeck kernel
Encouraged by <cit.>, we let
ℓ_t(x,y) := ρ_1-e^-t(e^-t/αx,y), x,y ∈Γ, t>0,
and, by (<ref>), we get the Chapman-Kolmogorov property for ℓ_t:
∫_Γℓ_t(x,y)ℓ_s(y,z)M_Γ^2(y) y = ℓ_t+s(x,z), x,z ∈Γ, s,t>0.
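For the reader's convenience, we indicate the computation behind this: substituting y = e^s/αw and using the scaling (<ref>) with the dilation factor e^s, the homogeneity (<ref>) of M_Γ, and the Chapman-Kolmogorov property (<ref>) of ρ, we get
∫_Γℓ_t(x,y)ℓ_s(y,z)M_Γ^2(y) y = ∫_Γρ_1-e^-t( e^-t/αx, e^s/αw )ρ_1-e^-s(w,z) e^(d+2β)s/αM_Γ^2(w) w
= ∫_Γρ_e^-s-e^-(t+s)( e^-(t+s)/αx, w )ρ_1-e^-s(w,z) M_Γ^2(w) w = ρ_1-e^-(t+s)( e^-(t+s)/αx, z ) = ℓ_t+s(x,z).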
By (<ref>),
∫_Γℓ_t(x,y)M_Γ^2(y) y=1, x ∈Γ, t>0 .
Thus, ℓ_t is a transition probability density on Γ with respect to M_Γ^2(y) y.
We define the corresponding Ornstein-Uhlenbeck semigroup:
L_t f(y) = ∫_Γℓ_t(x,y) f(x)M_Γ^2(x) x, y∈Γ, t>0.
We easily see that the operators are bounded on L^1(M^2_Γ(y) y). In fact, they preserve densities, i.e., functions f≥0 such that ∫_Γ f(x)M^2_Γ(x) x=1.
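Indeed, for f ≥ 0, the Fubini-Tonelli theorem and (<ref>) give
∫_Γ L_t f(y)M_Γ^2(y) y = ∫_Γ f(x) ( ∫_Γℓ_t(x,y)M_Γ^2(y) y ) M_Γ^2(x) x = ∫_Γ f(x)M_Γ^2(x) x,
and the boundedness on L^1(M^2_Γ(y) y) follows since |L_t f| ≤ L_t|f|.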
Before going into details, let us note that the relations (<ref>) and (<ref>) will be crucial in what follows. Both of them rely on the factorization of the Dirichlet heat kernel (<ref>), which is valid for fat sets. For this reason, although it is usually clear from the setting, to avoid unnecessary considerations we assume below in this section that Γ is a fat cone.
Assume Γ is a fat cone. Then there is a unique stationary density for the operators L_t, t>0.
Fix t>0 and consider the family F of non-negative functions on Γ that have the form
f(y) = ∫_Γ_1ρ_t(x,y) μ( x), y ∈Γ,
for some sub-probability measure μ concentrated on Γ_1. By (<ref>), F ⊆ L^1(M_Γ^2(y) y). By the scaling (<ref>) and the same reasoning as in the proof of <cit.>, L_t F ⊆ F. Since L_t is continuous, we also have L_t F̄⊆F̄, where F̄ denotes the closure of F in the norm topology of L^1(Γ,M^2_Γ(y) y). Next, we observe that F is convex, therefore by <cit.>, F̄ is equal to the closure of F in the weak topology. In view of (<ref>),
f(y) ≲ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y), y ∈Γ,
uniformly for f∈ F. Moreover, (<ref>) and (<ref>) show that the right-hand side of (<ref>) is integrable with respect to M_Γ^2(y) y. Therefore, the family F is uniformly integrable with respect to M^2_Γ(y) y. By <cit.>, F is weakly pre-compact in L^1(M^2_Γ(y) y), so F̄ is weakly compact.
Furthermore, we invoke <cit.> to conclude that L_t is weakly continuous. By the Schauder-Tychonoff fixed point theorem <cit.>,
there is a density ∈F̄ satisfying L_t = . It is unique by the strict positivity of the kernel ℓ_t, and the same for every t>0, see the proof of <cit.>.
Let us note that by Theorem <ref> and <cit.>, the following stability result for the kernels ℓ_t in L^1(M_Γ^2(y) y) holds true for every x∈Γ:
∫_Γ| ℓ_t(x,y)-(y) | M_Γ^2(y) y → 0 as t →∞.
We claim that the convergence in (<ref>) is in fact uniform for x in any bounded subset A ⊆Γ. Indeed, let x,x_0 ∈ A. In view of (<ref>) and (<ref>) we may write
∫_Γ| ℓ_1+t(x,y) - (y) | M_Γ^2(y) y = ∫_Γ| ∫_Γℓ_1(x,z) ( ℓ_t(z,y) - (y) ) M_Γ^2(z) z | M_Γ^2(y) y
≤ c ∫_Γℓ_1(x_0,z) ∫_Γ| ℓ_t(z,y) - (y)| M_Γ^2(y) y M_Γ^2(z) z.
By (<ref>), for every z∈Γ,
I_t(z):=∫_Γ| ℓ_t(z,y) - (y) | M_Γ^2(y) y → 0 as t →∞.
Moreover, I_t(z) ≤∫_Γ( ℓ_t(z,y) + (y) ) M_Γ^2(y) y = 2. Since
∫_Γ 2ℓ_1(x_0,z)M_Γ^2(z) z = 2 < ∞,
by the dominated convergence theorem the iterated integral in (<ref>) tends to 0 as t →∞, so the convergence in (<ref>) is uniform for all x ∈ A, as claimed. By rewriting (<ref>) in terms of ρ, we get that, uniformly for x ∈ A,
∫_Γ| ρ_1-e^-t( e^-t/αx,y ) - (y) | M_Γ^2(y) y → 0 as t →∞.
This leads to the following spatial asymptotics for ρ_1.
Let Γ be a fat cone. If Γ∋ x → 0 then ∫_Γ| ρ_1(x,y)-(y) |M_Γ^2(y) y → 0.
By the scalings (<ref>) and (<ref>),
ρ_1-e^-t( e^-t/αx,y ) = ( 1-e^-t)^-(d+2β)/αρ_1 ( ( e^t-1 )^-1/αx, ( 1-e^-t)^-1/αy ),
thus, in view of (<ref>),
∫_Γ| ρ_1 ( ( e^t-1 )^-1/αx, ( 1-e^-t)^-1/αy ) - (y) | M_Γ^2(y) y → 0 as t →∞.
By the continuity of dilations in L^1(),
∫_Γ| ( ( 1-e^-t)^1/αy ) M_Γ^2(y) - (y)M_Γ^2(y) | y → 0 as t →∞.
Thus, by a change of variables in (<ref>) and the triangle inequality, we conclude that
∫_Γ| ρ_1 ( (e^t-1)^-1/αz,y ) - (y) |M_Γ^2(y) y → 0 as t →∞
uniformly for all z ∈ A. To end the proof, we take A = B_1 and x = (e^t-1)^-1/αz, where t = ln(1+|x|^-α) and z = x/|x| ∈ A.
After a modification on a set of Lebesgue measure 0, is continuous on Γ and
(y) ≈ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y), y ∈Γ.
By Corollary <ref> and (<ref>),
(y) ≈ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y)
on Γ outside a set of Lebesgue measure zero.
Theorem <ref> entails that = L_1 a.e., so it suffices to verify that L_1 is continuous on Γ. To this end we note that ℓ_1(x,y) is continuous in x,y ∈Γ. Next, by (<ref>) and (<ref>),
ℓ_1(x,y) ≈_e^-1/αx(τ_Γ>1-e^-1)/M_Γ(e^-1/αx)p_1-e^-1( e^-1/αx,y ) _y(τ_Γ>1-e^-1)/M_Γ(y), x,y ∈Γ.
Let R>1. By (<ref>) and (<ref>),
<cit.> and (<ref>), and the homogeneity (<ref>) of M_Γ,
ℓ_1(x,y) ≲ (1+|x|)^-d-α_x(τ_Γ>1)/M_Γ(x), x ∈Γ, y ∈Γ_R.
By the dominated convergence theorem, L_1 is continuous on Γ_R. Since R>1 was arbitrary, L_1 is continuous on Γ.
In what follows, denotes the continuous modification from Lemma <ref>.
Let Γ be a fat cone. For every t>0, uniformly in y ∈Γ we have
ρ_t(0,y):=
lim_Γ∋ x → 0ρ_t(x,y) = t^-(d+2β)/α(t^-1/αy).
If β=0 then ρ_t(x,y) = p_t(x,y) and the claim is simply the continuity property of the heat kernel p_t. Thus, we assume that β>0.
We only prove the claim for t=1; the extension to arbitrary t is a consequence of the scaling (<ref>).
By (<ref>) and the Chapman-Kolmogorov property, for x,y ∈Γ,
ρ_1(x,y) = 2^-(d+2β)/αρ_2 ( 2^1/αx,2^1/αy )
= 2^-(d+2β)/α∫_Γρ_1 ( 2^1/αx,z ) ρ_1 ( z,2^1/αy ) M_Γ^2(z) z.
We will prove that, uniformly in y ∈Γ,
∫_Γρ_1 ( 2^1/αx,z ) ρ_1 ( z,2^1/αy ) M_Γ^2(z) z →∫_Γ(z) ρ_1 ( z,2^1/αy ) M_Γ^2(z) z
as Γ∋ x → 0.
To this end we first claim that there is c ∈ (0,∞) dependent only on α and Γ, such that for all x ∈Γ_1 and y ∈Γ,
∫_Γ| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,2^1/αy ) M_Γ^2(z) z ≤ c(1+|y|)^-β.
Indeed, denote ỹ = 2^1/αy. By (<ref>), Lemma <ref> and (<ref>), there is c>0 such that for all z,y ∈Γ and x ∈Γ_1,
| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,y) M_Γ^2(z) ≲ (1+|z|)^-d-α(1+|z-y|)^-d-α (1+|y|)^α-β.
We split the integral in (<ref>) into two integrals. For z∈ A:=B(y,|y|/2) we use the fact that |z| ≈ |y| and 1+|z-y| ≥ 1, therefore
∫_A | ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,y) M_Γ^2(z) z ≲ |y|^d (1+|y|)^-d-β≤ (1+|y|)^-β.
For z∈Γ∖ A we simply have 1+|z| ≥ 1, thus,
∫_Γ∖ A| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,y) M_Γ^2(z) z ≲ (1+|y|)^α-β∫_Γ∖ A (1+|z-y|)^-d-α z
≲ (1+|y|)^α-β∫_|y|/2^∞ (1+r)^-1-α r
≲ (1+|y|)^-β.
Combining it with (<ref>), we arrive at (<ref>), as claimed.
Let ϵ>0. In view of (<ref>) and the fact that β > 0, there is R∈ (0,∞) depending only on α, β, Γ and ϵ such that
∫_Γ| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,2^1/αy ) M_Γ^2(z) z < ϵ,
provided that y ∈Γ∖Γ_R. For y ∈Γ_R, by (<ref>) we get
∫_Γ| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,2^1/αy ) M_Γ^2(z) z ≲∫_Γ| ρ_1 ( 2^1/αx,z )-(z) | M_Γ^2(z) z,
with the implied constant dependent only on α, β, Γ and R, but not otherwise dependent on y. Thus, by Corollary <ref>,
∫_Γ| ρ_1 ( 2^1/αx,z )-(z) | ρ_1 ( z,2^1/αy ) M_Γ^2(z) z < ϵ
for all y ∈Γ_R and x ∈Γ_1 small enough. Putting (<ref>) together with (<ref>) we arrive at (<ref>). Using the scaling property (<ref>) and Theorem <ref>,
lim_Γ∋ x → 0ρ_1(x,y) = 2^-(d+2β)/α∫_Γ(z) ρ_1 ( z,2^1/αy ) M_Γ^2(z) z
= ∫_Γ(z) ρ_1/2( 2^-1/αz,y ) M_Γ^2(z) z = L_ln 2(y) = (y).
The proof is complete.
Note that by the symmetry of ρ_t, for x ∈Γ,
ρ_t(x,0) := lim_Γ∋ y → 0ρ_t(x,y) = ρ_t(0,x) = t^-(d+2β)/α(t^-1/αx).
Recall also that by (<ref>) and (<ref>),
ρ_1(x,y) ≈ (1+|y|)^-d-α_y(τ_Γ>1)/M_Γ(y)∈ L^1(M_Γ^2(y) y).
Thus, by Theorem <ref> and the dominated convergence theorem,
∫_Γ(x)M_Γ^2(x) x=1.
Let us summarize the results of this section in one statement.
Assume Γ is a fat cone. Then the function ρ has a continuous extension to (0,∞) × (Γ∪{0})× (Γ∪{0}) and
ρ_t(0,y) := lim_Γ∋ x → 0ρ_t(x,y)∈ (0,∞), t>0, y ∈Γ,
satisfies
ρ_t(0,y) = t^-(d+2β)/αρ_1(0,t^-1/αy), t>0, y ∈Γ,
and
∫_Γρ_t(0,y)ρ_s(y,z) M_Γ^2(y) y = ρ_t+s(0,z), s,t>0, z ∈Γ.
The existence of the limit (<ref>) and the scaling property (<ref>) are proved in Theorem <ref>, see also Lemma <ref>. For the proof of (<ref>) we employ (<ref>) to write
ρ_t+s(0,z) = lim_Γ∋ y → 0ρ_t+s(y,z) = lim_Γ∋ y → 0∫_Γρ_t(y,w) ρ_s(w,z) M_Γ^2(w) w,
and use (<ref>), (<ref>), (<ref>), and the dominated convergence theorem.
Thus, it remains to prove the continuity of ρ on (0,∞) × (Γ∪{0}) × (Γ∪{0}). By symmetry and the Chapman-Kolmogorov property (<ref>) of ρ_1,
ρ_1(x,y) = ∫_Γρ_1/2(x,z)ρ_1/2(y,z)M_Γ^2(z) z, x,y ∈Γ.
By the continuity of ρ_1 on Γ×Γ together with Theorem <ref>, for every x_0,y_0 ∈Γ∪{0} we have ρ_1/2(x,z) →ρ_1/2(x_0,z) and ρ_1/2(y,z) →ρ_1/2(y_0,z) as x → x_0 and y → y_0. Moreover, (<ref>) entails that
ρ_1/2(x,z)ρ_1/2(y,z)M_Γ^2(z) ≤ c (1+|z|)^-2d-2α,
with the constant c possibly dependent on x_0 and y_0. It follows by the dominated convergence theorem that
ρ_1(x,y) →∫_Γρ_1/2(x_0,z)ρ_1/2(y_0,z)M_Γ^2(z) z,
as x → x_0 and y → y_0. In view of (<ref>), this limit is an extension of ρ_1 to (Γ∪{0}) × (Γ∪{0}), which will be denoted by the same symbol. It follows now from (<ref>) that
t^-(d+2β)/αρ_1 ( t^-1/αx,t^-1/αy ), x,y ∈Γ∪{0}, t>0,
is a finite continuous extension of ρ_t for every t>0. It remains to observe that the extension is unique and jointly continuous in (t,x,y) ∈ (0,∞) × (Γ∪{0}) × (Γ∪{0}).
We have
ρ_1(0,0) = lim_Γ∋ x,y → 0ρ_1(x,y) ∈ (0,∞).
By Theorem <ref>, ρ_1(0,0) = lim_Γ∋ y → 0(y)=: (0). Thus, the claim follows by Lemma <ref> and <cit.>.
By (<ref>), Ψ_t(x) = ρ_t(0,x)M_Γ(x), t>0, x ∈Γ. Thus, the existence of Ψ_t is just a reformulation of (<ref>). The scaling property (<ref>) follows immediately from (<ref>) and the homogeneity of the Martin kernel (<ref>), and (<ref>) is equivalent to (<ref>).
We conclude this part by rephrasing (<ref>) in terms of Ψ_t:
∫_ΓΨ_t(x)M_Γ(x) x=1, t>0.
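Indeed, Ψ_t(x)=ρ_t(0,x)M_Γ(x), so the scaling (<ref>), the homogeneity (<ref>) of M_Γ, and the substitution x = t^1/αu give
∫_ΓΨ_t(x)M_Γ(x) x = ∫_Γρ_t(0,x)M_Γ^2(x) x = ∫_Γρ_1(0,u)M_Γ^2(u) u = 1,
where the last equality follows from Theorem <ref> and (<ref>).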
§.§ Yaglom limit
The above results quickly lead to the calculation of the Yaglom limit for the stable process (conditioned to stay in a cone). Note that our proof is different from that in <cit.>. We also cover more general cones, including ℝ^d∖{0} and ℝ^2 ∖ ([0,∞) ×{0}).
First, we obtain the following extension of <cit.>.
Let Γ be a fat cone. For every t>0,
lim_Γ∋ x → 0_x(τ_Γ>t)/M_Γ(x) = C_1t^-β/α C_1 = ∫_Γ(z)M_Γ(z) z ∈ (0,∞).
It is enough to prove the claim for t=1; the general case follows by the scalings (<ref>) and (<ref>). We have
_x(τ_Γ>1)/M_Γ(x) = ∫_Γp_1^Γ(x,y)/M_Γ(x) y = ∫_Γρ_1(x,y) M_Γ(y) y, x ∈Γ.
We use (<ref>), the dominated convergence theorem, and Theorem <ref> to get the conclusion.
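For completeness, the reduction to t=1 reads as follows: by (<ref>) and the homogeneity (<ref>) of M_Γ,
_x(τ_Γ>t)/M_Γ(x) = t^-β/α_t^-1/αx(τ_Γ>1)/M_Γ(t^-1/αx)→ C_1t^-β/α as Γ∋ x → 0,
because t^-1/αx → 0 together with x.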
The first identity below is the
Yaglom limit.
Assume Γ is a fat cone and let B be a bounded subset of Γ. Then, uniformly in x∈ B,
lim_t →∞_x ( t^-1/αX_t ∈ A | τ_Γ>t ) = μ(A), A ⊂Γ,
where
μ(A)
:= 1/C_1∫_A (y)M_Γ(y) y, A ⊂Γ.
By (<ref>) and the scaling property (<ref>),
_x ( t^-1/αX_t ∈ A | τ_Γ>t ) = _x ( τ_Γ>t,t^-1/αX_t ∈ A )/_x(τ_Γ>t)
= _t^-1/αx( τ_Γ>1,X_1 ∈ A )/_t^-1/αx(τ_Γ>1)
= ∫_A p_1^Γ(t^-1/αx,y)/M_Γ(t^-1/αx) y ·M_Γ(t^-1/αx)/_t^-1/αx(τ_Γ>1).
The claim follows by Corollary <ref>, (<ref>), and the dominated convergence theorem.
If Γ is a fat cone and γ is a probability measure on Γ with ∫_Γ(1+|y|)^α γ( dy)<∞, then
lim_t →∞_γ( t^-1/αX_t ∈ A | τ_Γ>t ) = μ(A), A ⊂Γ.
Let t≥ 1. In view of <cit.>, we may write
_γ( t^-1/αX_t ∈ A | τ_Γ>t ) = _γ( t^-1/αX_t ∈ A, τ_Γ>t )/_γ( τ_Γ>t )
= ∫_Γ_x ( t^-1/αX_t ∈ A | τ_Γ>t )
_x ( τ_Γ>t )/_γ( τ_Γ>t ) γ( x) .
We first prove that for all x ∈Γ,
_γ( τ_Γ>t )/_x ( τ_Γ>t ):= ∫_Γ_y ( τ_Γ>t )/_x ( τ_Γ>t ) γ( y) →∫_ΓM_Γ(y)/M_Γ(x) γ( y),
as t →∞.
Indeed, fix x ∈Γ. First we note that by local boundedness of M_Γ and (<ref>),
∫_Γ M_Γ(y) γ( dy) ≤ c∫_Γ(1+|y|)^β γ( dy) <∞,
so the right-hand side of (<ref>) is finite. Next, by Corollary <ref>, (<ref>), and (<ref>),
lim_t →∞_y ( τ_Γ>t )/_x ( τ_Γ>t ) =
lim_t →∞_t^-1/αy( τ_Γ>1 )M_Γ(t^-1/αx)/_t^-1/αx( τ_Γ>1 )M_Γ(t^-1/αy)M_Γ(y)/M_Γ(x) =
M_Γ(y)/M_Γ(x), x,y ∈Γ.
Moreover, since x is fixed, we may assume that t ≥ 1∨ |x|^α. Thus, by <cit.>, Lemma <ref>, the local boundedness of M_Γ and (<ref>),
_y ( τ_Γ>t )/_x ( τ_Γ>t )≤ c (t^-β/α + t^-1|y|^α-β)M_Γ(y)/t^-β/αM_Γ(x)≤ c (1+|y|^α-β)M_Γ(y)/M_Γ(x)≤ c (1+|y|)^α/M_Γ(x).
Thus, the dominated convergence theorem yields (<ref>), as desired.
Next, we consider the family F_1 of functions f_t of the form
f_t(x) = _x(τ_Γ>t)/_γ(τ_Γ>t), x ∈Γ, t ≥ 1.
Denote
f(x) = M_Γ(x)/∫_ΓM_Γ(y) γ( dy), x ∈Γ.
By virtue of (<ref>), f_t → f everywhere in Γ as t →∞.
Thus, f_t → f in measure γ as t →∞, see <cit.>. Moreover, we have
∫_Γ f(x) γ( dx) = 1 = lim_t →∞ 1 = lim_t →∞∫_Γf_t(x) γ( dx).
Therefore, by <cit.>, the family F_1 is uniformly integrable. If we now consider the family F_2 of functions f̃_t of the form
f̃_t(x) = _x ( t^-1/αX_t ∈ A | τ_Γ>t ) f_t(x), x ∈Γ, t ≥ 1,
then the trivial bound _x ( t^-1/αX_t ∈ A | τ_Γ>t ) ≤ 1 shows that F_2 is uniformly integrable as well (see, e.g., <cit.>). By Theorem <ref>, (<ref>) and <cit.>,
lim_t →∞∫_Γ_x ( t^-1/αX_t ∈ A | τ_Γ>t )
_x ( τ_Γ>t )/_γ( τ_Γ>t ) γ( x) = ∫_Γμ(A) M_Γ(x)/∫_ΓM_Γ(y) γ( dy) γ( dx) = μ(A).
The proof is complete.
Note that β=0 if and only if Γ^c is a polar set and then M_Γ(x)=1 for all x ∈Γ, see <cit.>. Consequently, we have p_t^Γ(x,y) = p_t(x,y) and _x(τ_Γ>t)=1 for all x,y ∈Γ and all t>0. It follows that ρ_t(x,y) = p_t(x,y) and a direct calculation using the Chapman-Kolmogorov property entails that (y) = p_1(0,y) is the stationary density for the (classical) α-stable Ornstein-Uhlenbeck semigroup, see (<ref>) and Theorem <ref>. The statement of Theorem <ref> thus reduces to the continuity property of the heat kernel of the isotropic α-stable Lévy process. Theorems <ref> and <ref> trivialize in a similar way. Incidentally, in this case the moment condition on γ in Theorem <ref> is superfluous. Further examples are given in Section <ref>.
§ ASYMPTOTIC BEHAVIOR FOR THE KILLED SEMIGROUP
This section is devoted to examples and applications in Functional Analysis and Partial Differential Equations. Note that in Lemmas <ref> and <ref> we do not assume that Γ is fat.
The family {P_t^Γ}_t>0 is a strongly continuous contraction semigroup on L^1(M_Γ) and
∫_Γ P_t^Γ f(x) M_Γ(x) x=∫_Γ f(x)M_Γ (x) x, t>0, f∈ L^1(M_Γ).
Let f ≥ 0. By the Fubini-Tonelli theorem, the symmetry of p_t^Γ and Theorem <ref>,
∫_Γ P_t^Γ f(x) M_Γ(x) x = ∫_Γ∫_Γ p_t^Γ(x,y) f(y) M_Γ(y) y x = ∫_Γ f(y)M_Γ(y) y.
Since |P_t^Γ f|≤ P_t^Γ|f|, the contractivity follows. Furthermore, for arbitrary f ∈ L^1(M_Γ) we write f= f_+ - f_- and use (<ref>) to prove (<ref>). The semigroup property follows from (<ref>).
To prove the strong continuity, we fix f ∈ L^1(M_Γ) and let G:=fM_Γ∈ L^1(Γ). There is a sequence g_n ∈ C_c^∞(Γ) such that g_n-G_L^1(Γ)→ 0 as n →∞. For f_n := g_n/M_Γ we get f_n ∈ C_c^∞(Γ) and f_n-f _L^1(M_Γ)=g_n-G_L^1(Γ)→ 0. By the first part of the proof,
P_t^Γf-f_L^1(M_Γ) ≤P_t^Γf-P_t^Γf_n_L^1(M_Γ)+P_t^Γf_n-f_n_L^1(M_Γ)+f_n-f_L^1(M_Γ)
≤ 2f_n-f_L^1(M_Γ) + P_t^Γf_n-f_n_L^1(M_Γ).
It remains to prove that P_t^Γf-f_L^1(M_Γ)→ 0 as t → 0^+ for every f ∈ C_c^∞(Γ). To this end we let ϵ > 0 and choose R >0 such that supp f ⊆ B_R and
∫_Γ∖Γ_R P_t^Γ|f|(x)M_Γ(x) x < ϵ.
Then,
P_t^Γf-f_L^1(M_Γ) < ∫_Γ_R| P_t^Γf(x)-f(x) | M_Γ(x) x + ϵ.
Considering the integrand in (<ref>), for all x∈Γ_R we have
| P_t^Γf(x)-f(x) | ≤∫_Γ p_t^Γ(x,y)|f(y)-f(x)| y + |f(x)|_x(τ_Γ≤ t).
Since P_t f → f uniformly as t→ 0^+, for t>0 small enough we get
∫_Γ p_t^Γ(x,y)|f(y)-f(x)| y ≤∫_ℝ^d p_t(x,y)|f(y)-f(x)| y < ϵ.
On the other hand,
|f(x)|_x(τ_Γ≤ t) ≤ f_∞sup_x ∈ K_x(τ_Γ≤ t),
where K:= supp f. We have r:= dist(K,Γ^c)>0, so
_x(τ_Γ≤ t) ≤_x(τ_B(x,r)≤ t) = _0(τ_B_r≤ t) ≤ ctr^-α<ϵ,
for t small enough, see, e.g., <cit.>. By (<ref>) and (<ref>) we get, as required,
P_t^Γf-f_L^1(M_Γ) < ϵ+(ϵ+f_∞ϵ) |Γ_R|sup_Γ_R M_Γ.
Recall that
f_q,M_Γ:=f/M_Γ_L^q(M_Γ^2)
=( ∫_Γ |f(x)|^q M_Γ^2-q(x) x ) ^1/q
=f_L^q(M_Γ^2-q),
if 1 ≤ q < ∞, and
f_∞, M_Γ:= esssup_x∈Γ |f(x)|/M_Γ(x).
The following characterization of hypercontractivity of P^Γ_t is crucial for the proof of (<ref>).
Let q ∈ [1,∞). We have
P_t^Γ f_q,M_Γ≤ Ct^-d+2β/αq-1/qf_1,M_Γ
for all t>0 and all non-negative functions f on if and only if
sup_y∈Γ∫_Γρ_1(x,y)^q M_Γ^2(x) x <∞.
Assume (<ref>). Let f≥ 0. With the notation F:=f/M_Γ we get
P_1^Γf _q,M_Γ = (∫_Γ( ∫_Γρ_1(x,y) F(y)M_Γ^2(y) y)^q M_Γ^2(x) x)^1/q.
Let c be the supremum in (<ref>).
By the Minkowski integral inequality,
(∫_Γ( ∫_Γρ_1(x,y) F(y)M_Γ^2(y) y)^q M_Γ^2(x) x )^1/q ≤∫_Γ(∫_Γρ_1(x,y)^qM_Γ^2(x) x)^1/q F(y)M_Γ^2(y) y
≤ c ∫_Γ F(y)M_Γ^2(y) y = cf_1,M_Γ.
For t>0, by scaling we get (<ref>) as follows:
P_t^Γ f _q,M_Γ = t^d+β(2-q)/α q P_1^Γ f(t^1/α · ) _q,M_Γ≤ c t^d+β(2-q)/α q f(t^1/α · ) _1,M_Γ
= ct^d+β(2-q)/α q t^-d+β/α f _1,M_Γ = ct^-d+2β/αq-1/q f_1,M_Γ.
Conversely, assume (<ref>). Let y ∈Γ. Let g_n≥ 0, n∈ℕ, be functions in C^∞_c(Γ) approximating δ_y, the Dirac measure at y, as follows:
∫_Γ g_n(x) x = 1, and lim_n →∞∫_Γ h(x)g_n(x) x = h(y),
for every function h continuous near y. For f_n := g_n/M_Γ, f_n_1,M_Γ = g_n _1 = 1 and
P_1^Γ f_n(x) = ∫_Γ p_1^Γ(x,z)g_n(z)/M_Γ(z) z →p_1^Γ(x,y)/M_Γ(y),
as n →∞. By (<ref>) and Fatou's lemma,
C^q ≥lim inf_n →∞P_1^Γ f_n_q,M_Γ^q
=lim inf_n →∞∫_Γ| ∫_Γp_1^Γ(x,z)/M_Γ(z)g_n(z) z |^q M_Γ^2-q(x) x
≥∫_Γρ_1(x,y)^qM_Γ^2(x) x.
Since y ∈Γ was arbitrary, we obtain (<ref>).
Of course, (<ref>) extends to arbitrary f∈ L^1(M_Γ).
As in Example <ref>, we assume that β=0. In fact, to simplify notation, let Γ=ℝ^d. Then (<ref>) is trivially satisfied for every q∈ [1,∞), because ρ_1(x,y)=p_1(x,y) is bounded. Therefore, by (<ref>), for each f∈ L^1,
P_t f_q≤ Ct^-d/αq-1/qf_1.
This agrees with <cit.>, see also <cit.>.
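To spell out why (<ref>) holds in this setting: here M_Γ≡ 1 and ρ_1 = p_1, so for every y,
∫_ℝ^dρ_1(x,y)^q M_Γ^2(x) x = ∫_ℝ^d p_1(x,y)^q x ≤ p_1(0,0)^q-1∫_ℝ^d p_1(x,y) x = p_1(0,0)^q-1 < ∞,
since the stable density is radially decreasing, hence maximal at the origin.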
Here is a refinement of Lemma <ref>.
Let q ∈ [1,∞), assume (<ref>) and suppose Γ is fat. If f∈ L^1(M_Γ), ∫_Γ f(x) M_Γ (x) x =0 then
lim_t →∞t^d+2β/αq-1/q P_t^Γ f_q,M_Γ=0.
If, additionally, f has compact support, then (<ref>) is true for q=∞, too.
Let ω>0. First, we prove (<ref>) for a compactly supported function f ∈ L^1(M_Γ) satisfying
∫_Γ f (x) M_Γ (x) x =0.
Step 1. Case q=∞.
For t>0 we let
I(t) := t^d+2β/α P_t^Γ f _∞ ,M_Γ
=t^d+2β/αsup_x∈Γ| ∫_Γρ _t(x,y)M_Γ (y)f(y) y |.
By (<ref>),
I(t) = t^d+2β/αsup_x∈Γ| ∫_Γ( ρ_t(x,y)-ρ_t(x,0))M_Γ (y)f (y) y |.
Since f has compact support, for sufficiently large t>0 we have
I(t) =t^d+2β/αsup_x∈Γ| ∫_|y|≤ t^1/αω( ρ_t(x,y)-ρ_t(x,0))M_Γ (y)f (y) y |
≤ t^d+2β/αsup_x∈Γ, |y|≤ t^1/αω|ρ_t(x,y) -ρ_t(x,0)| ∫_|y|≤ t^1/αωM_Γ (y)|f (y)| y
= sup_x∈Γ, |y|≤ω| ρ_1(x,y ) - ρ_1( x, 0) | ∫_ΓM_Γ (y)| f(y)| y,
where in the last line we used the scaling (<ref>) of ρ.
By Theorem <ref>, we can make it arbitrarily small by choosing ω small, and (<ref>) follows in this case.
Step 2. Case q=1.
For t>0 we let
J(t):= P_t^Γ f _1,M_Γ= ∫_Γ| ∫_Γ p^Γ_t (x,y)M_Γ (x)f(y) y | x =∫_Γ| ∫_Γρ_t (x,y)M_Γ^2(x)f (y)M_Γ (y) y | x .
Applying (<ref>), we get
J(t) ≤∫_Γ∫_Γ| ρ_t(x,y)-ρ_t(x,0)|M_Γ ^2(x) |f (y)|M_Γ (y) y x .
Since f has compact support,
J(t) ≤∫_Γ∫_|y|≤ t^1/αω| ρ_t (x,y)-ρ_t(x,0)|M_Γ ^2(x) |f (y)| M_Γ (y) y x
≤sup_|y|≤ t^1/αωρ_t(· , y)-ρ_t(· ,0)_L^1(M_Γ^2)∫_Γ M_Γ (y)|f (y)| y,
for sufficiently large t.
In view of (<ref>) and (<ref>), by changing variables t^-1/αx → x and t^-1/αy → y we obtain
sup_|y|≤ t^1/αωρ_t(· , y)-ρ_t(· ,0)_L^1(M_Γ^2)
= t^-d+2β/αsup_|y|≤ t^1/αω∫ | ρ_1 (t^-1/αx,t^-1/αy ) -ρ_1( t^-1/αx,0)|M_Γ ^2(x) x
=sup_|y|≤ωρ_1(· ,y) -ρ_1(·, 0)_L^1(M_Γ^2).
By Corollary <ref>, we can make it arbitrarily small by choosing ω small, so (<ref>) is true.
Step 3. Case q∈ (1,∞).
By Hölder inequality we get that, as t →∞,
t^d+2β/αq-1/qP_t^Γ f _q, M_Γ =t^d+2β/αq-1/q(∫_Γ| P_t^Γ f (x)/M_Γ (x)|^q-1| P_t^Γ f (x)M_Γ (x)| x )^1/q
≤(t^d+2β/αP_t^Γ f _∞ ,M_Γ)^q-1/q P_t^Γ f _1, M_Γ^1/q→ 0,
since both factors converge to zero as t→∞ by Steps 1 and 2.
Finally, consider arbitrary f∈ L^1(M_Γ) with ∫_Γ f(x) M_Γ (x) x =0.
Let R>0 and f_R (x)= (f (x)-c_R)𝟙_{|x| ≤ R}, where c_R=∫_|x|≤ R f(x)M_Γ (x) x / ∫_|x|≤ R M_Γ (x) x. Of course,
∫_Γ M_Γ (x) f_R (x) x =0,
and f_R is compactly supported. Furthermore, due to our assumptions,
f- f_R _L^1(M_Γ ) = |c_R| ∫_|x| ≤ RM_Γ (x) x+∫_|x|> RM_Γ (x)|f (x)| x
= | ∫_|x| ≤ R M_Γ (x)f (x) x | + ∫_|x|> RM_Γ (x)|f (x)| x → 0
as R→∞.
Let ϵ>0 and choose R>0 so large that
f -f_R _1,M_Γ<ϵ.
For q=1, by using the triangle inequality and Lemma <ref>, we get
P_t^Γ f_1,M_Γ ≤P_t^Γ f_R _1,M_Γ+ P_t^Γ(f-f_R) _1,M_Γ
≤P_t^Γ f_R _1,M_Γ+
f -f_R _1, M_Γ ,
and Step 2. yields
lim sup_t→∞P_t^Γ f_1,M_Γ≤ϵ,
which proves (<ref>) in this case.
If 1<q<∞, then using the triangle inequality and Lemma <ref>, we obtain
t^d+2β/αq-1/qP_t^Γ f_q,M_Γ ≤ t^d+2β/αq-1/qP_t^Γ f_R _q,M_Γ+ t^d+2β/αq-1/qP_t^Γ(f-f_R) _q,M_Γ
≤ t^d+2β/αq-1/qP_t^Γ f_R _q,M_Γ+
Cf -f_R _1, M_Γ.
By (<ref>) and Step 3.,
lim sup_t→∞ t^d+2β/αq-1/qP_t^Γ f_q,M_Γ≤ 2Cϵ.
This completes the proof of (<ref>) for q∈ (1,∞).
Let q∈[1,∞), assume (<ref>) and suppose Γ is fat. Then for f∈ L^1(M_Γ) and A=∫_Γf (x) M_Γ(x) x,
lim_t→∞t^d+2β/αq-1/q P_t^Γ f-AΨ_t_q, M_Γ=0.
In view of (<ref>) and Lemma <ref>, the constant A in Theorem <ref> satisfies
∫_Γ(P_t^Γ f(x) - A Ψ_s(x)) M_Γ(x) x = 0, s, t >0.
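Indeed, by (<ref>) and (<ref>),
∫_Γ P_t^Γ f(x)M_Γ(x) x = ∫_Γ f(x)M_Γ(x) x = A and ∫_ΓΨ_s(x)M_Γ(x) x = 1, s,t>0.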
By (<ref>), (<ref>), Remark <ref> and Lemma <ref>,
lim_t→∞t^d+2β/αq-1/q P_t^Γ f-AΨ_t_q, M_Γ = lim_t→∞t^d+2β/αq-1/q P_t+1^Γ f-AΨ_t+1_q, M_Γ
= lim_t→∞t^d+2β/αq-1/q P_t^Γ( P_1^Γ f-AΨ_1)_q, M_Γ =0.
§.§ Applications
We conclude the article by providing several applications and examples of our results. In particular, we draw the reader's attention to Lemma <ref>, which provides a sharp distinction between cones contained in the half-space ℝ_+^d:= {x=(x_1,…,x_d) ∈ℝ^d : x_d > 0} and those which contain ℝ_+^d. The same behavior is displayed by a bigger class of smooth cones, as we assert in Corollary <ref>. First, we note a simple observation.
Let q=1. By (<ref>), the condition (<ref>) holds for every fat cone Γ.
Let q ∈ (1,∞) and suppose Γ is a right-circular cone. Then (<ref>) holds if β≥α/2. Conversely, if d ≥ 2 and β<α/2, then (<ref>) does not hold.
Recall that by <cit.>,
p_1^Γ(x,y) ≈ p_1(x,y) (1 ∧δ_Γ(x))^α/2(1 ∧δ_Γ(y))^α/2/(1 ∧ |x|)^α/2-β(1 ∧ |y|)^α/2-β, x,y ∈Γ.
Moreover, <cit.> entails that
M_Γ(x) ≈δ_Γ(x)^α/2|x|^β-α/2, x ∈.
Using this together with (<ref>) and (<ref>), we infer that, for x,y ∈Γ,
ρ_1(x,y) ≈ p_1(x,y) (1 ∧δ_Γ(x))^α/2(1 ∧δ_Γ(y))^α/2/(1 ∧ |x|)^α/2-β(1 ∧ |y|)^α/2-βδ_Γ(x)^α/2|x|^β-α/2δ_Γ(y)^α/2|y|^β-α/2
≈(1+|x-y|)^-d-α(1+δ_Γ(x))^-α/2(1+δ_Γ(y))^-α/2/(1+|x|)^β-α/2 (1+|y|)^β-α/2.
Let q ∈ (1,∞) and assume β≥α/2.
Then it follows from (<ref>) that ρ_1(x,y) ≲ 1 for x,y ∈Γ, and (<ref>) entails that
∫_Γρ_1(x,y)^q M_Γ^2(x) x ≤ρ_1( · ,y) _∞^q-1∫_Γρ_1(x,y)M_Γ^2(x) x
= ρ_1( · ,y) _∞^q-1≲ 1,
where the equality uses the symmetry of ρ_1 and (<ref>).
Thus, we get (<ref>) as claimed.
Now assume that d ≥ 2 and β <α/2. Let y ∈Γ be such that δ_Γ(y)=2, so that A:=B(y,1) ⊆Γ. Then for x ∈ A one clearly has 1+|x-y| ≈ 1 and δ_Γ(x) ≈ 1. It then follows from (<ref>) that
∫_Γρ_1(x,y)^qM_Γ^2(x) x ≥∫_A ρ_1(x,y)^qM_Γ^2(x) x
≈∫_A (1+|x|)^q(α/2-β)(1+|y|)^q(α/2-β)δ_Γ(x)^α|x|^2β-α x
≈ |y|^(q-1)(α-2β).
Since α-2β>0 and q>1, by taking |y| →∞ we see that (<ref>) cannot hold in this case.
Considering the direct part of Lemma <ref>, we note that for d=1 one cannot have at the same time δ_Γ(y) ≈ 1 and |y| →∞. In fact, in this case either Γ=(0,∞) or Γ = ℝ∖{0}. In both situations δ_Γ(y)=|y| for y ∈Γ and (<ref>) yields the boundedness of ρ_1. When Γ = (0,∞), one has β = α/2 by <cit.>. If α∈ (1,2) and Γ=ℝ∖{0}, then Γ^c={0} is a non-polar set and β = α-1, see <cit.>. We note that both Γ = (0,∞) and Γ = ℝ∖{0} are (trivially) smooth cones. In both cases, (<ref>) holds by (<ref>) and (<ref>).
Let d ≥ 2 and Γ be a right-circular cone which is a subset of the half-space ℝ_+^d:= {x=(x_1,…,x_d) ∈ℝ^d : x_d > 0}. Then by <cit.> we have β≥α/2 and Lemma <ref> gives (<ref>). In particular, one can take Γ=ℝ_+^d. On the other hand, if Γ is such that ℝ_+^d ⊊Γ, then β < α/2 by <cit.>, and Lemma <ref> asserts that (<ref>) does not hold.
Let d ≥ 2 and suppose Γ is a smooth cone. Then (<ref>) holds if and only if β≥α/2.
Recall that Γ is open and C^1,1 outside of the origin.
From the harmonicity and homogeneity of M_Γ, by the boundary Harnack principle we get, as in <cit.>, that
M_Γ(x) ≈δ_Γ(x)^α/2|x|^β-α/2, x ∈Γ.
Moreover, since the smooth cone is fat, its Dirichlet heat kernel satisfies (<ref>). Thus, one can directly repeat the proof of Lemma <ref> to conclude the claim.